See AMD Vitis™ AI Development Environment on amd.com.

The AI Engine Development Design Tutorials demonstrate the two major phases of AI Engine-ML application development: architecting the application and developing the kernels.
| Tutorial | Description |
| --- | --- |
| Versal Custom Thin Platform Extensible System | This is an AMD Versal™ system example design based on a VEK280 thin custom platform (Minimal clocks and AXI exposed to PL) that includes HLS/RTL kernels and AI Engine kernel using a full Makefile build-flow. |
| AIE-ML Programming | This tutorial helps you understand the differences between the AI Engine and AI Engine-ML architectures, using matrix multiplication, a common algorithm in machine learning applications, as the example. |
| Prime Factor FFT-1008 on AIE-ML | This Versal system example implements a 1008-point FFT using the Prime Factor algorithm. The design uses AI Engine and PL kernels working cooperatively: the AI Engine elements are hand-coded using the AIE API, and the PL elements are implemented with Vitis HLS. The new v++ unified command-line flow manages system integration in Vitis. This tutorial targets the AIE-ML architecture. |
| AIE-ML LeNet Tutorial | This tutorial uses the LeNet algorithm to implement a system-level design that performs image classification using the AIE-ML architecture and PL logic, including block RAM (BRAM). The design demonstrates functional partitioning between the AIE-ML and PL. It also highlights memory partitioning and the hierarchy spanning DDR memory, PL (BRAM), memory tiles, and AI Engine memory. |
| AIE API based FFT for Many Instances Applications | This tutorial walks you through the design and implementation of an FFT for many parallel signals in a real-time system using the AI Engine APIs. The design objective is to minimize power and resource utilization while maintaining throughput high enough to match the real-time acquisition bandwidth. The design also leverages the AIE-ML memory tiles to minimize programmable logic utilization. The case study comprises 128 parallel signals, each sampled at 125 MSa/s with a CINT16 data type, for a total aggregate bandwidth of 64 GB/s. |
| Softmax Function on AIE-ML | Softmax is an activation function often used in the output layer of a neural network designed for multi-class classification. This tutorial illustrates how to implement the softmax function for developing custom machine learning inference applications on AI Engines. |
| Migrating Farrow Filter from AIE to AIE-ML | Fractional delay filters are digital signal processing (DSP) algorithms commonly used for timing synchronization in applications such as digital modem receivers. An implementation of the Fractional Delay Farrow Filter design already exists for the AIE architecture. This tutorial guides you through migrating that design from AIE to the AIE-ML architecture. |
| Polyphase Channelizer on AIE-ML using Vitis Libraries | This tutorial demonstrates how to leverage Vitis Libraries IP blocks to build a high-performance Polyphase Channelizer on AIE-ML running at 2 GSPS. |
| MNIST ConvNet on AIE-ML | This tutorial implements a Convolutional Neural Network classifier on AIE-ML for identifying hand-written digits from the MNIST database. The goal is to illustrate how to partition and vectorize a simple machine learning example onto Versal AI Engines. |
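The Softmax Function tutorial above describes softmax as an activation function for multi-class classification. As a quick reference for what that function computes, here is a minimal NumPy sketch of a numerically stable softmax; this is purely illustrative and is not the AIE-ML kernel implementation from the tutorial.

```python
import numpy as np

def softmax(x):
    """Map a vector of scores to a probability distribution."""
    # Subtracting the max avoids overflow in exp() without
    # changing the result (softmax is shift-invariant).
    z = x - np.max(x)
    e = np.exp(z)
    return e / np.sum(e)

scores = np.array([1.0, 2.0, 3.0])
probs = softmax(scores)
# probs sums to 1, and the largest score gets the largest probability
```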
Copyright © 2020–2025 Advanced Micro Devices, Inc.