The AI Engine is a very long instruction word (VLIW) processor with single instruction, multiple data (SIMD) vector units. Its application-specific vector extensions make it highly efficient for compute-intensive workloads, specifically digital signal processing (DSP), 5G wireless applications, and artificial intelligence (AI) technology such as machine learning (ML).
This VLIW vector processor is hardened in a 7 nm process and runs at 1 GHz at the lowest speed grade, with higher clock rates at faster speed grades. It is software programmable: you write C/C++ kernel code, and the AI Engine compiler schedules and compiles the instructions.
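As a rough illustration of what such C/C++ kernel code can look like, here is a minimal sketch assuming the Vitis AIE vector API (header aie_api/aie.hpp); the kernel name, pointer interface, and length handling are hypothetical and not taken from the workshop material.

```cpp
// Minimal sketch only (assumes the Vitis AIE API header aie_api/aie.hpp).
// Kernel name, pointer interface, and length handling are hypothetical.
#include <aie_api/aie.hpp>

// Element-wise add of two int16 buffers; 'int16' is provided by the AI Engine tools.
void vadd_int16(const int16* __restrict a,
                const int16* __restrict b,
                int16*       __restrict c,
                unsigned n)                      // n assumed to be a multiple of 32
{
    for (unsigned i = 0; i < n; i += 32) {
        aie::vector<int16, 32> va = aie::load_v<32>(a + i);  // 32 x int16 = 512 bits
        aie::vector<int16, 32> vb = aie::load_v<32>(b + i);
        aie::store_v(c + i, aie::add(va, vb));               // one SIMD add per iteration
    }
}
```

The point of the sketch is the programming model: the programmer writes ordinary-looking C/C++, and the AI Engine compiler packs the vector operations, loads, and stores into VLIW instruction slots.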
The AI Engine has a 32-bit scalar unit and a 512-bit SIMD vector unit that supports both fixed-point and floating-point precision. Each AI Engine also includes 16 KB of program memory, an instruction fetch and decode unit, two load units, one store unit, three address generators, a stall handler, an accumulator stream FIFO, and a control/debug/trace unit. Together, these resources give Versal devices a wide range of neural networking capabilities.
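As a quick arithmetic illustration of what the 512-bit vector width means in practice (not from the workshop material), the lane counts for a few common element types work out as follows; this is plain host-side C++ with no AI Engine dependencies.

```cpp
#include <cstdio>

// Plain C++ arithmetic check: how many lanes a 512-bit SIMD register holds
// for a few common element widths (illustrative only, not an AI Engine API).
int main() {
    const unsigned reg_bits = 512;
    const struct { const char* name; unsigned bits; } elems[] = {
        {"int8", 8}, {"int16", 16}, {"int32", 32}, {"float", 32},
    };
    for (const auto& e : elems)
        std::printf("%-5s : %2u lanes\n", e.name, reg_bits / e.bits);
    return 0;
}
```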
In this workshop, you will learn about the Versal™ adaptive SoC, the architecture of the AI Engine, the various interfaces available in the AI Engine tile, the Versal AI Engine tool flow, and Vitis Model Composer for AI Engine design.
All rights reserved. Copyright © 2023 TechSource Systems and Ascendas Systems Group