AMD software engineers today posted patches for review for a new “amd-ai-engine” accelerator driver, not to be confused with the AMDXDNA accelerator driver for the Ryzen AI NPUs. This new AMD AI Engine driver supports the IP found on their Versal adaptive SoCs.
The nearly three thousand lines of new driver code posted today for review on the way to the Linux kernel enable the AMD AI Engine currently found in the Versal adaptive SoC line-up that originated from AMD's Xilinx acquisition. The patch series cover letter explains:
“AI engine is a tile array based acceleration engine provided by AMD. These engines provide high compute density for vector-based algorithms and flexible custom compute and data movement. It has core tiles for compute, memory tiles for local storage, and shim tiles to interface the FPGA fabric and DDR.”
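To make the cover letter's description a bit more concrete, here is a small, purely illustrative C sketch that models a toy tile array with the three tile roles it mentions: core tiles for compute, memory tiles for local storage, and shim tiles bridging to the FPGA fabric and DDR. The type names, the 4x4 geometry, and the classify() helper are hypothetical and for illustration only; they are not structures or APIs from the amd-ai-engine driver itself.

/*
 * Illustrative model of the tile array described in the cover letter.
 * All names and the layout are assumptions made for this example.
 */
#include <stdio.h>

enum aie_tile_type {
	AIE_TILE_SHIM,	/* interface tiles toward the FPGA fabric and DDR */
	AIE_TILE_MEM,	/* memory tiles for local storage */
	AIE_TILE_CORE,	/* core tiles for vector compute */
};

struct aie_tile {
	unsigned int col;
	unsigned int row;
	enum aie_tile_type type;
};

/*
 * Hypothetical helper: classify a tile by its row in this toy array
 * (row 0 = shim, row 1 = memory, rows above = core). Real devices
 * define their own geometry.
 */
static enum aie_tile_type classify(unsigned int row)
{
	if (row == 0)
		return AIE_TILE_SHIM;
	if (row == 1)
		return AIE_TILE_MEM;
	return AIE_TILE_CORE;
}

int main(void)
{
	static const char *const names[] = { "shim", "mem", "core" };
	struct aie_tile t;

	/* Walk a toy 4x4 array and print each tile's role. */
	for (t.col = 0; t.col < 4; t.col++) {
		for (t.row = 0; t.row < 4; t.row++) {
			t.type = classify(t.row);
			printf("tile(%u,%u): %s\n", t.col, t.row, names[t.type]);
		}
	}
	return 0;
}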
The AMD AI Engine is explained in more detail on this AMD.com product page.
The v1 patch series for this new amd-ai-engine driver is now under review. Once the code is deemed ready for merging, it should be upstreamed into the mainline Linux kernel, hopefully in the not too distant future.