While the AMDXDNA driver was merged for the Linux 6.14 kernel to enable the Ryzen AI NPUs atop a mainline kernel build, there is still user-space software needed to make use of the neural processing units found in Ryzen AI SoCs. AMD talked more about programming Ryzen AI NPUs last weekend in Belgium at the FOSDEM 2025 developer conference.
On the user-space side there is currently the AMD AIE Plugin for IREE. IREE is in turn built atop the Multi-Level Intermediate Representation (MLIR) infrastructure that is part of the LLVM compiler stack. IREE supports importing models from ONNX, PyTorch, TensorFlow, and other machine learning frameworks. Those wanting to learn more about IREE itself can do so via IREE.dev.
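For those curious what that flow looks like in practice, below is a minimal sketch using IREE's Python compiler API. The tiny MLIR module stands in for a model imported from ONNX/PyTorch/TensorFlow, and it is compiled for the stock llvm-cpu backend; the AMD AIE plugin registers its own NPU target, whose exact name and flags are not shown here and would come from the plugin's documentation.

```python
# Illustrative sketch only: compiling a small MLIR module with IREE's Python API.
# Requires the iree-compiler package (pip install iree-compiler). The llvm-cpu
# backend is used here; swapping in the NPU target provided by the AMD AIE
# plugin would be the equivalent step for Ryzen AI.
import iree.compiler as ireec

# A trivial MLIR module with one function, standing in for a model imported
# from ONNX, PyTorch, or TensorFlow via IREE's importers.
MLIR_MODULE = """
func.func @add(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
  %0 = arith.addf %a, %b : tensor<4xf32>
  return %0 : tensor<4xf32>
}
"""

# Compile to an IREE VM flatbuffer for the CPU backend and write it to disk.
vmfb = ireec.compile_str(MLIR_MODULE, target_backends=["llvm-cpu"])
with open("add.vmfb", "wb") as f:
    f.write(vmfb)
```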
AMD has also been working on Peano as a new LLVM-based compiler for Ryzen AI NPUs. There's also the AMD Unified AI Software Stack for tying more of AMD's different compute/accelerator products together in a more formal/unified way using MLIR, but so far we haven't seen the Unified AI Software Stack formally break cover… Previously there was talk that it would be out before the end of 2024.
AMD engineer Jorn Tuyls was at FOSDEM 2025 this past weekend in Brussels to talk about MLIR-based data tiling and packing for Ryzen AI NPU programming. If that interests you or you just want to learn more about how the AMD Ryzen AI NPUs function, see this FOSDEM.org presentation page for all the media assets from the interesting talk.
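For a rough idea of what data tiling/packing means in general, here is a generic NumPy illustration (not AMD's MLIR implementation, and the 4x4 tile size is an arbitrary example): packing rearranges a row-major matrix into contiguous fixed-size tiles so an accelerator's compute cores can stream whole blocks at a time.

```python
# Generic illustration of data tiling/packing: reshape a row-major (M, N)
# matrix into (M//tile_m, N//tile_n, tile_m, tile_n) so each tile is a
# contiguous block in memory. Tile sizes here are example values only.
import numpy as np

def pack_tiles(a: np.ndarray, tile_m: int, tile_n: int) -> np.ndarray:
    m, n = a.shape
    assert m % tile_m == 0 and n % tile_n == 0, "dims must divide evenly"
    return (a.reshape(m // tile_m, tile_m, n // tile_n, tile_n)
             .transpose(0, 2, 1, 3)
             .copy())  # copy() lays each tile out contiguously

a = np.arange(8 * 8, dtype=np.float32).reshape(8, 8)
packed = pack_tiles(a, 4, 4)
print(packed.shape)  # (2, 2, 4, 4): a 2x2 grid of 4x4 tiles
```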