AMD today released a new version of Ryzen AI Software, the user-space software stack for Microsoft Windows and Linux that lets Ryzen AI NPUs handle AI workloads such as Stable Diffusion, ONNX model inference, and more.
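For those unfamiliar with the stack, Ryzen AI Software exposes the NPU to applications primarily through ONNX Runtime's Vitis AI execution provider. As a rough illustration only, and not an excerpt from AMD's documentation, loading a model that way looks something like the sketch below; the model path and the configuration file name are placeholders, and provider option names can differ between releases.

```python
# Minimal sketch: running an ONNX model through ONNX Runtime using the
# Vitis AI execution provider shipped with Ryzen AI Software.
# "model.onnx" and "vaip_config.json" are placeholders; exact provider
# options may vary between Ryzen AI Software releases.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",                           # quantized model prepared for the NPU (placeholder)
    providers=["VitisAIExecutionProvider",  # offload supported operators to the Ryzen AI NPU
               "CPUExecutionProvider"],     # fall back to the CPU for anything unsupported
    provider_options=[{"config_file": "vaip_config.json"}, {}],
)

# Run inference with a dummy input matching the model's first input,
# treating any dynamic dimensions as batch size 1.
input_meta = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy = np.random.rand(*shape).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print(outputs[0].shape)
```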
Ryzen AI Software 1.7 brings hearty improvements to its CNN/Transformer compiler, with better run-time performance and quicker compile times. There is also support for new large language models (LLMs) on Ryzen AI NPUs, including Qwen-2.5-14b-Instruct, Qwen-3-14b-Instruct, and Phi-4-mini-instruct. In preview form there is Sparse-LLM support for the GPT-OSS-20b NPU model and the Gemma-3-4b-it vision language model (VLM). Long-context support for hybrid-execution LLM models is also now available.
On the Stable Diffusion side, there is new model support for SD3.5-Turbo with 8x dynamic resolutions and 2x dynamic batches (Text2Image and Image2ImageControlNet), as well as Segmind-Vega at 1024×1024 (Text2Image). Stable Diffusion with Ryzen AI Software 1.7 also sees up to a 40% performance improvement across all supported models using the native BFP16 format.
Downloads and more details on Ryzen AI Software 1.7 are available via GitHub. Windows and Linux installation instructions for this user-space Ryzen AI NPU support can be found in the Ryzen AI documentation.
