The open-source ZLUDA project, which allows unmodified CUDA applications to run on non-NVIDIA hardware, is out with a new progress report. ZLUDA had a productive fourth quarter: better Microsoft Windows support, complete support for running Llama.cpp atop ZLUDA, AMD ROCm 7 support, and other enhancements.
ZLUDA developers had hoped to provide robust PyTorch support by the end of 2025, but that didn't quite pan out. The Llama.cpp support via the NVIDIA CUDA back-end, however, is now considered complete. Granted, Llama.cpp already has native AMD ROCm and even Vulkan back-ends, but the ZLUDA path allows for some useful performance comparisons against those native back-ends on the likes of AMD GPUs. ZLUDA developers report that their route achieves "nearly identical" performance to the native ROCm back-end.
AMD ROCm 7 support was recently squared away as another step forward for running this CUDA implementation on AMD GPUs.
While ZLUDA began focused on Linux use, ZLUDA's Microsoft Windows support has improved with time and over the past quarter especially has become more robust. The zluda.exe launcher on Windows is now of "acceptable quality" and works as well as the Linux version.
Some other recent ZLUDA changes include shipping their own bundled LLVM to avoid depending on AMD ROCm's comgr library, performance improvements, compiler enhancements, and ongoing PyTorch work.
More details on this ZLUDA progress over Q4’2025 can be found via the project’s GitHub.
