The open-source ZLUDA effort started off a half-decade ago as a drop-in CUDA implementation for Intel GPUs. For several years after that it was funded by AMD as a CUDA implementation for Radeon GPUs atop ROCm, then open-sourced, and then reverted. Since last year the project has been pushing along a new path: a multi-vendor CUDA implementation for non-NVIDIA GPUs, targeting AI workloads and more. More progress was made on this effort during Q2.
The Q2'2025 status update for ZLUDA was posted today, sharing that the project has doubled in size: there are now two developers working full-time on it.
In addition to onboarding a second developer, ZLUDA has been dealing with ABI breaks in ROCm, continuing efforts to ensure bit-accurate execution across GPUs and drivers, improving logging, making some progress on NVIDIA PhysX support, and more. ZLUDA also now enjoys automated builds produced on GitHub.
ZLUDA has also begun making progress on llm.c support for large language model training in raw C/CUDA.
More details on the progress made by ZLUDA over the past quarter can be found on the project's GitHub site.