For the past several months, Intel software engineers have been quite busy with LLM-Scaler as part of Project Battlematrix. LLM-Scaler is a Docker-based solution that ships an optimized vLLM stack and other AI frameworks for AI workloads on Intel graphics hardware. Out today is a new LLM-Scaler-Omni release to help enhance ComfyUI performance on Intel hardware.
LLM-Scaler has seen quite frequent updates expanding model coverage on Intel graphics hardware and delivering other new AI features, primarily for the Arc "Battlemage" GPUs. With today's llm-scaler-omni 0.1.0-b4 beta release, there is now SGLang Diffusion support. That SGLang Diffusion support brings around a 10% performance improvement when running ComfyUI on a single Arc graphics card.
A ten percent improvement at this stage is pretty nice, especially with ComfyUI remaining quite popular. For those unaware, ComfyUI is an open-source solution capable of generating videos, images, 3D assets, and audio with AI via Stable Diffusion pipelines, and it works with a wide variety of models.
The new LLM-Scaler-Omni release also adds new ComfyUI workflows for Hunyuan-Video-1.5 (T2V, I2V, Multi-B60) and Z-Image. More details on this updated Intel release are available via GitHub, while the pre-built containers can be downloaded from Docker Hub.
