One of the initiatives launched by Intel in 2025 was LLM-Scaler as part of Project Battlematrix. The open-source LLM Scaler is a Docker-based solution for deploying Generative AI "GenAI" workloads on Intel Battlemage graphics cards with frameworks like vLLM, ComfyUI, SGLang, and more. There continue to be routine feature releases of LLM Scaler for broadening the large language models supported and delivering other improvements.
Released overnight was llm-scaler-vllm beta 0.11.1-b7 as the newest version of LLM Scaler atop the vLLM framework. The new release adds symmetric 4-bit integer (sym_int4) quantization support for some models, support for the PaddleOCR model, GLM-4.6v-Flash support, and various fixes:
– Supported sym_int4 for Qwen3-30B-A3B on TP 4/8.
– Supported sym_int4 for Qwen3-235B-A22B on TP 16.
– Added support for the PaddleOCR model.
– Added support for GLM-4.6v-Flash.
– Fixed crash errors with 2DP + 4TP configuration.
– Fixed abnormal output observed during JMeter stress testing.
– Fixed UR_ERROR_DEVICE_LOST errors triggered by frequent preemption under high load.
– Fixed output errors for InternVL-38B.
– Refined logic for profile_run to provide more GPU blocks by default.
The llm-scaler README continues to note that this GenAI solution is optimized for the elusive Arc Pro B60 graphics card but should ultimately work fine on other Arc Graphics hardware as well, provided there is enough vRAM for the different models.
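Once the container is up and serving a model, the vLLM instance inside exposes an OpenAI-compatible HTTP API that can be queried like any other. Below is a minimal sketch of such a client; the port, endpoint, and model identifier are assumptions for illustration rather than values taken from the release notes, so the llm-scaler README should be consulted for the actual container invocation.

```python
# Minimal sketch of querying a vLLM OpenAI-compatible endpoint.
# The base_url, port, and model name are assumptions for illustration,
# not values from the llm-scaler 0.11.1-b7 release notes.
from openai import OpenAI

# Point the client at the locally running container (port assumed here);
# vLLM accepts any placeholder API key for local serving.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B",  # assumed model identifier for illustration
    messages=[{"role": "user", "content": "Summarize what LLM Scaler does."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```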
This new release is available via GitHub and the Docker images via Docker Hub.
