Intel software engineers continue to be hard at work on LLM-Scaler as their solution for running vLLM on Intel GPUs in a Docker containerized environment. A new beta of LLM-Scaler's vLLM component was published overnight, adding support for running more large language models.
Since the project's "LLM-Scaler 1.0" debut back in August, there have been frequent updates expanding LLM coverage on Intel GPUs and exposing more features for harnessing the AI compute capabilities of Intel graphics hardware. The versioning scheme, though, remains a mess: today's test version is "llm-scaler-vllm beta release 0.10.2-b6" even though "1.0" was previously announced.
The changes with today's llm-scaler-vllm beta update include:
– MoE-Int4 support for Qwen3-30B-A3B
– Bpe-Qwen tokenizer support
– Enable Qwen3-VL Dense/MoE models
– Enable Qwen3-Omni models
– MinerU 2.5 Support
– Enable whisper transcription models
– Fix minicpmv4.5 OOM issue and output error
– Enable ERNIE-4.5-vl models
– Enable Glyph based GLM-4.1V-9B-Base
Those interested in using vLLM on Intel GPUs via this Docker environment can find all the details on the new beta update via GitHub. The Docker image is available as intel/llm-scaler-vllm:0.10.2-b6.
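For a sense of how such a containerized vLLM deployment is typically consumed, below is a minimal sketch using vLLM's standard OpenAI-compatible API. The port, the served model name (Qwen/Qwen3-30B-A3B, one of the models called out in this update), and the assumption that a running intel/llm-scaler-vllm:0.10.2-b6 container exposes this endpoint are illustrative and not taken from Intel's documentation.

```python
# Minimal sketch: querying a vLLM OpenAI-compatible endpoint, e.g. one
# served out of the intel/llm-scaler-vllm:0.10.2-b6 container.
# Assumptions (not from the article): the server listens on localhost:8000
# and is serving Qwen/Qwen3-30B-A3B.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B",
    messages=[{"role": "user",
               "content": "Summarize what vLLM does in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

This mirrors the usual vLLM serving workflow, where the container runs the OpenAI-compatible server and any OpenAI client library can talk to it without code changes.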
