ASUS AI POD has been the star of the new AI infrastructure that the Taiwanese manufacturer has presented at the GTC 2026 conference. Based on the NVIDIA Vera Rubin platform, it offers a comprehensive end-to-end and fully liquid-cooled solution.
Under the motto “Reliable AI, total flexibility”, this customizable framework, spanning rack-scale AI factories, desktop AI supercomputing, edge AI and enterprise AI solutions, enables enterprises and cloud providers to build large-scale, high-performance, energy-efficient AI clusters “with unmatched efficiency and a drastic reduction in PUE and TCO”.
ASUS AI BRIDGE
As a supplier of the NVIDIA GB300 NVL72 and NVIDIA HGX B300 systems, ASUS’ flagship offering is the ASUS AI POD, based on the NVIDIA Vera Rubin platform: a powerful liquid-cooled rack-scale solution designed for massive AI workloads.
Through strategic alliances with leading cooling and component suppliers, ASUS offers various cooling modalities, tailored thermal solutions and redundancy to meet any business requirement. As demonstrated by the successes of customers around the world, ASUS offers specialized consulting, a broad portfolio of AI and storage solutions, seamless infrastructure deployment, application integration and continuous services, combining scalability and sustainability to drive business value and intelligence.
ASUS AI Factory in action
At the forefront is the flagship XA VR721-E3. Built on the Vera Rubin NVL72 platform and designed for trillion-parameter models, it is a fully liquid-cooled rack-scale system that delivers massive performance for large-scale AI factories. Its TDP is 227 kW (MaxP) or 187 kW (MaxQ), providing up to 10 times more performance per watt.
To meet the rigorous demands of data centers, ASUS has also introduced its latest series of servers based on NVIDIA HGX Rubin NVL8 systems, which feature eight NVIDIA Rubin GPUs connected via 6th-generation NVIDIA NVLink with an integrated bandwidth of 800G per GPU. To facilitate a smooth and cost-effective transition to liquid cooling, ASUS offers two distinct solutions: the XA NR1I-E12L, an innovative hybrid cooling option; and the XA NR1I-E12LR, a fully liquid-cooled system. The XA NR1I-E12L specifically combines direct-to-chip (D2C) liquid cooling for the NVIDIA HGX Rubin NVL8 motherboard with air cooling for the two Intel Xeon 6 processors.
The product portfolio is reinforced by high performance scalable servers such as the XA NB3I-E12, based on NVIDIA HGX B300 systems, which guarantees a solution for every demanding AI workload; the ESC8000A-E13X, based on NVIDIA MGX and integrated with NVIDIA ConnectX-8 SuperNIC for extreme connectivity between GPUs; and the ESC8000A-E13P, accelerated by NVIDIA RTX PRO 4500 Blackwell Server Edition or NVIDIA RTX PRO 6000 Blackwell Server Edition GPU, delivering breakthrough performance for demanding data processing, AI, video and visual computing workloads in a low-power design.

Making physical AI a reality
ASUS has a complete ecosystem for physical AI, providing the critical computing power needed from initial development to final deployment. The journey begins at the developer’s desk with the ASUS ExpertCenter Pro ET900N G3, a desktop supercomputer powered by the NVIDIA Grace Blackwell Ultra platform.
With NVIDIA NVLink-C2C interconnects and 784 GB of coherent unified memory, it handles the heavy lifting of training massive models. Alongside it, the ultra-compact ASUS Ascent GX10 delivers agile petaflop-scale performance powered by the NVIDIA Grace Blackwell Superchip, ideal for rapid iteration of scalable edge models and configurations.
This development capability transfers seamlessly to the PE3000N, a robust inference engine powered by NVIDIA Jetson Thor. Delivering 2,070 TFLOPS of AI compute, the PE3000N provides the real-time performance necessary for sensor fusion and autonomous navigation. Together, these systems form a unified workflow where open models like NVIDIA Cosmos and the Metropolis vision AI libraries can perceive, reason, and act effectively in the physical world.
Real-time enterprise AI
To accelerate enterprise AI, the manufacturer has introduced the ASUS AI Hub, a turnkey local AI platform optimized with ESC8000 series servers and open source LLM technology such as NVIDIA Nemotron and Gemma. It enables businesses to create custom AI assistants, implement RAG-enhanced document intelligence, and maintain full data sovereignty for security and compliance.
Tested internally with more than 10,000 employees, peak loads exceeding 600 requests per hour, OCR accuracy above 80% and efficiency gains above 30%, the platform features domain-specific modules for various applications. Among them is the newly developed internal ASUS Agent Business Intelligence platform, which allows senior managers to instantly access critical information on costs, sales, gross margins, factory operations and other key metrics through simple natural-language questions and answers, transforming complex data into immediate, actionable executive decision-making power.
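The RAG-enhanced document intelligence mentioned above follows a general pattern: retrieve the documents most relevant to a question, then ground the language model's answer in them. The sketch below illustrates that pattern only, with a toy keyword-overlap retriever; it is not ASUS AI Hub code, and all function names and sample data are hypothetical.

```python
# Minimal retrieval-augmented generation (RAG) sketch: rank documents
# by keyword overlap with the query, then assemble a grounded prompt
# for a local LLM. Illustrative only; real systems use vector embeddings.

def tokenize(text: str) -> set[str]:
    """Lowercase bag-of-words tokenization."""
    return set(text.lower().split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most terms with the query."""
    q = tokenize(query)
    ranked = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that grounds the model's answer in retrieved text."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

docs = [
    "Gross margin for Q3 improved to 18 percent.",
    "Factory operations resumed full capacity in March.",
    "The cafeteria menu changes every Monday.",
]
prompt = build_prompt("what was the gross margin in Q3", docs)
print(prompt)
```

In a production deployment the keyword scorer would be replaced by an embedding index, and the prompt would be sent to the on-premises LLM, which is what keeps the documents inside the company's data-sovereignty boundary.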
ASUS and NVIDIA are also working together on NVIDIA NemoClaw, an open-source stack that simplifies running OpenClaw’s always-on assistants more securely, with a single command.
Green computing and sustainability
Sustainability is a fundamental pillar of ASUS’s design philosophy, with green computing innovations integrated into both hardware and software to minimize TCO and environmental impact. At the hardware level, ASUS servers incorporate Thermal Radar 2.0, which uses up to 56 sensors to intelligently optimize fan performance, reducing energy consumption by up to 36%.
This commitment extends to software with ASUS Control Center (ACC) Data Center Edition, a unified management platform that enhances security and includes automated monitoring of carbon emissions, providing companies with the tools necessary to achieve their critical ESG objectives.
