The launch of the Nvidia Blackwell Ultra platform was the highlight of the keynote delivered by the company's founder and CEO, Jensen Huang, at Computex 2025.
The new hardware platform for accelerating large-scale artificial intelligence systems was previewed at CES, and Nvidia is now announcing further details, availability, and partners, along with a new interconnect system to speed up communication between the chips that handle AI processing.
As we already knew, Nvidia Blackwell Ultra promises AI reasoning, agentic AI, and physical AI, as Huang explained in his keynote:
«AI has made a gigantic leap: reasoning and agentic AI demand orders of magnitude more computing performance. We designed Blackwell Ultra for this moment: it is a single, versatile platform that can easily and efficiently handle everything from pretraining to post-training and reasoning inference».
The flagship of the platform is the GB300 NVL72 system, a liquid-cooled computing rack that combines 72 Blackwell Ultra GPUs and 36 Nvidia Grace CPUs. Nvidia states that this system offers 1.5 times the performance of its predecessor, the GB200 NVL72, and multiplies by 50 the revenue opportunity of Blackwell compared with systems built on Nvidia Hopper.
Systems can be integrated with the Spectrum-X Ethernet and Quantum-X800 InfiniBand platforms, with 800 Gb/s of network throughput for each GPU in the system, connected through the new ConnectX-8 SuperNIC network accelerator. To improve GPU-to-GPU communication bandwidth, ConnectX-8 integrates 48 PCIe Gen6 lanes with a built-in PCIe Gen6 switch, consolidating GPU-to-GPU and GPU-to-NIC communication into a single high-performance device instead of the dedicated PCIe switches used previously.
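As a rough illustration of what those per-GPU figures mean at rack scale, the following sketch multiplies the quoted 800 Gb/s of network throughput per GPU across the 72 GPUs of a GB300 NVL72 rack. This is back-of-the-envelope arithmetic based only on the numbers in the article, not an official Nvidia aggregate figure:

```python
# Illustrative arithmetic: aggregate network throughput of a GB300 NVL72 rack,
# assuming the quoted 800 Gb/s per GPU applies to all 72 GPUs simultaneously.

GPUS_PER_RACK = 72          # Blackwell Ultra GPUs in a GB300 NVL72
PER_GPU_GBPS = 800          # gigabits per second per GPU (quoted figure)

total_gbps = GPUS_PER_RACK * PER_GPU_GBPS   # total in gigabits per second
total_tbps = total_gbps / 1000              # terabits per second
total_gBps = total_gbps / 8                 # gigabytes per second

print(f"{total_gbps} Gb/s = {total_tbps} Tb/s ≈ {total_gBps:.0f} GB/s")
# → 57600 Gb/s = 57.6 Tb/s ≈ 7200 GB/s
```

In other words, a fully populated rack would expose on the order of 57.6 Tb/s of aggregate network throughput under these assumptions.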
Nvidia Blackwell Ultra systems will be available through the company's partners Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro from the second half of 2025. Cloud instances with Blackwell Ultra technology will also be offered by AWS, Microsoft Azure, Google Cloud and Oracle Cloud, as well as by the GPU cloud providers CoreWeave, Crusoe, Lambda, Nebius, Nscale, Yotta and YTL. The GB300 NVL72 system will also be part of Nvidia's own cloud platform, DGX Cloud.
NVLink Fusion
The main bottlenecks in AI model inference are compute, memory and bandwidth. The new NVLink Fusion addresses the bandwidth bottleneck, enabling the efficient, scalable systems (known as hyperscale systems) that AI processing requires.
Nvidia indicates that its new NVLink Fusion chip offers 1.8 TB/s of bidirectional bandwidth (total, upstream plus downstream) between systems. NVLink Fusion is not just silicon but part of a scalable architecture: hyperscale data centers can integrate their semi-custom ASICs with NVLink Fusion and also connect them to Nvidia CPUs, NVLink, ConnectX Ethernet switches, BlueField data processing units (DPUs), and Quantum and Spectrum-X switches.
The NVLink Fusion partner ecosystem includes custom silicon designers, CPU partners, IP providers and OEMs/ODMs, offering a complete solution for deploying custom silicon with Nvidia at scale and building what the company calls «AI factories». NVLink Fusion and its services are already available through partner companies such as MediaTek, Marvell, Alchip, Astera Labs, Synopsys and Cadence.