Blackwell Ultra is another of the major new products presented by Nvidia CEO Jensen Huang during the GTC 2025 keynote: a premium version of its flagship server chip that "paves the way for the era of AI reasoning," as described in its official presentation.
Nvidia has reaffirmed its commitment to hardware for accelerating artificial intelligence systems. It is the company's great current business, the one that has pushed its market capitalization above 3 trillion dollars and made it one of the most valuable companies in the world. Although DeepSeek's emergence caused its biggest stock market drop, the green giant has been recovering, making it clear that analysts and investors continue to believe that advanced AI hardware will remain essential to the development of these technologies.
It is no surprise, then, that a good part of Nvidia's announcements have focused on AI. In previous entries we covered the two personal AI supercomputers created by the company (DGX Spark and DGX Station), and here we will preview what its new graphics accelerator will offer.
Blackwell Ultra
The NVIDIA Blackwell Ultra AI factory platform promises to improve training and test-time scaling inference (the art of applying more compute during inference to improve accuracy), allowing organizations around the world to accelerate applications such as AI reasoning, agentic AI and physical AI. A rough sketch of the test-time scaling idea follows below.
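As a purely illustrative example (nothing here is specific to Blackwell Ultra or to any NVIDIA API), the following Python sketch shows one common form of test-time scaling, best-of-N sampling: a toy, stand-in "model" and "verifier" demonstrate how spending more compute at inference time, by sampling and scoring more candidates, improves the odds of keeping a correct answer.

```python
# Minimal sketch of test-time scaling via best-of-N sampling.
# toy_model and toy_verifier are hypothetical stand-ins, not real APIs:
# the point is only that extra inference-time compute (more samples,
# more scoring) can be traded for higher answer accuracy.
import random

def toy_model(prompt: str) -> str:
    """Stand-in for an LLM call: returns a noisy guess for 17 * 23."""
    return str(17 * 23 + random.choice([-2, -1, 0, 0, 1, 2]))

def toy_verifier(prompt: str, answer: str) -> float:
    """Stand-in for a reward/verifier model: scores a candidate answer."""
    return -abs(int(answer) - 17 * 23)  # higher is better

def best_of_n(prompt: str, n: int) -> str:
    """Spend n model calls at inference time and keep the best-scored candidate."""
    candidates = [toy_model(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: toy_verifier(prompt, a))

if __name__ == "__main__":
    prompt = "What is 17 * 23?"
    for n in (1, 4, 16, 64):  # more inference compute -> better odds of a correct answer
        print(n, best_of_n(prompt, n))
```

The trade-off this illustrates is exactly what the platform targets: each extra candidate costs another forward pass, so reasoning-style workloads multiply the inference compute required.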
"AI has made a giant leap: reasoning and agentic AI demand far greater computing performance," says Nvidia chief Jensen Huang. "We designed Blackwell Ultra for this moment: it is a single, versatile platform that can easily and efficiently handle AI inference both before and after training, as well as for reasoning."
The star of the platform is the new NVIDIA GB300 chip, a solution that aims to deliver substantial performance improvements over the GB200. For example, the company promises a 50 percent gain in FP4 performance compared with its predecessor. This superchip includes the Arm-based 'Grace' CPU and a greatly improved memory subsystem, which now uses the new LPCAMM form factor and scales up to 288 GB of HBM3E RAM.
An advanced scale-out network is a fundamental component of any AI infrastructure for reducing latency and jitter, and to that end Blackwell Ultra updates its networking to support NVIDIA Spectrum-X Ethernet and ConnectX-8 InfiniBand, with 800 Gb/s of data throughput available for each GPU in the system. It also supports NVIDIA network acceleration engines and optical modules scaling from 800 to 1,600 gigabits per second (Gb/s) of data throughput.
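To put that figure in context, a quick back-of-the-envelope calculation (assuming 72 GPUs per rack, as the NVL72 naming below suggests; the aggregate number is our own arithmetic, not an NVIDIA claim) converts the per-GPU throughput into bytes and into a per-rack total:

```python
# Simple unit conversion for the stated 800 Gb/s per-GPU network throughput.
per_gpu_gbps = 800                      # gigabits per second per GPU (stated)
per_gpu_gigabytes = per_gpu_gbps / 8    # ~100 gigabytes per second per GPU
rack_tbps = per_gpu_gbps * 72 / 1000    # ~57.6 terabits per second across 72 GPUs (assumed rack size)
print(f"{per_gpu_gigabytes:.0f} GB/s per GPU, {rack_tbps:.1f} Tb/s aggregate per rack")
```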
NVIDIA has announced several rack-scale solutions under this platform, such as the NVIDIA GB300 NVL72 and the NVIDIA HGX B300 NVL16 systems. Products based on Blackwell Ultra are expected to be available from the second half of 2025 through the large number of partners that will distribute servers with these solutions, including Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo, Asus and Supermicro, among others. Cloud service providers Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud will be among the first to offer instances with this technology.