Dell Technologies Inc. could sell more than $5 billion worth of artificial intelligence servers to xAI Corp., Bloomberg reported today.
The news agency’s sources said that discussions about the potential deal are at an advanced stage. However, they cautioned that the transaction’s terms could still change.
Launched by Elon Musk in March 2023, xAI develops a family of large language models called the Grok series. The LLMs are available through an eponymous chatbot and an application programming interface. To support its development efforts, xAI has built an AI-optimized supercomputer called Colossus in Memphis.
The system came online last September with 100,000 Nvidia Corp. graphics processing units. Dell has supplied tens of thousands of the GPU-equipped servers that power Colossus. In December, the hardware maker revealed that it hopes to supply an “unfair share” of the GPU servers xAI will add to the supercomputer down the road.
Following a $6 billion funding round in December, xAI revealed that it’s working to double Colossus’ GPU count to 200,000 chips. The company’s long-term goal is to increase that number to one million.
According to Bloomberg, the servers that Dell would supply for Colossus under the deal it’s discussing with xAI are based on Nvidia Corp.’s GB200 design. The system (pictured) includes 36 of the chipmaker’s GB200 Grace Blackwell Superchips. Each such processor features one Grace central processing unit and two Blackwell B200 GPUs.
Grace features 72 cores with a clock speed of 3.2 gigahertz. Those cores are based on Arm Holdings plc’s Neoverse V2 core design, which is optimized for use in data centers.
The Blackwell B200, the other component of the GB200 Grace Blackwell Superchip, is Nvidia’s newest and most capable GPU. It features 208 billion transistors made using Taiwan Semiconductor Manufacturing Co.’s four-nanometer process. The chip stores AI models’ data in an onboard DRAM pool with 192 gigabytes of capacity.
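Taken together, those per-unit figures imply the system-level totals sketched below. The calculation uses only the numbers cited above, so treat it as illustrative arithmetic rather than an official specification sheet.

```python
# Rough per-system totals for a GB200 machine, using only the figures cited above.
superchips = 36          # GB200 Grace Blackwell Superchips per system
cpus_per_superchip = 1   # one Grace CPU each
gpus_per_superchip = 2   # two Blackwell B200 GPUs each
cores_per_cpu = 72       # Arm Neoverse V2 cores per Grace CPU
memory_per_gpu_gb = 192  # onboard memory per B200, in gigabytes

total_cpus = superchips * cpus_per_superchip                  # 36 CPUs
total_gpus = superchips * gpus_per_superchip                  # 72 GPUs
total_cpu_cores = total_cpus * cores_per_cpu                  # 2,592 CPU cores
total_gpu_memory_tb = total_gpus * memory_per_gpu_gb / 1024   # ~13.5 TB of GPU memory

print(f"CPUs: {total_cpus}, GPUs: {total_gpus}")
print(f"CPU cores: {total_cpu_cores:,}")
print(f"GPU memory: {total_gpu_memory_tb:.1f} TB")
```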
The CPU and two GPUs in each GB200 Grace Blackwell Superchip are linked together by a technology called NVLink-C2C. It can transfer data between the processors at speeds of up to 900 gigabytes per second. That’s seven times the throughput provided by PCIe 5, an industry-standard technology for linking together chips.
Another key selling point of NVLink-C2C is that it provides memory coherence. This feature reduces the need to copy data between the CPUs and GPUs in an AI cluster, which speeds up processing. It also makes it easier for developers to configure the hardware.
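Nvidia’s roughly sevenfold comparison can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes a PCIe 5.0 x16 link delivering about 128 gigabytes per second of bidirectional bandwidth, a commonly cited figure that does not appear in the article.

```python
# Back-of-the-envelope check of the "seven times PCIe 5" comparison.
# Assumption: a PCIe 5.0 x16 link offers roughly 128 GB/s of bidirectional
# bandwidth (about 64 GB/s per direction); that figure is not from the article.
nvlink_c2c_gb_per_s = 900   # NVLink-C2C bandwidth cited above
pcie5_x16_gb_per_s = 128    # assumed PCIe 5.0 x16 bidirectional bandwidth

speedup = nvlink_c2c_gb_per_s / pcie5_x16_gb_per_s
print(f"NVLink-C2C vs. PCIe 5.0 x16: ~{speedup:.1f}x")  # ~7.0x
```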
The 36 GB200 Grace Blackwell Superchips in the GB200 system, the server at the center of Dell’s potential deal with xAI, are housed in 18 compute trays. Each tray provides up to 80 petaflops of performance. The heat the trays generate is dissipated by cold plates, flat metal components connected to a liquid cooling system.
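Multiplying the per-tray figure across the system gives a rough sense of its aggregate throughput. The calculation below uses only the numbers cited above; the article itself does not state a system-level performance figure.

```python
# Rough system-level throughput implied by the per-tray figures above.
compute_trays = 18          # compute trays per GB200 system
petaflops_per_tray = 80     # peak performance per tray, as cited above

total_petaflops = compute_trays * petaflops_per_tray
print(f"Peak performance: {total_petaflops:,} petaflops "
      f"(~{total_petaflops / 1000:.2f} exaflops)")
# 1,440 petaflops, i.e. roughly 1.4 exaflops of peak low-precision compute
```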
It’s believed Dell could deliver the servers to xAI later this year. Analysts cited by Bloomberg estimate that the hardware maker’s AI server sales will reach $14 billion in the 12 months through January 2026, which would represent a 40% year-over-year increase. Dell shares closed 3.75% higher today on the report.
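For context, a 40% year-over-year increase to $14 billion implies prior-year AI server sales of roughly $10 billion. The quick calculation below is inferred from the analysts’ figures rather than a number reported by Bloomberg.

```python
# Implied prior-year AI server revenue from the analysts' estimate above.
projected_revenue_b = 14.0   # estimated AI server sales, in billions of dollars
yoy_growth = 0.40            # estimated year-over-year increase

implied_prior_year_b = projected_revenue_b / (1 + yoy_growth)
print(f"Implied prior-year AI server sales: ~${implied_prior_year_b:.1f}B")  # ~$10.0B
```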
Image: Nvidia