Nvidia is the sweetest customer right now. It is the glue of artificial intelligence, it invests millions in AI startups and, although Big Tech is trying to become the new Nvidia, the truth is that it is currently the only company with the hardware needed to meet the hyperscalers' goals. Its real achievement is making the entire conversation revolve around it, so that everyone ends up working for it.
And Foxconn is the latest company to tie its future to Nvidia's.
In short. Nvidia is currently preparing the launch of Vera Rubin, a training and inference platform that brings together the latest-generation hardware from companies such as Samsung and TSMC. However, Jensen Huang has already said that he will need all available hands this year to meet his goals, and that is where Foxconn comes in.
The large (and controversial) Taiwanese company has been chosen by Nvidia as the exclusive supply chain supplier of the equipment needed for Groq 3 LPX. The two companies were already working together, but the agreement implies a tenfold increase in planned delivery volume and, moreover, sooner than expected: by the third quarter of 2026.
Transformation. This agreement lays Nvidia's intentions on the table. If until recently the talk was of GPUs like the H200, coveted by Chinese Big Tech, now it is time to talk about the aforementioned Vera Rubin: a platform capable of both training and inference work, something increasingly important in the era of agentic AI. According to Nvidia, the Groq 3 LPX rack delivers 35 times better inference performance on billion-parameter AI models than the previous Blackwell generation.
It is no coincidence that Intel is shifting its business toward Xeons for data centers, or that ARM shares hit highs after it presented its CPU for AI. Foxconn will focus its resources on that Groq-powered LPX platform to accelerate the critical decoding phase of agentic AI models. And, beyond Groq 3 LPX, the Taiwanese company is also one of the main suppliers of Vera Rubin NVL72 cabinets. In short: Foxconn is, right now, playing in the Champions League.
According to industry sources, shipments of the LP30 and LP35 chips that these LPX cabinets will use will reach 1.5 million in 2026 and 2.5 million in 2027. In total, that means 6,000 racks the first year and 10,000 the following year from those chips alone.
The big customer. And here there are two sides of the coin. On one side is Nvidia, a company that has struck such a rich vein in AI that its suppliers are abandoning those who, not so long ago, were their main clients. Working for Nvidia means guaranteed profits for as long as its hegemony in the age of AI lasts. One example is Samsung, which rushed to win the HBM4 memory race because it knew that, if it delivered, it would leapfrog its biggest rivals in the race to become Nvidia's supplier.
Another example is TSMC itself. The world's largest foundry had Apple as its main customer for years, which meant that if there was any crisis in the supply chain, Apple knew it would get its chips, because TSMC's future was tied to its own. Now things have changed, and the one with guaranteed wafers is Nvidia.

Accumulating big contracts. The other side of the coin is that Foxconn, a company not usually part of the daily conversation as one of the engines of AI, is winning big, juicy contracts. One is the exclusive Nvidia deal mentioned above, but Foxconn also supplies other giants such as Google with its TPUs, Microsoft and Amazon AWS.
In Google's case, for example, it holds a 15% market share. And Foxconn's CEO, Liu Yangwei, is proud of the situation, pointing out that the company can produce more than 1,000 complete cabinets per week and is working to double that capacity before the end of the year.
Oh, and in case you are wondering whether this will have any impact on RAM prices, bear in mind that each of the thousands of racks Foxconn manufactures will carry 256 chips with 128 GB of SRAM and 12 TB of DDR5 memory. That also helps explain why the major memory makers are winding down DDR4 production to focus on DDR5, and why prices will stay where they are for many months to come.
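As a quick sanity check, the figures quoted above are roughly self-consistent: 256 chips per rack times the reported rack counts lands on the reported chip shipments, and the 12 TB of DDR5 per rack gives a feel for the memory demand involved. A minimal sketch, using only the numbers from the article:

```python
# All figures below are taken from the article itself.
CHIPS_PER_RACK = 256   # chips per LPX rack
DDR5_TB_PER_RACK = 12  # TB of DDR5 per rack

racks_2026, racks_2027 = 6_000, 10_000

# Implied chip shipments: matches the reported ~1.5M (2026) and ~2.5M (2027).
chips_2026 = racks_2026 * CHIPS_PER_RACK   # 1,536,000
chips_2027 = racks_2027 * CHIPS_PER_RACK   # 2,560,000

# Implied DDR5 demand from these racks alone.
ddr5_tb_2026 = racks_2026 * DDR5_TB_PER_RACK   # 72,000 TB = 72 PB
ddr5_tb_2027 = racks_2027 * DDR5_TB_PER_RACK   # 120,000 TB = 120 PB

print(chips_2026, chips_2027, ddr5_tb_2026, ddr5_tb_2027)
```

Tens of petabytes of DDR5 a year for a single product line goes some way toward explaining the pressure on memory prices.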
Image | Hillel Steinberg
In | That Qualcomm is preparing its own AI chips is good news. Whether it has a chance in the market is a very different matter.
