In recent years, NVIDIA has gone from being a gaming hardware company that dabbled in artificial intelligence to being, quite simply, the glue of the entire AI industry. It has achieved this through graphics muscle, but also thanks to deep pockets and a very clear vision: becoming the largest AI incubator in the world. For some, though, it has become an uncomfortable partner.
You can’t talk about AI without talking about NVIDIA, but along the way it has achieved something else: turning its main allies into rivals.
NVentures. A few years ago, Jensen Huang, NVIDIA’s CEO, realized something: AI had to be NVIDIA’s or it would be nobody’s. He saw the need to become not just an investor but also the technical support hub for the startups that were beginning to buy many of his chips for AI training. Years earlier, NVIDIA had created Inception, a branch focused on non-financial support, but in 2022 it launched NVentures.
It is the company’s corporate venture capital arm, born at the dawn of the generative AI we know today. In fact, it launched a few months before ChatGPT’s public debut, which was precisely what popularized the massive use of NVIDIA GPUs to train large-scale models.
Whereas more than 19,000 AI startups passed through Inception’s advisory program (training, cloud credits and discounts on bulk GPU purchases, but no direct investment), with NVentures things also escalated quickly: from one direct investment in 2022 to 30 in 2023, 54 in 2024 and 67 in 2025. Some are larger than others, but all are investments of tens of millions of dollars that have helped boost the current ecosystem in a kind of circular economy.
Do you think I’m a bank? This TechCrunch article lays out the investments perfectly and separates them into “clubs.” There is the $100 million club, with companies like Ayar Labs, Hippocratic AI, Kore.ai or Runway, which have each received more than $100 million. Then the hundreds-of-millions club, with Cohere, Commonwealth Fusion, Perplexity, Lambda or Black Forest Labs as exponents. And then the billion-dollar club, which holds the big names: Cursor, xAI, France’s Mistral, Reflection AI, Thinking Machines Lab, Figure AI or Scale AI. Also two uncomfortable partners: OpenAI and Anthropic.
The relationship between OpenAI and NVIDIA has been long and symbiotic. Each has helped put the other on the generative AI map, but NVIDIA is going to turn off the tap. Recently, Huang himself commented that they will put $30 billion into OpenAI, or… and that these two mega-operations will probably be the last. Both companies are expected to go public later this year, so they will have to start fending for themselves.
A change of tack. That doesn’t mean NVIDIA will stop injecting money; it simply means spreading that money out so as to be in more places at once. Instead of such large amounts, more financing for more “modest” companies in models, software, infrastructure, robotics, cloud and even autonomous driving and biotechnology, to keep expanding the network of companies that scale on its platform. In fact, investing in small companies that are just beginning to grow is very lucrative.
An example is Reflection’s funding round. Of the $2 billion the company raised, $800 million came from NVIDIA’s pockets, and much of that money, plus returns, will flow straight back into them. NVIDIA is so central that Reflection itself points out that “when you talk to it, you are talking to NVIDIA.” That dependence is what makes NVIDIA an uncomfortable partner: it holds enormous power.
Inference. But the other shift comes not so much from NVIDIA as from the industry itself. In recent years the focus has been on training: increasingly powerful chips powering ever-larger data centers in which increasingly capable models are trained. Once trained, however, a model has to be useful for something, and that is where inference comes into play.
It is estimated that the big growth in AI’s future will come not so much from training the next ChatGPT as from the ability to handle billions of AI requests cheaply and efficiently. That implies more specialized chips, with architectures different from a classic training GPU. Analysts are already pointing out that the need for inference is growing faster than expected.
From lovers to enemies. And that’s where other companies come into play. On the one hand, classic rivals like Huawei, with equipment for both training and inference. Also AMD, which is adding partners like Samsung to create training GPUs and inference CPUs. Intel, Amazon and Google also have their own chips. But NVIDIA’s biggest customers don’t want NVIDIA dictating their future.
OpenAI is working with Broadcom to develop its own chips, which may be focused on that inference, and both Tesla and xAI (now part of SpaceX) have taken the same path. Both companies have needed NVIDIA until now, but they don’t want to depend on it for inference, where there may be more profit margin: the idea is to create chips highly specialized in request handling so as to drive the cost of AI as low as possible.
China is an example of this. The country’s big tech companies and startups have focused on one thing: training specialized models and making inference so cheap that users don’t mind paying for it. Some already point out that, in the short term, 80% of the cost of AI will be inference, and solutions are needed.
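That kind of figure is easier to see with simple arithmetic: training is a one-off expense, while inference is paid on every single request. A minimal back-of-envelope sketch in Python, using purely hypothetical numbers (none of them come from the article):

```python
# Hypothetical back-of-envelope comparison of training vs. inference cost.
# All figures below are illustrative assumptions, not real data.

TRAINING_COST = 1_000_000_000      # one-off cost to train a frontier model, in dollars
COST_PER_REQUEST = 0.002           # cost to serve a single inference request, in dollars
REQUESTS_PER_DAY = 5_000_000_000   # daily request volume at scale
DAYS = 365                         # one year of serving the model

# Training is paid once; inference accumulates with every request served.
inference_cost = COST_PER_REQUEST * REQUESTS_PER_DAY * DAYS
total_cost = TRAINING_COST + inference_cost
inference_share = inference_cost / total_cost

print(f"Yearly inference cost: ${inference_cost:,.0f}")
print(f"Inference share of total cost: {inference_share:.0%}")
```

Under these assumptions, inference ends up accounting for roughly four-fifths of total spend, which is why the per-request cost, rather than the training bill, becomes the lever everyone wants to control.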
The ace in the hole. But if almost all of its allies have long been preparing their hands to stop depending on NVIDIA’s cards, NVIDIA has also been doing the math and keeping an ace up its sleeve. Once again it is related to money and, specifically, to the Groq license. That company attracted a lot of attention a few years ago because it specialized in extremely efficient inference chips.
NVIDIA took note and, rather than buying Groq outright, licensed its technology for $20 billion to create its own solutions: ones with which to attack the inference market, convince those currently building alternatives that, once again, it is better to buy from NVIDIA and, above all, enter the Chinese market. A market that Huang estimates at $50 billion.
In WorldOfSoftware | ChatGPT’s milestone is not being a good AI: it is having become one of the biggest attention grabbers in history
