In 2014, Google announced its acquisition of DeepMind, folding one of the world's leading AI research labs into the tech giant's portfolio.
For the first few years of the labs’ coexistence in the tech market, their competition was limited to sparse publication of research papers and tentative attempts at commercializing AI products. This period of casual research, in which AI companies were still treated as long-term investments that produced research alongside universities while slowly advancing the field of machine learning, was quickly and decisively broken by OpenAI’s rapid rise in popularity following the viral spread of ChatGPT in late 2022. What followed was a complicated period of power transfers, shifting priorities, solidifying allegiances, and, above all, a race to keep building the most powerful AI model possible. With advancements coming faster and faster, the AI race has reached a status shift: what happens now matters more than ever. In this article, we seek to explain and explore the nature and consequences of this “pivotal moment” in the AI race.
Setting the Stage
To start off my analysis, I would like to acknowledge that there are far more than three AI companies: between other AI development programs like xAI and startups applying AI to specific fields, there is an endless number of successful yet comparatively small players in the AI industry. To understand why I discount them, consider my definition of “winning” the AI race: creating AGI before other companies, so that it can improve itself at a sufficiently higher rate than competitors’ efforts to advance their own technology. I have chosen to focus on the three biggest and best-equipped AI companies (Google DeepMind, Anthropic, and OpenAI) because I believe they have the highest chance of staying at or near the frontier of state-of-the-art AI in the near future on account of their talent and resources (more on that later). As for the smaller companies, many will doubtlessly be acquired or fall out of favor as AI becomes more agentic, more generalized, and better at the specific applications those companies are built around.
In fact, for a year or two, OpenAI dominated the AI industry, releasing new iterations of ChatGPT back-to-back, each promising greater capabilities for laymen and professionals alike; this gave OpenAI a huge advantage as it accumulated a large base of consumers as well as more investment for its for-profit branch. Interestingly enough, Google, a tech company with a very stable cloud compute system and massive financial and human capital resources, didn’t introduce a chatbot to challenge ChatGPT until early 2023, months after the platform’s explosive debut. Google Bard was intended to rival ChatGPT, yet it fell short for many reasons. First of all, Google was still reeling from a critical flaw in the way it produced AI products. While, as previously mentioned, many AI companies prioritized research, Google’s two AI research divisions (Google Brain and Google DeepMind) worked separately on many projects that didn’t have a defined pipeline from R&D to commercialization. The ChatGPT team, on the other hand, already had a robust framework for pushing newer, more capable models to users the moment they were ready. It’s not an exaggeration, therefore, to say that although Google’s divisions both produced groundbreaking research (Google Brain was responsible for developing the Transformer architecture, the backbone of LLMs like ChatGPT; Google DeepMind eventually shared a Nobel Prize for AlphaFold’s protein structure predictions), the top brass at Google failed to implement policies that would allow Google (and the public, for that matter) to benefit from these discoveries.
Furthermore, the divide between the company’s top AI research teams meant that compute and human effort had to be distributed across many different projects, while OpenAI, even though it still maintained other services like DALL-E and Codex at the time, poured most of its resources into its flagship product, proving the effectiveness of a focused bet.
Google’s inaction does not change the fact that it is one of the most powerful tech companies in the world. Apparently alarmed by OpenAI’s rapid success (and probably by the fact that Microsoft backs the emergent AI startup), Google made several changes to streamline its production processes. To address the two points discussed above, Google officially merged Google Brain and DeepMind into a single division, Google DeepMind, in April 2023, unifying its research efforts under one roof with a clearer path from lab to product.
Unlike Google, OpenAI had always publicly maintained its vision of building the world’s first AGI model. As the saying goes, “with great power comes great responsibility”. The creation of AGI has slowly transformed from a lofty goal into a storm of alignment and safety concerns. To partially remedy the immense dangers of unrestrained AI development, OpenAI established the Superalignment team in 2023, a division dedicated to steering and controlling AI systems more capable than their creators.
This was not the only example of how conflict within OpenAI led many of the lab’s researchers to move on. Although disagreements over alignment peaked with the high-profile mass resignation of the Superalignment team in 2024, many of the basic ideas about the careful deployment and commercialization of advanced AI models had already begun to circulate within the company. In 2021, at a point when OpenAI suddenly ramped up the commercialization of GPT-3 through a series of fast-paced releases, several top researchers (including Dario Amodei, former VP of Research at OpenAI) left in protest, going on to found Anthropic.
Onset of Specialization
Late May has seen a flurry of action among what I call the Big Three (Anthropic, Google DeepMind, and OpenAI) hinting at the rise of specialization in the climate of AI development. As previously mentioned, a commonly accepted trend was to create a general-use LLM that could perform many tasks at once. As the capabilities of such an LLM increased, the reasoning went, it would theoretically approach the AGI singularity that many dreamt of achieving. My counterargument: it’s called a singularity for a reason. An accomplishment as momentous as the literal creation of Artificial General Intelligence will likely not come from the evolution of current Transformer-based architectures, which lack many of the “building blocks” of human intelligence (think: kinesthesis, latent and apparent memory). Instead, I believe that this singularity may arise from human research accelerated by agentic AI tools like AlphaEvolve, although that topic deserves a thorough discussion in a future article.
As this longstanding status quo of AI development played out, however, it became apparent that an all-out race to maintain the #1 spot in LLM rankings is becoming untenable for anyone but Google, the Big Three member with the largest talent pool, the most sophisticated network of compute centers, and the most data. The tech giant, long grown past its days of Google Bard, now boasts the state of the art in general multimodal LLMs with Gemini 2.5, an iteration unveiled at the 2025 Google I/O conference along with a series of impressive models, including the new SoTA few-shot video generator (Veo 3) and real-time, pattern-of-speech-preserving video translation tools, both of which will probably become built into Gemini’s product ecosystem.
Google has the resources to pursue side projects while maintaining its brisk lead in the AI race, and OpenAI has recently opened up a path for itself to go down if it wants to. In contrast to these two, Anthropic has established itself by optimizing Claude, its LLM, for coding, recently setting a new SoTA with Claude Opus 4, although its lead on benchmarking tests is modest. In addition, Anthropic is notably the world’s biggest AI Ethics research organization, maintaining divisions studying AI safety as well as the societal impacts of new technologies. In line with this characterization, it portrays its mission as “Winning the AI race without losing [its] soul”, a somewhat bold tagline that antagonizes Google and OpenAI for leaning into a multitude of alleged data privacy and labor rights violations, two huge caveats of AI development that, so far, have not dissuaded any company from partaking in the sheer hype of the Race.
In a sense, these specializations in AI development objectives would mean that in the near future, top AI companies will cease being direct competitors as the natural effects of product differentiation kick in. One could argue that if Anthropic focuses on agentic coding models and Google on chatbots, the search for AGI could theoretically be sped up as redundant resource use is eliminated. Yet I contend that specialization would actually be detrimental to the broader evolution of AI for a few reasons:
- Competition breeds innovation. Even if consolidation frees up resources, a lab without rivals has little impetus to innovate, and advancements would slow as a result. If Google already held an extremely stable monopoly, it would have no incentive to spend billions on additional R&D to upgrade a product already in high demand.
- Companies do not share all discoveries. Even assuming that all companies kept investing in further development despite the emergence of specialization, AI companies have historically been reluctant to share the specifics of their models (even lobbying against proposed government legislation requiring third-party review of proprietary model information), which would practically negate any potential benefit from a “you do this, I do that” model of AI development. True open-source AI remains an ideal for now.
- Specialized models overlap heavily. For example, as video generation models grow more complex, they require larger transformer models trained on extensive data, including the parsing of textual prompts and the kind of inherent reasoning found in traditional LLMs (a well-trained video generator would inherently assume that a “wood board boat” refers to a boat made out of wooden boards rather than a personified log boarding a ship).
This article is brought to you by Our AI, a student-founded and student-led AI Ethics organization seeking to diversify perspectives in AI beyond what is typically discussed in modern media. If you enjoyed this article, please check out our monthly publications and exclusive articles at https://www.our-ai.org/ai-nexus/read!
…Consequences
Back to the question: why is the emergence of specialization in AI products a pivotal moment for AI development? The answer, in my opinion, is that this moment has the potential to change the landscape of AI development by directly deciding what the leading labs will do in the near future. It involves scenarios that I can only speculate about, due to my lack of connections in Silicon Valley, but they have everything to do with standard game theory. Let’s assume, hypothetically, that Google and Anthropic both drop out of the AI race by abandoning the development of their respective AI models, Gemini and Claude. Although competitors may rise up to take their place, it’s unlikely that they would be able to trump OpenAI, which would almost certainly become a massive monopoly overnight, absorbing the two other companies’ talent almost instantly. Run the same scenario for the other two companies, and a pattern emerges: if two fall, the one remaining company becomes dominant on its own. However, apply this scenario to an AI company of lesser scale (say, Mistral or Meta), and you may conclude that it’s far less likely for that company to become an AI superpower, since it would be more susceptible to market disruptions.
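The winner-take-all dynamic described above can be made concrete with a toy game-theory sketch. The payoff numbers below are entirely hypothetical (illustrative units, not real market data): each of three labs chooses to stay in or exit the race, stayers pay an R&D cost, and a sole survivor captures a monopoly prize far larger than any shared market.

```python
from itertools import product

# Hypothetical toy model of the winner-take-all exit game described above.
# All numbers are illustrative assumptions, not real figures.
LABS = ["Google", "OpenAI", "Anthropic"]
STAY_COST = 10        # assumed R&D burn for staying in the race
SHARED_PRIZE = 45     # assumed market value split among all labs that stay
MONOPOLY_PRIZE = 100  # assumed value captured if only one lab remains

def payoff(choices):
    """Return each lab's payoff for a profile like ["stay", "exit", "stay"]."""
    stayers = [i for i, c in enumerate(choices) if c == "stay"]
    payoffs = [0] * len(choices)  # exiting yields nothing
    for i in stayers:
        prize = MONOPOLY_PRIZE if len(stayers) == 1 else SHARED_PRIZE / len(stayers)
        payoffs[i] = prize - STAY_COST
    return payoffs

# Enumerate all strategy profiles to see the payoff structure.
for profile in product(["stay", "exit"], repeat=3):
    print(dict(zip(LABS, profile)), "->", payoff(list(profile)))
```

Under these assumed payoffs, a lab left as the sole stayer earns 90 while a three-way split earns each stayer only 5, so no lab wants to be the one that exits while a rival remains: the equilibrium is everyone staying in, which matches the sunk-cost dynamic discussed in the conclusion below.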
Why exactly is this the case? As previously mentioned, the quest for AGI differs from other evolutions in product development because this is the first instance (that I know of) in which a prototype product can directly assist in the development of its successor. This fact will doubtlessly become more pronounced with the rise of agentic AI, but we can already see a salient example in Google’s AlphaEvolve, a reasoning-and-coding model that has reportedly improved the efficiency of DeepMind’s own capacity for producing and training AI.
Considering all of these factors, it is unlikely that any member of the Triumvirate of AI development will stop in its tracks anytime soon—all three have sunk deep into layers upon layers of venture capital, brand advertisement, and, most importantly, societal expectations. With hundreds of billions of dollars in funding pouring into these massive endeavours, let’s hope that something good comes out of them.