Over the past year, tech companies invested hundreds of billions of dollars in the new data centers needed to power rapidly increasing demand for AI. The investment is motivated in part by confidence that major AI labs such as those at OpenAI, Anthropic, and Google will continue to wring more intelligence out of their models. Indeed, fears have receded that the labs’ go-to strategy of supersizing models, training data, and computing power was no longer yielding large leaps in intelligence. Instead, the cadence of bigger and better models has accelerated, in part because AI coding tools are playing an increasing role in building new models.
That’s certainly true at Anthropic, which says that 70% to 90% of its new code is now written by its breakthrough coding agent, Claude Code. The tool, which generates and tests software based on natural language prompts, was originally meant for internal use by Anthropic engineers, but the company decided to release it as a commercial product in May 2025. In just six months, Claude Code became a moneymaker, reaching a $1 billion revenue run rate.
Another reason for the acceleration in model releases was the arrival of Google at the front of the race. Its Gemini 3 family of models smoked competing LLMs on a number of industry benchmark tests, putting other AI labs on alert. The Gemini 3 models became the engine for many Google services, such as AI search and ads, and gave a boost to the company’s cloud business as well as to its Gemini chatbot.
Other AI companies are specializing, honing their models for narrower use cases and skill sets. Hume AI, for example, has focused on emotional intelligence; its newest models are surprisingly good at both listening for a wide range of emotions in the human voice (say, a customer support caller) and generating voices that convey a range of emotions. World Labs, cofounded by AI pioneer Fei-Fei Li, has focused on models that understand the world very differently from large language models. The company has launched Marble, a “world model” capable of processing physical and spatial data in order to generate realistic world simulations that can be used to train self-driving cars or guide the movements of robots.
1. Google
For creating an LLM that’s suitable for powering agents
With the release of its Gemini 3 family of multimodal AI models, Google cemented its position as a dominant—and still rising—force in AI. The new models, which were developed by the company’s primary AI lab, Google DeepMind, and began deployment in November 2025, were meant to unify the multimodal, reasoning, and agentic properties introduced in the Gemini 1 and 2 models. They’re among the first to be trained from the ground up to process and understand images, video, audio, and code, not just text.
The Gemini 3 models also offer the reasoning, planning, and ability to use tools (such as web search) needed to power AI agents. Gemini 3 now provides the brain for a number of Google’s core consumer-facing products, including the Gemini chatbot app, which now has more than 750 million monthly active users, and the AI Overviews in Google Search, which Google says now reach more than 2 billion users monthly.
On the enterprise side, usage of Gemini 3 and other Google cloud models by independent developers and companies reportedly surged in 2025. Google says that Gemini Enterprise, a platform for enterprise search, AI assistants, and agents, has grown to 8 million paid seats. With a wealth of AI talent and a plethora of training data at its disposal, such as YouTube videos, Google is likely to seriously challenge OpenAI, Anthropic, and xAI for frontier model dominance well into the future.
