In his Nvidia GTC keynote this week, Chief Executive Jensen Huang spoke broadly about artificial intelligence in the context of three vectors:
- AI in the cloud;
- AI in data centers (on-premises);
- AI for robotics (edge).
What these three pillars have in common is that data lives in each. Data has gravity and is expensive to move, and speed-of-light dynamics mean the most efficient way to process information is to bring compute to the data. In essence, the three opportunities Jensen described can be thought of in terms of data locality. In other words, wherever the data lives, that’s where AI will go.
It was fitting that Jensen referred to last year’s GTC as the “Woodstock of AI” and called this year’s event the “Super Bowl of AI,” echoing how John Furrier described GTC earlier this year. Why is this significant? Because Woodstock was a cultural Big Bang, and as I said after GTC 2024, that event was in my view the most important computing conference in the history of such gatherings. And like the Super Bowl, GTC is a big event that will happen year after year; 2024 got it all restarted after a COVID hiatus.
Transition to a new computing era
In a recent Breaking Analysis, David Floyer and I wrote the following:
We are witnessing the rise of a completely new computing era. Within the next decade, a trillion-dollar-plus data center business is poised for transformation, powered by what we refer to as extreme parallel processing or EPP — or as some prefer to call it, accelerated computing. While artificial intelligence is the primary accelerant, the effects ripple across the entire technology stack.
In that post we shared that annual data center spending, which had been stuck in the low $200 billion range, with low-single-digit growth, suddenly exploded from $220 billion in 2023 to $350 billion in 2024, growing 63%, up from 5% the previous year. What’s even more astounding is that traditional workloads, commonly powered by x86 architectures, declined in 2024. But AI workloads grew from $43 billion to $180 billion, a whopping 319% annual growth rate.
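For readers who want to check the math, here is a quick sketch of the year-over-year growth calculations behind those figures. The dollar amounts are the rounded estimates cited above, so the computed percentages can differ slightly from the quoted ones.

```python
# A minimal sketch of the year-over-year growth math cited above.
# Dollar figures are the rounded estimates from the Breaking Analysis post;
# small differences versus the quoted percentages reflect that rounding.

def yoy_growth(prior: float, current: float) -> float:
    """Return year-over-year growth as a percentage."""
    return (current - prior) / prior * 100

# Total data center spending (rounded, $B)
total_2023, total_2024 = 220, 350
print(f"Total data center growth: {yoy_growth(total_2023, total_2024):.0f}%")  # ~59% on rounded inputs

# AI workload spending ($B)
ai_2023, ai_2024 = 43, 180
print(f"AI workload growth: {yoy_growth(ai_2023, ai_2024):.0f}%")  # ~319%
```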
Floyer and I are both traveling and we spoke about how best to analyze the significance of GTC 2025. Posts such as the one John Furrier put out after GTC provide a great overview of the keynote and we feel it’s important to build on top of these and not repeat such analyses. So we landed on a new approach to analyzing Jensen’s keynote.
In times of market transition, like the one we’re in now, it can be confusing to predict the next best move. Watching the stock market day-to-day only makes it more confusing, as we saw with the reaction to DeepSeek’s inexpensive AI model earlier this year.
Moreover, in transitions such as this one, there’s a rush to innovate with imperfect data. The failure rate for projects will invariably be higher. As the appetite for failure wanes, executives become impatient, and capital allocation among 1) Run the Business (RTB), 2) Grow the Business (GTB) and 3) Transform the Business (TTB) initiatives becomes trickier.
When this happens, we see volatility, market confusion, panic and opportunity. When markets transition, for example from traditional workloads to AI workloads, every part of the tech stack transitions with it. Those companies with architectures, portfolios and go-to-market motions aligned with the new conditions, or those that can pivot quickly, invariably prosper. But the new markets, while growing fast, often aren’t yet large enough to offset the decline in older markets, creating even more confusion, and the Innovator’s Dilemma kicks in.
A novel methodology to forecast technology adoption
Floyer has been working on a new method to predict market transitions and technology adoption. The method evaluates the case for a new technology along three Vs: 1) the underlying value the technology can bring; 2) the volume, or scale, it can reach; and 3) the velocity, or rate, of its adoption.
By observing past markets, making assumptions about how transitions occur in tech, and analyzing scenarios using ogives (S-shaped adoption curves), we believe a more accurate long-term opportunity analysis can be derived. We decided to skip this week’s Breaking Analysis and use the time to apply Jensen’s three vectors of growth to frame the opportunity.
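To illustrate the kind of scenario math involved, here is a minimal sketch, not theCUBE Research’s actual model, of how an ogive (logistic S-curve) can translate assumptions about value, volume and velocity into an adoption forecast. All parameter names and figures below are hypothetical placeholders.

```python
import math

def ogive_adoption(year: float, midpoint: float, velocity: float) -> float:
    """Logistic ogive: fraction of the addressable market adopted by a given year.

    midpoint -- the year at which adoption reaches 50%
    velocity -- steepness of the S-curve (higher means faster adoption)
    """
    return 1.0 / (1.0 + math.exp(-velocity * (year - midpoint)))

def revenue_scenario(year: int,
                     volume_tam_b: float,    # volume: total addressable market, $B
                     value_capture: float,   # value: share of the TAM the technology can capture
                     midpoint: float,
                     velocity: float) -> float:
    """Hypothetical revenue ($B) for one scenario in a given year."""
    return volume_tam_b * value_capture * ogive_adoption(year, midpoint, velocity)

# Illustrative scenario only -- these numbers are placeholders, not a forecast.
for yr in range(2024, 2033):
    est = revenue_scenario(yr, volume_tam_b=1000, value_capture=0.5,
                           midpoint=2028, velocity=0.8)
    print(yr, f"${est:,.0f}B")
```

The point of the sketch is that the three Vs map cleanly onto the curve’s parameters: volume sets the ceiling, value sets the capturable share, and velocity sets how quickly the S-curve bends toward that ceiling.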
Next week, we’ll break down our total data center forecast and separate out the cloud and on-prem opportunities. We’ll also begin to frame the third vector, which we broadly see as AI inference at the edge of networks, and specifically the robotics opportunity, which we’ll try to quantify in the future.
We feel that quantifying the market, which we all believe is massive, will allow us to better forecast the transition to accelerated workloads, and we look forward to your feedback.
Photo: Robert Hof/ News
Disclaimer: All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by News Media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.
Disclosure: Many of the companies cited in our analyses are sponsors of theCUBE and/or clients of theCUBE Research. None of these firms or other companies have any editorial control over or advanced viewing of what’s published.