For a while, AI felt like a cheat code. Mention AI on an earnings call, announce a bigger data center plan, sign a flashy partnership, and the market filled in the rest. Spend meant ambition, and ambition meant valuation.
That world is gone.
Over the past few quarters, markets have quietly flipped from “reward any AI headline” to “show me the economics.” Not because AI stopped mattering, but because it started costing real money. Annual AI-related capex is now pushing past $600 billion, and investors are no longer debating whether AI is strategic. They are debating whether companies are overfunding it relative to their ability to turn spend into cash.
That shift does not just affect public stocks. It changes how AI companies should be built, financed and exited.
The early signs are already here
Look across Microsoft, Oracle and even the Nvidia–OpenAI relationship, and you see the same pattern repeating. First come massive commitments: huge infrastructure plans to build capacity well ahead of proven demand. Then comes the uncomfortable question: Are we spending because this makes economic sense or because we fear not to?
Hyperscaler capex for the “Big Five” (Alphabet, Apple, Meta, Amazon and Microsoft) is projected to reach around $600 billion in 2026, up roughly 36% year on year, with about 75% tied directly to AI infrastructure, spending that is increasingly funded by debt.
That raises the question: Will these investments be converted into durable cash flows?
Microsoft’s recent earnings brought this tension into focus. Capital expenditures jumped roughly two-thirds year on year, exceeding $37 billion in a single quarter, while Azure growth slowed and AI capacity constraints limited upside. The stock fell sharply, losing 21% over the past six months, wiping out hundreds of billions in market value.
Oracle faces a different version of the same issue. Demand for AI cloud infrastructure is real. Cloud revenue is growing around 50% year on year, and GPU-related revenue is surging. But Oracle plans more than $50 billion in capex for fiscal 2026 and expects to raise $45 billion to $50 billion through new debt and equity on top of an already leveraged balance sheet.
Even Nvidia and OpenAI are not immune. The widely publicized talk of a $100 billion Nvidia-backed infrastructure commitment has died down, with Nvidia clarifying that no firm commitment was ever made. At the same time, OpenAI has been actively diversifying suppliers, exploring AMD, Cerebras Systems and others, to reduce over-concentration risk.
If the market is questioning AI overfunding at Microsoft, Oracle and the very center of the AI ecosystem, no one else gets a free pass.
What founders should take from this
For founders building AI companies with exits in mind, the implications are immediate.
- First, your product cannot be a capex sink. Acquirers want assets that make existing AI spend more productive. Lower cost per inference, better GPU utilization, faster deployment or higher revenue per dollar of compute will soon sit alongside traditional SaaS unit economics as the metrics that matter.
- Second, flexibility matters. The Nvidia-OpenAI wobble is a warning. Multi-cloud, multi-model and multi-chip architectures reduce buyer risk and make deals easier to approve internally.
- Third, run your company as if public-market skeptics are already in the room. That means clean unit economics after infrastructure costs, sustainable growth and KPIs that hold up in the public markets, because public companies will be your most likely acquirers.
Itay Sagie is a strategic adviser to tech companies and investors, specializing in strategy, growth and M&A, a guest contributor to Crunchbase News, and a seasoned lecturer. Learn more about his advisory services, lectures and courses at SagieCapital.com. Connect with him on LinkedIn for further insights and discussions.
