Orbital data centers could solve AI’s biggest constraints – and create a trillion-dollar opportunity
In 1900, if you wanted to power a factory, you built your own generator.
You hired engineers to design it and bought fuel to feed it. You built your entire operation around the assumption that power was something you produced yourself, on-site, because there was no other option.
Then the grid arrived. The constraint – not the technology, the capital, or the ambition – disappeared overnight. What followed was an entirely new industrial era.
AI is living in its ‘generator moment’ right now. And the grid that’s coming isn’t on the ground.
It’s in orbit.
Orbital compute – AI data centers launched into low Earth orbit, powered by limitless solar energy, cooled by the void of space, beaming processed results back to Earth via laser link – is already being built. And the investment opportunity taking shape around it may be the largest we’ve seen in a generation.
What’s driving it comes down to a constraint the market hasn’t fully priced in yet.
The Real Bottleneck: Land, Power, and Water
The numbers are staggering, and they’re not the problem.
Gartner put 2025 AI infrastructure spending at $965 billion and sees it growing to $1.37 trillion in 2026. McKinsey estimates $7 trillion in total data center investment will be needed by 2030 just to keep pace with projected demand.
Contrary to what those astronomical numbers might suggest, the bottleneck here isn’t capital, or even hardware supply. These companies have the money, and securing the chips isn’t a problem.
The actual binding constraint is something far more intractable: Land. Water. Power.
Prime real estate for data centers – places with cheap power, cool climates, fast fiber – is being consumed.
Water rights for cooling are drawing regulatory scrutiny and community opposition across the American West.
And grid utilities can’t connect new data centers fast enough. Interconnection queues now stretch three to five years in the U.S. Bloomberg estimates that nearly half of all AI data center projects in the U.S. will be delayed this year due to power constraints.
Microsoft (MSFT), Alphabet (GOOGL), and Amazon (AMZN) have all found themselves in the absurd position of having the chips and the capital, yet not being able to get a power connection in time.
These are resource problems.
The solution has to come from outside Earth’s resource constraints entirely – which is precisely why the tech world is now looking to space for answers.
Why Orbital Data Centers Solve the Problem
Space has three things Earth’s data centers are running out of: energy, cooling capacity, and proximity to where the data originates.
Solar panels in low Earth orbit receive 1,400 watts per square meter of raw energy when in direct sunlight. At Earth’s surface – with ideal conditions – peak sunlight reaches roughly 1,000 W/m², but atmospheric losses, day-night cycles, and weather cut the delivered average to a fraction of that.
Solar panels in space also turn that stronger sunlight into a much steadier stream of power. In low Earth orbit, high-efficiency arrays can produce around 400 watts per square meter in direct sunlight and still average north of 200 watts over time thanks to near-constant exposure. Back on Earth, even top-tier solar farms usually average just 20 to 60 watts per square meter once nightfall, clouds, and shifting sun angles take their toll. That combination of higher intensity and near-continuous exposure gives space-based solar a structural advantage no terrestrial installation can match.
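The gap can be made concrete with the figures above. A quick back-of-the-envelope comparison, using the round numbers cited in this article (the terrestrial figure takes the midpoint of the 20–60 W/m² range; this is illustrative, not a design study):

```python
# Back-of-the-envelope solar power density comparison,
# using the round figures cited above (illustrative only).

ORBITAL_PEAK_W_M2 = 400     # high-efficiency array output in direct sunlight (LEO)
ORBITAL_AVG_W_M2 = 200      # time-averaged output with near-constant exposure
TERRESTRIAL_AVG_W_M2 = 40   # midpoint of the 20-60 W/m^2 range for top-tier farms

advantage = ORBITAL_AVG_W_M2 / TERRESTRIAL_AVG_W_M2
print(f"Orbital average advantage: ~{advantage:.0f}x per square meter")
# → Orbital average advantage: ~5x per square meter
```

Even with conservative inputs, the time-averaged gap is roughly 5x per square meter – before counting the absence of weather and seasonal variation.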
The energy advantage is only half the story above the atmosphere.
Space is a near-perfect vacuum with a background temperature just a few degrees above absolute zero. That means heat can be dumped straight off GPUs and into the void through large radiator panels; no fans, water, or intensive cooling infrastructure required. Meanwhile, terrestrial data centers burn through billions of gallons of water each year just to keep systems from overheating. It’s a fundamentally different setup in orbit – one that becomes more important as water constraints tighten and cooling turns into a real bottleneck on Earth.
Then there’s an underappreciated third advantage – one that may become commercially significant faster than anyone expects: data proximity.
A staggering volume of AI workloads analyze satellite imagery, defense surveillance data, weather telemetry, and maritime tracking information – all of which originates in space. Today that data is sent down to Earth and processed in ground-based data centers before results are distributed. The downlink is a bandwidth bottleneck. Move compute closer to the source, and you cut that constraint dramatically. For an Earth observation satellite, it’s the difference between transmitting 100 gigabytes of raw imagery and just 1 megabyte of actionable intelligence. That advantage is available today, with current technology, at current launch costs.
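The scale of that bandwidth saving is easy to sketch. The 100 GB and 1 MB figures come from the example above; the 1 Gbps downlink rate is an assumed round number for illustration, not a spec:

```python
# Illustrative downlink comparison for an Earth-observation pass,
# using the 100 GB raw vs 1 MB processed figures cited above.
# The 1 Gbps link rate is an assumed round number, not a real spec.

RAW_BYTES = 100 * 10**9        # 100 GB of raw imagery
PROCESSED_BYTES = 1 * 10**6    # 1 MB of actionable intelligence
LINK_BPS = 1 * 10**9           # assumed 1 Gbps downlink

raw_seconds = RAW_BYTES * 8 / LINK_BPS
processed_seconds = PROCESSED_BYTES * 8 / LINK_BPS
print(f"Raw downlink: ~{raw_seconds / 60:.0f} min; processed: ~{processed_seconds * 1000:.0f} ms")
# → Raw downlink: ~13 min; processed: ~8 ms
```

Under those assumptions, on-orbit processing turns a multi-minute downlink into milliseconds – a 100,000x reduction in data moved.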
Orbital Compute Is Already Here
This might sound fascinating but distant – years away from mattering to a portfolio.
Consider what has happened in just the last six months.
- November 2025: Starcloud launches a satellite containing an Nvidia (NVDA) H100 GPU – the first chip of that class ever sent into orbit.
- December 2025: That satellite trains NanoGPT on the complete works of Shakespeare – the first time a language model has been trained in space.
- January 2026: SpaceX files with the FCC for authorization to launch up to 1 million satellites as orbital AI data centers. Not a press release – a regulatory filing.
- February 2026: SpaceX acquires xAI in the largest private merger to date, valuing the combined entity at $1.25 trillion. Orbital compute is the stated rationale.
- March 2026: Nvidia announces the Vera Rubin Space-1 module at its GPU Technology Conference. The module is a chip platform purpose-built for orbital data centers. During his keynote, Jensen Huang said: “Space computing, the final frontier, has arrived.”
- April 2026: SpaceX confidentially files with the SEC for an IPO targeting a $1.75 trillion valuation – the largest in market history – with orbital compute as the central thesis.
The Economics of Space-Based AI Infrastructure
Now for the numbers – because this is where most of the skepticism lives.
Today, in a state-of-the-art terrestrial data center, running one H100-equivalent GPU for one hour costs roughly $1.00. Hardware amortization, electricity, cooling, facility costs, network, labor – all in, about a dollar. That’s what the hyperscalers pay before they mark it up for cloud customers.
In orbit today, that same GPU-hour costs approximately $142 – a 142x premium.
But that $142 breaks down in a very specific way: $85.62 of it is pure launch cost. That’s 60 cents of every orbital dollar going toward getting hardware off the ground. The “free” solar energy of space costs $22.83 per hour to access because you have to deploy the solar panels in orbit first. In other words, the energy itself is free; the infrastructure to harvest it is not.
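That breakdown is worth tabulating explicitly. A minimal sketch using only the figures above (the “other” bucket is simply whatever remains after the two itemized components):

```python
# Breakdown of the ~$142 orbital GPU-hour cited above (illustrative).
# "Other" is the remainder after the two itemized components.

TOTAL = 142.00
LAUNCH = 85.62         # pure launch cost share
SOLAR_INFRA = 22.83    # cost of deploying the orbital solar arrays
OTHER = TOTAL - LAUNCH - SOLAR_INFRA

for name, cost in [("launch", LAUNCH), ("solar infra", SOLAR_INFRA), ("other", OTHER)]:
    print(f"{name:>12}: ${cost:6.2f} ({cost / TOTAL:.0%})")
# Launch alone is ~60% of the orbital dollar, which is why the
# whole thesis hinges on $/kg to low Earth orbit.
```

Roughly 60% launch, 16% solar infrastructure, and the balance in hardware, operations, and everything else.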
The entire orbital compute thesis is a bet on one number declining: dollars per kilogram to low Earth orbit.
Starship – SpaceX’s next-generation reusable rocket – is designed to bring that cost from ~$3,000/kg today to somewhere in the $50-100/kg range at scale. Google’s own engineers published a feasibility study concluding that at $200/kg, orbital compute becomes cost-competitive with terrestrial. The trajectory of launch cost reduction is real. Costs have fallen roughly 10x in 15 years.
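For scale, the historical trend alone would get there slowly. Extrapolating the ~10x-in-15-years decline from ~$3,000/kg toward Google’s $200/kg threshold (the starting year is an assumption for illustration):

```python
# Extrapolating the historical launch-cost trend cited above:
# roughly a 10x decline over 15 years, starting from ~$3,000/kg.
# Starting year and threshold are illustrative assumptions.

cost = 3000.0                               # $/kg to LEO today
annual_decline = 1 - (1 / 10) ** (1 / 15)   # ~14%/yr, implied by 10x in 15 yrs

year = 2026
while cost > 200:                           # Google's cited crossover threshold
    cost *= (1 - annual_decline)
    year += 1
print(f"At the historical rate, ~$200/kg arrives around {year}")
# → At the historical rate, ~$200/kg arrives around 2044
```

That is precisely why the thesis leans on Starship as a step change rather than on trend extrapolation: reusability at scale is the bet that compresses that timeline.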
The question is whether Starship crosses the final threshold.
Our analysis suggests the economic crossover happens around 2038. At that point, orbital compute flips from being the premium option to being the low-cost option for AI inference.
How? Orbital compute costs are technology-bound: subject to Wright’s Law and learning curves. They reliably fall. Terrestrial compute costs are increasingly resource-bound: subject to physical scarcity in power, land, and water. They reliably rise. Both forces are already in motion. Neither is reversing. The lines cross around 2038 – and then they keep diverging.
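The crossover logic above can be expressed as a toy model: orbital cost falls on a learning curve while terrestrial cost rises with resource scarcity. The two rates below are illustrative assumptions chosen to land near the article’s ~2038 base case, not a calibrated forecast:

```python
# Toy crossover model: orbital GPU-hour cost falls on a learning curve,
# terrestrial cost rises with resource scarcity. Both rates are
# illustrative assumptions, not calibrated forecasts.

orbital = 142.0          # $/GPU-hour in orbit today
terrestrial = 1.0        # $/GPU-hour on the ground today
ORBITAL_DECLINE = 0.33   # assumed annual decline (Wright's Law proxy)
TERRESTRIAL_RISE = 0.04  # assumed annual rise from power/land/water scarcity

year = 2026
while orbital > terrestrial:
    orbital *= (1 - ORBITAL_DECLINE)
    terrestrial *= (1 + TERRESTRIAL_RISE)
    year += 1
print(f"Crossover year under these assumptions: {year}")
# → Crossover year under these assumptions: 2038
```

The exact year is sensitive to the assumed rates; the structural point is that one curve falls and the other rises, so a crossing is a question of when, not if.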
Who Adopts Orbital Data Centers First
If orbital compute is still more expensive than terrestrial through the early 2030s, how does this market scale before the economics fully flip?
The answer comes down to urgency. Several buyer groups already have strong reasons to move early, even without perfect cost alignment.
Defense and sovereign systems. Next-generation missile defense and space-based sensing programs already run into the hundreds of billions. These architectures depend on real-time data processing, and pushing everything through ground infrastructure introduces latency and operational constraints that don’t always work. In certain scenarios, keeping compute in orbit becomes a practical requirement.
Power-constrained regions. Energy costs in Europe and parts of Asia remain structurally higher than in the U.S., putting pressure on large-scale compute economics. That dynamic shifts the timeline, making alternative energy models more attractive sooner for some operators.
Carbon pressure. Hyperscalers have made aggressive decarbonization commitments, while policy frameworks continue tightening. As carbon accounting evolves and clean energy becomes more contested, access to reliable power starts to shape infrastructure decisions in a more direct way.
Strategic positioning. Infrastructure transitions tend to reward early movers. Companies that built cloud capacity ahead of demand ended up owning the stack. A similar dynamic could play out here, where establishing orbital capacity early provides long-term leverage.
Space-native workloads. A growing set of applications – real-time Earth observation, autonomous satellite operations, on-orbit systems – benefits from having compute close to the source. In some cases, performance drops off meaningfully without it.
The economics don’t have to be perfect for a real market to form. Strategic necessity has a way of moving faster than spreadsheets.
The SpaceX IPO: A Catalyst for Orbital Compute
The SpaceX IPO, confidentially filed April 1, 2026 – and targeting a $1.75 trillion valuation and up to $75 billion in proceeds – is a structural market catalyst with five distinct acceleration mechanisms.
First, it crystallizes the narrative. Right now, orbital compute exists as disconnected signals scattered across market coverage. The S-1 will force every institutional investor and every corporate strategy team at Microsoft, Google, Amazon, and Meta (META) to form a unified view. Is orbital compute going to restructure the AI industry? Am I positioned, or am I behind?
Second, it injects capital. The potential $75 billion raised will go directly into Starship scale-up and orbital data center deployment, accelerating the very buildout the thesis depends on.
Third – and most consequentially – it forces hyperscaler hands. SpaceX/xAI is not a neutral infrastructure provider. It runs Grok, which competes directly with OpenAI’s latest models. It’s building orbital compute infrastructure with a vertically integrated AI competitor at the center. Microsoft cannot cede the infrastructure layer on which OpenAI depends to Elon Musk. Google cannot let a Grok competitor own the substrate of AI compute. Amazon cannot let a rival define the next generation of cloud infrastructure. They will all respond. And their responses will accelerate each other.
Fourth, competitive hyperscaler entry creates launch demand that exceeds SpaceX’s alone, which funds launch competitors – Rocket Lab (RKLB), Blue Origin, United Launch Alliance – to scale. Launch supply competition is the most direct mechanism to accelerate the $/kg cost curve.
Fifth: all of this together could pull the crossover date from 2038 to 2036. Our base case may prove conservative. The bull case is already in motion.
How to Invest in Orbital Data Centers
The best returns of the terrestrial AI infrastructure boom came not from the hyperscalers themselves but from the picks-and-shovels layer.
Nvidia on chips. Vertiv (VRT) and Eaton (ETN) on power. Coherent (COHR) on optics.
The orbital compute stack has direct analogs to every one of those layers. Here’s where we’re focused.
Pre-IPO Exposure Strategies
The Tema Space Innovators ETF (NASA), launched March 31, 2026, is the cleanest instrument, currently offering 12% direct SpaceX exposure via a Forge/Schwab SPV alongside RKLB, Planet Labs (PL), and AST SpaceMobile (ASTS). SpaceX accounts for around 16% of Destiny Tech100’s (DXYZ) portfolio and roughly 10% of the ERShares Private-Public Crossover ETF (XOVR). Own these now. Trim aggressively when SpaceX lists.
Launch Infrastructure Winners
Rocket Lab is the “Nvidia of 2022” for this theme – early, overlooked, and deeply embedded in the infrastructure layer before the narrative caught up. It boasts an $805 million SDA contract, $1 billion-plus backlog, and vertical integration across rockets, satellite buses, and components. Every orbital data center satellite that goes up is revenue for RKLB.
AI Chip Leaders
Alongside Nvidia with its Space-1 platform, we like Microchip Technology (MCHP) – one of the most underappreciated names in the stack. It’s the dominant producer of radiation-hardened field-programmable gate arrays (FPGAs) that every satellite needs and is growing space revenue ~40% annually with no one covering it as an orbital compute play.
The Networking and Memory Layer
Broadcom (AVGO) and Marvell (MRVL) provide the networking application-specific integrated circuits (ASICs) and electro-optical components that distributed orbital compute clusters require. Micron (MU) supplies the high bandwidth memory (HBM) that every AI GPU in orbit needs, extending the HBM supercycle with additive demand.
Speculative Orbital Hardware Plays
Redwire (RDW) makes the ROSA solar arrays deployed on the International Space Station – the exact technology scaling to orbital data centers. Planet Labs is the first major commercial customer and a potential operator through its Google Suncatcher partnership.
Long-Duration Compounders
Alphabet is the most operationally advanced hyperscaler in this space, with Project Suncatcher targeting first prototypes in 2027, making it the highest-quality way to own the hyperscaler response to orbital compute at scale.
Palantir (PLTR) rounds out the software layer. Its AIP platform is already the operating intelligence layer for defense workloads – the same workloads that, as orbital compute scales, will increasingly run on space-based infrastructure. Palantir processes what satellites see. That positioning gets more valuable as the orbital stack matures.
The Risk: Space Cycles Always Reprice
Before you move, know the history.
Every significant space investment cycle has ended in violent repricing. Iridium. Globalstar. Virgin Galactic’s SPAC arc from $1,100 to $267. Capital markets cannot distinguish “thesis correct, timing uncertain” from “thesis wrong.” They price the bull case when the narrative is hot, and they reprice hard when execution disappoints – regardless of whether the long-term story has been invalidated.
The SpaceX IPO will price in the 2035 bull case on day one. Everything in the supply chain will spike. And then, 12 to 18 months later, the first execution disappointment will arrive – a Starship delay, an FCC ruling, a radiation-degradation result. None of it will kill the thesis. All of it will reprice the sector.
So the playbook writes itself: build positions now, in the pre-IPO window. Trim at the spike. Buy the correction with the proceeds. Hold for five to ten years.
This sequence has played out in cloud, fiber, mobile, and shale. The market will hand you the entry. The only way to miss it is to chase the narrative instead of the price.
Because the thesis itself isn’t going anywhere.
The Bottom Line
AI demand is growing exponentially. Earth’s land, water, and grid power are not. The gap between them has to be filled by something that exists outside those constraints – and that something is in orbit.
The technology is demonstrated, the capital is committed, and the cost curve is moving in one direction.
The largest IPO in market history is the starting gun.
AI’s ‘generator moment’ has arrived. The only question worth asking is whether you’re positioned before the grid comes online – or scrambling to catch up after.
That’s how these transitions tend to play out.
The infrastructure gets built first. Then the companies sitting on top of it – turning raw compute into products, platforms, and recurring revenue – capture the majority of the upside.
We saw it with the internet. We saw it with the cloud. And we’re starting to see it again here.
Because while orbital compute is about where the next generation of AI infrastructure lives… the bigger story is what runs on top of it.
And right now, one company sits closer to that center of gravity than anything else in the market: OpenAI.
It’s the platform layer – the interface between all of this compute and the applications that actually generate value.
But it’s still not publicly traded.
Which means most investors will only get access once the story is obvious… and a large part of the upside is already behind it.
We’ve taken a closer look at how that moment may unfold – and how to position ahead of it, before the broader market fully catches on.
