I’ve spent the last few months looking at data center deals, and I keep running into the same wall: power. Not chips, not real estate, not even capital. Just boring old electricity.
The numbers are brutal. A single AI facility can require 96 megawatts, enough to power a small city. And unlike traditional data centers that hum along at steady capacity, AI workloads spike unpredictably. You might go from 30% utilization to 95% in an hour when a new model training run kicks off.
This creates a nightmare for grid operators. They have to provision for your peak demand, even if you only hit it 10% of the time. And communities are starting to notice. I’ve watched deals fall apart because local utilities couldn’t guarantee the capacity, or city councils rejected permits after residents complained about rate increases.
So when I saw the announcement this morning about Emerald AI’s Manassas facility, I almost scrolled past it. Another hyperscale build, another “AI-ready” marketing pitch. But when I dug into the technical architecture, I realized this is different.
NVIDIA, Emerald AI, EPRI, Digital Realty, and PJM announced the Aurora AI Factory, a 96 MW facility in Manassas, Virginia, slated to open in the first half of 2026. The core idea: what if the data center could negotiate with the grid in real time?
Emerald’s Conductor platform sits between NVIDIA’s orchestration layer and PJM’s grid signals. When renewable generation drops or demand spikes, it can slow down or pause non-critical model training, reroute inference jobs to less congested data centers, and modulate the facility’s overall power draw, all while maintaining acceptable Quality of Service for training and inference.
In other words, they’ve built interruptible compute into the architecture. The facility essentially becomes a variable load instead of a fixed drain.
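To make the idea concrete, here’s a minimal sketch of what a grid-aware scheduler does, written only from the public description above. The class names, priority tiers, and numbers are my own invention, not Conductor’s actual API.

```python
# Minimal sketch of a grid-aware workload scheduler, based only on the public
# description above. Class names, priority tiers, and all numbers are
# hypothetical; Emerald has not published Conductor's actual API.
from dataclasses import dataclass
from enum import Enum


class Priority(Enum):
    CRITICAL = 1   # latency-sensitive inference: never curtailed
    FLEXIBLE = 2   # batch training and fine-tuning: can be paused or throttled


@dataclass
class Job:
    name: str
    priority: Priority
    power_mw: float  # steady-state draw attributable to the job


def respond_to_grid_signal(jobs: list[Job], target_reduction_mw: float):
    """Pause flexible jobs until the requested megawatt reduction is met.

    A real system would throttle (e.g., cap GPU power) rather than pause whole
    jobs, and would pick the cheapest combination to shed; this just shows the
    control flow.
    """
    shed, actions = 0.0, []
    for job in sorted(jobs, key=lambda j: j.power_mw):  # smallest jobs first
        if shed >= target_reduction_mw:
            break
        if job.priority is Priority.FLEXIBLE:
            shed += job.power_mw
            actions.append(f"pause {job.name} (-{job.power_mw:.0f} MW)")
    return shed, actions


jobs = [
    Job("chatbot-inference", Priority.CRITICAL, 20.0),
    Job("fine-tune-batch", Priority.FLEXIBLE, 15.0),
    Job("foundation-model-train", Priority.FLEXIBLE, 45.0),
]
shed, actions = respond_to_grid_signal(jobs, target_reduction_mw=25.0)
print(f"Shed {shed:.0f} MW via {actions}")
```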
What Makes This Investable
Here’s what caught my attention from a diligence perspective. The software capabilities that Arushi Sharma Frank (Emerald’s senior adviser on power and utilities) detailed in Utility Dive this morning show this isn’t vaporware.
The system can deliver targeted 20-30% power reductions for multi-hour windows during grid peaks, with no snap-back surge afterward. It can sustain curtailments for up to 10 hours. It responds to both rapid (10-minute) and planned (2-hour) dispatch signals. And critically, it can participate in wholesale electricity markets by mapping locational marginal prices into dispatchable bid curves.
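The bid-curve piece is the part that plugs into wholesale markets. Here’s a toy illustration of what mapping locational marginal prices into a dispatchable bid curve can look like; the price breakpoints and shed fractions are invented for illustration, not Emerald’s actual curves.

```python
# Toy illustration of mapping a locational marginal price (LMP) into a
# curtailment decision. Breakpoints and shed fractions are invented; the
# announcement doesn't disclose the actual bid curves.
def curtailment_for_price(lmp_usd_per_mwh: float) -> float:
    """Return the fraction of flexible load to shed at a given LMP."""
    bid_curve = [          # (price threshold $/MWh, fraction of flexible load shed)
        (0.0, 0.00),       # cheap power: run everything flat out
        (100.0, 0.10),
        (200.0, 0.20),
        (500.0, 0.30),     # scarcity pricing: the 30% ceiling cited above
    ]
    shed = 0.0
    for threshold, fraction in bid_curve:
        if lmp_usd_per_mwh >= threshold:
            shed = fraction
    return shed


for price in (40, 120, 600):
    print(f"LMP ${price}/MWh -> shed {curtailment_for_price(price):.0%} of flexible load")
```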
From an investment perspective, this matters because it changes the unit economics. Utilities are more willing to approve facilities that can shed load at system peaks instead of piling onto them, which means faster interconnection. Variable loads pay less than fixed loads in most tariff structures, which means lower capacity charges. The facility can sell demand response services back to the grid, creating new revenue streams. And perhaps most importantly, this makes data centers politically defensible, creating a regulatory tailwind.
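A rough back-of-envelope on the capacity-charge point, using a hypothetical demand charge and the 25% curtailment figure from their demo (real tariffs are more complicated, but the direction holds):

```python
# Back-of-envelope demand-charge comparison. The $/kW-month rate is
# hypothetical; real tariffs bill on monthly or coincident peaks and vary
# by utility, so treat this as directional only.
DEMAND_CHARGE = 15.0     # $/kW-month, hypothetical
NAMEPLATE_MW = 96.0      # facility peak draw
CURTAILMENT = 0.25       # 25% reduction during billing peaks, per the demo

fixed_peak_kw = NAMEPLATE_MW * 1_000
flex_peak_kw = fixed_peak_kw * (1 - CURTAILMENT)

fixed_annual = fixed_peak_kw * DEMAND_CHARGE * 12
flex_annual = flex_peak_kw * DEMAND_CHARGE * 12

print(f"Fixed-load demand charges:    ${fixed_annual/1e6:.1f}M/yr")
print(f"Flexible-load demand charges: ${flex_annual/1e6:.1f}M/yr")
print(f"Difference:                   ${(fixed_annual - flex_annual)/1e6:.1f}M/yr")
```

With these made-up numbers, curtailability is worth a few million dollars a year in avoided demand charges before you count any demand response revenue.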
The proof is in their earlier testing. A demonstration showed Emerald AI can reduce AI workload power consumption by 25% over three hours during a grid stress event, while ensuring acceptable performance. That’s measured, not modeled.
The Market Structure Question
Now here’s where I get skeptical. They claim that if this reference design were adopted nationwide, it could unlock an estimated 100 GW of capacity on the existing electricity system, equivalent (if that capacity ran around the clock) to roughly 20% of annual U.S. electricity consumption.
That feels optimistic and assumes perfect coordination across thousands of facilities. But the directional concept is sound. If you can make AI compute interruptible without breaking SLAs, you solve two problems: you reduce infrastructure costs, and you make data centers politically palatable again.
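For what it’s worth, the 20% figure only pencils out if that 100 GW runs essentially around the clock. A quick check, using roughly 4,000 TWh of annual U.S. electricity consumption (my approximation of recent EIA data, not a number from the announcement):

```python
# Sanity check on the 100 GW claim. The ~4,000 TWh figure for annual U.S.
# electricity consumption is my approximation of recent EIA data.
unlocked_gw = 100
hours_per_year = 8_760
us_annual_twh = 4_000

unlocked_twh = unlocked_gw * hours_per_year / 1_000   # GW * h -> GWh -> TWh
share = unlocked_twh / us_annual_twh
print(f"{unlocked_gw} GW running year-round is about {unlocked_twh:.0f} TWh, "
      f"roughly {share:.0%} of U.S. consumption")
```

So the comparison is internally consistent; the optimism is in assuming the full 100 GW actually materializes and coordinates.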
The real test will be whether customers accept the tradeoff. Training runs that take 36 hours instead of 24 because you’re opportunistically using cheaper off-peak power? Some will bite. Others won’t. The phrase “acceptable Quality of Service” is doing a lot of work here. It means some workloads will run slower or pause when the grid needs relief.
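To put rough numbers on that tradeoff (all of them invented for illustration): stretching a run saves on energy, but it ties up the cluster longer, and for expensive GPUs that second effect often dominates, which is exactly why not everyone will bite.

```python
# Rough sketch of the stretched-run tradeoff. Every figure is invented for
# illustration; real GPU opportunity costs and power prices vary widely.
ENERGY_MWH = 20.0 * 24        # energy the job needs: 20 MW for 24 full-power hours
BLENDED_PRICE = 90.0          # $/MWh if you run straight through, hypothetical
OFFPEAK_PRICE = 45.0          # $/MWh if you pause through peaks, hypothetical
CLUSTER_COST_PER_HOUR = 30_000.0  # $/hour the GPUs are tied up, hypothetical

straight_24h = ENERGY_MWH * BLENDED_PRICE + 24 * CLUSTER_COST_PER_HOUR
stretched_36h = ENERGY_MWH * OFFPEAK_PRICE + 36 * CLUSTER_COST_PER_HOUR

print(f"Straight-through 24h run: ${straight_24h:,.0f}")
print(f"Stretched 36h run:        ${stretched_36h:,.0f}")
```

With these numbers the flexible run loses: the energy savings are swamped by the cost of occupying the GPUs for twelve extra hours. The math flips when the cluster would otherwise sit partly idle, when the price spread is wider, or when curtailment also earns demand response revenue.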
What I’m watching for: whether this creates a two-tier market. Latency-sensitive inference stays on traditional fixed-capacity infrastructure, while cost-sensitive training migrates to flex-power facilities. If that split happens, the economics of data center real estate start looking very different, and so do the returns.
The Aurora facility will serve as a live innovation hub, running demonstrations with EPRI’s DCFlex Initiative to validate performance during heatwaves, renewable shortfalls, and peak loads. Real-world proof matters more than whitepapers at this point.
Bottom Line for Infrastructure Investors
We’re past the point where you can just throw more diesel generators at the problem. The grid won’t allow it, permitting won’t support it, and the math doesn’t work. Power flexibility isn’t a nice-to-have anymore. It’s table stakes for the next wave of deployment.
For anyone evaluating data center infrastructure plays, the questions to ask are shifting. Can the facility participate in demand response programs? What’s the economic model for interruptible versus fixed capacity? How does power flexibility affect interconnection timelines? What percentage of workloads can actually tolerate curtailment?
Virginia Governor Glenn Youngkin made the announcement this morning, calling the project critical for both AI competitiveness and grid affordability. That tells you how serious the political pressure around data center power consumption has become.
We’ll see if the tech scales. But at least someone’s solving the right problem.