The data center is undergoing its most dramatic reinvention in decades, driven by the insatiable growth of artificial intelligence models that demand more power, scale and efficiency than ever before.
What’s changing isn’t just the hardware but the way data itself moves and interacts across massive GPU clusters. Traditional copper connections can no longer keep pace with the bandwidth and thermal demands of today’s AI factories, opening the door for light-based photonic fabrics that promise faster communication, lower energy use and higher GPU utilization. This evolution marks a turning point where data movement — not compute — has become the defining factor in the race to build the next era of intelligent infrastructure, according to Preet Virk (pictured), co-founder and chief operating officer of Celestial AI Inc.
Celestial AI’s Preet Virk talks about AI in the data center.
“Large cluster sizes result in a reach issue — you can’t pack these GPUs as close as you would like to because of the thermal considerations and the heat considerations, and therefore the reach is starting to become an issue where photonics comes in,” Virk said. “Copper without re-timers, re-clockers is good enough for 2, 3 or 4 meters. Beyond that, optics clearly is the right answer. Photonically connected, extremely large cluster sizes that have very high bandwidth, very low latency and very low power as measured in picojoules per bit are what these modern data centers really need.”
Virk spoke with theCUBE’s John Furrier at theCUBE + NYSE Wired: AI Factories – Data Centers of the Future event, during an exclusive broadcast on theCUBE, News Media’s livestreaming studio. They explored how photonic interconnects, disaggregated memory and new scaling models are redefining the data center as a high-efficiency factory that produces tokens, the new currency of AI.
The data center must deal with the AI model explosion
Over the past few years, AI models have grown at an unprecedented pace. What once required a single GPU now demands clusters of hundreds of thousands. As a result, even the metric for measuring compute has expanded to count massive GPU clusters as single units, Virk explained.
“Everybody realizes now that the AI models have grown at a pace that nobody anticipated,” he said. “As the AI models get larger and larger, you need more and more XPUs, GPUs to compute and process that model, which translates simply into, you need massive clusters, which three, four years ago we would’ve never imagined. What that means is that all these GPUs need to be connected so that they work almost as if they were a single unit.”
This exponential scaling introduces new bottlenecks. GPUs must be connected efficiently across racks and even data centers, while managing heat, power and bandwidth. Copper connections can’t keep up at scale. That’s where photonics — a light-based interconnect technology — emerges as the clear solution, according to Virk.
“We focused on the data movement problem early on, but to be honest, we never realized that this problem is going to become such a large component of a modern data center infrastructure,” he added. “What we focused on is the photonic fabric and the scale-up network day one, not scale-out. That’s where, as they say, the pain is, and that’s where the photonic fabric comes in. What we allow is for the industry to build very large clusters in a very efficient fashion.”
In modern AI data centers, data movement, not compute, is the largest energy drain: sixty percent of data center energy is spent moving data rather than processing it, Virk emphasized. Worse, this inefficiency drags down model FLOPs utilization (MFU), the metric that shows how effectively GPUs are being used.
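Model FLOPs utilization is commonly expressed as the ratio of the floating-point work a model actually completes to the hardware's theoretical peak. The short Python sketch below illustrates that definition; the GPU numbers are hypothetical assumptions chosen for illustration, not figures from Celestial AI or the interview.

```python
# Illustrative sketch of model FLOPs utilization (MFU), the metric referenced above.
# All numbers are hypothetical examples, not data from Celestial AI or the interview.

def model_flops_utilization(achieved_tflops: float, peak_tflops: float) -> float:
    """MFU: achieved model FLOPs per second divided by theoretical peak FLOPs per second."""
    return achieved_tflops / peak_tflops

# Example: a GPU with a 1,000-TFLOPS peak that sustains 350 TFLOPS of useful model math
# runs at 35% MFU; time spent waiting on data movement lowers the achieved number,
# and therefore the ratio.
print(f"MFU: {model_flops_utilization(350.0, 1000.0):.0%}")
```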
“You would want to include a lot of compute dies on a single package, inside a single package, and these are full-reticle dies,” he noted. “At Celestial, we actually built an optical multi-chip interconnect bridge that allows you to optically connect these large dies that have high-density, high-speed I/O in a small space within a package.”
Celestial AI’s photonic fabric attacks this head-on by reducing the energy per bit for data movement from 55 picojoules to just 13. That roughly fourfold power savings translates directly into efficiency gains, lower costs and higher utilization of expensive GPU assets.
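To see how the per-bit figures translate into power, here is a minimal back-of-the-envelope sketch. The 55 pJ/bit and 13 pJ/bit values come from the article above; the assumed 10 Tb/s of sustained fabric traffic per GPU is a hypothetical figure chosen only to make the arithmetic concrete.

```python
# Back-of-the-envelope check of the power savings implied by the figures above:
# cutting data-movement energy from 55 pJ/bit to 13 pJ/bit. The traffic figure
# (10 Tb/s of sustained fabric bandwidth per GPU) is an illustrative assumption,
# not a number from the interview.

PJ_PER_BIT_BEFORE   = 55.0   # energy per bit with electrical interconnect, picojoules
PJ_PER_BIT_PHOTONIC = 13.0   # energy per bit with the photonic fabric, picojoules
BITS_PER_SECOND     = 10e12  # assumed 10 Tb/s of sustained fabric traffic per GPU

def interconnect_watts(pj_per_bit: float, bits_per_second: float) -> float:
    """Power (W) = energy per bit (J) x bits moved per second."""
    return pj_per_bit * 1e-12 * bits_per_second

before = interconnect_watts(PJ_PER_BIT_BEFORE, BITS_PER_SECOND)    # 550 W
after = interconnect_watts(PJ_PER_BIT_PHOTONIC, BITS_PER_SECOND)   # 130 W
print(f"{before:.0f} W -> {after:.0f} W per GPU "
      f"({PJ_PER_BIT_BEFORE / PJ_PER_BIT_PHOTONIC:.1f}x reduction)")
```

Whatever traffic level is assumed, the ratio is the same: moving each bit costs a bit more than four times less energy, which is where the efficiency and utilization gains come from.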
Here’s the complete video interview, part of News’s and theCUBE’s coverage of theCUBE + NYSE Wired: AI Factories – Data Centers of the Future event:
Photo: News