The hyperconverged edge is where AI factories collide with wireless networks.
That shift is forcing telecom infrastructure, enterprise networking and compute architecture to converge in ways the industry has long discussed but rarely executed. Nvidia’s push to extend artificial intelligence beyond centralized data centers is accelerating that change, and companies such as Veea Inc. are aligning architectures accordingly. The real issue is whether telecom operators will actually capitalize on this opportunity, according to Allen Salmasi (pictured), chairman and chief executive officer of Veea.
“I really believe that this is a huge opportunity for telcos to effectively come out of this dumb pipe type of service offering and really provide for a lot of value added into the use cases that are running at the edge,” Salmasi said. “Effectively for that, there are a number of architectures that they can integrate into their telco infrastructure, especially given the fact that we used to have these huge SS7 switches at telco aggregation points that are no longer the case because the core network has shrunk into smaller racks of equipment.”
Salmasi spoke with theCUBE’s John Furrier for theCUBE + NYSE Wired: AI Factories – Data Centers of the Future interview series, during an exclusive broadcast on theCUBE, News Media’s livestreaming studio. They explored how Nvidia’s AI strategy is converging with telecom networks at the edge, among other topics.
How the hyperconverged edge aligns with Nvidia’s network strategy
Nvidia’s influence in this shift is not subtle. Its Open RAN work and public alignment with Nokia signal that AI acceleration is moving directly into the radio access network. Rather than treating wireless as a transport layer, Nvidia is embedding compute into the network fabric itself, Salmasi explained.
“We are completely aligned with that vision that Jensen [Huang] articulated at GTC in Washington, D.C. and discussed it on the panel or on the stage with Nokia,” he said. “They developed an Open RAN type of architecture roughly about four years ago that was first tested by my colleagues at Vapor in Las Vegas. What they’ve done since that trial, they have completely opened it up and made it open source. We are piggybacking off of the work that Nvidia had done on Open RAN to extend it all the way to the edge.”
That Open RAN foundation is now being extended to the edge, where higher frequency bands and distributed radios demand equally distributed compute. The implication is clear: AI workloads cannot sit miles away in a hyperscale data center if they are meant to power real-time logistics, healthcare or transportation systems, Salmasi noted.
“When you get into terahertz range of frequencies, now effectively every room, every small room is going to have some type of a radio head,” he said. “It’s fully distributed in terms of radio communications, but that has to be aligned with, completely in sync with, a distributed compute type of capability. Our architecture supports a compute mesh, as well as microservices mesh on top of this radio.”
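The constraint Salmasi describes — real-time workloads must run near the radios that serve them — can be illustrated with a toy placement check. The node names and latency figures below are hypothetical examples, not a real Veea or Nvidia topology:

```python
# Toy sketch: pick the most capable edge tier that still meets a workload's
# latency budget, illustrating why real-time AI cannot sit in a distant
# hyperscale data center. All names and latency numbers are hypothetical.
from typing import Optional

EDGE_NODES = {
    "room-radio-head": 2,        # ms round trip (hypothetical)
    "building-aggregation": 8,
    "metro-telco-site": 25,
    "hyperscale-dc": 80,
}

def place_workload(latency_budget_ms: int) -> Optional[str]:
    """Return the highest-latency node that still fits the budget
    (bigger sites usually have more compute), or None if none qualify."""
    candidates = [(lat, name) for name, lat in EDGE_NODES.items()
                  if lat <= latency_budget_ms]
    if not candidates:
        return None
    return max(candidates)[1]

if __name__ == "__main__":
    print(place_workload(10))   # building-aggregation
    print(place_workload(100))  # hyperscale-dc
    print(place_workload(1))    # None: even the room radio head is too far
```

A sub-10-millisecond budget, typical of physical control loops, already rules out every tier except the in-building compute — which is the gap a compute mesh on the radio layer is meant to fill.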
AI acceleration, zero trust networking and telco reinvention converge at the edge
In that environment, networking becomes inseparable from AI acceleration, according to Salmasi. Security, orchestration and latency all move into a single architectural stack. That is where Nvidia’s reference architectures begin to matter, especially as AI agents become active participants in operational systems.
“You have communications and cybersecurity fully intertwined and integrated as one fabric,” Salmasi added. “It’s totally embedded into all of the connections that you make on a zero-trust basis. Because, frankly, unless you can trust the AI agents that you’re connecting to, you are not going to really have physical AI introduced at the edge.”
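The zero-trust requirement Salmasi raises — no connection until an AI agent proves its identity — can be sketched minimally with a signed-token check. The shared key, agent IDs, and token format are illustrative assumptions, not Veea's or Nvidia's actual mechanism:

```python
# Minimal zero-trust sketch: an agent must present a valid signed identity
# token before any connection is allowed. Key and agent names are
# hypothetical; a real deployment would use per-agent credentials (e.g. mTLS).
import hashlib
import hmac

SHARED_KEY = b"example-provisioning-key"  # hypothetical deployment secret

def issue_token(agent_id: str) -> str:
    """Sign an agent's identity with the provisioning key."""
    return hmac.new(SHARED_KEY, agent_id.encode(), hashlib.sha256).hexdigest()

def allow_connection(agent_id: str, token: str) -> bool:
    """Admit the agent only if its token verifies; deny by default."""
    expected = issue_token(agent_id)
    return hmac.compare_digest(expected, token)  # constant-time comparison

if __name__ == "__main__":
    token = issue_token("edge-agent-42")
    print(allow_connection("edge-agent-42", token))  # True
    print(allow_connection("rogue-agent", token))    # False
```

The point of the sketch is the default-deny posture: trust is established per connection, per agent, rather than assumed from being inside the network — which is what "fully intertwined" communications and cybersecurity implies at the edge.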
The opportunity for telecom operators hinges on whether they can evolve from bandwidth providers to edge compute orchestrators. They already control facilities, power and aggregation points; what changes is the value layer sitting on top.
“They have the facilities, they have the cooling, they have all the wiring in these thousands of locations that are referred to as telco aggregation points,” Salmasi said. “It’s just really a question of how they architect it and how they effectively integrate that into their existing infrastructure.”
Here’s the complete video interview, part of News’s and theCUBE’s coverage of theCUBE + NYSE Wired: AI Factories – Data Centers of the Future interview series:
Image: News
About News Media
Founded by tech visionaries John Furrier and Dave Vellante, News Media has built a dynamic ecosystem of industry-leading digital media brands that reach 15+ million elite tech professionals. Our new proprietary theCUBE AI Video Cloud is breaking ground in audience interaction, leveraging theCUBEai.com neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.
