As artificial intelligence agents shift from experimental projects to production systems, enterprise technology leaders are worried that today’s infrastructure can’t handle the coming scalability demands.
They have good reason, according to Spencer Kimball, chief executive of Cockroach Labs Inc., maker of a distributed SQL database known for its high availability and resilience. The company’s recent survey of 1,125 cloud architects and technology executives found that all expect AI workloads to grow in the next year, with more than 60% forecasting growth of 20% or more.
Much of the industry’s attention has focused on graphics processing units as the biggest AI bottleneck, but Kimball said the bigger issue is the fragility of the operational systems behind AI applications. “Every time you click one of those buttons or access an [application programming interface], you’re eventually hitting back-end operational databases,” Kimball told News.
That means agentic AI will accelerate back-end demand far beyond the growth patterns enterprises have become accustomed to. Traditional applications are built around human-paced usage, such as a click every few seconds. AI agents, by contrast, operate continuously and can generate massive request volumes.
‘5,000 actions a second’
“When a Python script is accessing your API, you’re not talking about an action every two seconds; you’re talking about 5,000 actions in a second,” he said.
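A rough back-of-the-envelope model shows why that difference matters at the database layer. The sketch below is illustrative only and not from the Cockroach Labs report: the client counts and the assumed fan-out of three database queries per user action are hypothetical; only the per-client rates (a click every two seconds for humans, 5,000 actions per second for a scripted client) come from Kimball’s comments.

```python
# Hypothetical load comparison: human-driven vs. agent-driven API traffic.
# Rates per client are from Kimball's remarks; client counts and the
# queries-per-action fan-out are illustrative assumptions.

HUMAN_ACTIONS_PER_SEC = 0.5    # roughly one click every two seconds
AGENT_ACTIONS_PER_SEC = 5000   # Kimball's figure for a scripted API client


def backend_qps(clients: int, actions_per_sec: float,
                queries_per_action: int = 3) -> float:
    """Estimated database queries per second reaching the back end.

    queries_per_action is an assumed fan-out: each API call typically
    touches the operational database more than once.
    """
    return clients * actions_per_sec * queries_per_action


human_load = backend_qps(clients=1000, actions_per_sec=HUMAN_ACTIONS_PER_SEC)
agent_load = backend_qps(clients=1000, actions_per_sec=AGENT_ACTIONS_PER_SEC)

print(f"1,000 human users:  {human_load:,.0f} QPS")
print(f"1,000 agent clients: {agent_load:,.0f} QPS")
print(f"ratio: {agent_load / human_load:,.0f}x")
```

Even with modest assumptions, a small fleet of agents generates four orders of magnitude more back-end queries than the same number of human users, which is the shape of the demand curve Kimball is describing.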
The Cockroach Labs report suggests many enterprises already see a breaking point ahead. Eighty-three percent of respondents expect their data infrastructure to fail without major upgrades in the next 24 months, and 34% believe the breaking point will come within the next 11 months.
Kimball: “In three years, there’s going to be 10x growth; in five years, it’s probably going to be 100x.” Photo: Cockroach Labs
Kimball called agents a looming “tsunami” of demand, driven both by higher transaction volumes and the unpredictable behavior of autonomous systems. Enterprise databases have historically scaled at a manageable pace of “a bit more than 10x every 10 years,” he said, but the AI era will compress that timeline dramatically.
“In three years, there’s going to be 10x growth,” he said. “In five years, it’s probably going to be 100x.”
The report also highlights the financial consequences of downtime: Ninety-eight percent of respondents said a one-hour outage costs at least $10,000, and nearly two-thirds said the cost exceeds $100,000 per hour.
Those costs could be magnified by agents’ ability to quickly switch to alternative suppliers and vendors, Kimball said. He cited the example of an agent detecting slowdowns in a bank’s account management systems and proposing to switch customers to a competitor, handling the transition automatically.
“I can move all your bill payments over in about 10 minutes. Would you like me to move it?” Kimball said.
The report also identifies where failures are likely to emerge first. While 36% said cloud infrastructure or service providers would be the first point of failure, the database layer came in second at 30%.
Cloud isn’t a panacea
Kimball said cloud elasticity alone won’t address the problem. Even with workloads running on hyperscalers, architectural choices at the database and data-management layer will determine whether systems can handle continuous AI-driven load.
“While cloud infrastructure provides the raw materials, it’s the data architecture layered on top that determines whether systems thrive or fail at AI scale,” the report states.
The report found that 85% of respondents are spending at least 10% of their total information technology budget on AI initiatives that place significant demands on data infrastructure, and 24% are spending more than 25%. However, it suggests leadership teams may not fully appreciate the urgency.
Sixty-three percent of respondents said executives underestimate how quickly AI demands will outpace existing infrastructure.
Kimball said this disconnect could leave organizations unprepared for sudden shifts in usage patterns, particularly as agentic workloads grow to rival human-generated traffic.
“Agent traffic is small compared to human traffic,” he said. “That is why everything’s not breaking yet.”
Enterprises are pursuing a mix of scaling strategies, with about half adopting a hybrid or dynamic scaling approach, 26% focusing on horizontal scaling, and 22% on vertical scaling.
Kimball said the hybrid approach is pragmatic. “Moving everything all at once to fully distributed infrastructure is risky,” he said. “You want to walk before you run.”
Cockroach Labs is positioning itself to capitalize on these trends. Kimball said the company has long emphasized reliability but expects scaling pressure from AI to become a more prominent driver of database modernization decisions.
“The moment calls for exactly the differentiators we’ve been building for more than 10 years,” he said.
