As enterprises embrace cloud-native architectures to manage sprawling multicloud ecosystems, one challenge dominates the conversation: how to navigate complexity without losing momentum. Cloud-native observability is emerging as a key lever in that effort, helping businesses tame tool sprawl, bridge skills gaps and respond to rapid advances in artificial intelligence.
TheCUBE Research’s Paul Nashawaty, Savannah Peterson and Rob Strechay on set during the event.
“In our recent research, 75% of respondents indicated that they’re using six to 15 different observability tools and they want to consolidate,” said Paul Nashawaty, principal analyst at theCUBE Research.
Nashawaty joined fellow analysts Savannah Peterson and Rob Strechay for an exclusive broadcast during KubeCon + CloudNativeCon Europe on theCUBE, News Media’s livestreaming studio. They spoke with cloud-native pioneers and enterprise leaders about how organizations are managing scale, skill gaps and sustainability in today’s evolving tech landscape. From platform maturity to AI innovation to cloud-native observability, the conversations revealed key shifts in how cloud-native infrastructure is being reshaped to meet tomorrow’s demands. (* Disclosure below.)
Here are three key insights you may have missed from theCUBE’s coverage:
1. Cloud-native observability is proactive, real-time and everywhere.
Once a tool quietly humming in the background, observability has stepped into a starring role as organizations contend with dynamic workloads, AI-native architectures and multicloud complexity. No longer limited to backend monitoring, it now spans developer tooling, active compliance checks and strategies for business resilience. Industry leaders at the event emphasized a growing push to unify platforms and simplify operations in response to the overwhelming variety of point solutions and the unpredictability they create.
“They talked a lot about the actual functions and the process, to your point about breaking down silos and how people are organizing,” Strechay said, during the event. “They’re also scaling back the number of tools; they’re looking for more of a platform approach.”
This need for simplification is even more critical in AI environments, where applications behave differently and transactions often don't repeat. In these cases, developers must quickly triage errors without relying on predictable execution flows. Observability now serves as a real-time diagnostic engine across live, ever-shifting systems, making it critical for managing these unpredictable, AI-driven environments, according to Alois Reitbauer, chief technology strategist at Dynatrace LLC.

CNCF’s Brian Douglas talks with theCUBE about how cloud-native technologies are enabling scalable, production-ready AI.
“These people find themselves in a situation where they have the most dynamic, most unpredictable applications they have ever had that they need to optimize about,” he told theCUBE, during the event. “Suddenly, what they need to do … [is] end-to-end tracing. They need to be able to debug in production, what we call live debugging — more or less, setting non-breaking breakpoints that need to see all the information in context for very specific transactions that the user’s looking at that will never occur again.”
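Reitbauer's "non-breaking breakpoints" can be sketched in miniature. The following is an illustrative toy, not Dynatrace's implementation: a trace ID propagated end to end through nested calls via Python's `contextvars`, plus a log-point that snapshots state in context without halting execution, so a transaction that "will never occur again" can still be reconstructed afterward. All names here (`logpoint`, `handle_request` and so on) are invented for the example.

```python
# Illustrative sketch only: end-to-end trace propagation plus a
# "non-breaking breakpoint" that records state instead of pausing.
import contextvars
import uuid

# Holds the trace ID for the transaction currently being handled.
current_trace = contextvars.ContextVar("current_trace", default=None)
captured = []  # records a live-debugging backend would ship to its UI

def start_trace():
    """Begin a new end-to-end trace for one transaction."""
    trace_id = uuid.uuid4().hex[:8]
    current_trace.set(trace_id)
    return trace_id

def logpoint(label, **state):
    """Non-breaking breakpoint: snapshot state in trace context, don't halt."""
    captured.append({"trace": current_trace.get(), "label": label, "state": state})

def compute_price(user_id, items):
    logpoint("pricing_input", user_id=user_id, items=items)
    return items * 9.99

def handle_request(user_id):
    start_trace()
    price = compute_price(user_id, items=3)
    logpoint("response_ready", user_id=user_id, price=price)
    return price

handle_request("u-42")
# Every captured record carries the same trace ID, so even a one-off
# transaction can be stitched back together after the fact.
assert len({r["trace"] for r in captured}) == 1
```

The key property is that every snapshot shares one trace ID, which is what lets "all the information in context for very specific transactions" be assembled even when the execution path never repeats.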
Regulatory pressure is also forcing observability to evolve beyond diagnostics and performance. In today’s compliance-heavy environments, organizations are expected to detect violations as they happen — not retroactively — requiring observability tools to act as active compliance monitors. This shift underscores how cloud-native observability is expanding its role from passive monitoring to an essential engine for compliance, security and real-time decision-making.
“We started at Dynatrace to work on … [a] Compliance Assistant, being able to do compliance monitoring for our customers,” Reitbauer said. “A key point is [to] do it continuously because you want to know when you fail. You want to act in real time [because] if you see that something is not working properly … ideally, you can react very quickly, avoid fines and even don’t have to explain to anybody why it’s not working.”
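The continuous-compliance idea Reitbauer describes can be reduced to a simple pattern: evaluate rules against every configuration change as it lands, rather than in a periodic audit. The sketch below is a hypothetical illustration of that pattern; the rule names and config fields are invented, and it does not depict Dynatrace's Compliance Assistant.

```python
# Hedged sketch of continuous compliance monitoring: rules are checked on
# every change event so violations surface in real time, not at audit time.
RULES = {
    "encryption_at_rest": lambda cfg: cfg.get("encryption") == "enabled",
    "no_public_buckets": lambda cfg: not cfg.get("public_access", False),
}

def check(cfg):
    """Return the names of rules this configuration violates right now."""
    return [name for name, ok in RULES.items() if not ok(cfg)]

def on_config_change(cfg, alert):
    """Called on every configuration change: alert immediately on violations."""
    for rule in check(cfg):
        alert(f"violation: {rule}")

alerts = []
on_config_change({"encryption": "disabled", "public_access": True}, alerts.append)
# Both rules fire the moment the bad change lands, which is what lets a
# team "react very quickly" instead of discovering the failure retroactively.
```

Hooking such checks into the change stream, rather than a scheduled scan, is what turns observability data into the real-time compliance signal described above.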
Cloud-native strategies also play a critical role in expanding the reach of observability into AI development at scale. While many organizations are still experimenting with deployments, conversations at KubeCon highlighted the growing need for tooling, community playbooks and data visibility that meet the demands of production-level AI systems, according to Brian Douglas, head of ecosystem at the Cloud Native Computing Foundation. As organizations push toward production-ready AI, CNCF projects already offer powerful blueprints.
“We’ve got over 200 projects within the CNCF,” Douglas told theCUBE, during the event. “If using 10 of those, there’s a story of how you compiled those and made that work to be AI native and AI ready. That’s a story that we should really start getting out there, and I think we’re ready for that.”
Here’s theCUBE’s complete video interview with Alois Reitbauer:
2. Platform engineering has matured, but so have the growing pains.
As platform engineering becomes more central to cloud-native strategies, teams are hitting different speed bumps depending on where they are in the journey, according to Harriet Lawrence, principal product manager for OpenShift at Red Hat Inc., and Kirsten Newcomer, senior director of OpenShift and security product management at Red Hat. Some are still wrapping their heads around DevSecOps and figuring out how to build secure foundations. Others are further along, navigating platform sprawl at scale and piloting advanced tools, such as Red Hat’s OpenShift Lightspeed. Either way, the conversation is shifting; it’s now less about tools in isolation and more about how people, processes and platforms actually work together.

Red Hat’s Kirsten Newcomer and Harriet Lawrence talk with theCUBE about the evolving landscape of platform engineering.
“We still see a lot of our customers at the beginning of their adoption. They’re still new to platform engineering,” Lawrence told theCUBE, during the event. “You see folks right at the other end who are right at the front edge. They’re doing lots of really exciting stuff in platform engineering. They’ve had platform engineering teams for years. They’re very involved in it.”
Even for companies well past the early stages, growth brings its own kind of chaos. At HSBC Group, the container platform spans more than 200 clusters and 13,000 nodes, making every upgrade cycle feel like repainting the Forth Road Bridge: endless, essential and precision-critical, according to Steve Lewis, global head of engineering for container platforms at HSBC, and Venkat Ramakrishnan, vice president and general manager of Portworx by Pure Storage Inc. For teams at this scale, managing updates, data integrity and cloud-native observability can seem like a never-ending balancing act. To keep pace with Kubernetes releases and regulatory demands, the bank has partnered with Portworx to streamline backup, recovery and data integrity at scale.
“The constant flow of upgrades — new versions of Kubernetes are coming out every four months — and we’ve got to keep pace,” Lewis told theCUBE. “Normally, a state the size that we have (200 plus clusters) … it never ends. We are constantly upgrading [and] maintaining.”
As cloud-native platforms evolve, so does the definition of who gets to build. Kubernetes is no longer just the realm of backend engineers — it’s becoming the launchpad for a broader range of creators, from generalists to product managers and beyond. That shift is influencing how companies design platforms, favoring developer experiences that are intuitive, flexible and friction-free, according to Vish Abrams, chief architect of Heroku at Salesforce Inc.
“I think the thing that Heroku is known for is making it super easy to take your code and just get it out into production,” Abrams said, during the event. “This question about are we still going to have coders? Is this going to happen? Well, if you think about it, your administrators, your product people, everybody’s going to be able to build applications now.”
Here’s theCUBE’s complete video interview with Venkat Ramakrishnan and Steve Lewis:
3. Smaller AI, but bigger implications: Kubernetes meets the moment.
AI at scale may grab headlines, but conversations at KubeCon + CloudNativeCon Europe revealed a shift toward more grounded approaches and potentially more revolutionary ones, according to Holly Cummins (pictured, left), senior principal software engineer for Quarkus at Red Hat Inc., and Vincent Caldeira (right), chief technology officer for APAC at Red Hat. With its resilient architecture and orchestration muscle, Kubernetes is becoming the backbone of this movement.

Salesforce’s Vish Abrams talks with theCUBE about how Kubernetes development is empowering more people to build and scale applications.
“What we’re actually doing is we’re seeing better results with lower costs, which is really cool,” Cummins told theCUBE, during the event. “For example, with something like a smaller model is often … you know it’s faster, the costs are lower and it’s more tailored to your domain. It actually gives you better results.”
While production-ready AI remains limited, organizations are testing the waters with graphics processing unit reservations and open tooling, according to Douglas. This growing interest highlights a clear opportunity for stronger partner ecosystems, better tooling and more real-world success stories. The CNCF is working to meet that need by promoting open collaboration and shared frameworks to help cloud-native AI move from theory to production.
“What we’re seeing right now is a lot of folks are using your GitHub and your Cursors or Windsurf that test the waters,” Douglas told theCUBE, during the event. “And you get to a certain extent of, yes, it works, but am I going to use this … behind the firewall for my engineers? Maybe yes or no, it’s up for a debate.”
As organizations move toward more domain-specific models and production-ready deployment, cloud-native observability is becoming essential for understanding how these smaller models behave under different workloads and building trust in how they operate at scale.
Beyond infrastructure, AI is changing how development tools function, making them more iterative and inspiring. Heroku’s evolution hints at what’s next, according to Abrams: The platform’s Kubernetes rebuild is helping usher in a new wave of AI-native platforms that prioritize creativity over configuration.
“I would like to be able to say that the new set of people that are just figuring out about vibe coding, this is a new thing,” Abrams said, during the event. “That class of people that are now building applications, I would love to be able to say all of those people are deploying their applications on Heroku. There is an easy path for this new wave of builders to have a place to deploy. That’s my dream.”
Here’s theCUBE’s complete video interview with Holly Cummins and Vincent Caldeira:
To watch more of KubeCon + CloudNativeCon Europe, here’s our complete video playlist:
https://www.youtube.com/watch?v=videoseries
(* Disclosure: TheCUBE is a paid media partner for KubeCon + CloudNativeCon Europe 2025. Neither Red Hat Inc. nor the Cloud Native Computing Foundation, the primary sponsors of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or News.)
Photo: News