In a major move signaling how Amazon Web Services Inc. intends to redefine Kubernetes operations for the artificial intelligence era, AWS today announced Amazon EKS Capabilities, a fully managed suite of Kubernetes-native tools that integrates popular open-source tooling directly into the Amazon Elastic Kubernetes Service control plane. The launch marks one of the company’s most aggressive steps toward reducing operational complexity for enterprise platform teams while improving developer productivity.
In an exclusive pre-re:Invent interview in Seattle, Eswar Bala (pictured), director of container engineering at AWS, said Kubernetes has quietly become the default control plane for AI, sparking unprecedented growth in AI workloads running on containers. EKS Capabilities is AWS’s response.
“Developers spend 70% of their time today managing infrastructure,” he told theCUBE. “EKS Capabilities flips that model. We take on the heavy lifting so they can focus on building.”
“AWS is seeing an annual doubling in the use of graphics processing units managed by Kubernetes,” Bala said. “Agentic workloads, multimodal inference, GPU batch jobs: customers want automation, scale, and reliability. That’s what this launch delivers.”
Kubernetes at scale
AWS is introducing three fully managed components that support Kubernetes use at scale. Argo CD is a declarative GitOps system already used in production by nearly half of all Kubernetes teams, according to AWS. AWS handles all of Argo CD’s infrastructure, including upgrades, patching, high availability and scaling.
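In practice, that means a GitOps deployment can be defined entirely through Argo CD’s standard Application resource. Below is a minimal sketch of registering an application with an Argo CD instance using the Kubernetes Python client; the repository URL, application name and namespaces are hypothetical placeholders, and the example assumes Argo CD’s conventional “argocd” namespace rather than anything specific to the managed capability.

```python
# Sketch: register an app with Argo CD via its standard Application CRD.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at the EKS cluster

app = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "payments-service", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            # Hypothetical Git repository holding the desired Kubernetes manifests.
            "repoURL": "https://github.com/example-org/platform-config.git",
            "targetRevision": "main",
            "path": "apps/payments",
        },
        "destination": {"server": "https://kubernetes.default.svc", "namespace": "payments"},
        # Keep the cluster continuously reconciled against what is stored in Git.
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="argoproj.io", version="v1alpha1",
    namespace="argocd", plural="applications", body=app,
)
```

Once the Application exists, Argo CD continuously compares the cluster against the Git repository and corrects any drift, which is the core of the GitOps model.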
AWS Controllers for Kubernetes, or ACK, enables organizations to manage AWS cloud resources directly through Kubernetes application programming interfaces. AWS said it takes on the tasks of deploying, operating and troubleshooting control plane integrations so customers don’t need to.
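With ACK, an AWS resource becomes just another Kubernetes object. The sketch below, which assumes the upstream ACK S3 controller is enabled on the cluster, declares an S3 bucket through ACK’s Bucket custom resource; the bucket and namespace names are hypothetical.

```python
# Sketch: declare an S3 bucket as a Kubernetes custom resource via ACK.
from kubernetes import client, config

config.load_kube_config()

bucket = {
    "apiVersion": "s3.services.k8s.aws/v1alpha1",  # upstream ACK S3 API group
    "kind": "Bucket",
    "metadata": {"name": "analytics-artifacts", "namespace": "data-platform"},
    # spec.name is the actual bucket name created in the AWS account.
    "spec": {"name": "example-org-analytics-artifacts"},
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="s3.services.k8s.aws", version="v1alpha1",
    namespace="data-platform", plural="buckets", body=bucket,
)
```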
Kubernetes Resource Orchestrator lets platform teams build reusable, opinionated resource bundles that abstract away complexity while remaining fully native to Kubernetes.
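Kubernetes Resource Orchestrator is based on the open-source kro project, which expresses those bundles as ResourceGraphDefinitions: a simplified schema that developers fill in, plus templates for the underlying resources it expands into. The sketch below follows the upstream project’s published kro.run/v1alpha1 format; the “WebApp” abstraction is illustrative, and the managed capability’s exact schema may differ.

```python
# Sketch: a kro-style ResourceGraphDefinition exposing a simple "WebApp" API
# that expands into a full Deployment. Follows upstream kro examples.
from kubernetes import client, config

config.load_kube_config()

rgd = {
    "apiVersion": "kro.run/v1alpha1",
    "kind": "ResourceGraphDefinition",
    "metadata": {"name": "webapp"},
    "spec": {
        # The simplified API that platform teams expose to developers.
        "schema": {
            "apiVersion": "v1alpha1",
            "kind": "WebApp",
            "spec": {
                "name": "string",
                "image": "string",
                "replicas": "integer | default=2",
            },
        },
        # The underlying Kubernetes resources generated from that API.
        "resources": [{
            "id": "deployment",
            "template": {
                "apiVersion": "apps/v1",
                "kind": "Deployment",
                "metadata": {"name": "${schema.spec.name}"},
                "spec": {
                    "replicas": "${schema.spec.replicas}",
                    "selector": {"matchLabels": {"app": "${schema.spec.name}"}},
                    "template": {
                        "metadata": {"labels": {"app": "${schema.spec.name}"}},
                        "spec": {"containers": [
                            {"name": "app", "image": "${schema.spec.image}"},
                        ]},
                    },
                },
            },
        }],
    },
}

# ResourceGraphDefinitions are cluster-scoped in the upstream project.
client.CustomObjectsApi().create_cluster_custom_object(
    group="kro.run", version="v1alpha1",
    plural="resourcegraphdefinitions", body=rgd,
)
```

Developers then create plain “WebApp” objects with only a name, image and replica count, and the orchestrator fills in the rest.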
Together, the services are intended to help customers manage scalable, standardized Kubernetes platforms without building their own GitOps pipelines, resource orchestration layers or control plane integrations.
“Instead of installing these tools yourself, they run inside AWS-owned service accounts,” Bala said. “We handle scaling, patching, upgrades. Customers just use them.”
Bala said the role of containers has changed dramatically over the past decade. What started as lightweight packaging for web services is now the backbone of advanced AI deployments.
“Foundational model builders rely on Kubernetes,” he said. “Dynamic GPU allocation, scheduling, massive scale—none of this happens without the maturity the Kubernetes ecosystem reached over the past 10 years.”
AWS has been preparing for this shift:
- EKS Auto Mode, announced last December, automates GPU provisioning and rightsizing.
- Karpenter, introduced last fall, dynamically scales workloads across GPU and CPU fleets (a configuration sketch follows this list).
- EKS Ultra Clusters, announced in July and supporting up to 100,000 nodes, support foundational model training and hyperscale inference.
- Amazon Q integrations introduced AI-driven troubleshooting that AWS says reduces operational tasks from days to minutes.
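For GPU fleets in particular, Karpenter scaling is typically configured through a NodePool that tells the autoscaler which instance families it may provision. The sketch below is illustrative: the label keys follow Karpenter’s documented well-known labels for its AWS provider, while the “default” EC2NodeClass reference and the GPU limit are hypothetical values.

```python
# Sketch: a Karpenter NodePool allowing GPU (g/p) instance families on demand.
from kubernetes import client, config

config.load_kube_config()

nodepool = {
    "apiVersion": "karpenter.sh/v1",
    "kind": "NodePool",
    "metadata": {"name": "gpu-batch"},
    "spec": {
        "template": {"spec": {
            "requirements": [
                # Restrict provisioning to GPU-bearing EC2 instance categories.
                {"key": "karpenter.k8s.aws/instance-category", "operator": "In", "values": ["g", "p"]},
                {"key": "kubernetes.io/arch", "operator": "In", "values": ["amd64"]},
            ],
            # Hypothetical EC2NodeClass holding AMI, subnet and security-group settings.
            "nodeClassRef": {"group": "karpenter.k8s.aws", "kind": "EC2NodeClass", "name": "default"},
        }},
        # Cap total GPU capacity so batch jobs cannot scale without bound.
        "limits": {"nvidia.com/gpu": "64"},
    },
}

client.CustomObjectsApi().create_cluster_custom_object(
    group="karpenter.sh", version="v1", plural="nodepools", body=nodepool,
)
```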
Invisible infrastructure
AWS said its goal is to make Kubernetes feel like a native AWS service rather than a self-managed ecosystem. With the new EKS Capabilities, customer teams no longer need to maintain their own Argo CD installations or ACK controllers; AWS automatically updates and patches them and analyzes compatibility issues. Identity and access management and single sign-on are handled through AWS IAM Identity Center. Platform teams can more easily templatize and standardize cluster resources, and developers interact with Kubernetes declaratively.
Bala hinted that future developments will focus on agent-oriented application architectures that demand stronger isolation and new orchestration patterns beyond standard containers.
“You’re going to have many agents working together,” he said. “They’ll need sandboxed, isolated environments. Containers may evolve further, or entirely new boundaries may emerge.”
Generative AI is itself a runtime, and the convergence between the container runtime and AI runtime is accelerating, he noted.
Amazon EKS Capabilities is available now in commercial AWS Regions with no minimum fees. Customers pay only for what they use.
Breaking analysis
Today’s launch isn’t just a convenience update; it’s a strategic bet by AWS that Kubernetes will anchor the next decade of AI infrastructure. By operationalizing GitOps, AWS resource APIs, AI-driven troubleshooting, and large-scale GPU automation under one umbrella, AWS is evolving EKS from a container orchestration service into a fully managed AI cloud platform.
The message, Bala said, is unmistakable: “The next decade of AI runs on highly automated, container-native infrastructure. EKS Capabilities is how we’re delivering that future.”
Exclusive video with Eswar Bala, director of containers, AWS