The team behind Google Kubernetes Engine (GKE) revealed that it has successfully built and operated a Kubernetes cluster with 130,000 nodes, the largest publicly disclosed Kubernetes cluster to date. The milestone underscores how far cloud-native infrastructure has advanced and its readiness for the large-scale, compute-heavy workloads of the AI and data era.
The feat was achieved by re-architecting key components of Kubernetes’ control plane and storage backend, replacing the traditional etcd data store with a custom Spanner-based system that can support massive scale, and optimizing cluster APIs and scheduling logic to reduce load from constant node and pod updates. The engineering team also introduced new tooling for automated, parallelized node pool provisioning and faster resizing, helping overcome typical bottlenecks that would hinder responsiveness at such a scale.
As AI training and inference workloads grow, often requiring hundreds or thousands of GPUs or high-throughput CPU clusters, the ability to run vast, unified Kubernetes clusters becomes a critical enabler. With a 130,000-node cluster, workloads such as large-scale model training, distributed data processing, or global microservice fleets can be managed under a single control plane, simplifying orchestration and resource sharing.
At the core of the scale breakthrough was Google’s replacement of etcd as the primary control-plane datastore with a custom, Spanner-backed storage layer. Traditional Kubernetes relies on etcd for strongly consistent state management, but etcd becomes a scaling bottleneck at very high node and pod counts due to write amplification, watch fan-out, and leader election overhead. By offloading cluster state into Spanner, Google gained horizontal scalability, global consistency, and automatic sharding of API objects such as nodes, pods, and resource leases. This dramatically reduced API server pressure and eliminated the consensus bottleneck that typically caps Kubernetes clusters at tens of thousands of nodes. The API servers were also refactored to batch and compress watch traffic, preventing control-plane saturation from constant node heartbeats and pod status updates.
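To illustrate the kind of watch-traffic hygiene this implies, the sketch below uses client-go to watch Node objects with watch bookmarks enabled, a standard upstream mechanism that lets clients resume from a recent resourceVersion instead of re-listing the entire node set. It is a minimal example of reducing relist pressure on the API server, not a reconstruction of Google’s internal batching and compression changes.

```go
// Minimal illustrative sketch (not Google's internal implementation): watch Node
// objects with bookmarks enabled so the client can resume cheaply after disconnects
// instead of issuing a full re-list against a very large node set.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the program runs inside a cluster; use clientcmd for out-of-cluster config.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// AllowWatchBookmarks asks the API server to send periodic bookmark events so the
	// client can record a recent resourceVersion and restart the watch from there.
	w, err := clientset.CoreV1().Nodes().Watch(context.Background(), metav1.ListOptions{
		AllowWatchBookmarks: true,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for event := range w.ResultChan() {
		if event.Type == watch.Bookmark {
			// Bookmarks carry only a resourceVersion; nothing else to process.
			continue
		}
		fmt.Printf("node event: %s\n", event.Type)
	}
}
```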
On the infrastructure side, Google introduced highly parallelized node provisioning and scheduling optimizations to avoid the thundering-herd problems that occur when tens of thousands of nodes join a cluster simultaneously. Node pools were created with aggressive parallelism, while kube-scheduler was tuned to reduce per-pod scheduling latency and minimize global locking. Networking and IP address management were also reworked to avoid CIDR exhaustion and route-table limits at extreme scale. Crucially, the engineers treated “cluster scale” as a full-stack systems problem, spanning API efficiency, database architecture, scheduling algorithms, and network control planes, rather than simply increasing resource quotas. This architectural shift is what allowed Kubernetes to move from tens of thousands of nodes into true hyperscale territory.
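The provisioning pattern itself can be sketched in ordinary Go: fan out node pool creation with bounded concurrency and jitter so requests do not arrive in synchronized bursts. The createNodePool function below is a hypothetical stand-in for a real provisioning call (for example, a GKE node pool create request); only the concurrency pattern is the point.

```go
// Hypothetical sketch of parallelized node pool provisioning: bounded concurrency
// plus jitter, so thousands of provisioning requests do not hit the control plane
// in a single synchronized burst (the classic thundering-herd failure mode).
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"

	"golang.org/x/sync/errgroup"
)

// createNodePool is a placeholder for a cloud provider provisioning call.
func createNodePool(ctx context.Context, name string) error {
	time.Sleep(200 * time.Millisecond) // simulate the API round trip
	fmt.Println("provisioned", name)
	return nil
}

func main() {
	g, ctx := errgroup.WithContext(context.Background())
	g.SetLimit(32) // bound the number of in-flight provisioning requests

	for i := 0; i < 200; i++ {
		name := fmt.Sprintf("pool-%03d", i)
		g.Go(func() error {
			// Random jitter staggers request start times across workers.
			time.Sleep(time.Duration(rand.Intn(500)) * time.Millisecond)
			return createNodePool(ctx, name)
		})
	}

	if err := g.Wait(); err != nil {
		panic(err)
	}
}
```

The design choice worth noting is the explicit concurrency limit: unbounded parallelism simply moves the thundering herd from node startup to the provisioning API, while a bounded, jittered fan-out keeps throughput high without saturating any single control-plane component.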
This milestone also represents a dramatic leap over past GKE limits. Until recently, GKE documented support for clusters of up to 65,000 nodes. While that limit already accommodated large-scale workloads, 130,000 nodes doubles it, a strong signal of Google Cloud’s ambition to support what it calls the “AI gigawatt era.”
Even so, Google cautions that this cluster was built in “experimental mode,” primarily as a proof of concept to validate scalability. For most real-world deployments, practical constraints around autoscaling, network policies, resource quotas, and scheduling will likely require more conservative configurations.
For organizations pursuing large-scale AI or data workloads, the announcement shows that cloud-native infrastructure, long assumed to hit a ceiling well short of this scale, can now exceed a hundred thousand nodes in a single cluster. Kubernetes, when properly optimized, remains a viable backbone for even the most demanding compute needs.
However, this sort of cluster scaling is not unique to Google. In July 2025, AWS announced that EKS now supports clusters of up to 100,000 worker nodes, a dramatic increase over typical limits. This enhancement is aimed squarely at ultra-large AI/ML workloads: according to AWS, a single EKS cluster at this scale could support up to 1.6 million Trainium chips or 800,000 NVIDIA GPUs, enabling “ultra-scale AI/ML workloads such as state-of-the-art model training, fine-tuning, and agentic inference.”
AWS has documented the extensive re-engineering required to reach this scale: tuning the Kubernetes API servers, scaling control-plane capacity, and improving networking and image-distribution pipelines in the data plane to handle extreme load. During tests, the cluster handled millions of Kubernetes objects (nodes, pods, services) and maintained responsive API latencies even under high churn and heavy scheduling load.
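One everyday technique for keeping API latencies bounded when a cluster holds millions of objects is paginated listing. The sketch below uses client-go’s Limit and Continue list options to fetch pods in fixed-size chunks instead of one enormous LIST; it illustrates the general principle rather than AWS’s internal tuning.

```go
// Illustrative sketch (not AWS's internal tooling): paginated LIST requests keep
// individual API calls small and predictable even when the cluster holds millions
// of objects.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig() // assumes in-cluster credentials
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	total := 0
	cont := ""
	for {
		// Limit caps the page size; Continue resumes where the previous page ended.
		pods, err := clientset.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			Limit:    500,
			Continue: cont,
		})
		if err != nil {
			panic(err)
		}
		total += len(pods.Items)
		cont = pods.Continue
		if cont == "" {
			break
		}
	}
	fmt.Println("total pods:", total)
}
```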
The EKS announcement demonstrates that the scalability ambition showcased by GKE is not unique to one cloud vendor. The fact that a major cloud provider has committed to offering 100,000-node Kubernetes clusters in production strengthens the argument that Kubernetes is ready for the “AI gigawatt era.” It also gives companies an option when weighing whether to invest in custom, large-scale engineering (like Google’s 130,000-node build) or adopt a managed, high-scale service via EKS.
