The Cloud Native Computing Foundation (CNCF) published a blog post discussing how vCluster, an open-source project by Loft Labs, addresses key multi-tenancy obstacles in Kubernetes clusters by enabling “virtual clusters” within a single host cluster. This approach enables multiple tenants to have isolated control planes while sharing underlying compute resources, thereby reducing overhead without compromising isolation.
Traditional namespace-based isolation in Kubernetes often falls short when tenants need to deploy cluster-scoped resources like custom resource definitions (CRDs), or when platform engineering teams want to maintain strong separation between teams. According to the CNCF post, vCluster offers a practical alternative. A virtual cluster runs as an application in a namespace on the host, but presents a full Kubernetes API server, controller manager, and datastore to tenant workloads. A syncer component ensures that Pods, ConfigMaps, Secrets, and Services from the virtual cluster are mirrored into the host namespace, allowing them to run as normal on the underlying host nodes.
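As an illustration of the syncer's mirroring, a pod created inside the virtual cluster reappears in the host namespace under a rewritten identity. The sketch below assumes vCluster's default name-translation pattern; treat the exact naming scheme as an assumption:

```yaml
# Inside the virtual cluster, a tenant creates an ordinary pod:
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: default
spec:
  containers:
    - name: web
      image: nginx:1.25
---
# The syncer mirrors it into the host namespace where the virtual
# cluster runs (here "team-a"), rewriting the name and namespace so
# multiple tenants can coexist without collisions. The "-x-" pattern
# shown is an assumption based on vCluster's default scheme.
apiVersion: v1
kind: Pod
metadata:
  name: web-x-default-x-my-vcluster   # <pod>-x-<vc namespace>-x-<vcluster>
  namespace: team-a
spec:
  containers:
    - name: web
      image: nginx:1.25
```

From the tenant's point of view the pod is simply `web` in `default`; the translated object is what the host scheduler, and host-level tooling, actually see.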
One of the most compelling use cases described is one in which teams require autonomy (for instance, to install CRDs) without being granted broad admin rights on a shared cluster. Without virtual clusters, such teams face a set of unappealing options: deny the request and create friction, grant expanded rights and weaken isolation, manage the resources centrally and increase the platform team's burden, or provision a dedicated cluster at higher cost and operational overhead. The vCluster model sidesteps this trade-off by letting tenants behave almost as if they had their own cluster while the underlying resources remain shared and controlled by the platform team.
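Concretely, a tenant with admin rights only inside their virtual cluster can apply a cluster-scoped CRD there without it ever appearing on the host's API surface. A minimal sketch, with a hypothetical group and resource name:

```yaml
# Applied *inside* the virtual cluster, so this cluster-scoped
# definition is visible only to this tenant. The group and kind
# names (example.acme.io / Widget) are hypothetical.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.acme.io
spec:
  group: example.acme.io
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
```

Because the virtual cluster has its own API server and datastore, two tenants can even install conflicting versions of the same CRD without interfering with each other or with the host.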
The blog also explores how common platform-engineering tools, such as Kyverno for policy enforcement or Falco for runtime security monitoring, interact with virtual clusters. For example, Falco installed on the host cluster can still detect suspicious activity in workloads originating from a vCluster, because those workloads exist on the host, albeit with transformed names and namespaces. Similarly, Kyverno policies defined at the host level can validate or enforce rules against virtual cluster workloads. However, the article notes that teams must plan for details such as sync latency and exactly which resources are mirrored to the host.
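To illustrate, a host-level Kyverno policy matches the synced pods like any other host workload, since by the time they reach the host API server they are ordinary Pods. A sketch of a validation rule; the label requirement is an arbitrary example, not taken from the CNCF post:

```yaml
# A host-level Kyverno ClusterPolicy. Pods synced from a virtual
# cluster land in the host namespace and are evaluated like any
# other pod; the "team" label requirement here is illustrative.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Pods must carry a team label, including pods synced from virtual clusters."
        pattern:
          metadata:
            labels:
              team: "?*"
```

One subtlety the article hints at: the tenant sees the rejection surface through the syncer rather than directly from their own admission chain, which is part of why sync behavior deserves planning.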
While virtual clusters hold promise for balancing cost, isolation, and developer autonomy, the article also cautions teams to understand the trade-offs. Because workloads still share the same host cluster nodes, strong tenant isolation for data and network traffic, as well as mitigation of potential "noisy neighbor" effects, still requires proper configuration. Furthermore, support for certain cluster-scoped or node-specific resources may be limited in the virtual cluster model, a consideration organizations must weigh as they evaluate whether to adopt this architecture.
vCluster is not the only project developing in this space, though. Capsule offers a namespace-centric operator approach: it enhances native Kubernetes namespaces by layering in RBAC, resource quotas, network policies, and admission controls geared toward multi-tenant usage. While it does not deliver fully separate control planes per tenant, it is relatively lightweight and integrates well into existing clusters. This makes Capsule a strong fit for organizations that want self-service namespace provisioning and shared-cluster economics without the full overhead of virtual-cluster orchestration.
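In Capsule's model, a tenant is declared as a single custom resource that governs all namespaces its owners create. A minimal sketch; the field names follow the Capsule `v1beta2` API as commonly documented, and the quota values are arbitrary:

```yaml
# A Capsule Tenant: owners can self-service namespaces within the
# limits set here. Field names are based on the v1beta2 API and
# should be checked against the Capsule docs; values are examples.
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: team-a
spec:
  owners:
    - name: alice
      kind: User
  namespaceOptions:
    quota: 5            # at most five namespaces for this tenant
  resourceQuotas:
    scope: Tenant
    items:
      - hard:
          limits.cpu: "8"
          limits.memory: 16Gi
```

Note the contrast with vCluster: everything here still flows through the one shared API server, which is precisely why Capsule stays lightweight but cannot give tenants their own CRDs.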
Meanwhile, Kamaji targets a slightly different angle: it aims to enable “cluster-as-a-service” by provisioning dedicated control planes on behalf of tenants, yet still sharing underlying infrastructure and control-plane management. In other words, it delivers per-tenant control-plane separation more akin to dedicated-cluster models but automates many of the multi-cluster operational burdens.
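With Kamaji, each tenant control plane is itself a custom resource on a management cluster, and the tenant API servers run as regular pods there. A sketch; the API group, version, and field names are assumptions based on Kamaji's `TenantControlPlane` resource and should be verified against its documentation:

```yaml
# A Kamaji TenantControlPlane: the management cluster runs this
# tenant's API server as pods, while worker nodes join it remotely.
# Field names are assumptions; values are illustrative.
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: team-b
  namespace: tenants
spec:
  controlPlane:
    deployment:
      replicas: 2                 # HA tenant API servers on shared infra
    service:
      serviceType: LoadBalancer   # how tenant nodes and users reach it
  kubernetes:
    version: v1.29.0
```

This sits between the other two models: tenants get a genuinely dedicated control plane, as with separate clusters, but its lifecycle is automated and its footprint amortized across the management cluster.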
The CNCF’s coverage of vCluster underscores a maturing approach to Kubernetes multi-tenancy: moving beyond simple namespace segregation, toward real virtual-cluster abstractions that deliver better separation without the cost of fully separate physical clusters. For platform teams grappling with multiple tenant teams, this may offer a practical path forward.
