Introduction
Multi-tenancy in Kubernetes refers to the ability to isolate and manage multiple user groups, or ‘tenants’, within a single Kubernetes cluster. This approach is essential for organizations that want to maximize resource utilization while maintaining isolation and security between user groups. Typically this is achieved through logical isolation: namespaces represent separate areas of the cluster, are mapped to the relevant teams, and are locked down with authorization and auditing. Capsule takes this namespace-as-a-service idea a step further with another layer of abstraction, and that is what its tenant concept accomplishes.
Getting Started
Capsule provides the following YAML manifest to get it up and running on an existing Kubernetes cluster. For this demo I’m running Kubernetes 1.29.0 on kubeadm via a Killercoda sandbox; I encourage using an open sandbox like this if you’d like to experiment further, as it’s a safe alternative that doesn’t consume your own compute resources.
kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/master/config/install.yaml
After running this command you should see the resources created by the installation, as shown below.
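As a quick sanity check, you can confirm the controller came up and the Tenant CRD was registered. A minimal sketch, guarded so it is a no-op on a machine without kubectl or a cluster (the namespace name matches the default install manifest):

```shell
# Verify the Capsule installation (guarded no-op without a cluster).
CAPSULE_NS="capsule-system"
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n "$CAPSULE_NS"            # capsule-controller-manager should be Running
  kubectl api-resources | grep capsule         # the Tenant CRD should be listed
fi
```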
We can see that the capsule-controller-manager resides in the capsule-system namespace. With the default installation it stays isolated to that designated namespace, though I’m sure this can be altered if needed.
Capsule introduces tenants: areas of your cluster that are isolated from other tenants and represented by the following syntax.
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: west-prod
spec:
  owners:
  - name: steven
    kind: User
EOF
Notice that once this is applied to the cluster we’ve created a new tenant named west-prod (the name could be anything; I’m just using this for demonstration purposes). From the tenant we can see whether a quota caps the number of namespaces, the count of namespaces currently associated with the tenant, and whether a node selector pins it to dedicated nodes.
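The namespace quota mentioned above is set on the Tenant object itself. A minimal sketch, assuming the namespaceOptions.quota field of the v1beta2 CRD (the quota value of 3 is my choice; verify the field against the Capsule docs for your version):

kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: west-prod
spec:
  owners:
  - name: steven
    kind: User
  namespaceOptions:
    quota: 3
EOF

With this in place, the tenant owner can create at most three namespaces under west-prod before Capsule rejects further creations.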
If you’re following along with this tutorial, we now have to create a kubeconfig for the user we designated as owner in the previous YAML. You can use the script at https://github.com/projectcapsule/capsule/blob/main/hack/create-user.sh as a reference and enter the values as I’ve done.
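As a sketch, the script takes the username and tenant as arguments and writes a kubeconfig for that user (argument order and output filename are per my reading of the script; double-check the source). Guarded so it is a no-op when the script or cluster is unavailable:

```shell
# Hypothetical invocation of Capsule's create-user.sh helper. It generates
# a CSR, has the cluster sign it, and writes a kubeconfig for the user.
USER_NAME="steven"
TENANT="west-prod"
if [ -x ./create-user.sh ] && command -v kubectl >/dev/null 2>&1; then
  ./create-user.sh "$USER_NAME" "$TENANT"
  export KUBECONFIG="./${USER_NAME}-${TENANT}.kubeconfig"
fi
```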
Now we run the following commands to explore further, as shown in the output of the script (helpful for context).
We used Capsule to create the tenant, and we’ve designated ourselves as the tenant owner in that context. This is an important distinction: the owner holds the highest level of permissions within the tenant, and after creation can authenticate via OIDC tokens. The documentation outlines an example of this representation using “user_groups”.
Now that we’ve exported our ./kubeconfig, our actions are authenticated as the tenant owner and we can create namespaces that fall under our tenant, so I’m going to create the following.
west-prod-devsecops / west-prod-staging
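The two namespaces above can be created with the owner kubeconfig; a minimal sketch, guarded so it is a no-op without a cluster (the --kubeconfig path assumes the file generated earlier):

```shell
# Create tenant namespaces as the owner. Capsule attaches them to the
# west-prod tenant because the request is authenticated as its owner.
NAMESPACES="west-prod-devsecops west-prod-staging"
for ns in $NAMESPACES; do
  if command -v kubectl >/dev/null 2>&1; then
    kubectl --kubeconfig=./kubeconfig create namespace "$ns"
  fi
done
```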
Now we can run a few commands against west-prod-devsecops to see what we are running, along with our authorization limitations; for example, kubectl get pods -n kube-system returns an error.
However, within our assigned permissions we can view the workloads we’ve deployed, such as an nginx pod.
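For reference, that nginx pod can be reproduced with something like the following (the pod name, image, and namespace are my choices); guarded so it is a no-op without a cluster:

```shell
# Run a test pod inside a tenant namespace as the tenant owner.
POD_NAME="nginx"
IMAGE="nginx"
if command -v kubectl >/dev/null 2>&1; then
  kubectl --kubeconfig=./kubeconfig -n west-prod-devsecops \
    run "$POD_NAME" --image="$IMAGE"
  kubectl --kubeconfig=./kubeconfig -n west-prod-devsecops get pods
fi
```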
Additionally, attempting to enumerate tenants with kubectl get tenants returns an error as well.
For further sanity checks you can always leverage kubectl auth can-i <action> <api-resource> to see what you can and can’t do; this validates the enforcement.
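A few checks I’d run as the tenant owner, as a sketch (the yes/no answers follow from the tenant boundary described above, not from output verified here); guarded so it is a no-op without a cluster:

```shell
# Probe RBAC as the tenant owner with kubectl auth can-i.
TENANT_NS="west-prod-devsecops"
if command -v kubectl >/dev/null 2>&1; then
  KC="--kubeconfig=./kubeconfig"
  kubectl $KC auth can-i create pods -n "$TENANT_NS"   # inside the tenant
  kubectl $KC auth can-i list pods -n kube-system      # outside the tenant
  kubectl $KC auth can-i list tenants                  # cluster-scoped CRD
fi
```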
Security Aspect
Notably, your use of orchestration such as Kubernetes is likely not limited to a single cluster; as with any other resource built on an abstraction of underlying compute, you have to evaluate tools such as Capsule. Multi-tenancy is growing in adoption across organizations as a more cost-effective measure: rather than dedicating entire clusters to parts of the business, you share the same underlying compute, restricted by boundaries that are enforced with different mechanisms. In this case the tenant provides a level of isolation, and Capsule also supports Pod Security admission as well as the older pod security mechanisms. If your organization is considering this aspect, I’d say from a design perspective you’ll likely have to explore that route.
An example provided by the documentation explores the control aspect: typically you address roles broadly at the cluster level or per namespace, but with Capsule’s tenant abstraction you can define this in a fine-tuned way.
apiVersion: v1
kind: Namespace
metadata:
  labels:
    capsule.clastix.io/tenant: oil
    kubernetes.io/metadata.name: oil-development
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
  name: oil-development
  ownerReferences:
  - apiVersion: capsule.clastix.io/v1beta2
    blockOwnerDeletion: true
    controller: true
    kind: Tenant
    name: oil
Summary
Capsule is a CNCF Sandbox project, but it has gained real traction; I’m constantly seeing webinars and training provided by clastix.io, which also has an enterprise offering, likely a more turnkey platform delivered as SaaS. I’m definitely seeing acceleration in the growth of concepts associated with multi-tenancy, and this project addresses the challenges by carving out a segmented area of CRDs representing the tenant layer. It’s a great project to take a look at if your organization is struggling to keep up with costs and wants a more tailored, supported approach to segmentation. I’d imagine this will be a topic of focus at KubeCon North America and Europe this coming year as more organizations address these challenges.
Resources