What Is Cilium and How Does It Work?
Cilium is an open source networking and security solution for containers that can be used on premises or in the cloud. It provides a high-performance, scalable way to secure communication between containers without the need for a central controller. Cilium is built on eBPF, a Linux kernel technology that lets networking and security logic run directly inside the kernel, giving it a fast and efficient way to connect containers.
How Does Cilium Work?
Cilium works by attaching eBPF programs to the Linux kernel's networking stack, providing a fast and efficient way to connect containers. Because packets are processed inside the kernel, Cilium can deliver better performance than solutions built on virtual machines or software-defined overlay networks. Cilium also has a number of security features, such as identity-aware network policies, that make it well suited to containerized environments.
Benefits of Using Cilium on Azure
Improved Security
When compared to a traditional networking setup, Cilium provides improved security in several ways. First, Cilium uses network policies to control traffic between services, which reduces the chance of accidental data leaks. Second, Cilium can transparently encrypt communication between services (via IPsec or WireGuard), making it more difficult for attackers to eavesdrop on network traffic. Finally, Cilium's identity-based enforcement ensures that only authorized services can communicate with each other, further reducing the attack surface.
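As a quick illustration of policy-based control, here is a minimal Kubernetes NetworkPolicy (the app=frontend/app=backend labels are hypothetical examples) that only allows pods labeled app=frontend to reach pods labeled app=backend; Cilium enforces standard NetworkPolicy objects like this one, and you can apply it with kubectl once a cluster is running:

```shell
# Only allow ingress to app=backend pods from app=frontend pods.
# Labels here are illustrative; substitute your own workloads.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
EOF
```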
Increased Network Performance
Cilium’s use of container-native networking results in significantly reduced latency and increased throughput when compared to traditional networking solutions. This is because Cilium uses the host kernel’s networking stack instead of creating its own virtual network layer. This allows Cilium to take advantage of the host kernel’s optimizations and avoid the overhead associated with virtual networking solutions.
Automated Network Configuration
One of the advantages of using Cilium is that it automates many of the tasks associated with configuring and maintaining a complex network infrastructure. For example, Cilium automatically configures routing and load balancing based on service discovery information. This saves administrators from having to manually configure these settings and frees up time for other tasks.
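One way to see this automation in action (assuming you have kubectl access to a cluster where the Cilium agent runs as the `cilium` DaemonSet in kube-system, as is typical) is to inspect the service-to-backend mappings the agent programs automatically from Kubernetes service discovery:

```shell
# Check overall Cilium agent health (runs inside the agent pod).
kubectl -n kube-system exec ds/cilium -- cilium status --brief

# List the load-balancing entries Cilium derived from Kubernetes
# services; none of this routing was configured by hand.
kubectl -n kube-system exec ds/cilium -- cilium service list
```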
Step-by-Step Guide for Setting Up Cilium on Azure.
Limitations
- Available only for new clusters
- Available only for Linux and not for Windows
- Cilium Layer 7 policy enforcement is disabled
- Hubble is disabled
- Kubernetes services with internalTrafficPolicy=local aren’t supported – https://github.com/cilium/cilium/issues/17796
- Multiple Kubernetes services can't use the same host port with different protocols (e.g., TCP and UDP) https://github.com/cilium/cilium/issues/14287
- Network policies may be enforced on reply packets when a pod connects to itself via service cluster IP https://github.com/cilium/cilium/issues/19406
Prerequisites
- Azure CLI version 2.41.0 or later. If you're not sure which version you have, run az --version to see the installed version.
- Azure CLI with aks-preview extension 0.5.109 or later.
- If using ARM templates or the REST API, the AKS API version must be 2022-09-02-preview or later
Installation
Install the aks-preview extension
az extension add --name aks-preview
Then run the following command to update to the latest version of the extension
az extension update --name aks-preview
Register the ‘CiliumDataplanePreview’ feature flag
az feature register --namespace "Microsoft.ContainerService" --name "CiliumDataplanePreview"
This will take a few minutes. We are waiting for the state to change from "Registering" to "Registered", which we can verify by running the following command
az feature show --namespace "Microsoft.ContainerService" --name "CiliumDataplanePreview"
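Rather than re-running that command by hand, you can poll the state with a small loop (a convenience sketch; properties.state is the field az feature show reports):

```shell
# Poll every 30 seconds until the feature flag reports "Registered".
while [ "$(az feature show --namespace "Microsoft.ContainerService" \
      --name "CiliumDataplanePreview" \
      --query properties.state -o tsv)" != "Registered" ]; do
  echo "Still registering, waiting 30 seconds..."
  sleep 30
done
echo "CiliumDataplanePreview is registered."
```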
Once the state moves to Registered, refresh the registration of the Microsoft.ContainerService resource provider by running the command below
az provider register --namespace Microsoft.ContainerService
Create an AKS cluster with Azure CNI Powered by Cilium
You have a few options; we will be assigning IP addresses from a VNet.
First we provision the resource group (think of this as an isolated folder that will house our services)
# create the resource group
az group create --name cilium-aks-east --location eastus
# Create a VNet with a subnet for nodes and subnet for pods
az network vnet create -g cilium-aks-east --location eastus --name aks-vnet-dev --address-prefixes 10.0.0.0/8 -o none
az network vnet subnet create -g cilium-aks-east --vnet-name aks-vnet-dev --name nodesubnet --address-prefixes 10.240.0.0/16 -o none
az network vnet subnet create -g cilium-aks-east --vnet-name aks-vnet-dev --name podsubnet --address-prefixes 10.241.0.0/16 -o none
Due to the length of the parameters in this next command, the subscription ID portion is shown as <subscriptionId>; replace this with your own subscription ID.
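To avoid hand-editing those long resource IDs, you can also build them from shell variables first (SUBSCRIPTION_ID below is a placeholder you must set to your own subscription ID; the other values match the resources created above):

```shell
# Placeholder: set this to your own subscription ID.
SUBSCRIPTION_ID="<subscriptionId>"
RG="cilium-aks-east"
VNET="aks-vnet-dev"

# Compose the full subnet resource IDs used by `az aks create`.
VNET_ID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG}/providers/Microsoft.Network/virtualNetworks/${VNET}"
NODE_SUBNET_ID="${VNET_ID}/subnets/nodesubnet"
POD_SUBNET_ID="${VNET_ID}/subnets/podsubnet"

echo "${NODE_SUBNET_ID}"
echo "${POD_SUBNET_ID}"
```

You can then pass --vnet-subnet-id "$NODE_SUBNET_ID" and --pod-subnet-id "$POD_SUBNET_ID" instead of the literal paths.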
Provision AKS with --enable-cilium-dataplane
az aks create -n aks-cni-cilium100 -g cilium-aks-east -l eastus \
--max-pods 250 \
--network-plugin azure \
--vnet-subnet-id /subscriptions/<subscriptionId>/resourceGroups/cilium-aks-east/providers/Microsoft.Network/virtualNetworks/aks-vnet-dev/subnets/nodesubnet \
--pod-subnet-id /subscriptions/<subscriptionId>/resourceGroups/cilium-aks-east/providers/Microsoft.Network/virtualNetworks/aks-vnet-dev/subnets/podsubnet \
--generate-ssh-keys \
--enable-cilium-dataplane
If you're going to keep this cluster running in Azure, store the generated SSH key in a safe backup location rather than treating the cluster as something to quickly provision and delete.
Navigate back to portal.azure.com and you can now see the cluster in the AKS service.
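You can also verify from the CLI (assuming kubectl is installed; k8s-app=cilium is the label the Cilium agent pods typically carry in kube-system):

```shell
# Fetch credentials for the new cluster into your kubeconfig.
az aks get-credentials --resource-group cilium-aks-east --name aks-cni-cilium100

# Confirm the nodes are Ready and the Cilium agent pods are running.
kubectl get nodes
kubectl get pods -n kube-system -l k8s-app=cilium
```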
So we have officially deployed AKS with Azure CNI Powered by Cilium. Keep in mind that this is in preview, so it is not supported for troubleshooting purposes; don't use it in production. One downside: if you want to use the Hubble UI with Cilium, you'll likely have to use BYOCNI instead, which was previously in preview but now appears to be GA, and which lets you install any CNI you'd like.
Now, to clean up resources once you're done with the installation and exploring the out-of-the-box configuration, you can delete the resource group with the following command
az group delete --name cilium-aks-east
This will prompt you to confirm the deletion (y/n).
Let this operation run; once it finishes, the AKS cluster and the VNet we provisioned will be deleted.
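If you'd rather skip the prompt, for example in a cleanup script, az group delete supports --yes and --no-wait:

```shell
# Delete the resource group without a confirmation prompt and
# without blocking the shell while Azure tears the resources down.
az group delete --name cilium-aks-east --yes --no-wait
```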
I also want to take the time to point out the features of Defender for Containers, which surfaces findings for your cluster. I had a cluster running with some elevated privileges, and this is what it looked like when I investigated.
What makes the findings more relevant is that each one is mapped to the MITRE ATT&CK framework, making you aware of the tactics behind the alert.
You can dig into the entities of the resources that are affected, with visuals to aid in finding them.
Going back into the details of the active alerts, the layout makes it easy to explore the findings further.
This doesn't do full justice to Defender for Containers, but I wanted to show off the protection capabilities that are available natively in Azure.
Conclusion
In conclusion, having configuration options, and being able to switch between CNIs, is the ultimate goal. Every managed platform, such as Google Cloud, Amazon Web Services, and Azure, offers multiple configuration options, but in my opinion Microsoft has been really pushing open-source innovation and integration for AKS. Of course, I could be biased, and I will explore EKS as well; I've mostly deployed Calico across AWS and Azure, and I'd like to show off more of Cilium's capabilities in future posts. Stay tuned and keep learning!