Kubernetes with Calico – BYOCNI

Microsoft Azure Kubernetes Service (AKS) opens up a whole world of exploration by giving you the option to bring in a container network interface of your choice.

Wait, what's a Container Network Interface? Okay, let's start at the top: for Kubernetes to wire pods into the network, a Container Network Interface (CNI) plugin is needed. The CNI project, hosted by the Cloud Native Computing Foundation, standardizes the specification and libraries for writing plugins that configure network interfaces in containers.

What makes a container network interface valuable? A number of things. In the Kubernetes ecosystem, each plugin plays a role in your overall strategy: if you want to isolate workloads, Cilium or Calico is a great choice, or perhaps you'd like to use another CNI such as Flannel.

In any case, the choice is yours. In today's video we'll start with a bare-bones AKS cluster with no CNI installed, go through the installation process for Calico, and explore some of the possibilities of using Calico as a CNI.

Prerequisites to install your own CNI on Azure, as of this writing:

  • Azure CLI – version 2.39.0
  • Virtual Network for the AKS cluster must allow outbound internet connectivity
  • The AKS cluster may not use 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24 for the Kubernetes service address range, pod address range, or cluster virtual network address range
  • The cluster identity used by the AKS cluster must have at least Network Contributor permissions on the subnet within your VNet (see the sketch after this list). If you wish to define a custom role instead of using the built-in Network Contributor role, the following permissions are required
    • Microsoft.Network/virtualNetworks/subnets/join/action
    • Microsoft.Network/virtualNetworks/subnets/read
  • The subnet assigned to the AKS node pool can’t be a delegated subnet
  • AKS doesn't apply Network Security Groups (NSGs) to its subnet or modify any of the NSGs associated with that subnet. If you provide your own subnet and add NSGs associated with it, you must ensure the security rules in the NSGs allow traffic within the node CIDR range.
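If you do bring your own VNet and subnet (we don't in this demo; AKS will create a managed VNet for us), a minimal sketch of granting the built-in Network Contributor role to the cluster identity on that subnet looks like the following. The VNet name, subnet name, and identity ID are hypothetical placeholders; substitute your own.

# Hypothetical VNet/subnet names; replace with your own resources
SUBNET_ID=$(az network vnet subnet show \
  --resource-group aks-calico-east \
  --vnet-name aks-calico-vnet \
  --name aks-calico-subnet \
  --query id -o tsv)

# Grant the cluster identity Network Contributor scoped to the subnet
az role assignment create \
  --assignee <cluster-identity-client-id> \
  --role "Network Contributor" \
  --scope "$SUBNET_ID"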

Okay, now that the details are out of the way, let's move to the Cloud Shell, or the CLI if you're on your local workstation.

az group create -l eastus -n aks-calico-east
az aks create --name aks-calico-east \
--resource-group aks-calico-east \
--location eastus \
--network-plugin none \
--node-vm-size Standard_B2ms \
--os-sku AzureLinux \
--generate-ssh-keys

Mind you, the --generate-ssh-keys flag will create SSH keys and store them locally if you don't already have a pair; the kubeconfig credentials used for cluster access come from the az aks get-credentials command we run shortly.

The --node-vm-size flag keeps the deployment small by using a B-series (burstable) VM size, which keeps our costs low while still giving us enough capacity across the nodes.
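If you'd like to double-check that the --network-plugin none flag took effect, you can query the cluster's network profile; for a BYOCNI cluster this should return none:

az aks show --resource-group aks-calico-east --name aks-calico-east \
  --query networkProfile.networkPlugin -o tsv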

Next, ensure we can reach the kube-apiserver by running the following commands:

az aks get-credentials --resource-group aks-calico-east --name aks-calico-east

Then we run:

kubectl get node -o custom-columns='NAME:.metadata.name,STATUS:.status.conditions[?(@.type=="Ready")].message'

This is normal: after running this command, we can see that the three nodes we've deployed report that no CNI is running yet.
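A quicker sanity check with the same takeaway is simply listing the nodes; they will sit in a NotReady state until a CNI is installed:

# STATUS shows NotReady for each node until we install Calico
kubectl get nodes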

Now, to install Calico, run the following commands:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml

kubectl create -f - <<EOF
kind: Installation
apiVersion: operator.tigera.io/v1
metadata:
  name: default
spec:
  kubernetesProvider: AKS
  cni:
    type: Calico
  calicoNetwork:
    bgp: Disabled
    ipPools:
    - cidr: 192.168.0.0/16
      encapsulation: VXLAN
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
EOF

So, a few items in our YAML: we've specified our Kubernetes provider, in this instance AKS (Calico can run just about anywhere), and we've declared VXLAN as the encapsulation for the pod network. Additionally, the second document in the manifest, separated by ---, tells the operator to deploy the Calico API server.
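To confirm the operator itself came up and to track the rollout at a high level, these checks can help; the tigerastatus resource is created by the operator, so it only appears once the operator is running:

# Verify the Tigera operator pod is running
kubectl get pods -n tigera-operator

# High-level view of the Calico components as they become available
kubectl get tigerastatus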

After this you can run watch kubectl get pods -n calico-system

Eventually (relatively quickly, in fact) the pod statuses should change to Running, as shown in the image below.

A complete list of the Calico pods can be viewed with kubectl get pods -n calico-system

Now for exploring Calico network policies. What opens Calico up to extending network capabilities is that you can have namespaced network policies or global network policies. The difference is that a namespaced policy only covers the endpoints associated with its namespace, while a global policy is independent of any namespace and can be applied to any kind of endpoint. We'll see examples of both below.

For our demo, let's quickly launch an nginx image and verify our existing access:

kubectl create deployment nginx --image nginx

kubectl expose deployment nginx --port=80

kubectl run access --rm -ti --image=busybox /bin/sh

If we run wget -q --timeout=5 nginx -O -

We should see a response showing that nginx is up and running. Now let's test a site outside the cluster to check our access:

wget -q --timeout=5 google.com -O -

Let's take a peek at an example YAML file for a global deny-all network policy:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default
spec:
  selector: projectcalico.org/namespace not in {'kube-system', 'calico-system', 'calico-apiserver'}
  types:
  - Ingress
  - Egress

The schema isn't much different from a native Kubernetes NetworkPolicy: the API version differs, and the selector matches any endpoint whose namespace is not kube-system, calico-system, or calico-apiserver.
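For comparison, a default-deny written against the native Kubernetes NetworkPolicy API looks like this; it's only a sketch for contrast (namespaced, so it covers just the default namespace), not something we apply in this demo:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress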

I ran into issues installing calicoctl locally in the shell, so I applied this with kubectl instead.
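Because we installed the Calico API server earlier, projectcalico.org/v3 resources can be managed directly with kubectl. Assuming you saved the policy above as default-deny.yaml, applying it looks like:

kubectl apply -f default-deny.yaml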

If you're like me, you've likely left that busybox shell idle or exited out of it; in that event, run the kubectl run command from earlier to open the shell back up.
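That command again, for convenience:

kubectl run access --rm -ti --image=busybox /bin/sh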

We can now see that we can't reach our nginx pod from our busybox pod, because of our deny rule applied across the entire cluster.

But what if we need to open this up to egress from our busybox?

Simply put, the documentation guides us through this with the following YAML:

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-busy-box-egress
  namespace: default
spec:
  selector: run == 'access'
  types:
  - Egress
  egress:
  - action: Allow

Since I deployed the access pod in the default namespace, I've set the policy's namespace to default.

So notice two things here: we've opened up egress from the access pod, but we haven't allowed ingress to nginx. As a result, we can reach google.com outside the cluster, but not nginx inside the cluster.
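To see this from inside the access pod shell, repeat the two earlier checks:

wget -q --timeout=5 google.com -O -   # succeeds: egress from the access pod is allowed
wget -q --timeout=5 nginx -O -        # times out: nothing allows ingress to nginx yet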

To modify this we use the following YAML

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-nginx
  namespace: default
spec:
  selector: app == 'nginx'
  types:
  - Ingress
  ingress:
  - action: Allow
    source:
      selector: run == 'access'
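Apply it the same way as the previous policies (assuming you saved it as allow-nginx.yaml):

kubectl apply -f allow-nginx.yaml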

We can now jump back into our 'access' pod shell and try out the effectiveness of our policies.

Our busybox pod should now be able to reach our nginx pod. I ran into some issues here: I was still able to connect to google.com, but kept getting a download timeout from the nginx pod.
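If you hit something similar, one thing worth verifying is that the pod labels actually match the policy selectors; kubectl create deployment and kubectl run add the app and run labels these policies rely on:

# Confirm the labels the selectors expect: app=nginx and run=access
kubectl get pods -n default --show-labels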

This just scratches the surface of Kubernetes network policies, and specifically of the extensions Calico provides.

Summary

Network policies are your pillar for controlling ingress and egress, and you'll incorporate them into your cluster security operations. Knowing the landscape of available extensions that can assist helps you make architectural decisions about your strategy. Consider the native capabilities as well: each CNI, whether native to the CSP like Azure CNI or brought in yourself, has pros and cons, so do your due diligence, because networking is at the core of your operations.