Deciphering Network Policies in K8s

It’s no secret that the pluggable nature of Kubernetes unlocks enormous potential. For instance, the Container Network Interface (CNI) is swappable, and which CNIs you can use often depends on the cloud service provider you’re running on if it’s a PaaS offering.

What does all that mean? Essentially, if your needs can’t be met by the out-of-the-box kubenet (the traditional option), you’ll likely start shopping around for CNIs that could work for you. (Keep in mind that NetworkPolicies are only enforced if your CNI actually supports them; kubenet on its own does not.)

Azure, as of right now, allows you to bring your own CNI, and I believe this feature is in GA:

https://learn.microsoft.com/en-us/azure/aks/use-byo-cni?tabs=azure-cli
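
For reference, the linked doc shows creating an AKS cluster with no CNI preinstalled by setting the network plugin to none (the resource group and cluster names below are just placeholders):

az aks create -g myResourceGroup -n myAKSCluster --network-plugin none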

It’s likely you’ll encounter many different variations, so don’t feel limited, but that’s a little out of scope for what I’ll cover today. Instead, we’ll demonstrate a NetworkPolicy on a playground Kubernetes cluster running 1.25.

---
apiVersion: networking.k8s.io/v1 # the API group/version we are "calling"
kind: NetworkPolicy # the type of resource we'd like
metadata:
  name: default-deny-egress # this can be whatever you want the policy to be called
  namespace: runner # let us define a ns to scope this policy
spec:
  podSelector: {} # an empty selector matches ALL pods in the namespace -- use wisely
  policyTypes:
  - Egress # no egress rules are listed, so all egress is denied

OK, so let’s break this down. We’ve defined our YAML configuration using the documentation from

https://kubernetes.io/docs/concepts/services-networking/network-policies/

Screen cap of output

So of course I had to create the namespace first, because it didn’t exist (it’s obviously important to have this in place prior to the apply command).
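
For reference, the sequence looks something like this (assuming the policy above is saved as default-deny-egress.yaml):

kubectl create namespace runner
kubectl apply -f default-deny-egress.yaml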

Now, this controls traffic as shown below. I used this illustration to explain further, but also to conceptualize the purpose of these policies.

So now let’s consider what is already in motion: we’ve isolated the namespace runner so its pods are denied all egress and can no longer speak to pods in other namespaces (or anything else outbound, for that matter).
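
A quick way to sanity-check the lockdown (hypothetical pod name, assuming a busybox image is available in your playground):

# any outbound attempt from a pod in runner should now fail,
# including the DNS lookup itself, since all egress is denied
kubectl run egress-test --rm -it --image=busybox -n runner \
  --restart=Never -- wget -qO- --timeout=2 http://example.com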

Let’s now create our ingress rule to specifically limit what is able to reach this namespace. Perhaps we’re separating data that is ingested from a database from the web server that sits in front of it.

Before we do this, let’s also add a label to the namespace for assignment purposes in the following policy. We’ll do that with:

kubectl edit ns runner

This will open the YAML in your editor. Under metadata: you’ll see a labels: key; add a new line beneath it:

  app: data

Now hit Esc, type :wq, and press Enter to save (assuming the default vi editor), then run the following to confirm the label took:

kubectl describe ns runner
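
Alternatively, you can skip the editor entirely and apply the same label in one shot:

kubectl label namespace runner app=data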

Next we’ll get the YAML file ready to go and apply it to our cluster with the trusty kubectl apply -f <file>:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-runner
  namespace: runner
spec:
  podSelector:
    matchLabels:
      app: granted # the pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: granted # only pods carrying this label may connect
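
One thing worth noting: a from.podSelector on its own only matches pods in the policy’s own namespace, and pods inside runner already have all egress denied by our first policy. If we want clients in other namespaces to reach these pods, we can combine the podSelector with a namespaceSelector that matches the app: data label we added above. A minimal sketch of that ingress block (when both selectors sit in the same from entry, they are ANDed together):

  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          app: data # the client's namespace must carry this label...
      podSelector:
        matchLabels:
          access: granted # ...and the client pod must carry this one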

Let’s also remember to create a pod in the namespace runner and expose it. Note the app=granted label, which is what the policy’s podSelector matches on:

kubectl run nginx --image=nginx -n runner -l app=granted --dry-run=client -o yaml > nginx.yaml
kubectl apply -f nginx.yaml 
kubectl expose pod nginx -n runner --port=80
kubectl get svc -A
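
To test this, we can spin up a throwaway client carrying the access=granted label and hit the service, then repeat without the label (hypothetical namespace and pod names; this assumes the namespaceSelector variant above so the client can live outside runner, where egress is denied):

kubectl create namespace client
kubectl label namespace client app=data

# a labeled client should get the nginx welcome page back...
kubectl run allowed --rm -it --image=busybox -n client -l access=granted \
  --restart=Never -- wget -qO- --timeout=2 http://nginx.runner.svc.cluster.local

# ...while an unlabeled client should simply time out
kubectl run denied --rm -it --image=busybox -n client \
  --restart=Never -- wget -qO- --timeout=2 http://nginx.runner.svc.cluster.local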

Now let’s run back to our control plane and check the logs to see if the service is receiving any requests.

Remember, we exposed the pod through a Service, which essentially created an endpoint for it.
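
A couple of quick commands for that check (the nginx access log shows each request that made it through, and the endpoints object confirms the Service is actually backed by our pod):

kubectl logs nginx -n runner
kubectl get endpoints nginx -n runner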

We can only get access from pods that match the labels we defined earlier; this is one way of limiting access. Another thought moving forward: if pods are talking to each other and data is being transmitted, how are we encrypting it? We will cover that in another post.