AKS Running Istio and Prometheus

Terraform, by popularity and adoption, is by far the go-to Infrastructure as Code tool for most infrastructure work, largely because it lets you move from one CSP to another with a consistent workflow. In this blog I will use a few providers, such as kubectl and helm, to install Istio on AKS. I've been experimenting with more advanced features, using more of the functionality beyond the native Azure resources. If you'd like to follow along, the prerequisites are listed below.

Pre-requisites

  • kubectl has to be installed locally for this to function properly
  • Azure CLI
  • Terraform installed on your client (shell); if you're running Windows, use WSL
  • Azure Subscription (this will incur some charges, so use at your discretion)

Since we are doing this from the CLI, we'll first start with the basics of authenticating:

az login

This will pop open a browser where you'll authenticate. After this is completed, we are ready to get started.

I'm working out of this repo, which I've forked and modified:

https://github.com/sn0rlaxlife/istio-aks-example

git clone https://github.com/sn0rlaxlife/istio-aks-example.git

Additionally, if you want to use istioctl, you'll need to run the following commands to get it onto your local machine:

curl -L https://istio.io/downloadIstio | sh -

If you want a specific version, you can run the following instead:

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.17.0 TARGET_ARCH=x86_64 sh -

Then we move into the folder

cd istio-1.17.0

Now add the istioctl client to your PATH (Linux or macOS):

export PATH=$PWD/bin:$PATH
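To confirm istioctl is being picked up from your PATH, you can run a quick client-only version check (this doesn't need a cluster yet):

istioctl version --remote=false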

Okay, if I lost you, no worries. Remember, this is about getting up and running with a working configuration, so the setup might feel a little cumbersome, but it's well worth the effort.

Navigating the Repo

The folder structure is as follows

istio-aks-example
├── istio-on-aks
└── multicluster-istio-on-aks

cd istio-aks-example
cd istio-on-aks

If we then run an ls command, we should see the following output.

If you want to change the node VM size, edit the tfvars file.

By default this will use the Standard_DC4s_v2 VM size.
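If you're not sure which variable controls the node size, a quick grep of the sample tfvars file (assuming the default value appears there) will point you at the line to edit:

grep -n "Standard_DC4s_v2" tfvars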

cp tfvars .tfvars
terraform init -upgrade
terraform apply -var-file=.tfvars

If you get this message, everything has been initialized and we are ready to proceed to the apply step. Mind you, it's good practice to run terraform plan so you can catch anything unexpected; for these purposes I'm skipping a detailed review of it.

terraform plan
AKS Details

You can also run terraform validate if you'd like; however, I will apply now since the plan output matches my expectations.

terraform apply -var-file=.tfvars

I did get a warning about an argument being deprecated; I will address this at a later time, after this post.

Depending on how your subscription is set up, you might have a quota limit on specific node sizes; if this is the case, you have to request a quota increase.

If you run into this issue, check the Quotas page to see what limits are set on your subscription. I had to modify the node size in my tfvars because of this.
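You can also check quota usage from the CLI with az vm list-usage; the region below is a placeholder, so swap in the one you're deploying to:

az vm list-usage --location eastus -o table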

After quite some time spent installing the Prometheus Helm chart, we are now up and running. Let's take a peek at our cluster.

Let’s grab our credentials then authenticate to our cluster

az aks get-credentials --resource-group istio-aks --name istio-aks
kubectl get pods -A

Injecting Sidecar

To inject the sidecar, we will run the following command to apply the injection label to our default namespace:

kubectl label namespace default istio-injection=enabled
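You can confirm the label landed on the namespace before deploying anything:

kubectl get namespace default --show-labels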

I took a quick look at the Azure Portal to see the resources in our resource group.

We can see the sidecar status by running the following command:

istioctl proxy-status

We will deploy the following example workload:

kubectl apply -f - <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: echoserver
  template:
    metadata:
      labels:
        run: echoserver
    spec:
      containers:
      - name: echoserver
        image: gcr.io/google_containers/echoserver:1.10
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: echoserver
EOF
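You can also confirm the sidecar was injected by checking the container count on the pod; with injection enabled it should report 2/2 (the echoserver container plus istio-proxy):

kubectl get pods -n default -l run=echoserver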

Now let’s rerun the following command

istioctl proxy-status

We can now see the sidecar we just added in the output.

Configure Gateway

Our Terraform provisioned an istio-ingress gateway; the gateway is an Envoy pod at the border of the service mesh.

We can view its listeners by running the following command:

istioctl proxy-config listener -n istio-ingress $(kubectl get pod -n istio-ingress -oname| head -n 1)

The gateway is exposed with a Kubernetes LoadBalancer service; to grab its public IP, run the following command.

kubectl get service -n istio-ingress istio-ingress -o=jsonpath='{.status.loadBalancer.ingress[0].ip}'

Without any Gateway bound to it, the connection will be refused, so let's configure the Gateway resource:

kubectl apply -f - <<EOF
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-ingressgateway
  namespace: istio-ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - '*'
EOF
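A quick check against the Istio CRD confirms the Gateway was created:

kubectl get gateways.networking.istio.io -n istio-ingress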

Your output should look similar to the image above.

Now we will run the following commands to show the gateway listening on port 80; it will still serve an HTTP 404 since nothing is routed yet.

istioctl proxy-config listener -n istio-ingress $(kubectl get pod -n istio-ingress -oname| head -n 1)
curl -v $(kubectl get service -n istio-ingress istio-ingress -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')

Configuring the Virtual Service

For the next part of this post, we will configure the VirtualService; this API is referenced as virtualservices.networking.istio.io. It describes how a request is routed to our service.

Since we deployed the echoserver with a ClusterIP service in the default namespace, we will route requests to it with the following code:

kubectl apply -f - <<EOF
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: echoserver
  namespace: default
spec:
  hosts:
    - "*"
  gateways:
    - istio-ingress/istio-ingressgateway
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        host: "echoserver.default.svc.cluster.local"
        port:
          number: 8080
EOF
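As with the Gateway, a quick query confirms the VirtualService exists and shows which gateways and hosts it is bound to:

kubectl get virtualservices.networking.istio.io echoserver -n default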

Then we will check whether we can reach it by running a curl command against the gateway, which routes to the echoserver pod.

curl -v $(kubectl get service -n istio-ingress istio-ingress -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')

We are now in business with the status 200!!

Okay, back to business. Let's expose a health check probe on port 15021 with the following code:

kubectl apply -f - <<EOF
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: healthcheck
  namespace: istio-ingress
spec:
  hosts:
    - "*"
  gateways:
    - istio-ingress/istio-ingressgateway
  http:
  - match:
    - uri:
        prefix: "/probe"
    rewrite:
        uri: "/healthz/ready"
    route:
    - destination:
        host: "istio-ingress.istio-ingress.svc.cluster.local"
        port:
          number: 15021
EOF
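With the rewrite in place, hitting the /probe path on the gateway's public IP should return the ingress gateway's readiness response (reusing the same IP lookup as before):

curl -v $(kubectl get service -n istio-ingress istio-ingress -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')/probe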

PeerAuthentications to enforce mTLS

We can enforce mTLS by referencing the API peerauthentications.security.istio.io:

kubectl apply -f - <<EOF
---
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF
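Because the policy lives in istio-system, it acts as the mesh-wide default; you can list it to confirm STRICT mode is in place:

kubectl get peerauthentication -n istio-system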

Two things to consider: this enforces mTLS at the receiving end of the TLS connection. We now have to work with DestinationRules to enforce it at the client level.

Destination Rules

By applying the following DestinationRule, mTLS is enforced whenever a connection is started by any client. Note: the Istio ingress gateway will not be able to connect to a backend without a sidecar.

Further documentation for working with this is located at the link below:

https://istio.io/latest/docs/reference/config/networking/destination-rule/

kubectl apply -f - <<EOF
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: force-client-mtls
  namespace: istio-system
spec:
  host: "*"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF
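After applying the DestinationRule, it's worth re-running the earlier curl through the gateway to confirm traffic still flows with mTLS enforced end to end:

curl -v $(kubectl get service -n istio-ingress istio-ingress -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')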

Verifying Encryption

Let's start verifying our mTLS by running tcpdump from an AKS node; this uses https://github.com/alexei-led/nsenter

wget https://raw.githubusercontent.com/alexei-led/nsenter/master/nsenter-node.sh
bash nsenter-node.sh $(kubectl get pod -l run=echoserver -o jsonpath='{.items[0].spec.nodeName}')
tcpdump -i eth0 -n -c 15 -X port 8080

The nsenter script launches a privileged shell on the node; from there, run the tcpdump command.

The garbled numbers and letters in the output are the intercepted traffic. It is not clear text, which is exactly why encryption is important when running in Kubernetes.

Once you're done analyzing the output, ensure you run exit.

You'll see the following output in your terminal; then we will move on to observability.

Observability in Prometheus

In the files we've cloned, prometheus.tf installs the Helm chart for Prometheus and Grafana, and creates the configuration to scrape the Istio sidecars.
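Before port-forwarding, you can confirm the chart came up cleanly by listing the pods in the prometheus namespace (the same namespace the port-forward below targets):

kubectl get pods -n prometheus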

Let’s access the dashboard

kubectl -n prometheus port-forward svc/prometheus-grafana 3000:80

Now let's navigate to localhost:

http://127.0.0.1:3000

If all goes well your screen should look like this

The default username is admin and the default password is prom-operator. (Ensure this is changed!)

Navigate to the side menu that looks like the image below; we are moving to the Import page.

If you have trouble finding this, no worries; the URL will be http://127.0.0.1:3000/dashboard/import

Import the following dashboards for Istio visibility:

https://grafana.com/grafana/dashboards/7645-istio-control-plane-dashboard/
https://grafana.com/grafana/dashboards/7639-istio-mesh-dashboard/
https://grafana.com/grafana/dashboards/7636-istio-service-dashboard/
https://grafana.com/grafana/dashboards/7630-istio-workload-dashboard/
https://grafana.com/grafana/dashboards/13277-istio-wasm-extension-dashboard/

Then select Import; the image below shows what to expect.

After those are imported, we can move on to Authorization Policies.

To see what the Istio dashboards look like, here is the Istio Service Dashboard I used.

Authorization Policies

Authorization policies allow or deny requests depending on conditions; if you want to explore them further, see https://istio.io/latest/docs/reference/config/security/conditions/

We are going to use an example based on nip.io, a simple wildcard DNS service for IP addresses.

kubectl apply -f - <<EOF
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: probe
  namespace: istio-ingress
spec:
  action: ALLOW
  rules:
  - to:
    - operation:
        hosts: ["*.nip.io"]
  selector:
    matchLabels:
      istio: ingressgateway
EOF

Now let's run the following curl command against the raw IP; with the ALLOW policy matching only *.nip.io hosts, this request should be denied.

curl -v $(kubectl get service -n istio-ingress istio-ingress -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')/ 

Now let's run the following command, appending .nip.io so the host matches the allowed rule:

curl -v $(kubectl get service -n istio-ingress istio-ingress -o=jsonpath='{.status.loadBalancer.ingress[0].ip}').nip.io/

Now let's run the port-forward again and navigate back to our dashboard to see whether our requests are populated.

kubectl -n prometheus port-forward svc/prometheus-grafana 3000:80

Okay, we've done quite a lot in this post. If you've stuck with me, hopefully your takeaway is that you can quickly get up and running with Terraform and extend the capabilities of Kubernetes.

Finally, ensure you destroy your resources by running terraform destroy.
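A sketch of the teardown, assuming the same var file we used for the apply:

terraform destroy -var-file=.tfvars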