Paralus: Secure Kubernetes Access in AKS

Introduction

Paralus is a CNCF sandbox project I first came across in a series of use-case sessions at the Cloud Native Security Conference a good while back, and it struck me as deserving more attention: many organizations struggle to provide secure remote access to clusters without running up the cost of extra VPNs. Paralus gives you a central control plane for managing access to multiple clusters across providers, with the ability to reach those clusters privately and securely through a console. In this post we’re going to take a closer look at implementing the tool and what it can do.

Requirements for Paralus

Azure Account

Domain (I’m purchasing an additional domain for this walkthrough, but don’t feel obligated – you can also test locally on a KinD cluster)

Helm

Quickstart

I’ve provisioned an AKS cluster running v1.27 in the Central US region with a single node pool, which is how I typically keep costs low for quick experiments. Once you have cluster access, run the following.
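For reference, here is a rough Azure CLI sketch of how a similar cluster could be provisioned. The resource group and cluster names are placeholders of my own, and you can pin a 1.27 release with --kubernetes-version if you want to match exactly.

# Create a resource group and a single-node AKS cluster for the demo
az group create --name paralus-rg --location centralus
az aks create --resource-group paralus-rg --name paralus-aks --node-count 1 --generate-ssh-keys

# Fetch credentials so kubectl talks to the new cluster
az aks get-credentials --resource-group paralus-rg --name paralus-aks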

helm repo add paralus https://paralus.github.io/helm-charts
helm repo update paralus

Note that in the next command we provide the domain we’re going to point at the console via the fqdn.domain parameter; the example values file uses chartexample.com, which we replace with our own domain.

helm install myrelease paralus/ztka \
  -f https://raw.githubusercontent.com/paralus/helm-charts/main/examples/values.dev-generic.yaml \
  --set fqdn.domain="paranoiasec.com" \
  -n paralus \
  --create-namespace

Since I’ve replaced chartexample.com with the new domain, that’s what gets populated; mind you, the admin password still has to be extracted from the kubectl logs with the command and flags shown further down.

At your domain provider, you’ll point the DNS records for the domain at the external IP of the service exposed by the chart.

If you’re running a watch command such as ‘watch kubectl get pods -n paralus’, the output should look like this, with all pods running as desired.

One issue I’ve run into with other cluster configurations is being able to access the dashboard but not being able to import a cluster into it.

Since we are going to use our custom domain, we’ll provision a DNS zone resource in Azure and add its NS records to our domain registrar’s DNS settings.

Include the domain when creating the DNS zone; the NS records for the zone should then look like this.
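If you prefer the Azure CLI to the portal, a sketch of creating the zone and reading back its name servers might look like this (the resource group name and paranoiasec.com are just my values):

# Create a public DNS zone for the domain Paralus will serve
az network dns zone create --resource-group paralus-rg --name paranoiasec.com

# Show the Azure name servers to copy into the registrar's NS settings
az network dns zone show --resource-group paralus-rg --name paranoiasec.com --query nameServers --output table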

We put these name servers into the DNS settings of the domain we’ve registered. These settings may vary by registrar, but to show you how this looks on my end, here’s a screenshot of the input at my registrar.

The registrar will note that the update takes time to refresh, but this is typically a quick turnaround. Now we can navigate back to our cluster and grab the load balancer IP so we can add the A records to our DNS zone in Azure.

kubectl get svc -n paralus

We will take the EXTERNAL-IP from that output and use it for the A records in our DNS zone, as shown in the image below.
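The same A record can also be added from the CLI; a sketch for the console hostname is below, and you’d repeat it for any other hostnames your values file expects (check the image and your chart values for the full list):

# Point console.<domain> at the EXTERNAL-IP of the LoadBalancer service above
az network dns record-set a add-record \
  --resource-group paralus-rg \
  --zone-name paranoiasec.com \
  --record-set-name console \
  --ipv4-address <EXTERNAL-IP>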

Retrieve the default admin password (which you’ll reset on first login) by running the following kubectl logs command with the flags shown.

kubectl logs -f --namespace paralus $(kubectl get pods --namespace paralus -l app.kubernetes.io/name='paralus' -o jsonpath='{ .items[0].metadata.name }') initialize | grep 'Org Admin default password:'

Once we reset the password we’re in the console and can start creating a project prior to onboarding the cluster. Before that, let’s implement SSL.

Installing Cert-Manager

Cert-manager is a popular tool that allows us to add certificates and certificate issuers as a resource type in our cluster.

Run the following command to start the installation

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml

After the CRDs and other resources are installed, you can run kubectl get all -n cert-manager.

Your output should look similar to the image above.
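One optional step before creating issuers: cert-manager’s webhook needs to be up before it can admit Issuer or ClusterIssuer objects, so a short wait avoids transient webhook errors. A minimal sketch:

kubectl wait --for=condition=Available deployment/cert-manager-webhook -n cert-manager --timeout=120s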

Deploying ClusterIssuer and Certificate Objects

The Paralus release we installed via Helm sits in the paralus namespace, but since we are creating a ClusterIssuer, it will work across the entire cluster, not just one namespace.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: <email-address>@<domain>.com
    privateKeySecretRef:
      name: letsencrypt-prod
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          class: contour
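Save this as something like clusterissuer.yaml (the filename is arbitrary) and apply it before moving on; the second command just confirms the issuer is registered:

kubectl apply -f clusterissuer.yaml
kubectl get clusterissuer letsencrypt-prod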

Now let’s create our certificate via the YAML below; you can use nano cert.yaml to create the file.


apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: consoleparalus
  namespace: paralus
spec:
  commonName: console.paralusdemo.com
  dnsNames:
  - console.paralusdemo.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  secretName: consoleparalus

Ensure the dnsNames and commonName match your own domain; in my case I’ve used console.paranoiasec.com.

Apply this to the cluster with kubectl apply -f cert.yaml, then we can run the following.

kubectl get certificate --all-namespaces

Once the certificate is issued, a secret is created; with the command above we can see this in the output.
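If the certificate sits in a not-ready state, these are the commands I’d reach for to check on it and confirm the secret exists (using the names from the manifest above):

kubectl describe certificate consoleparalus -n paralus
kubectl get secret consoleparalus -n paralus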

Now we have to update the annotations on the HTTPProxy resource.

Let’s find our resource by running the following command.

kubectl get httpproxy -n paralus

Our certificate was created for the console, so that’s the resource we’ll focus on. Let’s run the following to edit the manifest and add the annotations that are needed.

kubectl edit httpproxy console -n paralus

Update the annotations portion of the manifest by adding the following

cert-manager.io/cluster-issuer: letsencrypt-prod
kubernetes.io/tls-acme: "true"

Then we update the virtualhost section:

virtualhost:
  fqdn: console.<domain>.com
  tls:
    secretName: consoleparalus
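Put together, the relevant portion of the edited HTTPProxy ends up looking roughly like the sketch below; the routes and other fields generated by the Helm chart stay as they are, so treat this as an outline rather than the full manifest.

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: console
  namespace: paralus
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/tls-acme: "true"
spec:
  virtualhost:
    fqdn: console.<domain>.com
    tls:
      secretName: consoleparalus
  # ...routes and other chart-generated fields unchanged...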

If all goes well, after you hit :wq! to save the file the output should show the edit applied to the console HTTPProxy.

Now let’s go back to our console and navigate to https://console.<domain>.com

Our certificate is valid and the console is now served over SSL. Let’s explore Paralus further and dive into the portal.

Select Go to Project to navigate to our existing project (projects act as segmentation for access). It will contain the following items: + Download Kubeconfig and + New Cluster.

Select + New Cluster

Select your environment and input your cluster name and a description. For this walkthrough we are adding AKS; as you can see, other platforms are supported as well.

We will select Continue; as you can see, you can also select Proxy Configuration if needed.

Download the Bootstrap YAML manifest and apply it to your cluster. Feel free to review the manifest and its inputs; this is what gives Paralus access to and visibility into our cluster.
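A minimal sketch of that step, assuming the download is saved as bootstrap.yaml (the filename is arbitrary, and the namespace the agent lands in may differ between versions, so check the manifest itself):

kubectl apply -f bootstrap.yaml

# Watch the relay agent come up (namespace per the bootstrap manifest)
kubectl get pods -n paralus-system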

We can see the resources we are provisioning: a service account along with cluster roles and the bindings that tie them to our cluster.

Navigating back to the console, let’s take a peek at the audit logs.

This lists any actions taken against the imported clusters; notice it captures commands in an immutable fashion along with API logs.

It also captures system logs and allows you to export them for quick review within your organization.

We can also integrate an IdP for SSO by navigating to the IdP page, as shown in the image below.

We can also restrict how kubectl is used through the console, including disabling browser-based access and setting session durations.

Paralus applies RBAC in line with the Zero Trust principle of verifying explicitly; you can use the pre-defined roles shown on the Roles page.

If you want to provision console access for another user, you can enter their info and group assignment fairly simply via the UI.

Paralus also provides a command-line tool for accessing clusters and the console; you can find it by navigating to Tools.

You can also revoke a user’s kubeconfig as well as generate a new one if needed (mind you, I’m accessing this as a super user).

Each user’s profile shows their assigned projects (with the clusters that fall under each project, so you can separate prod and dev, for example), with different permissions and roles assigned to restrict access.

I’ve provisioned Managed Grafana for visuals to monitor Paralus’ resource consumption, as shown below, to help with planning which nodes to place this workload on.

Summary

Paralus offers a plethora of capabilities any organization can leverage for granular, secure access to multiple clusters, without the heavy burden of provisioning external VPN clients or standing up private endpoints in bulk with complex architectures just so developers can reach their clusters. When I first saw this project at CNCF I instantly felt it could fill a void in privileged access management. I’ve also heard of a similar offering from Teleport that addresses secure Kubernetes API access for developers, and I’ll post on that as well.