Wazuh on Kubernetes

Wazuh is an open-source XDR and SIEM platform with cloud workload protection. In this blog post we cover deploying Wazuh's resources onto a Kubernetes cluster. To follow along, start by cloning the repo; note that I'm hosting this on AKS.

git clone https://github.com/wazuh/wazuh-kubernetes.git -b v4.5.1 --depth=1
cd wazuh-kubernetes
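
For orientation, here is roughly how the parts of the repo we touch in this post are laid out (other files omitted; check the repo itself for the full tree):

wazuh-kubernetes/
├── envs/
│   ├── eks/            # kustomize overlay for EKS clusters
│   └── local-env/      # kustomize overlay for everything else (used here)
└── wazuh/
    └── certs/
        ├── dashboard_http/    # generate_certs.sh for the dashboard TLS certificate
        └── indexer_cluster/   # generate_certs.sh for the indexer node certificates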

For EKS clusters, you will use the eks directory under the envs folder with kustomize. Since I'm not on EKS, we'll use the local-env overlay instead; before deploying, we'll check the cluster's StorageClass to make sure we're good to go.
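
If the StorageClass check comes back without a default class, you can mark one as default before deploying. This is only a generic sketch; the class name managed-csi is an AKS example, so substitute whatever kubectl get sc actually lists in your cluster:

kubectl patch storageclass managed-csi -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'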

Generating certificates for our dashboard and indexer

Inside the cloned repo, head to wazuh-kubernetes/wazuh/certs, which contains two folders: dashboard_http and indexer_cluster.

In dashboard_http, run the ./generate_certs.sh script to automate certificate creation; to see what it does, you can cat generate_certs.sh.

Then navigate to indexer_cluster and run its generate_certs.sh script as well.
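
Putting both steps together, from the repo root the sequence looks roughly like this:

cd wazuh/certs/dashboard_http && ./generate_certs.sh
cd ../indexer_cluster && ./generate_certs.sh
cd ../../..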

Our directories should now be populated with the files shown in the image below.

kubectl get sc #check we have a running sc
kubectl apply -k envs/local-env
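
Before listing services, it's worth watching the pods until the indexer, manager, and dashboard pods all report Running:

kubectl get pods -n wazuh -w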

Once the deployment is up and running, we can run the following command to list the services in the wazuh namespace.

kubectl get svc -n wazuh

As we can see, the dashboard service is of type LoadBalancer and has an external IP assigned, which we can now browse to.
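
To pull the external IP straight from the service, something like the following works; the service name dashboard is an assumption here, so use whichever name kubectl get svc -n wazuh showed for the dashboard:

kubectl get svc dashboard -n wazuh -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

Then browse to https://<external-ip>.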

The browser warns about the connection because the self-signed SSL/TLS certificate isn't trusted; hit Advanced and proceed, and the login page should load and look like the image below.

Log in with the username admin and the password SuperPassword.

Our dashboard should look like this; keep in mind that this cluster is itself hosting the Wazuh server.

Now we can install agents by selecting the Add agent link and running through the wizard.

Before that, in our shell we can run kubectl get nodes -o wide to check which OS image the nodes are running.

That tells us which package we'll need, so on the agent configuration page shown in the next screenshot we select Ubuntu.

Now navigate back to the terminal and run the following command to access the underlying node.

kubectl debug node/aks-system-22086191-vmss000000 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0

This opens a privileged debug pod with access to our node; since the node isn't publicly reachable, this only gives us temporary access for the install.

Now run the following to drop into the underlying host's filesystem.

chroot /host

curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg

echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list

We ran into a couple of obstacles here: you'll need to edit /var/ossec/etc/ossec.conf, where you'll see a MANAGER_IP placeholder. Replace it with the service listening on port 1514, which in my case was named wazuh-worker.
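
One way to make that edit and then bring the agent up, assuming the freshly installed config still contains the MANAGER_IP placeholder and that systemd is reachable from the chroot:

sed -i 's/MANAGER_IP/wazuh-worker/' /var/ossec/etc/ossec.conf
systemctl daemon-reload
systemctl enable wazuh-agent
systemctl start wazuh-agent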

Now make sure to leave the privileged session by running exit.

Then clean up the debug pod:

kubectl delete pod node-<values>

After this, navigating back to our dashboard, let's go to Modules -> Security Events.

I've loaded some of the sample data to show off the out-of-the-box visuals; as you can see, grouping gives a quick view into notable attacks.

If you want to map compliance frameworks against your agents/systems, you can navigate to Modules for visualizations of NIST 800-53.

It even comes with threat intelligence from the MITRE ATT&CK framework, broken down by category.

Summary

This just scratches the surface of what Wazuh is capable of as an open-source XDR/SIEM and cloud security management system. I wanted to get it up and running in a public cloud to understand its use cases and capabilities further, and it was a welcome surprise that it extends to cloud security and maps configuration deviations in Google Cloud Platform, while the agent sensor can also sit on VMs in other CSPs. The items used in this tutorial include an AKS cluster launched via Terraform with private IP addresses; these resources incur costs, so delete them when you're done to avoid surprise charges.
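
Since the cluster here was launched via Terraform, the simplest cleanup is to tear it down from the directory holding your Terraform state (or delete the resource group from the Azure portal):

terraform destroy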