Project Lula is a tool written in Go by Defense Unicorns, an organization in the cloud native space supporting the public sector. It assists with auditing configuration by providing context at the command line when an expected input is not compliant, so you are aware of it, and each finding details the specific reference behind it. While the project is still in its early stages, I was able to test out the tool and want to extend the conversation to other practitioners in the field.
Getting Started
The repository to start using this tool is located at the following link.
https://github.com/defenseunicorns/lula
Requirements:
- A running Kubernetes cluster
- Go version 1.21.x
- kubectl installed
First, we start by cloning the repo with the command below.
git clone https://github.com/defenseunicorns/lula.git && cd lula
If we run an ‘ls’ command we can see the contents of the directory, since the command above also changes into it.
We now run the ‘make build’ command:
make build
If you run into issues, ensure you follow the official Go installation docs for a fresh install of Go 1.21.6:
https://go.dev/doc/install
The repository uses example docs to explain this further; let’s examine the YAML.
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: validation-test
  labels:
    foo: bar2
spec:
  containers:
    - image: nginx
      name: pods-simple-container
The manifest above represents the example of a pod failure that should kick back output on the CLI. The YAML below represents a validation pass; notice the only difference is the label value, foo: bar.
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: validation-test
  labels:
    foo: bar
spec:
  containers:
    - image: nginx
      name: pods-simple-container
Next, let’s investigate oscal-component.yaml under the hood to understand the syntax rules and how it operates.
We can see that the validation uses OPA with Rego code to check that the pod carries a label matching foo: bar.
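As a rough sketch of what that validation shape looks like (this is illustrative, not the exact policy from the repository; see demo/oscal-component.yaml for the real code), a Rego rule of this kind inspects the pod’s metadata for the expected label:

```rego
package validate

# Illustrative sketch only: succeeds when the inspected
# resource is a Pod carrying the label foo: bar.
validate {
	input.kind == "Pod"
	input.metadata.labels.foo == "bar"
}
```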
Let’s get back to testing out Lula. We are instructed to push a namespace to our cluster via namespace.yaml; this is already located in our directory, so don’t fret.
kubectl apply -f demo/namespace.yaml
Then we apply the failing pod with the invalid label and run Lula’s validate against the OSCAL component definition.
kubectl apply -f ./demo/pod.fail.yaml
./bin/lula validate -f ./demo/oscal-component.yaml
The expected output, as we see from the status, states Not Satisfied. This is also saved to a log; viewing the log shows more context for us in a readable format.
Notice that the saved output we are reviewing has the title, the remarks, and the results in a readable format. A unique part, at least from this view, is that it also captures the relevant evidence directly and summarizes the passing and failing resources.
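To give a sense of the shape involved (a hypothetical, heavily abbreviated sketch following the OSCAL assessment-results model, not the tool’s exact output; the values here are made up), the saved file is structured roughly like this:

```yaml
# Hypothetical, abbreviated sketch; field names follow the
# OSCAL assessment-results model, values are placeholders.
assessment-results:
  results:
    - title: Lula Validation Result
      remarks: Assessment results generated from lula validate
      findings:
        - target:
            status:
              state: not-satisfied
```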
Now, to satisfy this, we run the next command to apply a passing resource:
kubectl apply -f demo/pod.pass.yaml
Running our next command, ensure you follow the syntax below:
./bin/lula validate -f demo/oscal-component.yaml
At a high level, we can see the status now states “satisfied,” so we know we’ve remediated our previously failed status.
Opening the saved, timestamped assessment-results-01-20-2024 file, we can see the results have been updated and we are now running a passing resource.
Security Lens
Just starting to leverage this tool, I can see a few things it will likely achieve long-term. It would be an ideal companion for compliance checks in real time, but also for assisting with large cluster operations. The best part is that OPA is about as native as it gets to leverage, and with Rego rules it is extremely effective. I note this because it is what most tools running validation tests do in the backend, such as Kyverno, Azure Policy, and Kubescape to my knowledge; I’d imagine many more do as well, I just haven’t dug into their nuts and bolts.

Additionally, it’s always important to have pre-defined guard rails in your cluster, but also mechanisms that can block non-compliant resources outright from the native Kubernetes stack; the admission controller is typically the heavy hitter in this enforcement. As far as auditing, I’d also have these checks coincide with what my logging method is looking for, such as pods running inconsistent labels not aligned with my policies. I could see this being expanded further to supply relevant evidence alongside each finding; many tools give you a high-level view but make it more challenging to gather information at the command line, at least in my experience.
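To illustrate the enforcement side mentioned above (a minimal sketch, assuming Kyverno is installed in the cluster; the policy name and message are my own, not from this project), an admission-time policy could block pods that lack the foo: bar label outright:

```yaml
# Illustrative Kyverno policy: rejects Pods at admission
# unless they carry the label foo: bar.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-foo-bar-label
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-foo-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Pods must carry the label foo: bar"
        pattern:
          metadata:
            labels:
              foo: bar
```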
Summary
Defense Unicorns continues to innovate in the open-source landscape, and I think this project will be a great step forward for that innovation, even though it is inherently early (release v0.0.2). Check out the project at the following link: https://github.com/defenseunicorns/lula. I plan on checking out some of their other projects, such as Zarf. If you want me to cover any tools more extensively, particularly those related to security in Kubernetes, I’m including more breakdowns this year and simplifying the tools for a wider audience; feel free to reach out on LinkedIn.