Introduction
Azure Kubernetes Service, while continuing to add capabilities, keeps implementing more native security controls, and recently announced support for signed images, leveraging the open-source project Ratify, through a feature known as Image Integrity. This is not only a step forward for first-party native capabilities but also a guardrail that extends Azure Policy (OPA Gatekeeper) to verify explicitly that only vetted images from a trusted source are deployed. As always, the disclaimer: this is in Preview and you should not run it in production until it goes GA. If you've been following along with my blog posts, I've highlighted how Oracle Cloud offers a similar capability through first-party Key Management Service integration, and I can see this becoming an extension of Key Vault use with Notary in the future.
Requirements
- Azure CLI
- AKS-Preview CLI Extension version 0.5.96 or later
- AKS Policy Enabled on AKS Cluster
- AKS Cluster enabled with OIDC Issuer
- The EnableImageIntegrityPreview and AKS-AzurePolicyExternalData feature flags registered on the Azure subscription, which we will do in the next section
Registering Features
As a first step, once you've authenticated either via Cloud Shell or the Azure CLI on your local machine, run the following commands.
```bash
# Register the EnableImageIntegrityPreview feature flag
az feature register --namespace "Microsoft.ContainerService" --name "EnableImageIntegrityPreview"

# Register the AKS-AzurePolicyExternalData feature flag
az feature register --namespace "Microsoft.ContainerService" --name "AKS-AzurePolicyExternalData"
```
Registration will take some time, so make sure the state reads "Registered" when you run the following commands.
```bash
# Verify the EnableImageIntegrityPreview feature flag registration status
az feature show --namespace "Microsoft.ContainerService" --name "EnableImageIntegrityPreview"

# Verify the AKS-AzurePolicyExternalData feature flag registration status
az feature show --namespace "Microsoft.ContainerService" --name "AKS-AzurePolicyExternalData"
```
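If you just want the state without the full JSON output, you can narrow it down with a JMESPath query:

```bash
# Print only the registration state (should eventually read "Registered")
az feature show --namespace "Microsoft.ContainerService" --name "EnableImageIntegrityPreview" --query properties.state -o tsv
```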
After both flags show as registered, refresh the registration of the Microsoft.ContainerService resource provider so the flags propagate.
```bash
az provider register --namespace Microsoft.ContainerService
```
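You can confirm the provider refresh has completed the same way:

```bash
# Should read "Registered" once the provider refresh finishes
az provider show --namespace Microsoft.ContainerService --query registrationState -o tsv
```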
For the next commands, what you run will differ slightly depending on whether you are on Windows or Linux. If you're gathering the subscription ID from the portal, it sits under Properties on the Resource ID.
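If you prefer the CLI over the portal, you can pull the subscription ID directly and set both variables consumed by the scope export below:

```bash
# Set the variables used in the next block
export SUBSCRIPTION=$(az account show --query id -o tsv)
export RESOURCE_GROUP=<Azure Resource Group Name>
```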
This code assigns the policy for trusted images, essentially telling the cluster to enforce it; after that we will create the remediation task.
```bash
export SCOPE="/subscriptions/${SUBSCRIPTION}/resourceGroups/${RESOURCE_GROUP}"
export LOCATION=$(az group show -n ${RESOURCE_GROUP} --query location -o tsv)

az policy assignment create --name 'deploy-trustedimages' --policy-set-definition 'af28bf8b-c669-4dd3-9137-1e68fdc61bd6' --display-name 'Audit deployment with unsigned container images' --scope ${SCOPE} --mi-system-assigned --role Contributor --identity-scope ${SCOPE} --location ${LOCATION}
```
If you run into issues like I did with the --mi-system-assigned parameter, here is the visual of making the same policy assignment to the cluster through the portal.
For Remediation in the wizard, ensure you select "Create a Remediation Task".
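If you'd rather skip the wizard, a remediation task can also be created from the CLI. A minimal sketch, assuming the assignment name from above; since this assignment is an initiative (policy set), you may also need --definition-reference-id to pick the specific member policy to remediate:

```bash
# Sketch: create a remediation task for the 'deploy-trustedimages' assignment
az policy remediation create \
  --name remediate-trustedimages \
  --resource-group ${RESOURCE_GROUP} \
  --policy-assignment deploy-trustedimages
```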
After we create this, we will set up Ratify via CRDs leveraging kubectl.
Create and configure Azure Workload Identity
Configure the environment variables below. I'm running on Windows via PowerShell, so my setup might deviate slightly from the following.
```bash
export IDENTITY_NAME=<Identity Name>
export GROUP_NAME=<Azure Resource Group Name>
export SUBSCRIPTION_ID=<Azure Subscription ID>
export TENANT_ID=<Azure Tenant ID>
export AKS_NAME=<Azure Kubernetes Service Name>
export ACR_NAME=<Azure Container Registry Name> # used later for the AcrPull role assignment
export RATIFY_NAMESPACE=<Namespace where Ratify is deployed, defaults to "gatekeeper-system">
```
Then we will run the following:
```bash
az identity create --name "${IDENTITY_NAME}" --resource-group "${GROUP_NAME}" --location "${LOCATION}" --subscription "${SUBSCRIPTION_ID}"

export IDENTITY_OBJECT_ID="$(az identity show --name "${IDENTITY_NAME}" --resource-group "${GROUP_NAME}" --query 'principalId' -o tsv)"
export IDENTITY_CLIENT_ID="$(az identity show --name "${IDENTITY_NAME}" --resource-group "${GROUP_NAME}" --query 'clientId' -o tsv)"
export AKS_OIDC_ISSUER="$(az aks show -n ${AKS_NAME} -g ${GROUP_NAME} --query "oidcIssuerProfile.issuerUrl" -o tsv)"
```
As I'm running PowerShell this might look different if you're on Linux, so I've included the image above to help those running on Windows.
Configure workload identity with ACR
Configure the user-assigned managed identity and grant the AcrPull role to the workload identity.
```bash
az role assignment create \
  --assignee-object-id ${IDENTITY_OBJECT_ID} \
  --role acrpull \
  --scope /subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP_NAME}/providers/Microsoft.ContainerRegistry/registries/${ACR_NAME}
```
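To double-check the grant landed, list the assignments for the identity at the registry scope:

```bash
# Verify the AcrPull assignment exists on the registry
az role assignment list \
  --assignee ${IDENTITY_OBJECT_ID} \
  --scope /subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP_NAME}/providers/Microsoft.ContainerRegistry/registries/${ACR_NAME} \
  -o table
```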
We will use the federated identity credential later in our tenant; create it by running the following:
```bash
az identity federated-credential create \
  --name ratify-federated-credential \
  --identity-name "${IDENTITY_NAME}" \
  --resource-group "${GROUP_NAME}" \
  --issuer "${AKS_OIDC_ISSUER}" \
  --subject system:serviceaccount:"${RATIFY_NAMESPACE}":"ratify-admin"
```
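You can verify the credential was created on the identity before moving on:

```bash
# List federated credentials attached to the managed identity
az identity federated-credential list \
  --identity-name "${IDENTITY_NAME}" \
  --resource-group "${GROUP_NAME}" \
  -o table
```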
Verification Configurations
First we have to ensure that Ratify is installed, which we'll do with Helm using the commands below. One thing the documentation leaves out: if you run into issues with the ./charts path, run it as ratify/ratify instead, and make sure KEY_NAME is the key created in the desired key vault (it is generated as an RSA 2048 key).
```bash
helm repo add ratify https://deislabs.github.io/ratify

# Create a namespace for Ratify
kubectl create ns ratify

# Install Ratify
helm install ratify \
  ./charts/ratify --atomic \
  --namespace ratify --create-namespace \
  --set featureFlags.RATIFY_CERT_ROTATION=true \
  --set akvCertConfig.enabled=true
```
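For the Key Vault integration itself, the chart also needs to know which vault and key to use. Here is a sketch based on Ratify's documented chart values at the time of writing; treat the exact value names (akvCertConfig.vaultURI, akvCertConfig.cert1Name, and so on) as assumptions to verify against your chart version:

```bash
# Sketch: wire the chart to your key vault and workload identity;
# value names are taken from Ratify's docs and may differ per chart version
export VAULT_URI=$(az keyvault show --name <Key Vault Name> --query "properties.vaultUri" -o tsv)

helm install ratify ratify/ratify --atomic \
  --namespace ${RATIFY_NAMESPACE} --create-namespace \
  --set featureFlags.RATIFY_CERT_ROTATION=true \
  --set akvCertConfig.enabled=true \
  --set akvCertConfig.vaultURI=${VAULT_URI} \
  --set akvCertConfig.cert1Name=${KEY_NAME} \
  --set akvCertConfig.tenantId=${TENANT_ID} \
  --set oras.authProviders.azureWorkloadIdentityEnabled=true \
  --set azureWorkloadIdentity.clientId=${IDENTITY_CLIENT_ID}
```

Note the namespace passed here must match the one used in the federated credential subject earlier, since that is where the ratify-admin service account lives.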
Create a VerifyConfig file named verify-config.yaml and copy in the YAML below.
```yaml
apiVersion: config.ratify.deislabs.io/v1beta1
kind: CertificateStore
metadata:
  name: certstore-inline
spec:
  provider: inline
  parameters:
    value: |
      -----BEGIN CERTIFICATE-----
      MIIDQzCCAiugAwIBAgIUDxHQ9JxxmnrLWTA5rAtIZCzY8mMwDQYJKoZIhvcNAQEL
      BQAwKTEPMA0GA1UECgwGUmF0aWZ5MRYwFAYDVQQDDA1SYXRpZnkgU2FtcGxlMB4X
      DTIzMDYyOTA1MjgzMloXDTMzMDYyNjA1MjgzMlowKTEPMA0GA1UECgwGUmF0aWZ5
      MRYwFAYDVQQDDA1SYXRpZnkgU2FtcGxlMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
      MIIBCgKCAQEAshmsL2VM9ojhgTVUUuEsZro9jfI27VKZJ4naWSHJihmOki7IoZS8
      3/3ATpkE1lGbduJ77M9UxQbEW1PnESB0bWtMQtjIbser3mFCn15yz4nBXiTIu/K4
      FYv6HVdc6/cds3jgfEFNw/8RVMBUGNUiSEWa1lV1zDM2v/8GekUr6SNvMyqtY8oo
      ItwxfUvlhgMNlLgd96mVnnPVLmPkCmXFN9iBMhSce6sn6P9oDIB+pr1ZpE4F5bwa
      gRBg2tWN3Tz9H/z2a51Xbn7hCT5OLBRlkorHJl2HKKRoXz1hBgR8xOL+zRySH9Qo
      3yx6WvluYDNfVbCREzKJf9fFiQeVe0EJOwIDAQABo2MwYTAdBgNVHQ4EFgQUKzci
      EKCDwPBn4I1YZ+sDdnxEir4wHwYDVR0jBBgwFoAUKzciEKCDwPBn4I1YZ+sDdnxE
      ir4wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAgQwDQYJKoZIhvcNAQEL
      BQADggEBAGh6duwc1MvV+PUYvIkDfgj158KtYX+bv4PmcV/aemQUoArqM1ECYFjt
      BlBVmTRJA0lijU5I0oZje80zW7P8M8pra0BM6x3cPnh/oZGrsuMizd4h5b5TnwuJ
      hRvKFFUVeHn9kORbyQwRQ5SpL8cRGyYp+T6ncEmo0jdIOM5dgfdhwHgb+i3TejcF
      90sUs65zovUjv1wa11SqOdu12cCj/MYp+H8j2lpaLL2t0cbFJlBY6DNJgxr5qync
      cz8gbXrZmNbzC7W5QK5J7fcx6tlffOpt5cm427f9NiK2tira50HU7gC3HJkbiSTp
      Xw10iXXMZzSbQ0/Hj2BF4B40WfAkgRg=
      -----END CERTIFICATE-----
---
apiVersion: config.ratify.deislabs.io/v1beta1
kind: Store
metadata:
  name: store-oras
spec:
  name: oras
---
apiVersion: config.ratify.deislabs.io/v1beta1
kind: Verifier
metadata:
  name: verifier-notary-inline
spec:
  name: notation
  artifactTypes: application/vnd.cncf.notary.signature
  parameters:
    verificationCertStores: # certificates for validating signatures
      certs: # name of the trustStore
        - certstore-inline # name of the certificate store CRD to include in this trustStore
    trustPolicyDoc: # policy language that indicates which identities are trusted to produce artifacts
      version: "1.0"
      trustPolicies:
        - name: default
          registryScopes:
            - "*"
          signatureVerification:
            level: strict
          trustStores:
            - ca:certs
          trustedIdentities:
            - "*"
```
Now we can try to run our signed image, if you followed the documentation listed.
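If you don't have a signed image of your own handy, the Ratify quickstart publishes a signed sample image; assuming it is still available at this path, it should be admitted once everything is in place:

```bash
# Signed sample image from the Ratify quickstart (path may change over time)
kubectl run demo --image=ghcr.io/deislabs/ratify/notary-image:signed
```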
We also have to run a few more commands to get to the final output. Similar to the earlier policy assignment, we can run the following to assign the custom policy to our cluster.
```bash
custom_policy=$(curl -L https://raw.githubusercontent.com/deislabs/ratify/main/library/default/customazurepolicy.json)
definition_name="ratify-default-custom-policy"
scope=$(az aks show -g "${GROUP_NAME}" -n "${AKS_NAME}" --query id -o tsv)

definition_id=$(az policy definition create --name "${definition_name}" --rules "$(echo "${custom_policy}" | jq .policyRule)" --params "$(echo "${custom_policy}" | jq .parameters)" --mode "Microsoft.Kubernetes.Data" --query id -o tsv)

assignment_id=$(az policy assignment create --policy "${definition_id}" --name "${definition_name}" --scope "${scope}" --query id -o tsv)

echo "Please wait for the policy assignment with id ${assignment_id} to take effect"
echo "It often requires 15 min"
echo "You can run 'kubectl get constraintTemplate ratifyverification' to verify the policy takes effect"
```
This will take some time to apply, and as my run shows, you might hit a WARNING along the way (no worries).
Now we can run the following command:

```bash
kubectl run nginx --image=nginx
```
We should get a violation of the policy since the image is unsigned; however, upon creation I ran into some issues.
In our Azure Policies we can see the validation showing non-compliance.
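If you'd rather check compliance from the CLI than the portal, policy insights can list the non-compliant resources; a sketch using the cluster scope gathered earlier:

```bash
# List non-compliant policy states for the AKS cluster resource
az policy state list \
  --resource "${scope}" \
  --filter "complianceState eq 'NonCompliant'" \
  -o table
```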
Our remediation task kicks off when the policy is violated, and we can see it is also in an evaluating state at this point in time.
Summary
Using Ratify in combination with Notation, we are able to make signed images a deployment criterion for our cluster. Depending on your CI/CD workflow and automation, the setup can be cumbersome, with some items requiring input from the end user, and I ran into various issues that weren't necessarily mentioned in the documentation, notably around Azure Container Registry login credentials and the tag on the custom policy that gets pushed out. However, you could always go the easier route by leveraging a tool such as Kyverno for enforcement, which expresses policy in YAML manifests without writing Rego. Azure Kubernetes Service is innovating with more open-source commitments to end users, and making this addition part of the native capabilities is a win for customers trying to stay as close as possible to what works with the platform. I will say I learned quite a lot troubleshooting the issues that arose, and this is definitely a tool I can see being implemented for production environments once it moves to GA.