
Introduction
If your organization stores images in a repository, chances are it’s hosted in a cloud-based solution. Nearly every cloud service provider has an offering; in Azure this is Azure Container Registry (ACR), which stores your Docker images and OCI artifacts in a centralized location. If you pair the registry with Defender for Cloud, you also gain detection when an image contains vulnerabilities. But detection is only one side of the coin. Your developers, like most teams, have a finite amount of time to remediate vulnerabilities, and automation is an enabler. This post covers Tasks inside Azure Container Registry and the use of “Continuous Patching” for your container images.
Getting Started
For those who are new to Azure and want to follow along, a couple of items to call out: running this demo will cost you some money. The cost is small if you follow the instructions and delete the infrastructure afterwards.
We will need the following items deployed.
- Azure Container Registry (Standard or Premium SKU)
- Images Pushed Into Repository
- Continuous Patching Method (Selection via JSON)
- Public Preview CLI enabled for this feature (I’ll walk through this)
If you don’t have any of these resources, the following will get you up and running. Note that the Standard SKU used in this demo does not support private connections restricted to allow-listed client IPs, so the registry itself will be publicly accessible behind Entra ID authentication. In production you’ll want a private registry.
We start inside Azure Cloud Shell or your terminal of choice, ideally on Linux.
az login --use-device-code
# This prompts for a device code; after you authenticate, it lists the subscriptions you have access to.
After you’ve authenticated, we’ll create a resource group and the registry.
az group create -l westus -n acr-registry-west
After this is created we can now provision our Azure Container Registry.
az acr create --name registrydemo000 \
--resource-group acr-registry-west \
--sku Standard \
--location westus
Once this is created you’ll have an ACR registry you can log in to. Navigate to your registry in the portal, then Settings -> Access Keys, and enable the login for your Admin user; this can be disabled or deleted entirely afterwards.
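If you prefer to stay in the CLI rather than the portal, the admin credentials can also be pulled with `az acr credential show`. A sketch, assuming the registry name from the earlier `az acr create` step and guarded so it degrades gracefully on machines without the az CLI:

```shell
# Retrieve the admin credentials without the portal (requires the admin
# user to be enabled first). Falls back to an empty JSON object if the
# az CLI is unavailable or the call fails.
if command -v az >/dev/null 2>&1; then
  creds=$(az acr credential show --name registrydemo000 \
    --query "{user:username, pass:passwords[0].value}" -o json || echo "{}")
else
  creds="{}"  # az not installed; placeholder output
fi
echo "$creds"
```

The `--query` JMESPath expression trims the output down to just the username and the first of the two passwords.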

With your username, password, and login server in hand, you can log in from the shell with the following command.
echo "<password>" | docker login <server> -u <user> --password-stdin

Now we install the required extension for our registry.
az extension add -n acrcssc
I already have this installed; if you don’t, the CLI will print a warning that this extension is in preview as it installs.
Configuring the Patching
The file that defines patching is structured in JSON, as shown below. To break down the conventions: you can specify repositories by name, their tags, and enablement, along with the tag convention.
Tag convention has two options, incremental or floating, as shown in the diagram below: incremental appends “-1” to the image tag and floating appends “patched”.
It should be noted that incremental (the default) is ideal for environments where auditability and rollbacks are critical, since each new patch is clearly identified with a unique tag.
Floating is ideal if you prefer a single pointer to the latest patch for your CI/CD pipelines. It reduces complexity by removing the need to update references in downstream applications for each patch, but sacrifices strict versioning, making rollbacks difficult.
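The naming behavior of the two conventions can be sketched as a toy shell function. This is purely illustrative of the tag scheme described above, not how ACR implements it:

```shell
# Incremental: append "-1" to an unpatched tag, then bump the numeric
# suffix on each subsequent patch (v1 -> v1-1 -> v1-2 ...).
next_incremental() {
  tag="$1"
  base="${tag%-*}"
  suffix="${tag##*-}"
  case "$suffix" in
    ''|*[!0-9]*) echo "${tag}-1" ;;              # no numeric suffix yet
    *) echo "${base}-$((suffix + 1))" ;;         # bump existing suffix
  esac
}

# Floating: always reuse a single "patched" tag as a moving pointer.
floating_tag() {
  echo "patched"
}

first=$(next_incremental "v1")      # v1-1
second=$(next_incremental "$first") # v1-2
```

With floating, downstream consumers always pull the same tag; with incremental, each patch produces a distinct, auditable tag.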

# continuouspatching.json
{
  "version": "v1",
  "tag-convention": "incremental",
  "repositories": [{
    "repository": "nginx",
    "tags": ["v1"],
    "enabled": true
  }]
}
Inside this JSON you can also use the wildcard * as the tag value if you want to patch all tags.
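For example, a hypothetical configuration that patches every tag in the nginx repository swaps the explicit tag list for the wildcard:

```json
{
  "version": "v1",
  "tag-convention": "incremental",
  "repositories": [{
    "repository": "nginx",
    "tags": ["*"],
    "enabled": true
  }]
}
```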
Once you’re done with your specific configuration you’ll create a workflow in your ACR.
# Note the schedule is in days: minimum "1d", maximum "30d"
az acr supply-chain workflow create -r <registryname> -g <resourcegroupname> -t continuouspatchv1 --config <JSONfilepath> --schedule <number of days> --dry-run
If this succeeds and you’d like to kick off the run, change `--dry-run` to `--run-immediately`.
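Filling in the placeholders from this walkthrough, the full command might look like the sketch below. The registry name, resource group, config path, and 7-day schedule are example values, not requirements; the command string is assembled and echoed so it can be reviewed before you actually run it:

```shell
# Assemble the workflow-creation command with example values substituted
# for the placeholders; echo it for review rather than executing it.
cmd="az acr supply-chain workflow create \
 -r registrydemo000 \
 -g acr-registry-west \
 -t continuouspatchv1 \
 --config ./continuouspatching.json \
 --schedule 7d \
 --run-immediately"
echo "$cmd"
```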
Running the task
To show the existing policies, you can run the following after the workflow is created; you should see three policies.
az acr supply-chain workflow show -r sugariest -g acr-registry-west --type continuouspatchv1

I’ve added a couple images to the registry as shown in the image below and added my own file for this.

You can also check the status of runs from the command line after a run succeeds, or when it fails, to see what changes happened.

A patched image, nginx:v1-1 (incremental), has succeeded, while no patched image was produced for my second image, the “hello world” image. The status output will tell you when a patch was skipped because the image is vulnerability-free.
If we switch over to the portal we can also see what each run looks like; this is where the billing aspect of $0.0001 per second of task runtime comes in.

Additionally, the vulnerabilities that were found are populated by Defender for Cloud in our Recommendations.

This shows our old image, still in the repository, as having two CVEs associated with it and a fix status of “Fix Available”. Taking a step back, we can also see Defender for Cloud picked up our clean image that has been patched.

Notice in the image above that the referenced image tag is v1-1, reflecting the incremental patch that was applied.
Summary
Automating this task can greatly reduce the time your developers spend chasing down each OS vulnerability. Many approaches to this kind of solution exist, but having it natively integrated into your container registry is a big deal. The combination of Copacetic providing upstream patching based on results from Trivy lets you get the most up-to-date image relatively quickly, and adding Defender for Cloud gives your security teams and developers visibility into which images are vulnerable and in what context. If you followed along with this post and don’t want to incur charges, make sure you delete the resource group and resources used here.
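Cleanup can be done in one command, since deleting the resource group removes the registry, its images, and the patching workflow with it. A sketch using the group name from this walkthrough, guarded so the snippet is safe to paste on machines without the az CLI:

```shell
# Tear down the demo to stop incurring charges: deleting the resource
# group removes the registry and everything inside it. --no-wait returns
# immediately instead of blocking on the deletion.
if command -v az >/dev/null 2>&1; then
  az group delete --name acr-registry-west --yes --no-wait || true
  cleanup="delete requested"
else
  cleanup="az CLI not found; nothing to delete"
fi
echo "$cleanup"
```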