Artifact Registry VEX in GCP

SLSA Build Level 3 (from an image built on GCP)

Introduction

Vulnerability Exploitability eXchange (VEX) is a communication format used to share detailed information about the exploitability of vulnerabilities in software products. VEX documents provide essential details about vulnerabilities, focusing on whether they are exploitable in the specific context of the software or environment in which they are found. Given the evolving software supply chain ecosystem, this exchange lets developers and security operations know which build artifacts carry associated vulnerabilities. Just as images are scanned prior to commits to reduce vulnerabilities and attack surface, the organization's visibility into those vulnerabilities should also be measured.

In Google Cloud Platform, Artifact Registry serves as a repository management system that houses your build artifacts, whether they come from Google Cloud Build or from other sources such as Dockerfiles that produce the artifact you plan to deploy to services like Cloud Run or Google Kubernetes Engine. Notably, while building out Docker images I noticed that VEX has been added to the end user's view, which is a differentiator from a hyperscaler visibility perspective; managed container registries typically offer premium features such as container image scanning and other full-fledged enterprise capabilities.

Getting Started

First we need a Google Cloud project; if you haven't created one, feel free to create a project if you'd like to follow along, with the following pre-requisites in place (a command-line snippet for enabling the required APIs follows the list).

  • Artifact Registry (API Enabled)
  • Dockerfile (Application Image Definition) / Can use the repository I’ve used
  • Scanning Enabled on Artifact Registry ($0.29 per image scanned)
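
If you prefer the command line, here is a minimal sketch that enables the required APIs, assuming your project is already selected with gcloud config set project:

gcloud services enable artifactregistry.googleapis.com
gcloud services enable containerscanning.googleapis.com
gcloud services enable cloudbuild.googleapis.com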

In the GCP portal, if we navigate to Artifact Registry we see the following view when it is empty.

Now we'll need a repository to house the image we're going to build and scan. Use the Create Repository button and follow the default settings shown in the image; you can select a region closer to your geographical area.
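
You can also script the repository creation with gcloud; in this sketch the repository name my-repo and the region us-central1 are placeholders, so substitute your own values.

gcloud artifacts repositories create my-repo \
    --repository-format=docker \
    --location=us-central1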

If you have a repository like the one I'm using and don't intend to use Docker Hub to upload the files, navigate to Cloud Build -> Repositories -> Link Repositories to set up authentication with your source control management system (GitHub, GitLab). (This step is optional: since we are using Cloud Build to build our Dockerfile, the build will create a gcr.io entry in our Artifact Registry.)

Once you have successfully imported the connection your repository view should look like this.

Navigate to Cloud Shell in the terminal and clone our repository, or a repository of your own that you'd like to build with Cloud Build.

git clone https://github.com/sn0rlaxlife/rodrigtech-blog.git && cd rodrigtech-blog/groq-chat

Run an ls command to list the contents.

These are the files we are working with. I've updated the Dockerfile, which you can use as a reference: it is a multi-stage build whose runtime stage is a Chainguard distroless image, and it assigns the end user the UID 1000.

FROM python:3.10-slim AS build

# Set the working directory
WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# Copy the dependency manifest first so the install layer is cached
COPY requirements.txt .

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . /app

# Stage 2: Run the application on a minimal image
FROM cgr.dev/chainguard/python:latest

# Set the working directory
WORKDIR /app

# Copy only the necessary files from the build stage
COPY --from=build /app /app

# Run the application as a non-root user (UID/GID 1000)
USER 1000:1000

# Expose the port the app runs on
EXPOSE 8000

# Define the entry point for the application
CMD ["chainlit", "run", "app.py", "-w"]

Then we simply run 'gcloud builds submit --tag gcr.io/<project-id>/<build-name>:<version> .'

gcloud builds submit --tag gcr.io/project-id/chainlit:0.01 .

Once this completes, we can see our build in the Cloud Build UI portal.

Notice I had a few failed builds related to the structure of the Dockerfile, but your build should show as successful.
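
If you prefer the terminal over the portal, the same status is visible from Cloud Shell:

# Show the most recent builds and their status (SUCCESS/FAILURE)
gcloud builds list --limit=5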

We can now move back to our Artifact Registry, where you'll see the following.

Select gcr.io; this holds our image, labelled chainlit by the build command we ran earlier.

After we open up our image we can see the tag we pushed as 0.01, and notice that the far right-hand side states None found for vulnerabilities.

Now notice this also gives us a warning that it failed to identify a known supported OS in the image, which can happen with minimal or distroless base images that the scanner does not recognize.
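
You can also pull the scan findings from the command line; a minimal sketch, assuming the gcr.io image path from our earlier build (substitute your project ID):

# Describe the image and include any package vulnerability occurrences
gcloud container images describe gcr.io/<project-id>/chainlit:0.01 \
    --show-package-vulnerability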

Notice on the far right end we have the Vulnerability Exchange status with five categories (a sketch of attaching your own VEX statement follows the list):

  • Affected – the product or component is affected by the vulnerability.
  • Fixed – the vulnerability has been fixed in the product or component.
  • Not Affected – the product is not affected by the vulnerability; the vulnerable code may be present, but due to specific configuration, environment, or usage it is not exploitable in the current context.
  • Under Investigation – the vendor is still determining the exploitability or impact of the vulnerability.
  • Unspecified – while this isn't explicit in CISA guidance, this covers anything that doesn't meet the above criteria.
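
To move a finding into one of these states yourself, you can author a VEX statement and attach it to the image. The sketch below assumes the OpenVEX format and the gcloud artifacts vulnerabilities load-vex command described in the GCP docs at the time of writing; the CVE, digest, and product identifier are placeholders, so verify the exact flags with gcloud artifacts vulnerabilities --help.

# Write a minimal OpenVEX document marking one CVE as not affected
cat > vex.json <<'EOF'
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/chainlit-001",
  "author": "rodrigtech",
  "timestamp": "2024-01-01T00:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2024-0000" },
      "products": [ { "@id": "pkg:oci/chainlit@sha256%3A<digest>" } ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_present"
    }
  ]
}
EOF

# Upload the VEX statement against our image
gcloud artifacts vulnerabilities load-vex \
    --uri=gcr.io/<project-id>/chainlit:0.01 \
    --source=vex.json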

The larger picture

While vulnerability scanning before pushing to production in a CI/CD pipeline has long been standard practice for most security teams, the visibility available in Google Cloud Platform extends to other areas such as Supply-chain Levels for Software Artifacts (SLSA), which is defined in four levels and is also part of the GCP estate.

Inside Cloud Build you can see the insights annotated as shown here, in a blade on the right-hand side.

It's important to note that SLSA Build Level 3 represents the following:

  • The source and build platform meet specific standards to guarantee the auditability of the source and the integrity of the provenance, respectively.

It's of note, from slsa.dev, that SLSA Build Level 3 provides much stronger protections against tampering than earlier levels by preventing specific classes of threats, such as cross-build contamination. Whereas the vulnerability findings give developers and secops visibility into what can be fixed or mitigated, the SLSA level provides integrity guarantees.
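
To inspect the provenance behind that SLSA badge from the command line, the docs describe a --show-provenance flag on the image describe command; a minimal sketch, assuming your gcr.io domain is backed by Artifact Registry (check gcloud artifacts docker images describe --help for your gcloud version):

# Print the SLSA build provenance recorded for the image
gcloud artifacts docker images describe gcr.io/<project-id>/chainlit:0.01 \
    --show-provenance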

Additionally, as transparency becomes paramount in the software supply chain, you can also attach a Software Bill of Materials (SBOM) to your build; the GCP documentation annotates this with a generation command.

gcloud artifacts sbom export --uri=URI
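
For example, against the image we built earlier (the project ID is a placeholder, and depending on your gcloud version the URI may need to reference the image by digest rather than by tag):

gcloud artifacts sbom export --uri=gcr.io/<project-id>/chainlit:0.01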

You can then list where the SBOM is stored by running the following command.

gcloud artifacts sbom list

This places the artifact analysis .json file in a storage bucket, as shown below.
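
If you want a local copy, you can pull the file down with gsutil; the bucket and object names come from the list output above, so the path here is a placeholder.

gsutil cp gs://<sbom-bucket>/<sbom-file>.json .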

Without revealing too many details about the SBOM, a snapshot from the generated file is shown below in an IDE.

You can also generate your own SBOM and upload it to centralize your supply chain visibility, as sketched below.
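
A minimal sketch of that upload path, assuming the gcloud artifacts sbom load command from the GCP docs; the file name and flags are placeholders, so verify them with gcloud artifacts sbom load --help for your gcloud version.

# Attach an externally generated SBOM file to the image
gcloud artifacts sbom load \
    --source=my-sbom.spdx.json \
    --uri=gcr.io/<project-id>/chainlit:0.01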

Summary

Google Cloud Platform provides a whole suite of tools to assist software development and delivery with security in mind, and the increased focus on software supply chain security is native to the ecosystem. This is going to accelerate many organizations that use the platform for its ease of use, and I could see other hyperscalers such as Microsoft Azure and AWS following suit. If you are building software and looking to expand the tools that give your developers visibility, take a look at the native features of Cloud Build and Artifact Registry.