Kargo by Akuity: CD of the Future

Introduction

Kargo is a new tool from Akuity that treats releases as stages rather than environments. It focuses on delivery of the artifacts produced by CI pipelines: in continuous delivery, the packaged artifact moves between areas (dev, uat, prod) as needed. While Argo CD revolutionized GitOps with its tailored sync-based approach, Kargo feels more lightweight and nimble with its shift from syncing environments to using freight, the available artifacts that are healthy enough to promote to our intended “stage”.

Getting Started

Kargo is new and undergoing massive, continuous change, so expect to run into issues; it is not yet meant or intended to be production-ready.

We will make use of GitHub, the Kargo CLI, kind, kubectl, and Argo CD.

To get started, we are going to run a quickstart script that installs Kargo and Argo CD in a local cluster running in kind.

Requirements

Kubernetes: v1.25.3

cert-manager: v1.11.5

Argo CD: v2.8.3

Now that that’s out of the way, let’s get our hands dirty, using the diagram below as our architecture.

helm inspect values \
  oci://ghcr.io/akuity/kargo-charts/kargo > ~/kargo-values.yaml

To review the chart’s values, run the command above and then cat ~/kargo-values.yaml.

If you want to customize any values prior to installation, start from this file and uncomment (remove the leading #) the settings you want to override.
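If you do go the custom-values route, a manual install would look roughly like the sketch below. This is an assumption on my part based on the standard Helm OCI install flow (the release name and kargo namespace are illustrative); it is not a step the quickstart performs for you.

# Hypothetical manual install using the customized values file
helm install kargo \
  oci://ghcr.io/akuity/kargo-charts/kargo \
  --namespace kargo \
  --create-namespace \
  --values ~/kargo-values.yaml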

Running the quickstart script provisions everything Kargo needs: it installs cert-manager, Argo CD, and Kargo.

curl -L https://raw.githubusercontent.com/akuity/kargo/main/hack/quickstart/kind.sh | sh

Run a kubectl get pods -A

We should have our Argo CD up and running along with cert-manager and Kargo.
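If any pods are still starting, a few kubectl wait calls will block until everything is ready. This is a convenience check of my own; the namespace names assume the quickstart defaults.

# Wait for each component to become ready before continuing (timeouts are arbitrary)
kubectl wait --for=condition=Ready pods --all --namespace cert-manager --timeout=300s
kubectl wait --for=condition=Ready pods --all --namespace argocd --timeout=300s
kubectl wait --for=condition=Ready pods --all --namespace kargo --timeout=300s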

The script exposes Argo CD on localhost:8443; log in with admin/admin.

Next, we are going to fork the repo at the following link into our own GitHub account (this will require a personal access token; we can tailor its permissions, no worries).

https://github.com/akuity/kargo-demo
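If you prefer the terminal over the web UI, the GitHub CLI can handle the fork as well. This is an optional alternative I’m adding here, assuming you have gh installed and authenticated.

# Optional: fork (and clone) the demo repo with the GitHub CLI
gh repo fork akuity/kargo-demo --clone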

Once you fork the repo, you’ll export the following:

export GITOPS_REPO_URL=<your repo URL, starting with https://>
export GITHUB_USERNAME=<github handle>
export GITHUB_PAT=<personal access token>

To get a GitHub access token, go to your account settings and select Developer Settings.

Select Fine-grained tokens (beta) to limit the token’s permissions to the repository; mine looks like the following settings.

I’m limiting the permissions to only our repository, as shown below; if you’re forking the repository, yours will look similar to the image below.

After the permissions are scoped, generate the token and store it somewhere safe, as it will be exported in the next step.

export GITHUB_PAT=<token generated> # (ensure that you delete after demo)
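As a quick sanity check that the token works before wiring it into a Kubernetes secret, you can hit the GitHub API with it. This check is my own addition, not part of the Kargo quickstart.

# Should return your GitHub user profile as JSON if the PAT is valid
curl -s -H "Authorization: Bearer $GITHUB_PAT" https://api.github.com/user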

Then apply the following manifests, which create our namespaces, the repository secret for Kargo and Argo CD, and the Argo CD Applications.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: kargo-demo-test
---
apiVersion: v1
kind: Namespace
metadata:
  name: kargo-demo-uat
---
apiVersion: v1
kind: Namespace
metadata:
  name: kargo-demo-prod
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: kargo-demo-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
  annotations:
    kargo.akuity.io/authorized-projects: kargo-demo
stringData:
  type: git
  project: default
  url: ${GITOPS_REPO_URL}
  username: ${GITHUB_USERNAME}
  password: ${GITHUB_PAT}
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kargo-demo-test
  namespace: argocd
  annotations:
    kargo.akuity.io/authorized-stage: kargo-demo:test
spec:
  project: default
  source:
    repoURL: ${GITOPS_REPO_URL}
    targetRevision: stage/test
    path: stages/test
  destination:
    server: https://kubernetes.default.svc
    namespace: kargo-demo-test
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kargo-demo-uat
  namespace: argocd
  annotations:
    kargo.akuity.io/authorized-stage: kargo-demo:uat
spec:
  project: default
  source:
    repoURL: ${GITOPS_REPO_URL}
    targetRevision: stage/uat
    path: stages/uat
  destination:
    server: https://kubernetes.default.svc
    namespace: kargo-demo-uat
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kargo-demo-prod
  namespace: argocd
  annotations:
    kargo.akuity.io/authorized-stage: kargo-demo:prod
spec:
  project: default
  source:
    repoURL: ${GITOPS_REPO_URL}
    targetRevision: stage/prod
    path: stages/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: kargo-demo-prod
EOF

In Argo CD we will now see our applications set up on the cluster.
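You can also confirm this from the terminal instead of the UI; the Application resources live in the argocd namespace, so plain kubectl works here (a small check of my own).

# List the three demo Applications created above
kubectl get applications --namespace argocd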

Next, if we wanted to use the Kargo CLI, we would navigate to the releases page and download the binary; since I’m not doing that yet, I’m applying the YAML below with kubectl so it works without the CLI.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: kargo-demo
  labels:
    kargo.akuity.io/project: "true"
---
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: test
  namespace: kargo-demo
spec:
  subscriptions:
    repos:
      images:
      - repoURL: nginx
        semverConstraint: ^1.24.0
  promotionMechanisms:
    gitRepoUpdates:
    - repoURL: ${GITOPS_REPO_URL}
      writeBranch: stage/test
      kustomize:
        images:
        - image: nginx
          path: stages/test
    argoCDAppUpdates:
    - appName: kargo-demo-test
      appNamespace: argocd
---
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: uat
  namespace: kargo-demo
spec:
  subscriptions:
    upstreamStages:
    - name: test
  promotionMechanisms:
    gitRepoUpdates:
    - repoURL: ${GITOPS_REPO_URL}
      writeBranch: stage/uat
      kustomize:
        images:
        - image: nginx
          path: stages/uat
    argoCDAppUpdates:
    - appName: kargo-demo-uat
      appNamespace: argocd
---
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: prod
  namespace: kargo-demo
spec:
  subscriptions:
    upstreamStages:
    - name: uat
  promotionMechanisms:
    gitRepoUpdates:
    - repoURL: ${GITOPS_REPO_URL}
      writeBranch: stage/prod
      kustomize:
        images:
        - image: nginx
          path: stages/prod
    argoCDAppUpdates:
    - appName: kargo-demo-prod
      appNamespace: argocd
EOF
kubectl get stages --namespace kargo-demo

We defined our stages as test, uat, and prod. Notice the current and freight columns in the output: think of freight as a set of references to one or more of our versioned artifacts (updates to your application’s container image, Kubernetes manifests, or Helm chart values).

kubectl get stage test --namespace kargo-demo --output jsonpath-as-json={.status}

We can see our freight has been identified, along with a tag, and the Argo CD apps report a Healthy status.

Save the ID of our available freight to an environment variable, as the documentation does, using the following.

export FREIGHT_ID=$(kubectl get stage test --namespace kargo-demo --output jsonpath={.status.availableFreight\[0\].id})
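As an aside, since I said I’d avoid the CLI where possible: a promotion is itself just a Kubernetes resource, so in principle you can create one directly, as sketched below. The exact spec fields here are my assumption from the CRDs this version of Kargo installs, so treat the kargo CLI flow in the next section as the reliable path.

# Rough sketch (assumed schema): promote the freight into the test stage by creating a Promotion resource
cat <<EOF | kubectl apply -f -
apiVersion: kargo.akuity.io/v1alpha1
kind: Promotion
metadata:
  name: test-${FREIGHT_ID}
  namespace: kargo-demo
spec:
  stage: test
  freight: ${FREIGHT_ID}
EOF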

Using the CLI

To get the Kargo CLI visit the release page linked https://github.com/akuity/kargo/releases/latest

sudo curl -L "<url for architecture>" -o /usr/local/bin/kargo
sudo chmod +x /usr/local/bin/kargo

kargo login https://localhost:8444 \
 --admin \
 --password admin \
 --insecure-skip-tls-verify
cat <<EOF | kargo apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: kargo-demo
  labels:
    kargo.akuity.io/project: "true"
---
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: test
  namespace: kargo-demo
spec:
  subscriptions:
    repos:
      images:
      - repoURL: nginx
        semverConstraint: ^1.24.0
  promotionMechanisms:
    gitRepoUpdates:
    - repoURL: ${GITOPS_REPO_URL}
      writeBranch: stage/test
      kustomize:
        images:
        - image: nginx
          path: stages/test
    argoCDAppUpdates:
    - appName: kargo-demo-test
      appNamespace: argocd
---
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: uat
  namespace: kargo-demo
spec:
  subscriptions:
    upstreamStages:
    - name: test
  promotionMechanisms:
    gitRepoUpdates:
    - repoURL: ${GITOPS_REPO_URL}
      writeBranch: stage/uat
      kustomize:
        images:
        - image: nginx
          path: stages/uat
    argoCDAppUpdates:
    - appName: kargo-demo-uat
      appNamespace: argocd
---
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: prod
  namespace: kargo-demo
spec:
  subscriptions:
    upstreamStages:
    - name: uat
  promotionMechanisms:
    gitRepoUpdates:
    - repoURL: ${GITOPS_REPO_URL}
      writeBranch: stage/prod
      kustomize:
        images:
        - image: nginx
          path: stages/prod
    argoCDAppUpdates:
    - appName: kargo-demo-prod
      appNamespace: argocd
EOF

The stages have now been updated with the manifests we applied using the Kargo CLI.

kargo get stages --project kargo-demo

Now go back to the Kargo UI at localhost:8444 and log in with the password admin.

Our demo should show up as “kargo-demo”, as seen in the image, once we select the project.

Next, we run the following command to gather our “freight”:

kargo get stage test --project kargo-demo --output jsonpath-as-json={.status}

As shown earlier, we now take the freight ID listed here and use the CLI to promote the freight.

kargo stage promote kargo-demo test --freight $FREIGHT_ID
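To see what the promotion actually did, or why it failed, you can inspect the Promotion resources it creates; this is an extra check of my own, not part of the quickstart.

# Each promote request creates a Promotion resource whose status records success or failure
kubectl get promotions --namespace kargo-demo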

So I ran into an issue and dug into it in the portal to get more information on the freight failure; it appears to be an authentication issue with GitHub.

This could be because, while writing this post, I had to recreate the personal access token after I created the cluster.

Had this not been in an error state, this step would have promoted our “freight”, the available artifact, into the stage.

After working with the CLI, I was able to get past the authentication issue by logging in again with docker login ghcr.io -u $GITHUB_USERNAME --password $GITHUB_PAT.

Now let’s open http://localhost:8081 to see the app running in the test stage.

Let’s run the same iteration for our uat stage, using the same commands as earlier with the stage name changed.

kargo get stage uat --project kargo-demo --output jsonpath-as-json={.status}
export FREIGHT_ID=$(kargo get stage uat --project kargo-demo --output jsonpath={.status.availableFreight\[0\].id})
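Then, mirroring the earlier test promotion (the command follows the same pattern, just targeting uat):

kargo stage promote kargo-demo uat --freight $FREIGHT_ID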

Do the same for prod, swapping uat for prod when capturing $FREIGHT_ID and promoting.
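For completeness, that works out to the following, with prod substituted in the same pattern:

export FREIGHT_ID=$(kargo get stage prod --project kargo-demo --output jsonpath={.status.availableFreight\[0\].id})
kargo stage promote kargo-demo prod --freight $FREIGHT_ID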

UAT is now also up and running on port 8082, and prod will be available on port 8083.
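A quick way to confirm all three from the terminal (the ports are those exposed by the quickstart’s kind configuration; this check is my own addition):

# Each stage of the demo app should respond on its own port
curl -s http://localhost:8081 | head -n 5   # test
curl -s http://localhost:8082 | head -n 5   # uat
curl -s http://localhost:8083 | head -n 5   # prod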

Now let’s navigate to the Kargo dashboard and see our freight visualized across each release stage.

Finally, we can tear everything down by deleting the kind cluster and unsetting the environment variables.

kind delete cluster --name kargo-quickstart
unset GITOPS_REPO_URL
unset GITHUB_PAT
unset GITHUB_USERNAME

Summary

Kargo is still in the early phases of development, but it shows promise in changing how continuous delivery is facilitated. I wouldn’t bet against Akuity: like Codefresh.io, they support Argo CD as a platform offering and have extensive experience in the GitOps space. The idea of shifting from environments to stages is a nuance that reflects how often artifacts pass through to their intended targets, so it’s a neat verbiage decision in that regard. I’ll stay tuned for more in this space, as GitOps continues to grow rapidly. Check out the project and docs here; I’ve used the quickstart for this post, but encountered some nuances outside the docs while showing how to use it.

https://kargo.akuity.io/