Jenkins can be deployed and used in many ways; for this blog post we are going to deploy the Jenkins Operator on Kubernetes with the help of Helm. If you're curious how this works in your own Kubernetes cluster, this blog is for you, and to help you replicate it I've also linked the documentation covering the installation process. For starters, always remember that cloud resources cost money, so if you are following along, be wise and delete your resources after this demo.
Pre-requisites
- A Kubernetes cluster (hosted on any CSP, locally, or on bare metal)
- Helm
For the configuration I've selected for this iteration, I'm running Google Kubernetes Engine v1.27 on three nodes for this demo.
For reference, the latest operator version we are running as of 5/23/2023 is v0.7.x.
https://jenkinsci.github.io/kubernetes-operator/docs/getting-started/latest/
First, we have to be running a Kubernetes version above 1.17 (hopefully you are running well above this, but I always like to get down to brass tacks).
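If you're not sure what version your cluster is on, kubectl will report both the client and the server (cluster) version:
kubectl version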
Let's start by creating the namespace that we will deploy Jenkins into.
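The namespace name needs to match what we pass to Helm in a moment, so let's call it jenkins:
kubectl create namespace jenkins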
Let's now run the next commands to add the chart repository and install the operator with Helm:
helm repo add jenkins https://raw.githubusercontent.com/jenkinsci/kubernetes-operator/master/chart
helm install jenkins jenkins/jenkins-operator -n jenkins
You'll be able to follow the pod creation process by running 'kubectl --namespace jenkins get pods -w'. Let this run for some time until the operator pod shows a Running status with 1/1 containers ready.
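If you'd rather block until the operator is ready instead of watching, kubectl wait can do that. The label selector below is an assumption about how the chart labels its pods; confirm it with 'kubectl get pods -n jenkins --show-labels' before relying on it.
# Assumed label selector; verify against your cluster first
kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=jenkins-operator -n jenkins --timeout=300s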
The install notes that Helm prints will help you retrieve the user/password information and show how to connect to Jenkins using port-forward.
Let's now move on to deploying Jenkins.
Type 'nano jenkins_instance.yaml' (or open your code editor/IDE of choice) and copy in the manifest below:
apiVersion: jenkins.io/v1alpha2
kind: Jenkins
metadata:
  name: jenkins
  namespace: jenkins
spec:
  configurationAsCode:
    configurations: []
    secret:
      name: ""
  groovyScripts:
    configurations: []
    secret:
      name: ""
  jenkinsAPISettings:
    authorizationStrategy: createUser
  master:
    disableCSRFProtection: false
    containers:
      - name: jenkins-master
        image: jenkins/jenkins:2.319.1-lts-alpine
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 12
          httpGet:
            path: /login
            port: http
            scheme: HTTP
          initialDelaySeconds: 100
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        readinessProbe:
          failureThreshold: 10
          httpGet:
            path: /login
            port: http
            scheme: HTTP
          initialDelaySeconds: 80
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 1500m
            memory: 3Gi
          requests:
            cpu: "1"
            memory: 500Mi
  seedJobs:
    - id: jenkins-operator
      targets: "cicd/jobs/*.jenkins"
      description: "Jenkins Operator repository"
      repositoryBranch: master
      repositoryUrl: https://github.com/jenkinsci/kubernetes-operator.git
We will then apply our manifest running our trusty command
kubectl apply -f jenkins_instance.yaml
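Once applied, the operator picks up the custom resource and creates the Jenkins master pod (named jenkins-<cr-name>, so jenkins-jenkins here). You can check on both with:
kubectl get jenkins -n jenkins
kubectl get pods -n jenkins -w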
To get the credentials, ensure you replace <cr-name> with the name of your Jenkins custom resource (here that's jenkins):
kubectl get secret jenkins-operator-credentials-<cr-name> -n jenkins -o 'jsonpath={.data.user}' | base64 -d
kubectl get secret jenkins-operator-credentials-<cr-name> -n jenkins -o 'jsonpath={.data.password}' | base64 -d
If you run into issues, I simply listed the secrets in the cluster to see what exists, in case I had missed anything. That showed me what I needed, so if you hit a snag here, no worries.
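A quick way to do that yourself is to list everything in the namespace and look for the credentials secret:
kubectl get secrets -n jenkins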
Now let’s run a kubectl port-forward
kubectl port-forward jenkins-jenkins 8080:8080 -n jenkins
If you are running in Google Cloud Platform like me, you can use Cloud Shell's web preview on port 8080 to launch it; if you are running the CLI on your local machine, it will be at http://localhost:8080.
If all goes well, as it should, your dashboard should look similar to this after the initial login.
While I wanted to run a more extensive pipeline in this blog post, I was encountering errors on the port-forward, with my pod consistently restarting. I pulled the logs to dig further, and I believe the issue is that I restricted API access on the backend that the pod needed in order to communicate with its persistent volume claim.
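If you hit something similar, the pod logs and events are the first place to look. The container name jenkins-master comes from the manifest above; the operator's own logs can be fetched by label, though the exact label depends on how the chart was installed, so treat that selector as an assumption.
kubectl logs jenkins-jenkins -c jenkins-master -n jenkins
kubectl describe pod jenkins-jenkins -n jenkins
# Assumed label selector; verify with kubectl get pods -n jenkins --show-labels
kubectl logs -n jenkins -l app.kubernetes.io/name=jenkins-operator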
Summary
Jenkins is a powerful tool with many ways to get up and running, and regardless of where you run your CI/CD server, you should think about how it is configured and accessed. Is it always reached via kubectl port-forward? In practice that is fine for demos, but realistically you'd want an ingress controller in front of it to route traffic to the pod and make it accessible (a rough sketch of that follows after the link below). Google Cloud Platform provides a really in-depth tutorial on running continuous deployment on GKE that can also get you more familiar with using Jenkins on Kubernetes.
https://cloud.google.com/kubernetes-engine/docs/archive/continuous-delivery-jenkins-kubernetes-engine
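Coming back to the ingress point: as a rough sketch only (assumptions: an ingress controller is already installed, and the HTTP service the operator creates is named jenkins-operator-http-jenkins for a CR named jenkins; confirm with 'kubectl get svc -n jenkins' first), something like this would route a hostname to the Jenkins UI:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins
  namespace: jenkins
spec:
  rules:
    - host: jenkins.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jenkins-operator-http-jenkins   # assumed service name, verify first
                port:
                  number: 8080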