Google Kubernetes Engine Up and Running in GCP

GKE

How to Get Started with Google Kubernetes Engine on GCP

Kubernetes is an open-source system for automating the management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Google Kubernetes Engine (GKE) is a hosted version of Kubernetes that runs on the Google Cloud Platform (GCP). In this blog post, we’ll show you how to get started with GKE on GCP. We’ll cover everything from creating a GCP project and enabling the Kubernetes API to setting up cloud storage and networking, creating a GKE cluster, deploying a sample app, and monitoring and managing your cluster.

Understanding Google Kubernetes Engine (GKE) and its Benefits.

Google Kubernetes Engine (GKE) is a managed container orchestration service that runs on the Google Cloud Platform (GCP). It enables users to deploy and manage containerized applications in a scalable and efficient way.

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.

GKE makes it easy to create and run Kubernetes clusters on GCP by providing a user-friendly interface and built-in integrations with other GCP services.

What are the Benefits of Using GKE?

There are many benefits of using GKE over other orchestration solutions, such as:

- Ease of use: GKE provides a simple user interface that makes it easy to deploy and manage containerized applications at scale.

- Flexibility: GKE supports both single-node and multi-node deployments, making it suitable for both development and production workloads.

- Scalability: GKE clusters can be easily scaled up or down to meet changing needs.

- Integration with other Google Cloud services: GKE integrates seamlessly with other Google Cloud services, such as Cloud Monitoring and Cloud Debugger, making it easy to set up end-to-end observability for your application.

Prerequisites for Setting Up GKE.

  • A project that can be used to deploy GKE (this is roughly equivalent to a subscription if you’re coming from Microsoft Azure). GCP operates on projects for resource isolation along with billing.
  • Enablement of the GKE API (this is straightforward in the services list).
  • Enablement of the Container Security API (used for the Security Posture feature of a GKE cluster).
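Both API prerequisites can also be satisfied from the command line. A minimal sketch, assuming the gcloud CLI is installed and `my-gke-project` stands in for your own project ID:

```shell
# Point gcloud at your project (the project ID is a placeholder).
gcloud config set project my-gke-project

# Enable the GKE API and the Container Security API.
gcloud services enable container.googleapis.com containersecurity.googleapis.com
```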

Creation of GKE Cluster in GCP

Assuming you’re familiar with GCP and have enabled the prerequisites, we will navigate to GKE; the selection should look like the image shown below.

We will select Create and start the UI wizard to choose our settings.

For those who want a truly hands-off approach, Google offers two modes of operation for GKE: Autopilot and Standard.

For this walkthrough we will be selecting Standard. If you want a comparison table that shows the differences between the two, follow this link:

https://cloud.google.com/kubernetes-engine/docs/resources/autopilot-standard-feature-comparison

Now that this is out of the way, let’s start creating our cluster with Standard as the chosen option.

We will have to select a name for our cluster along with a location. It’s always best to place the location close to your own geography, but feel free to pick whichever.

For the control plane, we can define which version of GKE we want, including the latest 1.26, which had not been released by other cloud providers at the time of this writing.

With our basics defined, we will then use the menu on the left side to start our node pool configuration.

For the node pool I’ve mostly kept the default values, but lowered the number of nodes (costs do add up when using Kubernetes), so be mindful if you’re testing this out with me step by step.

We also have a few more options in this area, such as the blue/green node upgrade strategy listed below. Make sure you understand the difference between these strategies before choosing; if you’ve studied for the Google Cloud Professional Architect exam, the differences are elaborated in a bit more depth there. Essentially, blue/green spins up a new node with the upgraded version while retaining the outdated node until you’re ready to move forward. While more costly, it allows for a simple rollback.
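The same strategy can be set on an existing node pool from the CLI. A hedged sketch, with the cluster, pool, zone, and soak duration all as placeholder values:

```shell
# Switch an existing node pool to the blue/green upgrade strategy.
# Names, zone, and duration below are placeholders for this walkthrough.
gcloud container node-pools update default-pool \
    --cluster my-gke-cluster \
    --zone us-east1-b \
    --enable-blue-green-upgrade \
    --node-pool-soak-duration=1800s
```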

I just selected a small N1-series machine for this cluster; as you work through this configuration, you can determine what size you’ll need based on your workloads.

As for network settings, we can see the options in the image below and change them accordingly; I’m leaving these at their defaults.

For security settings, I prefer setting strict access for each API. This approach can be tedious, but it is good practice, as over-privilege can quickly become consequential in Kubernetes.

I’ve also enabled the Shielded Nodes options, as shown below, to increase the security of the cluster.

You can additionally add metadata prior to creation, as shown below, to apply taints and labels at creation time.

At the cluster level, I’ve additionally added these security settings.

Once you’re done with the settings and understand your selections, hit Create.
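For reference, roughly the same configuration can be created from the CLI instead of the wizard. This is a sketch rather than a one-to-one copy of the selections above; the cluster name, zone, and machine type are placeholders:

```shell
# Create a small, single-node Standard cluster with Shielded Nodes options.
gcloud container clusters create my-gke-cluster \
    --zone us-east1-b \
    --machine-type n1-standard-2 \
    --num-nodes 1 \
    --shielded-secure-boot \
    --shielded-integrity-monitoring
```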

Now we have to connect to the GKE cluster. This can be done by following the image below and selecting “Connect”, which will display the gcloud CLI command needed to get the kubeconfig for your cluster.
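The command that “Connect” displays looks roughly like the following; the cluster name, zone, and project here are placeholders for whatever you chose above:

```shell
# Fetch a kubeconfig entry for the cluster, then verify the active context.
gcloud container clusters get-credentials my-gke-cluster \
    --zone us-east1-b \
    --project my-gke-project
kubectl config current-context
```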

kubectl get pods -A

This lists every pod in the cluster, showing what runs out of the box. We can also see that GKE uses the Konnectivity agent, similar to AKS on Azure.

Run the following commands to create and edit our YAML manifest:

touch run.yaml
vim run.yaml

Paste in the sample Deployment manifest from Google’s documentation:

https://cloud.google.com/kubernetes-engine/docs/samples/container-helloapp-deployment

kubectl apply -f run.yaml

This will create a Deployment for our application, which we can view by running:

kubectl get deployments

Back in our GKE dashboard in GCP, we will make sure security is addressed in the cluster. Natively, Google Cloud Platform offers a Security Posture preview feature for free, enabled via the Container Security API.

Security Posture

On this page, we will then navigate down to Select Clusters and select our cluster to enable these processes to analyze it.

Any relevant findings appear as Concerns; there is a tab for these, located as shown in the image below, where we can start inspecting exactly where we should focus.

Since I’ve just enabled this, no data has populated yet, so I will come back later to see what has been evaluated; the enablement process states it takes about 15 minutes to update.

While I won’t be covering Policies in this blog post, it is worth noting the similar view, and that native compliance observability can also be found here. After around 15 minutes, we can see our security findings and what needs to be addressed.

GKE Security Posture

Let’s dive into the Concerns tab and investigate our cluster’s findings.

It appears we have a container running with elevated privileges as root (not good, obviously). This is another reason observability, along with tracing system calls, is important.

We can select the hyperlink for more context such as details and remediation.

We can also move to the Affected Workloads tab and take a look at which workloads are affected by this concern.

Our initial deployment ran with no security context, which defaults the container to run as root. We can fix this in the YAML manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloweb
  labels:
    app: hello
spec:
  selector:
    matchLabels:
      app: hello
      tier: web
  template:
    metadata:
      labels:
        app: hello
        tier: web
    spec:
      securityContext:
        runAsUser: 2000
      containers:
      - name: hello-app
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        securityContext:
          runAsUser: 2000
          allowPrivilegeEscalation: false
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 200m

This should fix the issue flagged under Concerns. We can either edit the live deployment or re-apply the updated manifest.
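Re-applying the manifest and watching the rollout might look like this; the label selector and JSONPath below assume the manifest shown above:

```shell
# Re-apply the updated manifest and wait for the rollout to finish.
kubectl apply -f run.yaml
kubectl rollout status deployment/helloweb

# Spot-check that the container no longer runs as root (expecting 2000).
kubectl get pods -l app=hello \
    -o jsonpath='{.items[0].spec.containers[0].securityContext.runAsUser}'
```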

When you’re finished, ensure that you delete the created cluster, as costs can and will accumulate depending on your configuration.
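Cleanup is a single command; the cluster name and zone are the placeholders used throughout this walkthrough:

```shell
# Delete the cluster and its node pools to stop accruing cost.
gcloud container clusters delete my-gke-cluster --zone us-east1-b --quiet
```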

One notable mention: there is much more to the GKE feature set, such as the Policy area. I won’t show that in this post, but I wanted to highlight GKE’s native capabilities and the innovation moving forward in GCP.

Creating a GKE Cluster.

When you create a Google Kubernetes Engine (GKE) cluster, you need to decide on the number and type of nodes that will be included in the cluster, as well as other configuration options. This section provides an overview of the different types of nodes that are available and how to create a GKE cluster.

Choosing a Cluster Configuration

The first step in creating a GKE cluster is to decide on the desired configuration. There are three main types of nodes that can be used:

Standard Nodes: These are the most common type of node and offer a balance between price and performance. They are ideal for most applications.

High Memory Nodes: These nodes have more memory than standard nodes and are better suited for applications that require a lot of memory, such as databases.

High CPU Nodes: These nodes have more CPU power than standard nodes and are better suited for applications that require a lot of processing power, such as video encoding or machine learning tasks.

Once you have decided on the type of node, you need to decide on the number of nodes that you want in your cluster. The recommended starting point is three nodes, but this can be increased or decreased based on your needs. You also need to decide which region and zone you want your cluster to be located in. It is best to choose a region and zone that is close to your users so that they experience minimal latency when accessing your application.

After you have decided on the desired configuration, you can proceed to creating the cluster using one of the methods described below.

Creating a GKE Cluster

There are two main ways to create a GKE cluster: using the Google Cloud Platform Console or using the gcloud command-line tool.

Creating a Cluster Using the Google Cloud Platform Console:

To create a cluster using the Google Cloud Platform Console, navigate to https://console.cloud.google.com/kubernetes/, select your project from the drop-down menu, click “Create Cluster”, and fill out the required information. Once you have filled out all fields, click “Create”. Your new cluster will now appear in the list of clusters. Clicking on it will allow you to view more information about it, such as the number of nodes, the location, and the current status. If you need to modify any settings for your cluster, select it from the list of clusters, click “Edit”, make your changes, and then click “Save”.
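Creating a Cluster Using the gcloud Command-Line Tool:

The same result can be achieved from the command line. A minimal sketch using the recommended three nodes; the cluster name and zone here are placeholders:

```shell
# Create a three-node cluster, then confirm it appears in the list.
gcloud container clusters create example-cluster \
    --zone us-central1-a \
    --num-nodes 3
gcloud container clusters list
```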

Deploying a Sample App in GKE.

In order to deploy a sample application in GKE, you will first need to configure a deployment. To do this, you will need to create a YAML file that contains the configuration for your deployment. The file should include the following:

– The name of your deployment

– The container image for your application

– The port that your application will be listening on

– The number of replicas you want to deploy

– Any environment variables that your application needs
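Putting those pieces together, a manifest covering each item might look like the sketch below; every name, image, and variable here is illustrative rather than required:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment          # the name of your deployment
spec:
  replicas: 3                  # the number of replicas to deploy
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0  # container image
        ports:
        - containerPort: 8080  # the port your application listens on
        env:
        - name: GREETING       # an example environment variable
          value: "hello"
```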

Deploying an Application in GKE.

Now that you have configured and deployed a sample application in GKE, let’s take a look at how to actually access and use the application.

When you expose an application in GKE with a Service of type LoadBalancer, Google automatically creates a load balancer for you. This load balancer allows users to access applications running inside of your cluster from outside of GCP. In order to find out the external IP address of your load balancer, run the following command:

$ kubectl get services --namespace=my-deployment

Once you have retrieved the external IP address of your load balancer, you can access your application by navigating to http://[EXTERNAL_IP_ADDRESS] in a web browser.
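If you haven’t created the Service yet, exposing the deployment is what triggers the load balancer provisioning. A sketch, assuming the illustrative deployment name from the previous section:

```shell
# Create a Service of type LoadBalancer in front of the deployment,
# then watch until an external IP is assigned.
kubectl expose deployment my-deployment --type=LoadBalancer \
    --port=80 --target-port=8080
kubectl get services my-deployment --watch
```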

Monitoring and Managing Your Cluster.

Google Kubernetes Engine (GKE) provides built-in monitoring of your clusters and containers through Cloud Monitoring, part of Google Cloud’s operations suite. By default, metrics and logs from the containers on your nodes are collected into Cloud Monitoring and Cloud Logging in your own project, where you can view and query them.

To view information about the health of your cluster and its node pools, visit the Kubernetes Engine > Clusters page in the GCP Console:

https://console.cloud.google.com/kubernetes/clusters

From here, you can see basic information about each of your clusters, such as the number of nodes and pods in each cluster, as well as the version of Kubernetes that is running. You can also click on a cluster to get more detailed information about that cluster, including a list of its node pools and their sizes.

If you want to get even more detailed information about what is going on with your nodes and pods, you can enable Cloud Monitoring, Cloud Logging, and Google Cloud Managed Service for Prometheus for your project. Enabling these features will give you access to extensive logging and monitoring data for your application, which can be useful for debugging issues or performance tuning.
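Managed Prometheus can be turned on against an existing cluster. A hedged sketch, with the cluster name and zone as placeholders:

```shell
# Enable Google Cloud Managed Service for Prometheus on an existing cluster.
gcloud container clusters update my-gke-cluster \
    --zone us-east1-b \
    --enable-managed-prometheus
```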

To learn more about how these services work, along with in-depth documentation, see the following link:

Cloud Monitoring: https://cloud.google.com/monitoring/docs/setup

Managing Your GKE Cluster

In addition to monitoring the health of your cluster, you may also need to perform some management tasks on occasion, such as upgrading the version of Kubernetes that is running on your cluster or adding new node pools to an existing cluster. These tasks can be performed using either the GCP Console or the gcloud command-line tool.
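For example, the two tasks mentioned above map to gcloud roughly as follows; the cluster, pool, and version values are placeholders:

```shell
# Upgrade the cluster control plane to a specific version.
gcloud container clusters upgrade my-gke-cluster \
    --zone us-east1-b --master --cluster-version 1.26

# Add a high-memory node pool to the existing cluster.
gcloud container node-pools create high-mem-pool \
    --cluster my-gke-cluster \
    --zone us-east1-b \
    --machine-type n1-highmem-2 \
    --num-nodes 2
```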

To learn more about how to manage your GKE cluster using the GCP Console or gcloud, visit the following pages:

GCP Console: https://cloud.google.com/kubernetes-engine/docs/how-to/managing-clusters

gcloud: https://cloud.google.com/sdk/gcloud/reference/container/clusters/

Google Kubernetes Engine (GKE) is a powerful tool for managing containerized applications at scale. In this guide, we’ve covered the basics of what GKE is and how it can benefit you, as well as some of the prerequisite steps for setting up a cluster. We’ve also walked through creating a cluster and deploying a sample application to it. Finally, we’ve discussed how to monitor and manage your GKE cluster.

Conclusion

If you’re looking to get started with Google Kubernetes Engine on GCP, this guide will walk you through the necessary steps. From understanding what GKE is and its benefits, to setting up your cluster and deploying a sample app, you’ll be up and running in no time. Plus, we’ll also show you how to monitor and manage your new cluster so that you can keep everything running smoothly.