Application Gateway for Containers in Azure Kubernetes Service (AKS)

Most production recommendations for Azure Kubernetes Service have pointed to the native Application Gateway Ingress Controller. I’ve heard mixed reviews of it being cumbersome and tedious, to the point that others have opted for the nginx-ingress controller instead. As of this week the preview of Application Gateway for Containers is available to use, and while this offering might not catch the popular news, it is going to increase adoption by simplifying use specifically for containers.

What is Application Gateway for Containers, and What Are Its Capabilities?

This is a separate SKU in the pre-existing Application Gateway family, and its documentation falls under Application Gateway. Extending the abilities of the Application Gateway Ingress Controller, the biggest upgrades are Gateway API support, weighted/split traffic distribution (blue-green, active-active), and near real-time convergence performance.

Currently, pricing for the existing Application Gateway v2 can get costly, as the example of the fixed and capacity unit charges shows.

The cost should likely drop with this new offering; I’d expect it to be a slightly cheaper option with the net gain of the performance and capabilities at layer 7. I’ve compiled a handful of bash scripts, following the Azure documentation quick start, to begin testing this out. Notably, if you’re going to follow along, ensure you delete the resource groups we use when you’re done.

Hands-on Application Gateway

I’d say the setup can seem cumbersome at first; the subnet delegation in particular is where I started running into issues during registration. If you are going to use the repository, pay careful attention to the use of the managed cluster resource group that houses your virtual network.

To follow along, these are the resources we will need:

  • Resource Group
  • AKS Cluster
  • Helm
  • Azure CLI and kubectl installed

Let’s start by cloning the repository:

git clone https://github.com/sn0rlaxlife/app-gateway-aks.git

We will have to register the following resource providers, which is handled by the providers.sh script.

Notably, these are the following:

  • Microsoft.ContainerService
  • Microsoft.Network
  • Microsoft.NetworkFunction
  • Microsoft.ServiceNetworking
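
If you would rather run these by hand, the registrations in providers.sh boil down to standard az provider register calls; a minimal sketch (the repo script may differ slightly):

az provider register --namespace Microsoft.ContainerService
az provider register --namespace Microsoft.Network
az provider register --namespace Microsoft.NetworkFunction
az provider register --namespace Microsoft.ServiceNetworking

# Registration is asynchronous; poll until each shows "Registered"
az provider show --namespace Microsoft.ServiceNetworking --query registrationState -o tsv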

An Azure CLI extension will install the Application Load Balancer (alb) as an add-on.
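
If you need it on its own, the extension install is a single command:

az extension add --name alb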

Our next script, ./aks.sh, holds our resource group and AKS cluster parameters; if you’d like to change those values, feel free.
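
Under the hood, aks.sh amounts to a resource group plus cluster creation; a minimal sketch, assuming the resource group name used later in this post and the flags the ALB controller needs (the actual script may differ):

RESOURCE_GROUP='aks-west-prod'
CLUSTER_NAME='aks-cluster'   # hypothetical name; match whatever aks.sh uses
LOCATION='eastus'

az group create --name $RESOURCE_GROUP --location $LOCATION
# OIDC issuer and workload identity are needed for the ALB controller's identity federation
az aks create \
    --resource-group $RESOURCE_GROUP \
    --name $CLUSTER_NAME \
    --location $LOCATION \
    --network-plugin azure \
    --enable-oidc-issuer \
    --enable-workload-identity \
    --node-vm-size Standard_DS2_v2 \
    --generate-ssh-keys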

If you follow the repo and don’t have the extensions yet, running ./providers.sh should produce output stating that the “alb” add-on is in preview.

To validate that a suitable VM size is available in my region, I run the following command:

az vm list-sizes --location eastus -o table

After updating the values in aks.sh to the VM size you’d like to use for your AKS cluster, ensure you run chmod +x to make the script executable. Then run ./aks.sh to start creation of the cluster. This should be relatively quick: first it creates the resource group, then it starts the cluster creation, and you’ll see “running…” for a little while.

Now our resources have been created. The “MC_” prefix stands for managed cluster; this resource group is important for later steps, so ensure you keep its name handy.

Next we need to make workload.sh executable (chmod +x) and run it.

If you need to change any of the values to your preference, feel free to edit this file; I’ve tried to consolidate the start-up steps into scripts.
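
I won’t reproduce workload.sh here; as an illustration only, a minimal sample backend it could deploy might look like the following (the echo name, namespace, and image are hypothetical, not necessarily what the repo ships):

kubectl create namespace test-infra   # hypothetical namespace
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: test-infra
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: mcr.microsoft.com/dotnet/samples:aspnetapp   # sample app; newer tags listen on 8080
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: test-infra
spec:
  selector:
    app: echo
  ports:
  - port: 80
    targetPort: 8080
EOF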

For our next script we will use deploy.sh in the directory; after a chmod +x, it will deploy our Application Load Balancer controller via helm. To avoid declaring the resource group and AKS name every time, you can export them as environment variables; for simplicity I hardcoded these in the script.

What this script is doing is pulling our credentials for our cluster (the entry point), merging them into ./kubeconfig to authorize our API calls, and then using helm, which is installed, to pull and install the controller’s helm chart.
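
A sketch of what deploy.sh amounts to, based on the quickstart flow: create the managed identity the controller runs as, federate it to the cluster’s OIDC issuer, merge credentials, and install the chart (the cluster name is an assumption; check the script and current docs for the exact values and chart version):

RESOURCE_GROUP='aks-west-prod'
CLUSTER_NAME='aks-cluster'   # hypothetical; match your aks.sh values
IDENTITY_RESOURCE_NAME='azure-alb-identity'

# Create the managed identity the ALB controller will run as
az identity create --resource-group $RESOURCE_GROUP --name $IDENTITY_RESOURCE_NAME

# Federate the identity against the cluster's OIDC issuer (workload identity)
az identity federated-credential create --name 'azure-alb-identity' \
    --identity-name $IDENTITY_RESOURCE_NAME \
    --resource-group $RESOURCE_GROUP \
    --issuer "$(az aks show -g $RESOURCE_GROUP -n $CLUSTER_NAME --query 'oidcIssuerProfile.issuerUrl' -o tsv)" \
    --subject 'system:serviceaccount:azure-alb-system:alb-controller-sa'

# Merge cluster credentials into kubeconfig, then install the controller chart
# (you may need to pin --version per the current docs)
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
helm install alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
    --namespace azure-alb-system --create-namespace \
    --set albController.namespace=azure-alb-system \
    --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP \
        -n $IDENTITY_RESOURCE_NAME --query clientId -o tsv)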

We can see the pods are now running in our namespace (azure-alb-system).
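
You can verify with the following:

kubectl get pods -n azure-alb-system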

Now let’s check on our gateway class by running the following

kubectl get gatewayclass azure-alb-external -o yaml
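
In the returned YAML, the status conditions should show the class was accepted by the ALB controller; to query just that field:

kubectl get gatewayclass azure-alb-external -o jsonpath='{.status.conditions[?(@.type=="Accepted")].status}'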

Before deciding what to move to next, let’s break down the concepts: Application Gateway for Containers has two strategies for management.

  • Bring your own deployment (BYO): the deployment and lifecycle of the Application Gateway for Containers resource, Association, and Frontend resources are assumed to be handled via the Azure portal, CLI, PowerShell, or Terraform, and referenced in the Kubernetes configuration.
  • Managed by ALB Controller: the ALB Controller deployed in Kubernetes is responsible for the lifecycle of the Application Gateway for Containers resource and its sub-resources. The ALB Controller creates the Application Gateway for Containers resource when an ApplicationLoadBalancer CRD (custom resource definition) is defined on the cluster, and the lifecycle of the ALB follows the lifecycle of the custom resource (see the sketch after this list).
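
For reference, the managed strategy boils down to applying an ApplicationLoadBalancer custom resource and letting the controller provision everything; a minimal sketch per the docs (the name and namespace are placeholders, and the subnet must already be delegated as shown later in this post):

kubectl create namespace alb-test-infra
kubectl apply -f - <<EOF
apiVersion: alb.networking.azure.io/v1
kind: ApplicationLoadBalancer
metadata:
  name: alb-test
  namespace: alb-test-infra
spec:
  associations:
  - <id of a subnet delegated to Microsoft.ServiceNetworking/trafficControllers>
EOF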

For this blog post, let’s go with the bring your own deployment strategy. Run the following commands; you can also wrap them in a script if needed.

RESOURCE_GROUP='aks-west-prod'
AGFC_NAME='alb-test' # Name of the Application Gateway for Containers resource to be created
az network alb create -g $RESOURCE_GROUP -n $AGFC_NAME

FRONTEND_NAME='test-frontend'
az network alb frontend create -g $RESOURCE_GROUP -n $FRONTEND_NAME --alb-name $AGFC_NAME

Putting these together in one go can cause some issues, so I broke this up into two tasks: first creating the frontend as outlined in the docs, and then the backend.

For our next step, pay close attention, because this can cause issues if you don’t change the values. Since our cluster was created in our original resource group, Azure populates the cluster’s infrastructure under an MC_<cluster-name> resource group, and that is where we need to reference our existing virtual network.

If you navigate to the portal at portal.azure.com and find your resource group, you’ll see the resources associated with your AKS cluster under the MC_ resource group.
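
You can also pull these values from the CLI instead of the portal; assuming the cluster name from your aks.sh:

CLUSTER_NAME='aks-cluster'   # hypothetical; match your aks.sh values
MC_RESOURCE_GROUP=$(az aks show -g $RESOURCE_GROUP -n $CLUSTER_NAME --query nodeResourceGroup -o tsv)
az network vnet list -g $MC_RESOURCE_GROUP --query '[].name' -o tsv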

Below is the code to populate with the resource group and VNet associated with your cluster:

VNET_NAME='<name of the virtual network to use>'
VNET_RESOURCE_GROUP='<the resource group of your VNET>'
ALB_SUBNET_NAME='subnet-alb' # subnet name can be any non-reserved subnet name (i.e. GatewaySubnet, AzureFirewallSubnet, AzureBastionSubnet would all be invalid)
az network vnet subnet update \
    --resource-group $VNET_RESOURCE_GROUP  \
    --name $ALB_SUBNET_NAME \
    --vnet-name $VNET_NAME \
    --delegations 'Microsoft.ServiceNetworking/trafficControllers'
ALB_SUBNET_ID=$(az network vnet subnet list --resource-group $VNET_RESOURCE_GROUP --vnet-name $VNET_NAME --query "[?name=='$ALB_SUBNET_NAME'].id" --output tsv)
echo $ALB_SUBNET_ID

Now we assign our identity (the azure-alb-identity managed identity, treated as a service principal) the permissions for the specified actions.

Delegate the permissions by using the following code

IDENTITY_RESOURCE_NAME='azure-alb-identity'

resourceGroupId=$(az group show --name $RESOURCE_GROUP --query id -otsv)
principalId=$(az identity show -g $RESOURCE_GROUP -n $IDENTITY_RESOURCE_NAME --query principalId -otsv)

# Delegate AppGw for Containers Configuration Manager role to RG containing Application Gateway for Containers resource
az role assignment create --assignee-object-id $principalId --assignee-principal-type ServicePrincipal --scope $resourceGroupId --role "fbc52c3f-28ad-4303-a892-8a056630b8f1" 

# Delegate Network Contributor permission for join to association subnet
az role assignment create --assignee-object-id $principalId --assignee-principal-type ServicePrincipal --scope $ALB_SUBNET_ID --role "4d97b98b-1d4f-4787-a291-c67834d212e7"
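
With the subnet delegated and the roles assigned, the remaining step of the association process per the quickstart is creating the association resource that joins the Application Gateway for Containers to the subnet (the association name here is a placeholder):

ASSOCIATION_NAME='association-test'   # hypothetical name
az network alb association create -g $RESOURCE_GROUP -n $ASSOCIATION_NAME \
    --alb-name $AGFC_NAME --subnet $ALB_SUBNET_ID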

Okay, so now we are ready to operationalize. This blog post is meant to cover the initial creation; we will cover more in a future video demonstrating uses of Application Gateway for Containers such as SSL/TLS offloading and backend TLS, along with traffic splitting/weighted round robin.

Summary

Application Gateway for Containers is a new iteration of the Application Gateway service, and it is going to expand the capabilities of native integration; hopefully this brings costs down for your organization. I’ll expand on the capabilities and features of this area as it is still in preview. The biggest issue I ran into was working with the CLI in a scripted fashion to speed things up; some of the items are a little easier to run command by command in the association process, likely because it leverages a CLI extension.