Part #1. Azure Kubernetes Services (AKS). Create your first AKS cluster and deploy an application.

This is the first part of my Azure Kubernetes Services blog series. In this post we will take a look at how to create an AKS cluster and deploy a first application.

Probably all of you have heard at least once about Kubernetes (K8s), which is by far the most popular container orchestration platform. All the major cloud players like Google, AWS and Microsoft offer Kubernetes services in their environments. If I'm not mistaken, the first K8s release was in 2015, and since then it has become more and more popular not only in the cloud but on-premises as well. I'm not going to explain all Kubernetes components and concepts here, as that would require writing a book. Kubernetes is quite a complex solution and a really huge topic, and it definitely takes some time to become familiar with. For those of you who are just starting your Kubernetes journey, I can recommend a few resources which helped me a lot to understand this technology. Here are my top resources:

In this blog series I would like to focus on various infrastructure activities such as AKS cluster deployment, updating, monitoring and maintenance. So let's get started.

The first thing you will need is the Azure CLI. This will be the main tool we use for the AKS cluster deployment. To install or update the Azure CLI, simply run this PowerShell command on your management machine:

Invoke-WebRequest -Uri https://aka.ms/installazurecliwindows -OutFile .\AzureCLI.msi; Start-Process msiexec.exe -Wait -ArgumentList '/I AzureCLI.msi /quiet'
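
Once the installer finishes, open a new console window (so the updated PATH is picked up) and confirm that the CLI is available:

# Check the installed Azure CLI version
az --version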

Probably a bit off topic, but I highly recommend you start using VS Code for all your scripting and development tasks. VS Code has been the #1 tool for my admin tasks for a few years now. For the Azure CLI in particular, it has an awesome extension called "Azure CLI Tools" which helps a lot when writing Azure CLI scripts.

VS Code has a lot of extensions for different programming and scripting languages, it will be super handy for your daily tasks, and it's absolutely free 😉.
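
By the way, VS Code extensions can also be installed straight from the command line; assuming the marketplace ID of the "Azure CLI Tools" extension is ms-vscode.azurecli, this one-liner should do it:

# Install the "Azure CLI Tools" extension (extension ID assumed to be ms-vscode.azurecli)
code --install-extension ms-vscode.azurecli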

So we now have the Azure CLI installed on our system. The next thing we need in order to interact with a Kubernetes cluster is kubectl, the command-line interface for running commands against Kubernetes clusters. There are several ways to install kubectl. For example, you can download the latest kubectl executable from HERE, put it into an appropriate folder and, for easier use, add that folder to the system PATH environment variable. You can also use a package management tool like Chocolatey and install kubectl with the command:

choco install kubernetes-cli

Chocolatey is a package management tool for Windows, like apt or yum on Linux. I have found it pretty useful, as it lets you quickly install and update various applications without having to search for and download them in a browser. Try it, you will definitely 👍 it.
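
If you prefer the manual route mentioned above, here is a minimal PowerShell sketch of it; the target folder C:\kubectl is just an example, use whatever path suits you:

# Find the latest stable kubectl version published by the Kubernetes project
$KubectlVersion = (Invoke-RestMethod -Uri "https://storage.googleapis.com/kubernetes-release/release/stable.txt").Trim()
# Download the Windows binary into C:\kubectl (example folder)
New-Item -ItemType Directory -Path "C:\kubectl" -Force | Out-Null
Invoke-WebRequest -Uri "https://storage.googleapis.com/kubernetes-release/release/$KubectlVersion/bin/windows/amd64/kubectl.exe" -OutFile "C:\kubectl\kubectl.exe"
# Add the folder to the user PATH so kubectl can be called from any new console
[Environment]::SetEnvironmentVariable("Path", [Environment]::GetEnvironmentVariable("Path", "User") + ";C:\kubectl", "User")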

For simplicity, this time we will use the Azure CLI command which can also install kubectl. Launch your preferred CLI tool (CMD, PowerShell, etc.) and run:

#Install kubectl
az aks install-cli

#Check the client version
kubectl version --client --short

We are also going to install the aks-preview extension for the az aks command group. This will allow us to use additional options during cluster creation. Use the command below to install the extension:

az extension add --name aks-preview
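
You can verify that the extension was registered and see which version of it you got with:

# Confirm the aks-preview extension is installed
az extension show --name aks-preview --output table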

One of the things we should decide before cluster creation is the Kubernetes version we would like to use in our deployment. The Azure CLI command below lists the available versions in the selected location, and also shows which version is the default (if you do not specify a version during cluster creation, this is the one that will be used) and which versions are in preview:

# Get available Kubernetes version in particular location
az aks get-versions --location "West Europe" --query "orchestrators" --output table 

This time I will choose one version below the latest, as this will allow us to walk through the upgrade process in later posts of this AKS series.
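
If you just want to see which version the platform currently treats as the default for the region, you can filter the same output with a JMESPath query, for example:

# Show only the default Kubernetes version for the selected location
az aks get-versions --location "West Europe" --query "orchestrators[?default].orchestratorVersion" --output tsv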

For cluster creation we will use the az aks create command. It has plenty of available parameters, which lets you configure your cluster very precisely right from the start. However, only two parameters are required: --name (the name of your cluster) and --resource-group (the resource group where you want to deploy the AKS cluster). You can examine all available options by running:

# List all available options and some examples
az aks create --help

As I said, almost all parameters are optional, but be aware that a lot of them cannot be changed after the deployment, and if you do not specify them the default values will be used, so plan your deployment accordingly. It is also worth mentioning that a big part of the listed options are related to AKS networking. In AKS we have two types of networking: the default Kubernetes network plugin "kubenet" and the more advanced "azure" network plugin. The main differences between them, according to the MS docs, are:

  • With kubenet, nodes get an IP address from a virtual network subnet. Network address translation (NAT) is then configured on the nodes, and pods receive an IP address “hidden” behind the node IP. This approach reduces the number of IP addresses that you need to reserve in your network space for pods to use.
  • With Azure Container Networking Interface (CNI), every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be unique across your network space, and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
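
To put rough numbers on the CNI option with the values used later in this post: with --max-pods 200, Azure CNI reserves roughly 201 IP addresses per node up front (200 pod IPs plus the node's own IP), so even a two-node cluster immediately consumes around 402 addresses. That is why the deployment script below carves out a /16 subnet (about 65,000 usable addresses) for the nodes and pods.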

For the initial deployment it is very important to choose an appropriate VM size for your cluster nodes, because you can't change the size after the deployment (I think this will change at some point). The good thing is that AKS already has the multiple node pools feature in preview. This allows you to add additional node pools to your cluster; when adding a new pool you can choose a different VM size for its nodes and then schedule your pods on the nodes which best suit the application requirements. We will cover AKS node pools in upcoming posts. You can use these CLI commands to get info about the available VM sizes:

# Use this command to list all available VM sizes in a selected location. 
az vm list-skus --location westeurope --query "[].{Name:name, Size:size, Tier:tier}" --output table
# Use this command to get detailed info of particular VM size 
az vm list-skus --location westeurope --query "[?name=='Standard_D2_v2']"

If you do not specify --node-vm-size for your cluster, the default size "Standard_D2_v2" will be used.

As I already mentioned, some az aks create options, like --vm-set-type or --load-balancer-sku, only become available after the aks-preview extension is installed.

We are almost ready to start the AKS cluster deployment and run our Azure CLI script. This is a script you can use to deploy an AKS cluster from scratch:

# Login to azure 
az login --username sysadminas@sysadminas.eu

# Define Variables
$GROUPNAME="sysadminas-aks"
$LOCATION="West Europe"
$AKS_CLUSTER_NAME="sysadminas"
$COUNT_OF_NODES="2"
$NODE_POOL_NAME="sysadminas1"
$K8S_VERSION="1.14.6"
$VM_SIZE="Standard_D2_v2"
$CLUSTER_VNET_NAME="sysadminas-K8S-Vnet"
$CLUSTER_SUBNET_NAME="sysadminas-K8S-Subnet"
$ADMIN_USER="sysadminas"

# Create a resource group 
az group create --location $LOCATION --name $GROUPNAME

# Create virtual network for Kubernetes cluster
az network vnet create --name $CLUSTER_VNET_NAME --address-prefixes 10.0.0.0/8 --resource-group $GROUPNAME 
# Create Subnet for Kubernetes nodes and pods
az network vnet subnet create --vnet-name $CLUSTER_VNET_NAME --address-prefixes 10.10.0.0/16 --name $CLUSTER_SUBNET_NAME --resource-group $GROUPNAME

# Select subnet ID and save it as variable
$VNET_SUBNET_ID=$(az network vnet subnet show --vnet-name $CLUSTER_VNET_NAME --name $CLUSTER_SUBNET_NAME --resource-group $GROUPNAME --query id --output tsv)

# Create the AKS cluster (PowerShell uses the backtick for line continuation, so the
# parameter descriptions are listed here instead of inline):
#   --resource-group         resource group where your cluster will be deployed
#   --name                   name of your cluster
#   --nodepool-name          name of the initial node pool
#   --node-vm-size           VM size selected for your nodes
#   --node-count             how many nodes will be in your cluster
#   --kubernetes-version     Kubernetes version to deploy
#   --network-plugin         kubenet for basic networking, azure for advanced networking
#   --load-balancer-sku      load balancer for your AKS, Basic or Standard
#   --vm-set-type            VirtualMachineScaleSets or AvailabilitySet
#   --docker-bridge-address  IP address (in CIDR notation) used as the Docker bridge IP address on nodes; default 172.17.0.1/16
#   --vnet-subnet-id         ID of the nodes/pods subnet
#   --service-cidr           CIDR for Kubernetes services (must not overlap with the nodes/pods subnet or the Docker bridge CIDR)
#   --dns-service-ip         Kubernetes DNS service IP address, must be an address from the service CIDR range
#   --admin-username         your own account created on the node VMs for SSH access, instead of the default azureuser
#   --max-pods               maximum number of pods that can be deployed on a node
#   --network-policy         azure (Azure's own implementation, called Azure Network Policies) or calico (an open-source network and network security solution founded by Tigera)
az aks create `
    --resource-group $GROUPNAME `
    --name $AKS_CLUSTER_NAME `
    --nodepool-name $NODE_POOL_NAME `
    --node-vm-size $VM_SIZE `
    --node-count $COUNT_OF_NODES `
    --kubernetes-version $K8S_VERSION `
    --network-plugin azure `
    --load-balancer-sku standard `
    --vm-set-type VirtualMachineScaleSets `
    --docker-bridge-address 172.17.0.1/16 `
    --vnet-subnet-id $VNET_SUBNET_ID `
    --service-cidr 10.20.0.0/16 `
    --dns-service-ip 10.20.0.10 `
    --admin-username $ADMIN_USER `
    --max-pods 200 `
    --network-policy azure `
    --verbose

I tried to add at least a short description for each parameter; since PowerShell does not allow inline comments after the backtick line continuation, they are listed in the comment block just above the command. I hope this helps you better understand what each parameter does.

As you may have noticed, we are deploying our cluster with advanced networking, so we also create a VNet and a subnet in it for the cluster nodes and pods. For advanced networking we also defined a CIDR for the Kubernetes services as well as a CIDR for the Docker bridge network address. Make sure the advanced networking parameters for your deployment are planned accordingly and match the Microsoft recommendations.

I also chose to deploy a VM Scale Set (VMSS) instead of an Availability Set. This is because only VMSS-backed clusters can be expanded with additional node pools; otherwise you can only scale the number of nodes in the initial pool.
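
Just to give you a taste of that flexibility (we will cover node pools properly in a later post), adding an extra pool with a different VM size to a VMSS-backed cluster is a single command; the pool name and VM size below are made-up example values:

# Add a second node pool with a different VM size (example values)
az aks nodepool add --resource-group $GROUPNAME --cluster-name $AKS_CLUSTER_NAME --name sysadminas2 --node-count 2 --node-vm-size Standard_D4_v2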

After several minutes our cluster should be deployed and ready to use. Before we can interact with the cluster via kubectl, we need to add a context to our kubeconfig file (the default kubeconfig location is %userprofile%\.kube\config). To do this we run the CLI command:

# Get AKS cluster credentials and add to kubeconfig
az aks get-credentials --resource-group sysadminas-aks --name sysadminas --admin

Next we will need to switch the current context to the newly added one. Normally, if you have never connected to any Kubernetes cluster from your management device, the newly added context will be set as current automatically. To view all available contexts, execute:

# List contexts
kubectl config get-contexts

To start using the needed context, run:

# Set context to use 
kubectl config use-context "Put-Name-of-your-context-Here"
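
You can double-check which context is currently active with:

# Show the currently active context
kubectl config current-context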

Now we are ready to take a look at our cluster from the Kubernetes point of view. For example, let's check the existing nodes, pods and services and ensure that their IP addresses are configured according to the parameters we used during the cluster deployment:

# Get the pods info from all namespaces
kubectl get pods --all-namespaces -o wide

All pods have IPs from our defined subnet.

# Get the services info from all namespaces
kubectl get services --all-namespaces -o wide

All services have IPs from the service CIDR range defined during the deployment, and the kube-dns service has the same IP we set in the deployment parameters.

# Get the nodes info
kubectl get nodes -o wide

The node IPs are also within our defined subnet range.

As you can see, everything looks correct and we can start deploying to our cluster. As an example, we will take probably the most popular image, the NGINX web server. First, save the following as a file named NGINX.yml:

apiVersion: v1
kind: Namespace
metadata:
  name: sysadminas 

---

apiVersion: v1
kind: Pod
metadata:
  namespace: sysadminas
  name: nginx
  labels:
    app: nginx
    environment: sysadminas
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    
---

kind: Service
apiVersion: v1
metadata:
  name:  nginx
  namespace: sysadminas
spec:
  selector:
    app: nginx
    environment: sysadminas
  type:  LoadBalancer
  ports:
  - port:  80
    targetPort:  80

Then run:

# Create resources using NGINX.yml manifest
kubectl create -f .\NGINX.yml

This will create a namespace, a pod and a service for our application. You can quickly review the deployment by running:

# Get all resources from particular namespace 
kubectl get all -n sysadminas -o wide

Our service type is LoadBalancer, so we can reach our application from an external source using the service's External-IP. In my case this is http://51.105.128.147
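
If you don't want to copy the address from the table output, you can also pull just the external IP of the service with a jsonpath query and test the page straight from the console:

# Get only the external IP of the nginx service
kubectl get service nginx -n sysadminas -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

# Quick test of the exposed application (in Windows PowerShell, curl is an alias of Invoke-WebRequest)
curl http://51.105.128.147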

As you can see, we get the default NGINX page, so we have successfully deployed an app on our AKS cluster and exposed it to the external world.

To remove the test application and its dependencies, run:

# Delete namespace and resources in it
kubectl delete namespace sysadminas

So, that's it for today, I hope this was informative for you. See you soon in the next post, where we will continue to explore Azure Kubernetes Services.
