5 tips for IIS on containers: #4 Solving for Horizontal Scale
Published Nov 22 2022 06:45 AM

Fourth up in this blog series: Solving for Horizontal Scale with IIS and Containers. Make sure to check out the other topics in the series on SSL certificate lifecycle management, IIS app pools and websites, and hardcoded configurations.

 

Azure Kubernetes Service

Since each node on Azure Kubernetes Service (AKS) is a virtual machine in a Virtual Machine Scale Set (VMSS), AKS can easily add new nodes when additional resources are needed to meet demand. Your website's demand might increase or decrease depending on the use case, so AKS can be a valuable tool.

Architecture in Azure with AKS

 

Keep in mind that Kubernetes can scale pods up and down based on metrics, while AKS can scale nodes up and down based on resource utilization and on pods that cannot be scheduled. Here I will show how AKS scales nodes up and down according to pod scheduling.
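For the pod side of that equation, pod scaling is typically handled by a Horizontal Pod Autoscaler. Below is a minimal sketch only; the target values are placeholders, it assumes the metrics server is reporting CPU usage for the deployment used later in this post, and it is not part of this walkthrough since node scaling is the focus here.

# create an HPA that keeps average CPU around 50%, scaling between 1 and 6 replicas
kubectl autoscale deployment iis-app-routing --cpu-percent=50 --min=1 --max=6 -n hello-web-app-routing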

 

Node Pools

Below is an image of the Azure portal showing my Contoso cluster on AKS. If you go to Node pools under Settings, you will see the Windows node pool I have running in AKS.

AmyColyer_0-1669070063251.png

 

 

Here you can see I only have a Windows node pool named wspool, with 1 node ready.

AmyColyer_0-1669070189466.png
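If you prefer the command line, the same node pool information is available from the Azure CLI; a quick sketch, where the resource group and cluster names are placeholders:

# list the node pools in the cluster, including current node counts
az aks nodepool list --resource-group <resource-group> --cluster-name <cluster-name> -o table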

 

Scale Node Pools

However, if I go to the Scale node pool option, you can see that Autoscale is already enabled; I set this up with a minimum of 1 node and a maximum of 10 nodes. If more nodes are needed to support the pods in my environment, AKS will automatically scale up, and if the load goes down, it will scale the number of nodes back down as well.

AmyColyer_1-1669070228672.png
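If you would rather script this than use the portal, the cluster autoscaler can also be enabled on an existing node pool with the Azure CLI; a minimal sketch, again with placeholder resource group and cluster names:

# enable the cluster autoscaler on the Windows node pool with 1 to 10 nodes
az aks nodepool update --resource-group <resource-group> --cluster-name <cluster-name> --name wspool --enable-cluster-autoscaler --min-count 1 --max-count 10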

 

We can test this setup without placing actual load on the environment. Here I can take my deployment YAML file and change replicas (on line 6) from 1 to 6.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis-app-routing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iis-app-routing
  template:
    metadata:
      labels:
        app: iis-app-routing
    spec:
      nodeSelector:
        "kubernetes.io/os": windows
      containers:
      - name: iis-app-routing
        image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
        resources:
          limits:
            cpu: 1
            memory: 800M
          requests:
            cpu: .5
            memory: 400M
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: iis-app-routing
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: iis-app-routing


The way the environment is set up, each node is only able to run one pod; none of the nodes can run multiple. So if there is more demand, or in this case a request for 6 replicas, it will be necessary to deploy more nodes.
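If you want to see for yourself how much room a node has left for pods, you can compare its allocatable CPU and memory against what is already requested; a quick check, where the node name is a placeholder:

# list the Windows nodes to get their names
kubectl get nodes -l kubernetes.io/os=windows
# show allocatable capacity and the resource requests already scheduled on a node
kubectl describe node <node-name>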

 

In the Azure portal, I'm going to open Azure Cloud Shell and upload my deployment file (where I changed to 6 replicas). Here I use the "kubectl apply" command, adding the file name and my namespace of "hello-web-app-routing".


kubectl apply -f deployment.yaml -n hello-web-app-routing


After running it, you can see the output shows "configured", so it applied the new deployment file with no issue.

AmyColyer_0-1669070703534.png
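Editing the YAML is what I did here, but as a side note the replica count can also be changed without touching the file; a hedged alternative using kubectl directly:

# scale the existing deployment to 6 replicas without editing the manifest
kubectl scale deployment iis-app-routing --replicas=6 -n hello-web-app-routing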

 

Now let's take a look at the pods that are running in this namespace with the "kubectl get pod" command.

AmyColyer_2-1669071215058.png

 

As you can see, the 1 pod that was already running on this host is still Running, and the other pods have a status of Pending. What is happening right now is that, in the backend, the AKS cluster is being scaled up to support more pods, and since no single node can run more than 1 pod, I'm going to need 6 nodes deployed.
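If you want to confirm that the pending pods are what triggered the scale-up, the events in the namespace will typically show a FailedScheduling message from the scheduler followed by a scale-up event from the cluster autoscaler; a quick way to check, where the pod name is a placeholder:

# recent events in the namespace, newest last
kubectl get events -n hello-web-app-routing --sort-by=.lastTimestamp
# or look at the events for one of the pending pods
kubectl describe pod <pod-name> -n hello-web-app-routing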

 

If we select the node pool and open the details, I only have 1 node ready, but the node count is already up to 6 nodes. This process will take a few minutes until you see all 6 deployed.

AmyColyer_3-1669071371291.png

 

Here is a second screenshot showing all 6 nodes running after the deployment completed.

 

AmyColyer_4-1669071538055.png
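You can also follow the node count from the Cloud Shell instead of the portal; a small sketch, assuming the Windows nodes carry the standard kubernetes.io/os label (the same one used in the nodeSelector above):

# watch the Windows nodes come up and become Ready
kubectl get nodes -l kubernetes.io/os=windows -w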

 

Going back to our Azure Cloud Shell, we can run "kubectl get pod" to see the status of our pods.

Actual command here:


kubectl get pod -n hello-web-app-routing


The output:

AmyColyer_6-1669071998923.png

 

As you can see, some pods are still in a Pending state, while others are in a ContainerCreating state. Eventually they will all be in a Running state. You can watch the status updates by adding the watch flag to the command, like so:


kubectl get pod -n hello-web-app-routing -w


Eventually you can see that all the pods are in a Running state.

AmyColyer_7-1669072236084.png

 

Now, since this is set to autoscale, AKS will scale up or down based on demand without human intervention. So, if you happen to run an online sale at a retail store and you know demand is going to be higher, AKS can scale up during peak hours and scale back down when the extra nodes are no longer necessary. This can save time and money.

 

Thanks for reading, and keep an eye out for the last post in this series. If you have any questions or feedback, please comment below.

 

Amy Colyer

 
