KEDA and AKS Experiments

Published Feb 10 2020 01:01 AM

KEDA is an open source K8s controller that acts as an intermediary between a data or event store such as an Azure Event Hub, Storage queue or Service Bus queue (but also AWS, Kafka, etc.) and an event handler such as Azure Functions. Its scalers ensure that the appropriate number of handlers are started according to the load at the level of the source. KEDA comes with its own K8s Custom Resource Definition called ScaledObject. Here is an example of such a resource deployment:


apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: kedablobautoscaler
  namespace: anamespace
  labels:
    deploymentName: kedablob
spec:
  scaleTargetRef:
    deploymentName: kedablob
  # this one is optional. KEDA scales out automatically but we can still limit the number of pods
  maxReplicaCount: 10
  triggers:
  - type: azure-blob
    metadata:
      blobContainerName: whatever
      connection: AzureWebJobsStorage
      path: keda/{name}
      name: capturedBlob
In the above example, we can see that this resource is linked to a deployment called kedablob (not shown here), which is the handler. The source is in this case an Azure Blob Storage container, and the connection to use is specified in the metadata. When deploying this, you end up with an HPA (Kubernetes' built-in Horizontal Pod Autoscaler) being created for you:

The important part is highlighted in red: the HPA takes the number of blobs present in Blob Storage into account to scale the related deployment out accordingly. Note that if the handler does not delete the incoming blobs, the HPA will never scale down. Note also that one can influence the HPA by specifying the target metric value oneself; KEDA defaults to 5 for various event stores.
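As a rough sketch of what the generated HPA does with KEDA's external metric (the target value of 5 is KEDA's default mentioned above; the function name is mine, not KEDA's):

```python
import math

def desired_replicas(blob_count, target_average_value=5):
    # The HPA scales an external metric with an AverageValue target to
    # ceil(metric / target): more blobs waiting means more handler pods.
    return math.ceil(blob_count / target_average_value)

desired_replicas(42)  # 42 blobs with a target of 5 -> 9 handler pods
desired_replicas(0)   # empty container -> scale back down
```

This also illustrates why a handler that never deletes its blobs keeps the metric high and prevents the HPA from ever scaling down.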


There are many triggers for many different sources; for an exhaustive list, the easiest is to have a look at the .go source code. For Azure, KEDA currently supports Storage Queues, Service Bus Queues, Event Hubs and Blob Storage.


The schema below shows the high-level interactions and components:



To test it out with a realistic scenario, I deployed a QueueWriter pod that writes 5000 messages to a Storage Queue every 2 seconds. I scaled the QueueWriter out to 15 instances, meaning 37,500 messages/s. I let KEDA scale out automatically and ended up with ~90 virtual kubelets (meaning ~90 ACIs) handling the load. I let it run for an hour to process about 135 million messages. Whenever I checked the queue, it was empty, or I could see a message from time to time, meaning that the handlers had no problem keeping pace. KEDA can be used in conjunction with worker nodes, but it is wiser to use it together with Virtual Kubelets, which translate to ACIs and a dedicated agent pool with the following characteristics:
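A quick back-of-the-envelope check of the figures above (the values are taken straight from the test setup described in the text):

```python
writers = 15         # QueueWriter replicas
batch_size = 5_000   # messages written per batch
interval_s = 2       # seconds between batches

rate_per_s = writers * batch_size // interval_s  # 37,500 messages per second
total_per_hour = rate_per_s * 3600               # 135,000,000 messages in an hour
```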



This requires a serious amount of resources. However, and this is not related to KEDA, one must pay attention to multiple things. Despite the nice figures above, some limitations apply, such as the total number of concurrent ACIs (quota). By default, you can't exceed 100 per region, and since your cluster is bound to a region, you simply can't exceed 100. This means that letting KEDA scale your handlers without control will easily lead to hitting this limit, and you'll end up with containers in the ProviderFailed state. A ticket can be opened with MS support to increase the default threshold.
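One simple sanity check, sketched below with hypothetical names, is to make sure the maxReplicaCount caps of all ScaledObjects running on the virtual node in a region stay under the default ACI quota mentioned above:

```python
ACI_DEFAULT_QUOTA = 100  # default concurrent ACIs per region; can be raised via a support ticket

def quota_headroom(max_replica_counts, quota=ACI_DEFAULT_QUOTA):
    # Given the maxReplicaCount of every ScaledObject scheduled onto the
    # virtual node in one region, return how many ACIs remain before the
    # regional quota is hit. A negative result means the caps alone could
    # drive containers into the ProviderFailed state.
    return quota - sum(max_replica_counts)
```

With two ScaledObjects capped at 10 and 90, the headroom is exactly 0, so any uncapped workload in the same region would push past the quota.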


Also, admittedly, ACIs do not start in just a few seconds, meaning that under high load, KEDA will attempt to scale even more. In messaging and event-driven scenarios, which are very asynchronous by nature, waiting a bit is not an issue, but as long as no handler is ready to handle queue messages, KEDA will keep scaling unless a maxReplicaCount is specified on the ScaledObject. Last but not least, sometimes some ACIs hang in a pending state. Here again, although a bit scary, the scheduler will terminate them once the HPA is back to its lower targets.


Overall, this produced rather good results, and sure, KEDA is something to keep an eye on!
Version history
Last update: Apr 07 2020 01:24 PM