Azure Container Apps (ACA) is built on a solid open source foundation. Behind the scenes, it runs on a managed Kubernetes cluster and includes several open source components: Dapr for building and running microservices, Envoy Proxy for ingress, and KEDA for event-driven autoscaling. You do not have to install these components yourself; all you need to do is enable and configure your container app to use them.
In this blog, we will see how we can scale Container Apps based on Redis Streams with the Azure managed Redis offering, Azure Cache for Redis. We will refer to the KEDA trigger specification for Redis Streams (Scale applications based on Redis Streams) to configure our Container App. This specification describes the redis-streams trigger, which scales based on the Pending Entries List (see XPENDING) for a specific Consumer Group of a Redis Stream. In this blog we are going to create the Redis entities manually and add data to them using the Redis Console; however, the same can also be done through a Dapr pub/sub implementation backed by Redis.
ACA's autoscaling feature internally leverages KEDA and lets you configure the number of replicas to deploy based on rules (event triggers). Apart from HTTP and TCP rule-based scaling, container apps also support custom rules, which open up many more configuration options since all of KEDA's event-based scalers are supported.
Some of the supported event-driven Azure data sources include:
- Azure Service Bus
- Azure Storage Queue
- Azure Event Hubs
- Azure Blob Storage
- Azure Cache for Redis, etc.
Various scaling examples of Azure Container Apps can be found in this repository.
Let's cut to the chase!
- To begin with, sign into the Azure Portal and create a new Container App. (Reference: Create a Container App)
- Create a new Redis Cache (Reference: Create Azure Redis Cache).
NOTE: Make sure to select Redis version 6, since Redis Streams was introduced in Redis 5.0, and also confirm that non-TLS/SSL access is enabled.
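If you prefer the CLI, the cache can also be provisioned with a command along these lines. This is only a sketch: the resource group, cache name, and location are placeholders you should replace with your own values.

```shell
# Sketch: provision a Basic C0 Azure Cache for Redis on version 6
# with the non-SSL port (6379) enabled. Names below are placeholders.
az redis create \
  --resource-group my-rg \
  --name my-redis-cache \
  --location eastus \
  --sku Basic \
  --vm-size c0 \
  --redis-version 6 \
  --enable-non-ssl-port
```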
Once provisioned, open the Redis Console from the Overview blade. We will first create a new stream, mystream, and a new consumer group, mygroup.
XGROUP CREATE mystream mygroup $ MKSTREAM
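To confirm that the stream and the consumer group were created as expected, you can inspect them from the same console. XINFO STREAM reports stream-level details such as length and last generated ID, and XINFO GROUPS lists each group's consumer and pending-entry counts.

```
XINFO STREAM mystream
XINFO GROUPS mystream
```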
- Now, we will configure scale rules on our Container App.
First, to connect to the Redis Cache, the Container App needs the Redis Primary Access Key saved as a secret. Go to the Secrets blade on your Container App and add a new secret: use redis-connection-string as the key and the Redis Primary Access Key as the value (the Primary Access Key can be found in the Access keys blade of the Redis Cache), then click Add.
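The same secret can be added from the CLI. A sketch, where the app and resource group names are placeholders and <PRIMARY_ACCESS_KEY> stands for the key copied from the Access keys blade:

```shell
# Sketch: store the Redis access key as a Container App secret.
az containerapp secret set \
  --name my-container-app \
  --resource-group my-rg \
  --secrets redis-connection-string=<PRIMARY_ACCESS_KEY>
```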
Now, we will configure the scale rules. Go to the Scale blade on your Container App, click Edit and deploy, go to the Scale tab, and add a Scale Rule as follows.
The Custom Rule Type is different for every KEDA-based scaler; for Redis Streams it is redis-streams. Set the Trigger Parameter to password, referencing the redis-connection-string secret we created earlier. We also have to specify the Metadata parameters. Edit the fields as provided below.
The consumerGroup and stream fields, as the names suggest, are the names of the Consumer Group and Redis Stream we created initially. pendingEntriesCount is the target number of messages in the Pending Entries List (PEL) of that Consumer Group, i.e. messages that have been delivered but not yet acknowledged, per replica. In this example, for every 2 new messages in the PEL, one instance/replica will be added to the Container App revision.
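For reference, the equivalent rule in the Container App's YAML would look roughly like this. The cache address, replica bounds, and rule name here are illustrative, not values from this walkthrough:

```yaml
# Sketch of a custom scale rule in a Container App template.
# <cache-name> is a placeholder for your Azure Cache for Redis host.
scale:
  minReplicas: 1
  maxReplicas: 10
  rules:
    - name: redis-streams-scale-rule
      custom:
        type: redis-streams
        metadata:
          address: <cache-name>.redis.cache.windows.net:6379
          consumerGroup: mygroup
          stream: mystream
          pendingEntriesCount: "2"
        auth:
          - secretRef: redis-connection-string
            triggerParameter: password
```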
After editing all the fields as mentioned above, click Save and then Create. This will deploy a new revision to your Container App. As soon as it is done, we are ready to test our environment.
NOTE: For more information on the above fields, please check the KEDA trigger specification here.
- It is suggested to open two browser tabs side by side for efficient testing, one with the Redis Cache console and the other with the Console blade of the Container App. Initially, since there are no messages in our stream, there will only be a single replica present in the Replica dropdown (given that the minReplicas count is set to 1).
Let's start by adding two messages to the stream and assigning them to a consumer group. We will use the XADD and XREADGROUP commands for this.
> XADD mystream * name Alice
> XADD mystream * name Bob
> XREADGROUP GROUP mygroup consumerx COUNT 2 STREAMS mystream >
The above screenshot shows that both messages have been added to the stream and assigned to the consumer group, with consumerx as the consumer. The number of messages in the pending state can be checked with the XPENDING command.
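In its summary form, XPENDING reports the total number of pending entries, the smallest and greatest pending IDs, and a per-consumer breakdown; at this point it should report 2 pending entries for consumerx.

```
XPENDING mystream mygroup
```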
Now, let's have a look at our replica count. Refresh the Console blade and click on the Replica dropdown. You will see that even though we added 2 messages to the stream, the instance/replica count did not increase. This is expected: at a target of 2 pending entries per replica, 2 pending messages only call for a single replica.
Let's now add two more messages to the stream and assign them to the consumer group, the same way we did before.
> XADD mystream * name Derek
> XADD mystream * name Emily
> XREADGROUP GROUP mygroup consumerx COUNT 2 STREAMS mystream >
We now have 4 pending messages in the list.
Since there are 4 pending messages in the stream, according to our configuration a total of 2 instances are needed to handle them. Looking at the Replica dropdown on the Console blade, we can see that there are indeed 2 instances currently active.
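The scaler's arithmetic can be sketched as follows: KEDA targets pendingEntriesCount messages per replica, so the desired replica count is the pending total divided by that target, rounded up, and clamped to the replica bounds. This is a simplified model (the function name and default bounds are illustrative, not part of KEDA's API):

```python
import math

def desired_replicas(pending: int, pending_entries_count: int = 2,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Simplified model of KEDA's target-based scaling for redis-streams."""
    # One replica per pending_entries_count pending messages, rounded up,
    # then clamped to the configured min/max replica bounds.
    wanted = math.ceil(pending / pending_entries_count)
    return max(min_replicas, min(max_replicas, wanted))

print(desired_replicas(2))  # 2 pending messages -> 1 replica
print(desired_replicas(4))  # 4 pending messages -> 2 replicas
```

This matches what we observed: 2 pending messages kept us at the single minimum replica, while 4 pending messages scaled the app out to 2.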
This shows that our container app has successfully scaled out, and it will continue to scale as the messages increase further, until the maxReplicas count is reached.
Now for the scale-in part. Instances scale in when the pending messages in the list are consumed and acknowledged. Let us acknowledge 2 messages and check the PEL count again.
> XACK mystream mygroup <MESSAGE_ID>
For this example, let us pick the two message IDs from the result of the last XREADGROUP command. Here, the two IDs are 1674708896170-0 and 1674708901325-0, for Derek and Emily respectively.
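Using those IDs, the two acknowledgements look like this:

```
XACK mystream mygroup 1674708896170-0
XACK mystream mygroup 1674708901325-0
```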
In the above result, we can see that the PEL count has come back down to 2 after the two messages were acknowledged. Let us now refresh the Console blade and check the number of replicas present.
As seen in the above screenshot, the replica count has come down to 1, which shows that the scale-in has happened successfully.