Cloud Native Apps
Container Apps: Practical Scaling with the Azure Queue Scale Rule
The 202 pattern is a way to handle long-running requests in a scalable and resilient manner. The basic idea is to accept a request but, instead of processing it immediately, return a "202 Accepted" response with a Location header that points to a status endpoint. The client can then poll the status endpoint to check the progress of the request. Further reading: https://learn.microsoft.com/en-us/azure/architecture/patterns/async-request-reply.

What are the use cases?
This pattern is useful in scenarios where the request processing time may vary significantly, or where the client does not need to wait for the request to complete before performing other tasks.

Why Azure Container Apps?
Azure Container Apps offers several benefits for implementing the 202 pattern:
Scalability: scales automatically based on workload, making it easier to handle a large number of requests concurrently.
Reliability: designed to be highly available and resilient, with features such as auto-restart and self-healing.
Integration: integrates easily with other Azure services, such as Azure Service Bus and Azure Queue Storage, to build a more powerful and flexible solution.
Monitoring: built-in monitoring and logging capabilities let you track the status of your requests and troubleshoot any issues that arise.
Flexibility: supports a wide range of languages and frameworks, so you can build solutions with the tools and technologies you are most familiar with.

Why the Azure Queue scale rule?
A good document to start with is Scaling in Azure Container Apps | Microsoft Learn.

Configuration
To configure the scale rule, you first set the queue connection string as a secret and then configure the rule itself.
In the Azure portal, navigate to your container app and select Secrets. Select Add, enter your secret key/value information (in our case the queue connection string), and select Add when you're done.
The Azure portal offers a few ways to edit the scaling rules of a container app. Enter a rule name and select Azure Queue as the type. Enter your queue name and your desired queue depth. Under the Authentication section, add your secret reference (here, the Azure Queue connection string secret) and the trigger parameter (you can enter 'connection'). Select Add when you're done.
The queue length tells the KEDA scaler how many messages each replica should handle: for example, if your setting is '5' and your queue length is 25, five instances would be created. Select Create when you're done.
The Azure Queue scale rule uses queue depth to control the scale of your replicas.

Here are the key learnings for the worker. The implementation should always dequeue a message from the queue:

```csharp
QueueMessage msg = await queueClient.ReceiveMessageAsync();

// Attempt to parse the message body into a TaskRequest object
string body = msg.Body.ToString();
TaskRequest taskRequest;
try
{
    taskRequest = TaskRequest.FromJson(body);
    _logger.LogInformation($"Parsed message {msg.MessageId} with content {body} to TaskRequest object");
}
catch (Exception ex)
{
    _logger.LogError($"Error parsing message {msg.MessageId} with content {body} to TaskRequest object. Error: {ex.Message}, removing message from queue.");

    // Delete the message from the queue
    await queueClient.DeleteMessageAsync(msg.MessageId, msg.PopReceipt);

    // Send a message to the poison queue here
}
```

Address all failures, and log both success and failure.
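The catch block above only marks where the poison-queue step belongs. As a minimal sketch, assuming the Azure.Storage.Queues SDK and a hypothetical "<queue-name>-poison" naming convention (the original post does not show this helper), it could look like this:

```csharp
using Azure.Storage.Queues;

// Forward an unprocessable message to a poison queue so it is preserved for
// inspection after being deleted from the main queue.
// 'connectionString' and 'queueName' are assumed to come from configuration.
async Task SendToPoisonQueueAsync(string connectionString, string queueName, string body)
{
    QueueClient poisonClient = new QueueClient(connectionString, $"{queueName}-poison");
    await poisonClient.CreateIfNotExistsAsync(); // create the poison queue on first use
    await poisonClient.SendMessageAsync(body);   // keep the original message body for later analysis
}
```

The worker's catch blocks could call this helper just before deleting the message from the main queue.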
Once processing succeeds, the worker updates the status blob that the client polls:

```csharp
try
{
    // Do the actual work for this task
    await DoSomething(taskRequest);

    // Update the blob with the new status
    BlobContainerClient blobContainerClient = new BlobContainerClient(storageCS, blobContainerName);
    await blobContainerClient.CreateIfNotExistsAsync();
    BlobClient blobClient = blobContainerClient.GetBlobClient($"{taskRequest.TaskId.ToString()}-202");

    // Update the task status
    taskRequest.Status = TaskRequest.TaskStatus.Completed;
    Stream stream = new MemoryStream(Encoding.UTF8.GetBytes(taskRequest.ToJson()));

    // Upload and overwrite the blob
    await blobClient.UploadAsync(stream, overwrite: true);
    _logger.LogInformation($"Completed task processing for message {msg.MessageId}.");
}
catch (Exception ex)
{
    _logger.LogError($"Error updating blob for message {msg.MessageId} with content {body}. Error: {ex.Message}, removing message from queue.");

    // Delete the message from the queue
    await queueClient.DeleteMessageAsync(msg.MessageId, msg.PopReceipt);

    // Send the message to a poison queue
}
```

Upon error, send the message to a poison queue.
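The post focuses on the worker; for completeness, here is a minimal sketch of the client-facing half of the pattern — an ASP.NET Core minimal API that enqueues the work, returns 202 Accepted with a Location header, and exposes the status endpoint that reads the blob the worker writes. The route names, configuration key, and queue/container names are assumptions, and the sketch reuses the same TaskRequest type as the worker:

```csharp
using Azure.Storage.Blobs;
using Azure.Storage.Queues;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Assumed configuration key and resource names for this sketch
string storageCS = builder.Configuration["StorageConnectionString"]!;
string queueName = "tasks";
string blobContainerName = "task-status";

// Accept the request, enqueue it for the worker, and return 202 Accepted
// with a Location header pointing at the status endpoint.
app.MapPost("/tasks", async (TaskRequest taskRequest) =>
{
    QueueClient queueClient = new QueueClient(storageCS, queueName);
    await queueClient.CreateIfNotExistsAsync();
    await queueClient.SendMessageAsync(taskRequest.ToJson());

    return Results.Accepted($"/tasks/{taskRequest.TaskId}/status");
});

// Status endpoint the client polls; it reads the blob the worker updates.
app.MapGet("/tasks/{taskId}/status", async (string taskId) =>
{
    BlobContainerClient containerClient = new BlobContainerClient(storageCS, blobContainerName);
    BlobClient blobClient = containerClient.GetBlobClient($"{taskId}-202");

    if (!(await blobClient.ExistsAsync()).Value)
    {
        return Results.NotFound();
    }

    var content = await blobClient.DownloadContentAsync();
    return Results.Text(content.Value.Content.ToString(), "application/json");
});

app.Run();
```

A client would then poll GET /tasks/{taskId}/status until the stored TaskRequest reports Completed.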
Example
You can see an example of how this pattern can be used in practice by visiting my repo.

Conclusion
Azure Container Apps combined with the Azure Queue scale rule is an efficient way to handle long-running requests in a scalable and resilient manner.

Microsoft Fabric - Multi-Tenant Architecture
Fabric Multi-Tenant Architecture (updated version - August 24)
Organizations often face challenges in managing data for multiple tenants securely while keeping costs low. Traditional solutions may prove costly for scenarios with more than 100 tenants, especially in the common ISV scenario where the volume of trial and free tenants is much larger than the volume of paying tenants. The motivation for ISVs to use Fabric is that it brings together experiences such as Data Engineering, Data Factory, Data Science, Data Warehouse, Real-Time Analytics, and Power BI on a shared SaaS foundation. In this article, we explore the workspace-per-tenant architecture, a cost-effective solution for managing data for all tenants in Microsoft Fabric, including ETL and reporting.
Integrating Azure Front Door WAF with Azure Container Apps
Azure Container Apps (ACA) provides a fully managed container orchestration platform built on top of Azure Kubernetes Service (AKS). Whilst ACA provides automatic ingress deployment for public (external) and private (internal) ACA environments, the service doesn't currently offer Web Application Firewall (WAF) capabilities or globally distributed ingress routing. This blog post describes how to integrate Azure Front Door WAF with a private (internal) Azure Container Apps environment to harden Azure Container Apps ingress.
End-to-end TLS with AKS, Azure Front Door, Azure Private Link Service, and NGINX Ingress Controller
This article shows how Azure Front Door Premium can be configured to use an Azure Private Link Service to expose an AKS-hosted workload via the NGINX Ingress Controller, which is configured to use a private IP address on the internal load balancer.
FastTrack for Azure (FTA) program retiring December 2024
ATTENTION: As of December 31st, 2024, the FastTrack for Azure (FTA) program will be retired. FTA will support any projects currently in motion to ensure successful completion by December 31st, 2024, but will no longer accept new nominations. For more information on available programs and resources, visit: Azure Migrate, Modernize, and Innovate | Microsoft Azure