Azure APIM Management: Migrating contents of managed developer portal with source control

In the scenario I refer to in this blog, the customer would like to automate the deployment of contents for their APIM developer portal with source control. While Automate developer portal deployments - Azure API Management | Microsoft Learn gives the details of how to automate developer portal deployments using scripts, I will go through how to split the process into multiple steps to apply the concepts of GitOps and DevOps to the developer portal migration. The repository mentioned in the Azure document has all the source code and is managed by the Microsoft Azure API Management team. Its script and pipeline migrate the contents of the managed portal from a source APIM instance to a destination APIM instance, but they do not support source control of the contents or publishing the contents to multiple environments. In this blog, I will discuss how to use separate pipelines to capture a snapshot of the contents of the source APIM developer portal into a repository, create a pull request against the repository, update the existing URLs in the repository, and import and publish the contents from the repository to multiple destination APIM developer portals. To perform these steps, you should fork this repository and set up your pipelines; instructions are provided for both Azure DevOps and GitHub Actions. A template architecture diagram is provided as a baseline architecture for your developer portal migration. The high-level steps to perform the migration are:

- Configure APIM Developer Portal pipelines
- (Optional) Update URL configurations in the repository
- Capture APIM Developer Portal artifacts
- (Optional) Review/approve the PR for the production environment
- Release and publish APIM Developer Portal artifacts

Though the current implementation focuses on developer portal content alone, the scripts can be enhanced to include other developer portal related objects such as users, groups, and identities. Also, the script currently bundles all portal configuration into a single data.json file; for scenarios that require complete control over the customizations, the captureContent method in utils.js can be altered to emit separate files. I hope this post was useful and helps with a better approach to migrating a managed APIM developer portal to another instance. Happy Learning!
In the scenario I refer to this blog, customer would like to automate the deployment of contents for their APIM developer portal with a source control. While Automate developer portal deployments - Azure API Management | Microsoft Learn gives us the detail about how to automate developer portal deployments using scripts, I will go through how to split the process into multiple steps to apply the concepts of GitOps and DevOps to the developer portal migration. The repository mentioned in the Azure document has all the source code and this repository is managed by the Microsoft Azure API Management team. The script and the pipeline migrate the contents of the managed portal from source APIM instance to destination APIM instance, but it does not support the source control of contents and publishing the contents to the multiple environments. As part of this blog, I will discuss how to use the different pipelines to capture the snapshot of the contents from source APIM developer portal to a repository, create a pull request against the repository, update the existing Urls in the repository and import and publish the contents to the multiple destination APIM developer portal from the repository. To perform the above-mentioned steps, you should fork this repository and set up your pipelines. Instructions are there for Azure DevOps and GitHub Actions. We provide you with a template architecture diagram for your developer portal migration baseline architecture. Below are the high-level steps to perform the migration - Configure APIM Developer Portal Pipelines (Optional) Update URL configurations into the repository Capture APIM Developer Portal artifacts (optional) Review/Approve the PR for production environment Release and Publish APIM Developer Portal Artifacts Though the current implementation focuses on developer portal content alone, the scripts can be enhanced to include other developer portal related objects like users, groups, Identities etc. Also, the script currently bundles all portal configuration into a single data.json file. For scenarios that require complete control over the customizations, the captureContent method in the utils.js can be altered to emit separate files. I hope this post was useful and helped with a better approach of migrating managed APIM developer portal to another instance. Happy Learning!4.5KViews1like1CommentContainer Apps a Practical Scaling with Azure Queue Scale Rule
Container Apps a Practical Scaling with Azure Queue Scale Rule

The 202 pattern is a way to handle long-running requests in a scalable and resilient manner. The basic idea is to accept a request but, instead of processing it immediately, return a "202 Accepted" response with a Location header that points to a status endpoint. The client can then poll the status endpoint to check the progress of the request. Further reading: https://learn.microsoft.com/en-us/azure/architecture/patterns/async-request-reply.

What are the use cases?

This pattern can be used in the following scenarios:
- where the request processing time may vary significantly, or
- where the client does not need to wait for the request to complete before performing other tasks.

Why Azure Container Apps?

Azure Container Apps offers several benefits for implementing the 202 pattern:
- Scalability: scales automatically based on workload, making it easier to handle a large number of requests concurrently.
- Reliability: designed to be highly available and resilient, with features such as auto-restart and self-healing capabilities.
- Integration: integrates easily with other Azure services, such as Azure Service Bus and Azure Queue Storage, to build a more powerful and flexible solution.
- Monitoring: built-in monitoring and logging capabilities allow you to track the status of your requests and troubleshoot any issues that arise.
- Flexibility: supports a wide range of languages and frameworks, making it easy to build solutions using the tools and technologies you are most familiar with.

Why the Azure Queue scale rule?

A good document to start reading is Scaling in Azure Container Apps | Microsoft Learn.

Configuration

To configure the scale rule, set the queue connection string as a secret and then configure the rule itself:
1. In the Azure portal, navigate to your container app and select Secrets.
2. Select Add, then enter your secret key/value information – in our case, the queue connection string. Select Add when you're done.
3. In the Azure portal there are a few places to edit a container app's scaling rules. Enter a Rule name and select Azure Queue.
4. Enter your Queue name and your desired Queue length. Under the Authentication section, add your Secret reference (the Azure Queue connection string) and a Trigger parameter (you can enter 'connection'). Select Add when you're done.
5. Select Create when you're done.

The queue length tells the KEDA scaler how many messages each replica should handle; for example, if your setting is 5 and your queue holds 25 messages, 5 replicas are created. The Azure Queue scale rule uses this queue depth to control the scale of your replicas.

Here are the key learnings for the worker. First, the implementation should always dequeue a message from the queue:

```csharp
QueueMessage msg = await queueClient.ReceiveMessageAsync();

// Attempt to parse the message body into a TaskRequest object.
string body = msg.Body.ToString();
TaskRequest taskRequest;
try
{
    taskRequest = TaskRequest.FromJson(body);
    _logger.LogInformation($"Parsed message {msg.MessageId} with content {body} to TaskRequest object");
}
catch (Exception ex)
{
    _logger.LogError($"Error parsing message {msg.MessageId} with content {body} to TaskRequest object. Error: {ex.Message}, removing message from queue.");
    // Delete the unparseable message from the queue.
    await queueClient.DeleteMessageAsync(msg.MessageId, msg.PopReceipt);
    // Send a message to the poison queue here.
}
```

Address all failures, and log both success and failure.
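The snippet above leaves the poison-queue step as a comment. A minimal sketch of what that step could look like, assuming a companion queue that reuses the source queue's name with a "-poison" suffix (the naming convention and helper shape are illustrative, not the post's actual code):

```csharp
using Azure.Storage.Queues;

// Forward a failed payload to a companion poison queue so it can be inspected later.
static async Task SendToPoisonQueueAsync(string connectionString, string sourceQueueName, string messageBody)
{
    var poisonQueue = new QueueClient(connectionString, $"{sourceQueueName}-poison");
    await poisonQueue.CreateIfNotExistsAsync();      // create the poison queue on first use
    await poisonQueue.SendMessageAsync(messageBody); // preserve the original payload for inspection
}
```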
Updating the task status once processing completes can look like this:

```csharp
try
{
    // Perform the long-running work, then record the result.
    await DoSomething(taskRequest);
    BlobContainerClient blobContainerClient = new BlobContainerClient(storageCS, blobContainerName);
    await blobContainerClient.CreateIfNotExistsAsync();
    BlobClient blobClient = blobContainerClient.GetBlobClient($"{taskRequest.TaskId.ToString()}-202");

    // Update the task status.
    taskRequest.Status = TaskRequest.TaskStatus.Completed;
    Stream stream = new MemoryStream(Encoding.UTF8.GetBytes(taskRequest.ToJson()));

    // Upload and overwrite the status blob.
    await blobClient.UploadAsync(stream, overwrite: true);
    _logger.LogInformation($"Completed task processing for message {msg.MessageId}.");
}
catch (Exception ex)
{
    _logger.LogError($"Error updating blob for message {msg.MessageId} with content {body}. Error: {ex.Message}, removing message from queue.");
    // Delete the message from the queue.
    await queueClient.DeleteMessageAsync(msg.MessageId, msg.PopReceipt);
    // Send the message to a poison queue.
}
```

Upon error, send the message to a poison queue.

Example

You can see an example of how this pattern can be used in practice by visiting my repo.
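For a self-contained flavor of the front-end side of the pattern, here is a minimal ASP.NET Core sketch (the route names, queue and container names, and status lookup are illustrative assumptions, not the repo's actual code):

```csharp
using Azure.Storage.Blobs;
using Azure.Storage.Queues;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Assumed configuration key and resource names -- adjust to your environment.
string storageCS = builder.Configuration["StorageConnectionString"]!;

// Accept the request, enqueue it for the background worker, and return 202 Accepted
// with a Location header pointing at the status endpoint.
app.MapPost("/tasks", async () =>
{
    string taskId = Guid.NewGuid().ToString();
    var queue = new QueueClient(storageCS, "tasks");
    await queue.CreateIfNotExistsAsync();
    await queue.SendMessageAsync($"{{\"TaskId\":\"{taskId}\"}}");
    return Results.Accepted($"/tasks/{taskId}/status");
});

// Status endpoint the client polls; it reads the status blob written by the worker.
app.MapGet("/tasks/{taskId}/status", async (string taskId) =>
{
    var container = new BlobContainerClient(storageCS, "task-status");
    BlobClient blob = container.GetBlobClient($"{taskId}-202");
    if (!await blob.ExistsAsync())
        return Results.Ok(new { taskId, status = "Pending" });
    var content = await blob.DownloadContentAsync();
    return Results.Content(content.Value.Content.ToString(), "application/json");
});

app.Run();
```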
Conclusion

Azure Container Apps together with the Azure Queue scale rule is an efficient way to handle long-running requests in a scalable and resilient manner.

Integrating Azure Front Door WAF with Azure Container Apps

Azure Container Apps (ACA) provides a fully managed container orchestration platform built on top of Azure Kubernetes Service (AKS). While ACA provides automatic ingress deployment for public (external) and private (internal) ACA environments, the service doesn't currently offer a Web Application Firewall (WAF) or globally distributed ingress routing. This blog post describes how to integrate Azure Front Door WAF with a private (internal) Azure Container Apps environment to security-harden Azure Container Apps ingress.
End-to-end TLS with AKS, Azure Front Door, Azure Private Link Service, and NGINX Ingress Controller

This article shows how Azure Front Door Premium can be configured to use an Azure Private Link Service to expose an AKS-hosted workload via the NGINX Ingress Controller, configured with a private IP address on the internal load balancer.
FastTrack for Azure (FTA) program retiring December 2024

ATTENTION: As of December 31st, 2024, the FastTrack for Azure (FTA) program will be retired. FTA will support any projects currently in motion to ensure successful completion by December 31st, 2024, but will no longer accept new nominations. For more information on available programs and resources, visit: Azure Migrate, Modernize, and Innovate | Microsoft Azure
How to identify a stopped service in a Windows VM using Log Analytics Workspace

If you are a Windows user who loves working with Windows VMs on Azure and would like to monitor whether a Windows service is stopped or running using a Log Analytics query, here is a post for you.
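The post's exact query isn't reproduced here, but as an illustration, such a check can also be run programmatically with the Azure.Monitor.Query SDK. The KQL filter below is an assumption based on Service Control Manager event 7036 ("the service entered the stopped state"); it presumes the workspace collects the VM's System event log, and <workspace-id> is a placeholder:

```csharp
using Azure.Identity;
using Azure.Monitor.Query;

var client = new LogsQueryClient(new DefaultAzureCredential());

// Event ID 7036 in the System log records service state changes; filter for "stopped state".
string query = @"
Event
| where EventLog == 'System' and EventID == 7036
| where RenderedDescription has 'stopped state'
| project TimeGenerated, Computer, RenderedDescription";

var response = await client.QueryWorkspaceAsync(
    "<workspace-id>", query, new QueryTimeRange(TimeSpan.FromHours(1)));

foreach (var row in response.Value.Table.Rows)
    Console.WriteLine(string.Join(" | ", row)); // each row is a read-only list of cell values
```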