Azure Container Apps
Hosting Remote MCP Server on Azure Container Apps (ACA) using Streamable HTTP transport mechanism
About: Continuing from the earlier article, Hosting Remote MCP Server on Azure Container Apps (ACA) using SSE transport mechanism, this blog showcases hosting a remote MCP server on Azure Container Apps (ACA) using the HTTP transport type.

Overview

The Model Context Protocol (MCP) has revolutionized how AI assistants interact with external tools and data sources. While many examples focus on local implementations using stdio transport, this post demonstrates how to build and deploy a production-ready MCP server using HTTP transport in Azure Container Apps. In this article, we create a live forex converter that fetches real-time exchange rates from external APIs, showcasing how MCP servers can integrate with third-party services to provide dynamic, up-to-date information to AI assistants.

What is MCP HTTP Transport?

MCP supports multiple transport mechanisms, with HTTP being ideal for cloud deployments:

- Stdio Transport: {"type": "stdio"} - Direct process communication
- HTTP Transport: {"type": "http"} - RESTful API communication

HTTP transport enables:

- Cloud deployment and scaling
- Cross-platform compatibility
- Multiple client connections
- Load balancing and high availability
- Integration with external APIs
- Real-time data fetching from third-party services

Building and testing the MCP Server from the GitHub Repo

Follow these steps to clone the code to your local machine and test the server locally:

# Clone the repository
git clone https://github.com/deepganguly/azure-container-apps-mcp-sample.git

# Navigate to the project directory
cd azure-container-apps-mcp-sample

# Install dependencies
npm install

# Test locally: run the MCP server
npm start

# Test the server (in another terminal)
curl -X POST http://localhost:3001/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'

The server.js (check out the file in the repository above for more details) fetches live forex exchange rates from a third-party API and serves as a live price converter for any request. It performs the following functions:

1. Exchange Rate Management

- Caches live exchange rates from exchangerate-api.com for 10 minutes
- Fetches fresh rates when the cache expires or is empty
- Falls back to hardcoded rates if the external API fails (USD, EUR, GBP, JPY, INR, CAD, AUD, CHF, CNY)

2. MCP Request Handling

- Listens on the /mcp endpoint for POST requests in JSON-RPC 2.0 format
- Handles the tools/list method - returns available tools (convert_currency)
- Handles the tools/call method - executes currency conversion requests
- Returns proper MCP responses with jsonrpc, id, and result/error fields

// Fetch live exchange rates from exchangerate-api.com (free, no API key needed)
async function getLiveRates() {
  try {
    // Check cache first
    if (ratesCache && cacheTimestamp && (Date.now() - cacheTimestamp) < CACHE_DURATION) {
      return ratesCache;
    }
    console.log('Fetching live exchange rates...');
    const response = await fetch('https://api.exchangerate-api.com/v4/latest/USD');
    const data = await response.json();
    // Cache the fresh rates for the next ten-minute window
    ratesCache = data.rates;
    cacheTimestamp = Date.now();
    return ratesCache;
  } catch (error) {
    // Fall back to the hardcoded rates (see server.js for the actual constant)
    console.error('Failed to fetch live rates:', error);
    return FALLBACK_RATES;
  }
}

3. Currency Conversion Logic

- USD as base conversion - direct rate lookup for USD to other currencies
- Other to USD conversion - uses the inverse rate (1/rate)
- Cross-currency conversion - converts through USD (from→USD→to)
- Calculates the exchange rate and the final converted amount
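The conversion rules above map naturally to a small helper. The sketch below implements the described lookups; the function name and signature are illustrative assumptions (see server.js in the repository for the actual implementation), and rates is assumed to map currency codes to units per 1 USD:

// Cross-currency conversion as described above (illustrative sketch).
// `rates` maps currency codes to units per 1 USD, e.g. rates['EUR'] = 0.92.
function convertAmount(rates, amount, from, to) {
  if (from === to) return amount;
  let rate;
  if (from === 'USD') {
    rate = rates[to];                // USD -> other: direct lookup
  } else if (to === 'USD') {
    rate = 1 / rates[from];          // other -> USD: inverse rate
  } else {
    rate = rates[to] / rates[from];  // cross-currency: from -> USD -> to
  }
  return amount * rate;
}

For example, converting 100 GBP to JPY multiplies 100 by rates['JPY'] / rates['GBP'], which is exactly the from→USD→to path described above.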
4. Response Formatting

- Success response - returns formatted text with the amount, converted value, live rate, and timestamp
- Error handling - returns proper JSON-RPC error responses for failures

Deploy the App to Azure Container Apps

The code can be deployed with the following commands. Also, check out the main.bicep file provided in the repository for a quick one-step deployment:

# Clone the repository
git clone https://github.com/deepganguly/azure-container-apps-mcp-sample.git

# Login to Azure
az login

# Create resource group
az group create --name mcp-live-rates-rg --location eastus

# Create Container App environment
az containerapp env create --name mcp-forex-env --resource-group mcp-live-rates-rg --location eastus

# Deploy container app
az containerapp up --name mcp-live-forex-server --resource-group mcp-live-rates-rg --environment mcp-forex-env --source . --target-port 3001 --ingress external

Connect the MCP Server with VS Code Chat

Step 1: Get Your Deployed Server URL

After deployment, your server is available at: https://mcp-live-forex-server.****.eastus.azurecontainerapps.io/

Key Endpoints:
- MCP Endpoint: https://mcp-live-forex-server.****.eastus.azurecontainerapps.io/mcp
- Health Check: https://mcp-live-forex-server.****.eastus.azurecontainerapps.io/health

Step 2: Configure VS Code Settings

1. Open VS Code
2. Create or open the .vscode/mcp.json file with the following configuration:

{
  "servers": {
    "my-mcp-server-1a118d61": {
      "url": "https://mcp-live-forex-server.****.eastus.azurecontainerapps.io/mcp",
      "type": "http"
    }
  },
  "inputs": [
    {
      "id": "convert_currency",
      "type": "promptString",
      "description": "Convert {amount} {from} to {to} using live exchange rates"
    }
  ]
}

3. Add the mcp.json entry as a server
4. Reload the window and query the VS Code chat client

The following picture showcases the Add Server configuration from the mcp.json file and the follow-up conversation for the exchange rates.
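You can also exercise the deployed tool directly with curl before wiring up VS Code. This is a hedged sketch: the argument names amount, from, and to are assumed from the tool's prompt description above, so check server.js if the call fails validation, and replace the placeholder host with your app's FQDN:

curl -X POST https://<your-app>.eastus.azurecontainerapps.io/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"convert_currency","arguments":{"amount":100,"from":"USD","to":"EUR"}}}'

A successful call returns a JSON-RPC response whose result contains the formatted conversion text described under Response Formatting above.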
Conclusion

In this article, we saw an approach that enables you to run serverless functions in a fully managed, scalable container environment, leveraging the flexibility of containers and the power of Azure Container Apps. You can now monitor, scale, and update your app easily using Azure tools and the CLI.

Building Agents on Azure Container Apps with Goose AI Agent, Ollama and gpt-oss

Azure Container Apps (ACA) is redefining how developers build and deploy intelligent agents. With serverless scale, GPU-on-demand, and enterprise-grade isolation, ACA provides the ideal foundation for hosting AI agents securely and cost-effectively. Last month we highlighted how you can deploy n8n on Azure Container Apps to go from click-to-build to a running AI-based automation platform in minutes, with no complex setup or infrastructure management overhead. In this post, we're extending that same simplicity to AI agents, where we'll show why Azure Container Apps is the best platform for running open-source agentic frameworks like Goose. Whether you're experimenting with open-source models or building enterprise-grade automation, ACA gives you the flexibility and security you need.

Challenges when building and hosting AI agents

Building and running AI agents in production presents its own set of challenges. These systems often need access to proprietary data and internal APIs, making security and data governance critical, especially when agents interact dynamically with multiple tools and models. At the same time, developers need flexibility to experiment with different frameworks without introducing operational overhead or losing isolation. Simplicity and performance are also key. Managing scale, networking, and infrastructure can slow down iteration, while separating the agent's reasoning layer from its inference backend can introduce latency and added complexity from managing multiple services. In short, AI agent development requires security, simplicity, and flexibility to ensure reliability and speed at scale.

Why ACA and serverless GPUs for hosting AI agents

Azure Container Apps provides a secure, flexible, and developer-friendly platform for hosting AI agents and inference workloads side by side within the same ACA environment. This unified setup gives you centralized control over network policies, RBAC, observability, and more, while ensuring that both your agentic logic and model inference run securely within one managed boundary. ACA also provides the following key benefits:

- Security and data governance: Your agent runs in your private, fully isolated environment, with complete control over identity, networking, and compliance. Your data never leaves the boundaries of your container.
- Serverless economics: Scale automatically to zero when idle, and pay only for what you use; no overprovisioning, no wasted resources.
- Developer simplicity: One-command deployment, integrated with Azure identity and networking. No extra keys, infrastructure management, or manual setup required.
- Inferencing flexibility with serverless GPUs: Bring any open-source, community, or custom model. Run your inferencing apps on serverless GPUs alongside your agentic applications within the same environment. For example, running gpt-oss models via Ollama inside ACA containers avoids costly hosted inference APIs and keeps sensitive data private.

These capabilities let teams focus on innovation, not infrastructure, making ACA a natural choice for building intelligent agents.

Deploy the Goose AI Agent to ACA

The Goose AI Agent, developed by Block, is an open-source, general-purpose agent framework designed for quick deployment and easy customization. Out of the box, it supports many features like email integration, GitHub interactions, and local CLI and system tool access.
It's great for building ready-to-run AI assistants that can connect to other systems, with a modular design that makes customization simple on top of great defaults out of the box. By deploying Goose on ACA, you gain all the benefits of serverless scale, secure isolation, and GPU-on-demand, while maintaining the ability to customize and iterate quickly.

Get started: Deploy Goose on Azure Container Apps using this open-source starter template. In just a few minutes, you'll have a private, self-contained AI agent running securely on Azure Container Apps, ready to handle real-world workloads without compromise.

Goose running on Azure Container Apps: adding content to a README, submitting a PR, and sending a summary email to the team.

Additional Benefits of running Goose on ACA

Running the Goose AI Agent on Azure Container Apps (ACA) showcases how simple and powerful hosting AI agents can be.

- Always available: Goose can run continuously, handling long-lived or asynchronous workloads for hours or days, without tying up your local machine.
- Cost efficiency: ACA's pay-per-use, serverless GPU model eliminates high per-call inference costs, making it ideal for sustained or compute-intensive workloads.
- Seamless developer experience: The Goose-on-ACA starter template sets up everything for you (model server, web UI, and CLI endpoints) with no manual configuration required.

With ACA, you can go from concept to a fully running agent in minutes, without compromising on security, scalability, or cost efficiency.

Part of a Growing Ecosystem of Agentic frameworks on ACA

ACA is quickly becoming the go-to platform for containerized AI and agentic workloads. From n8n and Goose to other emerging open-source and commercial agent frameworks, developers can use ACA to experiment, scale, and secure their agents, all while taking advantage of serverless scale, GPU-on-demand, and complete network isolation. It's the same developer-first workflow that powers modern applications, now extended to intelligent agents. Whether you're building a single agent or an entire automation ecosystem, ACA provides the flexibility and reliability you need to innovate faster.

Expanding the Public Preview of the Azure SRE Agent
We are excited to share that the Azure SRE Agent is now available in public preview for everyone instantly, no sign-up required. A big thank you to all our preview customers who provided feedback and helped shape this release! Watching teams put the SRE Agent to work taught us a ton, and we've baked those lessons into a smarter, more resilient, and enterprise-ready experience. You can now find Azure SRE Agent directly in the Azure Portal and get started, or use the link below.

📖 Learn more about SRE Agent.
👉 Create your first SRE Agent (Azure login required)

What's New in Azure SRE Agent - October Update

The Azure SRE Agent now delivers secure-by-default governance, deeper diagnostics, and extensible automation, built for scale. It can even resolve incidents autonomously by following your team's runbooks. With native integrations across Azure Monitor, GitHub, ServiceNow, and PagerDuty, it supports root cause analysis using both source code and historical patterns. And since September 1, billing and reporting are available via Azure Agent Units (AAUs). Please visit the product documentation for the latest updates. Here are a few highlights for this month:

- Prioritizing enterprise governance and security: By default, the Azure SRE Agent operates with least-privilege access and never executes write actions on Azure resources without explicit human approval. Additionally, it uses role-based access control (RBAC) so organizations can assign read-only or approver roles, providing clear oversight and traceability from day one. This allows teams to choose their desired level of autonomy, from read-only insights to approval-gated actions to full automation, without compromising control.
- Covering the breadth and depth of Azure: The Azure SRE Agent helps teams manage and understand their entire Azure footprint. With built-in support for AZ CLI and kubectl, it works across all Azure services. But it doesn't stop there: diagnostics are enhanced for platforms like PostgreSQL, API Management, Azure Functions, AKS, Azure Container Apps, and Azure App Service. Whether you're running microservices or managing monoliths, the agent delivers consistent automation and deep insights across your cloud environment.
- Automating Incident Management: The Azure SRE Agent now plugs directly into Azure Monitor, PagerDuty, and ServiceNow to streamline incident detection and resolution. These integrations let the agent ingest alerts and trigger workflows that match your team's existing tools, so you can respond faster, with less manual effort.
- Engineered for extensibility: The Azure SRE Agent incident management approach lets teams reuse existing runbooks and customize response plans to fit their unique workflows. Whether you want to keep a human in the loop or empower the agent to autonomously mitigate and resolve issues, the choice is yours. This flexibility gives teams the freedom to evolve, from guided actions to trusted autonomy, without ever giving up control.
- Root cause, meet source code: The Azure SRE Agent now supports code-aware root cause analysis (RCA) by linking diagnostics directly to source context in GitHub and Azure DevOps. This tight integration helps teams trace incidents back to the exact code changes that triggered them, accelerating resolution and boosting confidence in automated responses. By bridging operational signals with engineering workflows, the agent makes RCA faster, clearer, and more actionable.
- Close the loop with DevOps: The Azure SRE Agent now generates incident summary reports directly in GitHub and Azure DevOps, complete with diagnostic context. These reports can be assigned to a GitHub Copilot coding agent, which automatically creates pull requests and merges validated fixes. Every incident becomes an actionable code change, driving permanent resolution instead of temporary mitigation.

Getting Started

- Start here: Create a new SRE Agent in the Azure portal (Azure login required)
- Blog: Announcing a flexible, predictable billing model for Azure SRE Agent
- Blog: Enterprise-ready and extensible – Update on the Azure SRE Agent preview
- Product documentation
- Product home page

Community & Support

We'd love to hear from you! Please use our GitHub repo to file issues, request features, or share feedback with the team.

Transition to Azure Functions V2 on Azure Container Apps
Introduction

Azure Functions on Azure Container Apps lets you run serverless functions in a flexible, scalable container environment. As the platform evolves, there are two mechanisms for deploying images that use the Functions programming model to Azure Container Apps:

- Functions V1 (legacy Microsoft.Web RP model)
- Functions V2 (new Microsoft.App RP model)

V2 is the latest and recommended approach to hosting Functions on Azure Container Apps. In this article, we will look at the differences between the approaches and how you can transition to the V2 model.

V1 Limitations (Legacy approach)

Function app deployments under V1 have limited functionality and experience, so transitioning to V2 is encouraged. Below are the limitations of V1.

Feature limitations
- Lacks support for native Container Apps features such as multi-revision support, Easy Auth, health probes, custom domains, scale settings, container app secrets, sidecar containers, etc.

Troubleshooting limitations
- Direct container access and real-time log viewing are not supported.
- Console access and live log output are restricted due to system-generated container configurations.
- Low-level diagnostics are available via Log Analytics, while application-level logs can be accessed through Application Insights.

DAPR integration challenges
- Compatibility issues with DAPR and .NET isolated functions, particularly during build processes due to dependency conflicts.

Functions V2 (Improved and Recommended)

Deployment with --kind=functionapp using the Microsoft.App RP reflects the newer approach to deployment (Functions on Container Apps V2).

Simplified resource management internally: Instead of relying on a proxy Function App (as in the V1 model), V2 provisions a native Azure Container App resource directly. This shift eliminates the need for dual-resource management, which previously involved both the proxy and the container app, thereby simplifying operations by consolidating everything into a single, standalone resource.

Feature-rich and fully native: As a result, V2 brings the native features of Azure Container Apps to images deployed with the Azure Functions programming model, including:

- Multi-revision management and traffic split
- Easy Auth
- Private Endpoint support
- Metrics and alerting
- CI/CD through Azure Pipelines and GitHub Actions
- Health probes
- Custom domains and managed certificates
- Setting polling intervals and cool-down intervals in scale settings
- Use of container app secrets
- Use of sidecar containers

Since V2 is a significant upgrade in experience and functionality, it's recommended to transition existing V1 deployments to V2.

Legacy direct Function image deployment approach

Some customers continue to deploy Function images as standard container apps (without kind=functionapp) using the Microsoft.App resource provider. While this method enables access to native Container Apps features, it comes with key limitations:

- Not officially supported
- No auto-scale rules — manual configuration required
- No access to new V2 capabilities on the roadmap (e.g., List Functions, Function Keys, Invocation Count)

Recommendation: Transition to Functions on Container Apps V2, which offers a significantly improved experience and enhanced functionality.

Checklist for transitioning to Functions V2 on Azure Container Apps

Below is the transition guide.

1. Preparation

- Identify your current deployment: Confirm you are running Functions V1 (Web RP) in Azure Container Apps.
- Locate your container image: Ensure you have access to the container image used in your V1 deployment.
- Document configuration: Record all environment variables, secrets, storage account connections, and networking settings from your existing app.
- Check Azure Container Apps environment quotas: Review memory, CPU, and instance limits for your Azure Container Apps environment. Request quota increases if needed.

2. Create the New V2 App

- Create a new Container App with kind=functionapp: Use the Azure Portal ("Optimize for Functions app" option) or the CLI (az functionapp create) and specify your existing container image. See the detailed guide for creating Functions on Container Apps V2.
- No code changes required: You can use the same container image you used for V1 - no need to modify your Functions code or rebuild your image.
- Replicate configuration: Apply all environment variables, secrets, and settings from your previous deployment.
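For the CLI route, a minimal sketch might look like the following. This is an illustrative assumption rather than a command from the article: resource names are placeholders and the exact flags can vary by Azure CLI version, so verify with az functionapp create --help before running it.

# Create a Functions V2 app on an existing Container Apps environment,
# reusing the container image from the V1 deployment (names are placeholders)
az functionapp create \
  --name my-func-v2 \
  --resource-group my-rg \
  --environment my-aca-environment \
  --storage-account mystorageacct \
  --image myregistry.azurecr.io/myfunctionapp:latest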
3. Validation

- Test function triggers: Confirm all triggers (HTTP, Event Hub, Service Bus, etc.) work as expected.
- Test all integrations: Validate connections to databases, storage, and other Azure services.

4. DNS and Custom Domain Updates (optional)

- Review DNS names: The new V2 app will have a different default DNS name than your V1 app.
- Update custom domains: If you use a custom domain (e.g., api.yourcompany.com), update your DNS records (CNAME or A record) to point to the new V2 app's endpoint after validation. Re-bind or update SSL/TLS certificates as needed.
- Notify users and stakeholders: Inform anyone who accesses the app directly about the DNS or endpoint change.
- Test the endpoint: Ensure the new DNS or custom domain correctly routes traffic to the V2 app.

5. Cutover

- Switch production traffic: Once validated, update DNS, endpoints, or routing to direct traffic to the new V2 app.
- Monitor for issues: Closely monitor the new deployment for errors, latency, or scaling anomalies.
- Communicate with stakeholders: Notify your team and users about the transition and any expected changes.

6. Cleanup

- Remove the old V1 app: Delete the previous V1 deployment to avoid duplication and unnecessary costs.
- Update documentation: Record the new deployment details, configuration, and any lessons learned.

Feedback and Support

We're continuously improving Functions on Container Apps V2 and welcome your input.

- Share feedback: Let us know what's working well and what could be better. Submit an issue or a feature request to the Azure Container Apps GitHub repo.
- Support channels: Use the Support Portal for technical assistance. Engage with the team via GitHub Discussions or Azure Community forums. Reach out to your Microsoft account team for enterprise-specific guidance.

Your feedback helps shape the future of serverless on containers. Thank you for being part of the journey!

Downloading Files from Azure Container App to your Local Machine via Azure Blob Storage
Azure Container Apps is a powerful tool for running containerized applications in the cloud. However, if you need to transfer files (custom log files, network captures, etc.) from your container app to another destination such as your local machine or Azure Blob Storage, you might find that the process is not as straightforward as you'd like. In this post, we'll walk you through the steps to transfer files from your Azure Container App to Azure Blob Storage using the curl command. Once the file is present in your Blob Storage, you can easily download it to your local machine from the Azure Portal itself, in case you need to analyze the logs or network traces with a local tool.

Step 1: Accessing the Azure Container App Console

To access the Azure Container App console, navigate to your container app in the Azure Portal. Once you've selected your container app, click on the "Console" option under the "Monitoring" blade. This will open up a terminal window in your browser that you can use to connect to your container.

Step 2: Installing curl

Before you can transfer files using curl, you'll need to install it within your container. To install curl, simply enter the following command in the terminal window:

apt-get update && apt-get install -y curl

Step 3: Creating the curl Command

To transfer a file from your Azure Container App to Azure Blob Storage, you'll need to create a curl command that specifies the location of the file you want to transfer and the destination container and blob in Azure Blob Storage. The curl syntax to upload a file to a container within an Azure Blob Storage account looks like this:

curl -X PUT -T [file_path] -H "x-ms-blob-type: BlockBlob" "https://[storage_account_name].blob.core.windows.net/[container_name]/[blob_name]?[sas_token]"

Make sure to replace the placeholders with the appropriate values:

- [file_path] is the local file path of the file you want to upload.
- [storage_account_name] is the name of your Azure Storage account.
- [container_name] is the name of the container you want to upload the file to.
- [blob_name] is the name of the blob you want to create or overwrite.
- [sas_token] is a Shared Access Signature (SAS) token that provides permission to upload the file to the container. You can generate a SAS token with the appropriate permissions using the Azure portal or Azure CLI.

Here's an example command to upload a file named "example.txt" to a container named "mycontainer" in a storage account named "mystorageaccount" with a SAS token:

curl -X PUT -T example.txt -H "x-ms-blob-type: BlockBlob" "https://mystorageaccount.blob.core.windows.net/mycontainer/example.txt?[sas_token]"
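If you prefer the CLI for generating the SAS token, a hedged sketch is below. It assumes you have rights to use the account key (--auth-mode key); adjust the permissions and expiry to your needs:

# Generate a SAS token for the target container (add/create/write permissions)
az storage container generate-sas \
  --account-name mystorageaccount \
  --name mycontainer \
  --permissions acw \
  --expiry 2026-01-01T00:00Z \
  --auth-mode key \
  --output tsv

Append the returned token (without a leading '?') as the [sas_token] value in the curl command above.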
Step 4: Executing the curl Command

Once you've created your curl command, you can execute it in the Azure Container App console by pasting it into the terminal window and hitting enter. This will initiate the file transfer process and upload the specified file to Azure Blob Storage.

Step 5: Downloading the file from Azure Storage to your Local Machine

Once you've confirmed the file is in your Azure Blob Storage, you can simply browse to the container within the storage account and download the file.

Note: This post assumes you are not mounting a file share to your Container App. If you are, you can simply 'mv' or 'cp' the file from whichever directory it is in to the directory where you mounted the file share volume.

Choosing the Right Azure Containerisation Strategy: AKS, App Service, or Container Apps?

Azure Kubernetes Service (AKS)

What is it? AKS is Microsoft's managed Kubernetes offering, providing full access to the Kubernetes API and control plane. It's designed for teams that want to run complex, scalable, and highly customisable container workloads, with direct control over orchestration, networking, and security.

When to choose AKS:
- You need advanced orchestration, custom networking, or integration with third-party tools.
- Your team has Kubernetes expertise and wants granular control.
- You're running large-scale, multi-service, or hybrid/multi-cloud workloads.
- You require Windows container support (with some limitations).

Advantages:
- Full Kubernetes API access and ecosystem compatibility.
- Supports both Linux and Windows containers.
- Highly customisable (networking, storage, security, scaling).
- Suitable for complex, stateful, or regulated workloads.

Disadvantages:
- Steeper learning curve; requires Kubernetes knowledge.
- You manage cluster upgrades, scaling, and security patches (though Azure automates much of this).
- Potential for over-provisioning and higher operational overhead.

Azure App Service

What is it? App Service is a fully managed Platform-as-a-Service (PaaS) for hosting web apps, APIs, and backends. It supports both code and container deployments, but is optimised for web-centric workloads.

When to choose App Service:
- You're building traditional web apps, REST APIs, or mobile backends.
- You want to deploy quickly with minimal infrastructure management.
- Your team prefers a PaaS experience with built-in scaling, SSL, and CI/CD.
- You need to run Windows containers (with some limitations).

Advantages:
- Easiest to use, minimal configuration, fast deployments.
- Built-in scaling, SSL, custom domains, and staging slots.
- Tight integration with Azure DevOps, GitHub Actions, and other Azure services.
- Handles infrastructure, patching, and scaling for you.

Disadvantages:
- Less flexibility for complex microservices or custom orchestration.
- Limited access to underlying infrastructure and networking.
- Not ideal for event-driven or non-HTTP workloads.

Azure Container Apps

What is it? Container Apps is a fully managed, serverless container platform built on Kubernetes and open-source tech like Dapr and KEDA. It abstracts away Kubernetes complexity, letting you focus on microservices, event-driven workloads, or background jobs.

When to choose Container Apps:
- You want to run microservices or event-driven workloads without managing Kubernetes.
- You need automatic scaling (including scale to zero) based on HTTP traffic or events.
- You want to use Dapr for service discovery, pub/sub, or state management.
- You're building modern, cloud-native apps but don't need direct Kubernetes API access.

Advantages:
- Serverless scaling (including to zero); pay only for what you use.
- Built-in support for microservices patterns, event-driven architectures, and background jobs.
- No cluster management; Azure handles the infrastructure.
- Integrates with Azure DevOps and GitHub Actions, and supports Linux containers from any registry.

Disadvantages:
- No direct access to Kubernetes APIs or custom controllers.
- Linux containers only (no Windows container support).
- Some advanced networking and customisation options are limited compared to AKS.
Key Differences

| Feature | Azure Kubernetes Service (AKS) | Azure App Service | Azure Container Apps |
| --- | --- | --- | --- |
| Best for | Complex, scalable, custom workloads | Web apps, APIs, backends | Microservices, event-driven, jobs |
| Management | You manage (with Azure help) | Fully managed | Fully managed, serverless |
| Scaling | Manual/auto (pods, nodes) | Auto (HTTP traffic) | Auto (HTTP/events, scale to zero) |
| API Access | Full Kubernetes API | No infra access | No Kubernetes API |
| OS Support | Linux & Windows | Linux & Windows | Linux only |
| Networking | Advanced, customisable | Basic (web-centric) | Basic, with VNet integration |
| Use Cases | Hybrid/multi-cloud, regulated, large-scale | Web, REST APIs, mobile | Microservices, event-driven, background jobs |
| Learning Curve | Steep (Kubernetes skills needed) | Low | Low-medium |
| Pricing | Pay for nodes (even idle) | Pay for plan (fixed/auto) | Pay for usage (scale to zero) |
| CI/CD Integration | Azure DevOps, GitHub, custom | Azure DevOps, GitHub | Azure DevOps, GitHub |

How to Decide?

- Start with App Service if you're building a straightforward web app or API and want the fastest path to production.
- Choose Container Apps for modern microservices, event-driven, or background processing workloads where you want serverless scaling and minimal ops.
- Go with AKS when you need full Kubernetes power, advanced customisation, or are running at enterprise scale with a skilled team.

Conclusion

Azure's containerisation portfolio is broad, but each service is optimised for different scenarios. For most new cloud-native projects, Container Apps offers the best balance of simplicity and power. For web-centric workloads, App Service remains the fastest route. For teams needing full control and scale, AKS is unmatched.

Tip: Start simple, and only move to more complex platforms as your requirements grow. Azure's flexibility means you can mix and match these services as your architecture evolves.
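To make the Tip concrete, here are minimal, hedged CLI entry points for each option. Resource names are placeholders, flags vary by CLI version, and real projects will need more options than shown:

# App Service: build and deploy the app in the current folder
az webapp up --name my-web-app

# Container Apps: build from source and deploy with external ingress
az containerapp up --name my-container-app --source . --ingress external

# AKS: create a managed cluster, then deploy with kubectl
az aks create --resource-group my-rg --name my-aks --node-count 2 --generate-ssh-keys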
Announcing the Public Preview of Azure Container Apps Azure Monitor dashboards with Grafana

We're thrilled to announce the public preview of Azure Container Apps Azure Monitor Dashboards with Grafana, a major step forward in simplifying observability for your apps and environments. With this new integration, you can view Grafana dashboards directly within your app or environment in the Azure portal, with no extra setup or cost required.

What's new?

Azure Monitor Dashboards with Grafana bring the power of Grafana's visualization capabilities to your Azure resources. Dashboards with Grafana enable you to create and edit Grafana dashboards directly in the Azure portal at no additional cost and with less administrative overhead than self-hosting Grafana or using managed Grafana services. For Azure Container Apps, this means you can access two new pre-built dashboards:

- Container App View: View key metrics like CPU usage, memory usage, request rates, replica restarts, and more.
- Environment View: See all your apps in one view with details like latest revision name, minimum and maximum replicas, CPU and memory allocations, and more for each app.

These dashboards are designed to help you quickly identify issues, optimize performance, and ensure your applications are running smoothly.

Benefits

- Drill into key metrics: Stop switching between multiple tools or building dashboards from scratch. Start from the environment dashboard to get a high-level view of all of your apps, then drill into individual app dashboards.
- Customize your views: Tailor the dashboards to your team's needs using Grafana's flexible visualization options.
- Full compatibility with open-source Grafana: Dashboards created in Azure Monitor are portable across any Grafana instance.
- Share dashboards across your team with Azure role-based access control (RBAC): Dashboards are native Azure resources, so you can securely share them using RBAC.

Get started today

For Azure Container Apps, you can experience these dashboards directly from either your environment or an individual app:

1. Navigate to your Azure Container App environment or a specific Container App in the Azure portal.
2. Open the Monitoring section and select the "Dashboards with Grafana (Preview)" blade.
3. View your metrics or customize the dashboard to meet your needs.

For detailed guidance, see aka.ms/aca/grafana

Want more? Explore the Grafana Gallery

Looking for additional customization or inspiration? Visit the Grafana Dashboard Gallery to explore thousands of community dashboards. If you prefer to use Azure Managed Grafana, here are direct links to Azure Container Apps templates:

- Azure / Container Apps / Container App View
- Azure / Container Apps / Aggregate View

You can also view other published Azure dashboards here.

Building Agentic Workflows with n8n and Azure Container Apps
Agents are rapidly emerging as the future of automation and intelligent workflows, empowering businesses to create dynamic solutions that adapt to real-world needs. Customers are recognizing immense value in building agents: these systems can orchestrate tasks, respond intelligently to changing inputs, and automate complex processes with minimal human intervention. Microsoft Azure delivers robust, end-to-end experiences for designing, deploying, and managing agents, making it easier than ever to realize business outcomes across a wide range of scenarios. The Azure Foundry Agent Service is a unified platform that enables creation, orchestration, and management of intelligent agents with integrated AI, workflow automation, and secure enterprise connectivity. Azure Logic Apps Agent Loop enables every workflow to become agentic, allowing you to orchestrate intelligent, collaborative automation solutions by seamlessly integrating AI-powered agents, human experts, and adaptive decision-making into business processes.

At the same time, open-source frameworks like n8n are gaining significant traction due to their flexibility and platform independence. n8n enables customers to build and run agents on any cloud or on-premises environment, giving organizations the freedom to customize and share workflows without vendor lock-in. Its visual workflow editor, powerful integrations, and active community make it an attractive choice for teams looking to innovate quickly and efficiently. In this blog, we'll walk you through how to deploy n8n agents on Azure Container Apps (ACA), ensuring you can leverage both the power of open source and the scalability of Azure to build the automation solutions your business needs.

Why n8n on Azure?

- Community workflows: Start quickly with pre-built automation templates and shared flows from the n8n community.
- AI built-in: Seamlessly add natural language, summarization, and reasoning with Azure Foundry's OpenAI models.
- Managed scale: Use Azure Container Apps to deploy n8n in a fully managed, container-native environment with scaling, networking, and security built in.
- Flexibility: Choose from lightweight testing to production-grade environments with the same deployment template.

Three ways to deploy n8n with Azure Container Apps

We've created an Azure deployment template that supports three common scenarios. You can move between them as your needs evolve.

- Try: Spin up n8n in minutes. Great for testing integrations with Azure OpenAI before committing to infrastructure.
- Small: Bring persistence and private networking. Designed for small teams who want to keep their workflows and data across sessions.
- Production: Scale securely and reliably. Suitable for production deployments where resilience, security, and multi-instance scaling are key.

Bringing AI into the Flow

Once n8n is running, you can plug Azure OpenAI models directly into workflows, powering:

- Automated content generation
- Intelligent routing and decision logic
- Summarization of long-form data
- Enhanced customer engagement scenarios

By pairing n8n's integrations with Azure OpenAI's reasoning and text generation, you unlock entirely new categories of automation.

Get Started Today

You can find the template and detailed instructions here.

Closing thoughts

n8n brings a thriving automation ecosystem; Azure provides the enterprise-grade foundation.
Together, they empower developers and business teams to create intelligent, scalable automations, from quick experiments to mission-critical production workflows.

Send metrics from Micronaut native image applications to Azure Monitor
The original post (Japanese) was written on 20 July 2025: MicronautからAzure Monitorにmetricを送信したい ("Sending metrics from Micronaut to Azure Monitor") – Logico Inside

This entry is related to the following one. Please take a look at it for background information: Send signals from Micronaut native image applications to Azure Monitor | Microsoft Community Hub

Prerequisites

- Maven: 3.9.10
- JDK: version 21
- Micronaut: 4.9.0 or later

The following tutorials were used as a reference:

- Create a Micronaut Application to Collect Metrics and Monitor Them on Azure Monitor Metrics
- Collect Metrics with Micronaut

Create an archetype

We can create an archetype using Micronaut's CLI (mn) or Micronaut Launch. In this entry, we use application.yml instead of application.properties for application configuration, so we need to specify the feature "yaml" to include the dependencies for using YAML.

Micronaut Launch
https://micronaut.io/launch/

$ mn create-app \
  --build=maven \
  --jdk=21 \
  --lang=java \
  --test=junit \
  --features=validation,graalvm,micrometer-azure-monitor,http-client,micrometer-annotation,yaml \
  dev.logicojp.micronaut.azuremonitor-metric

When using Micronaut Launch, click [FEATURES] and select the following features:

- validation
- graalvm
- micrometer-azure-monitor
- http-client
- micrometer-annotation
- yaml

After all features are selected, click [GENERATE PROJECT] and choose [Download Zip] to download the archetype as a Zip file.

Implementation

In this section, we use the GDK sample code found in the tutorial. The code is from the Micronaut Guides, but the database access and other parts have been taken out. We have made the following changes to the code.

a) Structure of the directory

In the GDK tutorial, folders called azure and lib are created, but this structure isn't used in the standard Micronaut archetype. So, the code in both directories has been combined.

b) Instrumentation Key

As the tutorial above and the Micronaut Micrometer documentation explain, we need to specify the Instrumentation Key. When we create an archetype using the Micronaut CLI or Micronaut Launch, a configuration assuming the use of the Instrumentation Key is included in application.properties / application.yml.

6.3 Azure Monitor Registry
Micronaut Micrometer

This configuration will work, but currently, Application Insights does not recommend accessing it using only the Instrumentation Key. So, it is better to use a connection string that includes the Instrumentation Key. To set it up, open the application.properties file and enter the following:

micronaut.metrics.export.azuremonitor.connectionString="InstrumentationKey=...."

In the case of application.yml, we need to specify the connection string in YAML format:

micronaut:
  metrics:
    enabled: true
    export:
      azuremonitor:
        enabled: true
        connectionString: InstrumentationKey=....

We can also use an environment variable such as MICRONAUT_METRICS_EXPORT_AZUREMONITOR_CONNECTIONSTRING, but since this environment variable name is too long, it is better to use a shorter one. Here's an example using AZURE_MONITOR_CONNECTION_STRING (which is also long, if you think about it).

micronaut.metrics.export.azuremonitor.connectionString=${AZURE_MONITOR_CONNECTION_STRING}

micronaut:
  metrics:
    enabled: true
    export:
      azuremonitor:
        enabled: true
        connectionString: ${AZURE_MONITOR_CONNECTION_STRING}

The connection string can be specified because Micrometer, which is used internally, already supports it. We can find the AzureMonitorConfig.java file here:
AzureMonitorConfig.java
micrometer/implementations/micrometer-registry-azure-monitor/src/main/java/io/micrometer/azuremonitor/AzureMonitorConfig.java at main · micrometer-metrics/micrometer

The settings in application.properties / application.yml are as follows. For more information about the specified meter binders, please look at the following documents:

Meter Binder
Micronaut Micrometer

micronaut:
  application:
    name: azuremonitor-metric
  metrics:
    enabled: true
    binders:
      files:
        enabled: true
      jdbc:
        enabled: true
      jvm:
        enabled: true
      logback:
        enabled: true
      processor:
        enabled: true
      uptime:
        enabled: true
      web:
        enabled: true
    export:
      azuremonitor:
        enabled: true
        step: PT1M
        connectionString: ${AZURE_MONITOR_CONNECTION_STRING}

c) pom.xml

To use the GraalVM Reachability Metadata Repository, we need to add this dependency. The latest version is 0.11.0 as of 20 July 2025.

<dependency>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>graalvm-reachability-metadata</artifactId>
  <version>0.11.0</version>
</dependency>

Add the GraalVM Maven plugin and enable the use of the GraalVM Reachability Metadata obtained from the above dependency. This plugin lets us set build arguments using buildArg (in this example, the optimisation level is specified). We can also add such options to native-image.properties; the native-image tool (and the Maven/Gradle plugin) will read them.

<plugin>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>native-maven-plugin</artifactId>
  <configuration>
    <metadataRepository>
      <enabled>true</enabled>
    </metadataRepository>
    <buildArgs combine.children="append">
      <buildArg>-Ob</buildArg>
    </buildArgs>
    <quickBuild>true</quickBuild>
  </configuration>
</plugin>

For now, let's build it as a Java application.

$ mvn clean package

Check if it works as a Java application

At first, verify that the application is running without any problems and that metrics are being sent to Application Insights. Then, run the application using the Tracing Agent to generate the necessary configuration files.

# (1) Collect configuration files such as reflect-config.json
$JAVA_HOME/bin/java \
  -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/{groupId}/{artifactId}/ \
  -jar ./target/{artifactId}-{version}.jar

# (2)-a Generate a trace file
$JAVA_HOME/bin/java \
  -agentlib:native-image-agent=trace-output=/path/to/trace-file.json \
  -jar ./target/{artifactId}-{version}.jar

# (2)-b Generate a reachability metadata file from the collected trace file
native-image-configure generate \
  --trace-input=/path/to/trace-file.json \
  --output-dir=/path/to/config-dir/

Configure Native Image with the Tracing Agent
Collect Metadata with the Tracing Agent

The following files are stored in the specified directory:

- jni-config.json
- reflect-config.json
- proxy-config.json
- resource-config.json
- reachability-metadata.json

These files can be located at src/main/resources/META-INF/native-image. The native-image tool picks up configuration files located in that directory. However, it is recommended that we place the files in subdirectories divided by groupId and artifactId, as shown below.

src/main/resources/META-INF/native-image/{groupId}/{artifactId}

native-image.properties
When creating a native image, we call the following command:

mvn package -Dpackaging=native-image

We should specify the timing of class initialization (build time or runtime), the command-line options for the native-image tool (the same command-line options work in the Maven/Gradle plugin), and the JVM arguments in the native-image.properties file. Indeed, these settings can be specified in pom.xml, but it is recommended that they be externalized.

a) Location of configuration files:

As described in the documentation, we can specify the location of configuration property files. If we build using the recommended method (placing the files in the directory src/main/resources/META-INF/native-image/{groupId}/{artifactId}), we can specify the directory location using ${.}.

-H:DynamicProxyConfigurationResources
-H:JNIConfigurationResources
-H:ReflectionConfigurationResources
-H:ResourceConfigurationResources
-H:SerializationConfigurationResources

Native Image Build Configuration

b) HTTP/HTTPS protocols support:

We need to use --enable-https / --enable-http when using the HTTP(S) protocol in our application.

URL Protocols in Native Image

c) When classes are loaded and initialized:

In the case of AOT compilation, classes are usually loaded at compile time and stored in the image heap (at build time). However, some classes might be specified to be loaded when the program is running. In these cases, it is necessary to explicitly specify initialization at runtime (and vice versa, of course). There are two types of build arguments:

# Explicitly specify initialisation at runtime
--initialize-at-run-time=...

# Explicitly specify initialisation at build time
--initialize-at-build-time=...

To enable tracing of class initialization, use the following arguments:

# Enable tracing of class initialization
--trace-class-initialization=...   # Deprecated in GraalVM 21.3
--trace-object-instantiation=...   # Current option

Specify Class Initialization Explicitly
Class Initialization in Native Image

d) Prevent fallback builds:

If the application cannot be optimized during the native image build, the native-image tool will create a fallback file, which needs a JVM. To prevent fallback builds, we need to specify the option --no-fallback. For other build options, please look at the following document:

Command-line Options

Build a Native Image application

Building a native image application takes a long time (though it has got quicker over time). If you are building it for testing purposes, we strongly recommend enabling Quick Build and setting the optimization level with the -Ob option (although the build will still take time). See below for more information:

- Maven plugin for GraalVM Native Image
- Gradle plugin for GraalVM Native Image
- Optimizations and Performance

Test as a native image application

When we start the native image application, we might see the following message. It means that GC notifications are not available because the JVM's GarbageCollectorMXBean does not provide any notifications.

GC notifications will not be available because no GarbageCollectorMXBean of the JVM provides any. GCs=[young generation scavenger, complete scavenger]

Let's check if the application works.

1) GET /books and GET /books/{isbn}

This is a normal REST API. Call both of them a few times.

2) GET /metrics

We can check the list of available metrics.
{
  "names": [
    "books.find",
    "books.index",
    "executor",
    "executor.active",
    "executor.completed",
    "executor.pool.core",
    "executor.pool.max",
    "executor.pool.size",
    "executor.queue.remaining",
    "executor.queued",
    "http.server.requests",
    "jvm.classes.loaded",
    "jvm.classes.unloaded",
    "jvm.memory.committed",
    "jvm.memory.max",
    "jvm.memory.used",
    "jvm.threads.daemon",
    "jvm.threads.live",
    "jvm.threads.peak",
    "jvm.threads.started",
    "jvm.threads.states",
    "logback.events",
    "microserviceBooksNumber.checks",
    "microserviceBooksNumber.latest",
    "microserviceBooksNumber.time",
    "process.cpu.usage",
    "process.files.max",
    "process.files.open",
    "process.start.time",
    "process.uptime",
    "system.cpu.count",
    "system.cpu.usage",
    "system.load.average.1m"
  ]
}

First, the following three metrics are custom ones added in the MicroserviceBooksNumberService class:

- microserviceBooksNumber.checks
- microserviceBooksNumber.time
- microserviceBooksNumber.latest

And the following two metrics are custom ones collected in the BooksController class, which record information such as the time taken and the number of calls. Each metric can be viewed at GET /metrics/{metric name}.

- books.find
- books.index

The following is an example of microserviceBooksNumber.*:

// microserviceBooksNumber.checks
{
  "name": "microserviceBooksNumber.checks",
  "measurements": [
    { "statistic": "COUNT", "value": 12 }
  ]
}

// microserviceBooksNumber.time
{
  "name": "microserviceBooksNumber.time",
  "measurements": [
    { "statistic": "COUNT", "value": 12 },
    { "statistic": "TOTAL_TIME", "value": 0.212468 },
    { "statistic": "MAX", "value": 0.032744 }
  ],
  "baseUnit": "seconds"
}

// microserviceBooksNumber.latest
{
  "name": "microserviceBooksNumber.latest",
  "measurements": [
    { "statistic": "VALUE", "value": 2 }
  ]
}

Here is an example of the metric books.*:

// books.index
{
  "name": "books.index",
  "measurements": [
    { "statistic": "COUNT", "value": 6 },
    { "statistic": "TOTAL_TIME", "value": 3.08425 },
    { "statistic": "MAX", "value": 3.02097 }
  ],
  "availableTags": [
    { "tag": "exception", "values": [ "none" ] }
  ],
  "baseUnit": "seconds"
}

// books.find
{
  "name": "books.find",
  "measurements": [
    { "statistic": "COUNT", "value": 7 }
  ],
  "availableTags": [
    { "tag": "result", "values": [ "success" ] },
    { "tag": "exception", "values": [ "none" ] }
  ]
}

Metrics from Azure Monitor (Application Insights)

Here is the grid view of custom metrics in Application Insights (microserviceBooksNumber.time is the average value). To confirm that the values match those in Application Insights, check the metric http.server.requests, for example. We should see three items on the graph, and the value is equal to the number of API responses (3).
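To close, here is a minimal sketch of how custom meters like microserviceBooksNumber.* are typically registered with Micrometer in a Micronaut service. The class shape and the book-count source are illustrative assumptions; see the Micronaut Guides sample for the actual MicroserviceBooksNumberService implementation:

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import jakarta.inject.Singleton;
import java.util.concurrent.atomic.AtomicInteger;

@Singleton
public class BooksNumberMetrics {

    private final Counter checks;
    private final Timer time;
    private final AtomicInteger latest;

    BooksNumberMetrics(MeterRegistry registry) {
        // The meter names match those listed by GET /metrics above
        this.checks = registry.counter("microserviceBooksNumber.checks");
        this.time = registry.timer("microserviceBooksNumber.time");
        this.latest = registry.gauge("microserviceBooksNumber.latest", new AtomicInteger(0));
    }

    public int checkBooksNumber() {
        return time.record(() -> {              // microserviceBooksNumber.time
            checks.increment();                 // microserviceBooksNumber.checks
            int count = fetchBooksCount();
            latest.set(count);                  // microserviceBooksNumber.latest
            return count;
        });
    }

    private int fetchBooksCount() {
        // Placeholder: the sample derives the number of books from the REST API
        return 2;
    }
}

The @Timed and @Counted annotations from micrometer-annotation (enabled by the archetype's micrometer-annotation feature) offer a declarative alternative for meters like books.* on controller methods.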