web apps
Build Multi-Agent AI Systems on Azure App Service
Introduction: The Evolution of AI-Powered App Service Applications

Over the past few months, we've been exploring how to supercharge existing Azure App Service applications with AI capabilities. If you've been following along with this series, you've seen how we can quickly integrate AI Foundry agents with MCP servers and host remote MCP servers directly on App Service. Today, we're taking the next leap forward by demonstrating how to build sophisticated multi-agent systems that leverage connected agents, Model Context Protocol (MCP), and OpenAPI tools - all running on Azure App Service's Premium v4 tier with .NET Aspire for enhanced observability and cloud-native development experience.

Want the full technical details? This blog provides an overview of the key concepts and capabilities. For comprehensive setup instructions, architecture deep-dives, performance considerations, debugging guidance, and detailed technical documentation, check out the complete README on GitHub.

What Makes This Sample Special?
This fashion e-commerce demo showcases several cutting-edge technologies working together:

Multi-Agent Architecture with Connected Agents
Unlike single-agent systems, this sample implements an orchestration pattern where specialized agents work together:
Main Orchestrator: Coordinates workflow and handles inventory queries via MCP tools
Cart Manager: Specialized in shopping cart operations via OpenAPI tools
Fashion Advisor: Provides expert styling recommendations
Content Moderator: Ensures safe, professional interactions

Advanced Tool Integration
MCP Tools: Real-time connection to external inventory systems using the Model Context Protocol
OpenAPI Tools: Direct agent integration with your existing App Service APIs
Connected Agent Tools: Seamless agent-to-agent communication with automatic orchestration

.NET Aspire Integration
Enhanced development experience with built-in observability
Simplified cloud-native application patterns
Real-time monitoring and telemetry (when developing locally)

Premium v4 App Service Tier
Latest App Service performance capabilities
Optimized for modern cloud-native workloads
Enhanced scalability for AI-powered applications

Key Technical Innovations
Connected Agent Orchestration: Your application communicates with a single main agent, which automatically coordinates with specialist agents as needed. No changes to your existing App Service code required.
Dual Tool Integration: This sample demonstrates both MCP tools for external system connectivity and OpenAPI tools for direct API integration.
Zero-Infrastructure Overhead: Agents work directly with your existing App Service APIs and external endpoints - no additional infrastructure deployment needed.

Why These Technologies Matter for Real Applications
The combination of these technologies isn't just about showcasing the latest features - it's about solving real business challenges. Let's explore how each component contributes to building production-ready AI applications.
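In the sample, Azure AI Foundry's connected-agent tooling performs this coordination for you, but the routing idea itself is simple enough to sketch in plain Python. Everything below is hypothetical illustration (no Azure SDK involved); the names merely mirror the sample's orchestrator and specialists:

```python
# Plain-Python sketch of the connected-agent pattern described above.
# All class and capability names are hypothetical, chosen to mirror the
# sample's main orchestrator and its specialist agents.

class SpecialistAgent:
    """A specialist that handles exactly one capability (cart, styling, ...)."""

    def __init__(self, name, capability, handler):
        self.name = name
        self.capability = capability
        self.handler = handler


class MainOrchestrator:
    """The single agent your app talks to; it routes work to specialists."""

    def __init__(self):
        self._specialists = {}

    def connect(self, agent):
        # Analogous to registering a connected-agent tool on the main agent.
        self._specialists[agent.capability] = agent

    def handle(self, capability, request):
        # The caller never addresses a specialist directly; the orchestrator
        # decides which connected agent should service the request.
        agent = self._specialists.get(capability)
        if agent is None:
            return f"orchestrator: no specialist handles '{capability}'"
        return f"{agent.name}: {agent.handler(request)}"


orchestrator = MainOrchestrator()
orchestrator.connect(SpecialistAgent("CartManager", "cart", lambda r: f"added '{r}' to the cart"))
orchestrator.connect(SpecialistAgent("FashionAdvisor", "styling", lambda r: f"styled a look for {r}"))

print(orchestrator.handle("cart", "blue scarf"))
print(orchestrator.handle("styling", "a summer wedding"))
```

In the actual sample this routing happens service-side: your application only ever calls the main orchestrator agent, and Foundry decides when to hand off to the cart, styling, or moderation specialists.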
.NET Aspire: Enhancing the Development Experience
This sample leverages .NET Aspire to provide enhanced observability and simplified cloud-native development patterns. While .NET Aspire is still in preview on App Service, we encourage you to start exploring its capabilities and keep an eye out for future updates planned for later this year.

What's particularly exciting about Aspire is how it maintains the core principle we've emphasized throughout this series: making AI integration as simple as possible. You don't need to completely restructure your application to benefit from enhanced observability and modern development patterns.

Premium v4 App Service: Built for Modern AI Workloads
This sample is designed to run on Azure App Service Premium v4, which we recently announced is Generally Available. Premium v4 is the latest offering in the Azure App Service family, delivering enhanced performance, scalability, and cost efficiency.

From Concept to Implementation: Staying True to Our Core Promise
Throughout this blog series, we've consistently demonstrated that adding AI capabilities to existing applications doesn't require massive rewrites or complex architectural changes. This multi-agent sample continues that tradition - what might seem like a complex system is actually built using the same principles we've established:
Incremental Enhancement: Build on your existing App Service infrastructure
Simple Integration: Use familiar tools like azd up for deployment
Production-Ready: Leverage mature Azure services you already trust
Future-Proof: Easy to extend as new capabilities become available

Looking Forward: What's Coming Next
This sample represents just the beginning of what's possible with AI-powered App Service applications. Here's what we're working on next:

MCP Authentication Integration
Enhanced security patterns for production MCP server deployments, including Azure Entra ID integration.
New Azure AI Foundry Features
As Azure AI Foundry continues to evolve, we'll be updating this sample to showcase:
New agent capabilities
Enhanced tool integrations
Performance optimizations
Additional model support

Advanced Analytics and Monitoring
Deeper integration with Azure Monitor for:
Agent performance analytics
Business intelligence from agent interactions

Additional Programming Language Support
Following our multi-language MCP server samples, we'll be adding support for other languages in samples that will be added to the App Service documentation.

Getting Started Today
Ready to add multi-agent capabilities to your existing App Service application? The process follows the same streamlined approach we've used throughout this series.

Quick Overview
Clone and Deploy: Use azd up for one-command infrastructure deployment
Create Your Agents: Run a Python setup script to configure the multi-agent system
Connect Everything: Add one environment variable to link your agents
Test and Explore: Try the sample conversations and see agent interactions

For detailed step-by-step instructions, including prerequisites, troubleshooting tips, environment setup, and comprehensive configuration guidance, see the complete setup guide in the README.

Learning Resources
If you're new to this ecosystem, we recommend starting with these foundational resources:
Integrate AI into your Azure App Service applications - Comprehensive guide with language-specific tutorials for building intelligent applications on App Service
Supercharge Your App Service Apps with AI Foundry Agents Connected to MCP Servers - Learn the basics of integrating AI Foundry agents with MCP servers
Host Remote MCP Servers on App Service - Deploy and manage MCP servers on Azure App Service

Conclusion: The Future of AI-Powered Applications
This multi-agent sample represents the natural evolution of our App Service AI integration journey.
We started with basic agent integration, progressed through MCP server hosting, and now we're showcasing sophisticated multi-agent orchestration - all while maintaining our core principle that AI integration should enhance, not complicate, your existing applications. Whether you're just getting started with AI agents or ready to implement complex multi-agent workflows, the path forward is clear and incremental. As Azure AI Foundry adds new capabilities and App Service continues to evolve, we'll keep updating these samples and sharing new patterns. Stay tuned - the future of AI-powered applications is being built today, one agent at a time.

Additional Resources

Start Building
GitHub repository for this sample - Comprehensive setup guide, architecture details, troubleshooting, and technical deep-dives

Learn More
Azure AI Foundry Documentation: Connected Agents Guide
MCP Tools Setup: Model Context Protocol Integration
.NET Aspire on App Service: Deployment Guide
Premium v4 App Service: General Availability Announcement

Have questions or want to share how you're using multi-agent systems in your applications? Join the conversation in the comments below. We'd love to hear about your AI-powered App Service success stories!

Announcing the Public Preview of the New App Service Quota Self-Service Experience
What's New?
The updated experience introduces a dedicated App Service Quota blade in the Azure portal, offering a streamlined and intuitive interface to:
View current usage and limits across the various SKUs
Set custom quotas tailored to your App Service plan needs
This new experience empowers developers and IT admins to proactively manage resources, avoid service disruptions, and optimize performance.

Quick Reference - Start here!
If your deployment requires quota for ten or more subscriptions, then file a support ticket with problem type Quota.
If any subscription included in your request requires zone redundancy, then file a support ticket with problem type Quota.
Otherwise, leverage the new self-service experience to increase your quota automatically.

Self-service Quota Requests
For non-zone-redundant needs, quota alone is sufficient to enable App Service deployment or scale-out. Follow the provided steps to place your request.

1. Navigate to the Quotas resource provider in the Azure portal

2. Select App Service
Navigating the primary interface:
Each App Service VM size is represented as a separate SKU. If the intention is to be able to scale up or down within a specific offering (e.g., Premium v3), then an equivalent number of VMs needs to be requested for each applicable size of that offering (e.g., request 5 instances for both P1v3 and P3v3).
As with other quotas, you can filter by region, subscription, provider, or usage. You can also group the results by usage, quota (App Service VM type), or location (region).
Current usage is represented as App Service VMs. This allows you to quickly identify which SKUs are nearing their quota limits.
Adjustments can be made inline: no need to visit another page. This is covered in detail in the next section.

3. Request quota adjustments
Clicking the pen icon opens a flyout window to capture the quota request. The quota type (App Service SKU) is already populated, along with current usage.
Note that your request is not incremental: you must specify the new limit that you wish to see reflected in the portal. For example, to request two additional instances of P1v2 VMs, you would file the request with a new limit equal to your current limit plus two. Click Submit to send the request for automatic processing.

How quota approvals work:
Immediately upon submitting a quota request, you will see a processing dialog. If the quota request can be automatically fulfilled, then no support request is needed; you should receive confirmation within a few minutes of submission. If the request cannot be automatically fulfilled, then you will be given the option to file a support request with the same information - for example, when the requested new limit exceeds what can be automatically granted for the region.

4. If applicable, create a support ticket
When creating a support ticket, you will need to repopulate the Region and App Service plan details; the new limit has already been populated for you. If you forget the region or SKU that was requested, you can reference them in your notifications pane. If you choose to create a support ticket, then you will interact with the capacity management team for that region. This is a 24x7 service, so requests may be created at any time. Once you have filed the support request, you can track its status via the Help + support dashboard.

Known issues
The self-service quota request experience for App Service is in public preview. Here are some caveats worth mentioning while the team finalizes the release for general availability:
Closing the quota request flyout window will stop meaningful notifications for that request. You can still view the outcome of your quota requests by checking actual quota, but if you want to rely on notifications for alerts, then we recommend leaving the quota request window open for the few minutes that it is processing.
Some SKUs are not yet represented in the quota dashboard.
These will be added later in the public preview.
The Activity Log does not currently provide a meaningful summary of previous quota requests and their outcomes. This will also be addressed during the public preview.
As noted in the walkthrough, the new experience does not enable zone-redundant deployments. Quota is an inherently regional construct, and zone-redundant enablement requires a separate step that can only be taken in response to a support ticket being filed.
Quota API documentation is being drafted to enable bulk non-zone-redundant quota requests without requiring you to file a support ticket.

Filing a Support Ticket
If your deployment requires zone redundancy or contains many subscriptions, then we recommend filing a support ticket with issue type "Technical" and problem type "Quota".

We want your feedback!
If you notice any aspect of the experience that does not work as expected, or you have feedback on how to make it better, please use the comments below to share your thoughts!

Azure App Service Premium v4 plan is now in public preview
Azure App Service Premium v4 plan is the latest offering in the Azure App Service family, designed to deliver enhanced performance, scalability, and cost efficiency. We are excited to announce the public preview of this major upgrade to one of our most popular services.

Key benefits:
Fully managed platform-as-a-service (PaaS) to run your favorite web stack, on both Windows and Linux.
Built using next-gen Azure hardware for higher performance and reliability.
Lower total cost of ownership with new pricing tailored for large-scale app modernization projects.
and more to come!

[Note: As of September 1st, 2025, Premium v4 is Generally Available on Azure App Service! See the launch blog for more details!]

Fully managed platform-as-a-service (PaaS)
As the next generation of one of the leading PaaS solutions, Premium v4 abstracts infrastructure management, allowing businesses to focus on application development rather than server maintenance. This reduces operational overhead, as tasks like patching, load balancing, and auto-scaling are handled automatically by Azure, saving time and IT resources. App Service's auto-scaling optimizes costs by adjusting resources based on demand and saves you the cost and overhead of under- or over-provisioning. Modernizing applications with PaaS delivers a compelling economic impact by helping you eliminate legacy inefficiencies, decrease long-term costs, and increase your competitive agility through seamless cloud integration, CI/CD pipelines, and support for multiple languages.

Higher performance and reliability
Built on the latest Dadsv6 / Eadsv6 series virtual machines and NVMe-based temporary storage, the App Service Premium v4 plan offers higher performance compared to previous Premium generations.
According to preliminary measurements during private preview, you may expect to see:
>25% performance improvement using Pmv4 plans, relative to the prior generation of memory-optimized Pmv3 plans
>50% performance improvement using Pv4 plans, relative to the prior generation of non-memory-optimized Pv3 plans
Please note that these features and metrics are preliminary and subject to change during public preview.

Premium v4 provides a similar line-up to Premium v3, with four non-memory-optimized options (P0v4 through P3v4) and five memory-optimized options (P1mv4 through P5mv4). Features like deployment slots, integrated monitoring, and enhanced global zone resilience further enhance the reliability and user experience, improving customer satisfaction.

Lower total cost of ownership (TCO)
Driven by the urgency to adopt generative AI and to stay competitive, application modernization has rapidly become one of the top priorities in boardrooms everywhere. Whether you are a large enterprise or a small shop running your web apps in the cloud, you will find App Service Premium v4 is designed to offer you the most compelling performance-per-dollar compared to previous generations, making it an ideal managed solution to run high-demand applications. Using the agentic AI GitHub Copilot app modernization tooling announced in preview at Microsoft Build 2025, you can save up to 24% when you upgrade and modernize your .NET web apps running on Windows Server to Azure App Service for Windows on Premium v4 compared with Premium v3. You will also be able to use deeper commitment-based discounts such as reserved instances and savings plans for Premium v4 when the service is generally available (GA). For more detailed pricing on the various CPU and memory options, see the pricing pages for Windows and Linux as well as the Azure Pricing Calculator.

Get started
The preview will roll out globally over the next few weeks.
Premium v4 is currently available in the following regions [updated 08/22/2025]: Australia East, Canada Central, Central US, East US, East US 2, France Central, Japan East, Korea Central, North Central US, North Europe, Norway East, Poland Central, Southeast Asia, Sweden Central, Switzerland North, UK South, West Central US, West Europe, West US, and West US 3.

App Service is continuing to expand the Premium v4 footprint, with many additional regions planned to come online over the coming weeks and months. Customers can reference the product documentation for details on how to configure Premium v4 as well as a regularly updated list of regional availability. We encourage you to start assessing your apps using the partners and tools for Azure App Modernization, start using Premium v4 to better understand the benefits and capabilities, and build a plan to hit the ground running when the service is generally available. Watch this space for more information on GA!

Key Resources
Microsoft Build 2025 on-demand session: https://aka.ms/Build25/BRK200
Azure App Service documentation: https://aka.ms/AppService/Pv4docs
Azure App Service web page: https://www.azure.com/appservice
Join the Community Standups: https://aka.ms/video/AppService/community
Follow us on X: @AzAppService

What's the secret sauce for getting Functions API to work with static web site?
I'm brand new, got my first Azure static web site up and running so that's good! Now I need to create some images in code and that's fighting me tooth and nail. The code to generate the image looks like this:

using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;
using SkiaSharp;
using System.Diagnostics;
using System.IO;
using System.Net;

namespace Api
{
    public class GenerateImage
    {
        private readonly ILogger _logger;

        public GenerateImage(ILoggerFactory loggerFactory)
        {
            Debug.WriteLine($"GenerateImage.GenerateImage()");
            _logger = loggerFactory.CreateLogger<GenerateImage>();
        }

        // http://localhost:7071/api/image/124 works
        [Function("GenerateImage")]
        public HttpResponseData Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "image/{id}")] HttpRequestData req,
            string id)
        {
            int width = 200, height = 100;
            Debug.WriteLine($"GenerateImage.Run() [id={id}]");

            using var bitmap = new SKBitmap(width, height);
            using var canvas = new SKCanvas(bitmap);
            canvas.Clear(SKColors.LightBlue);

            var paint = new SKPaint
            {
                Color = SKColors.Black,
                TextSize = 24,
                IsAntialias = true
            };
            canvas.DrawText($"ID: {id}", 10, 50, paint);

            using var ms = new MemoryStream();
            bitmap.Encode(ms, SKEncodedImageFormat.Png, 100);
            ms.Position = 0;

            var response = req.CreateResponse(HttpStatusCode.OK);
            response.Headers.Add("Content-Type", "image/png");
            response.Headers.Add("Cache-Control", "public, max-age=86400"); // 1 day
            // response.Body = ms;
            ms.CopyTo(response.Body);
            return response;
        }
    }
}

and if I navigate to http://localhost:7071/api/image/124 (for example) it happily generates an image with the number 124 in it. But if I add the HTML tag <img src="/api/image/123" alt="Generated Image"> to one of my other web pages, it says there's no such page. Apparently this is because my web pages are coming from my web site and it's at https://localhost:7154 and it doesn't know how to contact the Functions API.
My staticwebapp.config.json looks like this:

{
  "routes": [
    {
      "route": "/api/*",
      "allowedRoles": [ "anonymous" ]
    }
  ],
  "navigationFallback": {
    "rewrite": "/index.html",
    "exclude": [ "/api/*" ]
  }
}

What am I missing?

Generating Classes with Custom Naming Conventions Using GitHub Copilot and a Custom MCP Server
GitHub Spark and GitHub Copilot are powerful development tools that can significantly boost productivity even when used out of the box. However, in enterprise settings, a common request is for development support that aligns with specific compliances or regulations. While GitHub Copilot allows you to choose models like GPT-4o or others, it does not currently support the use of custom fine-tuned models. Additionally, many users might find it unclear how to integrate Copilot with external services, which can be a source of frustration.

To address such needs, one possible approach is to build a custom MCP server and connect it to GitHub Copilot. For a basic "Hello World" style guide on how to set this up, please refer to the articles below.
https://devblogs.microsoft.com/dotnet/build-a-model-context-protocol-mcp-server-in-csharp/
https://learn.microsoft.com/en-us/dotnet/ai/quickstarts/build-mcp-server

By building an MCP server as an ASP.NET Core application and integrating it with GitHub Copilot, you can introduce custom rules and functionality tailored to your organization's needs. The architecture for this post would look like the following:

While the MCP server can technically be hosted anywhere as long as HTTP communication is possible, for enterprise use cases, it's often recommended to deploy it within a private endpoint inside a closed Virtual Network. In production environments, this setup can be securely accessed from client machines via ExpressRoute, ensuring both compliance and network isolation.

Building an MCP Server Using ASP.NET Core
Start by creating a new ASP.NET Core Web API project in Visual Studio. Then, install the required libraries via NuGet. Note: Make sure to enable the option to include preview versions - otherwise, some of the necessary packages may not appear in the list.
ModelContextProtocol
ModelContextProtocol.AspNetCore

Next, update the Program.cs file as shown below to enable the MCP server functionality.
We'll create the NamingConventionManagerTool class later, but as you can see, it's being registered via Dependency Injection during application startup. This allows it to be integrated as part of the MCP server's capabilities.

using MCPServerLab01.Tools;

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(serverOptions =>
{
    serverOptions.ListenAnyIP(8888); // you can change the port to fit your environment
});

builder.Logging.AddConsole(consoleLogOptions =>
{
    consoleLogOptions.LogToStandardErrorThreshold = LogLevel.Trace;
});

builder.Services.AddMcpServer()
    .WithHttpTransport()
    .WithTools<NamingConventionManagerTool>();

var app = builder.Build();

app.MapMcp();

app.Run();

Next, create the NamingConventionManagerTool.cs file. By decorating the class with the [McpServerToolType] and [McpServerTool] attributes, you can expose it as a feature accessible via the MCP server. In this example, we'll add a tool that assigns class names based on business logic IDs - a common pattern in system integration projects. The class will include the following methods:
GetNamingRules: Provides an overview of the naming conventions to follow.
GenerateClassNamingConvention: Generates a class name based on a given business logic ID.
DetermineBusinessCategory: Extracts the business logic ID from a given class name.

As noted in the prompt, we'll assume this is part of a fictional project called Normalian Project.

using ModelContextProtocol.Server;
using System.ComponentModel;
using System.Text.Json.Serialization;

namespace MCPServerLab01.Tools;

[McpServerToolType]
[Description()]
public class NamingConventionManagerTool
{
    // Counter for sequential management
    // This is just a trial, so here is a static variable. For production, consider saving to a DB or other persistent storage.
    static int _counter = 0;

    [McpServerTool, Description("""
        Provides Normalian Project rules that must be followed when adding or modifying programs.
        Be sure to refer to these rules as they are mandatory.
        """)]
    public string GetNamingRules()
    {
        return """
            In this Normalian project, to facilitate management, class names must follow naming conventions based on business categories.
            Class names according to business categories are provided by the `GenerateClassNamingConvention` tool.
            Please define classes using the names provided here. Do not define classes with any other names.
            If you are unsure about the business category from the class name, use the `DetermineBusinessCategory` tool to obtain the business category.
            """;
    }

    [McpServerTool, Description("""
        Retrieves a set of classes and namespaces that should be created for the specified business category in Normalian project.
        You must create classes using the names suggested here.
        """)]
    public ClassNamingConvention GenerateClassNamingConvention(
        [Description("Business category for the class to be created")] BusinessCategory businessCategory)
    {
        var number = _counter++;
        var prefix = businessCategory switch
        {
            BusinessCategory.NormalianOrder => "A",
            BusinessCategory.NormalianProduct => "B",
            BusinessCategory.NormalianCustomer => "C",
            BusinessCategory.NormalianSupplier => "D",
            BusinessCategory.NormalianEmployee => "E",
            _ => throw new ArgumentException("Unknown category."),
        };
        var name = $"{prefix}{number:D4}";

        return new ClassNamingConvention(
            ServiceNamespace: "{YourRootNamespace}.Services",
            ServiceClassName: $"{name}Service",
            UsecaseNamespace: "{YourRootNamespace}.Usecases",
            UsecaseClassName: $"{name}Usecase",
            DtoNamespace: "{YourRootNamespace}.Dtos",
            DtoClassName: $"{name}Dto");
    }

    [McpServerTool, Description("If you do not know the business category in Normalian project from the class name, check the naming convention to obtain the business category to which the class belongs.")]
    public BusinessCategory DetermineBusinessCategory(
        [Description("Class name")] string className)
    {
        ArgumentException.ThrowIfNullOrEmpty(className);
        var prefix = className[0];
        return prefix
            switch
            {
                'A' => BusinessCategory.NormalianOrder,
                'B' => BusinessCategory.NormalianProduct,
                'C' => BusinessCategory.NormalianCustomer,
                'D' => BusinessCategory.NormalianSupplier,
                'E' => BusinessCategory.NormalianEmployee,
                _ => throw new ArgumentException("Unknown class name."),
            };
    }
}

[Description("Class name to use in Normalian project")]
public record ClassNamingConvention(
    [Description("Service namespace")] string ServiceNamespace,
    [Description("Class name to use for the service layer")] string ServiceClassName,
    [Description("Usecase namespace")] string UsecaseNamespace,
    [Description("Class name to use for the usecase layer")] string UsecaseClassName,
    [Description("DTO namespace")] string DtoNamespace,
    [Description("Class name to use for DTOs")] string DtoClassName);

[JsonConverter(typeof(JsonStringEnumConverter))]
public enum BusinessCategory
{
    NormalianOrder,
    NormalianProduct,
    NormalianCustomer,
    NormalianSupplier,
    NormalianEmployee,
}

Next, run the project in Visual Studio to launch the MCP server as an ASP.NET Core application. Once the application starts, take note of the HTTP endpoint displayed in the console or output window - this will be used to interact with the MCP server.

Connecting GitHub Copilot Agent to the MCP Server
Next, connect the GitHub Copilot Agent to the MCP server. You can easily connect the GitHub Copilot Agent by simply specifying the MCP server's endpoint. To add the server, select Agent mode and click the wrench icon as shown below. From Add MCP Server, select HTTP and specify the http://localhost:<<PORT>>/sse endpoint. Give it an appropriate name, then choose where to save the MCP server settings - either User Settings or Workspace Settings. If you will use it just for yourself, User Settings should be fine. However, selecting Workspace Settings is useful when you want to share the configuration with your team. Since this is just a trial and we only want to use it within this workspace, we chose Workspace Settings.
This will create a .vscode/mcp.json file with the following content:

{
  "servers": {
    "mine-mcp-server": {
      "type": "http",
      "url": "http://localhost:8888/"
    }
  },
  "inputs": []
}

You'll see a Start icon on top of the JSON file - click it to launch the MCP server. Once started, the MCP server will run as shown below. You can also start it from the GitHub Copilot Chat window. After launching, if you click the wrench icon in the Chat window, you'll see a list of tools available on the connected MCP server.

Using the Created Features via GitHub Copilot Agent
Now, let's try it out. Open the folder of your .NET console app project in VS Code, and make a request to the Agent like the one shown below. It's doing a great job trying to check the rules. Next, it continues by trying to figure out the class name and other details needed for implementation. Then, following the naming conventions, it looks up the appropriate namespace and even starts creating folders. Once the folder is created, the class is properly generated with the specified class name. Even when you ask about the business category, it uses the tool correctly to look it up. Impressive!

Conclusion
In this article, we introduced how to build your own MCP server and use it via the GitHub Copilot Agent. By implementing an MCP server and adding tools tailored to your business needs, you can gain a certain level of control over the Agent's behavior and build your own ecosystem. For this trial, we used a local machine to test the behavior, but for production use, you'll need to consider deploying the MCP server to Azure, adding authentication features, and other enhancements.

Supercharge Your App Service Apps with AI Foundry Agents Connected to MCP Servers
The integration of AI into web applications has reached a new milestone with the introduction of Model Context Protocol (MCP) support in Azure AI Foundry Agent Service. This powerful new feature allows developers to extend the capabilities of their Azure AI Foundry agents by connecting them to tools hosted on remote MCP servers - and the best part? Both your applications and MCP servers can be hosted seamlessly on Azure App Service. No custom code is required to get your agents hooked up to these MCP servers either; it's built right into this new functionality. In this blog post, we'll explore how this new functionality works, demonstrate a practical implementation, and show you how to get started with your own MCP-enabled applications on App Service.

What is Model Context Protocol (MCP)?
The Model Context Protocol (MCP) is an open standard that defines how applications provide tools and contextual data to large language models (LLMs). Think of it as a standardized bridge that allows AI agents to interact with external systems, APIs, and data sources. Key benefits of MCP include:
Standardized Integration: A consistent way to connect AI agents with external tools
Flexible Tool Discovery: Dynamic discovery of available tools and capabilities
Scalable Architecture: Easily share tools across multiple agents and applications
Simple Implementation: Less complex than traditional API integration approaches

Introducing Our MCP-Enabled App Service Solution
We've built a comprehensive solution that demonstrates how to leverage MCP with Azure AI Foundry agents on App Service. Our implementation includes:

1.
MCP Agent Client Application
Repository: Azure-Samples/app-service-mcp-foundry-agent-python
This is a FastAPI web application that allows you to:
Create and manage Azure AI Foundry agents
Connect agents to remote MCP servers
Chat with agents through a web interface
Deploy easily to Azure App Service using Azure Developer CLI (azd)
The application serves as both a demonstration and a starting point for your own MCP-enabled applications. It can be used to connect to any publicly accessible MCP server, so it's a great way to get started with the new MCP server connected agents feature in Azure AI Foundry. No custom code required. All you need is a publicly accessible MCP server and you can get started immediately. The app uses a managed identity to securely connect to your Azure AI Foundry project resource.

2. Example MCP Server - To-do List MCP Server
Repository: Azure-Samples/app-service-python-todo-mcp
This is a complete MCP server implementation that provides to-do list management capabilities. The server exposes MCP tools that allow agents to:
Create, read, update, and delete to-do items
List to-dos with filtering and sorting options
Manage to-do categories and priorities
Perform bulk operations on to-do items
This example demonstrates how easy it is to take an existing application (a to-do list) and add AI agent capabilities through MCP integration. The agent requires no custom code, and the to-do list app required no changes to support the agent integration. Use this app as a starting point to experiment with the MCP server connected agents feature in Azure AI Foundry.

Getting Started
Ready to try this yourself? Here's how to get started:

Prerequisites
Azure subscription
Azure Developer CLI (azd) installed
Python 3.11+ (for local development)

Quick Deployment
1. Clone the main application repository:
git clone https://github.com/Azure-Samples/app-service-mcp-foundry-agent-python.git
cd app-service-mcp-foundry-agent-python

2.
2. Deploy to Azure using azd:

```shell
azd auth login
azd up
```

   Note: Since MCP support in Azure AI Foundry is currently in preview, you must deploy to one of the supported regions. Check the "How it works" section of the documentation for the current list of supported regions. The `azd up` command will prompt you to select a region during deployment.

3. Deploy the example MCP server (in a separate terminal):

```shell
git clone https://github.com/Azure-Samples/app-service-python-todo-mcp.git
cd app-service-python-todo-mcp
azd up
```

That's it! Both applications will be deployed to Azure App Service with all the necessary Azure AI Foundry resources configured automatically. You will end up with two separate App Services: one for the chat app and one for the to-do list app. Open both of these apps side by side to see the magic in real time.

### Using the Application

Once deployed, you can:

1. **Connect to your MCP server**: Enter the URL of the MCP endpoint of your deployed to-do MCP server into the chat app. For the provided to-do app, the URL is given at the top of the site. You can alternatively use any publicly accessible MCP server here and start chatting with an agent that will connect to it.
2. **Start chatting**: Ask the agent to manage your to-dos using natural language.
3. **See the magic**: Watch as the agent uses MCP tools to perform to-do operations. Refresh the page for your to-do app to see the changes as your agent interacts with it.

## MCP vs. OpenAPI: Multiple options to get your apps connected to AI agents

Previously, we explained how to connect Azure AI Foundry agents to web applications using OpenAPI-defined tools from the AI Foundry Service. This method relies on an OpenAPI specification and is an excellent way to add agents to existing apps without any code changes. For legacy applications where updating to support MCP server capabilities isn't feasible, OpenAPI integration offers a straightforward path to modern AI experiences.
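Both integration styles ultimately describe the same thing to an agent: a named operation plus a schema for its inputs. As a rough sketch (the tool name, path, and fields below are hypothetical, not taken from the sample apps), here is how one and the same to-do operation could be represented on each side — MCP tool calls travel as JSON-RPC 2.0 messages, while an OpenAPI tool is derived from a spec your app already serves:

```python
import json

# MCP side: agents invoke tools with JSON-RPC 2.0 messages such as tools/call.
# "create_todo" is a hypothetical tool name for illustration only.
mcp_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_todo",
        "arguments": {"title": "Buy milk"},
    },
}

# OpenAPI side: the agent derives a callable tool from the spec;
# operationId plays roughly the role of the MCP tool name.
openapi_fragment = {
    "openapi": "3.0.3",
    "info": {"title": "Todo API", "version": "1.0.0"},
    "paths": {
        "/todos": {
            "post": {
                "operationId": "createTodo",
                "requestBody": {
                    "content": {
                        "application/json": {
                            "schema": {
                                "type": "object",
                                "properties": {"title": {"type": "string"}},
                            }
                        }
                    }
                },
                "responses": {"201": {"description": "Created"}},
            }
        }
    },
}

print(json.dumps(mcp_request, indent=2))
```

Either way, the agent learns "create a to-do with a title"; the practical difference is that MCP discovers the available tools dynamically at runtime, while the OpenAPI tool set is fixed by the spec you register.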
If you don't have an OpenAPI specification, you can even use GitHub Copilot to generate one for your app. For more details and a practical example with App Service, see this tutorial.

## Expanding the Possibilities

The to-do list example is just the beginning, a simple way to understand the power of this feature. With this pattern, you can easily add AI capabilities to any existing application:

- **E-commerce sites**: Let agents help customers find products, manage orders, and track shipments
- **CRM systems**: Enable natural language customer data queries and updates
- **Content management**: Allow AI-powered content creation and organization
- **Financial applications**: Provide intelligent expense tracking and reporting

The key insight is that you don't need to rebuild your application from scratch. Instead, you create an MCP server that exposes your existing functionality, then connect AI agents to it.

## What's Next?

This new MCP integration in Azure AI Foundry represents a significant step forward in making AI more accessible to developers. We've shown you how to:

- Deploy MCP-enabled applications to Azure App Service
- Connect Azure AI Foundry agents to remote MCP servers
- Build practical, real-world AI integrations with minimal effort

The combination of Azure AI Foundry's powerful agent capabilities with the simplicity of MCP and the reliability of App Service creates endless possibilities for enhancing your applications with AI. The future of AI-enabled applications is here, and it's easier to implement than ever before. With Azure App Service, Azure AI Foundry, and MCP, you have everything you need to transform your applications into intelligent, conversational experiences.

This blog post is part of a series on integrating AI capabilities into Azure applications. For more Azure App Service and AI integration content, visit the Azure App Service documentation.

# Send metrics from Micronaut native image applications to Azure Monitor
The original post was written in Japanese and published on 20 July 2025 on Logico Inside: "Send metrics from Micronaut to Azure Monitor".

This entry is related to the following one; please take a look for background information: Send signals from Micronaut native image applications to Azure Monitor | Microsoft Community Hub

## Prerequisites

- Maven: 3.9.10
- JDK: 21
- Micronaut: 4.9.0 or later

The following tutorials were used as a reference:

- Create a Micronaut Application to Collect Metrics and Monitor Them on Azure Monitor Metrics
- Collect Metrics with Micronaut

## Create an archetype

We can create an archetype using Micronaut's CLI (`mn`) or Micronaut Launch (https://micronaut.io/launch/). In this entry, `application.yml` is used instead of `application.properties` for application configuration, so we need to specify the `yaml` feature to include the dependencies for YAML support.

```shell
$ mn create-app \
  --build=maven \
  --jdk=21 \
  --lang=java \
  --test=junit \
  --features=validation,graalvm,micrometer-azure-monitor,http-client,micrometer-annotation,yaml \
  dev.logicojp.micronaut.azuremonitor-metric
```

When using Micronaut Launch, click [FEATURES] and select the following features:

- validation
- graalvm
- micrometer-azure-monitor
- http-client
- micrometer-annotation
- yaml

After all features are selected, click [GENERATE PROJECT] and choose [Download Zip] to download an archetype as a Zip file.

## Implementation

In this section, we use the GDK sample code found in the tutorial. The code is from the Micronaut Guides, with the database access and some other parts removed. We made the following changes to the code to fit our needs.

### a) Structure of the directory

In the GDK tutorial, folders called `azure` and `lib` are created, but this structure isn't used in the standard Micronaut archetype, so the code from both directories has been combined.
### b) Instrumentation Key

As the tutorial above and the Micronaut Micrometer documentation ("6.3 Azure Monitor Registry") explain, we need to specify the Instrumentation Key. When we create an archetype using the Micronaut CLI or Micronaut Launch, configuration assuming the use of the Instrumentation Key is included in `application.properties` / `application.yml`.

This configuration will work, but Application Insights currently does not recommend access using only the Instrumentation Key, so it is better to use a connection string that embeds the Instrumentation Key. To set it up, open the file `application.properties` and enter the following:

```properties
micronaut.metrics.export.azuremonitor.connectionString="InstrumentationKey=...."
```

In the case of `application.yml`, we need to specify the connection string in YAML format:

```yaml
micronaut:
  metrics:
    enabled: true
    export:
      azuremonitor:
        enabled: true
        connectionString: InstrumentationKey=....
```

We can also specify the environment variable `MICRONAUT_METRICS_EXPORT_AZUREMONITOR_CONNECTIONSTRING`, but since this environment variable name is too long, it is better to use a shorter one. Here's an example using `AZURE_MONITOR_CONNECTION_STRING` (which is also long, if you think about it):

```properties
micronaut.metrics.export.azuremonitor.connectionString=${AZURE_MONITOR_CONNECTION_STRING}
```

```yaml
micronaut:
  metrics:
    enabled: true
    export:
      azuremonitor:
        enabled: true
        connectionString: ${AZURE_MONITOR_CONNECTION_STRING}
```

The connection string can be specified because Micrometer, which is used internally, already supports it. See `AzureMonitorConfig.java` in the Micrometer repository: micrometer/implementations/micrometer-registry-azure-monitor/src/main/java/io/micrometer/azuremonitor/AzureMonitorConfig.java at main · micrometer-metrics/micrometer

The settings in `application.properties` / `application.yml` are as follows. For more information about the specified meter binders, please look at the following documents.
- Meter Binder (Micronaut Micrometer)

```yaml
micronaut:
  application:
    name: azuremonitor-metric
  metrics:
    enabled: true
    binders:
      files:
        enabled: true
      jdbc:
        enabled: true
      jvm:
        enabled: true
      logback:
        enabled: true
      processor:
        enabled: true
      uptime:
        enabled: true
      web:
        enabled: true
    export:
      azuremonitor:
        enabled: true
        step: PT1M
        connectionString: ${AZURE_MONITOR_CONNECTION_STRING}
```

### c) pom.xml

To use the GraalVM Reachability Metadata Repository, we need to add this dependency (the latest version is 0.11.0 as of 20 July 2025):

```xml
<dependency>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>graalvm-reachability-metadata</artifactId>
  <version>0.11.0</version>
</dependency>
```

Add the GraalVM Maven plugin and enable the use of the GraalVM Reachability Metadata obtained from the dependency above. This plugin lets us pass options to the native-image tool via `buildArg` elements (in this example, the optimization level is specified). We can also put such options in `native-image.properties`; the native-image tool (and the Maven/Gradle plugin) will read it.

```xml
<plugin>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>native-maven-plugin</artifactId>
  <configuration>
    <metadataRepository>
      <enabled>true</enabled>
    </metadataRepository>
    <buildArgs combine.children="append">
      <buildArg>-Ob</buildArg>
    </buildArgs>
    <quickBuild>true</quickBuild>
  </configuration>
</plugin>
```

For now, let's build it as a Java application:

```shell
$ mvn clean package
```

## Check if it works as a Java application

First, verify that the application runs without any problems and that metrics are being sent to Application Insights. Then, run the application with the Tracing Agent to generate the necessary configuration files.
```shell
# (1) Collect configuration files such as reflect-config.json
$JAVA_HOME/bin/java \
  -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/{groupId}/{artifactId}/ \
  -jar ./target/{artifactId}-{version}.jar

# (2)-a Generate a trace file
$JAVA_HOME/bin/java \
  -agentlib:native-image-agent=trace-output=/path/to/trace-file.json \
  -jar ./target/{artifactId}-{version}.jar

# (2)-b Generate a reachability metadata file from the collected trace file
native-image-configure generate \
  --trace-input=/path/to/trace-file.json \
  --output-dir=/path/to/config-dir/
```

References:

- Configure Native Image with the Tracing Agent
- Collect Metadata with the Tracing Agent

The following files are stored in the specified directory:

- jni-config.json
- reflect-config.json
- proxy-config.json
- resource-config.json
- reachability-metadata.json

These files can be located at `src/main/resources/META-INF/native-image`, and the native-image tool picks up configuration files placed in that directory. However, it is recommended to place the files in subdirectories divided by `groupId` and `artifactId`, as shown below:

```
src/main/resources/META-INF/native-image/{groupId}/{artifactId}
```

## native-image.properties

When creating a native image, we call the following command:

```shell
mvn package -Dpackaging=native-image
```

We should specify the timing of class initialization (build time or runtime), the command-line options for the native-image tool (the same options work in the Maven/Gradle plugin), and the JVM arguments in the `native-image.properties` file. Indeed, these settings can be specified in `pom.xml`, but it is recommended that they be externalized.

### a) Location of configuration files

As described in the documentation, we can specify the location of configuration property files. If we build using the recommended method (placing the files in `src/main/resources/META-INF/native-image/{groupId}/{artifactId}`), we can specify the directory location using `${.}`:
```
-H:DynamicProxyConfigurationResources
-H:JNIConfigurationResources
-H:ReflectionConfigurationResources
-H:ResourceConfigurationResources
-H:SerializationConfigurationResources
```

Reference: Native Image Build Configuration

### b) HTTP/HTTPS protocols support

We need to use `--enable-https` / `--enable-http` when using the HTTP(S) protocol in the application.

Reference: URL Protocols in Native Image

### c) When classes are loaded and initialized

With AOT compilation, classes are usually loaded at compile time and stored in the image heap (at build time). However, some classes might be specified to be loaded when the program is running. In these cases, it is necessary to explicitly specify initialization at runtime (and vice versa, of course). There are two kinds of build arguments:

```
# Explicitly specify initialization at runtime
--initialize-at-run-time=...

# Explicitly specify initialization at build time
--initialize-at-build-time=...
```

To enable tracing of class initialization, use the following arguments:

```
# Enable tracing of class initialization
--trace-class-initialization=...   # Deprecated in GraalVM 21.3
--trace-object-instantiation=...   # Current option
```

References:

- Specify Class Initialization Explicitly
- Class Initialization in Native Image

### d) Prevent fallback builds

If the application cannot be optimized during the native image build, the native-image tool creates a fallback image, which requires a JVM to run. To prevent fallback builds, we need to specify the option `--no-fallback`.

For other build options, please look at the following document: Command-line Options

## Build a Native Image application

Building a native image application takes a long time (though it has become quicker over time). When building for testing purposes, we strongly recommend enabling Quick Build and setting the optimization level with the `-Ob` option (although the build will still take time). See below for more information:
- Maven plugin for GraalVM Native Image
- Gradle plugin for GraalVM Native Image
- Optimizations and Performance

## Test as a native image application

When we start the native image application, we might see the following message:

```
GC notifications will not be available because no GarbageCollectorMXBean of the JVM provides any. GCs=[young generation scavenger, complete scavenger]
```

This message means that GC notifications are not available because no `GarbageCollectorMXBean` of the JVM provides any notifications.

Let's check whether the application works.

### 1) GET /books and GET /books/{isbn}

These are normal REST API endpoints. Call both of them a few times.

### 2) GET /metrics

We can check the list of available metrics:

```json
{
  "names": [
    "books.find",
    "books.index",
    "executor",
    "executor.active",
    "executor.completed",
    "executor.pool.core",
    "executor.pool.max",
    "executor.pool.size",
    "executor.queue.remaining",
    "executor.queued",
    "http.server.requests",
    "jvm.classes.loaded",
    "jvm.classes.unloaded",
    "jvm.memory.committed",
    "jvm.memory.max",
    "jvm.memory.used",
    "jvm.threads.daemon",
    "jvm.threads.live",
    "jvm.threads.peak",
    "jvm.threads.started",
    "jvm.threads.states",
    "logback.events",
    "microserviceBooksNumber.checks",
    "microserviceBooksNumber.latest",
    "microserviceBooksNumber.time",
    "process.cpu.usage",
    "process.files.max",
    "process.files.open",
    "process.start.time",
    "process.uptime",
    "system.cpu.count",
    "system.cpu.usage",
    "system.load.average.1m"
  ]
}
```

First, the following three metrics are custom ones added in the class `MicroserviceBooksNumberService`:

- microserviceBooksNumber.checks
- microserviceBooksNumber.time
- microserviceBooksNumber.latest

Next, the following two metrics are custom ones collected in the class `BooksController`, which capture information such as the time taken and the number of calls:

- books.find
- books.index

Each metric can be viewed at `GET /metrics/{metric name}`. The following is an example of `microserviceBooksNumber.*`.
```json
// microserviceBooksNumber.checks
{
  "name": "microserviceBooksNumber.checks",
  "measurements": [
    { "statistic": "COUNT", "value": 12 }
  ]
}

// microserviceBooksNumber.time
{
  "name": "microserviceBooksNumber.time",
  "measurements": [
    { "statistic": "COUNT", "value": 12 },
    { "statistic": "TOTAL_TIME", "value": 0.212468 },
    { "statistic": "MAX", "value": 0.032744 }
  ],
  "baseUnit": "seconds"
}

// microserviceBooksNumber.latest
{
  "name": "microserviceBooksNumber.latest",
  "measurements": [
    { "statistic": "VALUE", "value": 2 }
  ]
}
```

Here is an example of the metric `books.*`:

```json
// books.index
{
  "name": "books.index",
  "measurements": [
    { "statistic": "COUNT", "value": 6 },
    { "statistic": "TOTAL_TIME", "value": 3.08425 },
    { "statistic": "MAX", "value": 3.02097 }
  ],
  "availableTags": [
    { "tag": "exception", "values": [ "none" ] }
  ],
  "baseUnit": "seconds"
}

// books.find
{
  "name": "books.find",
  "measurements": [
    { "statistic": "COUNT", "value": 7 }
  ],
  "availableTags": [
    { "tag": "result", "values": [ "success" ] },
    { "tag": "exception", "values": [ "none" ] }
  ]
}
```

## Metrics from Azure Monitor (Application Insights)

Here is the grid view of custom metrics in Application Insights (`microserviceBooksNumber.time` is the average value). To confirm that the values match those in Application Insights, check the metric `http.server.requests`, for example. We should see three items on the graph, and the value is equal to the number of API responses (3).
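As a side note, the average shown for `microserviceBooksNumber.time` can be reproduced from the raw `/metrics` payload, since a Micrometer timer exposes COUNT and TOTAL_TIME. A small sketch of the arithmetic (in Python, purely to illustrate the calculation, using the measurements shown above):

```python
# Recompute the average call duration from the Micrometer timer payload
# returned by GET /metrics/microserviceBooksNumber.time (values from above).
payload = {
    "name": "microserviceBooksNumber.time",
    "measurements": [
        {"statistic": "COUNT", "value": 12},
        {"statistic": "TOTAL_TIME", "value": 0.212468},
        {"statistic": "MAX", "value": 0.032744},
    ],
    "baseUnit": "seconds",
}

# Index the measurements by statistic name, then divide total time by count.
stats = {m["statistic"]: m["value"] for m in payload["measurements"]}
average = stats["TOTAL_TIME"] / stats["COUNT"]  # seconds per recorded call
print(f"average = {average:.6f} {payload['baseUnit']}")
```

This is the per-call average that the Application Insights grid view aggregates for the timer metric.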