# Monitor AI Agents on App Service with OpenTelemetry and the New Application Insights Agents View
Part 2 of 2: In Blog 1, we deployed a multi-agent travel planner on Azure App Service using the Microsoft Agent Framework (MAF) 1.0 GA. This post dives deep into how we instrumented those agents with OpenTelemetry and lit up the brand-new Agents (Preview) view in Application Insights.

📋 Prerequisite: This post assumes you've followed the guidance in Blog 1 to deploy the multi-agent travel planner to Azure App Service. If you haven't deployed the app yet, start there first — you'll need a running App Service with the agents, Service Bus, Cosmos DB, and Azure OpenAI provisioned before the monitoring steps in this post will work.

## Deploying Agents Is Only Half the Battle

In Blog 1, we walked through deploying a multi-agent travel planning application on Azure App Service. Six specialized agents — a Coordinator, Currency Converter, Weather Advisor, Local Knowledge Expert, Itinerary Planner, and Budget Optimizer — work together to generate comprehensive travel plans. The architecture uses an ASP.NET Core API backed by a WebJob for async processing, Azure Service Bus for messaging, and Azure OpenAI for the brains.

But here's the thing: deploying agents to production is only half the battle. Once they're running, you need answers to questions like:

- Which agent is consuming the most tokens?
- How long does the Itinerary Planner take compared to the Weather Advisor?
- Is the Coordinator making too many LLM calls per workflow?
- When something goes wrong, which agent in the pipeline failed?

Traditional APM gives you HTTP latencies and exception rates. That's table stakes. For AI agents, you need to see inside the agent — the model calls, the tool invocations, the token spend. And that's exactly what Application Insights' new Agents (Preview) view delivers, powered by OpenTelemetry and the GenAI semantic conventions. Let's break down how it all works.
## The Agents (Preview) View in Application Insights

Azure Application Insights now includes a dedicated Agents (Preview) blade that provides unified monitoring purpose-built for AI agents. It's not just a generic dashboard — it understands agent concepts natively. Whether your agents are built with Microsoft Agent Framework, Azure AI Foundry, Copilot Studio, or a third-party framework, this view lights up as long as your telemetry follows the GenAI semantic conventions.

Here's what you get out of the box:

- **Agent dropdown filter** — A dropdown populated by gen_ai.agent.name values from your telemetry. In our travel planner, this shows all six agents: "Travel Planning Coordinator", "Currency Conversion Specialist", "Weather & Packing Advisor", "Local Expert & Cultural Guide", "Itinerary Planning Expert", and "Budget Optimization Specialist". You can filter the entire dashboard to one agent or view them all.
- **Token usage metrics** — Visualizations of input and output token consumption, broken down by agent. Instantly see which agents are the most expensive to run.
- **Operational metrics** — Latency distributions, error rates, and throughput for each agent. Spot performance regressions before users notice.
- **End-to-end transaction details** — Click into any trace to see the full workflow: which agents were invoked, what tools they called, how long each step took. The "simple view" renders agent steps in a story-like format that's remarkably easy to follow.
- **Grafana integration** — One-click export to Azure Managed Grafana for custom dashboards and alerting.

The key insight: this view isn't magic. It works because the telemetry is structured using well-defined semantic conventions. Let's look at those next.

📖 Docs: Application Insights Agents (Preview) view documentation

## GenAI Semantic Conventions — The Foundation

The entire Agents view is powered by the OpenTelemetry GenAI semantic conventions.
These are a standardized set of span attributes that describe AI agent behavior in a way that any observability backend can understand. Think of them as the "contract" between your instrumented code and Application Insights. Let's walk through the key attributes and why each one matters:

### gen_ai.agent.name

This is the human-readable name of the agent. In our travel planner, each agent sets this via the name parameter when constructing the MAF ChatClientAgent — for example, "Weather & Packing Advisor" or "Budget Optimization Specialist". This is what populates the agent dropdown in the Agents view. Without this attribute, Application Insights would have no way to distinguish one agent from another in your telemetry. It's the single most important attribute for agent-level monitoring.

### gen_ai.agent.description

A brief description of what the agent does. Our Weather Advisor, for example, is described as "Provides weather forecasts, packing recommendations, and activity suggestions based on destination weather conditions." This metadata helps operators and on-call engineers quickly understand an agent's role without diving into source code. It shows up in trace details and helps contextualize what you're looking at when debugging.

### gen_ai.agent.id

A unique identifier for the agent instance. In MAF, this is typically an auto-generated GUID. While gen_ai.agent.name is the human-friendly label, gen_ai.agent.id is the machine-stable identifier. If you rename an agent, the ID stays the same, which is important for tracking agent behavior across code deployments.

### gen_ai.operation.name

The type of operation being performed. Values include "chat" for standard LLM calls and "execute_tool" for tool/function invocations. In our travel planner, when the Weather Advisor calls the GetWeatherForecast function via NWS, or when the Currency Converter calls ConvertCurrency via the Frankfurter API, those tool calls get their own spans with gen_ai.operation.name = "execute_tool".
This lets you measure LLM think-time separately from tool execution time — a critical distinction for performance optimization.

### gen_ai.request.model / gen_ai.response.model

The model used for the request and the model that actually served the response (these can differ when providers do model routing). In our case, both are "gpt-4o" since that's what we deploy via Azure OpenAI. These attributes let you track model usage across agents, spot unexpected model assignments, and correlate performance changes with model updates.

### gen_ai.usage.input_tokens / gen_ai.usage.output_tokens

Token consumption per LLM call. This is what powers the token usage visualizations in the Agents view. The Coordinator agent, which aggregates results from all five specialist agents, tends to have higher output token counts because it's synthesizing a full travel plan. The Currency Converter, which makes focused API calls, uses fewer tokens overall. These attributes let you answer the question "which agent is costing me the most?" — and more importantly, let you set alerts when token usage spikes unexpectedly.

### gen_ai.system

The AI system or provider. In our case, this is "openai" (set by the Azure OpenAI client instrumentation). If you're using multiple AI providers — say, Azure OpenAI for planning and a local model for classification — this attribute lets you filter and compare.

Together, these attributes create a rich, structured view of agent behavior that goes far beyond generic tracing. They're the reason Application Insights can render agent-specific dashboards with token breakdowns, latency distributions, and end-to-end workflow views. Without these conventions, all you'd see is opaque HTTP calls to an OpenAI endpoint.

💡 Key takeaway: The GenAI semantic conventions are what transform generic distributed traces into agent-aware observability. They're the bridge between your code and the Agents view.
Any framework that emits these attributes — MAF, Semantic Kernel, LangChain — can light up this dashboard.

## Two Layers of OpenTelemetry Instrumentation

Our travel planner sample instruments at two distinct levels, each capturing different aspects of agent behavior. Let's look at both.

### Layer 1: IChatClient-Level Instrumentation

The first layer instruments at the IChatClient level using Microsoft.Extensions.AI. This is where we wrap the Azure OpenAI chat client with OpenTelemetry:

```csharp
var client = new AzureOpenAIClient(azureOpenAIEndpoint, new DefaultAzureCredential());

// Wrap with OpenTelemetry to emit GenAI semantic convention spans
return client.GetChatClient(modelDeploymentName).AsIChatClient()
    .AsBuilder()
    .UseOpenTelemetry()
    .Build();
```

This single .UseOpenTelemetry() call intercepts every LLM call and emits spans with:

- gen_ai.system — the AI provider (e.g., "openai")
- gen_ai.request.model / gen_ai.response.model — which model was used
- gen_ai.usage.input_tokens / gen_ai.usage.output_tokens — token consumption per call
- gen_ai.operation.name — the operation type ("chat")

Think of this as the "LLM layer" — it captures what the model is doing regardless of which agent called it. It's model-centric telemetry.

### Layer 2: Agent-Level Instrumentation

The second layer instruments at the agent level using MAF 1.0 GA's built-in OpenTelemetry support.
This happens in the BaseAgent class that all our agents inherit from:

```csharp
Agent = new ChatClientAgent(
        chatClient,
        instructions: Instructions,
        name: AgentName,
        description: Description,
        tools: chatOptions.Tools?.ToList())
    .AsBuilder()
    .UseOpenTelemetry(sourceName: AgentName)
    .Build();
```

The .UseOpenTelemetry(sourceName: AgentName) call on the MAF agent builder emits a different set of spans:

- gen_ai.agent.name — the human-readable agent name (e.g., "Weather & Packing Advisor")
- gen_ai.agent.description — what the agent does
- gen_ai.agent.id — the unique agent identifier
- Agent invocation traces — spans that represent the full lifecycle of an agent call

This is the "agent layer" — it captures which agent is doing the work and provides the identity information that powers the Agents view dropdown and per-agent filtering.

### Why Both Layers?

When both layers are active, you get the richest possible telemetry. The agent-level spans nest around the LLM-level spans, creating a trace hierarchy that looks like:

```
Agent: "Weather & Packing Advisor" (gen_ai.agent.name)
└── chat (gen_ai.operation.name)
    ├── model: gpt-4o, input_tokens: 450, output_tokens: 120
    └── execute_tool: GetWeatherForecast
        └── chat (follow-up with tool results)
            └── model: gpt-4o, input_tokens: 680, output_tokens: 350
```

There is a tradeoff: with both layers active, you may see some span duplication since both the IChatClient wrapper and the MAF agent wrapper emit spans for the same underlying LLM call. If you find the telemetry too noisy, you can disable one layer:

- **Agent layer only** (remove .UseOpenTelemetry() from the IChatClient) — You get agent identity but lose per-call token breakdowns.
- **IChatClient layer only** (remove .UseOpenTelemetry() from the agent builder) — You get detailed LLM metrics but lose agent identity in the Agents view.

For the fullest experience with the Agents (Preview) view, we recommend keeping both layers active.
The official sample uses both, and the Agents view is designed to handle the overlapping spans gracefully.

📖 Docs: MAF Observability Guide

## Exporting Telemetry to Application Insights

Emitting OpenTelemetry spans is only useful if they land somewhere you can query them. The good news is that Azure App Service and Application Insights have deep native integration — App Service can auto-instrument your app, forward platform logs, and surface health metrics out of the box. For a full overview of monitoring capabilities, see Monitor Azure App Service.

For our AI agent scenario, we go beyond the built-in platform telemetry. We need the GenAI semantic convention spans that we configured in the previous sections to flow into App Insights so the Agents (Preview) view can render them. Our travel planner has two host processes — the ASP.NET Core API and a WebJob — and each requires a slightly different exporter setup.

### ASP.NET Core API — Azure Monitor OpenTelemetry Distro

For the API, it's a single line. The Azure Monitor OpenTelemetry Distro handles everything:

```csharp
// Configure OpenTelemetry with Azure Monitor for traces, metrics, and logs.
// The APPLICATIONINSIGHTS_CONNECTION_STRING env var is auto-discovered.
builder.Services.AddOpenTelemetry().UseAzureMonitor();
```

That's it. The distro automatically:

- Discovers the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable
- Configures trace, metric, and log exporters to Application Insights
- Sets up appropriate sampling and batching
- Registers standard ASP.NET Core HTTP instrumentation

This is the recommended approach for any ASP.NET Core application. One NuGet package (Azure.Monitor.OpenTelemetry.AspNetCore), one line of code, zero configuration files.

### WebJob — Manual Exporter Setup

The WebJob is a non-ASP.NET Core host (it uses Host.CreateApplicationBuilder), so the distro's convenience method isn't available.
Instead, we configure the exporters explicitly:

```csharp
// Configure OpenTelemetry with Azure Monitor for the WebJob (non-ASP.NET Core host).
// The APPLICATIONINSIGHTS_CONNECTION_STRING env var is auto-discovered.
builder.Services.AddOpenTelemetry()
    .ConfigureResource(r => r.AddService("TravelPlanner.WebJob"))
    .WithTracing(t => t
        .AddSource("*")
        .AddAzureMonitorTraceExporter())
    .WithMetrics(m => m
        .AddMeter("*")
        .AddAzureMonitorMetricExporter());

builder.Logging.AddOpenTelemetry(o => o.AddAzureMonitorLogExporter());
```

A few things to note:

- .AddSource("*") — Subscribes to all trace sources, including the ones emitted by MAF's .UseOpenTelemetry(sourceName: AgentName). In production, you might narrow this to specific source names for performance.
- .AddMeter("*") — Similarly captures all metrics, including the GenAI metrics emitted by the instrumentation layers.
- .ConfigureResource(r => r.AddService("TravelPlanner.WebJob")) — Tags all telemetry with the service name so you can distinguish API vs. WebJob telemetry in Application Insights.
- The connection string is still auto-discovered from the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable — no need to pass it explicitly.

The key difference between these two approaches is ceremony, not capability. Both send the same GenAI spans to Application Insights; the Agents view works identically regardless of which exporter setup you use.

📖 Docs: Azure Monitor OpenTelemetry Distro

## Infrastructure as Code — Provisioning the Monitoring Stack

The monitoring infrastructure is provisioned via Bicep modules alongside the rest of the application's Azure resources. Here's how it fits together.
### Log Analytics Workspace

infra/core/monitor/loganalytics.bicep creates the Log Analytics workspace that backs Application Insights:

```bicep
resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2023-09-01' = {
  name: name
  location: location
  tags: tags
  properties: {
    sku: {
      name: 'PerGB2018'
    }
    retentionInDays: 30
  }
}
```

### Application Insights

infra/core/monitor/appinsights.bicep creates a workspace-based Application Insights resource connected to Log Analytics:

```bicep
resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
  name: name
  location: location
  tags: tags
  kind: 'web'
  properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logAnalyticsWorkspaceId
  }
}

output connectionString string = appInsights.properties.ConnectionString
```

### Wiring It All Together

In infra/main.bicep, the Application Insights connection string is passed as an app setting to the App Service:

```bicep
appSettings: {
  APPLICATIONINSIGHTS_CONNECTION_STRING: appInsights.outputs.connectionString
  // ... other app settings
}
```

This is the critical glue: when the app starts, the OpenTelemetry distro (or manual exporters) auto-discover this environment variable and start sending telemetry to your Application Insights resource. No connection strings in code, no configuration files — it's all infrastructure-driven.

The same connection string is available to both the API and the WebJob since they run on the same App Service. All agent telemetry from both host processes flows into a single Application Insights resource, giving you a unified view across the entire application.

## See It in Action

Once the application is deployed and processing travel plan requests, here's how to explore the agent telemetry in Application Insights.

### Step 1: Open the Agents (Preview) View

In the Azure portal, navigate to your Application Insights resource. In the left nav, look for Agents (Preview) under the Investigations section. This opens the unified agent monitoring dashboard.
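Before moving on, it can help to sanity-check in Log Analytics that agent spans are actually arriving. The query below is a sketch: it assumes the Azure Monitor exporter writes these spans to the `dependencies` table with the GenAI attributes in `customDimensions`, and the key names follow the conventions covered earlier — verify both against your own workspace.

```kusto
// Sketch: list recent agent spans with their GenAI attributes.
// Assumes spans land in 'dependencies' with attributes in customDimensions.
dependencies
| where timestamp > ago(1h)
| where isnotempty(customDimensions["gen_ai.agent.name"])
| project timestamp, name, duration,
    agent     = tostring(customDimensions["gen_ai.agent.name"]),
    operation = tostring(customDimensions["gen_ai.operation.name"]),
    inTokens  = toint(customDimensions["gen_ai.usage.input_tokens"]),
    outTokens = toint(customDimensions["gen_ai.usage.output_tokens"])
| order by timestamp desc
```

If this returns rows for all six agents, the dropdown and token tiles in the Agents view should populate for the same time range.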
### Step 2: Filter by Agent

The agent dropdown at the top of the page is populated by the gen_ai.agent.name values in your telemetry. You'll see all six agents listed:

- Travel Planning Coordinator
- Currency Conversion Specialist
- Weather & Packing Advisor
- Local Expert & Cultural Guide
- Itinerary Planning Expert
- Budget Optimization Specialist

Select a specific agent to filter the entire dashboard — token usage, latency, error rate — down to that one agent.

### Step 3: Review Token Usage

The token usage tile shows total input and output token consumption over your selected time range. Compare agents to find your biggest spenders. In our testing, the Coordinator agent consistently uses the most output tokens because it aggregates and synthesizes results from all five specialists.

### Step 4: Drill into Traces

Click "View Traces with Agent Runs" to see all agent executions. Each row represents a workflow run. You can filter by time range, status (success/failure), and specific agent.

### Step 5: End-to-End Transaction Details

Click any trace to open the end-to-end transaction details. The "simple view" renders the agent workflow as a story — showing each step, which agent handled it, how long it took, and what tools were called. For a full travel plan, you'll see the Coordinator dispatch work to each specialist, tool calls to the NWS weather API and Frankfurter currency API, and the final aggregation step.

## Grafana Dashboards

The Agents (Preview) view in Application Insights is great for ad-hoc investigation. For ongoing monitoring and alerting, Azure Managed Grafana provides prebuilt dashboards specifically designed for agent workloads. From the Agents view, click "Explore in Grafana" to jump directly into these dashboards:

- **Agent Framework Dashboard** — Per-agent metrics including token usage trends, latency percentiles, error rates, and throughput over time. Pin this to your operations wall.
- **Agent Framework Workflow Dashboard** — Workflow-level metrics showing how multi-agent orchestrations perform end-to-end. See how long complete travel plans take, identify bottleneck agents, and track success rates.

These dashboards query the same underlying data in Log Analytics, so there's zero additional instrumentation needed. If your telemetry lights up the Agents view, it lights up Grafana too.

## Key Packages Summary

Here are the NuGet packages that make this work, pulled from the actual project files:

| Package | Version | Purpose |
| --- | --- | --- |
| Azure.Monitor.OpenTelemetry.AspNetCore | 1.3.0 | Azure Monitor OTEL Distro for ASP.NET Core (API). One-line setup for traces, metrics, and logs. |
| Azure.Monitor.OpenTelemetry.Exporter | 1.3.0 | Azure Monitor OTEL exporter for non-ASP.NET Core hosts (WebJob). Trace, metric, and log exporters. |
| Microsoft.Agents.AI | 1.0.0 | MAF 1.0 GA — ChatClientAgent, .UseOpenTelemetry() for agent-level instrumentation. |
| Microsoft.Extensions.AI | 10.4.1 | IChatClient abstraction with .UseOpenTelemetry() for LLM-level instrumentation. |
| OpenTelemetry.Extensions.Hosting | 1.11.2 | OTEL dependency injection integration for Host.CreateApplicationBuilder (WebJob). |
| Microsoft.Extensions.AI.OpenAI | 10.4.1 | OpenAI/Azure OpenAI adapter for IChatClient. Bridges the Azure OpenAI SDK to the M.E.AI abstraction. |

## Wrapping Up

Let's zoom out. In this two-part series, we've gone from zero to a fully observable, production-grade multi-agent AI application on Azure App Service:

- Blog 1 covered deploying the multi-agent travel planner with MAF 1.0 GA — the agents, the architecture, the infrastructure.
- Blog 2 (this post) showed how to instrument those agents with OpenTelemetry, explained the GenAI semantic conventions that make agent-aware monitoring possible, and walked through the new Agents (Preview) view in Application Insights.

The pattern is straightforward:

1. Add .UseOpenTelemetry() at the IChatClient level for LLM metrics.
2. Add .UseOpenTelemetry(sourceName: AgentName) at the MAF agent level for agent identity.
3. Export to Application Insights via the Azure Monitor distro (one line) or manual exporters.
4. Wire the connection string through Bicep and environment variables.
5. Open the Agents (Preview) view and start monitoring.

With MAF 1.0 GA's built-in OpenTelemetry support and Application Insights' new Agents view, you get production-grade observability for AI agents with minimal code. The GenAI semantic conventions ensure your telemetry is structured, portable, and understood by any compliant backend. And because it's all standard OpenTelemetry, you're not locked into any single vendor — swap the exporter and your telemetry goes to Jaeger, Grafana, Datadog, or wherever you need it. Now go see what your agents are up to.

## Resources

- Sample repository: seligj95/app-service-multi-agent-maf-otel
- App Insights Agents (Preview) view: Documentation
- GenAI Semantic Conventions: OpenTelemetry GenAI Registry
- MAF Observability Guide: Microsoft Agent Framework Observability
- Azure Monitor OpenTelemetry Distro: Enable OpenTelemetry for .NET
- Grafana Agent Framework Dashboard: aka.ms/amg/dash/af-agent
- Grafana Workflow Dashboard: aka.ms/amg/dash/af-workflow
- Blog 1: Deploy Multi-Agent AI Apps on Azure App Service with MAF 1.0 GA

# Build Multi-Agent AI Apps on Azure App Service with Microsoft Agent Framework 1.0
A couple of months ago, we published a three-part series showing how to build multi-agent AI systems on Azure App Service using preview packages from the Microsoft Agent Framework (MAF) (formerly AutoGen / Semantic Kernel Agents). The series walked through async processing, the request-reply pattern, and client-side multi-agent orchestration — all running on App Service.

Since then, Microsoft Agent Framework has reached 1.0 GA — unifying AutoGen and Semantic Kernel into a single, production-ready agent platform. This post is a fresh start with the GA bits. We'll rebuild our travel-planner sample on the stable API surface, call out the breaking changes from preview, and get you up and running fast. All of the code is in the companion repo: seligj95/app-service-multi-agent-maf-otel.

## What Changed in MAF 1.0 GA

The 1.0 release is more than a version bump. Here's what moved:

- **Unified platform.** AutoGen and Semantic Kernel agent capabilities have converged into Microsoft.Agents.AI. One package, one API surface.
- **Stable APIs with long-term support.** The 1.0 contract is now locked for servicing. No more preview churn.
- **Breaking change — Instructions on options removed.** In preview, you set instructions through ChatClientAgentOptions.Instructions. In GA, pass them directly to the ChatClientAgent constructor.
- **Breaking change — RunAsync parameter rename.** The thread parameter is now session (type AgentSession). If you were using named arguments, this is a compile error.
- **Microsoft.Extensions.AI upgraded.** The framework moved from the 9.x preview of Microsoft.Extensions.AI to the stable 10.4.1 release.
- **OpenTelemetry integration built in.** The builder pipeline now includes UseOpenTelemetry() out of the box — more on that in Blog 2.
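To make the two breaking changes concrete, here's a before/after sketch. The preview shapes in the comments are reconstructed from the descriptions above, so treat them as illustrative rather than exact preview API:

```csharp
// Preview (pre-GA) — instructions lived on an options object,
// and RunAsync took a 'thread' parameter (illustrative only):
//
// var agent = new ChatClientAgent(chatClient,
//     new ChatClientAgentOptions { Instructions = "You plan trips." });
// var response = await agent.RunAsync(messages, thread: null);

// GA 1.0 — instructions go straight to the constructor,
// and the parameter is now 'session' (type AgentSession):
var agent = new ChatClientAgent(
    chatClient,
    instructions: "You plan trips.",
    name: "Itinerary Planning Expert");

var response = await agent.RunAsync(messages, session: null);
```

If you were using named arguments with thread:, the rename surfaces as a compile error, and a quick rename to session: gets you through it.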
Our project references reflect the GA stack:

```xml
<PackageReference Include="Microsoft.Agents.AI" Version="1.0.0" />
<PackageReference Include="Microsoft.Extensions.AI" Version="10.4.1" />
<PackageReference Include="Azure.AI.OpenAI" Version="2.1.0" />
```

## Why Azure App Service for AI Agents?

If you're building with Microsoft Agent Framework, you need somewhere to run your agents. You could reach for Kubernetes, containers, or serverless — but for most agent workloads, Azure App Service is the sweet spot. Here's why:

- **No infrastructure management** — App Service is fully managed. No clusters to configure, no container orchestration to learn. Deploy your .NET or Python agent code and it just runs.
- **Always On** — Agent workflows can take minutes. App Service's Always On feature (on Premium tiers) ensures your background workers never go cold, so agents are ready to process requests instantly.
- **WebJobs for background processing** — Long-running agent workflows don't belong in HTTP request handlers. App Service's built-in WebJob support gives you a dedicated background worker that shares the same deployment, configuration, and managed identity — no separate compute resource needed.
- **Managed Identity everywhere** — Zero secrets in your code. App Service's system-assigned managed identity authenticates to Azure OpenAI, Service Bus, Cosmos DB, and Application Insights automatically. No connection strings, no API keys, no rotation headaches.
- **Built-in observability** — Native integration with Application Insights and OpenTelemetry means you can see exactly what your agents are doing in production (more on this in Part 2).
- **Enterprise-ready** — VNet integration, deployment slots for safe rollouts, custom domains, auto-scaling rules, and built-in authentication. All the things you'll need when your agent POC becomes a production service.
- **Cost-effective** — A single P0v4 instance (~$75/month) hosts both your API and WebJob worker.
Compare that to running separate container apps or a Kubernetes cluster for the same workload.

The bottom line: App Service lets you focus on building your agents, not managing infrastructure. And since MAF supports both .NET and Python — both first-class citizens on App Service — you're covered regardless of your language preference.

## Architecture Overview

The sample is a travel planner that coordinates six specialized agents to build a personalized trip itinerary. Users fill out a form (destination, dates, budget, interests), and the system returns a comprehensive travel plan complete with weather forecasts, currency advice, a day-by-day itinerary, and a budget breakdown.

### The Six Agents

- **Currency Converter** — calls the Frankfurter API for real-time exchange rates
- **Weather Advisor** — calls the National Weather Service API for forecasts and packing tips
- **Local Knowledge Expert** — cultural insights, customs, and hidden gems
- **Itinerary Planner** — day-by-day scheduling with timing and costs
- **Budget Optimizer** — allocates spend across categories and suggests savings
- **Coordinator** — assembles everything into a polished final plan

### Four-Phase Workflow

| Phase | Agents | Execution |
| --- | --- | --- |
| 1 — Parallel Gathering | Currency, Weather, Local Knowledge | Task.WhenAll |
| 2 — Itinerary | Itinerary Planner | Sequential (uses Phase 1 context) |
| 3 — Budget | Budget Optimizer | Sequential (uses Phase 2 output) |
| 4 — Assembly | Coordinator | Final synthesis |

### Infrastructure

- Azure App Service (P0v4) — hosts the API and a continuous WebJob for background processing
- Azure Service Bus — decouples the API from heavy AI work (async request-reply)
- Azure Cosmos DB — stores task state, results, and per-agent chat histories (24-hour TTL)
- Azure OpenAI (GPT-4o) — powers all agent LLM calls
- Application Insights + Log Analytics — monitoring and diagnostics

## ChatClientAgent Deep Dive

At the core of every agent is ChatClientAgent from Microsoft.Agents.AI.
It wraps an IChatClient (from Microsoft.Extensions.AI) with instructions, a name, a description, and optionally a set of tools. This is client-side orchestration — you control the chat history, lifecycle, and execution order. No server-side Foundry agent resources are created.

Here's the BaseAgent pattern used by all six agents in the sample:

```csharp
// BaseAgent.cs — constructor for agents with tools
Agent = new ChatClientAgent(
        chatClient,
        instructions: Instructions,
        name: AgentName,
        description: Description,
        tools: chatOptions.Tools?.ToList())
    .AsBuilder()
    .UseOpenTelemetry(sourceName: AgentName)
    .Build();
```

Notice the builder pipeline: .AsBuilder().UseOpenTelemetry(...).Build(). This opts every agent into the framework's built-in OpenTelemetry instrumentation with a single line. We'll explore what that telemetry looks like in Blog 2.

Invoking an agent is equally straightforward:

```csharp
// BaseAgent.cs — InvokeAsync
public async Task<ChatMessage> InvokeAsync(
    IList<ChatMessage> chatHistory,
    CancellationToken cancellationToken = default)
{
    var response = await Agent.RunAsync(
        chatHistory,
        session: null,
        options: null,
        cancellationToken);

    return response.Messages.LastOrDefault()
        ?? new ChatMessage(ChatRole.Assistant, "No response generated.");
}
```

Key things to note:

- session: null — this is the renamed parameter (was thread in preview). We pass null because we manage chat history ourselves.
- The agent receives the full chatHistory list, so context accumulates across turns.
- Simple agents (Local Knowledge, Itinerary Planner, Budget Optimizer, Coordinator) use the tool-less constructor; agents that call external APIs (Currency, Weather) use the constructor that accepts ChatOptions with tools.

## Tool Integration

Two of our agents — Weather Advisor and Currency Converter — call real external APIs through the MAF tool-calling pipeline. Tools are registered using AIFunctionFactory.Create() from Microsoft.Extensions.AI.
Here's how the WeatherAdvisorAgent wires up its tool:

```csharp
// WeatherAdvisorAgent.cs
private static ChatOptions CreateChatOptions(
    IWeatherService weatherService,
    ILogger logger)
{
    var chatOptions = new ChatOptions
    {
        Tools = new List<AITool>
        {
            AIFunctionFactory.Create(
                GetWeatherForecastFunction(weatherService, logger))
        }
    };

    return chatOptions;
}
```

GetWeatherForecastFunction returns a Func<double, double, int, Task<string>> that the model can call with latitude, longitude, and number of days. Under the hood, it hits the National Weather Service API and returns a formatted forecast string. The Currency Converter follows the same pattern with the Frankfurter API.

This is one of the nicest parts of the GA API: you write a plain C# method, wrap it with AIFunctionFactory.Create(), and the framework handles the JSON schema generation, function-call parsing, and response routing automatically.

## Multi-Phase Workflow Orchestration

The TravelPlanningWorkflow class coordinates all six agents. The key insight is that the orchestration is just C# code — no YAML, no graph DSL, no special runtime. You decide when agents run, what context they receive, and how results flow between phases.

```csharp
// Phase 1: Parallel Information Gathering
var gatheringTasks = new[]
{
    GatherCurrencyInfoAsync(request, state, progress, cancellationToken),
    GatherWeatherInfoAsync(request, state, progress, cancellationToken),
    GatherLocalKnowledgeAsync(request, state, progress, cancellationToken)
};

await Task.WhenAll(gatheringTasks);
```

After Phase 1 completes, results are stored in a WorkflowState object — a simple dictionary-backed container that holds per-agent chat histories and contextual data:

```csharp
// WorkflowState.cs
public Dictionary<string, object> Context { get; set; } = new();
public Dictionary<string, List<ChatMessage>> AgentChatHistories { get; set; } = new();
```

Phases 2–4 run sequentially, each pulling context from the previous phase.
For example, the Itinerary Planner receives weather and local knowledge gathered in Phase 1:

```csharp
var localKnowledge = state.GetFromContext<string>("LocalKnowledge") ?? "";
var weatherAdvice = state.GetFromContext<string>("WeatherAdvice") ?? "";

var itineraryChatHistory = state.GetChatHistory("ItineraryPlanner");
itineraryChatHistory.Add(new ChatMessage(ChatRole.User,
    $"Create a detailed {days}-day itinerary for {request.Destination}..." +
    $"\n\nWEATHER INFORMATION:\n{weatherAdvice}" +
    $"\n\nLOCAL KNOWLEDGE & TIPS:\n{localKnowledge}"));

var itineraryResponse = await _itineraryAgent.InvokeAsync(
    itineraryChatHistory, cancellationToken);
```

This pattern — parallel fan-out followed by sequential context enrichment — is simple, testable, and easy to extend. Need a seventh agent? Add it to the appropriate phase and wire it into WorkflowState.

## Async Request-Reply Pattern

A multi-agent workflow with six LLM calls (some with tool invocations) can easily run 30–60 seconds. That's well beyond typical HTTP timeout expectations and not a great user experience for a synchronous request. We use the Async Request-Reply pattern to handle this:

1. The API receives the travel plan request and immediately queues a message to Service Bus. It stores an initial task record in Cosmos DB with status queued and returns a taskId to the client.
2. A continuous WebJob (running as a separate process on the same App Service plan) picks up the message, executes the full multi-agent workflow, and writes the result back to Cosmos DB.
3. The client polls the API for status updates until the task reaches completed.

This pattern keeps the API responsive, makes the heavy work retriable (Service Bus handles retries and dead-lettering), and lets the WebJob run independently — you can restart it without affecting the API. We covered this pattern in detail in the previous series, so we won't repeat the plumbing here.
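The client-side polling half of the pattern can be sketched in a few lines. The endpoint path and status record below are hypothetical, not the sample's actual API contract:

```csharp
// Hypothetical polling loop for the async request-reply pattern.
// Assumes a GET /api/travel-plans/{taskId} endpoint returning a status record.
// Requires System.Net.Http.Json for GetFromJsonAsync.
public sealed record PlanStatus(string TaskId, string Status, string? Result);

public static async Task<PlanStatus> WaitForPlanAsync(
    HttpClient http, string taskId, CancellationToken ct)
{
    while (true)
    {
        var status = await http.GetFromJsonAsync<PlanStatus>(
            $"/api/travel-plans/{taskId}", ct)
            ?? throw new InvalidOperationException("Empty status response.");

        // Stop polling once the WebJob has written a terminal state to Cosmos DB.
        if (status.Status is "completed" or "failed")
            return status;

        // Back off between polls so the API isn't hammered.
        await Task.Delay(TimeSpan.FromSeconds(2), ct);
    }
}
```

A fixed two-second delay keeps the example simple; a production client might use exponential backoff or a server-provided Retry-After hint instead.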
## Deploy with azd

The repo is wired up with the Azure Developer CLI for one-command provisioning and deployment:

```shell
git clone https://github.com/seligj95/app-service-multi-agent-maf-otel.git
cd app-service-multi-agent-maf-otel
azd auth login
azd up
```

`azd up` provisions the following resources via Bicep:

- Azure App Service (P0v4 Windows) with a continuous WebJob
- Azure Service Bus namespace and queue
- Azure Cosmos DB account, database, and containers
- Azure AI Services (Azure OpenAI with GPT-4o deployment)
- Application Insights and Log Analytics workspace
- Managed Identity with all necessary role assignments

After deployment completes, azd outputs the App Service URL. Open it in your browser, fill in the travel form, and watch six agents collaborate on your trip plan in real time.

## What's Next

We now have a production-ready multi-agent app running on App Service with the GA Microsoft Agent Framework. But how do you actually observe what these agents are doing? When six agents are making LLM calls, invoking tools, and passing context between phases, you need visibility into every step.

In the next post, we'll dive deep into how we instrumented these agents with OpenTelemetry and the new Agents (Preview) view in Application Insights — giving you full visibility into agent runs, token usage, tool calls, and model performance. You already saw the `.UseOpenTelemetry()` call in the builder pipeline; Blog 2 shows what that telemetry looks like end to end and how to light up the new Agents experience in the Azure portal. Stay tuned!

## Resources

- Sample repo — app-service-multi-agent-maf-otel
- Microsoft Agent Framework 1.0 GA Announcement
- Microsoft Agent Framework Documentation
- Previous Series — Part 3: Client-Side Multi-Agent Orchestration on App Service
- Microsoft.Extensions.AI Documentation
- Azure App Service Documentation

# Deploying to Azure Web App from Azure DevOps Using UAMI
TOC

- UAMI Configuration
- App Configuration
- Azure DevOps Configuration
- Logs

## UAMI Configuration

Create a User Assigned Managed Identity with no additional configuration. This identity will be referenced in later steps, particularly by its Object ID.

## App Configuration

On an existing Azure Web App, enable Diagnostic Settings and configure it to retain certain types of logs, such as Access Audit Logs. These logs will be discussed in the final section of this article.

Next, navigate to Access Control (IAM) and assign the previously created User Assigned Managed Identity the Website Contributor role.

## Azure DevOps Configuration

Go to Azure DevOps → Project Settings → Service Connections, and create a new ARM (Azure Resource Manager) connection. While creating the connection:

- Select the corresponding User Assigned Managed Identity
- Grant it appropriate permissions at the Resource Group level

During this process, you will be prompted to sign in again using your own account. This authentication will later be reflected in the deployment logs discussed below.

With a typical Azure Web App deployment task in the pipeline, you will notice that additional steps appear in the deployment process compared to traditional service principal–based authentication.

## Logs

A few minutes after deployment, related log records will appear. In the AppServiceAuditLogs table, you can observe that the deployment initiator is shown as the Object ID of the UAMI, and the Source is listed as Azure (DevOps). This indicates that the User Assigned Managed Identity is authorized under my user context, while the deployment action itself is initiated by Azure DevOps.

# Build and Host MCP Apps on Azure App Service
MCP Apps are here, and they're a game-changer for building AI tools with interactive UIs.

If you've been following the Model Context Protocol (MCP) ecosystem, you've probably heard about the MCP Apps spec — the first official MCP extension that lets your tools return rich, interactive UIs that render directly inside AI chat clients like Claude Desktop, ChatGPT, VS Code Copilot, Goose, and Postman. And here's the best part: you can host them on Azure App Service.

In this post, I'll walk you through building a weather widget MCP App and deploying it to App Service. You'll have a production-ready MCP server serving interactive UIs in under 10 minutes.

## What Are MCP Apps?

MCP Apps extend the Model Context Protocol by combining tools (the functions your AI client can call) with UI resources (the interactive interfaces that display the results). The pattern is simple:

1. A tool declares a `_meta.ui.resourceUri` in its metadata
2. When the tool is invoked, the MCP host fetches that UI resource
3. The UI renders in a sandboxed iframe inside the chat client

The key insight? MCP Apps are just web apps — HTML, JavaScript, and CSS served through MCP. And that's exactly what App Service does best.

The MCP Apps spec supports cross-client rendering, so the same UI works in Claude Desktop, VS Code Copilot, ChatGPT, and other MCP-enabled clients. Your weather widget, map viewer, or data dashboard becomes a universal component in the AI ecosystem.

## Why App Service for MCP Apps?

Azure App Service is a natural fit for hosting MCP Apps. Here's why:

- **Always On** — No cold starts. Your UI resources are served instantly, every time.
- **Easy Auth** — Secure your MCP endpoint with Entra ID authentication out of the box, no code required.
- **Custom domains + TLS** — Professional MCP server endpoints with your own domain and managed certificates.
- **Deployment slots** — Canary and staged rollouts for MCP App updates without downtime.
- **Sidecars** — Run backend services (Redis, message queues, monitoring agents) alongside your MCP server.
- **App Insights** — Built-in telemetry to see which tools and UIs are being invoked, response times, and error rates.

Now, these are all capabilities you can add to a production MCP App, but the sample we're building today keeps things simple. We're focusing on the core pattern: serving MCP tools with interactive UIs from App Service. The production features are there when you need them.

## When to Use Functions vs App Service for MCP Apps

Before we dive into the code, let's talk about Azure Functions. The Functions team has done great work with their MCP Apps quickstart, and if serverless is your preferred model, that's a fantastic option. Functions and App Service both host MCP Apps beautifully — they just serve different needs.

| | Azure Functions | Azure App Service |
|---|---|---|
| Best for | New, purpose-built MCP Apps that benefit from serverless scaling | MCP Apps that need always-on hosting, persistent state, or are part of larger web apps |
| Scaling | Scale to zero, pay per invocation | Dedicated plans, always running |
| Cold start | Possible (mitigated by premium plan) | None (Always On) |
| Deployment | `azd up` with Functions template | `azd up` with App Service template |
| MCP Apps quickstart | Available | This blog post! |
| Additional capabilities | Event-driven triggers, durable functions | Easy Auth, custom domains, deployment slots, sidecars |

Think of it this way: if you're building a new MCP App from scratch and want serverless economics, go with Functions. If you're adding MCP capabilities to an existing web app, need zero cold starts, or want production features like Easy Auth and deployment slots, App Service is your friend.

## Build the Weather Widget MCP App

Let's build a simple MCP App that fetches weather data from the Open-Meteo API and displays it in an interactive widget. The sample uses ASP.NET Core for the MCP server and Vite for the frontend UI.
Here's the structure:

```
app-service-mcp-app-sample/
├── src/
│   ├── Program.cs              # MCP server setup
│   ├── WeatherTool.cs          # Weather tool with UI metadata
│   ├── WeatherUIResource.cs    # MCP resource serving the UI
│   ├── WeatherService.cs       # Open-Meteo API integration
│   └── app/                    # Vite frontend (weather widget)
│       └── src/
│           └── weather-app.ts  # MCP Apps SDK integration
├── .vscode/
│   └── mcp.json                # VS Code MCP server config
├── azure.yaml                  # Azure Developer CLI config
└── infra/                      # Bicep infrastructure
```

### Program.cs — MCP Server Setup

The MCP server is an ASP.NET Core app that registers tools and UI resources:

```csharp
using ModelContextProtocol;

var builder = WebApplication.CreateBuilder(args);

// Register WeatherService
builder.Services.AddSingleton<WeatherService>(sp =>
    new WeatherService(WeatherService.CreateDefaultClient()));

// Add MCP Server with HTTP transport, tools, and resources
builder.Services.AddMcpServer()
    .WithHttpTransport(t => t.Stateless = true)
    .WithTools<WeatherTool>()
    .WithResources<WeatherUIResource>();

var app = builder.Build();

// Map MCP endpoints (no auth required for this sample)
app.MapMcp("/mcp").AllowAnonymous();

app.Run();
```

`AddMcpServer()` configures the MCP protocol handler. `WithHttpTransport()` enables Streamable HTTP with stateless mode (no session management needed). `WithTools<WeatherTool>()` registers our weather tool, and `WithResources<WeatherUIResource>()` registers the UI resource that the MCP host will fetch and render. `MapMcp("/mcp")` maps the MCP endpoint at `/mcp`.

### WeatherTool.cs — Tool with UI Metadata

The `WeatherTool` class defines the tool and uses the `[McpMeta]` attribute to declare a `ui` metadata block containing the `resourceUri`.
This tells the MCP host where to fetch the interactive UI:

```csharp
using System.ComponentModel;
using ModelContextProtocol.Server;

[McpServerToolType]
public class WeatherTool
{
    private readonly WeatherService _weatherService;

    public WeatherTool(WeatherService weatherService)
    {
        _weatherService = weatherService;
    }

    [McpServerTool]
    [Description("Get current weather for a location via Open-Meteo. Returns weather data that displays in an interactive widget.")]
    [McpMeta("ui", JsonValue = """{"resourceUri": "ui://weather/index.html"}""")]
    public async Task<object> GetWeather(
        [Description("City name to check weather for (e.g., Seattle, New York, Miami)")]
        string location)
    {
        var result = await _weatherService.GetCurrentWeatherAsync(location);
        return result;
    }
}
```

The key line is the `[McpMeta("ui", ...)]` attribute. This adds `_meta.ui.resourceUri` to the tool definition, pointing to the `ui://weather/index.html` resource. When the AI client calls this tool, the host fetches that resource and renders it in a sandboxed iframe alongside the tool result.

### WeatherUIResource.cs — UI Resource

The UI resource class serves the bundled HTML as an MCP resource with the `ui://` scheme and `text/html;profile=mcp-app` MIME type required by the MCP Apps spec:

```csharp
using ModelContextProtocol.Protocol;
using ModelContextProtocol.Server;

[McpServerResourceType]
public class WeatherUIResource
{
    [McpServerResource(
        UriTemplate = "ui://weather/index.html",
        Name = "weather_ui",
        MimeType = "text/html;profile=mcp-app")]
    public static ResourceContents GetWeatherUI()
    {
        var filePath = Path.Combine(
            AppContext.BaseDirectory, "app", "dist", "index.html");
        var html = File.ReadAllText(filePath);

        return new TextResourceContents
        {
            Uri = "ui://weather/index.html",
            MimeType = "text/html;profile=mcp-app",
            Text = html
        };
    }
}
```

The `[McpServerResource]` attribute registers this method as the handler for the `ui://weather/index.html` resource.
When the host fetches it, the bundled single-file HTML (built by Vite) is returned with the correct MIME type.

### WeatherService.cs — Open-Meteo API Integration

The `WeatherService` class handles geocoding and weather data from the Open-Meteo API. Nothing MCP-specific here — it's just a standard HTTP client that geocodes a city name and fetches current weather observations.

### The UI Resource (Vite Frontend)

The `app/` directory contains a TypeScript app built with Vite that renders the weather widget. It uses the `@modelcontextprotocol/ext-apps` SDK to communicate with the host:

```typescript
import { App } from "@modelcontextprotocol/ext-apps";

const app = new App({ name: "Weather Widget", version: "1.0.0" });

// Handle tool results from the server
app.ontoolresult = (params) => {
  const data = parseToolResultContent(params.content);
  if (data) render(data);
};

// Adapt to host theme (light/dark)
app.onhostcontextchanged = (ctx) => {
  if (ctx.theme) applyTheme(ctx.theme);
};

await app.connect();
```

The SDK's `App` class handles the postMessage communication with the host. When the tool returns weather data, `ontoolresult` fires and the widget renders the temperature, conditions, humidity, and wind. The app also adapts to the host's theme so it looks native in both light and dark mode.

The frontend is bundled into a single `index.html` file using Vite and the `vite-plugin-singlefile` plugin, which inlines all JavaScript and CSS. This makes it easy to serve as a single MCP resource.

## Run Locally

To run the sample locally, you'll need the .NET 9 SDK and Node.js 18+ installed. Clone the repo and run:

```shell
# Clone the repo
git clone https://github.com/seligj95/app-service-mcp-app-sample.git
cd app-service-mcp-app-sample

# Build the frontend
cd src/app
npm install
npm run build

# Run the MCP server
cd ..
dotnet run
```

The server starts on http://localhost:5000.
Now connect from VS Code Copilot:

1. Open your workspace in VS Code. The sample includes a `.vscode/mcp.json` that configures the local MCP server:

```json
{
  "servers": {
    "local-mcp-appservice": {
      "type": "http",
      "url": "http://localhost:5000/mcp"
    }
  }
}
```

2. Open the GitHub Copilot Chat panel
3. Ask: "What's the weather in Seattle?"

Copilot will invoke the `GetWeather` tool, and the interactive weather widget will render inline in the chat:

*Weather widget MCP App rendering inline in VS Code Copilot Chat*

## Deploy to Azure

Deploying to Azure is even easier. The sample includes an `azure.yaml` file and Bicep templates for App Service, so you can deploy with a single command:

```shell
cd app-service-mcp-app-sample
azd auth login
azd up
```

`azd up` will:

- Provision an App Service plan and web app in your subscription
- Build the .NET app and Vite frontend
- Deploy the app to App Service
- Output the public MCP endpoint URL

After deployment, azd will output a URL like `https://app-abc123.azurewebsites.net`. Update your `.vscode/mcp.json` to point to the remote server:

```json
{
  "servers": {
    "remote-weather-app": {
      "type": "http",
      "url": "https://app-abc123.azurewebsites.net/mcp"
    }
  }
}
```

From that point forward, your MCP App is live. Any AI client that supports MCP Apps can invoke your weather tool and render the interactive widget — no local server required.

## What's Next?

You've now built and deployed an MCP App to Azure App Service. Here's what you can explore next:

- Read the MCP Apps spec to understand the full capabilities of the extension, including input forms, persistent state, and multi-step workflows.
- Check out the ext-apps examples on GitHub — there are samples for map viewers, PDF renderers, system monitors, and more.
- Try the Azure Functions MCP Apps quickstart if you want to build a serverless MCP App.
- Learn about hosting remote MCP servers in App Service for more patterns and best practices.
- Clone the sample repo and customize it for your own use cases.
And remember: App Service gives you a full production hosting platform for your MCP Apps. You can add Easy Auth to secure your endpoints with Entra ID, wire up App Insights for telemetry, configure custom domains and TLS certificates, and set up deployment slots for blue/green rollouts. These features make App Service a great choice when you're ready to take your MCP App to production.

If you build something cool with MCP Apps and App Service, let me know — I'd love to see what you create!

# Agentic IIS Migration to Managed Instance on Azure App Service
## Introduction

Enterprises running ASP.NET Framework workloads on Windows Server with IIS face a familiar dilemma: modernize or stay put. The applications work, the infrastructure is stable, and nobody wants to be the person who breaks production during a cloud migration. But the cost of maintaining aging on-premises servers, patching Windows, and managing IIS keeps climbing.

Azure App Service has long been the lift-and-shift destination for these workloads. But what about applications that depend on Windows registry keys, COM components, SMTP relay, MSMQ queues, local file system access, or custom fonts? These OS-level dependencies have historically been migration blockers — forcing teams into expensive re-architecture or keeping them anchored to VMs.

Managed Instance on Azure App Service changes this equation entirely. And the IIS Migration MCP Server makes migration guided, intelligent, and safe — with AI agents that know what to ask, what to check, and what to generate at every step.

## What Is Managed Instance on Azure App Service?

Managed Instance on App Service is Azure's answer to applications that need OS-level customization beyond what standard App Service provides. It runs on the PremiumV4 (PV4) SKU with `IsCustomMode=true`, giving your app access to:

| Capability | What It Enables |
|---|---|
| Registry Adapters | Redirect Windows Registry reads to Azure Key Vault secrets — no code changes |
| Storage Adapters | Mount Azure Files, local SSD, or private VNET storage as drive letters (e.g., D:\, E:\) |
| install.ps1 Startup Script | Run PowerShell at instance startup to install Windows features (SMTP, MSMQ), register COM components, install MSI packages, deploy custom fonts |
| Custom Mode | Full access to the Windows instance for configuration beyond standard PaaS guardrails |

The key constraint: Managed Instance on App Service requires the PV4 SKU with `IsCustomMode=true`. No other SKU combination supports it.
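The capability split above amounts to a simple routing decision: each OS-level dependency maps to exactly one Managed Instance mechanism, under one non-negotiable SKU constraint. A toy Python sketch of that routing (purely illustrative, not part of any Microsoft tooling; the dependency keys are made up for this example):

```python
# Illustrative mapping of OS-level dependencies to Managed Instance mechanisms.
MECHANISMS = {
    "registry": "Registry Adapter (Key Vault-backed)",
    "local_files": "Storage Adapter (Azure Files / local SSD / VNET)",
    "smtp": "install.ps1 (enable SMTP Server feature)",
    "msmq": "install.ps1 (enable MSMQ feature)",
    "com": "install.ps1 (regsvr32 / RegAsm registration)",
    "fonts": "install.ps1 (copy fonts + register them)",
}

def plan(dependencies: list[str]) -> dict:
    """Return a Managed Instance provisioning plan for a set of dependencies."""
    unknown = [d for d in dependencies if d not in MECHANISMS]
    if unknown:
        raise ValueError(f"no Managed Instance mechanism known for: {unknown}")
    return {
        # The one non-negotiable: Managed Instance requires PV4 + custom mode.
        "sku": "PremiumV4",
        "is_custom_mode": True,
        "mechanisms": {d: MECHANISMS[d] for d in dependencies},
    }

print(plan(["registry", "smtp", "com"]))
```

This is the same "provisioning split" the MCP tools described later automate: platform-level needs become adapter configuration, OS-level needs become install.ps1 sections.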
## Why Managed Instance Matters for Legacy Apps

Consider a classic enterprise ASP.NET application that:

- Reads license keys from `HKLM\SOFTWARE\MyApp` in the Windows Registry
- Uses a COM component for PDF generation registered via regsvr32
- Sends email through a local SMTP relay
- Writes reports to `D:\Reports\` on a local drive
- Uses a custom corporate font for PDF rendering

With standard App Service, you'd need to rewrite every one of these dependencies. With Managed Instance on App Service, you can:

- Map registry reads to Key Vault secrets via Registry Adapters
- Mount Azure Files as `D:\` via Storage Adapters
- Enable SMTP Server via install.ps1
- Register the COM DLL via install.ps1 (regsvr32)
- Install the custom font via install.ps1

Please note: in the majority of cases, migrating web applications to Managed Instance on Azure App Service requires zero application code changes; but depending on your specific web app, some code changes may be necessary.

### Microsoft Learn Resources

- Managed Instance on App Service Overview
- Azure App Service Documentation
- App Service Migration Assistant Tool
- Migrate to Azure App Service
- Azure App Service Plans Overview
- PremiumV4 Pricing Tier
- Azure Key Vault
- Azure Files
- AppCat (.NET) — Azure Migrate Application and Code Assessment

## Why Agentic Migration? The Case for AI-Guided IIS Migration

### The Problem with Traditional Migration

Microsoft provides excellent PowerShell scripts for IIS migration — `Get-SiteReadiness.ps1`, `Get-SitePackage.ps1`, `Generate-MigrationSettings.ps1`, and `Invoke-SiteMigration.ps1`. They're free, well-tested, and reliable. So why wrap them in an AI-powered system?

Because the scripts are powerful but not intelligent. They execute what you tell them to. They don't tell you what to do.
Here's what a traditional migration looks like:

1. Run readiness checks — get a wall of JSON with cryptic check IDs like `ContentSizeCheck`, `ConfigErrorCheck`, `GACCheck`
2. Manually interpret 15+ readiness checks per site across dozens of sites
3. Decide whether each site needs Managed Instance or standard App Service (how?)
4. Figure out which dependencies need registry adapters vs. storage adapters vs. install.ps1 (the "Managed Instance provisioning split")
5. Write the install.ps1 script by hand for each combination of OS features
6. Author ARM templates for adapter configurations (Key Vault references, storage mount specs, RBAC assignments)
7. Wire together PackageResults.json → MigrationSettings.json with correct Managed Instance fields (`Tier=PremiumV4`, `IsCustomMode=true`)
8. Hope you didn't misconfigure anything before deploying to Azure

Even experienced Azure engineers find this time-consuming, error-prone, and tedious — especially across a fleet of 20, 50, or 100+ IIS sites.

### What Agentic Migration Changes

The IIS Migration MCP Server introduces an AI orchestration layer that transforms this manual grind into a guided conversation:

| Traditional Approach | Agentic Approach |
|---|---|
| Read raw JSON output from scripts | AI summarizes readiness as tables with plain-English descriptions |
| Memorize 15 check types and their severity | AI enriches each check with title, description, recommendation, and documentation links |
| Manually decide Managed Instance vs App Service | `recommend_target` analyzes all signals and recommends with confidence + reasoning |
| Write install.ps1 from scratch | `generate_install_script` builds it from detected features |
| Author ARM templates manually | `generate_adapter_arm_template` generates full templates with RBAC guidance |
| Wire JSON artifacts between phases by hand | Agents pass `readiness_results_path` → `package_results_path` → `migration_settings_path` automatically |
| Pray you set PV4 + IsCustomMode correctly | Enforced automatically — every tool validates Managed Instance constraints |
| Deploy and find out what broke | `confirm_migration` presents a full cost/resource summary before touching Azure |

The core value proposition: the AI knows the Managed Instance provisioning split. It knows that registry access needs an ARM template with Key Vault-backed adapters, while SMTP needs an install.ps1 section enabling the Windows SMTP Server feature. You don't need to know this. The system detects it from your IIS configuration and AppCat analysis, then generates exactly the right artifacts.

### Human-in-the-Loop Safety

Agentic doesn't mean autonomous. The system has explicit gates:

- Phase 1 → Phase 2: "Do you want to assess these sites, or skip to packaging?"
- Phase 3: "Here's my recommendation — Managed Instance for Site A (COM + Registry), standard for Site B. Agree?"
- Phase 4: "Review MigrationSettings.json before proceeding"
- Phase 5: "This will create billable Azure resources. Type 'yes' to confirm"

The AI accelerates the workflow; the human retains control over every decision.

## Quick Start

Clone and set up the MCP server:

```shell
git clone https://github.com/<your-org>/iis-migration-mcp.git
cd iis-migration-mcp
python -m venv .venv
.venv\Scripts\activate
pip install -r requirements.txt

# Download Microsoft's migration scripts (NOT included in this repo)
# From: https://appmigration.microsoft.com/api/download/psscripts/AppServiceMigrationScripts.zip
# Unzip to C:\MigrationScripts (or your preferred path)

# Start using in VS Code with Copilot
# 1. Copy .vscode/mcp.json.example → .vscode/mcp.json
# 2. Open folder in VS Code
# 3. In Copilot Chat: "Configure scripts path to C:\MigrationScripts"
# 4. Then: @iis-migrate "Discover my IIS sites"
```

The server also works with any MCP-compatible client — Claude Desktop, Cursor, Copilot CLI, or custom integrations — via stdio transport.
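If you prefer to author the `.vscode/mcp.json` by hand instead of copying the example file, a local stdio entry might look like the following sketch. The server name, interpreter path, and entry-point script are assumptions about the repo layout, not the verbatim contents of `mcp.json.example`:

```json
{
  "servers": {
    "iis-migrate": {
      "type": "stdio",
      "command": "C:\\repos\\iis-migration-mcp\\.venv\\Scripts\\python.exe",
      "args": ["server.py"]
    }
  }
}
```

A stdio entry launches the server as a child process and speaks JSON-RPC over its stdin/stdout, which matches the transport described in the architecture section.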
## Architecture: How the MCP Server Works

The system is built on the Model Context Protocol (MCP), an open protocol that lets AI assistants like GitHub Copilot, Claude, or Cursor call external tools through a standardized interface.

```
┌──────────────────────────────────────────────────────────────────┐
│ VS Code + Copilot Chat                                           │
│   @iis-migrate orchestrator agent                                │
│   ├── iis-discover    (Phase 1)                                  │
│   ├── iis-assess      (Phase 2)                                  │
│   ├── iis-recommend   (Phase 3)                                  │
│   ├── iis-deploy-plan (Phase 4)                                  │
│   └── iis-execute     (Phase 5)                                  │
└─────────────┬────────────────────────────────────────────────────┘
              │ stdio JSON-RPC (MCP Transport)
              ▼
┌──────────────────────────────────────────────────────────────────┐
│ FastMCP Server (server.py)                                       │
│   13 Python Tool Modules (tools/*.py)                            │
│     └── ps_runner.py (Python → PowerShell bridge)                │
│           └── Downloaded PowerShell Scripts (user-configured)    │
│                 ├── Local IIS (discovery, packaging)             │
│                 └── Azure ARM API (deployment)                   │
└──────────────────────────────────────────────────────────────────┘
```

The server exposes 13 MCP tools organized across 5 phases, orchestrated by 6 Copilot agents (1 orchestrator + 5 specialist subagents).

Important: The PowerShell migration scripts are not included in this repository. Users must download them from GitHub and configure the path using the `configure_scripts_path` tool. This ensures you always use the latest version of Microsoft's scripts, avoiding version mismatch issues.

## The 13 MCP Tools: Complete Reference

### Phase 0 — Setup

#### configure_scripts_path

Purpose: Point the server to Microsoft's downloaded migration PowerShell scripts. Before any migration work, you need to download the scripts from GitHub, unzip them, and tell the server where they are.

> "Configure scripts path to C:\MigrationScripts"

### Phase 1 — Discovery

#### 1. discover_iis_sites

Purpose: Scan the local IIS server and run readiness checks on every web site. This is the entry point for every migration.
It calls `Get-SiteReadiness.ps1` under the hood, which:

- Enumerates all IIS web sites, application pools, bindings, and virtual directories
- Runs 15 readiness checks per site (config errors, HTTPS bindings, non-HTTP protocols, TCP ports, location tags, app pool settings, app pool identity, virtual directories, content size, global modules, ISAPI filters, authentication, framework version, connection strings, and more)
- Detects source code artifacts (.sln, .csproj, .cs, .vb) near site physical paths

Output: `ReadinessResults.json` with per-site status:

| Status | Meaning |
|---|---|
| READY | No issues detected — clear for migration |
| READY_WITH_WARNINGS | Minor issues that won't block migration |
| READY_WITH_ISSUES | Non-fatal issues that need attention |
| BLOCKED | Fatal issues (e.g., content > 2GB) — cannot migrate as-is |

Requires: Administrator privileges, IIS installed.

#### 2. choose_assessment_mode

Purpose: Route each discovered site into the appropriate next step. After discovery, you decide the path for each site:

- `assess_all`: Run detailed assessment on all non-blocked sites
- `package_and_migrate`: Skip assessment, proceed directly to packaging (for sites you already know well)

The tool classifies each site into one of five actions:

- `assess_config_only` — IIS/web.config analysis
- `assess_config_and_source` — Config + AppCat source code analysis (when source is detected)
- `package` — Skip to packaging
- `blocked` — Fatal errors, cannot proceed
- `skip` — User chose to exclude

### Phase 2 — Assessment

#### 3. assess_site_readiness

Purpose: Get a detailed, human-readable readiness assessment for a specific site.
Takes the raw readiness data from Phase 1 and enriches each check with:

- Title: Plain-English name (e.g., "Global Assembly Cache (GAC) Dependencies")
- Description: What the check found and why it matters
- Recommendation: Specific guidance on how to resolve the issue
- Category: Grouping (Configuration, Security, Compatibility)
- Documentation Link: Microsoft Learn URL for further reading

This enrichment comes from `WebAppCheckResources.resx`, an XML resource file that maps check IDs to detailed metadata. Without this tool, you'd see `GACCheck: FAIL` — with it, you see the full context.

Output: Overall status, enriched failed/warning checks, framework version, pipeline mode, binding details.

#### 4. assess_source_code

Purpose: Parse a JSON report from the Azure Migrate application and code assessment for .NET (AppCat) to identify Managed Instance-relevant source code dependencies.

If your application has source code and you've run the assessment tool against it, this tool parses the results and maps findings to migration actions:

| Dependency Detected | Migration Action |
|---|---|
| Windows Registry access | Registry Adapter (ARM template) |
| Local file system I/O / hardcoded paths | Storage Adapter (ARM template) |
| SMTP usage | install.ps1 (SMTP Server feature) |
| COM Interop | install.ps1 (regsvr32/RegAsm) |
| Global Assembly Cache (GAC) | install.ps1 (GAC install) |
| Message Queuing (MSMQ) | install.ps1 (MSMQ feature) |
| Certificate access | Key Vault integration |

The tool matches rules from the assessment output against known Managed Instance-relevant patterns. For a complete list of rules and categories, see Interpret the analysis results.

Output: Issues categorized as mandatory/optional/potential, plus `install_script_features` and `adapter_features` lists that feed directly into Phase 3 tools.

### Phase 3 — Recommendation & Provisioning

#### 5. suggest_migration_approach

Purpose: Recommend the right migration tool/approach for the scenario. This is a routing tool that considers:

- Source code available? → Recommend the App Modernization MCP server for code-level changes
- No source code? → Recommend this IIS Migration MCP (lift-and-shift)
- OS customization needed? → Highlight Managed Instance on App Service as the target

#### 6. recommend_target

Purpose: Recommend the Azure deployment target for each site based on all assessment data. This is the intelligence center of the system. It analyzes config assessments and source code findings to recommend:

| Target | When Recommended | SKU |
|---|---|---|
| MI_AppService | Registry, COM, MSMQ, SMTP, local file I/O, GAC, or Windows Service dependencies detected | PremiumV4 (PV4) |
| AppService | Standard web app, no OS-level dependencies | PremiumV2 (PV2) |
| ContainerApps | Microservices architecture or container-first preference | N/A |

Each recommendation comes with:

- Confidence: high or medium
- Reasoning: Full explanation of why this target was chosen
- Managed Instance reasons: Specific dependencies that require Managed Instance
- Blockers: Issues that prevent migration entirely
- `install_script_features`: What the install.ps1 needs to enable
- `adapter_features`: What the ARM template needs to configure
- Provisioning guidance: Step-by-step instructions for what to do next

#### 7. generate_install_script

Purpose: Generate an install.ps1 PowerShell script for OS-level feature enablement on Managed Instance. This handles the OS-level side of the Managed Instance provisioning split. It generates a startup script that includes sections for:

| Feature | What the Script Does |
|---|---|
| SMTP | `Install-WindowsFeature SMTP-Server`, configure smart host relay |
| MSMQ | Install MSMQ, create application queues |
| COM/MSI | Run msiexec for MSI installers, regsvr32/RegAsm for COM registration |
| Crystal Reports | Install SAP Crystal Reports runtime MSI |
| Custom Fonts | Copy .ttf/.otf to C:\Windows\Fonts, register in registry |

The script can auto-detect needed features from config and source assessments, or you can specify them manually.
generate_adapter_arm_template Purpose: Generate an ARM template for Managed Instance registry and storage adapters. This handles the platform-level side of the Managed Instance provisioning split. It generates a deployable ARM template that configures: Registry Adapters (Key Vault-backed): Map Windows Registry paths (e.g., HKLM\SOFTWARE\MyApp\LicenseKey) to Key Vault secrets Your application reads the registry as before; Managed Instance redirects the read to Key Vault transparently Storage Adapters (three types): Type Description Credentials AzureFiles Mount Azure Files SMB share as a drive letter Storage account key in Key Vault Custom Mount storage over private endpoint via VNET Requires VNET integration LocalStorage Allocate local SSD on the Managed Instance as a drive letter None needed The template also includes: Managed Identity configuration RBAC role assignments guidance (Key Vault Secrets User, Storage File Data SMB Share Contributor, etc.) Deployment CLI commands ready to copy-paste Phase 4 — Deployment Planning & Packaging 9. plan_deployment Purpose: Plan the Azure App Service deployment — plans, SKUs, site assignments. Collects your Azure details (subscription, resource group, region) and creates a validated deployment plan: Assigns sites to App Service Plans Enforces PV4 + IsCustomMode=true for Managed Instance — won't let you accidentally use the wrong SKU Supports single_plan (all sites on one plan) or multi_plan (separate plans) Optionally queries Azure for existing Managed Instance plans you can reuse 10. package_site Purpose: Package IIS site content into ZIP files for deployment. Calls Get-SitePackage.ps1 to: Compress site binaries + web.config into deployment-ready ZIPs Optionally inject install.ps1 into the package (so it deploys alongside the app) Handle sites with non-fatal issues (configurable) Size limit: 2 GB per site (enforced by System.IO.Compression). 11. 
### 11. generate_migration_settings

Purpose: Create the MigrationSettings.json deployment configuration.

This is the final configuration artifact. It calls Generate-MigrationSettings.ps1 and then post-processes the output to inject Managed Instance-specific fields.

Important: The Managed Instance on App Service plan is not automatically created by the migration tools. You must pre-create the Managed Instance on App Service plan (PV4 SKU with IsCustomMode=true) in the Azure portal or via CLI before generating migration settings. When running generate_migration_settings, provide the name of your existing Managed Instance plan so the settings file references it correctly.

```json
{
  "AppServicePlan": "mi-plan-eastus",
  "Tier": "PremiumV4",
  "IsCustomMode": true,
  "InstallScriptPath": "install.ps1",
  "Region": "eastus",
  "Sites": [
    {
      "IISSiteName": "MyLegacyApp",
      "AzureSiteName": "mylegacyapp-azure",
      "SitePackagePath": "packagedsites/MyLegacyApp_Content.zip"
    }
  ]
}
```

## Phase 5 — Execution

### 12. confirm_migration

Purpose: Present a full migration summary and require explicit human confirmation.

Before touching Azure, this tool displays:

- Total plans and sites to be created
- SKU and pricing tier per plan
- Whether Managed Instance is configured
- Cost warning for PV4 pricing
- Resource group, region, and subscription details

Nothing proceeds until the user explicitly confirms.

### 13. migrate_sites

Purpose: Deploy everything to Azure App Service. This creates billable resources.

Calls Invoke-SiteMigration.ps1, which:

- Sets Azure subscription context
- Creates/validates resource groups
- Creates App Service Plans (PV4 with IsCustomMode for Managed Instance)
- Creates Web Apps
- Configures .NET version, 32-bit mode, pipeline mode from the original IIS settings
- Sets up virtual directories and applications
- Disables basic authentication (FTP + SCM) for security
- Deploys ZIP packages via Azure REST API

Output: MigrationResults.json with per-site Azure URLs, Resource IDs, and deployment status.
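The PV4 + IsCustomMode constraint that the tooling enforces can be expressed in a few lines. This is a hypothetical validator written for illustration, not part of the toolchain:

```python
# Hypothetical validator for the Managed Instance constraint described above:
# a Managed Instance plan must be PremiumV4 with IsCustomMode=true, and the
# settings must reference a pre-created plan by name.
def validate_settings(settings: dict) -> list[str]:
    """Return a list of constraint violations (empty means valid)."""
    errors = []
    if settings.get("IsCustomMode"):
        if settings.get("Tier") != "PremiumV4":
            errors.append("Managed Instance requires Tier=PremiumV4 with IsCustomMode=true")
        if not settings.get("AppServicePlan"):
            errors.append("reference a pre-created Managed Instance plan by name")
    return errors

good = {"AppServicePlan": "mi-plan-eastus", "Tier": "PremiumV4", "IsCustomMode": True}
bad = {"Tier": "PremiumV2", "IsCustomMode": True}
print(validate_settings(good))  # []
print(validate_settings(bad))
```

Running a check like this before Phase 5 is the spirit of plan_deployment's "won't let you accidentally use the wrong SKU" guarantee.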
## The 6 Copilot Agents

The MCP tools are orchestrated by a team of specialized Copilot agents — each responsible for a specific phase of the migration lifecycle.

### @iis-migrate — The Orchestrator

The root agent that guides the entire migration. It:

- Tracks progress across all 5 phases using a todo list
- Delegates work to specialist subagents
- Gates between phases — asks before transitioning
- Enforces the Managed Instance constraint (PV4 + IsCustomMode) at every decision point
- Never skips the Phase 5 confirmation gate

Usage: Open Copilot Chat and type @iis-migrate I want to migrate my IIS applications to Azure

### iis-discover — Discovery Specialist

Handles Phase 1. Runs discover_iis_sites, presents a summary table of all sites with their readiness status, and asks whether to assess or skip to packaging. Returns readiness_results_path and per-site routing plans.

### iis-assess — Assessment Specialist

Handles Phase 2. Runs assess_site_readiness for every site, and assess_source_code when AppCat results are available. Merges findings, highlights Managed Instance-relevant issues, and produces the adapter/install features lists that drive Phase 3.

### iis-recommend — Recommendation Specialist

Handles Phase 3. Runs recommend_target for each site, then conditionally generates install.ps1 and ARM adapter templates. Presents all recommendations with confidence levels and reasoning, and allows you to edit generated artifacts.

### iis-deploy-plan — Deployment Planning Specialist

Handles Phase 4. Collects Azure details, runs plan_deployment, package_site, and generate_migration_settings. Validates Managed Instance configuration and allows review and editing of MigrationSettings.json. Does not execute migration.

### iis-execute — Execution Specialist

Handles Phase 5 only. Runs confirm_migration to present the final summary, then proceeds with migrate_sites only after receiving explicit "yes" confirmation. Reports results with Azure URLs and deployment status.
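The gate-between-phases behavior the orchestrator enforces can be sketched as a simple state machine. This is purely illustrative: the real agent is prompt-driven rather than hard-coded, and the phase names here are shorthand:

```python
# Illustrative phase gate: never advance without explicit confirmation,
# so the Phase 5 execution step can never be reached silently.
PHASES = ["discover", "assess", "recommend", "plan", "execute"]

def next_phase(current: str, user_reply: str) -> str:
    """Advance only on an explicit 'yes'; otherwise stay on the current phase."""
    i = PHASES.index(current)
    if i == len(PHASES) - 1:
        return current  # nothing after execute
    if user_reply.strip().lower() == "yes":  # the confirmation gate
        return PHASES[i + 1]
    return current

print(next_phase("plan", "yes"))    # execute
print(next_phase("plan", "maybe"))  # plan
```

The design point is that "no reply" and "any reply other than yes" are treated identically: the workflow stays put, which is what makes the Phase 5 gate safe.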
## The Managed Instance Provisioning Split: A Critical Concept

One of the most important ideas Managed Instance introduces is the provisioning split — the division of OS dependencies into two categories that are configured through different mechanisms:

| ARM Template (Platform-Level) | install.ps1 (OS-Level) |
| --- | --- |
| Registry Adapters → Key Vault secrets | COM/MSI Registration → regsvr32, RegAsm, msiexec |
| Storage Mounts → Azure Files, Local SSD, VNET private storage | SMTP Server Feature → Install-WindowsFeature |
| | MSMQ → Message queue setup |
| | Crystal Reports Runtime → SAP MSI installer |
| | Custom Fonts → Copy to C:\Windows\Fonts |

The MCP server handles this split automatically:

- assess_source_code detects which dependencies fall into which category
- recommend_target reports both adapter_features and install_script_features
- generate_adapter_arm_template builds the ARM template for platform features
- generate_install_script builds the PowerShell startup script for OS features

You don't need to remember which goes where — the system decides and generates the right artifacts.

## End-to-End Walkthrough: From Discovery to Running on Managed Instance

Here's what a complete migration conversation looks like:

You: "@iis-migrate I want to migrate my IIS applications to Azure"

Phase 1 — Discovery: Agent runs discover_iis_sites, presents a table:

| Site | Status | Framework | Source Code? |
| --- | --- | --- | --- |
| HRPortal | READY_WITH_ISSUES | v4.8 | Yes (.sln found) |
| PayrollAPI | READY | v4.8 | No |
| IntranetCMS | BLOCKED (>2GB) | v4.7.2 | No |

Phase 2 — Assessment: Agent runs assess_site_readiness for HRPortal — finds GACCheck and RegistryCheck failures.
Runs assess_source_code using the AppCat report — confirms COM interop, registry access, and SMTP usage.

Phase 3 — Recommendation: Agent runs recommend_target:

- HRPortal → MI_AppService (high confidence) — COM, registry, SMTP dependencies
- PayrollAPI → AppService (high confidence) — no OS dependencies

Generates install.ps1 for HRPortal (SMTP + COM sections). Generates ARM template with registry adapter (Key Vault-backed) for HRPortal.

Phase 4 — Deployment Planning: Agent collects subscription/RG/region, validates PV4 availability. Packages both sites. Generates MigrationSettings.json with two plans:

- mi-plan-hrportal (PremiumV4, IsCustomMode=true) — HRPortal
- std-plan-payrollapi (PremiumV2) — PayrollAPI

Phase 5 — Execution: Agent shows the full summary with cost projection. You type "yes". Sites deploy. You get Azure URLs within minutes.

## Prerequisites & Setup

| Requirement | Purpose |
| --- | --- |
| Windows Server with IIS | Source server for discovery and packaging |
| PowerShell 5.1 | Runs migration scripts (ships with Windows) |
| Python 3.10+ | MCP server runtime |
| Administrator privileges | Required for IIS discovery, packaging, and migration |
| Azure subscription | Target for deployment (execution phase only) |
| Azure PowerShell (Az module) | Deploy to Azure (execution phase only) |
| Migration Scripts ZIP | Microsoft's PowerShell migration scripts |
| AppCat CLI | Source code analysis (optional) |
| FastMCP (mcp[cli]>=1.0.0) | MCP server framework |

## Data Flow & Artifacts

Every phase produces JSON artifacts that chain into the next phase:

```
Phase 1: discover_iis_sites ───────→ ReadinessResults.json
                                            │
Phase 2: assess_site_readiness ◄────────────┘
         assess_source_code ───────→ Assessment JSONs
                                            │
Phase 3: recommend_target ◄─────────────────┘
         generate_install_script ──→ install.ps1
         generate_adapter_arm ─────→ mi-adapters-template.json
                                            │
Phase 4: package_site ─────────────→ PackageResults.json + site ZIPs
         generate_migration_settings → MigrationSettings.json
                                            │
Phase 5: confirm_migration ◄────────────────┘
         migrate_sites ────────────→ MigrationResults.json
                                            │
                                            ▼
                             Apps live on Azure *.azurewebsites.net
```

Each artifact is inspectable, editable, and auditable — providing a complete record of what was assessed, recommended, and deployed.

## Error Handling

The MCP server classifies errors into actionable categories:

| Error | Cause | Resolution |
| --- | --- | --- |
| ELEVATION_REQUIRED | Not running as Administrator | Restart VS Code / terminal as Admin |
| IIS_NOT_FOUND | IIS or WebAdministration module missing | Install IIS role + WebAdministration |
| AZURE_NOT_AUTHENTICATED | Not logged into Azure PowerShell | Run Connect-AzAccount |
| SCRIPT_NOT_FOUND | Migration scripts path not configured | Run configure_scripts_path |
| SCRIPT_TIMEOUT | PowerShell script exceeded time limit | Check IIS server responsiveness |
| OUTPUT_NOT_FOUND | Expected JSON output wasn't created | Verify script execution succeeded |

## Conclusion

The IIS Migration MCP Server turns what used to be a multi-week, expert-driven project into a guided conversation. It combines Microsoft's battle-tested migration PowerShell scripts with AI orchestration that understands the nuances of Managed Instance on App Service — the provisioning split, the PV4 constraint, the adapter configurations, and the OS-level customizations.

Whether you're migrating 1 site or 10, agentic migration reduces risk, eliminates guesswork, and produces auditable artifacts at every step. The human stays in control; the AI handles the complexity.

Get started: Download the migration scripts, set up the MCP server, and ask @iis-migrate to discover your IIS sites. The agents will take it from there.

This project is compatible with any MCP-enabled client: VS Code GitHub Copilot, Claude Desktop, Cursor, and more. The intelligence travels with the server, not the client.

# Announcing the Public Preview of the New App Service Quota Self-Service Experience
Update 10/30/2025: The App Service Quota Self-Service experience is back online after a short period where we were incorporating your feedback and making needed updates. As this is public preview, availability and features are subject to change as we receive and incorporate feedback.

## What's New?

The updated experience introduces a dedicated App Service Quota blade in the Azure portal, offering a streamlined and intuitive interface to:

- View current usage and limits across the various SKUs
- Set custom quotas tailored to your App Service plan needs

This new experience empowers developers and IT admins to proactively manage resources, avoid service disruptions, and optimize performance.

## Quick Reference - Start here!

- If your deployment requires quota for ten or more subscriptions, then file a support ticket with problem type Quota following the instructions at the bottom of this post.
- If any subscription included in your request requires zone redundancy (note that most Isolated v2 deployments require ZR), then file a support ticket with problem type Quota following the instructions at the bottom of this post.
- Otherwise, leverage the new self-service experience to increase your quota automatically.

## Self-service Quota Requests

For non-zone-redundant needs, quota alone is sufficient to enable App Service deployment or scale-out. Follow the provided steps to place your request.

1. Navigate to the Quotas resource provider in the Azure portal

2. Select App Service (Public Preview)

Navigating the primary interface: Each App Service VM size is represented as a separate SKU. If the intention is to be able to scale up or down within a specific offering (e.g., Premium v3), then an equivalent number of VMs needs to be requested for each applicable size of that offering (e.g., request 5 instances for both P1v3 and P3v3). As with other quotas, you can filter by region, subscription, provider, or usage.
Note that your portal will now show "App Service (Public Preview)" for the Provider name. You can also group the results by usage, quota (App Service VM type), or location (region). Current usage is represented as App Service VMs. This allows you to quickly identify which SKUs are nearing their quota limits. Adjustments can be made inline: no need to visit another page. This is covered in detail in the next section.

Total Regional VMs: There is a SKU in each region called Total Regional VMs. This SKU summarizes your usage and available quota across all individual SKUs in that region. There are three key points about using Total Regional VMs:

1. You should never request Total Regional VMs quota directly - it will automatically increase in response to your request for individual SKU quota.
2. If you are unable to deploy a given SKU, then you must request more quota for that SKU to unblock deployment. For your deployment to succeed, you must have sufficient quota in the individual SKU as well as Total Regional VMs. If either usage is at its respective limit, then you will be unable to deploy and must request more of that individual SKU's quota to proceed.
3. In some regions, Total Regional VMs appears as "0 of 0" usage and limit, and no individual SKU quotas are shown. This is an indication that you should not interact with the portal to resolve any quota-related issues in this region. Instead, try the deployment and observe any error messages that arise. If an error message indicates more quota is needed, request it by filing a support ticket with problem type Quota following the instructions at the bottom of this post so that App Service can identify and fix any potential quota issues. In most cases this will not be necessary, and your deployment will work without requesting quota wherever "0 of 0" is shown for Total Regional VMs and no individual SKU quotas are visible. See the example below.
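Point 2 above boils down to a two-level check: a deployment needs headroom in both the individual SKU quota and the Total Regional VMs rollup. Roughly (illustrative logic only, not an Azure API):

```python
# Illustrative check for the rule above: a deployment succeeds only if
# BOTH the individual SKU quota and Total Regional VMs have headroom.
# The numbers and function are made up for the example; this is not an Azure API.
def can_deploy(sku_usage: int, sku_limit: int,
               regional_usage: int, regional_limit: int,
               new_vms: int) -> bool:
    """True when adding new_vms stays within both quota levels."""
    return (sku_usage + new_vms <= sku_limit and
            regional_usage + new_vms <= regional_limit)

print(can_deploy(sku_usage=3, sku_limit=5, regional_usage=10, regional_limit=20, new_vms=2))  # True
print(can_deploy(sku_usage=3, sku_limit=5, regional_usage=19, regional_limit=20, new_vms=2))  # False: regional cap hit
```

Since Total Regional VMs grows automatically when an individual SKU request is granted, the practical takeaway matches the guidance above: always request the individual SKU quota, never the regional rollup.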
3. Request quota adjustments

Clicking the pen icon opens a flyout window to capture the quota request. The quota type (App Service SKU) is already populated, along with current usage. Note that your request is not incremental: you must specify the new limit that you wish to see reflected in the portal. For example, to request two additional instances of P1v2 VMs, you would file the request accordingly. Click submit to send the request for automatic processing.

How quota approvals work: Immediately upon submitting a quota request, you will see a processing dialog. If the quota request can be automatically fulfilled, then no support request is needed; you should receive confirmation within a few minutes of submission. If the request cannot be automatically fulfilled, then you will be given the option to file a support request with the same information — for example, when the requested new limit exceeds what can be automatically granted for the region.

4. If applicable, create support ticket

When creating a support ticket, you will need to repopulate the Region and App Service plan details; the new limit has already been populated for you. If you forget the region or SKU that was requested, you can reference them in your notifications pane. If you choose to create a support ticket, then you will interact with the capacity management team for that region. This is a 24x7 service, so requests may be created at any time. Once you have filed the support request, you can track its status via the Help + support dashboard.

## Known issues

The self-service quota request experience for App Service is in public preview. Here are some caveats worth mentioning while the team finalizes the release for general availability:

Closing the quota request flyout window will stop meaningful notifications for that request.
You can still view the outcome of your quota requests by checking actual quota, but if you want to rely on notifications for alerts, then we recommend leaving the quota request window open for the few minutes that it is processing.

- Some SKUs are not yet represented in the quota dashboard. These will be added later in the public preview.
- The Activity Log does not currently provide a meaningful summary of previous quota requests and their outcomes. This will also be addressed during the public preview.
- As noted in the walkthrough, the new experience does not enable zone-redundant deployments. Quota is an inherently regional construct, and zone-redundant enablement requires a separate step that can only be taken in response to a support ticket being filed.
- Quota API documentation is being drafted to enable bulk non-zone-redundant quota requests without requiring you to file a support ticket.

## Filing a Support Ticket

If your deployment requires zone redundancy or contains many subscriptions, then we recommend filing a support ticket with issue type "Technical" and problem type "Quota".

## We want your feedback!

If you notice any aspect of the experience that does not work as expected, or you have feedback on how to make it better, please use the comments below to share your thoughts!

# MCP Apps on Azure Functions: Quick Start with TypeScript
Azure Functions makes hosting MCP apps simple: build locally, create a secure endpoint, and deploy fast with Azure Developer CLI (azd). This guide shows you how, using a weather app example.

## What Are MCP Apps?

MCP Apps let MCP servers return interactive HTML interfaces — such as data visualizations, forms, and dashboards — that render directly inside MCP-compatible hosts (Visual Studio Code Copilot, Claude, ChatGPT, etc.). Learn more about MCP Apps in the official documentation.

Having an interactive UI removes many of the restrictions of plain text, for example when your scenario involves:

- Interactive Data: Replacing lists with clickable maps or charts for deep exploration.
- Complex Setup: Use one-page forms instead of long, back-and-forth questioning.
- Rich Media: Embed native viewers to pan, zoom, or rotate 3D models and documents.
- Live Updates: Maintain real-time dashboards that refresh without new prompts.
- Workflow Management: Handle multi-step tasks like approvals with navigation buttons and persistent state.

## MCP App Hosting as a Feature

Azure Functions provides an easy abstraction to help you build MCP servers without having to learn the nitty-gritty of the MCP protocol. When hosting your MCP App on Functions, you get:

- MCP tools (server logic): Handle client requests, call backend services, return structured data - Azure Functions manages the MCP protocol details for you
- MCP resources (UI payloads such as app widgets): Serve interactive HTML, JSON documents, or formatted content - just focus on your UI logic
- Secure HTTPS access: Built-in authentication using Azure Functions keys, plus built-in MCP authentication with OAuth support for enterprise-grade security
- Easy deployment with Bicep and azd: Infrastructure as Code for reliable deployments
- Local development: Test and debug locally before deploying
- Auto-scaling: Azure Functions handles scaling, retries, and monitoring automatically

The weather app in this repo is an example of this feature, not the only use case.
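To picture what "serving a UI payload as a resource" means on the wire, here is a rough sketch of a resource read result carrying the widget HTML. The field names approximate the MCP resources/read response shape; treat the details as illustrative rather than authoritative:

```python
# Rough sketch of a resource read result returning widget HTML.
# Field names approximate the MCP resources/read response; illustrative only.
import json

widget_html = "<!doctype html><html><body><div id='weather'>Loading…</div></body></html>"

response = {
    "contents": [
        {
            "uri": "ui://weather/index.html",
            "mimeType": "text/html;profile=mcp-app",  # profile marks it as an MCP App UI
            "text": widget_html,
        }
    ]
}
print(json.dumps(response)[:60])
```

The mimeType with the mcp-app profile is what tells an MCP App-aware host to render the payload as an interactive surface instead of plain text.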
## Architecture Overview

### Example: The classic Weather App

The sample implementation includes:

- A GetWeather MCP tool that fetches weather by location (calls Open-Meteo geocoding and forecast APIs)
- A Weather Widget MCP resource that serves interactive HTML/JS code (runs in the client; fetches data via the GetWeather tool)
- A TypeScript service layer that abstracts API calls and data transformation (runs on the server)
- Bidirectional communication: the client-side UI calls server-side tools, receives data, and renders locally
- A local and remote testing flow for MCP clients (via MCP Inspector, VS Code, or custom clients)

### How UI Rendering Works in MCP Apps

In the Weather App example:

1. Azure Functions serves getWeatherWidget as a resource → returns weather-app.ts compiled to HTML/JS
2. The client renders the Weather Widget UI
3. The user interacts with the widget, or requests are made internally
4. The widget calls the getWeather tool → the server processes and returns weather data
5. The widget renders the weather data on the client side

This architecture keeps the UI responsive locally while using server-side logic and data on demand.

## Quick Start

Check out the repository: https://github.com/Azure-Samples/remote-mcp-functions-typescript

Run locally:

```shell
npm install
npm run build
func start
```

Local endpoint: http://0.0.0.0:7071/runtime/webhooks/mcp

Deploy to Azure:

```shell
azd provision
azd deploy
```

Remote endpoint: https://.azurewebsites.net/runtime/webhooks/mcp

## TypeScript MCP Tools Snippet (GetWeather service)

In Azure Functions, you define MCP tools using app.mcpTool(). The toolName and description tell clients what this tool does, toolProperties defines the input arguments (like location as a string), and handler points to your function that processes the request.
```typescript
app.mcpTool("getWeather", {
  toolName: "GetWeather",
  description: "Returns current weather for a location via Open-Meteo.",
  toolProperties: {
    location: arg.string().describe("City name to check weather for")
  },
  handler: getWeather,
});
```

## Resource Trigger Snippet (Weather App Hook)

MCP resources are defined using app.mcpResource(). The uri is how clients reference this resource, resourceName and description provide metadata, mimeType tells clients what type of content to expect, and handler is your function that returns the actual content (like HTML for a widget).

```typescript
app.mcpResource("getWeatherWidget", {
  uri: "ui://weather/index.html",
  resourceName: "Weather Widget",
  description: "Interactive weather display for MCP Apps",
  mimeType: "text/html;profile=mcp-app",
  handler: getWeatherWidget,
});
```

## Sample repos and references

- Complete sample repository with TypeScript implementation: https://github.com/Azure-Samples/remote-mcp-functions-typescript
- Official MCP extension documentation: https://learn.microsoft.com/azure/azure-functions/functions-bindings-mcp?pivots=programming-language-typescript
- Java sample: https://github.com/Azure-Samples/remote-mcp-functions-java
- .NET sample: https://github.com/Azure-Samples/remote-mcp-functions-dotnet
- Python sample: https://github.com/Azure-Samples/remote-mcp-functions-python
- MCP Inspector: https://github.com/modelcontextprotocol/inspector

## Final Takeaway

MCP Apps are just MCP servers, but they represent a paradigm shift by transforming the AI from a text-based chatbot into a functional interface. Instead of forcing users to navigate complex tasks through back-and-forth conversations, these apps embed interactive UIs and tools directly into the chat, significantly improving the user experience and the usefulness of MCP servers. Azure Functions allows developers to quickly build and host an MCP app by providing an easy abstraction and deployment experience.
The platform also provides built-in features to secure and scale your MCP apps, plus a serverless pricing model so you can just focus on the business logic.

# Announcing general availability for the Azure SRE Agent
Today, we're excited to announce the General Availability (GA) of Azure SRE Agent — your AI-powered operations teammate that helps organizations improve uptime, reduce incident impact, and cut operational toil by accelerating diagnosis and automating response workflows.

# Announcing a flexible, predictable billing model for Azure SRE Agent
Billing for Azure SRE Agent will start on September 1, 2025. Announced at Microsoft Build 2025, Azure SRE Agent is a pre-built AI agent for root cause analysis, uptime improvement, and operational cost reduction. Learn more about the billing model and example scenarios.

# Continued Investment in Azure App Service
This blog was originally published to the App Service team blog.

## Recent Investments

### Premium v4 (Pv4)

Azure App Service Premium v4 delivers higher performance and scalability on newer Azure infrastructure while preserving the fully managed PaaS experience developers rely on. Premium v4 offers expanded CPU and memory options, improved price-performance, and continued support for App Service capabilities such as deployment slots, integrated monitoring, and availability zone resiliency. These improvements help teams modernize and scale demanding workloads without taking on additional operational complexity.

### App Service Managed Instance

App Service Managed Instance extends the App Service model to support Windows web applications that require deeper environment control. It enables plan-level isolation, optional private networking, and operating system customization while retaining managed scaling, patching, identity, and diagnostics. Managed Instance is designed to reduce migration friction for existing applications, allowing teams to move to a modern PaaS environment without code changes.

### Faster Runtime and Language Support

Azure App Service continues to invest in keeping pace with modern application stacks. Regular updates across .NET, Node.js, Python, Java, and PHP help developers adopt new language versions and runtime improvements without managing underlying infrastructure.

### Reliability and Availability Improvements

Ongoing investments in platform reliability and resiliency strengthen production confidence. Expanded Availability Zone support and related infrastructure improvements help applications achieve higher availability with more flexible configuration options as workloads scale.

### Deployment Workflow Enhancements

Deployment workflows across Azure App Service continue to evolve, with ongoing improvements to GitHub Actions, Azure DevOps, and platform tooling.
These enhancements reduce friction from build to production while preserving the managed App Service experience.

## A Platform That Grows With You

These recent investments reflect a consistent direction for Azure App Service: active development focused on performance, reliability, and developer productivity. Improvements to runtimes, infrastructure, availability, and deployment workflows are designed to work together, so applications benefit from platform progress without needing to re-architect or change operating models.

The recent General Availability of Aspire on Azure App Service is another example of this direction. Developers building distributed .NET applications can now use the Aspire AppHost model to define, orchestrate, and deploy their services directly to App Service — bringing a code-first development experience to a fully managed platform.

We are also seeing many customers build and run AI-powered applications on Azure App Service, integrating models, agents, and intelligent features directly into their web apps and APIs. App Service continues to evolve to support these scenarios, providing a managed, scalable foundation that works seamlessly with Azure's broader AI services and tooling.

Whether you are modernizing with Premium v4, migrating existing workloads using App Service Managed Instance, or running production applications at scale - including AI-enabled workloads - Azure App Service provides a predictable and transparent foundation that evolves alongside your applications. Azure App Service continues to focus on long-term value through sustained investment in a managed platform developers can rely on as requirements grow, change, and increasingly incorporate AI.

## Get Started

Ready to build on Azure App Service? Here are some resources to help you get started:

- Create your first web app — Deploy a web app in minutes using the Azure portal, CLI, or VS Code.
- App Service documentation — Explore guides, tutorials, and reference for the full platform.
- Aspire on Azure App Service — Now generally available. Deploy distributed .NET applications to App Service using the Aspire AppHost model.
- Pricing and plans — Compare tiers including Premium v4 and find the right fit for your workload.
- App Service on Azure Architecture Center — Reference architectures and best practices for production deployments.