Apps on Azure Blog

From Local MCP Server to Hosted Web Agent: App Service Observability, Part 2

jordanselig
Feb 11, 2026

In Part 1, we introduced the App Service Observability MCP Server — a proof-of-concept that lets GitHub Copilot (and other AI assistants) query your App Service logs, analyze errors, and help debug issues through natural language. That version runs locally alongside your IDE, and it's great for individual developers who want to investigate their apps without leaving VS Code.

A local MCP server is powerful, but it's personal. Your teammate has to clone the repo, configure their IDE, and run it themselves. What if your on-call engineer could just open a browser and start asking questions? What if your whole team had a shared observability assistant — no setup required?

In this post, we'll show how we took the same set of MCP tools and wrapped them in a hosted web application — deployed to Azure App Service with a chat UI and a built-in Azure OpenAI agent. We'll cover what changed, what stayed the same, and why this pattern opens the door to far more than just a web app.

Quick Recap: The Local MCP Server

If you haven't read Part 1, here's the short version:

We built an MCP (Model Context Protocol) server that exposes ~15 observability tools for App Service — things like querying Log Analytics, fetching Kudu container logs, analyzing HTTP errors, correlating deployments with failures, and checking logging configurations. You point your AI assistant (GitHub Copilot, Claude, etc.) at the server, and it calls those tools on your behalf to answer questions about your apps.

That version:

  • Runs locally on your machine as a Node.js process
  • Uses stdio transport (your IDE spawns the process)
  • Relies on your Azure credentials (az login) — the AI operates with your exact permissions
  • Requires no additional Azure resources

It works. It's fast. And for a developer investigating their own apps, it's the simplest path. This is still a perfectly valid way to use the project — nothing about the hosted version replaces it.
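To make the recap concrete, here's a minimal sketch of how one of those observability tools could be registered with the MCP TypeScript SDK. The tool name, parameter shape, and Log Analytics call are illustrative assumptions rather than the repo's exact implementation:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { DefaultAzureCredential } from "@azure/identity";
import { LogsQueryClient } from "@azure/monitor-query";
import { z } from "zod";

const server = new McpServer({ name: "appservice-observability", version: "1.0.0" });
const logsClient = new LogsQueryClient(new DefaultAzureCredential());

// Hypothetical tool: run a KQL query against a Log Analytics workspace.
server.tool(
  "query_log_analytics",
  "Run a KQL query against a Log Analytics workspace",
  { workspaceId: z.string(), query: z.string() },
  async ({ workspaceId, query }) => {
    const result = await logsClient.queryWorkspace(workspaceId, query, {
      duration: "PT1H", // only look at the last hour
    });
    return { content: [{ type: "text", text: JSON.stringify(result) }] };
  }
);
```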

The Problem: Sharing Is Hard

The local MCP server has a limitation: it's tied to one developer's machine and IDE. In practice, this means:

  • On-call engineers need to clone the repo and configure their environment before they can use it
  • Team leads can't point someone at a URL and say "go investigate"
  • Non-IDE users (PMs, support engineers) are left out entirely
  • Consistent configuration (which subscription, which resource group) has to be managed per-person

We wanted to keep the same tools and the same observability capabilities, but make them accessible to anyone with a browser.

The Solution: Host It on App Service

The answer turned out to be straightforward: deploy the MCP server itself to Azure App Service, give it a web frontend, and bring its own AI agent along for the ride.

Here's what the hosted version adds on top of the local MCP server:

| | Local MCP Server | Hosted Web Agent |
| --- | --- | --- |
| How it works | Runs locally; your IDE's AI calls the tools | Deployed to Azure App Service with its own AI agent |
| Interface | VS Code, Claude Desktop, or any MCP client | Browser-based chat UI |
| Agent | Your existing AI assistant (Copilot, Claude, etc.) | Built-in Azure OpenAI (GPT-5-mini) |
| Azure resources needed | None beyond az login | App Service, Azure OpenAI, VNet |
| Best for | Individual developers in their IDE | Teams who want a shared, centralized tool |
| Authentication | Your local az login credentials | Managed identity + Easy Auth (Entra ID) |
| Deploy | npm install && npm run build | azd up |

The key insight: the MCP tools are identical. Both versions use the exact same set of observability tools — the only difference is who's calling them (your IDE's AI vs. the built-in Azure OpenAI agent) and where the server runs (your laptop vs. App Service).
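Here's roughly what that looks like in code. The sketch below assumes a registerObservabilityTools helper (the name is an assumption, standing in for the project's real tool registration) and shows the same server connected over stdio for the local case and over Streamable HTTP for the hosted case:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

// Stand-in for the project's actual tool registration (assumption for this sketch).
function registerObservabilityTools(s: McpServer) {
  // s.tool("query_log_analytics", ...), s.tool("get_container_logs", ...), etc.
}

const server = new McpServer({ name: "appservice-observability", version: "1.0.0" });
registerObservabilityTools(server); // identical tool set in both modes

if (process.env.MCP_TRANSPORT === "stdio") {
  // Local mode: the IDE spawns this process and talks over stdin/stdout.
  await server.connect(new StdioServerTransport());
} else {
  // Hosted mode: Express forwards requests on /mcp to this transport.
  const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
  await server.connect(transport);
  // e.g. app.post("/mcp", (req, res) => transport.handleRequest(req, res, req.body));
}
```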

What We Built

Architecture

┌─────────────────────────────────────────────────────────────────────────────┐
│                              Web Browser                                    │
│   React Chat UI — resource selectors, tool steps, markdown responses        │
└──────────────────────────────────┬──────────────────────────────────────────┘
                                   │ HTTP (REST API)
                                   ▼
┌─────────────────────────────────────────────────────────────────────────────┐
│                     Azure App Service (Node.js 20)                          │
│  ┌──────────────────────────────────────────────────────────────────────┐   │
│  │  Express Server                                                      │   │
│  │  ├── /api/chat         → Agent loop (OpenAI → tool calls → respond)  │   │
│  │  ├── /api/set-context  → Set target app for investigation            │   │
│  │  ├── /api/resource-groups, /api/apps → Resource discovery            │   │
│  │  ├── /mcp              → MCP protocol endpoint (Streamable HTTP)     │   │
│  │  └── /                 → Static SPA (React chat UI)                  │   │
│  └──────────────────────────────────────────────────────────────────────┘   │
│                    VNet Integration (snet-app)                              │
└─────────────────────────────────────────────────────────────────────────────┘
          │                        │                              │
          ▼                        ▼                              ▼
   ┌──────────────┐    ┌───────────────────┐         ┌────────────────────┐
   │ Azure OpenAI │    │  Log Analytics /  │         │  ARM API / Kudu    │
   │ (GPT-5-mini) │    │  KQL Queries      │         │  (app metadata,    │
   │ Private EP   │    └───────────────────┘         │   container logs)  │
   └──────────────┘                                  └────────────────────┘

The Express server does double duty: it serves the React chat UI as static files and exposes the MCP endpoint for remote IDE connections. The agent loop is simple — when a user sends a message, the server calls Azure OpenAI, which may request tool calls; the server executes those tools, and the loop continues until the AI has a final answer.
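A trimmed-down sketch of that loop is below. It assumes an AzureOpenAI client authenticated with managed identity (sketched later in this post) and an mcpTools array exposing each MCP tool's name, JSON Schema parameters, and an executor — those names and shapes are assumptions for illustration, not the project's actual code:

```typescript
import { AzureOpenAI } from "openai";
import type { ChatCompletionMessageParam, ChatCompletionTool } from "openai/resources/chat/completions";

// Hypothetical in-process view of the registered MCP tools.
interface AgentTool {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>;
  execute: (args: unknown) => Promise<unknown>;
}
declare const mcpTools: AgentTool[];
declare const openai: AzureOpenAI;

async function runAgentLoop(messages: ChatCompletionMessageParam[]): Promise<string | null> {
  const tools: ChatCompletionTool[] = mcpTools.map((t) => ({
    type: "function",
    function: { name: t.name, description: t.description, parameters: t.inputSchema },
  }));

  while (true) {
    const completion = await openai.chat.completions.create({
      model: "gpt-5-mini", // the Azure OpenAI deployment name
      messages,
      tools,
    });

    const reply = completion.choices[0].message;
    messages.push(reply as ChatCompletionMessageParam);

    // No tool calls requested: the agent has its final answer.
    if (!reply.tool_calls?.length) return reply.content;

    // Execute each requested tool and feed the result back into the conversation.
    for (const call of reply.tool_calls) {
      const tool = mcpTools.find((t) => t.name === call.function.name);
      const result = tool ? await tool.execute(JSON.parse(call.function.arguments)) : { error: "unknown tool" };
      messages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(result) });
    }
  }
}
```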

Demo

The following screenshots show how this app can be used. The first shows what happens when you ask about a healthy, functioning app. The agent made five tool calls and gave a thorough summary of the app's current status and recent deployments, along with recommendations for improving the app's observability. I expanded the tools section so you can see exactly what the agent was doing behind the scenes and get a sense of how it was reasoning. From here, you can keep asking questions about your app if there's other information you want to pull from your logs.

I then injected a fault into the app by initiating a deployment that pointed to a config file that didn't exist. The goal was to prove that the agent could correlate an application issue with a specific deployment event, something that today involves manual effort and deep digging through logs and source code. An agent that can do this in a matter of seconds saves time and effort that can go toward more important work, and it helps ensure you find the issue the first time.

A few minutes after initiating the bad deployment, I saw that my app was no longer responding. Rather than going to the logs and investigating myself, I asked the agent, "I'm getting an application error now, what happened?" I obviously knew what had happened and what the source of the error was, but let's see if the agent could pick that up.

The agent was able to see that something was wrong and point me in the right direction to address the issue. It ran a series of tool calls following the investigation steps called out in the skill file and successfully identified the source of the error.

Lastly, I wanted to confirm that the error was associated with the recent deployment, something our agent should be able to do because we built in the tools it needs to correlate these kinds of events with errors. I asked it directly, and the response was exactly what I expected to see.

Infrastructure (one command)

Everything is defined in Bicep and deployed with the Azure Developer CLI:

 
azd up
 

This provisions:

  • App Service Plan (P0v3) with App Service (Node.js 20 LTS, VNet-integrated)
  • Azure OpenAI (GPT-5-mini, Global Standard) with a private endpoint and private DNS zone
  • VNet (10.0.0.0/16) with dedicated subnets for the app and private endpoints
  • Managed Identity with RBAC roles: Reader, Website Contributor, Log Analytics Reader, Cognitive Services OpenAI User

No API keys anywhere. The App Service authenticates to Azure OpenAI over a private network using its managed identity.
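Here's roughly what that keyless authentication looks like from the Node.js side, using the openai and @azure/identity packages. The endpoint and API version values below are placeholders:

```typescript
import { DefaultAzureCredential, getBearerTokenProvider } from "@azure/identity";
import { AzureOpenAI } from "openai";

// On App Service this resolves to the app's managed identity;
// when running locally it falls back to your az login credentials.
const credential = new DefaultAzureCredential();
const azureADTokenProvider = getBearerTokenProvider(
  credential,
  "https://cognitiveservices.azure.com/.default"
);

const openai = new AzureOpenAI({
  endpoint: process.env.AZURE_OPENAI_ENDPOINT, // e.g. https://your-resource.openai.azure.com
  apiVersion: "2024-10-21",                    // placeholder API version
  azureADTokenProvider,                        // token-based auth, no API key
});
```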

The Chat UI

The web interface is designed to get out of the way and let you focus on investigating:

  • Resource group and app dropdowns — Browse your subscription, pick the app you want to investigate
  • Tool step visibility — A collapsible panel shows exactly which tools the agent called, what arguments it used, and how long each took
  • Session management — Start fresh conversations, with confirmation dialogs when switching context mid-investigation
  • Markdown responses — The agent's answers are rendered with full formatting, code blocks, and tables

When you first open the app, it auto-discovers your subscription and populates the resource group dropdown. Select an app, hit "Tell me about this app," and the agent starts investigating.
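For a sense of what the discovery endpoints behind those dropdowns could look like, here's a hedged sketch using the Azure Resource Manager SDKs. The route paths come from the architecture diagram above; the handler details and environment variable name are assumptions:

```typescript
import express from "express";
import { DefaultAzureCredential } from "@azure/identity";
import { ResourceManagementClient } from "@azure/arm-resources";
import { WebSiteManagementClient } from "@azure/arm-appservice";

const app = express();
const credential = new DefaultAzureCredential();
const subscriptionId = process.env.AZURE_SUBSCRIPTION_ID!;

// Populate the resource group dropdown.
app.get("/api/resource-groups", async (_req, res) => {
  const arm = new ResourceManagementClient(credential, subscriptionId);
  const groups: (string | undefined)[] = [];
  for await (const rg of arm.resourceGroups.list()) groups.push(rg.name);
  res.json(groups);
});

// Populate the app dropdown for the selected resource group.
app.get("/api/apps", async (req, res) => {
  const web = new WebSiteManagementClient(credential, subscriptionId);
  const apps: (string | undefined)[] = [];
  for await (const site of web.webApps.listByResourceGroup(String(req.query.resourceGroup))) {
    apps.push(site.name);
  }
  res.json(apps);
});
```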

Security

Since this app has subscription-wide read access to your App Services and Log Analytics workspaces, you should definitely enable authentication. After deploying, configure Easy Auth in the Azure Portal:

  1. Go to your App Service → Authentication
  2. Click Add identity provider → select Microsoft Entra ID
  3. Set Unauthenticated requests to "HTTP 401 Unauthorized"

This ensures only authorized members of your organization can access the tool.

The connection to Azure OpenAI is secured via a private endpoint — traffic never traverses the public internet. The app authenticates using its managed identity with the Cognitive Services OpenAI User role.

What Stayed the Same

This is the part worth emphasizing: the core tools didn't change at all. Whether you're using the local MCP server or the hosted web agent, you get the same ~15 tools.

The Agent Skill (SKILL.md) from Part 1 also carries over. The hosted agent has the same domain expertise for App Service debugging baked into its system prompt — the same debugging workflows, common error patterns, KQL templates, and SKU reference that make the local version effective.
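In practice, carrying the skill over can be as simple as loading SKILL.md and using it as the hosted agent's system prompt. This is a minimal sketch, assuming the file ships alongside the server:

```typescript
import { readFileSync } from "node:fs";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

// The debugging workflows, error patterns, KQL templates, and SKU reference
// from Part 1 become the agent's standing instructions.
const skill = readFileSync("SKILL.md", "utf8");

const messages: ChatCompletionMessageParam[] = [
  { role: "system", content: skill },
  { role: "user", content: "Tell me about this app" },
];
```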

The Bigger Picture: It's Not Just a Web App

Here's what makes this interesting beyond our specific implementation: the pattern is the point.

We took a set of domain-specific tools (App Service observability), wrapped them in a standard protocol (MCP), and showed two ways to use them:

  1. Local MCP server → Your IDE's AI calls the tools
  2. Hosted web agent → A deployed app with its own AI calls the same tools

But those are just two examples. The same tools could power:

  • A Microsoft Teams bot — Your on-call channel gets an observability assistant that anyone can mention
  • A Slack integration — Same idea, different platform
  • A CLI agent — A terminal-based chat for engineers who live in the command line
  • An automated monitor — An agent that periodically checks your apps and files alerts
  • An Azure Portal extension — Observability chat embedded directly in the portal experience
  • A mobile app — Check on your apps from your phone during an incident

The MCP tools are the foundation. The agent and interface are just the delivery mechanism. Build whatever surface makes sense for your team.

This is one of the core ideas behind MCP: write the tools once, use them everywhere. The protocol standardizes how AI assistants discover and call tools, so you're not locked into any single client or agent.
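For example, any of those surfaces could reuse the hosted tools by connecting to the /mcp endpoint as an MCP client. This sketch uses the MCP TypeScript SDK; the URL, workspace ID, and tool name are placeholders:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// A CLI agent, Teams bot backend, or scheduled monitor could all start this way.
const client = new Client({ name: "my-observability-surface", version: "1.0.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://your-app.azurewebsites.net/mcp"))
);

const { tools } = await client.listTools(); // the same observability tools the web agent uses
const result = await client.callTool({
  name: "query_log_analytics", // illustrative tool name
  arguments: { workspaceId: "your-workspace-id", query: "AppServiceHTTPLogs | take 10" },
});
```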

Try It Yourself

Both versions are open-source:

To deploy the hosted version:

git clone https://github.com/seligj95/app-service-observability-agent-hosted.git
cd app-service-observability-agent-hosted
azd up

To run the local version, see the Getting Started section in Part 1.

What's Next?

This is still a proof-of-concept, and we're continuing to explore how AI-powered observability can become a first-class part of the App Service platform. Some things we're thinking about:

  • More tools — Resource health, autoscale history, certificate expiration, network diagnostics
  • Multi-app investigations — Correlate issues across multiple apps in a resource group
  • Proactive monitoring — Agents that watch your apps and alert you before users notice
  • Deeper integration — What if every App Service came with a built-in observability endpoint?

We'd love your feedback. Try it out, open an issue, or submit a PR if you have ideas for additional tools or debugging patterns. And if you build something interesting on top of these MCP tools — a Teams bot, a CLI agent, anything — we'd love to hear about it.
