Plugin Marketplace for Azure SRE Agent: Build Once, Install Anywhere
What's a Plugin? A plugin bundles two things: Skills — Operational knowledge (triage runbooks, policy rules, known issues) the agent reads at runtime to guide its reasoning MCP Connectors — Live integrations to your internal APIs (deployment tracker, cost dashboard, CMDB) the agent can query during an investigation This is the key distinction: a plugin doesn't just tell the agent what your policies are — it gives the agent tools to query your internal systems and apply those policies with real data. A plugin bundles skills and MCP connectors as a single installable unit. The Marketplace Model: Create Once, Install Everywhere The marketplace is a GitHub repository with a marketplace.json manifest. Any team pushes their plugin to the repo. Every SRE Agent in the org can discover it and install it with one click — no need for each team to manually recreate skills and configure connectors. How it works: A specialist team creates a plugin (skills + MCP connector config) and pushes it to the shared GitHub repo Any SRE Agent user browses the marketplace, sees what's available, and clicks Install The plugin's skills and connectors are deployed to that agent instance instantly Contoso runs multiple SRE Agent instances — payments team, platform team, data team. The same marketplace serves all of them. Each team installs exactly the plugins they need. One marketplace, many agents. Teams publish plugins once — every agent in the org can install them. The Scenario: AKS Incident Investigation with Plugins Contoso runs a payment processing service on AKS. Three teams have contributed plugins to the company's internal marketplace: Plugin Team Skills MCP Connector AKS Runbooks K8s Platform Team aks-incident-triage, aks-deployment-analysis Deployment Tracker API Cost & Capacity Cloud FinOps Team cost-analysis, capacity-planning Cost Dashboard API Service Catalog SRE Leadership service-ownership-lookup, dependency-impact-analysis CMDB API All three are installed on the payments team's SRE Agent. Let's see what happens when an incident hits. Building and Publishing the Plugins Each team creates their plugin independently and pushes it to the shared marketplace repo. 1. AKS Runbooks (Kubernetes Platform Team) The K8s Platform Team packages their triage procedures, node pool naming conventions, PDB policies, known issues registry, and deployment gates. Skills: aks-incident-triage — Per-symptom triage procedures (OOMKill, NodePressure, CrashLoop), PDB-first policy checks, Tier-0 escalation rules, and a known issues registry aks-deployment-analysis — Correlates incidents with recent deployments, surfaces resource spec diffs and gate violations, provides a rollback decision tree MCP Connector: contoso-deploy-tracker — Exposes get_deployments: recent deployments by namespace with deployer, image versions, resource diffs, and gate status 2. Cost & Capacity (Cloud FinOps Team) The FinOps team packages their SKU approval matrix, team budget allocations, chargeback model, and scaling governance. Skills: cost-analysis — Team budget tiers, cost dashboard API usage, incident cost impact calculations capacity-planning — "Scale-out before scale-up" rule (CCP-001), SKU approval matrix (B-series = team lead, D-series = director, E/N-series = VP/CTO), auto-scale thresholds MCP Connector: contoso-cost-dashboard — Exposes get_team_spend (budget, burn rate) and get_resources_cost (resource-level cost with utilization) 3. 
Service Catalog (SRE Leadership) SRE Leadership packages service ownership, SLA tiers, escalation paths, and the dependency graph. Skills: service-ownership-lookup — Maps namespaces to owning teams, on-call contacts, SLA tiers (Tier-0 through Tier-3), escalation policies dependency-impact-analysis — Dependency classification (hard/soft/async), blast radius assessment, security implications MCP Connector: contoso-cmdb — Exposes get_service_info (ownership, SLA), get_service_dependencies, and get_blast_radius The Marketplace Manifest All three plugins are described in a single marketplace.json that the SRE Agent discovers: { "name": "Contoso SRE Plugins", "description": "Internal plugin marketplace for Contoso SRE teams", "version": "1.0.0", "plugins": [ { "id": "aks-runbooks", "name": "AKS Runbooks", "description": "Kubernetes Platform Team's operational runbooks and deployment correlation", "author": "K8s Platform Team", "source": "./aks-runbooks", "category": "Operations" }, { "id": "cost-capacity", "name": "Cost & Capacity", "description": "FinOps team's cost governance, SKU approval matrix, and capacity planning", "author": "Cloud FinOps Team", "source": "./cost-capacity", "category": "Cost Management" }, { "id": "service-catalog", "name": "Service Catalog", "description": "Service ownership, SLA tiers, dependency graphs, and escalation paths", "author": "SRE Leadership", "source": "./service-catalog", "category": "Governance" } ] } The internal plugin marketplace on GitHub. Each directory is a plugin contributed by a different team. The marketplace.json manifest tells the SRE Agent what's available. Registering the Marketplace and Installing Plugins Step 1: Add the Marketplace In the SRE Agent, navigate to Builder → Plugins → Browse and click "Add Marketplace". Enter the GitHub repository path (contoso/sre-agent-plugins) and click Add. The agent fetches marketplace.json and displays the marketplace card with all three plugins discovered. Adding the internal marketplace — just point to the GitHub repo. Step 2: Browse the Catalog The Browse tab now shows the Contoso SRE Plugins marketplace. Clicking into it reveals three plugin cards — one from each contributing team — with descriptions, skill counts, and connector details. Three plugins from three teams. Each one brings skills (organizational knowledge) and an MCP connector (internal API access). Step 3: Install All Three Plugins Click into each plugin to review what it installs — skills and MCP connectors — then click "Install Plugin" for each one. After installing all three: 6 skills loaded (2 per plugin) — organizational knowledge documents the agent reads at runtime 3 MCP connectors registered — internal API integrations the agent can call as tools Each plugin clearly shows what it installs — skills and connectors — before you commit. All three plugins installed — each card shows its "Installed" status, the authoring team, and exactly what it brings (2 skills, 1 connector). The green border and badge make installed plugins immediately recognizable. The Agent in Action Now let's ask the question: "Pods are crashing in payments-prod on aks-payments-prod-eastus2 in sre-marketplace-demo-rg. Investigate and give me a full incident report." The agent investigates — combining its native Kubernetes capabilities with the organizational context from all three plugins.
The same strong Kubernetes diagnosis, now enriched with organizational context. Deployment correlation, policy violations, cost governance, blast radius, and escalation paths — all layered on top of the agent's native investigation. Why This Matters: Different Teams, One Agent The K8s Platform Team writes triage procedures and known issues. The FinOps Team writes budget governance and SKU rules. SRE Leadership defines service ownership and escalation paths. Each team packages their domain expertise independently. The SRE Agent combines all of it at runtime — producing a response no single team could have written alone, drawn from three internal systems and three bodies of institutional knowledge. This is how organizational knowledge scales: composable plugins that the agent reasons with in real time, not longer wikis that nobody reads. Learn More Plugin Marketplace overview — How the marketplace works, manifest formats, and MCP config support Tutorial: Install a marketplace plugin — Step-by-step walkthrough of adding a marketplace, browsing plugins, and importing skills Skills in Azure SRE Agent — How skills work, how the agent loads them at runtime, and how they relate to custom agents and knowledge files MCP connectors and tools — Connecting your agent to external systems via the Model Context Protocol Tutorial: Set up an MCP connector — Configuring remote and local MCP servers as agent connectors
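One closing note on publishing: adding your own plugin to the catalog is just another entry in the marketplace.json plugins array shown earlier. A minimal sketch using that schema (the networking plugin, its team, and its path are hypothetical examples):

```json
{
  "id": "network-runbooks",
  "name": "Network Runbooks",
  "description": "Networking team's triage procedures for DNS, load balancer, and egress issues",
  "author": "Network Engineering Team",
  "source": "./network-runbooks",
  "category": "Operations"
}
```

Once the entry merges into the marketplace repo, every agent with the marketplace registered can discover and install the new plugin, with no per-team reconfiguration.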
Get started with the New Relic MCP server in Azure SRE Agent
Overview The New Relic MCP server is a cloud-hosted bridge between your New Relic account and Azure SRE Agent. Once configured, it enables real-time interaction with your observability data—APM traces, distributed traces, logs, metrics, alerts, incidents, dashboards, and entities—through natural language. All actions respect your existing New Relic role and capability assignments. The server uses Streamable HTTP transport with a single Api-Key header for authentication. Azure SRE Agent connects directly to the New Relic-hosted endpoint ( https://mcp.newrelic.com/mcp/ )—no npm packages, local proxies, or container deployments are required. The SRE Agent portal includes a dedicated New Relic connector type that pre-populates the endpoint URL and the Api-Key header for streamlined setup. Key capabilities Area Capabilities NRQL Run NRQL queries against any event type, build queries from natural language, explain existing NRQL APM Search applications, view golden signals (throughput, errors, latency), inspect transactions Distributed tracing Find slow or errored traces, fetch full trace details by trace ID Logs Search and filter logs by attribute, service, host, or time range Infrastructure Query host metrics (CPU, memory, disk, network), inspect Kubernetes clusters and containers Alerts & Incidents List alert policies and conditions, search open and closed incidents, view incident timelines Entities Search entities (services, hosts, databases, browsers, mobile apps) by name, tag, or domain Dashboards Search and retrieve dashboards by name, account, or tag Synthetics List synthetic monitors, view recent check results and failures Errors Inbox Query error groups, occurrences, and stack traces Change tracking Inspect deployment markers and configuration changes correlated with incidents Note: > This is the official New Relic-hosted MCP server (Preview). Tool availability depends on your New Relic plan, the data your account ingests, and your role-based access control assignments. Prerequisites Azure SRE Agent resource deployed in Azure New Relic account with an active subscription (Free, Standard, Pro, or Enterprise) New Relic user with appropriate role and capability assignments User API key created from your New Relic account ( NRAK-… ) (Optional) Your New Relic account ID for NRQL queries that explicitly target a specific account Step 1: Get your New Relic credentials The New Relic MCP server requires a User API key, which authenticates the user and inherits that user's role-based permissions. The key is created in the New Relic UI. Create a User API key Sign in to New Relic One Select your user menu (your initials in the bottom-left corner) Select API keys Direct URL: https://one.newrelic.com/api-keys Select Create a key in the top-right corner Configure the key: Key type: Select User Name: Enter a descriptive name (e.g., sre-agent-mcp ) Select Create a key Copy the key value (starts with NRAK-… )—it is shown only once. If lost, you must create a new key. Tip: > User API keys inherit the role and capabilities of the user who created them. For production use, create the key from a dedicated service user rather than a personal account so the integration continues to work if team members leave the organization. Find your account ID You don't need the account ID for the connector itself (the User API key carries the user's default account). It's still useful for NRQL queries that explicitly scope to a specific account when the user has access to multiple.
From any page in New Relic One, select the account picker in the top navigation Note the numeric Account ID displayed next to your account name Alternatively, navigate to Administration > Account settings Direct URL: https://one.newrelic.com/admin-portal/organizations/organization-detail Note: > If you have access to multiple accounts under one organization, the User API key uses the user's default account. Tools that accept an explicit accountId argument can target other accounts the user has access to. Required role and capabilities The User API key inherits the permissions of the underlying user. Configure the user's role to grant the capabilities your agent needs: Capability Description Required? Logs.read Read log data Recommended APM.read Read APM application data, transactions, and traces Recommended Infrastructure.read Read infrastructure metrics, hosts, and Kubernetes data Recommended Alerts.read Read alert policies, conditions, and incidents Recommended Dashboards.read Read dashboards Optional Synthetics.read Read synthetic monitor results Optional Entities.read Search and inspect entities Required NRQL.execute Run NRQL queries Required Important: > Apply the principle of least privilege. Grant read-only capabilities unless the agent needs to acknowledge incidents, mute conditions, or modify other resources. Avoid using a full-admin user account. Step 2: Add the MCP connector Connect the New Relic MCP server to your SRE Agent using the portal. The portal includes a dedicated New Relic connector type that pre-populates the endpoint URL and the Api-Key header. Using the SRE Agent portal In the SRE Agent portal, open your agent In the left navigation, expand Builder and select Connectors Select + Add connector in the toolbar In the Choose a connector step, scroll to the Additional connectors section under Telemetry and select the New Relic card. The card description reads "Connect to New Relic for application performance monitoring and analytics." Select Next. In the Set up New Relic connector step, configure the fields: Field Value Name newrelic-mcp (any unique name for this connector) URL https://mcp.newrelic.com/mcp/ (pre-populated) Authentication method Custom headers (pre-selected) Key Api-Key (pre-populated) Value Your New Relic User API key ( NRAK-… ) Select Next to advance to Review + test connection. The portal validates the endpoint and credentials. In the Select tools step, choose which New Relic tools to expose to the agent (or leave the defaults selected). Select Add connector to save. Note: > The New Relic connector type pre-populates the endpoint URL ( https://mcp.newrelic.com/mcp/ ), sets the authentication method to Custom headers, and adds the Api-Key header key automatically. You only need to paste in the User API key value ( NRAK-… ). Tip: > The single https://mcp.newrelic.com/mcp/ endpoint serves accounts in all New Relic data centers. The User API key includes the account context required to route the request to the correct region. Once the connector shows Connected status on the Connectors list, the New Relic MCP tools are automatically available to your agent. Step 3: Create a New Relic subagent Create a specialized subagent to give the AI focused New Relic observability expertise and better prompt responses. 
Navigate to Builder > Subagents Select Add subagent Paste the following YAML configuration: api_version: azuresre.ai/v1 kind: AgentConfiguration metadata: owner: your-team@contoso.com version: "1.0.0" spec: name: NewRelicObservabilityExpert display_name: New Relic Observability Expert system_prompt: | You are a New Relic observability expert with access to NRQL, APM data, distributed traces, logs, infrastructure metrics, alerts, incidents, entities, dashboards, and synthetic monitors via the New Relic MCP server. ## Capabilities ### NRQL (New Relic Query Language) - Run NRQL queries with `run_nrql_query` against any event type (Transaction, TransactionError, Log, Metric, SystemSample, etc.) - Generate NRQL from natural language with `create_nrql_query` - Explain existing NRQL with `explain_nrql_query` - Always include a `SINCE` clause to bound the time range ### APM (Application Performance Monitoring) - Search APM applications with `search_apm_applications` - Get golden signals (throughput, errors, latency, Apdex) with `get_apm_golden_signals` - Inspect slow or errored transactions with NRQL on the Transaction event ### Distributed Tracing - Search traces with `search_traces` filtered by service, error, or duration - Fetch a full trace with `get_trace` using the trace ID - Identify the slowest span and the service that owns it ### Logs - Search logs with `search_logs` using attribute filters and time ranges - Use NRQL aggregation on the Log event for grouping and counts - Correlate logs with traces using `trace.id` and `span.id` attributes ### Infrastructure - List hosts with `search_hosts` and filter by tag or status - Query host metrics (CPU, memory, disk, network) with NRQL on SystemSample, ProcessSample, NetworkSample, StorageSample - For Kubernetes, use the K8sClusterSample, K8sPodSample, K8sNodeSample event types ### Alerts & Incidents - List alert policies with `search_alert_policies` - List alert conditions with `search_alert_conditions` - Search open and closed incidents with `search_incidents` - Get full incident details and timeline with `get_incident` ### Entities - Search entities with `search_entities` across services, hosts, databases, browsers, mobile apps, and synthetic monitors - Get entity details with `get_entity` including tags, golden metrics, and related entities ### Dashboards - Search dashboards with `search_dashboards` - Retrieve dashboard widgets and embedded NRQL with `get_dashboard` ### Synthetics - List synthetic monitors with `search_synthetic_monitors` - Inspect recent check results with `get_synthetic_check_results` ### Errors Inbox - Query error groups with `search_error_groups` - Get occurrences and stack traces with `get_error_group` ## Best Practices When investigating incidents: - Start with `search_incidents` or `get_incident` for context - Check related alert conditions with `search_alert_conditions` - Pull APM golden signals with `get_apm_golden_signals` for the affected service - Use `search_traces` to find slow or errored requests - Correlate with `search_logs` filtered by service and time range - Check `search_hosts` and SystemSample for infrastructure-level problems When writing NRQL: - Always include `SINCE` (e.g., `SINCE 30 minutes ago`) - Use `FACET` to group results by service, host, or status code - Use `TIMESERIES` to plot trends over time - Use `LIMIT` to bound result size - Filter by `appName`, `service.name`, or `host` to scope queries When handling errors: - If access is denied, explain which capability the user is missing - If no 
data is returned, suggest broadening the time range - For multi-account organizations, mention the `accountId` parameter - For NRQL syntax errors, use `explain_nrql_query` to validate mcp_connectors: - newrelic-mcp handoffs: [] Select Save Click to edit the created sub-agent and scroll down to Tools to add the New Relic tools Step 4: Add a New Relic skill (optional) Skills provide contextual knowledge and best practices that help agents use tools more effectively. Create a New Relic skill to give your agent expertise in NRQL, APM analysis, and incident investigation workflows. Navigate to Builder > Skills Select Add skill Paste the following skill configuration: api_version: azuresre.ai/v1 kind: SkillConfiguration metadata: owner: your-team@contoso.com version: "1.0.0" spec: name: newrelic_observability display_name: New Relic Observability description: | Expertise in New Relic's observability platform including NRQL, APM, distributed tracing, logs, infrastructure metrics, alerts, incidents, entities, dashboards, and synthetics. Use for running NRQL queries, investigating slow services, analyzing traces, searching logs, inspecting incidents, and navigating New Relic data via the New Relic MCP server. instructions: | ## Overview New Relic is a cloud-scale observability platform for APM, distributed tracing, logs, metrics, infrastructure, browser, mobile, and synthetics. NRQL (New Relic Query Language) is the unified query language across all event types. **Authentication:** A single `Api-Key` header containing a User API key (`NRAK-…`). The key inherits the role and capabilities of the user who created it, and the user's default account scopes the queries. **Endpoint:** `https://mcp.newrelic.com/mcp/` — a single endpoint serves accounts in all New Relic data centers (US, EU, FedRAMP). ## Writing NRQL queries NRQL syntax is similar to SQL but optimized for time-series event data. **Common patterns:** ```sql -- Errors per service in the last hour SELECT count(*) FROM TransactionError SINCE 1 hour ago FACET appName -- p95 latency by transaction SELECT percentile(duration, 95) FROM Transaction WHERE appName = 'checkout-api' SINCE 30 minutes ago FACET name TIMESERIES -- Top error messages from logs SELECT count(*) FROM Log WHERE level = 'ERROR' AND service.name = 'payment-api' SINCE 1 hour ago FACET message LIMIT 10 -- Host CPU utilization SELECT average(cpuPercent) FROM SystemSample WHERE hostname LIKE 'web-prod-%' SINCE 30 minutes ago TIMESERIES -- Apdex by application SELECT apdex(duration, t: 0.5) FROM Transaction SINCE 1 hour ago FACET appName ``` Always include `SINCE`. Use `FACET` for grouping, `TIMESERIES` for trends, `LIMIT` to bound results, and `WHERE` to filter. ## APM investigation workflow 1. `search_apm_applications` — find the affected service 2. `get_apm_golden_signals` — pull throughput, error rate, response time, Apdex 3. NRQL on Transaction — drill into slow or failing transactions 4. NRQL on TransactionError — identify error classes and frequencies 5. `search_traces` — find specific slow or errored traces 6. `get_trace` — inspect span timing and errors 7. `search_logs` — correlate logs by `trace.id` or `service.name` ## Distributed trace investigation Use `search_traces` for span-level queries and `get_trace` for full traces. **Workflow:** 1. Search for slow or errored traces with `search_traces` 2. Get the full trace with `get_trace` using the trace ID 3. Identify the bottleneck span (longest duration or error) 4. Note the owning service and operation name 5. 
Correlate with `search_logs` using the `trace.id` attribute 6. Check `get_apm_golden_signals` for the bottleneck service ## Log analysis Use `search_logs` for filtered retrieval and NRQL on the Log event for aggregation. **Common log filters:** ``` # Errors from a specific service service.name:payment-api level:ERROR # Logs containing a trace ID trace.id:abc123def456 # Logs from a Kubernetes pod k8s.namespace.name:production k8s.deployment.name:checkout # HTTP 5xx responses http.statusCode:>=500 ``` **Aggregations on the Log event:** ```sql SELECT count(*) FROM Log WHERE level = 'ERROR' SINCE 1 hour ago FACET service.name LIMIT 20 ``` ## Infrastructure investigation New Relic infrastructure data lives in event types: | Event type | Purpose | |------------|---------| | `SystemSample` | Host CPU, memory, disk, load average | | `ProcessSample` | Per-process CPU and memory usage | | `NetworkSample` | Network interface throughput, errors, drops | | `StorageSample` | Disk capacity and I/O | | `K8sClusterSample` | Kubernetes cluster-level metrics | | `K8sNodeSample` | Kubernetes node metrics | | `K8sPodSample` | Pod-level CPU, memory, restart counts | | `K8sContainerSample` | Container-level metrics | ## Incident investigation workflow For structured incident investigation: 1. `search_incidents` — find recent or active incidents 2. `get_incident` — get full incident details and timeline 3. `search_alert_conditions` — check which conditions triggered 4. `search_logs` — search for errors around the incident time 5. NRQL on Transaction/TransactionError — check service health 6. `search_traces` — inspect request traces for latency or errors 7. `search_hosts` — verify infrastructure health 8. Check change tracking markers near the incident start time ## Working with entities New Relic entities include services (APM, browser, mobile), hosts, databases, queues, synthetic monitors, dashboards, and workloads. 
- Use `search_entities` to find entities by name, domain, or tag - Use `get_entity` to retrieve tags, golden metrics, and related entities - Filter by `domain` (APM, INFRA, BROWSER, MOBILE, SYNTH, EXT) to narrow ## Troubleshooting | Issue | Solution | |-------|----------| | 401 Unauthorized | Verify the User API key is valid and starts with `NRAK-` | | 403 Forbidden | The user's role lacks the required capability—check role assignments | | No data returned | Confirm the account ID is correct and contains data for the queried event type | | Wrong region | Ensure the connector URL matches your data center (US, EU, or FedRAMP) | | NRQL syntax error | Use `explain_nrql_query` to validate before running | | Cross-account query | Pass `accountId` explicitly if the user has access to multiple accounts | mcp_connectors: - newrelic-mcp Select Save Reference the skill in your subagent Update your subagent configuration to include the skill: spec: name: NewRelicObservabilityExpert skills: - newrelic_observability mcp_connectors: - newrelic-mcp Step 5: Test the integration Open a new chat session with your SRE Agent Try these example prompts: NRQL queries Run NRQL: SELECT count(*) FROM TransactionError SINCE 1 hour ago FACET appName Show me the p95 response time for the checkout-api over the last 30 minutes Build me an NRQL query to find the top 10 slowest transactions in the payment-api Explain this NRQL query: SELECT rate(count(*), 1 minute) FROM Transaction FACET appName APM and traces Show me the golden signals for the payment-api service in the last hour Find the slowest distributed traces for the checkout-api in the last 30 minutes Get the full trace details for trace ID abc123def456 What is the error rate trend for the api-gateway over the last 4 hours? Log analysis Search for ERROR logs from the payment-api service in the last hour Count log errors grouped by service over the last 24 hours Find logs containing trace.id abc123def456 Show me HTTP 5xx responses across all services in the last 30 minutes Incident investigation Show me all open incidents in the last 24 hours Get details for incident 12345 including the timeline and triggered condition What alert conditions triggered during the most recent incident? Correlate the latest incident with related logs and traces Infrastructure What is the CPU utilization across all production hosts in the last 30 minutes? Show me memory pressure on web-prod-01 over the last hour Find Kubernetes pods that have restarted in the last 24 hours List hosts tagged with env:production and team:platform Entities and dashboards Search for all APM services tagged with team:checkout Get details for the entity named payment-api including its tags and related services List all dashboards related to "Checkout" or "Payments" What synthetic monitors are currently failing? Available tools The New Relic MCP server exposes the following core tools. Tool availability depends on your New Relic plan and the user's role capabilities. 
Tool Description run_nrql_query Execute an NRQL query and return results create_nrql_query Generate an NRQL query from a natural language prompt explain_nrql_query Get a plain English explanation of an NRQL query search_apm_applications Search APM applications by name, language, or tag get_apm_golden_signals Get throughput, error rate, response time, and Apdex for an APM service search_traces Search distributed traces by service, error, or duration get_trace Fetch a full distributed trace by trace ID search_logs Search logs by attribute filters and time ranges search_hosts Search infrastructure hosts by name, tag, or status search_entities Search entities (services, hosts, dashboards, monitors) across domains get_entity Get entity details including tags, golden metrics, and related entities search_alert_policies List alert policies in the account search_alert_conditions List alert conditions, optionally filtered by policy search_incidents Search open and closed incidents get_incident Get full incident details and timeline search_dashboards Search dashboards by title, account, or tag get_dashboard Retrieve dashboard widgets and embedded NRQL search_synthetic_monitors List synthetic monitors get_synthetic_check_results Get recent check results for a synthetic monitor search_error_groups Query Errors Inbox for error groups get_error_group Get occurrences and stack traces for an error group search_change_markers List deployment markers and configuration changes Related content New Relic User API keys NRQL reference NerdGraph API New Relic role-based access control New Relic regional data centers MCP integration overview Build a custom subagent
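For reference, the portal connector configured in Step 2 maps to a generic Streamable HTTP MCP client configuration, similar to the equivalent-config pattern other MCP clients use. This is a sketch: the exact key names (type, url, headers) vary by client, so treat it as illustrative rather than a documented schema.

```json
{
  "mcpServers": {
    "newrelic": {
      "type": "http",
      "url": "https://mcp.newrelic.com/mcp/",
      "headers": {
        "Api-Key": "NRAK-<your-user-api-key>"
      }
    }
  }
}
```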
How Microsoft 1ES uses agentic AI to take on security and compliance at scale
Microsoft’s Customer Zero blog series gives an insider view of how Microsoft builds and operates Microsoft using our trusted, enterprise-grade AI platform. Learn best practices from our engineering teams with real-world lessons, architectural patterns, and operational strategies for pressure-tested solutions in building, operating, and scaling AI apps and agent fleets across the organization. What we do Within Microsoft’s One Engineering System (1ES) organization, teams build and maintain the internal engineering systems that product groups across the company rely on to ship and secure their services. These shared tools and processes support teams responsible for mission-critical products, from modern cloud-native platforms to long-lived legacy applications. Security, compliance, and reliability work is non-negotiable at this scale. But it has to coexist with developer productivity and velocity across thousands of independently owned repositories. The problem: the CVE and compliance treadmill Here’s the loop we kept living: A security or compliance alert arrives, often via automation like Dependabot or a CVE finding. The version gets bumped, or the config gets nudged. CI is green. The PR merges. Production fails or the finding reopens because the fix required code changes beyond a version bump or a config flip. This repeats across repositories, teams, and organizations. And the hard truth is not all vulnerabilities are mechanical version bumps, and not all compliance findings are config tweaks. Many introduce behavioral or security model changes. Automation handles the easy cases but silently fails on the hard ones. A second pattern compounds it: when a service has 30+ open action items spanning OTel audit, identity, secret rotation, and CodeQL findings, just figuring out which ones are quick versus deep can take longer than the fixes themselves. Multiply this across Microsoft’s repo footprint and the cost becomes months of engineering time spent on work that doesn’t ship new customer value. But this is exactly the kind of challenge AI was made for: high-speed, high-scale evaluation and judgment calls, coached by human expertise. Why this is solvable now In the previous era of software development, an average CVE alert meant hours of developer toil. Three things changed at once. Frontier models like GPT-5.5 and Claude Opus 4.7 can now reason about context, intent, and tradeoffs, not just generate code. Agent runtimes like GitHub Copilot CLI can read repositories, run tools, execute tests, and open pull requests end-to-end. And we’ve started encoding hard-won domain expertise as portable skills, so an agent doesn’t have to re-derive what an expert already knows. None of these is enough alone. Frontier models without runtimes are just chat. Runtimes without skills hallucinate confidently. Skills without judgment automate the wrong thing. Together, bounded by human–AI partnership patterns that make escalation a first-class behavior, they enable a safer, more disciplined way to tackle judgment-heavy engineering work. How we approach it: collaborate, don’t automate The co-creative model Instead of treating AI as a script executor, we treat agents as collaborators operating within explicit guardrails: Agents propose changes based on skills and available context. Humans review, approve, and retain final ownership of every change. Skills over prompts Agents start cold. They don’t have repo-specific context beyond the invoked skill.
A skill captures the exact steps, decisions, and edge cases a human expert would apply to a specific class of problem. Skills are written once as Markdown and loaded only when needed: focused context, improved complexity handling, more predictable behavior. We author skills with agents too. The same operating model we use for remediation (human owns the decision, agent does the work, signals feed back) is how the skills themselves get written and refined. One of those agents, Ember, is now open-sourced on awesome-copilot. A real example: the XStream CVE Some CVEs include changes in aspects like default security models, which require code changes beyond just bumping the dependency version. Take the XStream dependency update. In the previous 1.4.17 version, any class could be deserialized under a default-allow classification. But in the latest update, the classification changed to default-deny, meaning permitted types must be made explicit. Once we find the XStream call sites, we need to fix type permissions after each instantiation and make sure that change propagates from test, to PR, to run. This is the type of judgment-heavy work where naïve automation creates risk and blocks developers from focusing on feature work. How execution works The agent loads the relevant skill for the task at hand. If it encounters ambiguity or risk, it stops and escalates rather than guessing. The agent goes through required steps: compile, test, pull request, as explicitly agreed upon in the guidance we provide. After each run, the agent emits an Agent Signal: a structured self-assessment of what worked, what was hard, and where the skill fell short. These compound across sessions so the system improves continuously.
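The post doesn't publish a schema for Agent Signals, but each one is a structured self-assessment tied to a skill and a run. A hypothetical JSON sketch, in which every field name and value is illustrative:

```json
{
  "skill": "xstream-cve-remediation",
  "repo": "contoso/payments-service",
  "run_outcome": "pr_opened",
  "what_worked": [
    "Located all XStream instantiation call sites",
    "Added explicit type permissions after each instantiation"
  ],
  "friction": [
    "Build required two retries due to a flaky integration test"
  ],
  "skill_gaps": [
    "Skill did not cover XStream instances configured via dependency injection"
  ],
  "escalated": false
}
```

Because the signals are structured rather than free-form, downstream tooling (in this story, Azure SRE Agent on the ops side) can aggregate them across runs and spot skills that are degrading.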
Autonomy is great, but trust is far better. Between the CVE context, the skills, and our working agreement with the agent, we’re creating a dynamic where the agent feels empowered to execute until it reaches a point of uncertainty. This cuts down the risk of hallucinations dramatically and scales repeatable, trustworthy execution. The most important issues get surfaced for humans in the loop, where human judgment actually matters. Closing the loop: dev-side and ops-side Skills and agents handle the dev-side work: CVE remediation, compliance findings, codebase changes that need judgment. On the ops side, Azure SRE Agent handles at-scale data analysis and operational toil. Same philosophy on both sides: agents act within explicit guardrails, humans own the decisions that matter, and signals from every run feed back into the system. Then the two sides connect. Every Agent Signal our dev-side skills emit flows into Azure SRE Agent, which analyzes them at scale, identifies where skills are degrading or falling short, opens PRs against the skills themselves to fix the gaps, and sends us a daily skill-health report. The ops-side agent maintains the dev-side agents: agents improving agents, while humans review and merge every change. The same human-in-the-loop discipline that governs a CVE fix governs a skill fix. Impact Across Microsoft, 1ES supports teams working on hundreds of repos at a variety of ages and sizes. Agents enable velocity while skills enable uniqueness, which is what helps us scale across such a vast enterprise. Impact of the frontier models, GitHub Copilot, agent skills and agent signals for compliance work. Real engineering time saved We’re finding 15-18 hours of manual work compressed into ~9 hours of agent+skill assisted work – a 50-60% reduction overall, with some compliance work moving from 3-4 hrs manually to 30 min with the agent+skill. What devs told us “Considering I didn’t know anything about any of this, including never having seen the IaC in question, I’d say at least a week’s worth, done in less than 10 prompts.” — Patrick, Senior Engineer “Many times with [compliance], the actual changes are minimal, but reading the docs and knowing what applies to your app can be more time consuming… When you have 30+ action items, you need to go hunting for which one is quick versus time-consuming. This [agent+skills] saves a lot of time.” — Greg, Engineering Manager “The [agent+skills] eliminates most early-phase toil — up to ~90% — but 0% of the last-mile effort. The bottleneck shifts entirely to validation and deployment.” — CloudBuild team That last quote is the one we keep coming back to. The agent+skills doesn’t eliminate the work, it changes where the work lives. Discovery, scoping, and first-draft remediation collapse. Validation and deployment become the new ceiling. That’s the right problem to have and it tells us where to invest next. Security and compliance response with agents is evolving from reactive maintenance to a proactive, strategic defense capability. What we’ve learned On quality and trust With agents, silent confidence is more dangerous than visible uncertainty. Testing agents cold exposes gaps early, before risk compounds. Build uncertainty into skills, and lean on Agent Signals to capture what worked, what was hard, and where the skill fell short. When agents report honestly, the next run starts smarter than the last one. Quality is measured, not assumed. We evaluate every PR on an A/B/C scale, and we run agents that evaluate other agents’ output, closing the loop between execution and assessment. On scaling Not all work should be automated. Some work requires human-AI collaboration. Encoding expertise will always be more valuable than scaling generic prompts. Start with a win in one repo, then slowly scale out that skill to other teams and repos. Where teams can start Teams don’t adopt AI through mandates. They adopt it through trust, built on quality results in their code. Start with one team, one skill, and one real win. Identify a CVE or dependency issue that appears repeatedly across repositories. Write the fix as Markdown, as if you’re onboarding a new engineer. That’s your first skill file. Test the skill with a cold agent on a real repo with a real problem. Iterate until the agent knows both how to act and when to stop. Agents can assess their own work and flag gaps in skills. Want to learn more? Watch the demo video of the dependency update scenario Learn more about the co-creative framework Discover how the GitHub Copilot CLI can help you run and orchestrate agents Learn more about Agent Signals Learn more about Agent Skills Read the companion ops-side story: How we build and use Azure SRE Agent with agentic workflows
Announcing AWS with Azure SRE Agent: Cross-Cloud Investigation using the brand new AWS DevOps Agent
Overview Connect Azure SRE Agent to AWS services using the official AWS MCP server. Query AWS documentation, execute any of the 15,000+ AWS APIs, run operational workflows, and kick off incident investigations through AWS DevOps Agent, which is now generally available. The AWS MCP server connects Azure SRE Agent to AWS documentation, APIs, regional availability data, pre-built operational workflows (Agent SOPs), and AWS DevOps Agent for incident investigation. When connected, the proxy exposes 23 MCP tools organized into four categories: documentation and knowledge, API execution, guided workflows, and DevOps Agent operations. How it works The MCP Proxy for AWS runs as a local stdio process that SRE Agent spawns via uvx . The proxy handles AWS authentication using credentials you provide as environment variables. No separate infrastructure or container deployment is needed. In the portal, you use the generic MCP server (User provided connector) option with stdio transport. Key capabilities Area Capabilities Documentation Search all AWS docs, API references, and best practices; retrieve pages as markdown API execution Execute authenticated calls across 15,000+ AWS APIs with syntax validation and error handling Agent SOPs Pre-built multi-step workflows following AWS Well-Architected principles Regional info List all AWS regions, check service and feature availability by region Infrastructure Provision VPCs, databases, compute instances, storage, and networking resources Troubleshooting Analyze CloudWatch logs, CloudTrail events, permission issues, and application failures Cost management Set up billing alerts, analyze resource usage, and review cost data DevOps Agent Start AWS incident investigations, read root cause analyses, get remediation recommendations, and chat with AWS DevOps Agent Note: The AWS MCP Server is free to use. You pay only for the AWS resources consumed by API calls made through the server. All actions respect your existing IAM policies. Prerequisites Azure SRE Agent resource deployed in Azure AWS account with IAM credentials configured uv package manager installed on the SRE Agent host (used to run the MCP proxy via uvx ) IAM permissions: aws-mcp:InvokeMcp , aws-mcp:CallReadOnlyTool , and optionally aws-mcp:CallReadWriteTool Step 1: Create AWS access keys The AWS MCP server authenticates using AWS access keys (an Access Key ID and a Secret Access Key). These keys are tied to an IAM user in your AWS account. You create them in the AWS Management Console. Navigate to IAM in the AWS Console Sign in to the AWS Management Console In the top search bar, type IAM and select IAM from the results (Direct URL: https://console.aws.amazon.com/iam/ ) In the left sidebar, select Users (Direct URL: https://console.aws.amazon.com/iam/home#/users ) Create a dedicated IAM user Create a dedicated user for SRE Agent rather than reusing a personal account. This makes it easy to scope permissions and rotate keys independently. 
Select Create user Enter a descriptive user name (e.g., sre-agent-mcp ) Do not check "Provide user access to the AWS Management Console" (this user only needs programmatic access) Select Next Select Attach policies directly Select Create policy (opens in a new tab) and paste the following JSON in the JSON editor: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "aws-mcp:InvokeMcp", "aws-mcp:CallReadOnlyTool", "aws-mcp:CallReadWriteTool" ], "Resource": "*" } ] } Select Next, give the policy a name (e.g., SREAgentMCPAccess ), and select Create policy Back on the Create user tab, select the refresh button in the policy list, search for SREAgentMCPAccess , and check it Select Next > Create user Generate access keys After the user is created, generate the access keys that SRE Agent will use: From the Users list, select the user you just created (e.g., sre-agent-mcp ) Select the Security credentials tab Scroll down to the Access keys section Select Create access key For the use case, select Third-party service Check the confirmation checkbox and select Next Optionally add a description tag (e.g., Azure SRE Agent ) and select Create access key Copy both values immediately: Value Example format Where you'll use it Access Key ID <your-access-key-id> Connector environment variable AWS_ACCESS_KEY_ID Secret Access Key <your-secret-access-key> Connector environment variable AWS_SECRET_ACCESS_KEY Important: The Secret Access Key is shown only once on this screen. If you close the page without copying it, you must delete the key and create a new one. Select Download .csv file as a backup, then store the file securely and delete it after configuring the connector. Tip: For production use, also add service-specific IAM permissions for the AWS APIs you want SRE Agent to call. The MCP permissions above grant access to the MCP server itself, but individual API calls (e.g., ec2:DescribeInstances , logs:GetQueryResults ) require their own IAM actions. Start broad for testing, then scope down using the principle of least privilege. Required permissions summary Permission Description Required? aws-mcp:InvokeMcp Base access to the AWS MCP server Yes aws-mcp:CallReadOnlyTool Read operations (describe, list, get, search) Yes aws-mcp:CallReadWriteTool Write operations (create, update, delete resources) Optional Step 2: Add the MCP connector Connect the AWS MCP server to your SRE Agent using the portal. The proxy runs as a local stdio process that SRE Agent spawns via uvx . It handles SigV4 signing using the AWS credentials you provide as environment variables. Determine the AWS MCP endpoint for your region The AWS MCP server has regional endpoints. Choose the one matching your AWS resources: AWS Region MCP Endpoint URL us-east-1 (default) https://aws-mcp.us-east-1.api.aws/mcp us-west-2 https://aws-mcp.us-west-2.api.aws/mcp eu-west-1 https://aws-mcp.eu-west-1.api.aws/mcp Note: Without the --metadata AWS_REGION=<region> argument, operations default to us-east-1 . You can always override the region in your query. 
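Before moving on to the portal steps, here is a sketch of what the production-scoped policy suggested in the Step 1 Tip might look like: the MCP permissions plus a narrow set of read-only service actions. The service actions listed are illustrative examples for a diagnostics-focused agent, not a complete or recommended set.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "McpAccess",
      "Effect": "Allow",
      "Action": [
        "aws-mcp:InvokeMcp",
        "aws-mcp:CallReadOnlyTool"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ReadOnlyDiagnosticsExamples",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "cloudwatch:GetMetricData",
        "cloudwatch:DescribeAlarms",
        "logs:DescribeLogGroups",
        "logs:FilterLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```

Omitting aws-mcp:CallReadWriteTool keeps the agent read-only even if a broader service action slips into the second statement.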
Using the Azure portal In Azure portal, navigate to your SRE Agent resource Select Builder > Connectors Select Add connector Select MCP server (User provided connector) and select Next Configure the connector with these values: Field Value Name aws-mcp Connection type stdio Command python3 Arguments -c , __import__('subprocess').check_call(['pip','install','-q','mcp-proxy-for-aws']);__import__('os').execlp('mcp-proxy-for-aws','mcp-proxy-for-aws','https://aws-mcp.us-east-1.api.aws/mcp','--metadata','AWS_REGION=us-west-2') Environment variables AWS_ACCESS_KEY_ID=<your-access-key-id> , AWS_SECRET_ACCESS_KEY=<your-secret-access-key> Select Next to review Select Add connector This is equivalent to the following MCP client configuration used by tools like Claude Desktop or Amazon Kiro CLI: { "mcpServers": { "aws-mcp": { "command": "uvx", "args": [ "mcp-proxy-for-aws@latest", "https://aws-mcp.us-east-1.api.aws/mcp", "--metadata", "AWS_REGION=us-west-2" ] } } } Important: Store the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY securely. In the portal, environment variables for connectors are stored encrypted. For production deployments, consider using a dedicated IAM user with scoped-down permissions (see Step 1). Never commit credentials to source control. Tip: If your SRE Agent host already has AWS credentials configured (e.g., via aws configure or an instance profile), the proxy will pick them up automatically from the environment. In that case, you can omit the explicit AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. Note: After adding the connector, the agent service initializes the MCP connection. This may take up to 30 seconds as uvx downloads the proxy package on first run (~89 dependencies). If the connector does not show Connected status after a minute, see the Troubleshooting section below. Step 3: Add an AWS skill Skills give agents domain knowledge and best practices for specific tool sets. Create an AWS skill so your agent knows how to troubleshoot AWS services, provision infrastructure, and follow operational workflows. Tip: Why skills over subagents? Skills inject domain knowledge into the main agent's context, so it can use AWS expertise without handing off to a separate agent. Conversation context stays intact and there's no handoff latency. Use a subagent when you need full isolation with its own system prompt and tool restrictions. Navigate to Builder > Skills Select Add skill Paste the following skill configuration: api_version: azuresre.ai/v1 kind: SkillConfiguration metadata: owner: your-team@contoso.com version: "1.0.0" spec: name: aws_infrastructure_operations display_name: AWS Infrastructure & Operations description: | AWS infrastructure and operations: EC2, EKS, Lambda, S3, RDS, CloudWatch, CloudTrail, IAM, VPC, and others. Also covers AWS DevOps Agent for incident investigation, root cause analysis, and remediation. Use for querying AWS resources, investigating issues, provisioning infrastructure, searching documentation, running AWS API calls via the AWS MCP server, and coordinating investigations between Azure SRE Agent and AWS DevOps Agent. instructions: | ## Overview The AWS MCP Server is a managed remote MCP server that gives AI assistants authenticated access to AWS services. It combines documentation access, authenticated API execution, and pre-built Agent SOPs in a single interface. **Authentication:** Handled automatically by the MCP Proxy for AWS, running as a local stdio process. 
All actions respect existing IAM policies configured in the connector environment variables. **Regional endpoints:** The MCP server has regional endpoints. The proxy is configured with a default region; you can override by specifying a region in your queries (e.g., "list my EC2 instances in eu-west-1"). ## Searching Documentation Use aws___search_documentation to find information across all AWS docs. ## Executing AWS API Calls Use aws___call_aws to execute authenticated AWS API calls. The tool handles SigV4 signing and provides syntax validation. ## Using Agent SOPs Use aws___retrieve_agent_sop to find and follow pre-built workflows. SOPs provide step-by-step guidance following AWS Well-Architected principles. ## Regional Operations Use aws___list_regions to see all available AWS regions and aws___get_regional_availability to check service support in specific regions. ## AWS DevOps Agent Integration The AWS MCP server includes tools for AWS DevOps Agent: - aws___list_agent_spaces / aws___create_agent_space: Manage AgentSpaces - aws___create_investigation: Start incident investigations (5-8 min async) - aws___get_task: Poll investigation status - aws___list_journal_records: Read root cause analysis - aws___list_recommendations / aws___get_recommendation: Get remediation steps - aws___start_evaluation: Run proactive infrastructure evaluations - aws___create_chat / aws___send_message: Chat with AWS DevOps Agent ## Troubleshooting | Issue | Solution | |-------|----------| | Access denied errors | Verify IAM policy includes aws-mcp:InvokeMcp and aws-mcp:CallReadOnlyTool | | API call fails | Check IAM policy includes the specific service action | | Wrong region results | Specify the region explicitly in your query | | Proxy connection error | Verify uvx is installed and the proxy can reach aws-mcp.region.api.aws | mcp_connectors: - aws-mcp Select Save Note: The mcp_connectors: - aws-mcp at the bottom links this skill to the connector you created in Step 2. The skill's instructions teach the agent how to use the 23 AWS MCP tools effectively. Step 4: Test the integration Open a new chat session with your SRE Agent and try these example prompts to verify the connection is working. Quick verification Start with this simple test to confirm the AWS MCP proxy is connected and authenticating correctly: What AWS regions are available? If the agent returns a list of regions, the connection is working. If you see authentication errors, go back and verify the IAM credentials and permissions from Step 1. Documentation and knowledge Search AWS documentation for EKS best practices for production clusters What AWS regions support Amazon Bedrock? Read the AWS documentation page about S3 bucket policies Infrastructure queries List all my running EC2 instances in us-east-1 Show me the details of my EKS cluster named "production-cluster" What Lambda functions are deployed in my account? CloudWatch and monitoring What CloudWatch alarms are currently in ALARM state? Show me the CPU utilization metrics for my RDS instance over the last 24 hours Search CloudWatch Logs for errors in the /aws/lambda/my-function log group Troubleshooting workflows My EC2 instance i-0abc123 is not reachable. Help me troubleshoot. My Lambda function is timing out. Walk me through the investigation. Find an Agent SOP for troubleshooting EKS pod scheduling failures Cross-cloud scenarios My Azure Function is failing when calling AWS S3. Check if there are any S3 service issues and review the bucket policy for "my-data-bucket". 
Compare the health of my AWS EKS cluster with my Azure AKS cluster. AWS DevOps Agent investigations List all available AWS DevOps Agent spaces in my account Create an AWS DevOps Agent investigation for the high error rate on my Lambda function "order-processor" in us-west-2 Start a chat with AWS DevOps Agent about my EKS cluster performance Cross-agent investigation (Azure SRE Agent + AWS DevOps Agent) My application is failing across both Azure and AWS. Start an AWS DevOps Agent investigation for the AWS side while you check Azure Monitor for errors on the Azure side. Then combine the findings into a unified root cause analysis. What's New: AWS DevOps Agent Integration The AWS MCP server now includes full integration with AWS DevOps Agent, which recently became generally available. This means Azure SRE Agent can start autonomous incident investigations on AWS infrastructure and get back root cause analyses and remediation recommendations — all within the same chat session. Available tools by category AgentSpace management Tool Description aws___list_agent_spaces Discover available AgentSpaces aws___get_agent_space Get AgentSpace details including ARN and configuration aws___create_agent_space Create a new AgentSpace for investigations Investigation lifecycle Tool Description aws___create_investigation Start an incident investigation (async, 5-8 min) aws___get_task Poll investigation task status aws___list_tasks List investigation tasks with filters aws___list_journal_records Read root cause analysis journal aws___list_executions List execution runs for a task aws___list_recommendations Get prioritized mitigation recommendations aws___get_recommendation Get full remediation specification Proactive evaluations Tool Description aws___start_evaluation Start an evaluation to find preventive recommendations aws___list_goals List evaluation goals and criteria Real-time chat Tool Description aws___create_chat Start a real-time chat session with AWS DevOps Agent aws___list_chats List recent chat sessions aws___send_message Send a message and get a streamed response Cross-Agent Investigation Workflow With the AWS MCP server connected, SRE Agent can run parallel investigations across both clouds. Here's how the cross-agent workflow works: Start an AWS investigation: Ask SRE Agent to create an AWS DevOps Agent investigation for the AWS-side symptoms Investigate Azure in parallel: While the AWS investigation runs (5-8 minutes), SRE Agent uses its native tools to check Azure Monitor, Log Analytics, and resource health Read AWS results: When the investigation completes, SRE Agent reads the journal records and recommendations Correlate findings: SRE Agent combines both sets of findings into a single root cause analysis with remediation steps for both clouds Common cross-cloud scenarios: Azure app calling AWS services: Investigate Azure Function errors that correlate with AWS API failures Hybrid deployments: Check AWS EKS clusters alongside Azure AKS clusters during multi-cloud outages Data pipeline issues: Trace data flow across Azure Event Hubs and AWS Kinesis or SQS Agent-to-agent investigation: Start an AWS DevOps Agent investigation for the AWS side while Azure SRE Agent checks Azure resources in parallel Architecture The integration uses a stdio proxy architecture. 
SRE Agent spawns the proxy as a child process, and the proxy forwards requests to the AWS MCP endpoint:

Azure SRE Agent
    |
    | stdio (local process)
    v
mcp-proxy-for-aws (spawned via uvx)
    |
    | Authenticated HTTPS requests
    v
AWS MCP Server (aws-mcp.<region>.api.aws)
    |
    |--- Authenticated AWS API calls --> AWS Services
    |    (EC2, S3, CloudWatch, EKS, Lambda, etc.)
    |
    '--- DevOps Agent API calls ------> AWS DevOps Agent
         |-- AgentSpaces (workspaces)
         |-- Investigations (async root cause analysis)
         |-- Recommendations (remediation specs)
         '-- Chat sessions (real-time interaction)

Troubleshooting Authentication and connectivity issues Error Cause Solution 403 Forbidden IAM user lacks MCP permissions Add aws-mcp:InvokeMcp , aws-mcp:CallReadOnlyTool to the IAM policy 401 Unauthorized Invalid or expired AWS credentials Rotate access keys and update the connector environment variables Proxy fails to start uvx not installed or not on PATH Install uv on the SRE Agent host Connection timeout Proxy cannot reach the AWS MCP endpoint Verify outbound HTTPS (port 443) is allowed to aws-mcp.<region>.api.aws Connector added but tools not available MCP connections are initialized at agent startup Redeploy or restart the agent service from the Azure portal Slow first connection uvx downloads ~89 dependencies on first run Wait up to 30 seconds for the initial connection API and permission issues Error Cause Solution AccessDenied on API call IAM user lacks the service-specific permission Add the required IAM action (e.g., ec2:DescribeInstances ) to the user's policy CallReadWriteTool denied Write permission not granted Add aws-mcp:CallReadWriteTool to the IAM policy Wrong region data Proxy configured for a different region Update the AWS_REGION metadata in the connector arguments, or specify the region in your query API not found Newly released or unsupported API Use aws___suggest_aws_commands to find the correct API name Verify the connection Test that the proxy can authenticate by opening a new chat session and asking: What AWS regions are available? If the agent returns a list of regions, the connection is working. If you see authentication errors, verify the IAM credentials and permissions from Step 1. Re-authorize the integration If you encounter persistent authentication issues: Navigate to the IAM console Select the user created in Step 1 Navigate to Security credentials > Access keys Deactivate or delete the old access key Create a new access key Update the connector environment variables in the SRE Agent portal with the new credentials Related content AWS MCP Server documentation MCP Proxy for AWS on GitHub AWS MCP Server tools reference AWS DevOps Agent documentation AWS DevOps Agent GA announcement AWS IAM documentation
How we build and use Azure SRE Agent with agentic workflows
The Challenge: Ops is critical but takes time from innovation

Microsoft operates always-on, mission-critical production systems at extraordinary scale. Thousands of services, millions of deployments, and constant change are the reality of modern cloud engineering. These are titan systems that power organizations around the globe—including our own—with extremely low risk tolerance for downtime.

While operational work like incident investigation, response and recovery, and remediation is essential, it is also disruptive to innovation. For engineers, operational toil often means being pulled away from feature work to diagnose alerts, sift through logs, correlate metrics across systems, or respond to incidents at any hour. On-call rotations and manual investigations slow teams down and lead to burnout.

What's more, in the era of AI, demand for operational excellence has spiked to new heights. It became clear that traditional human-only processes couldn't meet the scale and complexity of modern system maintenance, especially now that AI has increased code-shipping velocity exponentially. At the same time, we needed to integrate with an AI landscape that continues to evolve at a breakneck pace: new models, new tooling, and new best practices release constantly, fragmenting the ecosystem across different platforms for observability, DevOps, incident management, and security. Beyond simply automating tasks, we needed an adaptable approach that could integrate with existing systems and improve over time.

Microsoft needed a fundamentally different way to perform operations—one that reduced toil, accelerated response, and gave engineers the time to focus on building great products.

The Solution: How we build Azure SRE Agent using agentic workflows

To address these challenges, Microsoft built Azure SRE Agent, an AI-powered operations agent that serves as an always-on SRE partner for engineers. In practice, Azure SRE Agent continuously observes production environments to detect and investigate incidents. It reasons across signals like logs, metrics, code changes, and other deployment records to perform root cause analysis. It supports engineers from triage to resolution, and it is used at a range of autonomy levels, from assistive investigation to automated remediation proposals. Everything occurs within governance guardrails and human approval checks grounded in role-based access controls and clear escalation paths. What's more, Azure SRE Agent learns from past incidents, outcomes, and human feedback to improve over time.

But just as important as what was built is how it was built. Azure SRE Agent was created using the agentic workflow approach—building agents with agents. Rather than treating AI as a bolt-on tool, Microsoft embedded specialized agents across the entire software development lifecycle (SDLC) to collaborate with developers, from planning through operations. The diagram above outlines the agents used at each stage of development. Together they form a full lifecycle:

- Plan & Code: Agents support spec-driven development to unlock faster inner-loop cycles for developers and even product managers. With AI, we can not only draft spec documentation that defines feature requirements for UX and software development agents, but also create prototypes and check code into staging, which lets PM, UX, and engineering rapidly iterate, generate, and improve code even for early-stage merges.
- Verify, Test & Deploy: Agents for code-quality review, security, evaluation, and deployment work together to shift quality and security issues left. They also continuously assess reliability, ensure performance, and enforce consistent release best practices.
- Operate & Optimize: Azure SRE Agent handles ongoing operational work: investigating alerts, assisting with remediation, and even resolving some issues autonomously. It also learns continuously over time, and we give Azure SRE Agent its own specialized instance of Azure SRE Agent to maintain itself and catalyze feedback loops.

While agents surface insights, propose actions, mitigate issues, and suggest long-term code or IaC fixes autonomously, humans remain in the loop for oversight, approval, and decision-making when required. This combination of autonomy and governance proved critical for safe operations at scale.

We also designed Azure SRE Agent to integrate across existing systems. Our team uses custom agents, Model Context Protocol (MCP) and Python tools, telemetry connections, incident management platforms, code repositories, knowledge sources, and business-process and operational tools to add intelligence on top of established workflows rather than replacing them.

Built this way, Azure SRE Agent was not just a new tool but a new operational system. And at Microsoft's scale, transformative systems lead to transformative outcomes.

The Impact: Reducing toil at enterprise scale

The impact of Azure SRE Agent is felt most clearly in day-to-day operations. By automating investigations and assisting with remediation, the agent reduces the burden on on-call engineers and accelerates time to resolution. Internally at Microsoft in the last nine months, we've seen:

- 35,000+ incidents handled autonomously by Azure SRE Agent.
- 50,000+ developer hours saved by reducing manual investigation and response work.
- Reduced on-call burden and faster time-to-mitigation during incidents.

To share a couple of specific cases, the Azure Container Apps and Azure App Service product group teams have had tremendous success with Azure SRE Agent. Engineers on Azure Container Apps responded overwhelmingly positively (89%) to the root cause analysis (RCA) results from Azure SRE Agent, which covered over 90% of incidents. Meanwhile, Azure App Service has brought its time-to-mitigation for live-site incidents (LSIs) down to 3 minutes, a drastic improvement from the 40.5-hour average with human-only activity.

And this impact is felt in the developer experience. When we asked developers how the agent has changed ops work, one of our engineers had this to say:

"[It's] been a massive help in dealing with quota requests which were being done manually at first. I can also say with high confidence that there have been quite a few CRIs that the agent was spot on / gave the right RCA / provided useful clues that helped navigate my initial investigation in the right direction RATHER than me having to spend time exploring all different possibilities before arriving at the correct one. Since the Agent/AI has already explored all different combinations and narrowed it down to the right one, I can pick the investigation up from there and save me countless hours of logs checking." - Software Engineer II, Microsoft Engineering

Beyond the impact of the agent itself, the agentic workflow process has also completely redefined how we build.
Key learnings: Agentic workflow process and impact

It's easy to think of agents as just another form of advanced automation, but Azure SRE Agent is also a collaborative tool. Engineers can prompt the agent during their investigations to surface relevant context (logs, metrics, and related code changes) and propose actions far faster than traditional troubleshooting allows. They can also extend it for data analysis and dashboarding. Engineers can now focus on the agent's findings, approving actions or intervening when necessary. The result is a human-AI partnership that scales operations expertise without sacrificing control. While the process took time and experimentation to refine, the payoff has been extraordinary; our team is building high-quality features faster than ever since we introduced specialized agents for each stage of the SDLC.

While these results were achieved inside Microsoft, the underlying patterns are broadly applicable. First, building agents with agents is essential to scaling: manual development quickly became a bottleneck, and agents dramatically accelerated inner-loop iteration through code generation, review, debugging, security fixes, and more. In practice, we found that a generic agent—guided by rich context and powered by memory and learning—can continuously adapt, becoming faster and more effective over time as it builds experience. This allows the agent to apply prior knowledge, avoid relearning, and reduce the effort required to resolve similar problems repeatedly. In parallel, specialized agents bring consistency and repeatability to well-defined categories of incidents, encoding proven patterns, workflows, and safeguards. Together, these approaches enable systems that both adapt to new situations and respond reliably at scale.

Microsoft also learned to integrate deeply with existing systems, embedding agents into established telemetry, workflows, and platforms rather than attempting to replace them. Throughout this process, maintaining tight human-in-the-loop governance proved critical: autonomy had to be balanced with clear approval boundaries, role-based access, and safety checks to build trust. Finally, teams learned to invest in continuous feedback and evaluation, using ongoing measurement to improve agents over time and to understand where automation added value versus where human judgment should remain central.

Want to learn more?

Azure SRE Agent is one example of how agentic workflows can transform both product development and operations at scale. Teams at Microsoft are on a mission to lead the industry by example, not just share results. We invite you to take the practical learnings from this blog and apply the same principles in your own environments.

- Discover more about Azure SRE Agent
- Learn about agents in DevOps tools and processes
- Read best practices on agent management with Azure

Announcing general availability for the Azure SRE Agent
Today, we're excited to announce the General Availability (GA) of Azure SRE Agent — your AI-powered operations teammate that helps organizations improve uptime, reduce incident impact, and cut operational toil by accelerating diagnosis and automating response workflows.

Autonomous AKS Incident Response with Azure SRE Agent: From Alert to Verified Recovery in Minutes
When a Sev1 alert fires on an AKS cluster, detection is rarely the hard part. The hard part is what comes next: proving what broke, why it broke, and fixing it without widening the blast radius, all under time pressure, often at 2 a.m.

Azure SRE Agent is designed to close that gap. It connects Azure-native observability, AKS diagnostics, and engineering workflows into a single incident-response loop that can investigate, remediate, verify, and follow up, without waiting for a human to page through dashboards and run ad-hoc kubectl commands.

This post walks through that loop in two real AKS failure scenarios. In both cases, the agent received an incident, investigated Azure Monitor and AKS signals, applied targeted remediation, verified recovery, and created follow-up in GitHub, all while keeping the team informed in Microsoft Teams.

Core concepts

Azure SRE Agent is a governed incident-response system, not a conversational assistant with infrastructure access. Five concepts matter most in an AKS incident workflow:

1. Incident platform. Where incidents originate. In this demo, that is Azure Monitor.
2. Built-in Azure capabilities. The agent uses Azure Monitor, Log Analytics, Azure Resource Graph, Azure CLI/ARM, and AKS diagnostics without requiring external connectors.
3. Connectors. Extend the workflow to systems such as GitHub, Teams, Kusto, and MCP servers.
4. Permission levels. Reader for investigation and read-oriented access; privileged for operational changes when allowed.
5. Run modes. Review for approval-gated execution and Autonomous for direct execution.

The most important production controls are permission level and run mode, not prompt quality. Custom instructions can shape workflow behavior, but they do not replace RBAC, telemetry quality, or tool availability. The safest production rollout path:

- Start: Reader + Review
- Then: Privileged + Review
- Finally: Privileged + Autonomous. Only for narrow, trusted incident paths.

Demo environment

The full scripts and manifests are available if you want to reproduce this: github.com/hailugebru/azure-sre-agents-aks. The README includes setup and configuration details.

The environment uses an AKS cluster with node auto-provisioning (NAP), Azure CNI Overlay powered by Cilium, managed Prometheus metrics, the AKS Store sample microservices application, and Azure SRE Agent configured for incident-triggered investigation and remediation. This setup is intentionally realistic but minimal: it provides enough surface area to exercise real AKS failure modes without distracting from the incident workflow itself.

```
Azure Monitor → Action Group → Azure SRE Agent  →  AKS Cluster
   (Alert)       (Webhook)    (Investigate/Fix)    (Recover)
                                      ↓
        Teams notification + GitHub issue → GitHub Agent → PR for review
```

How the agent was configured

Configuration came down to four things: scope, permissions, incident intake, and response mode. I scoped the agent to the demo resource group and used its user-assigned managed identity (UAMI) for Azure access. That scope defined what the agent could investigate, while RBAC determined what actions it could take. I used broader AKS permissions than I would recommend as a default production baseline so the agent could complete remediation end to end in the lab. That is an important distinction: permissions control what the agent can access, while run mode controls whether it asks for approval or acts directly.
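If you start from the Reader-first baseline instead, granting the read roles to the agent's UAMI is a couple of CLI calls. A minimal sketch (the principal ID and scope are placeholders, and the exact role set depends on which signals you want the agent to read):

```bash
# Reader-first baseline for the agent's user-assigned managed identity.
# Placeholder values - substitute your own principal ID and resource group.
AGENT_PRINCIPAL_ID="<agent-uami-principal-id>"
SCOPE="/subscriptions/<subscription-id>/resourceGroups/<demo-resource-group>"

for ROLE in "Reader" "Monitoring Reader" "Log Analytics Reader"; do
  az role assignment create \
    --assignee "$AGENT_PRINCIPAL_ID" \
    --role "$ROLE" \
    --scope "$SCOPE"
done
```

Moving to privileged access later is then an explicit, auditable step (for example, adding an AKS RBAC writer role) rather than part of the initial setup.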
For this scenario, Azure Monitor served as the incident platform, and I set the response plan to Autonomous for a narrow, trusted path so the workflow could run without manual approval gates. I also added Teams and GitHub integrations so the workflow could extend beyond Azure: Teams provided milestone updates during the incident, and GitHub provided durable follow-up after remediation. For the complete setup, see the README.

A note on context: the more context you can provide the agent about your environment, resources, runbooks, and conventions, the better it performs. Scope boundaries, known workloads, common failure patterns, and links to relevant documentation all sharpen its investigations and reduce the time it spends exploring. Treat custom instructions and connector content as first-class inputs, not afterthoughts.

Two incidents, two response modes

These incidents occurred on the same cluster in one session and illustrate two realistic operating modes:

1. Alert-triggered automation. The agent acts when Azure Monitor fires.
2. Ad hoc chat investigation. An engineer sees a symptom first and asks the agent to investigate.

Both matter in real environments. The first is your scale path. The second is your operator-assist path.

Incident 1. CPU starvation (alert driven, ~8 min MTTR)

The makeline-service deployment manifest contained a CPU and memory configuration that was not viable for startup:

```yaml
resources:
  requests:
    cpu: 1m
    memory: 6Mi
  limits:
    cpu: 5m
    memory: 20Mi
```

Within five minutes, Azure Monitor fired the pod-not-healthy Sev1 alert. The agent picked it up immediately. Here is the key diagnostic conclusion the agent reached from the pod state, probe behavior, and exit code:

"Exit code 1 (not 137) rules out OOMKill. The pod failed at startup, not at runtime memory pressure. CPU limit of 5m is insufficient for the process to bind its port before the startup probe times out. This is a configuration error, not a resource exhaustion scenario."

That is the kind of distinction that often takes an on-call engineer several minutes to prove under pressure: startup failure from CPU starvation vs. runtime termination from memory pressure.

The agent then:

- Identified three additional CPU-throttled pods at 112 to 200% of their configured limits using kubectl top.
- Patched four workloads: makeline-service, virtual-customer, virtual-worker, and mongodb.
- Verified that all affected pods returned to a healthy running state with 0 restarts cluster-wide.

Azure SRE Agent's Incident History blade confirming full cluster recovery: 4 patches applied, 0 unhealthy pods — no human intervention required.

Outcome: full cluster recovery in ~8 minutes, 0 human interventions.

Incident 2. OOMKilled (chat driven, ~4 min MTTR)

For the second case, I deployed a deliberately undersized version of order-service:

```
kubectl apply -f .\manifests\aks-store\order-service-changed.yaml -n pets
```

I started this case from chat before the pod-phase alert fired to demonstrate the interactive troubleshooting flow. That was a demo choice, not an alerting gap. CrashLoopBackOff is a container waiting reason, not a pod phase, so production coverage should come from Prometheus-based crash-loop signals rather than pod phase alone.
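Before wiring up that alert, it helps to confirm the symptom by hand. A quick sketch (the label selector is an assumption; the AKS Store sample's labels may differ in your deployment):

```bash
# Which pods are restarting, and what waiting reason do their containers report?
kubectl get pods -n pets
kubectl get events -n pets --field-selector reason=BackOff --sort-by=.lastTimestamp

# Exit code and last state for the suspect workload
# ("app=order-service" is an assumed label; check your manifests)
kubectl describe pod -n pets -l app=order-service | grep -A 5 "Last State"
```

For continuous coverage, the same check belongs in a Prometheus alert rule.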
Here is the PromQL query I use in Azure Monitor to catch this class of failure:

```promql
sum by (namespace, pod) (
  (
    max_over_time(
      kube_pod_container_status_waiting_reason{
        namespace="pets",
        reason="CrashLoopBackOff"
      }[5m]
    ) == 1
  )
  and on (namespace, pod, container)
  (
    increase(
      kube_pod_container_status_restarts_total{
        namespace="pets"
      }[15m]
    ) > 0
  )
) > 0
```

This query fires when a container has been in CrashLoopBackOff within the last 5 minutes and its restart count has increased in the last 15 minutes. In production, replace the hardcoded namespace with a regex matcher or remove it entirely to cover all namespaces.

The chat prompt:

"The order-service pod in the pets namespace is not healthy. Please investigate, identify the root cause, and fix it."

The agent's reasoning:

"Container logs are empty. The process was killed before it could write its first log line. Exit code 137 confirms OOMKill. No NODE_OPTIONS in the ConfigMap rules out a V8 heap misconfiguration. The 20Mi limit is 12.8x below the pod's observed 50Mi runtime baseline. This limit was never viable for this workload."

The agent increased the memory limit (20Mi to 128Mi) and request (10Mi to 50Mi), then verified the new pod stabilized at 74Mi/128Mi (58% utilization) with 0 restarts.

Outcome: service recovered in ~4 minutes without any manual cluster interaction.

Side-by-side comparison

| Dimension | Incident 1: CPU starvation | Incident 2: OOMKilled |
| --- | --- | --- |
| Trigger | Azure Monitor alert (automated) | Engineer chat prompt (ad hoc) |
| Failure mode | CPU too low for startup probe to pass | Memory limit too low for process to start |
| Key signal | Exit code 1, probe timeout | Exit code 137, empty container logs |
| Blast radius | 4 workloads affected cluster-wide | 1 workload in target namespace |
| Remediation | CPU request/limit patches across 4 deployments | Memory request/limit patch on 1 deployment |
| MTTR | ~8 min | ~4 min |
| Human interventions | 0 | 0 |

Why this matters

Most AKS environments already emit rich telemetry through Azure Monitor and managed Prometheus. What is still manual is the response: engineers paging through dashboards, running ad-hoc kubectl commands, and applying hotfixes under time pressure. Azure SRE Agent changes that by turning repeatable investigation and remediation paths into an automated workflow.

The value isn't just that the agent patched a CPU limit. It's that the investigation, remediation, and verification loop is the same regardless of failure mode, and it runs while your team sleeps. In this lab, the impact was measurable:

| Metric | This demo with Azure SRE Agent |
| --- | --- |
| Alert to recovery | ~4 to 8 min |
| Human interventions | 0 |
| Scope of investigation | Cluster-wide, automated |
| Correlate evidence and diagnose | ~2 min |
| Apply fix and verify | ~4 min |
| Post-incident follow-up | GitHub issue + draft PR |

These results came from a controlled run on April 10, 2026. Real-world outcomes depend on alert quality, cluster size, and how much automation you enable. For reference, industry reports from PagerDuty and Datadog typically place manual Sev1 MTTR in the 30 to 120 minute range for Kubernetes environments.

Teams + GitHub follow-up

Runtime remediation is only half the story. If the workflow ends when the pod becomes healthy again, the same issue returns on the next deployment. That is why the post-incident path matters.

After Incident 1 resolved, Azure SRE Agent used the GitHub connector to file an issue with the incident summary, root cause, and runtime changes. In the demo, I assigned that issue to the GitHub Copilot agent, which opened a draft pull request to align the source manifests with the hotfix.
The agent can also be configured to submit the PR directly in the same workflow, not just open the issue, so the fix is in your review queue by the time anyone sees the notification. Human review still remains the final control point before merge. Setup details for the GitHub connector are in the demo repo README, and the official reference is in the Azure SRE Agent docs.

Azure SRE Agent fixes the live issue, and the GitHub follow-up prepares the durable source change so future deployments do not reintroduce the same configuration problem. That is the operations-to-engineering handoff: Azure SRE Agent fixed the live cluster; the GitHub Copilot agent prepares the durable source change so the same misconfiguration can't ship again.

In parallel, the Teams connector posted milestone updates during the incident:

- Investigation started.
- Root cause and remediation identified.
- Incident resolved.

Teams handled real-time situational awareness. GitHub handled durable engineering follow-up. Together, they closed the gap between operations and software delivery.

Key takeaways

Three things to carry forward:

1. Treat Azure SRE Agent as a governed incident-response system, not a chatbot with infrastructure access. The most important controls are permission levels and run modes, not prompt quality.
2. Anchor detection in your existing incident platforms. For this demo, we used Prometheus and Azure Monitor, but the pattern applies regardless of where your signals live.
3. Use connectors to extend the workflow outward: Teams for real-time coordination, GitHub for durable engineering follow-up.

Start where you're comfortable. If you are just getting your feet wet, begin with one resource group, one incident type, and Review mode. Validate that telemetry flows, RBAC is scoped correctly, and your alert rules cover the failure modes you actually care about before enabling Autonomous. Expand only once each layer is trusted.

Next steps

- Add Prometheus-based alert coverage for ImagePullBackOff and node resource pressure to complement the pod-phase rule.
- Expand to multi-cluster managed scopes once the single-cluster path is trusted and validated.
- Explore how NAP and Azure SRE Agent complement each other — NAP manages infrastructure capacity, while the agent investigates and remediates incidents.

I'd like to thank Cary Chai, Senior Product Manager for Azure SRE Agent, for his early technical guidance and thorough review — his feedback sharpened both the accuracy and quality of this post.

New in Azure SRE Agent: Log Analytics and Application Insights Connectors
Azure SRE Agent now supports Log Analytics and Application Insights as log providers, backed by the Azure MCP Server. Connect your workspaces and App Insights resources, and the agent can query them directly during investigations.

Why This Matters

Log Analytics and Application Insights are common destinations for Azure operational data: container logs, application traces, dependency failures, security events. The agent could already access this data through az monitor CLI commands if you granted RBAC roles to its managed identity, and that approach still works. But it required manual RBAC setup, and the agent had to shell out to the CLI for every query.

With these connectors, setup is simpler and querying is faster. You pick a workspace, we handle the RBAC grants, and the agent gets native MCP-backed query tools instead of going through the CLI.

What You Get

Two new connector types in Builder > Connectors (or through the onboarding flow under Logs):

- Log Analytics - connect a workspace. The agent can query ContainerLog, Syslog, AzureDiagnostics, KubeEvents, SecurityEvent, custom tables - anything in that workspace.
- Application Insights - connect an App Insights resource. The agent gets access to requests, dependencies, exceptions, traces, and custom telemetry.

You can connect multiple workspaces and App Insights resources. The agent knows which ones are available and targets the right one based on the investigation.

Setup

If you want early access, enable Early access to features under Settings > Basics. From there you can add connectors in two ways:

1. Through onboarding: Click Logs in the onboarding flow, then select Log Analytics Workspace or Application Insights under Additional connectors.
2. Through Builder: Go to Builder > Connectors in the sidebar and add a Log Analytics or Application Insights connector.

Pick your resource from the dropdown and save. If discovery doesn't find your resource, both connector types have a manual entry fallback.

On save, we grant the agent's managed identity Log Analytics Reader and Monitoring Reader on the target resource group. If your account can't assign roles, you can grant them separately.

Backed by Azure MCP

Under the hood, this uses the Azure MCP Server with the monitor namespace. When you save your first connector, we spin up an MCP server instance automatically. The agent gets access to tools like:

- monitor_workspace_log_query - KQL against a workspace
- monitor_resource_log_query - KQL against a specific resource
- monitor_workspace_list - discover workspaces
- monitor_table_list - list tables in a workspace

Everything is read-only. The agent can query but never modify your monitoring configuration. If different connectors use different managed identities, the system handles per-call identity routing automatically.

What It Looks Like

An alert fires on your AKS cluster. The agent starts investigating and queries your connected workspace:

```kusto
ContainerLog
| where TimeGenerated > ago(30m)
| where LogEntry contains "error" or LogEntry contains "exception"
| summarize count() by ContainerID, LogEntry
| top 10 by count_
```

```kusto
KubeEvents
| where TimeGenerated > ago(1h)
| where Reason in ("BackOff", "Failed", "Unhealthy")
| summarize count() by Reason, Name, Namespace
| order by count_ desc
```

The agent also ships with built-in skills for common Log Analytics and App Insights query patterns, so it knows which tables to look at and how to structure queries for typical failure scenarios.
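If you want to sanity-check what the agent will see, you can run the same KQL yourself from the CLI. A minimal sketch (the workspace GUID is a placeholder, and the log-analytics CLI extension may need to be installed first):

```bash
# Query a connected workspace by hand; the workspace customer ID is a placeholder
az monitor log-analytics query \
  --workspace "<workspace-customer-id>" \
  --analytics-query 'KubeEvents
    | where TimeGenerated > ago(1h)
    | where Reason in ("BackOff", "Failed", "Unhealthy")
    | summarize count() by Reason, Name, Namespace
    | order by count_ desc'
```

If your identity can read the workspace but the agent's cannot, the connector's RBAC grant is the first thing to check.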
Things to Know

- Read-only: the agent can query data but cannot modify alerts, retention, or workspace config.
- Resource discovery needs Reader: the dropdown uses Azure Resource Graph. If your resources don't show up, use the manual entry fallback.
- One identity per connector: if workspaces need different managed identities, create separate connectors.

Learn More

- Azure SRE Agent documentation
- Azure MCP Server

We'd love feedback. Try it out and let us know what works and what doesn't. Azure SRE Agent is generally available. Learn more at sre.azure.com/docs.

Event-Driven IaC Operations with Azure SRE Agent: Terraform Drift Detection via HTTP Triggers
What Happens After terraform plan Finds Drift?

If your team is like most, the answer looks something like this:

1. A nightly terraform plan runs and finds 3 drifted resources
2. A notification lands in Slack or Teams
3. Someone files a ticket
4. During the next sprint, an engineer opens 4 browser tabs — Terraform state, Azure Portal, Activity Log, Application Insights — and spends 30 minutes piecing together what happened
5. They discover the drift was caused by an on-call engineer who scaled up the App Service during a latency incident at 2 AM
6. They revert the drift with terraform apply
7. The app goes down, because they just scaled it back down while the bug that caused the incident is still deployed

Step 7 is the one nobody talks about. Drift detection tooling has gotten remarkably good — scheduled plans, speculative runs, drift alerts — but the output is always the same: a list of differences. What changed. Not why. Not whether it's safe to fix. The gap isn't detection. It's everything that happens after detection.

HTTP Triggers in Azure SRE Agent close that gap. They turn the structured output that drift detection already produces — webhook payloads, plan summaries, run notifications — into the starting point of an autonomous investigation. Detection feeds the agent. The agent does the rest: correlates with incidents, reads source code, classifies severity, recommends context-aware remediation, notifies the team, and even ships a fix.

Here's what that looks like end to end. What you'll see in this blog:

- An agent that classifies drift as Benign, Risky, or Critical — not just "changed"
- Incident correlation that links a SKU change to a latency spike in Application Insights
- A remediation recommendation that says "Do NOT revert" — and why reverting would cause an outage
- A Teams notification with the full investigation summary
- An agent that reviews its own performance, finds gaps, and improves its own skill file
- A pull request the agent created on its own to fix the root cause

The Pipeline: Detection to Resolution in One Webhook

The architecture is straightforward. Terraform Cloud (or any drift detection tool) sends a webhook when it finds drift. An Azure Logic App adds authentication. The SRE Agent's HTTP Trigger receives it and starts an autonomous investigation.

The end-to-end pipeline: Terraform Cloud detects drift and sends a webhook. The Logic App adds Azure AD authentication via Managed Identity. The SRE Agent's HTTP Trigger fires, and the agent autonomously investigates across 7 dimensions.

Setting Up the Pipeline

Step 1: Deploy the Infrastructure with Terraform

We start with a simple Azure App Service running a Node.js application, deployed via Terraform. The Terraform configuration defines the desired state:

- App Service Plan: B1 (Basic) — single vCPU, ~$13/mo
- App Service: Node 20-lts with TLS 1.2
- Tags: environment: demo, managed_by: terraform, project: sre-agent-iac-blog

```hcl
resource "azurerm_service_plan" "demo" {
  name                = "iacdemo-plan"
  resource_group_name = azurerm_resource_group.demo.name
  location            = azurerm_resource_group.demo.location
  os_type             = "Linux"
  sku_name            = "B1"
}
```

A Logic App is also deployed to act as the authentication bridge between Terraform Cloud webhooks and the SRE Agent's HTTP Trigger endpoint, using Managed Identity to acquire Azure AD tokens. Learn more about HTTP Triggers here.

Step 2: Create the Drift Analysis Skill

Skills are domain knowledge files that teach the agent how to approach a problem.
We create a terraform-drift-analysis skill with an 8-step workflow:

1. Identify Scope — Which resource group and resources to check
2. Detect Drift — Compare Terraform config against Azure reality
3. Correlate with Incidents — Check Activity Log and App Insights
4. Classify Severity — Benign, Risky, or Critical
5. Investigate Root Cause — Read source code from the connected repository
6. Generate Drift Report — Structured summary with severity-coded table
7. Recommend Smart Remediation — Context-aware: don't blindly revert
8. Notify Team — Post findings to Microsoft Teams

The key insight in the skill: "NEVER revert critical drift that is actively mitigating an incident." This teaches the agent to think like an experienced SRE, not just a diff tool.

Step 3: Create the HTTP Trigger

In the SRE Agent UI, we create an HTTP Trigger named tfc-drift-handler with a 7-step agent prompt:

```text
A Terraform Cloud run has completed and detected infrastructure drift.

Workspace: {payload.workspace_name}
Organization: {payload.organization_name}
Run ID: {payload.run_id}
Run Message: {payload.run_message}

STEP 1 — DETECT DRIFT: Compare Terraform configuration against actual Azure state...
STEP 2 — CORRELATE WITH INCIDENTS: Check Azure Activity Log and App Insights...
STEP 3 — CLASSIFY SEVERITY: Rate each drift item as Benign, Risky, or Critical...
STEP 4 — INVESTIGATE ROOT CAUSE: Read the application source code...
STEP 5 — GENERATE DRIFT REPORT: Produce a structured summary...
STEP 6 — RECOMMEND SMART REMEDIATION: Context-aware recommendations...
STEP 7 — NOTIFY TEAM: Post a summary to Microsoft Teams...
```

Step 4: Connect GitHub and Teams

We connect two integrations in the SRE Agent Connectors settings:

- Code Repository: GitHub — so the agent can read application source code during investigations
- Notification: Microsoft Teams — so the agent can post drift reports to the team channel

The Incident Story

Act 1: The Latency Bug

Our demo app has a subtle but devastating bug. The /api/data endpoint calls processLargeDatasetSync() — a function that sorts an array on every iteration, creating an O(n² log n) blocking operation. On a B1 App Service Plan (single vCPU), this blocks the Node.js event loop entirely. Under load, response times spike from milliseconds to 25-58 seconds, with 502 Bad Gateway errors from the Azure load balancer.

Act 2: The On-Call Response

An on-call engineer sees the latency alerts and responds — not through Terraform, but directly through the Azure Portal and CLI. They:

1. Add diagnostic tags — manual_update=True, changed_by=portal_user (benign)
2. Downgrade TLS from 1.2 to 1.0 while troubleshooting (risky — security regression)
3. Scale the App Service Plan from B1 to S1 to throw more compute at the problem (critical — cost increase from ~$13/mo to ~$73/mo)

The incident is partially mitigated — S1 has more compute, so latency drops from catastrophic to merely bad. Everyone goes back to sleep. Nobody updates Terraform.

Act 3: The Drift Check Fires

The next morning, a nightly speculative Terraform plan runs and detects 3 drifted attributes. The notification webhook fires, flowing through the Logic App auth bridge to the SRE Agent HTTP Trigger. The agent wakes up and begins its investigation.
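As an aside, you can exercise the same pipeline on demand by posting a Terraform Cloud-style payload to the Logic App endpoint yourself. A minimal sketch (the URL is a placeholder; the field names match the {payload.X} placeholders in the trigger prompt):

```bash
# Placeholder URL - copy the real callback URL from your Logic App's HTTP trigger
LOGIC_APP_URL="https://prod-00.eastus.logic.azure.com/workflows/<id>/triggers/manual/paths/invoke?<sig>"

curl -X POST "$LOGIC_APP_URL" \
  -H "Content-Type: application/json" \
  -d '{
    "workspace_name": "iac-demo",
    "organization_name": "contoso",
    "run_id": "run-abc123",
    "run_message": "Nightly drift detection run"
  }'
```

The payload Terraform Cloud actually sends is richer than this; the fields above are just the ones the trigger prompt consumes.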
What the Agent Found

Layer 1: Drift Detection

The agent compares Terraform configuration against Azure reality and produces a severity-classified drift report. Three drift items were detected:

- Critical: App Service Plan SKU changed from B1 (~$13/mo) to S1 (~$73/mo) — a +462% cost increase
- Risky: Minimum TLS version downgraded from 1.2 to 1.0 — a security regression vulnerable to BEAST and POODLE attacks
- Benign: Additional tags (changed_by: portal_user, manual_update: True) — cosmetic, no functional impact

Layer 2: Incident Correlation

Here's where the agent goes beyond simple drift detection. It queries Application Insights and discovers a performance incident correlated with the SKU change. Key findings from the incident correlation:

- 97.6% of requests (40 of 41) were impacted by high latency
- The /api/data endpoint does not exist in the repository source code — the deployed application has diverged from the codebase
- The endpoint likely contains a blocking synchronous pattern — Node.js runs on a single event loop, and any synchronous blocking call would explain 26-58s response times
- The SKU scale-up from B1→S1 was an attempt to mitigate latency by adding more compute, but scaling cannot fix application-level blocking code on a single-threaded Node.js server

Layer 3: Smart Remediation

This is the insight that separates an autonomous agent from a reporting tool. Instead of blindly recommending "revert all drift," the agent produces context-aware remediation recommendations:

- Tags (Benign) → Safe to revert anytime via terraform apply -target
- TLS 1.0 (Risky) → Revert immediately — the TLS downgrade is a security risk unrelated to the incident
- SKU S1 (Critical) → DO NOT revert until the /api/data performance root cause is fixed

This is the logic an experienced SRE would apply. Blindly running terraform apply to revert all drift would scale the app back down to B1 while the blocking code is still deployed — turning a mitigated incident into an active outage.

Layer 4: Investigation Summary

The agent produces a complete summary tying everything together. Key findings in the summary:

- Actor: surivineela@microsoft.com made all changes via Azure Portal at ~23:19 UTC
- Performance incident: /api/data averaging 25-57s latency, affecting 97.6% of requests
- Code-infrastructure mismatch: /api/data exists in production but not in the repository source code
- Root cause: the SKU scale-up was emergency incident response, not unauthorized drift

Layer 5: Teams Notification

The agent posts a structured drift report to the team's Microsoft Teams channel. The on-call engineer opens Teams in the morning and sees everything they need: what drifted, why it drifted, and exactly what to do about it — without logging into any dashboard.

The Payoff: A Self-Improving Agent

Here's where the demo surprised us. After completing the investigation, the agent did two things we didn't explicitly ask for.
The Agent Improved Its Own Skill

The agent performed an Execution Review — analyzing what worked and what didn't during its investigation — and found 5 gaps in its own terraform-drift-analysis.md skill file.

What worked well:

- Drift detection via az CLI comparison against Terraform HCL was straightforward
- Activity Log correlation identified the actor and timing
- Application Insights telemetry revealed the performance incident driving the SKU change

Gaps it found and fixed:

1. No incident correlation guidance — the skill didn't instruct checking App Insights
2. No code-infrastructure mismatch detection — no guidance to verify deployed code matches the repository
3. No smart remediation logic — didn't warn against reverting critical drift during active incidents
4. Report template missing an incident correlation column
5. No Activity Log integration guidance — didn't instruct checking who made changes and when

The agent then edited its own skill file to incorporate these learnings. Next time it runs a drift analysis, it will include incident correlation, code-infra mismatch checks, and smart remediation logic by default. This is a learning loop — every investigation makes the agent better at future investigations.

The Agent Created a PR

Without being asked, the agent identified the root-cause code issue and proactively created a pull request to fix it. The PR includes:

- App safety fixes: adding MAX_DELAY_MS and SERVER_TIMEOUT_MS constants to prevent unbounded latency
- Skill improvements: incorporating incident correlation, code-infra mismatch detection, and smart remediation logic

From a single webhook: drift detected → incident correlated → root cause found → team notified → skill improved → fix shipped.

Key Takeaways

- Drift detection is not enough. Knowing that B1 changed to S1 is table stakes. Knowing it changed because of a latency incident, and that reverting it would cause an outage — that's the insight that matters.
- Context-aware remediation prevents outages. Blindly running terraform apply after drift would have scaled the app back to B1 while blocking code was still deployed. The agent's "DO NOT revert SKU" recommendation is the difference between fixing drift and causing a P1 (see the sketch after this list).
- Skills create a learning loop. The agent's self-review and skill improvement means every investigation makes the next one better — without human intervention.
- HTTP Triggers connect any platform. The auth bridge pattern (Logic App + Managed Identity) works for Terraform Cloud, but the same architecture applies to any webhook source: GitHub Actions, Jenkins, Datadog, PagerDuty, custom internal tools.
- The agent acts, not just reports. From a single webhook: drift detected, incident correlated, root cause identified, team notified via Teams, skill improved, and PR created. End-to-end in one autonomous session.
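To make the selective revert concrete, it looks roughly like this (a sketch: only the service plan's resource address appears in this post, so the web app address is assumed):

```bash
# Find the real resource addresses in your state first
terraform state list

# Revert the tag and TLS drift on the web app, but leave the plan's S1 SKU
# in place until the /api/data fix ships.
# "azurerm_linux_web_app.demo" is an assumed address for the App Service.
terraform plan  -target=azurerm_linux_web_app.demo
terraform apply -target=azurerm_linux_web_app.demo

# Only after the performance fix is deployed: reconcile the plan's SKU back to B1
terraform apply -target=azurerm_service_plan.demo
```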
Getting Started

HTTP Triggers are available now in Azure SRE Agent:

1. Create a Skill — Teach the agent your operational runbook (in this case, drift analysis with severity classification and smart remediation)
2. Create an HTTP Trigger — Define your agent prompt with {payload.X} placeholders and connect it to a skill
3. Set Up an Auth Bridge — Deploy a Logic App with Managed Identity to handle Azure AD token acquisition
4. Connect Your Source — Point Terraform Cloud (or any webhook-capable platform) at the Logic App URL
5. Connect GitHub + Teams — Give the agent access to source code and team notifications

Within minutes, you'll have an autonomous pipeline that turns infrastructure drift events into fully contextualized investigations — with incident correlation, root cause analysis, and smart remediation recommendations. The full implementation guide, Terraform files, skill definitions, and demo scripts are available in this repository.

Managing Multi-Tenant Azure Resources with SRE Agent and Lighthouse
Azure SRE Agent is an AI-powered reliability assistant that helps teams diagnose and resolve production issues faster while reducing operational toil. It analyzes logs, metrics, alerts, and deployment data to perform root cause analysis and recommend or execute mitigations with human approval. It can integrate with Azure services across the subscriptions and resource groups that you need to monitor and manage.

Today's enterprise customers live in a multi-tenant world, for reasons ranging from acquisitions and complex corporate structures to managed service providers and IT partners. Azure Lighthouse enables enterprise IT teams and managed service providers to manage resources across multiple Azure tenants from a single control plane. In this demo I will walk you through how to set up Azure SRE Agent to manage and monitor multi-tenant resources delegated through Azure Lighthouse.

1. Navigate to Azure SRE Agent and select Create agent.
2. Fill in the required details along with the deployment region and deploy the SRE agent.
3. Once the deployment is complete, hit Set up your agent.
4. Select the Azure resources you would like your agent to analyze, such as resource groups or subscriptions. This opens a popup window that allows you to select the subscriptions and resource groups you would like SRE Agent to monitor and manage.
5. Select the subscriptions and resource groups under the same tenant that you want SRE Agent to manage.

Great, so far so good 👍

As a managed service provider (MSP), you have multiple tenants that you are managing via Azure Lighthouse, and you need SRE Agent to have access to those. To demo this, we will set up Azure Lighthouse with the correct set of roles and configuration to delegate access to the management subscription where the centralized SRE Agent is running.

1. From the Azure portal, search for Lighthouse.
2. Navigate to the Lighthouse home page and select Manage your customers.
3. On the My customers overview, select Create ARM Template.
4. Provide a Name and Description.
5. Select subscriptions on a Delegated scope.
6. Select + Add authorization, which opens the Add authorization window.
7. Select the Principal type. I am selecting User for demo purposes.
8. In the pop-up window, select the checkbox next to the user you want to delegate the subscription to, and hit Select.
9. Then select the Role you would like the user from the managing tenant to hold in the delegated tenant, and select Add. You can add multiple roles by adding additional authorizations to the selected user.

This step is important: make sure the delegated tenant is assigned the right roles so that SRE Agent can add it as an Azure source. Azure SRE Agent requires an Owner or User Access Administrator RBAC role to assign the subscription to the list of managed resources. If an appropriate role is not assigned, you will see an error when selecting the delegated subscriptions in SRE Agent Managed resources.

Per Lighthouse role support, the Owner role isn't supported, and the User Access Administrator role is supported only for limited purposes. Refer to the Azure Lighthouse documentation for additional information. If the role is not defined correctly, you might see an error stating:

🛑 Failed to add Role assignment: "The 'delegatedRoleDefinitionIds' property is required when using certain roleDefinitionIds for authorization. To allow a principalId to assign roles to a managed identity in the customer tenant, set its roleDefinitionId to User Access Administrator."
Download the ARM template and add the specific Azure built-in roles that you want to grant in the delegatedRoleDefinitionIds property. You can include any supported Azure built-in role except User Access Administrator or Owner. This example shows a principalId with the User Access Administrator role that can assign two built-in roles to managed identities in the customer tenant: Contributor and Log Analytics Contributor.

```json
{
  "principalId": "00000000-0000-0000-0000-000000000000",
  "principalIdDisplayName": "Policy Automation Account",
  "roleDefinitionId": "18d7d88d-d35e-4fb5-a5c3-7773c20a72d9",
  "delegatedRoleDefinitionIds": [
    "b24988ac-6180-42a0-ab88-20f7382dd24c",
    "92aaf0da-9dab-42b6-94a3-d43ce8d16293"
  ]
}
```

In addition, SRE Agent requires certain roles at the managed identity level in order to access and operate on those services. Locate the SRE Agent user-assigned managed identity and add roles to the service principal. For demo purposes I am assigning the Reader, Monitoring Reader, and Log Analytics Reader roles.

Here is the sample ARM template used for this demo:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-08-01/subscriptionDeploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "mspOfferName": {
      "type": "string",
      "metadata": { "description": "Specify a unique name for your offer" },
      "defaultValue": "lighthouse-sre-demo"
    },
    "mspOfferDescription": {
      "type": "string",
      "metadata": { "description": "Name of the Managed Service Provider offering" },
      "defaultValue": "lighthouse-sre-demo"
    }
  },
  "variables": {
    "mspRegistrationName": "[guid(parameters('mspOfferName'))]",
    "mspAssignmentName": "[guid(parameters('mspOfferName'))]",
    "managedByTenantId": "6e03bca1-4300-400d-9e80-000000000000",
    "authorizations": [
      {
        "principalId": "504adfc5-da83-47d4-8709-000000000000",
        "roleDefinitionId": "e40ec5ca-96e0-45a2-b4ff-59039f2c2b59",
        "principalIdDisplayName": "Pranab Mandal"
      },
      {
        "principalId": "504adfc5-da83-47d4-8709-000000000000",
        "roleDefinitionId": "18d7d88d-d35e-4fb5-a5c3-7773c20a72d9",
        "delegatedRoleDefinitionIds": [
          "b24988ac-6180-42a0-ab88-20f7382dd24c",
          "92aaf0da-9dab-42b6-94a3-d43ce8d16293"
        ],
        "principalIdDisplayName": "Pranab Mandal"
      },
      {
        "principalId": "504adfc5-da83-47d4-8709-000000000000",
        "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c",
        "principalIdDisplayName": "Pranab Mandal"
      },
      {
        "principalId": "0374ff5c-5272-49fa-878a-000000000000",
        "roleDefinitionId": "acdd72a7-3385-48ef-bd42-f606fba81ae7",
        "principalIdDisplayName": "sre-agent-ext-sub1-4n4y4v5jjdtuu"
      },
      {
        "principalId": "0374ff5c-5272-49fa-878a-000000000000",
        "roleDefinitionId": "43d0d8ad-25c7-4714-9337-8ba259a9fe05",
        "principalIdDisplayName": "sre-agent-ext-sub1-4n4y4v5jjdtuu"
      },
      {
        "principalId": "0374ff5c-5272-49fa-878a-000000000000",
        "roleDefinitionId": "73c42c96-874c-492b-b04d-ab87d138a893",
        "principalIdDisplayName": "sre-agent-ext-sub1-4n4y4v5jjdtuu"
      }
    ]
  },
  "resources": [
    {
      "type": "Microsoft.ManagedServices/registrationDefinitions",
      "apiVersion": "2022-10-01",
      "name": "[variables('mspRegistrationName')]",
      "properties": {
        "registrationDefinitionName": "[parameters('mspOfferName')]",
        "description": "[parameters('mspOfferDescription')]",
        "managedByTenantId": "[variables('managedByTenantId')]",
        "authorizations": "[variables('authorizations')]"
      }
    },
    {
      "type": "Microsoft.ManagedServices/registrationAssignments",
      "apiVersion": "2022-10-01",
      "name": "[variables('mspAssignmentName')]",
      "dependsOn": [
        "[resourceId('Microsoft.ManagedServices/registrationDefinitions/', variables('mspRegistrationName'))]"
      ],
      "properties": {
        "registrationDefinitionId": "[resourceId('Microsoft.ManagedServices/registrationDefinitions/', variables('mspRegistrationName'))]"
      }
    }
  ],
  "outputs": {
    "mspOfferName": {
      "type": "string",
      "value": "[concat('Managed by', ' ', parameters('mspOfferName'))]"
    },
    "authorizations": {
      "type": "array",
      "value": "[variables('authorizations')]"
    }
  }
}
```

Next, deploy the template in the customer's tenant:

1. Log in to the customer's tenant and navigate to Service providers from the Azure portal.
2. From the Service providers overview screen, select Service provider offers in the left navigation pane.
3. From the top menu, open the Add offer drop-down and select Add via template.
4. In the Upload Offer Template window, drag and drop or upload the template file created in the earlier step, and hit Upload.
5. Once the file is uploaded, select Review + Create. Deployment takes a few minutes, and a successful deployment page should be displayed.
6. Navigate to Delegations from the Lighthouse overview and validate that you see the delegated subscription and the assigned roles (a CLI check is sketched at the end of this post).

Once the Lighthouse delegation is set up, sign in to the managing tenant and navigate to the deployed SRE Agent:

1. Navigate to Azure resources from the top menu, or via Settings > Managed resources.
2. Navigate to Add subscriptions to select the customer subscriptions that you need SRE Agent to manage. Adding a subscription will automatically add the required permissions for the agent.

Once the appropriate roles are added, the subscriptions are ready for the agent to manage and monitor resources within them.

Summary - Benefits

This blog post demonstrates how Azure SRE Agent can be used to centrally monitor and manage Azure resources across multiple tenants by integrating it with Azure Lighthouse, a common requirement for enterprises and managed service providers operating in complex, multi-tenant environments. It walks through:

- Centralized SRE operations across multiple Azure tenants
- Secure, role-based access using delegated resource management
- Reduced operational overhead for MSPs and enterprise IT teams
- Unified visibility into resource health and reliability across customer environments