Azure SRE Agent
Get started with Dynatrace MCP server in Azure SRE Agent
Overview

Dynatrace provides a hosted MCP server that enables Azure SRE Agent to interact with the Dynatrace observability platform via SSE (Server-Sent Events) transport. Query logs, investigate problems, analyze security vulnerabilities, run DQL (Dynatrace Query Language) queries, and generate timeseries forecasts directly from your SRE Agent conversations.

Key capabilities

- Create DQL query: Generate DQL queries from natural language prompts
- Explain DQL query: Get plain English explanations of DQL queries
- Ask Dynatrace: Ask general Dynatrace questions (workflows, alerts, etc.)
- Run DQL query: Execute DQL and get results (up to 1000 records)
- Query problems: List active or closed Davis Problems
- Get problem by ID: Get detailed problem information
- Get vulnerabilities: List open security vulnerabilities by risk level
- Timeseries forecast: Predict future values using statistical models

NOTE: This is the official Dynatrace-hosted MCP server. For the community OSS version, see the alternate configuration section.

Prerequisites

- Azure SRE Agent resource
- Dynatrace Platform account (SaaS)
- Platform Token or OAuth Client with required scopes
- Your Dynatrace environment URL (e.g., https://abc12345.apps.dynatrace.com)

Step 1: Get your Dynatrace credentials

Create a Platform Token:
1. Go to My Platform Tokens
2. Click + Create token
3. Add the token name, expiration date, and required scopes (see below). For more details, see the Dynatrace documentation on creating platform tokens.
4. Copy the generated token

Required scopes

- MCP Gateway access: mcp-gateway:servers:invoke, mcp-gateway:servers:read
- Create DQL query: davis-copilot:nl2dql:execute
- Explain DQL query: davis-copilot:dql2nl:execute
- Ask Dynatrace: davis-copilot:conversations:execute
- Run DQL query: storage:buckets:read
- Query problems: storage:buckets:read, storage:events:read
- Get vulnerabilities: storage:security.events:read
- Timeseries forecast: davis:analyzers:read, davis:analyzers:execute

IMPORTANT: The mcp-gateway:servers:invoke and mcp-gateway:servers:read scopes are required for SSE transport to the Dynatrace-hosted MCP server.

Step 2: Add the MCP connector

1. Navigate to your Azure SRE Agent at sre.azure.com
2. Select your agent from the list
3. In the left navigation, expand Builder > Connectors
4. Select Add connector
5. In "Choose a connector", select MCP server (User provided connector)
6. Click Next and configure the Streamable-HTTP connection (Dynatrace-hosted MCP):

- Name: dynatrace-mcp
- Connection type: Streamable-HTTP
- URL: https://abc12345.apps.dynatrace.com/platform-reserved/mcp-gateway/v0.1/servers/dynatrace-mcp/mcp
- Authentication method: Bearer token
- Token: Your Dynatrace Platform Token

TIP: Replace abc12345 with your Dynatrace tenant name from your environment URL.
Click Next to review, then Add to save Equivalent mcp.json configuration For reference, here's the equivalent configuration in mcp.json format (used by VS Code and other MCP clients): { "mcpServers": { "dynatrace-mcp": { "url": "https://abc12345.apps.dynatrace.com/platform-reserved/mcp-gateway/v0.1/servers/dynatrace-mcp/mcp", "transport": "streamable-http", "headers": { "Authorization": "Bearer YOUR_PLATFORM_TOKEN" } } } } Step 3: Create a Dynatrace subagent Create a specialized agent focused on Dynatrace observability: In the left navigation, select Builder > Subagent builder Click + Create Switch to the YAML tab and paste this configuration: api_version: azuresre.ai/v1 kind: AgentConfiguration spec: name: DynatraceExpert system_prompt: | You are a Dynatrace observability expert with access to real-time monitoring data via the Dynatrace MCP server. ## Capabilities You can help users with: - Creating DQL queries from natural language descriptions - Explaining existing DQL queries in plain English - Running DQL queries to fetch logs, events, and metrics - Investigating active and closed Davis Problems - Analyzing security vulnerabilities by risk level - Getting Kubernetes cluster events - Generating timeseries forecasts ## Best Practices When working with DQL: - Use the Create DQL query tool to generate queries from natural language - Use the Explain DQL query tool to help users understand complex queries - Start with smaller timeframes (last 1h or 12h) to optimize query performance - Always explain what data your queries will return When investigating problems: - First query problems to understand the active issue landscape - Get detailed problem info including root cause analysis - Correlate with related logs and events agent_type: Autonomous enable_skills: true Click Create to save the subagent Click on the newly created subagent named "DynatraceExpert" and scroll down to the Tools section Under Tools select "Choose Tools" Filter for the "MCP Tool" and select your Dynatrace MCP Server Tools Click Save Step 4: Test the integration Open a new chat session with your SRE Agent Try these example prompts: DQL queries Create a DQL query to find all error logs from the last hour Explain this DQL query: fetch logs | filter loglevel == "ERROR" | limit 10 Run a DQL query to show me the top 10 slowest requests in the last 24 hours Problem investigation List active problems in my Dynatrace environment Get details for problem P-12345 Query problems from the last 7 days that are now closed Available tools Tool Description Required Scopes Create DQL query Generate DQL from natural language davis-copilot:nl2dql:execute Explain DQL query Get plain English explanation of DQL davis-copilot:dql2nl:execute Ask Dynatrace General Dynatrace questions and guidance davis-copilot:conversations:execute Run DQL query Execute DQL and get results (max 1000 records) storage:buckets:read Query problems List active or closed Davis Problems storage:buckets:read , storage:events:read Get problem by ID Get detailed problem information storage:buckets:read , storage:events:read Get vulnerabilities List open security vulnerabilities by risk storage:security.events:read Get K8s cluster events Get events for Kubernetes clusters storage:buckets:read Timeseries forecast Predict future values using statistical models davis:analyzers:read , davis:analyzers:execute Troubleshooting Connection issues Error Cause Solution 401 Unauthorized Invalid or expired token Generate a new Platform Token 403 Forbidden Missing scopes Add 
mcp-gateway:servers:invoke and mcp-gateway:servers:read scopes
Could not connect: Wrong tenant URL. Verify your tenant name in the URL
Timeout: Network issues. Check network access to *.apps.dynatrace.com

Verify your token

Test your token with this curl command:

curl -X GET "https://abc12345.apps.dynatrace.com/platform/management/v1/environment" -H "Authorization: Bearer YOUR_PLATFORM_TOKEN" -H "accept: application/json"

Alternate: OSS MCP server

Dynatrace also provides an open-source MCP server with additional capabilities like entity management, workflows, and document creation. This version uses Stdio transport and requires Node.js.

NOTE: The OSS MCP server is community-supported. For help, use GitHub Issues.

Setup in Azure SRE Agent

1. Navigate to Builder > Connectors > Add connector
2. Select MCP server and click Next
3. Configure the Stdio connection:

- Name: dynatrace-mcp-oss
- Connection type: Stdio
- Command: npx
- Arguments: -y, @dynatrace-oss/dynatrace-mcp-server@latest
- Environment variables: see below

Environment variables:

- DT_ENVIRONMENT: Your Dynatrace environment URL
- DT_PLATFORM_TOKEN: Your Platform Token with required scopes
- DT_GRAIL_QUERY_BUDGET_GB: 100 (optional, limits query costs)

4. Click Next to review, then Add to save

Equivalent mcp.json configuration

{ "mcpServers": { "dynatrace-mcp-oss": { "command": "npx", "args": ["-y", "@dynatrace-oss/dynatrace-mcp-server@latest"], "env": { "DT_ENVIRONMENT": "https://abc12345.apps.dynatrace.com", "DT_PLATFORM_TOKEN": "YOUR_PLATFORM_TOKEN", "DT_GRAIL_QUERY_BUDGET_GB": "100" } } } }

OSS server scopes

- Read logs: storage:logs:read
- Read metrics: storage:metrics:read
- Read entities: storage:entities:read
- Workflows: automation:workflows:read, automation:workflows:write
- Documents: document:documents:read, document:documents:write
- Slack: app-settings:objects:read

Related content

- Dynatrace MCP documentation
- Dynatrace MCP Server (OSS)
- Dynatrace Platform Token documentation
- Dynatrace Query Language (DQL)

An AI led SDLC: Building an End-to-End Agentic Software Development Lifecycle with Azure and GitHub.
This is due to the inevitable move towards fully agentic, end-to-end SDLCs. We may not yet be at a point where software engineers are managing fleets of agents creating the billion-dollar AI abstraction layer, but (as I will evidence in this article) we are certainly on the precipice of such a world. Before we dive into the reality of agentic development today, let me examine two very different modules from university and their relevance in an AI-first development environment. Manual Requirements Translation. At university I dedicated two whole years to a unit called “Systems Design”. This was one of my favourite units, primarily focused on requirements translation. Often, I would receive a scenario between “The Proprietor” and “The Proprietor’s wife”, who seemed to be in a never-ending cycle of new product ideas. These tasks would be analysed, broken down, manually refined, and then mapped to some kind of early-stage application architecture (potentially some pseudo-code and a UML diagram or two). The big intellectual effort in this exercise was taking human intention and turning it into something tangible to build from (BA’s). Today, by the time I have opened Notepad and started to decipher requirements, an agent can already have created a comprehensive list, a service blueprint, and a code scaffold to start the process (*cough* spec-kit *cough*). Manual debugging. Need I say any more? Old-school debugging with print()’s and breakpoints is dead. I spent countless hours learning to debug in a classroom and then later with my own software, stepping through execution line by line, reading through logs, and understanding what to look for; where correlation did and didn’t mean causation. I think back to my year at IBM as a fresh-faced intern in a cloud engineering team, where around 50% of my time was debugging different issues until it was sufficiently “narrowed down”, and then reading countless Stack Overflow posts figuring out the actual change I would need to make to a PowerShell script or Jenkins pipeline. Already in Azure, with the emergence of SRE agents, that debug process looks entirely different. The debug process for software even more so… #terminallastcommand WHY IS THIS NOT RUNNING? #terminallastcommand Review these logs and surface errors relating to XYZ. As I said: breakpoints are dead, for now at least. Caveat – Is this a good thing? One more deviation from the main core of the article if you would be so kind (if you are not as kind skip to the implementation walkthrough below). Is this actually a good thing? Is a software engineering degree now worthless? What if I love printf()? I don’t know is my answer today, at the start of 2026. Two things worry me: one theoretical and one very real. To start with the theoretical: today AI takes a significant amount of the “donkey work” away from developers. How does this impact cognitive load at both ends of the spectrum? The list that “donkey work” encapsulates is certainly growing. As a result, on one end of the spectrum humans are left with the complicated parts yet to be within an agent’s remit. This could have quite an impact on our ability to perform tasks. If we are constantly dealing with the complex and advanced, when do we have time to re-root ourselves in the foundations? Will we see an increase in developer burnout? How do technical people perform without the mundane or routine tasks? I often hear people who have been in the industry for years discuss how simple infrastructure, computing, development, etc. 
were 20 years ago, almost with a longing to return to a world where today’s zero trust, globally replicated architectures are a twinkle in an architect’s eye. Is constantly working on only the most complex problems a good thing? At the other end of the spectrum, what if the performance of AI tooling and agents outperforms our wildest expectations? Suddenly, AI tools and agents are picking up more and more of today’s complicated and advanced tasks. Will developers, architects, and organisations lose some ability to innovate? Fundamentally, we are not talking about artificial general intelligence when we say AI; we are talking about incredibly complex predictive models that can augment the existing ideas they are built upon but are not, in themselves, innovators. Put simply, in the words of Scott Hanselman: “Spicy auto-complete”. Does increased reliance on these agents in more and more of our business processes remove the opportunity for innovative ideas? For example, if agents were football managers, would we ever have graduated from Neil Warnock and Mick McCarthy football to Pep? Would every agent just augment a ‘lump it long and hope’ approach? We hear about learning loops, but can these learning loops evolve into “innovation loops?” Past the theoretical and the game of 20 questions, the very real concern I have is off the back of some data shared recently on Stack Overflow traffic. We can see in the diagram below that Stack Overflow traffic has dipped significantly since the release of GitHub Copilot in October 2021, and as the product has matured that trend has only accelerated. Data from 12 months ago suggests that Stack Overflow has lost 77% of new questions compared to 2022… Stack Overflow democratises access to problem-solving (I have to be careful not to talk in past tense here), but I will admit I cannot remember the last time I was reviewing Stack Overflow or furiously searching through solutions that are vaguely similar to my own issue. This causes some concern over the data available in the future to train models. Today, models can be grounded in real, tested scenarios built by developers in anger. What happens with this question drop when API schemas change, when the technology built for today is old and deprecated, and the dataset is stale and never returning to its peak? How do we mitigate this impact? There is potential for some closed-loop type continuous improvement in the future, but do we think this is a scalable solution? I am unsure. So, back to the question: “Is this a good thing?”. It’s great today; the long-term impacts are yet to be seen. If we think that AGI may never be achieved, or is at least a very distant horizon, then understanding the foundations of your technical discipline is still incredibly important. Developers will not only be the managers of their fleet of agents, but also the janitors mopping up the mess when there is an accident (albeit likely mopping with AI-augmented tooling). An AI First SDLC Today – The Reality Enough reflection and nostalgia (I don’t think that’s why you clicked the article), let’s start building something. For the rest of this article I will be building an AI-led, agent-powered software development lifecycle. The example I will be building is an AI-generated weather dashboard. It’s a simple example, but if agents can generate, test, deploy, observe, and evolve this application, it proves that today, and into the future, the process can likely scale to more complex domains. Let’s start with the entry point. 
The problem statement that we will build from. “As a user I want to view real time weather data for my city so that I can plan my day.” We will use this as the single input for our AI led SDLC. This is what we will pass to promptkit and watch our app and subsequent features built in front of our eyes. The goal is that we will: - Spec-kit to get going and move from textual idea to requirements and scaffold. - Use a coding agent to implement our plan. - A Quality agent to assess the output and quality of the code. - GitHub Actions that not only host the agents (Abstracted) but also handle the build and deployment. - An SRE agent proactively monitoring and opening issues automatically. The end to end flow that we will review through this article is the following: Step 1: Spec-driven development - Spec First, Code Second A big piece of realising an AI-led SDLC today relies on spec-driven development (SDD). One of the best summaries for SDD that I have seen is: “Version control for your thinking”. Instead of huge specs that are stale and buried in a knowledge repository somewhere, SDD looks to make them a first-class citizen within the SDLC. Architectural decisions, business logic, and intent can be captured and versioned as a product evolves; an executable artefact that evolves with the project. In 2025, GitHub released the open-source Spec Kit: a tool that enables the goal of placing a specification at the centre of the engineering process. Specs drive the implementation, checklists, and task breakdowns, steering an agent towards the end goal. This article from GitHub does a great job explaining the basics, so if you’d like to learn more it’s a great place to start (https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-new-open-source-toolkit/). In short, Spec Kit generates requirements, a plan, and tasks to guide a coding agent through an iterative, structured development process. Through the Spec Kit constitution, organisational standards and tech-stack preferences are adhered to throughout each change. I did notice one (likely intentional) gap in functionality that would cement Spec Kit’s role in an autonomous SDLC. That gap is that the implement stage is designed to run within an IDE or client coding agent. You can now, in the IDE, toggle between task implementation locally or with an agent in the cloud. That is great but again it still requires you to drive through the IDE. Thinking about this in the context of an AI-led SDLC (where we are pushing tasks from Spec Kit to a coding agent outside of my own desktop), it was clear that a bridge was needed. As a result, I used Spec Kit to create the Spec-to-issue tool. This allows us to take the tasks and plan generated by Spec Kit, parse the important parts, and automatically create a GitHub issue, with the option to auto-assign the coding agent. From the perspective of an autonomous AI-led SDLC, Speckit really is the entry point that triggers the flow. How Speckit is surfaced to users will vary depending on the organisation and the context of the users. For the rest of this demo I use Spec Kit to create a weather app calling out to the OpenWeather API, and then add additional features with new specs. With one simple prompt of “/promptkit.specify “Application feature/idea/change” I suddenly had a really clear breakdown of the tasks and plan required to get to my desired end state while respecting the context and preferences I had previously set in my Spec Kit constitution. 
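To make that bridge concrete, here is a rough sketch of what pushing Spec Kit output into a GitHub issue can look like from a terminal. It is illustrative only: the file paths, labels, and the Copilot assignee handle are assumptions about your repository setup, not documented behaviour of the Spec-to-issue tool.

```bash
# Illustrative sketch: turn the Spec Kit artefacts for a feature into a GitHub issue
# and hand it to the coding agent. Paths, labels, and the assignee handle are
# assumptions about your setup, not a documented Spec-to-issue command.
FEATURE_DIR="specs/001-weather-dashboard"

# Combine the generated plan and task list into the issue body
cat "$FEATURE_DIR/plan.md" "$FEATURE_DIR/tasks.md" > /tmp/issue-body.md

# Create the issue; assigning it to the coding agent kicks off implementation
# (check how Copilot is assigned in your organisation - the handle below is a placeholder)
gh issue create \
  --repo myorg/weather-dashboard \
  --title "Implement: real-time weather dashboard (spec 001)" \
  --body-file /tmp/issue-body.md \
  --label "spec-kit" \
  --label "agent-task" \
  --assignee "copilot"
```

In my setup the Spec-to-issue tool handles this parsing and assignment automatically; the sketch just shows the shape of the hand-off.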
I had mentioned a desire for test-driven development, that I required certain coverage, and that all solutions were to be Azure Native. The real benefit here compared to prompting directly into the coding agent is that the breakdown of one large task into individual, measurable, small components that are clear and methodical improves the coding agent's ability to perform them by a considerable degree. We can see an example below of not just creating a whole application but another spec to iterate on an existing application and add a feature. We can see the result of the spec creation, the issue in our GitHub repo, and, most importantly for the next step, that our coding agent, GitHub Copilot, has been assigned automatically.

Step 2: GitHub Coding Agent - Iterative, autonomous software creation

Talking of coding agents, GitHub Copilot's coding agent is an autonomous agent in GitHub that can take a scoped development task and work on it in the background using the repository's context. It can make code changes and produce concrete outputs like commits and pull requests for a developer to review. The developer stays in control by reviewing, requesting changes, or taking over at any point. This does the heavy lifting in our AI-led SDLC. We have already seen great success with customers who have adopted the coding agent when it comes to carrying out menial tasks to save developers time. These coding agents can work in parallel to human developers and with each other. In our example we see that the coding agent creates a new branch for its changes, and creates a PR which it starts working on as it ticks off the various tasks generated in our spec.

One huge positive of the coding agent that sets it apart from other similar solutions is the transparency in decision-making and actions taken. The monitoring and observability built directly into the feature means that the agent's "thinking" is easily visible: the iterations and steps being taken can be viewed in full sequence in the Agents tab. Furthermore, the action that the agent is running is also transparently available to view in the Actions tab, meaning problems can be assessed very quickly. Once the coding agent is finished, it has run the required tests and, even in the case of a UI change, goes as far as calling the Playwright MCP server and screenshotting the change to showcase in the PR. We are then asked to review the change. In this demo, I also created a GitHub Action that is triggered when a PR review is requested: it creates the required resources in Azure and surfaces the (in this case) Azure Container Apps revision URL, making it even smoother for the human in the loop to evaluate the changes. Just like any normal PR, if changes are required comments can be left; when they are, the coding agent can pick them up and action what is needed. It's also worth noting that for any manual intervention here, use of GitHub Codespaces would work very well to make minor changes or perform testing on an agent's branch. We can even see that the unit tests specified in our spec have been executed by our coding agent.
The concise and considered breakdown generated from Spec Kit provides the structure and foundation for the agent to execute on; very little is left to interpretation for the coding agent. Step 3: GitHub Code Quality Review (Human in the loop with agent assistance.) GitHub Code Quality is a feature (currently in preview) that proactively identifies code quality risks and opportunities for enhancement both in PRs and through repository scans. These are surfaced within a PR and also in repo-level scoreboards. This means that PRs can now extend existing static code analysis: Copilot can action CodeQL, PMD, and ESLint scanning on top of the new, in-context code quality findings and autofixes. Furthermore, we receive a summary of the actual changes made. This can be used to assist the human in the loop in understanding what changes have been made and whether enhancements or improvements are required. Thinking about this in the context of review coverage, one of the challenges sometimes in already-lean development teams is the time to give proper credence to PRs. Now, with AI-assisted quality scanning, we can be more confident in our overall evaluation and test coverage. I would expect that use of these tools alongside existing human review processes would increase repository code quality and reduce uncaught errors. The data points support this too. The Qodo 2025 AI Code Quality report showed that usage of AI code reviews increased quality improvements to 81% (from 55%). A similar study from Atlassian RovoDev 2026 study showed that 38.7% of comments left by AI agents in code reviews lead to additional code fixes. LLM’s in their current form are never going to achieve 100% accuracy however these are still considerable, significant gains in one of the most important (and often neglected) parts of the SDLC. With a significant number of software supply chain attacks recently it is also not a stretch to imagine that that many projects could benefit from "independently" (use this term loosely) reviewed and summarised PR's and commits. This in the future could potentially by a specialist/sub agent during a PR or merge to focus on identifying malicious code that may be hidden within otherwise normal contributions, case in point being the "near-miss" XZ Utils attack. Step 4: GitHub Actions for build and deploy - No agents here, just deterministic automation. This step will be our briefest, as the idea of CI/CD and automation needs no introduction. It is worth noting that while I am sure there are additional opportunities for using agents within a build and deploy pipeline, I have not investigated them. I often speak with customers about deterministic and non-deterministic business process automation, and the importance of distinguishing between the two. Some processes were created to be deterministic because that is all that was available at the time; the number of conditions required to deal with N possible flows just did not scale. However, now those processes can be non-deterministic. Good examples include IVR decision trees in customer service or hard-coded sales routines to retain a customer regardless of context; these would benefit from less determinism in their execution. However, some processes remain best as deterministic flows: financial transactions, policy engines, document ingestion. While all these flows may be part of an AI solution in the future (possibly as a tool an agent calls, or as part of a larger agent-based orchestration), the processes themselves are deterministic for a reason. 
Just because we could have dynamic decision-making doesn’t mean we should. Infrastructure deployment and CI/CD pipelines are one good example of this, in my opinion. We could have an agent decide what service best fits our codebase and which region we should deploy to, but do we really want to, and do the benefits outweigh the potential negatives? In this process flow we use a deterministic GitHub action to deploy our weather application into our “development” environment and then promote through the environments until we reach production and we want to now ensure that the application is running smoothly. We also use an action as mentioned above to deploy and surface our agents changes. In Azure Container Apps we can do this in a secure sandbox environment called a “Dynamic Session” to ensure strong isolation of what is essentially “untrusted code”. Often enterprises can view the building and development of AI applications as something that requires a completely new process to take to production, while certain additional processes are new, evaluation, model deployment etc many of our traditional SDLC principles are just as relevant as ever before, CI/CD pipelines being a great example of that. Checked in code that is predictably deployed alongside required services to run tests or promote through environments. Whether you are deploying a java calculator app or a multi agent customer service bot, CI/CD even in this new world is a non-negotiable. We can see that our geolocation feature is running on our Azure Container Apps revision and we can begin to evaluate if we agree with CoPilot that all the feature requirements have been met. In this case they have. If they hadn't we'd just jump into the PR and add a new comment with "@copilot" requesting our changes. Step 5: SRE Agent - Proactive agentic day two operations. The SRE agent service on Azure is an operations-focused agent that continuously watches a running service using telemetry such as logs, metrics, and traces. When it detects incidents or reliability risks, it can investigate signals, correlate likely causes, and propose or initiate response actions such as opening issues, creating runbook-guided fixes, or escalating to an on-call engineer. It effectively automates parts of day two operations while keeping humans in control of approval and remediation. It can be run in two different permission models: one with a reader role that can temporarily take user permissions for approved actions when identified. The other model is a privileged level that allows it to autonomously take approved actions on resources and resource types within the resource groups it is monitoring. In our example, our SRE agent could take actions to ensure our container app runs as intended: restarting pods, changing traffic allocations, and alerting for secret expiry. The SRE agent can also perform detailed debugging to save human SREs time, summarising the issue, fixes tried so far, and narrowing down potential root causes to reduce time to resolution, even across the most complex issues. My initial concern with these types of autonomous fixes (be it VPA on Kubernetes or an SRE agent across your infrastructure) is always that they can very quickly mask problems, or become an anti-pattern where you have drift between your IaC and what is actually running in Azure. One of my favourite features of SRE agents is sub-agents. Sub-agents can be created to handle very specific tasks that the primary SRE agent can leverage. 
Examples include alerting, report generation, and potentially other third-party integrations or tooling that require a more concise context. In my example, I created a GitHub sub-agent to be called by the primary agent after every issue that is resolved. When called, the GitHub sub-agent creates an issue summarising the origin, context, and resolution. This really brings us full circle. We can then potentially assign this to our coding agent to implement the fix before we proceed with the rest of the cycle; for example, a change where a port is incorrect in some Bicep, or min scale has been adjusted because of latency observed by the SRE agent. These are quick fixes that can be easily implemented by a coding agent, subsequently creating an autonomous feedback loop with human review.

Conclusion

The journey through this AI-led SDLC demonstrates that it is possible, with today's tooling, to improve any existing SDLC with AI assistance, evolving from simply using a chat interface in an IDE. By combining Spec Kit, spec-driven development, autonomous coding agents, AI-augmented quality checks, deterministic CI/CD pipelines, and proactive SRE agents, we see an emerging ecosystem where human creativity and oversight guide an increasingly capable fleet of collaborative agents.

As with all AI solutions we design today, I remind myself that "this is as bad as it gets". If the last two years are anything to go by, the rate of change in this space means this article may look very different in 12 months. I imagine Spec-to-issue will no longer be required as a bridge, as native solutions evolve to make this process even smoother. There are also some areas of an AI-led SDLC that are not included in this post, things like reviewing the inner-loop process or the use of existing enterprise patterns and blueprints. I also did not review use of third-party plugins or tools available through GitHub. These would make for an interesting expansion of the demo. We also did not look at the creation of custom coding agents, which could be hosted in Microsoft Foundry; this is especially pertinent with the recent announcement of Anthropic models now being available to deploy in Foundry.

Does today's tooling mean that developers, QAs, and engineers are no longer required? Absolutely not (and if I am honest, I can't see that changing any time soon). However, it is evidently clear that in the next 12 months, enterprises who reshape their SDLC (and any other business process) to become one augmented by agents will innovate faster, learn faster, and deliver faster, leaving organisations who resist this shift struggling to keep up.

Unifying Scattered Observability Data from Dynatrace + Azure for Self-Healing with SRE Agent
What if your deployments could fix themselves? The Deployment Remediation Challenge Modern operations teams face a recurring nightmare: A deployment ships at 9 AM Errors spike at 9:15 AM By the time you correlate logs, identify the bad revision, and execute a rollback—it's 10:30 AM Your users felt 75 minutes of degraded experience The data to detect and fix this existed the entire time—but it was scattered across clouds and platforms: Error logs and traces → Dynatrace (third-party observability cloud) Deployment history and revisions → Azure Container Apps API Resource health and metrics → Azure Monitor Rollback commands → Azure CLI Your observability data lives in one cloud. Your deployment data lives in another. Stitching together log analysis from Dynatrace with deployment correlation from Azure—and then executing remediation—required a human to manually bridge these silos. What if an AI agent could unify data from third-party observability platforms with Azure deployment history and act on it automatically—every week, before users even notice? Enter SRE Agent + Model Context Protocol (MCP) + Subagents Azure SRE Agent doesn't just work with Azure. Using the Model Context Protocol (MCP), you can connect external observability platforms like Dynatrace directly to your agent. Combined with subagents for specialized expertise and scheduled tasks for automation, you can build an automated deployment remediation system. Here's what I built/configured for my Azure Container Apps environment inside SRE Agent: Component Purpose Dynatrace MCP Connector Connect to Dynatrace's MCP gateway for log queries via DQL 'Dynatrace' Subagent Log analysis specialist that executes DQL queries and identifies root causes 'Remediation' Subagent Deployment remediation specialist that correlates errors with deployments and executes rollbacks Scheduled Task Weekly Monday 9 AM health check for the 'octopets-prod-api' Container App Subagent workflow: The subagent workflow in SRE Agent Builder: 'OctopetsScheduledTask' triggers 'RemediationSubagent' (12 tools), which hands off to 'DynatraceSubagent' (3 MCP tools) for log analysis. How I Set It Up: Step by Step Step 1: Connect Dynatrace via MCP SRE Agent supports the Model Context Protocol (MCP) for connecting external data sources. Dynatrace exposes an MCP gateway that provides access to its APIs as first-class tools. Connection configuration: { "name": "dynatrace-mcp-connector", "dataConnectorType": "Mcp", "dataSource": "Endpoint=https://<your-tenant>.live.dynatrace.com/platform-reserved/mcp-gateway/v0.1/servers/dynatrace-mcp/mcp;AuthType=BearerToken;BearerToken=<your-api-token>" } Once connected, SRE Agent automatically discovers Dynatrace tools. 💡 Tip: When creating your Dynatrace API token, grant the `entities.read`, `events.read`, and `metrics.read` scopes for comprehensive access. Step 2: Build Specialized Subagents Generic agents are good. Specialized agents are better. I created two subagents that work together in a coordinated workflow—one for Dynatrace log analysis, the other for deployment remediation. DynatraceSubagent This subagent is the log analysis specialist. It uses the Dynatrace MCP tools to execute DQL queries and identify root causes. 
Key capabilities: Executes DQL queries via MCP tools (`create-dql`, `execute-dql`, `explain-dql`) Fetches 5xx error counts, request volumes, and spike detection Returns consolidated analysis with root cause, affected services, and error patterns 👉 View full DynatraceSubagent configuration here RemediationSubagent This is the deployment remediation specialist. It correlates Dynatrace log analysis with Azure Container Apps deployment history, generates correlation charts, and executes rollbacks when confidence is high. Key capabilities: Retrieves Container Apps revision history (`GetDeploymentTimes`, `ListRevisions`) Generates correlation charts (`PlotTimeSeriesData`, `PlotBarChart`, `PlotAreaChartWithCorrelation`) Computes confidence score (0-100%) for deployment causation Executes rollback and traffic shift when confidence > 70% 👉 View full RemediationSubagent configuration here The power of specialization: Each agent focuses on its domain—DynatraceSubagent handles log analysis, RemediationSubagent handles deployment correlation and rollback. When the workflow runs, RemediationSubagent hands off to DynatraceSubagent (bi-directional handoff) for analysis, gets the findings back, and continues with remediation. Simple delegation, not a single monolithic agent trying to do everything. Step 3: Create the Weekly Scheduled Task Now the automation. I configured a scheduled task that runs every Monday at 9:30 AM to check whether deployments in the last 4 hours caused any issues—and automatically remediate if needed. Scheduled task configuration: Setting Value Task Name OctopetsScheduledTask Frequency Weekly Day of Week Monday Time 9:30 AM Response Subagent RemediationSubagent Scheduled Task Configuration Configuring the OctopetsScheduledTask in the SRE Agent portal The key insight: the scheduled task is just a coordinator. It immediately hands off to the RemediationSubagent, which orchestrates the entire workflow including handoffs to DynatraceSubagent. Step 4: See It In Action Here's what happens when the scheduled task runs: The scheduled task triggering and initiating Dynatrace analysis for octopets-prod-api The DynatraceSubagent analyzes the logs and identifies the root cause: executing DQL queries and returning consolidated log analysis The RemediationSubagent then generates correlation charts: Finally, with a 95% confidence score, SRE agent executes the rollback autonomously: executing rollback and traffic shift autonomously. The agent detected the bad deployment, generated visual evidence, and automatically shifted 100% traffic to the last known working revision—all without human intervention. 
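For reference, the rollback the RemediationSubagent performs maps onto ordinary Azure CLI operations. Here is a minimal sketch of the equivalent manual steps; the resource group and revision names are placeholders, with only the app name taken from this demo:

```bash
# List revisions to find the last known-good one
# (resource group name is a placeholder for this demo environment)
az containerapp revision list \
  --name octopets-prod-api \
  --resource-group rg-octopets \
  --output table

# Shift 100% of traffic back to the known-good revision
# (replace the revision name with the one identified above)
az containerapp ingress traffic set \
  --name octopets-prod-api \
  --resource-group rg-octopets \
  --revision-weight octopets-prod-api--known-good=100
```

The agent does the same thing, but only after the confidence score clears the 70% threshold and with the correlation charts attached as evidence.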
Why This Matters

- Before: Manually check Dynatrace after incidents -> After: Automated DQL queries via MCP
- Before: Stitch together logs + deployments manually -> After: Subagents correlate data automatically
- Before: Rollback requires human decision + execution -> After: Confidence-based auto-remediation
- Before: 75+ minutes from deployment to rollback -> After: Under 5 minutes with autonomous workflow
- Before: Reactive incident response -> After: Proactive weekly health checks

Try It Yourself

1. Connect your observability tool via MCP (Dynatrace, Datadog, Prometheus—any tool with an MCP gateway)
2. Build a log analysis subagent that knows how to query your observability data
3. Build a remediation subagent that can correlate logs with deployments and execute fixes
4. Wire them together with handoffs so the subagents can delegate log analysis
5. Create a scheduled task to trigger the workflow automatically

Learn More

- Azure SRE Agent documentation
- Model Context Protocol (MCP) integration guide
- Building subagents for specialized workflows
- Scheduled tasks and automation
- SRE Agent Community
- Azure SRE Agent pricing
- SRE Agent Blogs

How SRE Agent Pulls Logs from Grafana and Creates Jira Tickets Without Native Integrations
Your tools. Your workflows. SRE Agent adapts. SRE Agent natively integrates with PagerDuty, ServiceNow, and Azure Monitor. But your team might use Jira for incident tracking. Grafana for dashboards. Loki for logs. Prometheus for metrics. These aren't natively supported. That doesn't matter. SRE Agent supports MCP, the Model Context Protocol. Any MCP-compatible server extends the agent's capabilities. Connect your Grafana instance. Connect your Jira. The agent queries logs, correlates errors, and creates tickets with root cause analysis across tools that were never designed to talk to each other. The Scenario I built a grocery store app that simulates a realistic SRE scenario: an external supplier API starts rate limiting your requests. Customers see "Unable to check inventory" errors. The on-call engineer gets paged. The goal: SRE Agent should diagnose the issue by querying Loki logs through Grafana, identify the root cause, and create a Jira ticket with findings and recommendations. The app runs on Azure Container Apps with Loki for logs and Azure Managed Grafana for visualization. 👉 Deploy it yourself: github.com/dm-chelupati/grocery-sre-demo How I Set Up SRE Agent: Step by Step Step 1: Create SRE Agent I created an SRE Agent and gave it Reader access to my subscription Step 2: Connect to Grafana and Jira via MCP Neither MCP server had a remotely hosted option, and their stdio setup didn't match what SRE Agent supports. So I hosted them myself as Azure Container Apps: Grafana MCP Server — connects to my Azure Managed Grafana instance Atlassian MCP Server — connects to my Jira Cloud instance Now I have two endpoints SRE Agent can reach: https://ca-mcp-grafana.<env>.azurecontainerapps.io/mcp https://ca-mcp-jira.<env>.azurecontainerapps.io/mcp I added both to SRE Agent's MCP configuration as remotely hosted servers. Step 3: Create Sub-Agent with Tools and Instructions I created a sub-agent specifically for incident diagnosis with these tools enabled: Grafana MCP (for querying Loki logs) Atlassian MCP (for creating Jira tickets) Instructions were simple: You are expert in diagnosing applications running on Azure services. You need to use the Grafana tools to get the logs, metrics or traces and create a summary of your findings inside Jira as a ticket. use your knowledge base file loki-queries.md to learn about app configuration with loki and Query the loki for logs in Grafana. Step 4: Invoke Sub-Agent and Watch It Work I went to the SRE Agent chat and asked: @JiraGrafanaexpert: My container app ca-api-3syj3i2fat5dm in resource group rg-groceryapp is experiencing rate limit errors from a supplier API when checking product inventory. The agent: Queried Loki via Grafana MCP: {app="grocery-api"} |= "error" Found 429 rate limit errors spiking — 55+ requests hitting supplier API limits Identified root cause: SUPPLIER_RATE_LIMIT_429 from FreshFoods Wholesale API Created a Jira ticket: One prompt. Logs queried. Root cause identified. Ticket created with remediation steps. Making It Better: The Knowledge File SRE Agent can explore and discover how your apps are wired but you can speed that up. When querying observability data sources, the agent needs to learn the schema, available labels, table structures, and query syntax. For Loki, that means understanding LogQL, knowing which labels your apps use, and what JSON fields appear in logs. SRE Agent can figure things out, but with context, it gets there faster — just like humans. 
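To give a sense of what that context covers, here are the kinds of LogQL queries the agent ends up running against Loki, expressed as plain HTTP calls. This is an illustrative sketch: the Loki endpoint is a placeholder, the app label comes from the demo's config, and the JSON field name is an assumption about the demo's log format.

```bash
# Placeholder endpoint; in the demo, Loki runs alongside the app in Azure Container Apps
LOKI_URL="http://loki.internal.example:3100"

# All error lines for the grocery API (query_range defaults to the last hour)
curl -G -s "$LOKI_URL/loki/api/v1/query_range" \
  --data-urlencode 'query={app="grocery-api"} |= "error"' \
  --data-urlencode 'limit=100'

# Rate of supplier 429 responses, parsed from JSON log lines
# ("status_code" is a guess at the demo's JSON field name)
curl -G -s "$LOKI_URL/loki/api/v1/query" \
  --data-urlencode 'query=sum(rate({app="grocery-api"} | json | status_code="429" [5m]))'
```

Capturing patterns like these in a knowledge file, as described next, saves the agent from rediscovering the label scheme on every run.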
I created a knowledge file that gives the agent a head start. With this context, the agent knows exactly which labels to query, what fields to extract from JSON logs, and which query patterns to use.

👉 See my full knowledge file

How MCP Makes This Possible

SRE Agent supports two ways to connect MCP servers:

- stdio — runs locally via command. This works for MCP servers that can be invoked via npx, node, or uvx. For example: npx -y @modelcontextprotocol/server-github.
- Remotely hosted — HTTP endpoint with streamable transport: https://mcp-server.example.com/sse or /mcp

The catch: not every MCP server fits these options out of the box. Some servers only support stdio but not the npx/node/uvx formats SRE Agent expects. Others don't offer a hosted endpoint at all.

The solution: host them yourself. Deploy the MCP server as a container with an HTTP endpoint. That's what I did with the Grafana MCP Server and the Atlassian MCP Server, deploying both as Azure Container Apps exposing /mcp endpoints.

Why This Matters

Enterprise tooling is fragmented across Azure and non-Azure ecosystems. Some teams use Azure Monitor, others use Datadog. Incident tracking might be ServiceNow in one org and Jira in another. Logs live in Loki, Splunk, Elasticsearch, and sometimes all three.

SRE Agent meets you where you are. Azure-native tools work out of the box. Everything else connects via MCP. Your observability stack stays the same. Your ticketing system stays the same. The agent becomes the orchestration layer that ties them together. One agent. Any tool. Intelligent workflows across your entire ecosystem.

Try It Yourself

1. Create an SRE Agent
2. Deploy MCP servers for your tools (Grafana, Atlassian)
3. Create a sub-agent with the MCP tools connected
4. Add a knowledge file with your app context
5. Ask it to diagnose an issue

Watch logs become tickets. Errors become action items. Context becomes intelligence.

Learn More

- Azure SRE Agent documentation
- Azure SRE Agent blogs
- Grocery SRE Demo repo
- MCP specification

Azure SRE Agent is currently in preview.

How to Connect Azure SRE Agent to Azure MCP
Prerequisites Before you begin, ensure you have: An active Azure subscription An Azure SRE Agent deployed and accessible Step 1: Add an MCP Connector Using the Portal UI To connect Azure MCP to your SRE Agent, you need to add an MCP connector through the Azure Portal. This connector tells the SRE Agent how to communicate with the Azure MCP server. Navigate to the MCP Connectors section: Open the Azure Portal and navigate to your SRE Agent resource In the left navigation menu, select Connectors under the Settings section Click + Add MCP Connector to open the configuration panel Configure the connector settings: Name: Enter a descriptive name for your connector (for example, "Azure MCP Server") Connection Type: Select stdio from the dropdown menu. This tells the agent to communicate with the MCP server through standard input/output Arguments: Enter the command arguments that will launch the Azure MCP server. Use the following format, with each argument separated by a comma: npx, -y, @azure/mcp, server, start Customizing Tool Exposure: You can customize which tools Azure MCP exposes to your agent by adding optional arguments: To expose only tools in the subscription namespace, add: --namespace, subscription To expose all tools without the namespace wrapper, add: --mode, all Step 2: Configure Managed Identity Select a managed identity from the Managed Identity dropdown menu. Azure MCP will use this identity to make downstream API calls. The capabilities of Azure MCP are bounded by the permissions granted to this identity. Add the following required environment variables: Variable Value Purpose AZURE_CLIENT_ID <client-id-of-managed-identity> Specifies which managed identity to use AZURE_TOKEN_CREDENTIALS managedidentitycredential Tells the server to only use managed identity The AZURE_CLIENT_ID must match the client ID of the managed identity selected in the dropdown. Consult the Azure MCP documentation for additional environment variables that can customize behavior. Important: You must assign the necessary Azure RBAC roles to this managed identity for it to perform the actions in its tool calls. For example, if you want the agent to list resources, grant the identity at least Reader access on the relevant subscriptions or resource groups. Step 3: Create a Subagent Using Subagent Builder Use the Subagent Builder to create a subagent that leverages your MCP connector: Give the subagent a meaningful name (for example, "Azure Resource Manager") Provide helpful instructions on when and how to use its MCP tools Click "Choose tools" and add the previously configured MCP connector to the subagent's available tools Example instructions for your subagent: You are an Azure resource management assistant. Use the Azure MCP tools to: - List Azure subscriptions the user has access to - Query resources across subscriptions - Retrieve resource details and configurations Always confirm the subscription context before performing operations. Step 4: Test Your Configuration Test your configuration by calling an MCP tool in the subagent's playground: Open the subagent playground Ask a question that triggers an MCP tool call (for example, "List my Azure subscriptions") View the trace of the interaction to verify: The tool call was made correctly The tool response contains the expected data Security Considerations Managed Identity Access Control Azure MCP can only use managed identity when used with SRE Agent in this configuration. 
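Because the agent's reach is bounded by that identity, grant it roles explicitly and narrowly. A minimal sketch with the Azure CLI, where the client ID, subscription ID, and resource group are placeholders (the client ID is the managed identity you selected in the connector):

```bash
# Grant the connector's managed identity Reader on a single resource group
az role assignment create \
  --assignee "<client-id-of-managed-identity>" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"

# Periodically audit what the identity can actually do
az role assignment list \
  --assignee "<client-id-of-managed-identity>" \
  --all \
  --output table
```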
This design has an important security implication: if users are granted access to the SRE Agent, they effectively inherit the permissions of the agent's managed identity. This can accidentally grant over-privileged access if:

- The SRE Agent's managed identity has broad permissions across Azure resources
- Users who should not have access to those resources are granted access to the SRE Agent

Best Practices:

- Follow the principle of least privilege when assigning roles to the managed identity
- Scope permissions to specific resource groups rather than entire subscriptions when possible
- Regularly audit the managed identity's role assignments
- Consider creating separate SRE Agents with different managed identities for different user groups or use cases

Summary

Connecting Azure SRE Agent to Azure MCP enables powerful Azure-native capabilities for your AI agent. By following the steps above, you can configure your agent to interact with Azure resources securely using managed identity authentication. Remember to carefully consider the security implications of the managed identity's permissions and implement appropriate access controls.

Additional Resources

- Azure MCP Documentation
- Azure SRE Agent Documentation
- Managed Identities for Azure Resources

Connect Azure SRE Agent to ServiceNow: End-to-End Incident Response
🎯 What You'll Achieve In this tutorial, you'll: Connect Azure SRE Agent to ServiceNow as your incident management platform Create a test incident in ServiceNow Watch the AI agent automatically pick up, investigate, and resolve the incident See the agent write triage findings and resolution notes back to ServiceNow Time to complete: ~10 minutes 🎬 The End Result Before we dive in, here's what the end result looks like: ServiceNow Incident - Resolved by Azure SRE Agent The Azure SRE Agent: ✅ Detected the incident from ServiceNow ✅ Acknowledged and began triage automatically ✅ Investigated AKS cluster memory utilization ✅ Documented findings in work notes ✅ Resolved the incident with detailed root cause analysis 📋 Prerequisites A ServiceNow instance (Developer, PDI, or Enterprise) Administrator access to ServiceNow An Azure SRE Agent deployed in your Azure subscription 💡 Don't have a ServiceNow instance? Get a free Personal Developer Instance (PDI) at developer.servicenow.com 🔧 Step 1: Gather Your ServiceNow Credentials To connect Azure SRE Agent to ServiceNow, you need three pieces of information: Component Where to Find It ServiceNow Endpoint Browser address bar when logged into ServiceNow (format: https://your-instance.service-now.com ) Username Click your profile avatar → Profile → User ID Password Your ServiceNow login password Finding Your Instance URL Your ServiceNow instance URL is visible in your browser's address bar when logged in: https://{your-instance-name}.service-now.com Finding Your Username Click your profile avatar in the top-right corner of ServiceNow Click Profile Your User ID is your username ⚙️ Step 2: Connect SRE Agent to ServiceNow Navigate to Your SRE Agent Open the Azure Portal Search for "Azure SRE Agent" in the search bar Click on Azure SRE Agent (Preview) in the results Select your agent from the list Configure the Incident Platform In the left navigation, expand Settings Click Incident platform Click the Incident platform dropdown Select ServiceNow Here's what the ServiceNow configuration form looks like: Enter Your ServiceNow Credentials Field Value ServiceNow endpoint Your ServiceNow instance URL (from Step 1) Username Your ServiceNow username (from Step 1) Password Your ServiceNow password Quickstart response plan ✓ Enable this for automatic investigation Save and Verify Click the Save button Wait for validation to complete Look for: "ServiceNow is connected." with a green checkmark 🚨 Step 3: Create a Test Incident in ServiceNow Now let's test the integration by creating an incident in ServiceNow. Navigate to Create Incident In ServiceNow, click All in the left navigation Search for "Incident" Click Incident → Create New Fill in the Incident Details Field Value Caller Select any user (e.g., System Administrator) Short description [SRE Agent Test] AKS Cluster memory pressure detected in production environment Impact 2 - Medium Submit the Incident Click Submit to create the incident. Note the incident number that's assigned. 🤖 Step 4: Watch SRE Agent Investigate Check the SRE Agent Portal Return to the Azure Portal Open your SRE Agent Click Activities → Incidents Within seconds, you should see your ServiceNow incident appear! 
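As a side note, if you plan to rerun this test regularly, you can script the incident creation instead of clicking through the form. A rough sketch against ServiceNow's Table API; the instance URL, credentials, and field values are placeholders:

```bash
# Create a test incident via the ServiceNow Table API (placeholders throughout)
curl -s -u "admin:YOUR_PASSWORD" \
  -X POST "https://your-instance.service-now.com/api/now/table/incident" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d '{
        "short_description": "[SRE Agent Test] AKS Cluster memory pressure detected in production environment",
        "impact": "2",
        "urgency": "2"
      }'
```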
Observe the Autonomous Investigation

Click on the incident to see the SRE Agent's investigation in action. The agent automatically:

🔔 Acknowledged the incident
📋 Created a triage plan with clear steps
🔍 Identified AKS clusters in your subscription
📊 Validated memory utilization metrics
✅ Resolved the incident with findings

📝 Step 5: Review Resolution in ServiceNow

Check the ServiceNow Incident

Return to ServiceNow and open your incident. You'll see:

- State: Changed to Resolved
- Activity Stream: Multiple work notes from the agent
- Resolution notes: Detailed findings

Resolution Notes

The agent writes comprehensive resolution notes including:

- Timestamp of resolution
- Root cause analysis
- Validation steps performed
- Fix applied (if any)

🚀 Next Steps

- Create custom response plans to customize how the agent responds to different incident types
- Configure alert routing to route specific Azure Monitor alerts to the agent
- Explore the Azure SRE Agent documentation for more features

Share your experiences, learnings, and questions with other early adopters. Start a discussion in our Community Hub.

Proactive Cloud Ops with SRE Agent: Scheduled Checks for Cloud Optimization
The Cloud Optimization Challenge Your cloud environment is always changing: New features ship weekly Traffic patterns shift seasonally Costs creep up quietly Security best practices evolve Teams spin up resources and forget them It's Monday morning. You open the Azure portal. Everything looks... fine. But "fine" isn't great. That VM has been at 8% CPU for weeks. A Key Vault secret expires in 12 days. Nothing's broken. But security is drifting, costs are creeping, and capacity gaps are growing silently. The question isn't "is something broken?" it's "could this be better?" Four Pillars of Cloud Optimization Pillar What Teams Want The Challenge Security Stay compliant, reduce risk Config drift, legacy settings, expiring creds Cost Spend efficiently, justify budget Hard to spot waste across 100s of resources Performance Meet SLOs, handle growth Know when to scale before demand hits Availability Maximize uptime, build resilience Hidden dependencies, single points of failure Most teams check these sometimes. SRE Agent checks them continuously. Enter SRE Agent + Scheduled tasks SRE Agent can pull data from Azure Monitor, resource configurations, metrics, logs, traces, errors, cost data and analyze it on a schedule. If you use tools outside Azure (Datadog, PagerDuty, Splunk), you can connect those via MCP servers so the agent sees your full observability stack. My setup uses Azure-native sources. Here's how I wired it up. How I Set It Up: Step by Step Step 1: Create SRE Agent with Subscription Access I created an SRE Agent without attaching it to any specific resource group. Instead, I gave it Reader access at the subscription level. This lets the agent scan across all my resource groups for optimization opportunities. No resource group configuration needed. The agent builds a knowledge graph of everything VMs, storage accounts, Key Vaults, NSGs, web apps across the subscription. Step 2: Create and Upload My Organization Practices I created an org-practices.md file that defines what "good" looks like for my team: I uploaded this to SRE Agent's knowledge base. Now the agent knows our bar, not just Azure defaults. 👉 See my full org-practices.md Source repos for this demo: security-demoapp - App with intentional security misconfigurations costoptimizationapp - App with cost optimization opportunities Step 3: Connect to Teams Channel I connected SRE Agent to my team's Teams channel so findings land where we already work. Critical findings get immediate notifications. Warnings go into a daily digest. No more logging into separate dashboards. The insights come to us. Step 4: Connect Resource Groups to GitHub Repos Add the two resource groups to the SRE Agent and link the apps to their corresponding GitHub repos: Resource Group GitHub Repository rg-security-opt-demo security-demoapp rg-cost-opt-sreademo costoptimizationapp This enables the agent to create GitHub issues for findings linking violations directly to the repo responsible for that infrastructure. Step 5: Test with Prompts Before setting up automation, I tested the agent with manual prompts to make sure it was finding the right issues. The agent ran the checks, compared against my org-practices.md, and identified the issues. Security Check: Scan resource group "rg-security-opt-demo" for any violations of our security practices defined in org-practices.md in your knowledge base. list violations with severity and remediation steps. 
### Step 6: Check Output via GitHub Issues

After running the prompts, I checked GitHub. The agent had created issues. Each issue has the root cause, impact, and fix ready for the team to action, or for Coding Agent to pick up and create a PR.

👉 See the actual issues created:

- Security findings issue
- Cost findings issue

### Step 7: Set Up Scheduled Triggers

This is where it gets powerful. I configured recurring schedules:

**Weekly Security Check (Wednesdays 8 AM):**

> Create a scheduled trigger that performs security practices checks against the org practices in knowledge base org-practices.md, creates a GitHub issue and sends a Teams message on a weekly basis on Wednesdays at 8 am UTC

**Weekly Cost Review (Mondays 8 AM):**

> Create a scheduled trigger that performs cost practices checks against the org practices in knowledge base org-practices.md, creates a GitHub issue and sends a Teams message on a weekly basis on Mondays at 8 am UTC

Now optimization runs automatically. Every week, fresh findings land in GitHub Issues and Teams.

## Why Context Makes the SRE Agent Powerful

Think about hiring a new SRE. They're excellent at their craft: they know Kubernetes, networking, Azure inside out. But on day one, they can't solve problems in your environment yet. Why? They don't have context:

- What are your SLOs? What's "acceptable" latency for your app?
- When do you rotate secrets? Monthly? Quarterly? Before each release?
- Which resources are production-critical vs. dev experiments?
- What's your tagging policy? Who owns what?
- How do you deploy? GitOps? Pipelines? Manual approvals?

A great engineer becomes your great engineer once they learn how your team operates.

SRE Agent works the same way. Out of the box, it knows Azure resource types, networking, best practices. But it doesn't know your bar. Is 20% CPU utilization acceptable or wasteful? Should secrets expire in 30 days or 90? Are public endpoints ever okay, or never?

The more context you give the agent (your SLOs, your runbooks, your policies), the more it reasons like a team member who understands your environment, not just Azure in general. That's why Step 2 matters so much. When I uploaded our standards, the agent stopped checking generic Azure best practices and started checking our best practices.

**Bring your existing knowledge:** You don't have to start from scratch. If your team's documentation already lives in Atlassian Confluence, SharePoint, or other tools, you can connect those via MCP servers. The agent pulls context from where your team already works, no need to duplicate content.

## Why This Matters

Before this setup, optimization was a quarterly thing.
Now it happens automatically:

| Before | After |
| --- | --- |
| Check security when audit requests it | Daily automated posture check |
| Find waste when finance complains | Weekly savings report in Teams |
| Discover capacity issues during incidents | Scheduled headroom analysis |
| Expire credentials and debug at 2 AM | 30-day warning with exact secret names |

Optimization isn't a project anymore. It's a practice.

## Try It Yourself

1. Create an SRE Agent with access to your subscription
2. Upload your team's standards (security policies, cost thresholds, tagging rules)
3. Set up a scheduled trigger, start with a daily security check
4. Watch the first report land in Teams

See what you've been missing while everything looked "fine."

## Learn More

- Azure SRE Agent documentation
- Azure SRE Agent blogs
- Azure SRE Agent community
- Azure SRE Agent home page
- Azure SRE Agent pricing

Azure SRE Agent is currently in preview. Get Started

# Context Engineering Lessons from Building Azure SRE Agent
We started with 100+ tools and 50+ specialized agents. We ended with 5 core tools and a handful of generalists. The agent got more reliable, not less.

Every context decision is a tradeoff: latency vs. autonomy, evidence-building vs. speed, oversight, and the cost of being wrong. This post is a practical map of those knobs and how we adjusted them for SRE Agent.

# Find the Alerts You Didn't Know You Were Missing with Azure SRE Agent
I had 6 alert rules. CPU. Memory. Pod restarts. Container errors. OOMKilled. Job failures. I thought I was covered.

Then my app went down. I kept refreshing the Azure portal, waiting for an alert. Nothing.

That's when it hit me: my alerts were working perfectly. They just weren't designed for this failure mode. Sound familiar?

## The Problem Every Developer Knows

If you're a developer or DevOps engineer, you've been here: a customer reports an issue, you scramble to check your monitoring, and then you realize you don't have the right alerts set up. By the time you find out, it's already too late.

You set up what seems like reasonable alerting and assume you're covered. But real-world failures are sneaky. They slip through the cracks of your carefully planned thresholds.

## My Setup: AKS with Redis

I love to vibe code apps using GitHub Copilot Agent mode with Claude Opus 4.5. It's fast, it understands context, and it lets me focus on building rather than boilerplate. For this project, I built a simple journal entry app:

- AKS cluster hosting the web API
- Azure Cache for Redis storing journal data
- Azure Monitor alerts for CPU, memory, pod restarts, container errors, OOMKilled, and job failures

Seemed solid. What could go wrong?

## The Scenario: Redis Password Rotation

Here's something that happens constantly in enterprise environments: the security team rotates passwords. It's best practice. It's in the compliance checklist. And it breaks things when apps don't pick up the new credentials.

I simulated exactly this. The pods came back up, but they couldn't connect to Redis (as expected). The readiness probes started failing. The LoadBalancer had no healthy backends. The endpoint timed out.

And not a single alert fired.

## Using SRE Agent to Find the Alert Gaps

Instead of manually auditing every alert rule and trying to figure out what I missed, I turned to Azure SRE Agent. I asked it a simple question:

> "My endpoint is timing out. What alerts do I have, and why didn't any of them fire?"

Within minutes, it had diagnosed the problem. Here's what it found:

| My Existing Alerts | Why They Didn't Fire |
| --- | --- |
| High CPU/Memory | No resource pressure, just auth failures |
| Pod Restarts | Pods weren't restarting, just unhealthy |
| Container Errors | App logs weren't being written |
| OOMKilled | No memory issues |
| Job Failures | No K8s jobs involved |

The gaps SRE Agent identified:

- ❌ No synthetic URL availability test
- ❌ No readiness/liveness probe failure alerts
- ❌ No "pods not ready" alerts scoped to my namespace
- ❌ No Redis connection error detection
- ❌ No ingress 5xx/timeout spike alerts
- ❌ No per-pod resource alerts (only node-level)

SRE Agent didn't just tell me what was wrong, it created a GitHub issue with:

- KQL queries to detect each failure type
- Bicep code snippets for new alert rules
- Remediation suggestions for the app code
- Exact file paths in my repo to update

Check it out: GitHub Issue

## How I Built It: Step by Step

Let me walk you through exactly how I set this up inside SRE Agent.

### Step 1: Create an SRE Agent

I created a new SRE Agent in the Azure portal. Since this workflow analyzes alerts across my subscription (not just one resource group), I didn't configure any specific resource groups. Instead, I gave the agent's managed identity Reader permissions on my entire subscription. This lets it discover resources, list alert rules, and query Log Analytics across all my resource groups.
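If you prefer to script the access grant rather than click through the portal, a minimal sketch looks like this. The subscription ID and the agent identity's object ID are placeholders you'd look up in your own environment, and the listing call at the end is just a quick way to see the kind of alert-rule inventory that Reader access lets the agent enumerate.

```bash
# Placeholders: <subscription-id> and <agent-principal-id> (the object ID of the
# SRE Agent's managed identity) must come from your own environment.
az role assignment create \
  --assignee "<agent-principal-id>" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>"

# With Reader in place, the agent can enumerate existing alert rules; the same
# call from your own shell shows what it will see for the demo resource group.
az monitor metrics alert list \
  --resource-group rg-aks-journal \
  --output table
```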
### Step 2: Connect GitHub to SRE Agent via MCP

I added a GitHub MCP server to give the agent access to my source code repository. MCP (Model Context Protocol) lets you bring any API into the agent. If your tool has an API, you can connect it. I use GitHub for both source code and tracking dev tickets, but you can connect to wherever your code lives (GitLab, Azure DevOps) or your ticketing system (Jira, ServiceNow, PagerDuty).

### Step 3: Create a Subagent inside SRE Agent for Managing Azure Monitor Alerts

I created a focused subagent with a specific job and only the tools it needs:

**Azure Monitor Alerts Expert**

Prompt:

> "You are expert in managing operations related to azure monitor alerts on azure resources including discovering alert rules configured on azure resources, creating new alert rules (with user approval and authorization only), processing the alerts fired on azure resources and identifying gaps in the alert rules. You can get the resource details from azure monitor alert if triggered via alert. If not, you need to ask user for the specific resource to perform analysis on. You can use az cli tool to diagnose logs, check the app health metrics. You must use the app code and infra code (bicep files) files you have access to in the github repo <insert your repo> to further understand the possible diagnoses and suggest remediations. Once analysis is done, you must create a github issue with details of analysis and suggested remediation to the source code files in the same repo."

Tools enabled:

- az cli – List resources, alert rules, action groups
- Log Analytics workspace querying – Run KQL queries for diagnostics
- GitHub MCP – Search repositories, read file contents, create issues

### Step 4: Ask the Subagent About Alert Gaps

I gave the agent context and asked a simple question:

> "@AzureAlertExpert: My API endpoint http://132.196.167.102/api/journals/john is timing out. What alerts do I have configured in rg-aks-journal, and why didn't any of them fire?"

The agent did the analysis autonomously and summarized its findings, with suggestions to add new alert rules, in a GitHub issue. Here's the agentic workflow for performing Azure Monitor alert operations.

## Why This Matters

- **Faster response times.** Issues get diagnosed in minutes, not hours of manual investigation.
- **Consistent analysis.** No more "I thought we had an alert for that" moments. The agent systematically checks what's covered and what's not.
- **Proactive coverage.** You don't have to wait for an incident to find gaps. Ask the agent to review your alerts before something breaks.

## The Bottom Line

Your alerts have gaps. You just don't know it until something slips through.

I had 6 alert rules and still missed a basic failure. My pods weren't restarting, they were just unhealthy. My CPU wasn't spiking, the app was just returning errors. None of my alerts were designed for this.

You don't need to audit every alert rule manually. Give SRE Agent your environment, describe the failure, and let it tell you what's missing.

Stop discovering alert gaps from customer complaints. Start finding them before they matter.

## A Few Tips

- Give the agent Reader access at subscription level so it can discover all resources
- Use a focused subagent prompt, don't try to do everything in one agent
- Test your MCP connections before running workflows

## What Alert Gaps Have Burned You?

What's the alert you wish you had set up before an incident? Credential rotation? Certificate expiry? DNS failures? Let us know in the comments.