containers
Beyond the Desktop: The Future of Development with Microsoft Dev Box and GitHub Codespaces
The modern developer platform has already moved past the desktop. We're no longer defined by what's installed on our laptops; instead we look at what tooling we can use to move from idea to production. An organisation's developer platform strategy is no longer a nice-to-have: it sets the ceiling for what's possible, and an organisation can't iterate its way to developer nirvana if the foundation itself is brittle. A great developer platform shrinks TTFC (time to first commit), accelerates release velocity, and, maybe most importantly, helps alleviate the everyday frictions that lead to developer burnout.

Very few platforms deliver everything an organization needs from a developer platform in one product. Modern development spans multiple dimensions: local tooling, cloud infrastructure, compliance, security, cross-platform builds, collaboration, and rapid onboarding. Organizations are then left to either compromise on one or more of these areas or force developers into rigid environments that slow productivity and innovation.

This is where Microsoft Dev Box and GitHub Codespaces come into play. On their own, each addresses critical parts of the modern developer platform.

Microsoft Dev Box provides a full, managed cloud workstation. Dev Box gives developers a consistent, high-performance environment while letting central IT apply strict governance and control. Internally at Microsoft, we estimate that usage of Dev Box by our development teams delivers savings of 156 hours per year per developer purely on local environment setup and upkeep. We have also seen significant gains in other key SPACE metrics, reducing context-switching friction and improving build/test cycles. Although the benefits of Dev Box are clear in the results demonstrated by our customers, it is not without its challenges. The biggest challenge often faced by Dev Box customers is its lack of native Linux support. At the time of writing, and for the foreseeable future, Dev Box does not support native Linux developer workstations. While WSL2 provides partial parity, I know from my own engineering projects it still does not deliver the full experience.

This is where GitHub Codespaces comes into this story. GitHub Codespaces delivers instant, Linux-native environments spun up directly from your repository. It's lightweight, reproducible, and ephemeral: ideal for rapid iteration, PR testing, and cross-platform development where you need Linux parity or containerized workflows. Unlike Dev Box, Codespaces can run fully in Linux, giving developers access to native tools, scripts, and runtimes without workarounds. It also removes much of the friction around onboarding: a new developer can open a repository and be coding in minutes, with the exact environment defined by the project's devcontainer.json. That said, Codespaces isn't a complete replacement for a full workstation. While it's perfect for isolated project work or ephemeral testing, it doesn't provide the persistent, policy-controlled environment that enterprise teams often require for heavier workloads or complex toolchains.

Used together, they fill the gaps that neither can cover alone: Dev Box gives the enterprise-grade foundation, while Codespaces provides the agile, cross-platform sandbox. For organizations, this pairing sets a higher ceiling for developer productivity, delivering a truly hybrid, agile and well-governed developer platform.
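Since the Codespaces onboarding story above hinges on the repository's devcontainer.json, here is a minimal sketch of what such a file could look like for the Python backend used later in this walkthrough. The image, feature, and extension choices are illustrative, not taken from the author's project:

```jsonc
{
  "name": "iot-backend",
  // Illustrative base image; pick whatever matches your stack
  "image": "mcr.microsoft.com/devcontainers/python:3.12",
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  },
  "forwardPorts": [8000],
  "postCreateCommand": "pip install -r requirements.txt",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  }
}
```

Anyone opening the repository in Codespaces then gets the same runtime, forwarded ports, and extensions without any manual setup.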
Better Together: Dev Box and GitHub Codespaces in action

Together, Microsoft Dev Box and GitHub Codespaces deliver a hybrid developer platform that combines consistency, speed, and flexibility. Teams can spin up full, policy-compliant Dev Box workstations preloaded with enterprise tooling, IDEs, and local testing infrastructure, while Codespaces provides ephemeral, Linux-native environments tailored to each project. One of my favourite use cases is having local testing setups, like a Docker Swarm cluster, ready to go in either Dev Box or Codespaces. New developers can jump in and start running services or testing microservices immediately, without spending hours on environment setup. Anecdotally, my time to first commit and time to delivering "impact" has been significantly faster on projects where one or both technologies provide local development services out of the box.

Switching between Dev Boxes and Codespaces is seamless: every environment keeps its own libraries, extensions, and settings intact, so developers can jump between projects without reconfiguring or breaking dependencies. The result is a turnkey, ready-to-code experience that maximizes productivity, reduces friction, and lets teams focus entirely on building, testing, and shipping software.

To showcase this value, I thought I would walk through an example scenario that simulates a typical modern developer workflow. Let's look at a day in the life of a developer on this hybrid platform building an IoT project using Python and React:

- Spin up a ready-to-go workstation (Dev Box) for Windows development and heavy builds.
- Launch a Linux-native Codespace for cross-platform services, ephemeral testing, and PR work.
- Run "local" testing like a Docker Swarm cluster, database, and message queue, ready to go out of the box.
- Switch seamlessly between environments without losing project-specific configurations, libraries, or extensions.

9:00 AM – Morning Kickoff on Dev Box

I start my day on my Microsoft Dev Box, which gives me a fully configured Windows environment with VS Code, design tools, and Azure integrations. I select my team's project, and the environment is pre-configured for me through the Dev Box catalogue. Fortunately for me, it's already provisioned; I could always self-service another one using the "New Dev Box" button if I wanted to. I'll connect through the browser, but I could use the desktop app too.

My tasks are:

- Prototype a new dashboard widget for monitoring IoT device temperature.
- Use GUI-based tools to tweak the UI and preview changes live.
- Review my Visio architecture.
- Join my morning stand-up.
- Write documentation notes and plan API interactions for the backend.

In a flash, I have access to my modern work tooling like Teams, this project's files are already preloaded, and all my peripherals work without additional setup. The only downside was that I did seem to be the only person on my stand-up this morning?

Why Dev Box first:

- GUI-heavy tasks are fast and responsive. Dev Box's environment allows me to use a full desktop.
- Great for early-stage design, planning, and visual work.
- Enterprise apps are ready for me to use out of the box (P.S. It also supports my multi-monitor setup).

I use my Dev Box to make a very complicated change to my IoT dashboard: changing the title from "IoT Dashboard" to "Owain's IoT Dashboard". I preview this change live in a browser. (Time for a coffee after this hard work.) The rest of the dashboard isn't loading as my backend isn't running... yet.
10:30 AM – Switching to Linux Codespaces

Once the UI is ready, I push the code to GitHub and spin up a Linux-native GitHub Codespace for backend development.

Tasks:

- Implement FastAPI endpoints to support the new IoT feature (a minimal sketch of such endpoints appears at the end of this post).
- Run the service on my Codespace and debug any errors.

Why Codespaces now:

- Linux-native tools ensure compatibility with the production server.
- Docker and containerized testing run natively, avoiding WSL translation overhead.
- The environment is fully reproducible across any device I log in from.

12:30 PM – Midday Testing & Sync

I toggle between Dev Box and Codespaces to test and validate the integration. I do this in my Dev Box Edge browser viewing my Codespace (I use my Codespace in a browser throughout this demo to highlight the difference in environments; in reality I would leverage the VS Code "Remote Explorer" extension and its GitHub Codespaces integration to use my Codespace from within my own desktop VS Code, but that is personal preference), and I use the same browser to view my frontend preview. I update the environment variable for my frontend, which is running locally on my Dev Box, and point it at the port running my API on my Codespace. In this case it was a WebSocket connection and HTTPS calls to port 8000. I can make this public by changing the port visibility in my Codespace.

https://fluffy-invention-5x5wp656g4xcp6x9-8000.app.github.dev/api/devices
wss://fluffy-invention-5x5wp656g4xcp6x9-8000.app.github.dev/ws

This allows me to:

- Preview the frontend widget on Dev Box, connecting to the backend running in Codespaces.
- Make small frontend adjustments in Dev Box while monitoring backend logs in Codespaces.
- Commit changes to GitHub, keeping both environments in sync and leveraging my CI/CD for deployment to the next environment.

We can see the Dev Box running the local frontend and the Codespace running the API connected to each other, making requests and displaying the data in the frontend!

Hybrid advantage:

- Dev Box handles GUI previews comfortably and allows me to live-test frontend changes.
- Codespaces handles production-aligned backend testing and Linux-native tools.
- Dev Box allows me to view all of my files on one screen, with potentially multiple Codespaces running in the browser or VS Code Desktop.

Due to all of those platform efficiencies I have completed my day's goals within an hour or two, and now I can spend the rest of my day learning about how to enable my developers to inner-source using GitHub Copilot and MCP (shameless plug).

The bottom line

There are some additional considerations when architecting a developer platform for an enterprise, such as private networking and security, which are not covered in this post, but these are implementation details to deliver the described developer experience. Architecting such a platform is a valuable investment to deliver the developer platform foundations we discussed at the top of the article.

While in this quickly built demo I was working in a mono repository, in real engineering teams it is likely (I hope) that an application is built from many different repositories. The great thing about Dev Box and Codespaces is that this wouldn't slow down the rapid development I can achieve when using both. My Dev Box would be specific to the project or development team, preloaded with all the tools I need and potentially some repos too! When I need to, I can quickly switch over to Codespaces, work in a clean isolated environment, and push my changes.
In both cases any changes I want to deliver locally are pushed into GitHub (or ADO), merged, and my CI/CD ensures that my next step, potentially a staging environment or, who knows, perhaps *whispering* straight into production, is taken care of. Once I'm finished I delete my Codespace, and potentially my Dev Box if I am done with the project, knowing I can self-service either one of these anytime and be up and running again!

Now, is there overlap in terms of what can be developed in a Codespace vs what can be developed in a Dev Box? Of course. But as organisations prioritise developer experience to ensure release velocity while maintaining organisational standards and governance, providing developers a Windows-native and a Linux-native service, both of which are primarily charged on the consumption of the compute*, is a no-brainer. There are also gaps that neither fills at the moment: for example, Microsoft Dev Box only provides Windows compute, while GitHub Codespaces only supports VS Code as your chosen IDE. It's not a question of which service do I choose for my developers; these two services are better together!

*Changes have been announced to Dev Box pricing. A W365 license is already required today and dev boxes will continue to be managed through Azure. For more information please see: Microsoft Dev Box capabilities are coming to Windows 365 - Microsoft Dev Box | Microsoft Learn
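To round out the walkthrough, the backend endpoints referenced above (the /api/devices call and the /ws WebSocket on port 8000) could be sketched in FastAPI roughly as follows. The device payload is invented purely for illustration and is not the author's actual code:

```python
# Minimal illustrative FastAPI backend for the IoT dashboard scenario above.
# Run inside the Codespace with: uvicorn main:app --host 0.0.0.0 --port 8000
import asyncio

from fastapi import FastAPI, WebSocket

app = FastAPI()

# Made-up sample data standing in for real device readings
DEVICES = [
    {"id": "sensor-1", "temperature": 21.4},
    {"id": "sensor-2", "temperature": 22.9},
]

@app.get("/api/devices")
async def list_devices():
    # Served over the Codespace's forwarded port 8000
    return DEVICES

@app.websocket("/ws")
async def stream_readings(websocket: WebSocket):
    # Push the current readings every few seconds so the dashboard updates live
    await websocket.accept()
    while True:
        await websocket.send_json(DEVICES)
        await asyncio.sleep(5)
```

Making port 8000 public in the Codespace then yields URLs of the same shape as the app.github.dev links shown earlier.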
Exploring Traffic Manager Integration for External DNS

When you deploy externally accessible applications into Kubernetes, there is usually a requirement to create some DNS records pointing to these applications so that your users can resolve them. Rather than manually creating these DNS records, there are tools that will do this work for you, one of which is External DNS. External DNS can watch your Kubernetes resource configuration for specific annotations and then use these to create DNS records in your DNS zone. It has integrations with many DNS providers, including Azure DNS. This solution works well and is in use by many customers using AKS and Azure DNS.

Where we hit a limitation with External DNS in Azure is in scenarios where we need to distribute traffic across multiple clusters for load balancing and global distribution. There are a few ways to achieve this global distribution in Azure; one way is to use Azure Traffic Manager. Unlike something like Azure Front Door, Azure Traffic Manager is a DNS-based global load balancer. Traffic is directed to your different AKS clusters based on DNS resolution using Traffic Manager. When a user queries your Traffic Manager CNAME, they hit the Traffic Manager DNS servers, which then return a DNS record for a specific cluster based on your load balancing configuration. Traffic Manager can then introduce advanced load balancing scenarios such as:

- Geographic routing - direct users to the nearest endpoint based on their location
- Weighted distribution - split traffic percentages (e.g., 80% to one region, 20% to another)
- Priority-based failover - automatic disaster recovery with primary/backup regions
- Performance-based routing - direct users to the endpoint with lowest latency

So, given that Azure Traffic Manager is essentially a DNS service, it would be good if we could manage it using External DNS. We already use External DNS to create DNS records for each of our individual clusters, so why not use it to also create the Traffic Manager DNS configuration to load balance across them? This would also provide the added benefit of allowing us to change our load balancing strategy or configuration by making changes to our External DNS annotations. Unfortunately, an External DNS integration for Traffic Manager doesn't currently exist. In this post, I'll walk through a proof of concept for a provider I built to explore whether this integration is viable, and share what I learned along the way.

External DNS Webhook Provider

External DNS has two types of integrations. The first, and most common for "official" providers, are "in-tree" providers: integrations that have been created by External DNS contributors and sit within the central External DNS repository. This includes the Azure DNS provider. The second type is the webhook provider, which allows external contributors to easily create their own providers without the need to submit them to the core External DNS repo and go through that release process. We are going to use the webhook provider.

Note: External DNS has begun the process of phasing out "in-tree" providers and replacing them with webhook ones. No new "in-tree" providers will be accepted.
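For context, this is roughly what the standard External DNS integration looks like before Traffic Manager enters the picture: a Service annotated with a hostname, which External DNS turns into a record in the configured Azure DNS zone. The hostname and app name here are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder
  annotations:
    # External DNS watches this annotation and creates the matching
    # record in the configured Azure DNS zone.
    external-dns.alpha.kubernetes.io/hostname: my-app.example.com
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: my-app
```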
By using the webhook provider mechanism, I was able to create a proof-of-concept Azure Traffic Manager provider that does the following:

- Watches Kubernetes Services for Traffic Manager annotations
- Automatically creates and manages Traffic Manager profiles and endpoints
- Syncs state between your Kubernetes clusters and Azure
- Enables annotation-driven configuration - no manual Traffic Manager management needed
- Handles duplication, so that when your second cluster attempts to create a Traffic Manager record it adds an endpoint to the existing instance
- Works alongside the standard External DNS Azure provider for complete DNS automation

Here's what the architecture looks like:

Example Configuration: Multi-Region Deployment

Let me walk you through a practical example using this provider to see how it works. We will deploy an application across two Azure regions with weighted traffic distribution and automatic failover. This example assumes you have built and deployed the PoC provider as discussed below.

Step 1: Deploy Your Application

You deploy your application to both East US and West US AKS clusters. Each deployment is a standard Kubernetes Service with LoadBalancer type. We then apply annotations that tell our External DNS provider how to create and configure the Traffic Manager resource. These annotations are defined as part of our plugin.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-east
  namespace: production
  annotations:
    # Standard External DNS annotation
    external-dns.alpha.kubernetes.io/hostname: my-app-east.example.com
    # Enable Traffic Manager integration
    external-dns.alpha.kubernetes.io/webhook-traffic-manager-enabled: "true"
    external-dns.alpha.kubernetes.io/webhook-traffic-manager-resource-group: "my-tm-rg"
    external-dns.alpha.kubernetes.io/webhook-traffic-manager-profile-name: "my-app-global"
    # Weighted routing configuration
    external-dns.alpha.kubernetes.io/webhook-traffic-manager-weight: "70"
    external-dns.alpha.kubernetes.io/webhook-traffic-manager-endpoint-name: "east-us"
    external-dns.alpha.kubernetes.io/webhook-traffic-manager-endpoint-location: "eastus"
    # Health check configuration
    external-dns.alpha.kubernetes.io/webhook-traffic-manager-monitor-path: "/health"
    external-dns.alpha.kubernetes.io/webhook-traffic-manager-monitor-protocol: "HTTPS"
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8080
  selector:
    app: my-app
```

The West US deployment has identical annotations, except:

- Weight: "30" (sending 30% of traffic here initially)
- Endpoint name: "west-us"
- Location: "westus"

Step 2: Automatic Resource Creation

When you deploy these Services, here's what happens automatically:

1. Azure Load Balancer provisions and assigns public IPs to each Service
2. External DNS (Azure provider) creates A records:
   - my-app-east.example.com → East US LB IP
   - my-app-west.example.com → West US LB IP
3. External DNS (webhook provider) creates a Traffic Manager profile named my-app-global
4. The webhook adds endpoints to the profile:
   - East endpoint (weight: 70, target: my-app-east.example.com)
   - West endpoint (weight: 30, target: my-app-west.example.com)
5. External DNS (Azure provider) creates a CNAME: my-app.example.com → Traffic Manager FQDN

Now when users access my-app.example.com, Traffic Manager routes 70% of traffic to East US and 30% to West US, with automatic health checking on both endpoints.

Step 3: Gradual Traffic Migration

Want to shift more traffic to West US?
Just update the annotations:

```bash
kubectl annotate service my-app-east \
  external-dns.alpha.kubernetes.io/webhook-traffic-manager-weight="50" \
  --overwrite

kubectl annotate service my-app-west \
  external-dns.alpha.kubernetes.io/webhook-traffic-manager-weight="50" \
  --overwrite
```

Within minutes, traffic distribution automatically adjusts to 50/50. This enables:

- Blue-green deployments - test new versions with small traffic percentages
- Canary releases - gradually increase traffic to new deployments
- Geographic optimisation - adjust weights based on user distribution

Step 4: Automatic Failover

If the East US cluster becomes unhealthy, Traffic Manager's health checks detect this and automatically fail over 100% of traffic to West US, no manual intervention required.

How It Works

The webhook implements External DNS's webhook provider protocol:

1. Negotiate Capabilities - External DNS queries the webhook to determine supported features and versions.
2. Adjust Endpoints - External DNS sends all discovered endpoints to the webhook. The webhook:
   - Filters for Services with Traffic Manager annotations
   - Validates configuration
   - Enriches endpoints with metadata
   - Returns only Traffic Manager-enabled endpoints
3. Record State - External DNS queries the webhook for current Traffic Manager state. The webhook:
   - Syncs profiles from Azure
   - Converts to External DNS endpoint format
   - Returns CNAME records pointing to Traffic Manager FQDNs
4. Apply Changes - External DNS sends CREATE/UPDATE/DELETE operations. The webhook:
   - Creates Traffic Manager profiles as needed
   - Adds/updates/removes endpoints
   - Configures health monitoring
   - Updates the in-memory state cache

The webhook uses the Azure SDK for Go to interact with the Traffic Manager API and maintains an in-memory cache of profile state to optimise performance and reduce API calls.

Proof of Concept

🛑 Important: This is a Proof of Concept. This project is provided as example code to demonstrate the integration pattern between External DNS and Traffic Manager. It is not a supported product and comes with no SLAs, warranties, or commitments. The code is published to help you understand how to build this type of integration. If you decide to implement something similar for your production environment, you should treat this as inspiration and build your own solution that you can properly test, secure, and maintain. Think of this as a blueprint, not a finished product.

With that caveat out of the way, if you want to experiment with this approach, the PoC is available on GitHub: github.com/sam-cogan/external-dns-traffic-manager. The readme file contains detailed instructions on how to deploy the PoC into single and multi-cluster environments, along with demo applications to try it out.

Use Cases

This integration unlocks several powerful scenarios:

- Multi-Region High Availability - Deploy your application across multiple Azure regions with automatic DNS-based load balancing and health-based failover. No additional load balancers or gateways required.
- Blue-Green Deployments - Deploy a new version alongside your current version, send 5% of traffic to test, gradually increase, and roll back instantly if issues arise by changing annotations.
- Geographic Distribution - Route European users to your Europe region and US users to your US region automatically, using Traffic Manager's geographic routing with the same annotation-based approach.
- Disaster Recovery - Configure priority-based routing with your primary region at priority 1 and DR region at priority 2.
  Traffic automatically fails over when health checks fail.
- Cost Optimisation - Use weighted routing to balance traffic across regions based on capacity and costs. Send more traffic to regions where you have reserved capacity or lower egress costs.

Considerations and Future Work

This is a proof of concept and should be thoroughly tested before production use. Some areas for improvement:

Current Limitations

- In-memory state only - no persistent storage (restarts require resync)
- Basic error handling - needs more robust retry logic
- Limited observability - could use more metrics and events
- Manual CRD cleanup - DNSEndpoint CRDs need manual cleanup when switching providers

Potential Enhancements

- Support for more endpoint types - currently focuses on ExternalEndpoints
- Advanced health check configuration - custom intervals, timeouts, and thresholds
- Metric-based routing decisions - integrate with Azure Monitor for intelligent routing
- GitOps integration - Flux/ArgoCD examples and best practices
- Helm chart - simplified deployment

If you try this out or have ideas for improvements, please open an issue or PR on GitHub.

Wrapping Up

This proof of concept shows that External DNS and Traffic Manager can work together nicely. Since Traffic Manager is really just an advanced DNS service, bringing it into External DNS's annotation-driven workflow makes a lot of sense. You get the same declarative approach for both basic DNS records and sophisticated traffic routing. While this isn't production-ready code (and you shouldn't use it as-is), it demonstrates a viable pattern. If you're dealing with multi-region Kubernetes deployments and need intelligent DNS-based routing, this might give you some ideas for building your own solution. The code is out there for you to learn from, break, and hopefully improve upon. If you build something based on this or have feedback on the approach, I'd be interested to hear about it.

Resources

- GitHub Repository: github.com/sam-cogan/external-dns-traffic-manager
- External DNS Documentation: kubernetes-sigs.github.io/external-dns
- Azure Traffic Manager: learn.microsoft.com/azure/traffic-manager
- Webhook Provider Guide: External DNS Webhook Tutorial

Simplifying Image Signing with Notary Project and Artifact Signing (GA)
Securing container images is a foundational part of protecting modern cloud-native applications. Teams need a reliable way to ensure that the images moving through their pipelines are authentic, untampered, and produced by trusted publishers. We're excited to share an updated approach that combines the Notary Project, the CNCF standard for signing and verifying OCI artifacts, with Artifact Signing (formerly Trusted Signing), which is now generally available as a managed signing service.

The Notary Project provides an open, interoperable framework for signing and verification across container images and other OCI artifacts, while Notary Project tools like Notation and Ratify enable enforcement in CI/CD pipelines and Kubernetes environments. Artifact Signing complements this by removing the operational complexity of certificate management through short-lived certificates, verified Azure identities, and role-based access control, without changing the underlying standards.

If you previously explored container image signing using Trusted Signing, the core workflows remain unchanged. As Artifact Signing reaches GA, customers will see updated terminology across documentation and tooling, while existing Notary Project-based integrations continue to work without disruption. Together, Notary Project and Artifact Signing make it easier for teams to adopt image signing as a scalable platform capability, helping ensure that only trusted artifacts move from build to deployment with confidence.

Get started

- Sign container images using Notation CLI
- Sign container images in CI/CD pipelines
- Verify container images in CI/CD pipelines
- Verify container images in AKS
- Extend signing and verification to all OCI artifacts in registries

Related content

- Simplifying Code Signing for Windows Apps: Artifact Signing (GA)
- Simplify Image Signing and Verification with Notary Project (preview article)
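To make the signing workflow concrete, the basic Notation CLI flow looks roughly like this once a signing key or Artifact Signing plugin has been configured. The registry, repository, and digest below are placeholders; the "Get started" links above walk through the full setup:

```bash
# Placeholders: substitute your registry, repository, and image digest
REGISTRY=myregistry.azurecr.io
IMAGE=$REGISTRY/myapp@sha256:<digest>

# Sign the image by digest
notation sign "$IMAGE"

# Verify the signature against your configured trust policy and trust store
notation verify "$IMAGE"
```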
Deploy Dynatrace OneAgent on your Container Apps

TOC

1. Introduction
2. Setup
3. References

1. Introduction

Dynatrace OneAgent is an advanced monitoring tool that automatically collects performance data across your entire IT environment. It provides deep visibility into applications, infrastructure, and cloud services, enabling real-time observability. OneAgent supports multiple platforms, including containers, VMs, and serverless architectures, ensuring seamless monitoring with minimal configuration. It captures detailed metrics, traces, and logs, helping teams diagnose performance issues, optimize resources, and enhance user experiences. With AI-driven insights, OneAgent proactively detects anomalies and automates root cause analysis, making it an essential component for modern DevOps, SRE, and cloud-native monitoring strategies.

2. Setup

1. After registering your account, go to the control panel and search for Deploy OneAgent.

2. Obtain your Environment ID and create a PaaS token. Be sure to save them for later use.

3. In your local environment's console, log in to the Dynatrace registry.

```bash
# XXX is your Environment ID; enter the PaaS token at the password prompt
docker login -u XXX XXX.live.dynatrace.com
```

4. Create a Dockerfile and an sshd_config file.

```dockerfile
FROM mcr.microsoft.com/devcontainers/javascript-node:20

# Change XXX into your Environment ID
COPY --from=XXX.live.dynatrace.com/linux/oneagent-codemodules:all / /
ENV LD_PRELOAD /opt/dynatrace/oneagent/agent/lib64/liboneagentproc.so

# SSH
RUN apt-get update \
    && apt-get install -y --no-install-recommends dialog openssh-server tzdata screen lrzsz htop cron \
    && echo "root:Docker!" | chpasswd \
    && mkdir -p /run/sshd \
    && chmod 700 /root/.ssh/ \
    && chmod 600 /root/.ssh/id_rsa
COPY ./sshd_config /etc/ssh/

# OTHER
EXPOSE 2222
CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
```

```
Port 2222
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha2-256,hmac-sha2-512,hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp
AllowTcpForwarding yes
```

5. Build the container and push it to Azure Container Registry (ACR).

```bash
# YYY is your ACR name; you can use your own image name and tag
docker build -t oneagent:202503201710 . --no-cache
docker tag oneagent:202503201710 YYY.azurecr.io/oneagent:202503201710
docker push YYY.azurecr.io/oneagent:202503201710
```

6. Create an Azure Container App (ACA), set Ingress to port 3000, allow all inbound traffic, and specify the ACR image you just created.

7. Once the container starts, open a console and run the following commands to create a temporary HTTP server simulating a Node.js app.

```bash
mkdir app && cd app
echo 'console.log("Node.js app started...")' > index.js
npm init -y
npm install express
cat <<EOF > server.js
const express = require('express');
const app = express();
app.get('/', (req, res) => res.send('hello'));
app.listen(3000, () => console.log('Server running on port 3000'));
EOF
# Press Ctrl + C to terminate the next command and run it again, three times in total
node server.js
```

8. You should now see the results on the ACA homepage.

9. Go back to the Dynatrace control panel, search for Host Classic, and you should see the collected data.

3. References

Integrate OneAgent on Azure App Service for Linux and containers — Dynatrace Docs

Proactive Cloud Ops with SRE Agent: Scheduled Checks for Cloud Optimization
The Cloud Optimization Challenge

Your cloud environment is always changing:

- New features ship weekly
- Traffic patterns shift seasonally
- Costs creep up quietly
- Security best practices evolve
- Teams spin up resources and forget them

It's Monday morning. You open the Azure portal. Everything looks... fine. But "fine" isn't great. That VM has been at 8% CPU for weeks. A Key Vault secret expires in 12 days. Nothing's broken. But security is drifting, costs are creeping, and capacity gaps are growing silently. The question isn't "is something broken?" It's "could this be better?"

Four Pillars of Cloud Optimization

| Pillar | What Teams Want | The Challenge |
| --- | --- | --- |
| Security | Stay compliant, reduce risk | Config drift, legacy settings, expiring creds |
| Cost | Spend efficiently, justify budget | Hard to spot waste across 100s of resources |
| Performance | Meet SLOs, handle growth | Know when to scale before demand hits |
| Availability | Maximize uptime, build resilience | Hidden dependencies, single points of failure |

Most teams check these sometimes. SRE Agent checks them continuously.

Enter SRE Agent + Scheduled Tasks

SRE Agent can pull data from Azure Monitor, resource configurations, metrics, logs, traces, errors, and cost data, and analyze it on a schedule. If you use tools outside Azure (Datadog, PagerDuty, Splunk), you can connect those via MCP servers so the agent sees your full observability stack. My setup uses Azure-native sources. Here's how I wired it up.

How I Set It Up: Step by Step

Step 1: Create SRE Agent with Subscription Access

I created an SRE Agent without attaching it to any specific resource group. Instead, I gave it Reader access at the subscription level. This lets the agent scan across all my resource groups for optimization opportunities. No resource group configuration needed. The agent builds a knowledge graph of everything in the subscription: VMs, storage accounts, Key Vaults, NSGs, web apps.

Step 2: Create and Upload My Organization Practices

I created an org-practices.md file that defines what "good" looks like for my team (an illustrative excerpt appears at the end of this post). I uploaded this to SRE Agent's knowledge base. Now the agent knows our bar, not just Azure defaults.

👉 See my full org-practices.md

Source repos for this demo:

- security-demoapp - App with intentional security misconfigurations
- costoptimizationapp - App with cost optimization opportunities

Step 3: Connect to Teams Channel

I connected SRE Agent to my team's Teams channel so findings land where we already work. Critical findings get immediate notifications. Warnings go into a daily digest. No more logging into separate dashboards; the insights come to us.

Step 4: Connect Resource Groups to GitHub Repos

Add the two resource groups to the SRE Agent and link the apps to their corresponding GitHub repos:

| Resource Group | GitHub Repository |
| --- | --- |
| rg-security-opt-demo | security-demoapp |
| rg-cost-opt-sreademo | costoptimizationapp |

This enables the agent to create GitHub issues for findings, linking violations directly to the repo responsible for that infrastructure.

Step 5: Test with Prompts

Before setting up automation, I tested the agent with manual prompts to make sure it was finding the right issues. The agent ran the checks, compared against my org-practices.md, and identified the issues.

Security Check:

Scan resource group "rg-security-opt-demo" for any violations of our security practices defined in org-practices.md in your knowledge base. list violations with severity and remediation steps.
Make sure to check against all critical requirements and send message in teams channel with your findings and create an issue in the github repo https://github.com/dm-chelupati/security-demoapp.git

Cost Check:

Scan resource group "rg-cost-opt-sreademo" for any violations of our cost practices defined in org-practices.md in your knowledge base. list violations with severity and remediation steps. Make sure to check against all critical requirements and send message in teams channel with your findings and create an issue in the github repo https://github.com/dm-chelupati/costoptimizationapp.git

Step 6: Check Output via GitHub Issues

After running the prompts, I checked GitHub. The agent had created issues. Each issue has the root cause, impact, and fix ready for the team to action, or for Coding Agent to pick up and create a PR.

👉 See the actual issues created:

- Security findings issue
- Cost findings issue

Step 7: Set Up Scheduled Triggers

This is where it gets powerful. I configured recurring schedules.

Weekly Security Check (Wednesdays 8 AM):

Create a scheduled trigger that performs security practices checks against the org practices in knowledge base org-practices.md, creates github issue and send teams message on a weekly basis Wednesdays at 8 am UTC

Weekly Cost Review (Mondays 8 AM):

Create a scheduled trigger that performs cost practices checks against the org practices in knowledge base org-practices.md, creates github issue and send teams message on a weekly basis on Mondays at 8 am UTC

Now optimization runs automatically. Every week, fresh findings land in GitHub Issues and Teams.

Why Context Makes the SRE Agent Powerful

Think about hiring a new SRE. They're excellent at their craft; they know Kubernetes, networking, Azure inside out. But on day one, they can't solve problems in your environment yet. Why? They don't have context:

- What are your SLOs? What's "acceptable" latency for your app?
- When do you rotate secrets? Monthly? Quarterly? Before each release?
- Which resources are production-critical vs. dev experiments?
- What's your tagging policy? Who owns what?
- How do you deploy? GitOps? Pipelines? Manual approvals?

A great engineer becomes your great engineer once they learn how your team operates. SRE Agent works the same way. Out of the box, it knows Azure resource types, networking, and best practices. But it doesn't know your bar. Is 20% CPU utilization acceptable or wasteful? Should secrets expire in 30 days or 90? Are public endpoints ever okay, or never? The more context you give the agent (your SLOs, your runbooks, your policies), the more it reasons like a team member who understands your environment, not just Azure in general. That's why Step 2 matters so much. When I uploaded our standards, the agent stopped checking generic Azure best practices and started checking our best practices.

Bring your existing knowledge: You don't have to start from scratch. If your team's documentation already lives in Atlassian Confluence, SharePoint, or other tools, you can connect those via MCP servers. The agent pulls context from where your team already works; no need to duplicate content.

Why This Matters

Before this setup, optimization was a quarterly thing.
Now it happens automatically:

| Before | After |
| --- | --- |
| Check security when audit requests it | Daily automated posture check |
| Find waste when finance complains | Weekly savings report in Teams |
| Discover capacity issues during incidents | Scheduled headroom analysis |
| Expire credentials and debug at 2 AM | 30-day warning with exact secret names |

Optimization isn't a project anymore. It's a practice.

Try It Yourself

1. Create an SRE Agent with access to your subscription
2. Upload your team's standards (security policies, cost thresholds, tagging rules)
3. Set up a scheduled trigger, starting with a daily security check
4. Watch the first report land in Teams

See what you've been missing while everything looked "fine."

Learn More

- Azure SRE Agent documentation
- Azure SRE Agent blogs
- Azure SRE Agent community
- Azure SRE Agent home page
- Azure SRE Agent pricing

Azure SRE Agent is currently in preview. Get Started
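For illustration, the kind of org-practices.md referenced in Step 2 might contain entries like the following. Every threshold here is an example rather than a recommendation, and the author's real file is linked above:

```markdown
# Organization Practices (illustrative excerpt)

## Security
- Key Vault secrets must have expiry dates and be rotated at least every 90 days.
- Storage accounts must disable public blob access and require TLS 1.2 or later.
- NSGs must not allow inbound 0.0.0.0/0 on management ports (22, 3389).

## Cost
- Flag VMs averaging under 10% CPU over 14 days for resize or deallocation.
- All resources must carry owner and cost-center tags.
- Delete unattached managed disks and idle public IPs after 30 days.
```

Find the Alerts You Didn't Know You Were Missing with Azure SRE Agent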
I had 6 alert rules. CPU. Memory. Pod restarts. Container errors. OOMKilled. Job failures. I thought I was covered. Then my app went down. I kept refreshing the Azure portal, waiting for an alert. Nothing. That's when it hit me: my alerts were working perfectly. They just weren't designed for this failure mode. Sound familiar?

The Problem Every Developer Knows

If you're a developer or DevOps engineer, you've been here: a customer reports an issue, you scramble to check your monitoring, and then you realize you don't have the right alerts set up. By the time you find out, it's already too late. You set up what seems like reasonable alerting and assume you're covered. But real-world failures are sneaky. They slip through the cracks of your carefully planned thresholds.

My Setup: AKS with Redis

I love to vibe code apps using GitHub Copilot Agent mode with Claude Opus 4.5. It's fast, it understands context, and it lets me focus on building rather than boilerplate. For this project, I built a simple journal entry app:

- AKS cluster hosting the web API
- Azure Cache for Redis storing journal data
- Azure Monitor alerts for CPU, memory, pod restarts, container errors, OOMKilled, and job failures

Seemed solid. What could go wrong?

The Scenario: Redis Password Rotation

Here's something that happens constantly in enterprise environments: the security team rotates passwords. It's best practice. It's in the compliance checklist. And it breaks things when apps don't pick up the new credentials. I simulated exactly this. The pods came back up, but they couldn't connect to Redis (as expected). The readiness probes started failing. The LoadBalancer had no healthy backends. The endpoint timed out. And not a single alert fired.

Using SRE Agent to Find the Alert Gaps

Instead of manually auditing every alert rule and trying to figure out what I missed, I turned to Azure SRE Agent. I asked it a simple question: "My endpoint is timing out. What alerts do I have, and why didn't any of them fire?" Within minutes, it had diagnosed the problem. Here's what it found:

| My Existing Alerts | Why They Didn't Fire |
| --- | --- |
| High CPU/Memory | No resource pressure, just auth failures |
| Pod Restarts | Pods weren't restarting, just unhealthy |
| Container Errors | App logs weren't being written |
| OOMKilled | No memory issues |
| Job Failures | No K8s jobs involved |

The gaps SRE Agent identified:

- ❌ No synthetic URL availability test
- ❌ No readiness/liveness probe failure alerts
- ❌ No "pods not ready" alerts scoped to my namespace
- ❌ No Redis connection error detection
- ❌ No ingress 5xx/timeout spike alerts
- ❌ No per-pod resource alerts (only node-level)

SRE Agent didn't just tell me what was wrong, it created a GitHub issue with:

- KQL queries to detect each failure type (a hand-written sketch of one such query appears at the end of this post)
- Bicep code snippets for new alert rules
- Remediation suggestions for the app code
- Exact file paths in my repo to update

Check it out: GitHub Issue

How I Built It: Step by Step

Let me walk you through exactly how I set this up inside SRE Agent.

Step 1: Create an SRE Agent

I created a new SRE Agent in the Azure portal. Since this workflow analyzes alerts across my subscription (not just one resource group), I didn't configure any specific resource groups. Instead, I gave the agent's managed identity Reader permissions on my entire subscription. This lets it discover resources, list alert rules, and query Log Analytics across all my resource groups.
Step 2: Connect GitHub to SRE Agent via MCP

I added a GitHub MCP server to give the agent access to my source code repository. MCP (Model Context Protocol) lets you bring any API into the agent; if your tool has an API, you can connect it. I use GitHub for both source code and tracking dev tickets, but you can connect to wherever your code lives (GitLab, Azure DevOps) or your ticketing system (Jira, ServiceNow, PagerDuty).

Step 3: Create a Subagent Inside SRE Agent for Managing Azure Monitor Alerts

I created a focused subagent with a specific job and only the tools it needs: Azure Monitor Alerts Expert.

Prompt:

"You are expert in managing operations related to azure monitor alerts on azure resources including discovering alert rules configured on azure resources, creating new alert rules (with user approval and authorization only), processing the alerts fired on azure resources and identifying gaps in the alert rules. You can get the resource details from azure monitor alert if triggered via alert. If not, you need to ask user for the specific resource to perform analysis on. You can use az cli tool to diagnose logs, check the app health metrics. You must use the app code and infra code (bicep files) files you have access to in the github repo <insert your repo> to further understand the possible diagnoses and suggest remediations. Once analysis is done, you must create a github issue with details of analysis and suggested remediation to the source code files in the same repo."

Tools enabled:

- az cli – list resources, alert rules, action groups
- Log Analytics workspace querying – run KQL queries for diagnostics
- GitHub MCP – search repositories, read file contents, create issues

Step 4: Ask the Subagent About Alert Gaps

I gave the agent context and asked a simple question:

"@AzureAlertExpert: My API endpoint http://132.196.167.102/api/journals/john is timing out. What alerts do I have configured in rg-aks-journal, and why didn't any of them fire?"

The agent did the analysis autonomously and summarized its findings, with suggestions to add new alert rules, in a GitHub issue. Here's the agentic workflow to perform Azure Monitor alert operations.

Why This Matters

- Faster response times. Issues get diagnosed in minutes, not hours of manual investigation.
- Consistent analysis. No more "I thought we had an alert for that" moments. The agent systematically checks what's covered and what's not.
- Proactive coverage. You don't have to wait for an incident to find gaps. Ask the agent to review your alerts before something breaks.

The Bottom Line

Your alerts have gaps. You just don't know it until something slips through. I had 6 alert rules and still missed a basic failure. My pods weren't restarting, they were just unhealthy. My CPU wasn't spiking, the app was just returning errors. None of my alerts were designed for this. You don't need to audit every alert rule manually. Give SRE Agent your environment, describe the failure, and let it tell you what's missing. Stop discovering alert gaps from customer complaints. Start finding them before they matter.

A Few Tips

- Give the agent Reader access at subscription level so it can discover all resources
- Use a focused subagent prompt; don't try to do everything in one agent
- Test your MCP connections before running workflows

What Alert Gaps Have Burned You?

What's the alert you wish you had set up before an incident? Credential rotation? Certificate expiry? DNS failures?
Let us know in the comments.
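To make one of those gaps concrete, an alert for failing readiness/liveness probes could be driven by a Container Insights query along these lines. The namespace and threshold are placeholders, this assumes the cluster sends Kubernetes events to Log Analytics, and the agent's actual queries are in the GitHub issue linked above:

```kusto
// Readiness/liveness probe failures in the app namespace (illustrative sketch)
KubeEvents
| where TimeGenerated > ago(15m)
| where Namespace == "journal-app"      // placeholder namespace
| where Reason == "Unhealthy"           // event reason emitted when probes fail
| summarize Failures = count() by Name, bin(TimeGenerated, 5m)
| where Failures > 3                    // example threshold
```

Stop Running Runbooks at 3 am: Let Azure SRE Agent Do Your On-Call Grunt Work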
Your pager goes off. It's 2:47am. Production is throwing 500 errors. You know the drill: SSH into this, query that, check these metrics, correlate those logs. Twenty minutes later, you're still piecing together what went wrong. Sound familiar?

The On-Call Reality Nobody Talks About

Every SRE, DevOps engineer, and developer who's carried a pager knows this pain. When incidents hit, you're not solving problems; you're executing runbooks. Copy-paste this query. Check that dashboard. Run these az commands. Connect the dots between five different tools. It's tedious. It's error-prone at 3am. And honestly? It's work that doesn't require human creativity but requires human time. What if an AI agent could do this for you?

Enter Azure SRE Agent + Runbook Automation

Here's what I built: I gave SRE Agent a simple markdown runbook containing the same diagnostic steps I'd run manually during an incident. The agent executes those steps, collects evidence, and sends me an email with everything I need to take action. No more bouncing between terminals. No more forgetting a step because it's 3am and your brain is foggy.

What My Runbook Contains

Just the basics any on-call would run:

- az monitor metrics – CPU, memory, request rates
- Log Analytics queries – error patterns, exception details, dependency failures
- App Insights data – failed requests, stack traces, correlation IDs
- az containerapp logs – revision logs, app configuration

That's it. Plain markdown with KQL queries and CLI commands. Nothing fancy. (A minimal sketch of such a runbook appears at the end of this post.)

What the Agent Does

1. Reads the runbook from its knowledge base
2. Executes each diagnostic step
3. Collects results and evidence
4. Sends me an email with analysis and findings

I wake up to an email that says: "CPU spiked to 92% at 2:45am, triggering connection pool exhaustion. Top exception: SqlException (1,832 occurrences). Errors correlate with traffic spike. Recommend scaling to 5 replicas." All the evidence. All the queries used. All the timestamps. Ready for me to act.

How to Set This Up (6 Steps)

Here's how you can build this yourself:

Step 1: Create SRE Agent

Create a new SRE Agent in the Azure portal. No Azure resource groups to configure. If your apps run on Azure, the agent pulls context from the incident itself. If your apps run elsewhere, you don't need Azure resource configuration at all.

Step 2: Grant Reader Permission (Optional)

If your runbooks execute against Azure resources, assign the Reader role to the SRE Agent's managed identity on your subscription. This allows the agent to run az commands and query metrics. Skip this if your runbooks target non-Azure apps.

Step 3: Add Your Runbook to SRE Agent's Knowledge Base

You already have runbooks; they're in your wiki, Confluence, or team docs. Just add them as .md files to the agent's knowledge base. To learn about other ways to link your runbooks to the agent, read this.

Step 4: Connect Outlook

Connect the agent to your Outlook so it can send you the analysis email with findings.

Step 5: Create a Subagent

Create a subagent with simple instructions like: "You are an expert in triaging and diagnosing incidents. When triggered, search the knowledge base for the relevant runbook, execute the diagnostic steps, collect evidence, and send an email summary with your findings."
Assign the tools the agent needs:

- RunAzCliReadCommands – for az monitor, az containerapp commands
- QueryLogAnalyticsByWorkspaceId – for KQL queries against Log Analytics
- QueryAppInsightsByResourceId – for App Insights data
- SearchMemory – to find the right runbook
- SendOutlookEmail – to deliver the analysis

Step 6: Set Up Incident Trigger

Connect your incident management tool (PagerDuty, ServiceNow, or Azure Monitor alerts) and set up the incident trigger to the subagent. When an incident fires, the agent kicks off automatically. That's it. Your agentic workflow now looks like this:

This Works for Any App, Not Just Azure

Here's the thing: SRE Agent is platform agnostic. It's executing your runbooks, whatever they contain. On-prem databases? Add your diagnostic SQL. Custom monitoring stack? Add those API calls. The agent doesn't care where your app runs. It cares about following your runbook and getting you answers.

Why This Matters

- Lower MTTR. By the time you're awake and coherent, the analysis is done.
- Consistent execution. No missed steps. No "I forgot to check the dependencies" at 4am.
- Evidence for postmortems. Every query, every result, timestamped and documented.
- Focus on what matters. Your brain should be deciding what to do, not gathering data.

The Bottom Line

On-call runbook execution is the most common, most tedious, and most automatable part of incident response. It's grunt work that pulls engineers away from the creative problem-solving they were hired for. SRE Agent offloads that work from your plate. You write the runbook once, and the agent executes it every time, faster and more consistently than any human at 3am. Stop running runbooks. Start reviewing results.

Try it yourself: Create a markdown runbook with your diagnostic queries and commands, add it to your SRE Agent's knowledge base, and let the agent handle your next incident. Your 3am self will thank you.
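For reference, a minimal sketch of the kind of diagnostic runbook described above; the resource names, resource group, and queries are placeholders rather than the author's actual runbook:

```markdown
# Runbook: API 500s / latency spike (illustrative)

## 1. Check container app request metrics (resource IDs are placeholders)
az monitor metrics list \
  --resource /subscriptions/<sub>/resourceGroups/rg-demo/providers/Microsoft.App/containerApps/api-app \
  --metric Requests --interval PT5M

## 2. Top exceptions in the last hour (Log Analytics / KQL)
AppExceptions
| where TimeGenerated > ago(1h)
| summarize Count = count() by ProblemId
| top 5 by Count

## 3. Recent revision logs
az containerapp logs show --name api-app --resource-group rg-demo --tail 100

## 4. Output
Summarize findings, correlate with recent deployments, and email the on-call engineer.
```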
Azure SRE Agent offers intelligent and context aware automation for IT operations. Enhanced by customer feedback from our preview, the SRE Agent has evolved into an extensible platform to automate and manage tasks across Azure and other environments. Built on an Agentic DevOps approach - drawing from proven practices in internal Azure operations - the Azure SRE Agent has already saved over 20,000 engineering hours across Microsoft product teams operations, delivering strong ROI for teams seeking sustainable AIOps. An Operations Agent that adapts to your playbooks Azure SRE Agent is an AI powered operations automation platform that empowers SREs, DevOps, IT operations, and support teams to automate tasks such as incident response, customer support, and developer operations from a single, extensible agent. Its value proposition and capabilities have evolved beyond diagnosis and mitigation of Azure issues, to automating operational workflows and seamless integration with the standards and processes used in your organization. SRE Agent is designed to automate operational work and reduce toil, enabling developers and operators to focus on high-value tasks. By streamlining repetitive and complex processes, SRE Agent accelerates innovation and improves reliability across cloud and hybrid environments. In this article, we will look at what’s new and what has changed since the last update. What’s New: Automation, Integration, and Extensibility Azure SRE Agent just got a major upgrade. From no-code automation to seamless integrations and expanded data connectivity, here’s what’s new in this release: No-code Sub-Agent Builder: Rapidly create custom automations without writing code. Flexible, event-driven triggers: Instantly respond to incidents and operational changes. Expanded data connectivity: Unify diagnostics and troubleshooting across more data sources. Custom actions: Integrate with your existing tools and orchestrate end-to-end workflows via MCP. Prebuilt operational scenarios: Accelerate deployment and improve reliability out of the box. Unlike generic agent platforms, Azure SRE Agent comes with deep integrations, prebuilt tools, and frameworks specifically for IT, DevOps, and SRE workflows. This means you can automate complex operational tasks faster and more reliably, tailored to your organization’s needs. Sub-Agent Builder: Custom Automation, No Code Required Empower teams to automate repetitive operational tasks without coding expertise, dramatically reducing manual workload and development cycles. This feature helps address the need for targeted automation, letting teams solve specific operational pain points without relying on one-size-fits-all solutions. Modular Sub-Agents: Easily create custom sub-agents tailored to your team’s needs. Each sub-agent can have its own instructions, triggers, and toolsets, letting you automate everything from outage response to customer email triage. Prebuilt System Tools: Eliminate the inefficiency of creating basic automation from scratch, and choose from a rich library of hundreds of built-in tools for Azure operations, code analysis, deployment management, diagnostics, and more. Custom Logic: Align automation to your unique business processes by defining your automation logic and prompts, teaching the agent to act exactly as your workflow requires. Flexible Triggers: Automate on Your Terms Invoke the agent to respond automatically to mission-critical events, not wait for manual commands. 
This feature helps speed up incident response and eliminate missed opportunities for efficiency.

- Multi-Source Triggers: Go beyond chat-based interactions, and trigger the agent to automatically respond to incident management and ticketing systems like PagerDuty and ServiceNow, observability alerting systems like Azure Monitor alerts, or even a cron-based schedule for proactive monitoring and best-practices checks. Additional trigger sources such as GitHub issues, Azure DevOps pipelines, email, etc. will be added over time. This means automation can start exactly when and where you need it.
- Event-Driven Operations: Integrate with your CI/CD, monitoring, or support systems to launch automations in response to real-world events, like deployments, incidents, or customer requests. Vital for reducing downtime, this ensures that business-critical actions happen automatically and promptly.

Expanded Data Connectivity: Unified Observability and Troubleshooting

Integrate data, enabling comprehensive diagnostics and troubleshooting and faster, more informed decision-making by eliminating silos and speeding up issue resolution.

- Multiple Data Sources: The agent can now read data from Azure Monitor, Log Analytics, and Application Insights based on its Azure role-based access control (RBAC). Additional observability data sources such as Dynatrace, New Relic, Datadog, and more can be added via the Remote Model Context Protocol (MCP) servers for these tools. This gives you a unified view for diagnostics and automation.
- Knowledge Integration: Rather than manually detailing every instruction in your prompt, you can upload your Troubleshooting Guide (TSG) or runbook directly, allowing the agent to automatically create an execution plan from the file. You may also connect the agent to resources like SharePoint, Jira, or documentation repositories through Remote MCP servers, enabling it to retrieve needed files on its own. This approach utilizes your organization's existing knowledge base, streamlining onboarding and enhancing consistency in managing incidents.

Azure SRE Agent is also building multi-agent collaboration by integrating with PagerDuty and Neubird, enabling advanced, cross-platform incident management and reliability across diverse environments.

Custom Actions: Automate Anything, Anywhere

Extend automation beyond Azure and integrate with any tool or workflow, solving the problem of limited automation scope and enabling end-to-end process orchestration.

- Out-of-the-Box Actions: Instantly automate common tasks like running az cli, kubectl, creating GitHub issues, or updating Azure resources, reducing setup time and operational overhead.
- Communication Notifications: The SRE Agent now features built-in connectors for Outlook, enabling automated email notifications, and for Microsoft Teams, allowing it to post messages directly to Teams channels for streamlined communication.
- Bring Your Own Actions: Drop in your own Remote MCP servers to extend the agent's capabilities to any custom tool or workflow. Future-proof your agentic DevOps by automating proprietary or emerging processes with confidence.

Prebuilt Operations Scenarios

Address common operational challenges out of the box, saving teams time and effort while improving reliability and customer satisfaction.

- Incident Response: Minimize business impact and reduce operational risk by automating detection, diagnosis, and mitigation of your workload stack.
  The agent has built-in runbooks for common issues related to many Azure resource types, including Azure Kubernetes Service (AKS), Azure Container Apps (ACA), Azure App Service, Azure Logic Apps, Azure Database for PostgreSQL, Azure Cosmos DB, Azure VMs, etc. Support for additional resource types is being added continually; please see the product documentation for the latest information.
- Root Cause Analysis & IaC Drift Detection: Instantly pinpoint incident causes with AI-driven root cause analysis, including automated source code scanning via GitHub and Azure DevOps integration. Proactively detect and resolve infrastructure drift by comparing live cloud environments against source-controlled IaC, ensuring configuration consistency and compliance.
- Handle Complex Investigations: Enable the deep investigation mode that uses a hypothesis-driven method to analyze possible root causes. It collects logs and metrics, tests hypotheses with iterative checks, and documents findings. The process delivers a clear summary and actionable steps to help teams accurately resolve critical issues.
- Incident Analysis: The integrated dashboard offers a comprehensive overview of all incidents managed by the SRE Agent. It presents essential metrics, including the number of incidents reviewed, assisted, and mitigated by the agent, as well as those awaiting human intervention. Users can leverage aggregated visualizations and AI-generated root cause analyses to gain insights into incident processing, identify trends, enhance response strategies, and detect areas for improvement in incident management.
- Inbuilt Agent Memory: The new SRE Agent Memory System transforms incident response by institutionalizing the expertise of top SREs: capturing, indexing, and reusing critical knowledge from past incidents, investigations, and user guidance. Benefit from faster, more accurate troubleshooting, as the agent learns from both successes and mistakes, surfacing relevant insights, runbooks, and mitigation strategies exactly when needed. This system leverages advanced retrieval techniques and a domain-aware schema to ensure every on-call engagement is smarter than the last, reducing mean time to resolution (MTTR) and minimizing repeated toil. Automatically gain a continuously improving agent that remembers what works, avoids past pitfalls, and delivers actionable guidance tailored to the environment.
- GitHub Copilot and Azure DevOps Integration: Automatically triage, respond to, and resolve issues raised in GitHub or Azure DevOps. Integration with modern development platforms such as the GitHub Copilot coding agent increases efficiency and ensures that issues are resolved faster, reducing bottlenecks in the development lifecycle.

Ready to get started?

- Azure SRE Agent home page
- Product overview
- Pricing Page
- Pricing Calculator
- Pricing Blog
- Demo recordings
- Deployment samples

What's Next?

Give us feedback: Your feedback is critical. You can thumbs up / thumbs down each interaction or thread, or go to the "Give Feedback" button in the agent to give us in-product feedback, or you can create issues or just share your thoughts in our GitHub repo at https://github.com/microsoft/sre-agent.

We're just getting started. In the coming months, expect even more prebuilt integrations, expanded data sources, and new automation scenarios. We anticipate continuous growth and improvement throughout our agentic AI platforms and services to effectively address customer needs and preferences.
Let us know what Ops toil you want to automate next!