Modernizing Applications by Migrating Code to Use Managed Identity with Copilot App Modernization
Migrating application code to use Managed Identity removes hard‑coded secrets, reduces operational risk, and aligns with modern cloud security practices. Applications authenticate directly with Azure services without storing credentials. GitHub Copilot app modernization streamlines this transition by identifying credential usage patterns, updating code, and aligning dependencies for Managed Identity flows.

Supported Migration Steps

GitHub Copilot app modernization helps accelerate:
- Replacing credential‑based authentication with Managed Identity authentication.
- Updating SDK usage to token‑based flows.
- Refactoring helper classes that build credential objects.
- Surfacing libraries or APIs that require alternative authentication approaches.
- Preparing build configuration changes needed for Managed Identity integration.

Migration Analysis

Open the project in Visual Studio Code or IntelliJ IDEA. GitHub Copilot app modernization analyzes:
- Locations where secrets, usernames, passwords, or connection strings are referenced.
- Service clients using credential constructors or static credential factories.
- Environment‑variable‑based authentication workarounds.
- Dependencies and SDK versions required for Managed Identity authentication.

The analysis outlines upgrade blockers and the changes required for cloud‑native authentication.

Migration Plan Generation

GitHub Copilot app modernization produces a migration plan containing:
- Replacement of hard‑coded credentials with Managed Identity authentication patterns.
- Version updates for Azure libraries that align with Managed Identity support.
- Adjustments to application configuration to remove unnecessary secrets.

Developers can review and adjust the plan before applying it.

Automated Transformations

GitHub Copilot app modernization applies changes such as:
- Rewriting code that initializes clients using usernames/passwords or connection strings.
- Introducing Managed Identity‑friendly constructors and token credential patterns.
- Updating imports, method signatures, and helper utilities.
- Cleaning up configuration files that reference outdated credential flows.

Build & Fix Iteration

The tool rebuilds the project, identifies issues, and applies targeted fixes for:
- Compilation errors from removed credential classes.
- Incorrect parameter types or constructors.
- Dependencies requiring updates for Managed Identity compatibility.

Security & Behavior Checks

GitHub Copilot app modernization validates:
- CVEs introduced through dependency updates.
- Behavior changes caused by new authentication flows.
- Optional fixes for dependency vulnerabilities.

Expected Output

A migrated codebase using Managed Identity:
- Updated authentication logic.
- Removed credential references.
- Updated SDKs and dependencies.
- A summary file listing code edits, dependency changes, and items requiring manual review.

Developer Responsibilities

Developers should:
- Validate identity access on Azure resources.
- Reconfigure role assignments for system‑assigned or user‑assigned managed identities.
- Test functional behavior across environments.
- Review integration points that depend on identity scopes and permissions.

Learn full upgrade workflows in the Microsoft Learn guide for upgrading Java projects with GitHub Copilot app modernization.

Learn more
- Predefined tasks for GitHub Copilot app modernization
- Apply a predefined task
- Install GitHub Copilot app modernization for VS Code and IntelliJ IDEA
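To make the kind of transformation concrete, here is a minimal sketch of the before-and-after pattern for a service client, assuming a Java application that reads blobs through the Azure SDK for Java. The StorageClientFactory class, the account name parameter, and the azure-identity and azure-storage-blob dependencies are illustrative assumptions, not code from the tool or from this post:

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

public class StorageClientFactory {

    // Before: the client was typically built from a connection string that
    // embeds an account key, for example read from configuration or an
    // environment variable:
    //
    //     BlobServiceClient client = new BlobServiceClientBuilder()
    //             .connectionString(System.getenv("STORAGE_CONNECTION_STRING"))
    //             .buildClient();

    // After: authenticate with the app's managed identity via DefaultAzureCredential.
    // No secret is stored anywhere; the identity only needs an RBAC role
    // on the storage account.
    public static BlobServiceClient create(String accountName) {
        return new BlobServiceClientBuilder()
                .endpoint("https://" + accountName + ".blob.core.windows.net")
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();
    }
}
```

The managed identity still needs an appropriate role assignment on the target resource (for example, a Storage Blob Data role), which is part of the developer responsibilities listed above.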
Modernizing Spring Framework Applications with GitHub Copilot App Modernization

Upgrading Spring Framework applications from version 5 to the latest 6.x line (including 6.2+) enables improved performance, enhanced security, alignment with modern Java releases, and full Jakarta namespace compatibility. The transition often introduces breaking API changes, updated module requirements, and dependency shifts. GitHub Copilot app modernization streamlines this upgrade by analyzing your project, generating targeted changes, and guiding you through the migration.

Supported Upgrade Path

GitHub Copilot app modernization supports:
- Upgrading Spring Framework to 6.x, including 6.2+
- Migrating from javax to jakarta
- Aligning transitive dependencies and version constraints
- Updating build plugins and configurations
- Identifying deprecated or removed APIs
- Validating dependency updates and surfacing CVE issues

These capabilities align with the Microsoft Learn quickstart for upgrading Java projects with GitHub Copilot app modernization.

Project Setup

Open your Spring Framework project in Visual Studio Code or IntelliJ IDEA with GitHub Copilot app modernization enabled. The tool works with Maven or Gradle projects and evaluates your existing Spring Framework version, Java version, imports, and build configuration.

Project Analysis

When you trigger the upgrade, GitHub Copilot app modernization:
- Detects the current Spring Framework version
- Flags javax imports requiring Jakarta migration
- Identifies incompatible modules, libraries, and plugins
- Validates JDK compatibility requirements for Spring Framework 6.x
- Reviews transitive dependencies impacted by the update

This analysis provides the foundation for the upgrade plan generated next.

Upgrade Plan Generation

GitHub Copilot app modernization produces a structured plan including:
- Updated Spring Framework version (6.x / 6.2+)
- Replacements for deprecated or removed APIs
- jakarta namespace updates
- Updated build plugins and version constraints
- JDK configuration adjustments

You can review the plan, modify version targets, and confirm actions before the tool applies them.

Automated Transformations

After approval, GitHub Copilot app modernization applies automated changes such as:
- Updating Spring Framework module coordinates
- Rewriting imports from javax.* to jakarta.*
- Updating libraries required for Spring Framework 6.x
- Adjusting plugin versions and build logic
- Recommending fixes for API changes

These transformations rely on OpenRewrite‑based rules to modernize your codebase efficiently.

Build & Fix Iteration

Once changes are applied, the tool compiles your project and automatically responds to failures:
- Captures compilation errors
- Suggests targeted fixes
- Rebuilds iteratively

This loop continues until the project compiles with Spring Framework 6.x in place.

Security & Behavior Checks

GitHub Copilot app modernization performs validation steps after the upgrade:
- Checks for CVEs in updated dependencies
- Identifies potential behavior changes introduced during the transition
- Offers optional fixes to address issues

This adds confidence before final verification.
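As a small illustration of what the Jakarta migration looks like at the source level, here is a hedged sketch of a Spring MVC controller before and after the upgrade. The controller, its endpoint, and the surrounding dependencies (spring-webmvc and jakarta.servlet-api) are assumptions for the example, not code produced by the tool:

```java
// Spring Framework 5.x (javax namespace)
// import javax.servlet.http.HttpServletRequest;

// Spring Framework 6.x (Jakarta namespace)
import jakarta.servlet.http.HttpServletRequest;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ClientInfoController {

    // The controller logic is unchanged; only the servlet API import moves
    // from javax.servlet.* to jakarta.servlet.* when targeting Spring Framework 6.
    @GetMapping("/client-info")
    public String clientInfo(HttpServletRequest request) {
        return "Request received from " + request.getRemoteAddr();
    }
}
```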
Expected Output

After a Spring Framework 5 → 6.x upgrade, you can expect:
- Updated module coordinates for Spring Framework 6.x / 6.2
- jakarta‑aligned imports across the codebase
- Updated dependency versions aligned with the new Spring ecosystem
- Updated plugins and build tool configurations
- A modernized test stack (JUnit 5)
- A summary file detailing versions updated, code edits applied, dependencies changed, and items requiring manual review

Developer Responsibilities

GitHub Copilot app modernization accelerates framework upgrade mechanics, but developers remain responsible for:
- Running full test suites
- Reviewing custom components, filters, and validation logic
- Revisiting security configurations and reactive vs. servlet designs
- Checking integration points and application semantics post‑migration

The tool handles the mechanical modernization work so you can focus on correctness, runtime behavior, and quality assurance.

Learn More

For prerequisites, setup steps, and the complete Java upgrade workflow, refer to the Microsoft Learn guide: Upgrade a Java Project with GitHub Copilot App Modernization

Install GitHub Copilot app modernization for VS Code and IntelliJ IDEA

Upgrade your Java JDK (8, 11, 17, 21, or 25) with GitHub Copilot App Modernization
Developers modernizing Java applications often need to upgrade the Java Development Kit (JDK), update frameworks, align dependencies, or migrate older stacks such as Java EE. GitHub Copilot app modernization dramatically speeds up this process by analyzing your project, identifying upgrade blockers, and generating targeted changes. This post highlights supported upgrade paths and what you can expect when using GitHub Copilot app modernization—optimized for search discoverability rather than deep tutorial content. For complete, authoritative guidance, refer to the official Microsoft Learn quickstart. Supported Upgrade Scenarios GitHub Copilot app modernization supports upgrading: Java Development Kit (JDK) to versions 8, 11, 17, 21, or 25 Spring Boot up to 3.5 Spring Framework up to 6.2+ Java EE → Jakarta EE (up to Jakarta EE 10) JUnit Third‑party dependencies to specified versions Ant → Maven build migrations For the full capabilities list, see the Microsoft Learn quickstart. Prerequisites (VS Code or IntelliJ) To use GitHub Copilot app modernization, you’ll need: GitHub account + GitHub Copilot Free Tier, Pro, Pro+, Business, or Enterprise Visual Studio Code Version 1.101+ GitHub Copilot extension GitHub Copilot app modernization extension Restart after installation IntelliJ IDEA Version 2023.3+ GitHub Copilot plugin 1.5.59+ Restart after installation Recommended: Auto‑approve MCP Tool Annotations under Tools > GitHub Project Requirements Java project using Maven or Gradle Git‑managed Maven access to public Maven Central (if Maven) Gradle wrapper version 5+ Kotlin DSL supported VS Code setting: “Tools enabled” set to true if controlled by your org Selecting a Java Project to Upgrade Open any Java project in: Visual Studio Code IntelliJ IDEA Optional sample projects: Maven: uportal‑messaging Gradle: docraptor‑java Once open, launch GitHub Copilot app modernization using Agent Mode. Running an Upgrade (Example: Java 8 → Java 21) Open GitHub Copilot Chat → Switch to Agent Mode → Run a prompt such as: Upgrade this project to Java 21 You’ll receive: Upgrade Plan JDK version updates Build file changes (Maven/Gradle) Dependency version adjustments Framework upgrade paths, if relevant Automated Transformations GitHub Copilot app modernization applies changes using OpenRewrite‑based transformations. Dynamic Build / Fix Loop The agent iterates: Build Detect failure Fix Retry Until the project builds successfully. Security & Behavior Checks Detects CVEs in upgraded dependencies Flags potential behavior changes Offers optional fixes Final Upgrade Summary Generated as a markdown file containing: Updated JDK level Dependencies changed Code edits made Any remaining CVEs or warnings What You Can Expect in a JDK Upgrade Typical outcomes from upgrading Java 8 → Java 21: Updated build configuration (maven.compiler.release → 21) Removal or replacement of deprecated JDK APIs Updated library versions for Java 21 compatibility Surface warnings for manual review Successfully building project with modern JDK settings GitHub Copilot app modernization accelerates these updates while still leaving space for developer review of runtime or architectural changes. 
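To give a sense of the kind of source-level change that typically accompanies this upgrade, here is an illustrative Java snippet. The LegacyCleanup class and its data are hypothetical, and the specific edits shown are examples of deprecated-API replacement and newer language features rather than output guaranteed by the tool:

```java
import java.util.List;

public class LegacyCleanup {

    // Before the upgrade (Java 8 era), code like this compiles but relies on
    // boxing constructors that have been deprecated since Java 9:
    //
    //     Integer port = new Integer(8080);
    //     Double ratio = new Double("0.75");
    //
    // After the upgrade, the equivalent factory methods are used instead, and
    // newer language features such as `var` and text blocks become available
    // once maven.compiler.release (or the Gradle toolchain) targets 21.
    public static String banner(List<String> services) {
        var port = Integer.valueOf(8080);
        return """
               Listening on port %d for services: %s
               """.formatted(port, services);
    }
}
```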
Learn More

For the complete, authoritative upgrade workflow—covering setup, capabilities, and the full end‑to‑end process—visit: ➡ Quickstart: Upgrade a Java project with GitHub Copilot app modernization (Microsoft Learn)

Install GitHub Copilot app modernization for VS Code and IntelliJ IDEA

Modernizing Spring Boot Applications with GitHub Copilot App Modernization
Upgrading Spring Boot applications from 2.x to the latest 3.x releases introduces significant changes across the framework, dependencies, and Jakarta namespace. These updates improve long-term support, performance, and compatibility with modern Java platforms, but the migration can surface breaking API changes and dependency mismatches. GitHub Copilot app modernization helps streamline this transition by analyzing your project, generating an upgrade plan, and applying targeted updates. Supported Upgrade Path GitHub Copilot app modernization supports upgrading Spring Boot applications to Spring Boot 3.5, including: Updating Spring Framework libraries to 6.x Migrating from javax to jakarta Aligning dependency versions with Boot 3.x Updating plugins and starter configurations Adjusting build files for the required JDK level Validating dependency updates and surfacing CVE issues These capabilities complement the Microsoft Learn quickstart for upgrading Java projects using GitHub Copilot app modernization. How GitHub Copilot app modernization helps When you open a Spring Boot 2.x project in Visual Studio Code or IntelliJ IDEA and initiate an upgrade, GitHub Copilot app modernization performs: Project Analysis Detects your current Spring Boot version Identifies incompatible starters, libraries, and plugins Flags javax.* imports requiring Jakarta migration Evaluates your build configuration and JDK requirements Upgrade Plan Generation The tool produces an actionable plan that outlines: New Spring Boot parent version Updated Spring Framework and related modules Required namespace changes from javax.* to jakarta.* Build plugin updates JDK configuration alignment for Boot 3 You can review and adjust the plan before applying changes. Automated Transformations GitHub Copilot app modernization applies targeted changes such as: Updating spring-boot-starter-parent to 3.5.x Migrating imports to jakarta.* Updating dependencies and BOM versions Rewriting removed or deprecated APIs Aligning test dependencies (e.g., JUnit 5) Build / Fix Iteration The agent automatically: Builds the project Captures failures Suggests fixes Applies updates Rebuilds until the project compiles successfully This loop continues until all actionable issues are addressed. Security & Behavior Checks As part of the upgrade, the tool can: Validate CVEs introduced by dependency version changes Surface potential behavior changes Recommend optional fixes Expected Output After running the upgrade for a Spring Boot 2.x project, you should expect: An updated Spring Boot parent in Maven or Gradle Spring Framework 6.x and Jakarta-aligned modules Updated starter dependencies and plugin versions Rewritten imports from javax.* to jakarta.* Updated testing stack A summary file detailing: Versions updated Code edits applied Dependencies changed CVE results Remaining manual review items Developer Responsibilities GitHub Copilot app modernization accelerates technical migration tasks, but final validation still requires developer review, including: Running the full test suite Reviewing custom filters, security configuration, and web components Re-validating integration points Confirming application behavior across runtime environments The tool handles mechanical upgrade work so you can focus on correctness, quality, and functional validation. 
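For a concrete feel of the namespace change, here is a hedged sketch of a JPA entity before and after moving to Spring Boot 3. The Customer entity and its fields are assumptions made for the example; the actual edits in your project depend on which starters and APIs you use:

```java
// Spring Boot 2.x (javax namespace)
// import javax.persistence.Entity;
// import javax.persistence.GeneratedValue;
// import javax.persistence.Id;

// Spring Boot 3.x (Jakarta namespace)
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;

@Entity
public class Customer {

    // Mapping annotations keep the same names and semantics after the upgrade;
    // only the package prefix changes from javax.persistence to jakarta.persistence.
    @Id
    @GeneratedValue
    private Long id;

    private String name;

    protected Customer() { }            // no-arg constructor required by JPA

    public Customer(String name) {
        this.name = name;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
}
```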
Learn more

For setup, prerequisites, and the broader Java upgrade workflow, refer to the official Microsoft Learn guide: Quickstart: Upgrade a Java Project with GitHub Copilot App Modernization

Install GitHub Copilot app modernization for VS Code and IntelliJ IDEA

Modernizing Java EE Applications to Jakarta EE with GitHub Copilot App Modernization
Migrating a Java EE application to Jakarta EE is now a required step as the ecosystem has fully transitioned to the new jakarta.* namespace. This migration affects servlet APIs, persistence, security, messaging, and frameworks built on top of the Jakarta specifications. The changes are mechanical but widespread, and manual migration is slow, error‑prone, and difficult to validate at scale. GitHub Copilot app modernization accelerates this process by analyzing the project, identifying required namespace and dependency updates, and guiding developers through targeted upgrade steps. Supported Upgrade Path GitHub Copilot app modernization supports: Migrating Java EE applications to Jakarta EE (up to Jakarta EE 10) Updating javax.* imports to jakarta.* Aligning dependencies and application servers with Jakarta EE 10 requirements Updating build plugins, BOMs, and libraries impacted by namespace migration Fixing compilation issues and surfacing API incompatibilities Detecting dependency CVEs after migration These capabilities complement the Microsoft Learn guide for upgrading Java projects with GitHub Copilot app modernization. Getting Started Open your Java EE project in Visual Studio Code or IntelliJ IDEA with GitHub Copilot app modernization enabled. Copilot evaluates the project structure, build files, frameworks in use, and introduces a migration workflow tailored to Jakarta EE. Project Analysis The migration begins with a full project scan: Identifies Java EE libraries (javax.*) Detects frameworks depending on older EE APIs (Servlet, JPA, JMS, Security) Flags incompatible versions of application servers and dependencies Determines JDK constraints for Jakarta EE 10 compatibility Analyzes build configuration (Maven/Gradle) and transitive dependencies This analysis forms the basis for the generated migration plan. Migration Plan Generation GitHub Copilot app modernization generates a clear, actionable plan outlining: Required namespace transitions from javax.* to jakarta.* Updated dependency coordinates aligned with Jakarta EE 10 Plugin version updates Adjustments to JDK settings if needed Additional changes for frameworks relying on legacy EE APIs You can review and adjust versions or library targets before applying changes. Automated Transformations After approving the plan, Copilot performs transformation steps: Rewrites imports from javax. to jakarta. Updates dependencies to Jakarta EE 10–compatible coordinates Applies required framework-level changes (JPA, Servlet, Bean Validation, JAX‑RS, CDI) Updates plugin versions aligned with Jakarta EE–based libraries Converts removed or relocated APIs with recommended replacements These transformations rely on OpenRewrite‑based rules surfaced through Copilot app modernization. Build Fix Iteration Copilot iterates through a build‑and‑fix cycle: Runs the build Captures compilation errors introduced by the migration Suggests targeted fixes Applies changes Rebuilds until the project compiles successfully This loop eliminates hours or days of manual mechanical migration work. Security & Behavior Checks After a successful build, Copilot performs additional validation: Flags CVEs introduced by updated or newly resolved dependencies Surfaces potential behavior changes from updated APIs Offers optional fixes for dependency vulnerabilities These checks ensure the migrated application is secure and stable before runtime verification. 
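As an illustration of how mechanical (but widespread) the change is, here is a hedged sketch of a servlet before and after the namespace migration. The HelloServlet class and its endpoint are hypothetical, and the dependency coordinate mentioned in the comment is the commonly used Jakarta artifact rather than something specific to this post:

```java
// Java EE 8 (javax namespace)
// import javax.servlet.ServletException;
// import javax.servlet.annotation.WebServlet;
// import javax.servlet.http.HttpServlet;
// import javax.servlet.http.HttpServletRequest;
// import javax.servlet.http.HttpServletResponse;

// Jakarta EE 9+ (jakarta namespace)
import jakarta.servlet.ServletException;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

import java.io.IOException;

@WebServlet("/hello")
public class HelloServlet extends HttpServlet {

    // The servlet body is untouched; the migration is the package rename plus
    // moving to Jakarta EE compatible dependency coordinates
    // (for example jakarta.servlet:jakarta.servlet-api).
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/plain");
        response.getWriter().println("Hello from Jakarta EE");
    }
}
```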
Expected Output

A Jakarta EE migration with GitHub Copilot app modernization results in:
- Updated imports from javax.* to jakarta.*
- Dependencies aligned with Jakarta EE 10
- Updated Maven or Gradle plugin and library versions
- Rewritten API usage where needed
- Updated tests and validation logic if required
- A summary file containing: versions updated, code edits applied, dependency changes, CVE results, and items requiring manual developer review

Developer Responsibilities

While GitHub Copilot app modernization accelerates the mechanical upgrade, developers remain responsible for:
- Running the full application test suite
- Reviewing security, validation, and persistence behavior changes
- Updating application server configuration (if applicable)
- Re‑verifying integrations with messaging, REST endpoints, and persistence providers
- Confirming semantic correctness post‑migration

Copilot focuses on the mechanical modernization tasks so developers can concentrate on validating runtime behavior and business logic.

Learn More

For setup prerequisites and full upgrade workflow details, refer to the Microsoft Learn guide for upgrading Java projects with GitHub Copilot app modernization: Quickstart: Upgrade a Java Project with GitHub Copilot App Modernization | Microsoft Learn

App Service Easy MCP: Add AI Agent Capabilities to Your Existing Apps with Zero Code Changes
The age of AI agents is here. Tools like GitHub Copilot, Claude, and other AI assistants are no longer just answering questions—they're taking actions, calling APIs, and automating complex workflows. But how do you make your existing applications and APIs accessible to these intelligent agents? At Microsoft Ignite, I teamed up to present session BRK116: Apps, agents, and MCP is the AI innovation recipe, where I demonstrated how you can add agentic capabilities to your existing applications with little to no code changes. Today, I'm excited to share a concrete example of that vision: Easy MCP—a way to expose any REST API to AI agents with absolutely zero code changes to your existing apps. The Challenge: Bridging REST APIs and AI Agents Most organizations have invested years building REST APIs that power their applications. These APIs represent critical business logic, data access patterns, and integrations. But AI agents speak a different language—they use protocols like Model Context Protocol (MCP) to discover and invoke tools. The traditional approach would require you to: Learn the MCP SDK Write new MCP server code Manually map each API endpoint to an MCP tool Deploy and maintain additional infrastructure What if you could skip all of that? Introducing Easy MCP (a proof of concept not associated with the App Service platform) Easy MCP is an OpenAPI-to-MCP translation layer that automatically generates MCP tools from your existing REST APIs. If your API has an OpenAPI (Swagger) specification—which most modern APIs do—you can make it accessible to AI agents in minutes. This means that if you have existing apps with OpenAPI specifications already running on App Service, or really any hosting platform, this tool makes enabling MCP seamless. How It Works Point the gateway at your API's base URL Detect your OpenAPI specification automatically Connect and the gateway generates MCP tools for every endpoint Use the MCP endpoint URL with any MCP-compatible AI client That's it. No code changes. No SDK integration. No manual tool definitions. See It in Action Let's say you have a Todo API running on Azure App Service at `https://my-todo-app.azurewebsites.net`. In just a few clicks: Open the Easy MCP web UI Enter your API URL Click "Detect" to find your OpenAPI spec Click "Connect" Now configure your AI client (like VS Code with GitHub Copilot) to use the gateway's MCP endpoint: { "servers": { "my-api": { "type": "http", "url": "https://my-gateway.azurewebsites.net/mcp" } } } Instantly, your AI assistant can: "What's on my todo list?" "Add 'Review PR #123' to my todos with high priority" "Mark all tasks as complete" All powered by your existing REST API, with zero modifications. The Bigger Picture: Modernization Without Rewrites This approach aligns perfectly with a broader modernization strategy we're enabling on Azure App Service. App Service Managed Instance: Move and Modernize Legacy Apps For organizations with legacy applications—whether they're running on older Windows frameworks, custom configurations, or traditional hosting environments—Azure App Service Managed Instance provides a path to the cloud with minimal friction. You can migrate these applications to a fully managed platform without rewriting code. Easy MCP: Add AI Capabilities Post-Migration Once your legacy applications are running on App Service, Easy MCP becomes the next step in your modernization journey. That 10-year-old internal API? It can now be accessed by AI agents. That legacy inventory system? 
AI assistants can query and update it. No code changes needed.

The modernization path:
1. Migrate legacy apps to App Service with Managed Instance (no code changes)
2. Expose APIs to AI agents with Easy MCP Gateway (no code changes)
3. Empower your organization with AI-assisted workflows

Deploy It Yourself

Easy MCP is open source and ready to deploy. If you already have an existing API to use with this tool, go for it. If you need an app to test with, check out this sample. Make sure you complete the "Add OpenAPI functionality to your web app" step. You don't need to go beyond that.

GitHub Repository: seligj95/app-service-easy-mcp

Deploy to Azure in minutes with Azure Developer CLI:

azd auth login
azd init
azd up

Or run it locally for testing:

npm install
npm run dev
# Open http://localhost:3000

What's Next: Native App Service Integration

Here's where it gets really exciting. We're exploring ways to build this capability directly into the Azure App Service platform so you won't have to deploy a second app or additional resources to get it. Azure API Management recently released a feature that exposes a REST API, including an API on App Service, as an MCP server, which I highly recommend checking out if you're familiar with Azure API Management. But in this case, imagine a future where adding AI agent capabilities to your App Service apps is as simple as flipping a switch in the Azure Portal: no gateway or API Management deployment required, no additional infrastructure or services to manage, and built-in security, monitoring, scaling, and all of the other features you're already using and familiar with on App Service.

Stay tuned for updates as we continue to make Azure App Service the best platform for AI-powered applications. And please share your feedback on Easy MCP—we want to hear how you're using it and what features you'd like to see next as we consider this feature for native integration.

Secure Unique Default Hostnames Now GA for Functions and Logic Apps
We are pleased to announce that Secure Unique Default Hostnames are now generally available (GA) for Azure Functions and Logic Apps (Standard). This expands the security model previously available for Web Apps to the entire App Service ecosystem and provides customers with stronger, more secure, and standardized hostname behavior across all workloads.

Why This Feature Matters

Historically, App Service resources have used a default hostname format such as:

<SiteName>.azurewebsites.net

While straightforward, this pattern introduced potential security risks, particularly in scenarios where DNS records were left behind after deleting a resource. In those situations, a different user could create a new resource with the same name and unintentionally receive traffic or bindings associated with the old DNS configuration, creating opportunities for issues such as subdomain takeover.

Secure Unique Default Hostnames address this by assigning a unique, randomized, region‑scoped hostname to each resource, for example:

<SiteName>-<Hash>.<Region>.azurewebsites.net

This change ensures that:
- No other customer can recreate the same default hostname.
- Apps inherently avoid risks associated with dangling DNS entries.
- Customers gain a more secure, reliable baseline behavior across App Service.

Adopting this model now helps organizations prepare for the long‑term direction of the platform while improving security posture today.

What's New: GA Support for Functions and Logic Apps

With this release, both Azure Functions and Logic Apps (Standard) fully support the Secure Unique Default Hostname capability. This brings these services in line with Web Apps and ensures customers across all App Service workloads benefit from the same secure and consistent default hostname model.

Azure CLI Support

The Azure CLI for Web Apps and Function Apps now includes support for the "--domain-name-scope" parameter. This allows customers to explicitly specify the scope used when generating a unique default hostname during resource creation. Examples:

az webapp create --domain-name-scope {NoReuse, ResourceGroupReuse, SubscriptionReuse, TenantReuse}
az functionapp create --domain-name-scope {NoReuse, ResourceGroupReuse, SubscriptionReuse, TenantReuse}

Including this parameter ensures that deployments consistently use the intended hostname scope and helps teams prepare their automation and provisioning workflows for the secure unique default hostname model.

Why Customers Should Adopt This Now

While existing resources will continue to function normally, customers are strongly encouraged to adopt Secure Unique Default Hostnames for all new deployments. Early adoption provides several important benefits:
- A significantly stronger security posture.
- Protection against dangling DNS and subdomain takeover scenarios.
- Consistent default hostname behavior as the platform evolves.
- Alignment with recommended deployment practices and modern DNS hygiene.

This feature represents the current best practice for hostname management on App Service, and adopting it now helps ensure that new deployments follow the most secure and consistent model available.

Recommended Next Steps
- Enable Secure Unique Default Hostnames for all new Web Apps, Function Apps, and Logic Apps.
- Update automation and CLI scripts to include the --domain-name-scope parameter when creating new resources.
- Begin updating internal documentation and operational processes to reflect the new hostname pattern.
Additional Resources

For readers who want to explore the technical background and earlier announcements in more detail, the following articles offer deeper coverage of unique default hostnames:

Public Preview: Creating Web App with a Unique Default Hostname
This article introduces the foundational concepts behind unique default hostnames. It explains why the feature was created, how the hostname format works, and provides examples and guidance for enabling the feature when creating new resources.

Secure Unique Default Hostnames: GA on App Service Web Apps and Public Preview on Functions
This article provides the initial GA announcement for Web Apps and early preview details for Functions. It covers the security benefits, recommended usage patterns, and guidance on how to handle existing resources that were created without unique default hostnames.

Conclusion

Secure Unique Default Hostnames now provide a more secure and consistent default hostname model across Web Apps, Function Apps, and Logic Apps. This enhancement reduces DNS‑related risks and strengthens application security, and organizations are encouraged to adopt this feature as the standard for new deployments.

From Vibe Coding to Working App: How SRE Agent Completes the Developer Loop
The Most Common Challenge in Modern Cloud Apps

There's a category of bugs that drives engineers crazy: multi-layer infrastructure issues. Your app deploys successfully. Every Azure resource shows "Succeeded." But the app fails at runtime with a vague error like Login failed for user ''. Where do you even start? You're checking the Web App, the SQL Server, the VNet, the private endpoint, the DNS zone, the identity configuration... and each one looks fine in isolation. The problem is how they connect, and that's invisible in the portal.

Networking issues are especially brutal. The error says "Login failed" but the actual causes could be DNS, firewall, identity, or all three. The symptom and the root causes are in completely different resources. Without deep Azure networking knowledge, you're just clicking around hoping something jumps out.

Now imagine you vibe coded the infrastructure. You used AI to generate the Bicep, deployed it, and moved on. When it breaks, you're debugging code you didn't write and configuring resources you don't fully understand. This is where I wanted AI to help: not just to build, but to debug.

Enter SRE Agent + Coding Agent

Here's what I used:

| Layer | Tool | Purpose |
|-------|------|---------|
| Build | VS Code Copilot Agent Mode + Claude Opus | Generate code, Bicep, deploy |
| Debug | Azure SRE Agent | Diagnose infrastructure issues and create a developer issue with suggested fixes in source code (app code and IaC) |
| Fix | GitHub Coding Agent | Create PRs with code and IaC fixes from the GitHub issue created by SRE Agent |

Copilot builds. SRE Agent debugs. Coding Agent fixes.

What I Built

I used VS Code Copilot in Agent Mode with Claude Opus to create a .NET 8 Web App connected to Azure SQL via private endpoint:
- Private networking (no public exposure)
- Entra-only authentication
- Managed identity (no secrets)

Deployed with azd up. All green. Then I tested the health endpoint:

$ curl https://app-tsdvdfdwo77hc.azurewebsites.net/health/sql
{"status":"unhealthy","error":"Login failed for user ''.","errorType":"SqlException"}

Deployment succeeded. App failed. One error.

How I Fixed It: Step by Step

Step 1: Create SRE Agent with Azure Access
I created an SRE Agent with read access to my Azure subscription. You can scope it to specific resource groups. The agent builds a knowledge graph of your resources and their dependencies, visible in the Resource Mapping view below.

Step 2: Connect GitHub to SRE Agent Using the GitHub MCP Server
I connected the GitHub MCP server so the agent could read my repository and create issues.

Step 3: Create a Sub-Agent to Analyze Source Code
I created a sub-agent for analyzing source code using GitHub MCP tools. This lets SRE Agent understand not just Azure resources, but also the Bicep and source code files that created them.

"you are expert in analyzing source code (bicep and app code) from github repos"

Step 4: Invoke the Sub-Agent to Analyze the Error
In the SRE Agent chat, I invoked the sub-agent to diagnose the error I received from my app endpoint. It correlated the runtime error with the infrastructure configuration.

Step 5: Watch the SRE Agent Think and Reason
SRE Agent analyzed the error by tracing code in Program.cs, Bicep configurations, and Azure resource relationships: Web App, SQL Server, VNet, private endpoint, DNS zone, and managed identity. Its reasoning process worked through each layer, eliminating possibilities one by one until it identified the root causes.
Step 6: Agent Creates GitHub Issue
Based on its analysis, SRE Agent summarized the root causes and suggested fixes in a GitHub issue.

Root Causes:
- Private DNS Zone missing VNet link
- Managed identity not created as SQL user

Suggested Fixes:
- Add virtualNetworkLinks resource to Bicep
- Add SQL setup script to create the user with db_datareader and db_datawriter roles

Step 7: Merge the PR from Coding Agent
Assign the GitHub issue to Coding Agent, which then creates a PR with the fixes. I just reviewed the fix. It made sense and I merged it. Redeployed with azd up, ran the SQL script:

curl -s https://app-tsdvdfdwo77hc.azurewebsites.net/health/sql | jq .
{
  "status": "healthy",
  "database": "tododb",
  "server": "tcp:sql-tsdvdfdwo77hc.database.windows.net,1433",
  "message": "Successfully connected to SQL Server"
}

🎉 From error to fix in minutes, without manually debugging a single Azure resource.

Why This Matters

If you're a developer building and deploying apps to Azure, SRE Agent changes how you work:
- You don't need to be a networking expert. SRE Agent understands the relationships between Azure resources: private endpoints, DNS zones, VNet links, managed identities. It connects dots you didn't know existed.
- You don't need to guess. Instead of clicking through the portal hoping something looks wrong, the agent systematically eliminates possibilities like a senior engineer would.
- You don't break your workflow. SRE Agent suggests fixes in your Bicep and source code, not portal changes. Everything stays version controlled. Deployed through pipelines. No hot fixes at 2 AM.
- You close the loop. AI helps you build fast. Now AI helps you debug fast too.

Try It Yourself

Do you vibe code your app, your infrastructure, or both? How do you debug when things break? Here's a challenge: Vibe code a todo app with a Web App, VNet, private endpoint, and SQL database. "Forget" to link the DNS zone to the VNet. Deploy it. Watch it fail. Then point SRE Agent at it and see how it identifies the root cause, creates a GitHub issue with the fix, and hands it off to Coding Agent for a PR. Share your experience. I'd love to hear how it goes.

Learn More
- Azure SRE Agent documentation
- Azure SRE Agent blogs
- Azure SRE Agent community
- Azure SRE Agent home page
- Azure SRE Agent pricing

Extend SRE Agent with MCP: Build an Agentic Workflow to Triage Customer Issues
Your inbox is full. GitHub issues piling up. "App not working." "How do I configure alerts?" "Please add dark mode." You open each one, figure out what it is, ask for more info, add labels, route to the right team. An hour later, you're still sorting issues. Sound familiar? The Triage Tax Every L1 support engineer, PM, and on-call developer who's handled customer issues knows this pain. When tickets come in, you're not solving problems, you're sorting them. Read the issue. Is it a bug or a question? Check the docs. Does this feature exist? Ask for more info. Wait two days. Re-triage. Add labels. Route to engineering. It's tedious. It requires judgment, you need to understand the product, know what info is needed, check documentation. And honestly? It's work that nobody volunteers for but someone has to do. In large organizations, it gets even more complex. The issue doesn't just need to be triaged, it needs to be routed to the right engineering team. Is this an auth issue? Frontend? Backend? Infrastructure? A wrong routing decision means delays, re-assignments, and frustrated customers. What if an AI agent could do this for you? Enter Azure SRE Agent + MCP Here's what I built: I gave SRE Agent access to my GitHub and PagerDuty accounts via MCP, uploaded my triage rubric as a markdown file, and set it to run twice a day. No more reading every ticket manually. No more asking the same "please provide more info" questions. No more morning triage sessions. What My Setup Looks Like My app's customer issues come in through GitHub. My team uses PagerDuty to track bugs and incidents. So I connected both via MCP to the SRE Agent. I also uploaded my triage logic as a .md file on how to classify issues, what info is required for each category, which labels to use, which team handles what. And since I didn't want to run this workflow manually, I set up a scheduled task to trigger it twice a day. Now it just runs. I verify its work if I want to. What the Agent Does Fetches all open, unlabeled GitHub issues Reads each issue and classifies it (bug, doc question, feature request) Checks if required info is present Posts a comment asking for details if needed, or acknowledges the issue Adds appropriate labels Creates a PagerDuty incident for bugs ready for engineering Moves to the next issue How I Built It: Step by Step Let me walk you through exactly how I set this up inside SRE Agent. Step 1: Create an SRE Agent I created a new SRE Agent in the Azure portal. Since this workflow triages GitHub issues and not Azure resources, I didn't need to configure any Azure resource groups or subscriptions. Just an agent. Step 2: Connect MCP Servers I added two MCP servers to give the agent access to my tools: GitHub MCP– Fetch issues, post comments, add labels PagerDuty MCP – Create incidents for bugs that need dev team's attention MCP (Model Context Protocol) lets you bring any API into the agent. If your tool has an API, you can connect it. Step 3: Create Subagents I created two focused subagents, each with a specific job and only the tools it needs: GitHub Issue Triager "You are expert in triaging GitHub issues, classifying them into categories such as user needs to supply additional information, bug, documentation question, or feature request. Use the knowledge base to search for the right document that helps you with performing this triaging. Perform all actions autonomously without waiting for user input. Hand off to Incident Creator for the issues you classified as bugs." 
Tools: GitHub MCP (issues, labels, comments)

Incident Creator
"You are expert in managing incidents in PagerDuty, listing services, incidents, creating incidents with all details. Once done, hand off back to GitHub Issue Triager."
Tools: PagerDuty MCP (services, incidents)

The handoff between them creates a workflow. They collaborate without human involvement.

Step 4: Add Your Knowledge
I uploaded my triage logic as a .md file to the agent's knowledge base. This is my rubric, my mental model for how to triage issues:
- How do I classify bugs vs. doc questions vs. feature requests?
- What info is required for each category?
- What labels do I use?
- When should an incident be created?
- Which team handles which type of issue?

I wrote it down the way I'd explain it to a new teammate. The agent searches and follows it.

Step 5: Add a Scheduled Task
I didn't want to trigger this workflow manually every time. SRE Agent supports scheduled tasks: workflows that run automatically on a cadence. I set up a trigger to run twice a day, morning and evening. Now the workflow is fully automated.

Here is the end-to-end automated agentic workflow to triage customer tickets.

Why MCP Matters

Every team uses different tools. Maybe your customer issues live in Zendesk, incidents go to ServiceNow, and you use Jira or Azure DevOps. SRE Agent doesn't lock you in. With MCP, you connect to whatever tools you already use. The agent orchestrates across them. That's the extensibility model: your tools, your workflow, orchestrated by the agent.

The Result

Before: 2 hours every morning sorting tickets.
After: By the time anyone logs in, issues are labeled, missing-info requests are posted, urgent bugs have incidents, and feature requests are acknowledged. Your team can finally focus on the complex stuff, not sorting tickets.

Why This Matters
- Faster response times. Issues get acknowledged in minutes, not days.
- Consistent classification. No "this should have been a P1" moments. No tickets bouncing between teams.
- Happier customers. They get a response immediately, even if it's just "we're looking into it."
- Focus on what matters. Your team should be solving problems, not sorting them.

The Bottom Line

Triage isn't the job, it's the tax on the job. It quietly eats the hours your team could spend building, debugging, and shipping. You don't need to build a custom triage bot. You don't need to wire up webhooks and write glue code. You give the SRE agent your tools, your logic, and a schedule, and it handles the sorting. Use GitHub? Connect GitHub. Use Zendesk? Connect Zendesk. PagerDuty, ServiceNow, Jira - whatever your team runs on, the agent meets you there. Stop sorting tickets. Start shipping.

A Few Tips
- Test MCP endpoints before configuring them in the SRE agent
- Give each subagent only the tools it needs; don't enable everything
- Start read-only until you trust the classification, then enable comments

Do You Still Want to Triage Issues Manually?

What tools does your team use to track customer-reported issues and incidents? Let us know in the comments, we'd love to hear how you'd use this workflow with your stack. Is triage your most toilsome workflow, or is there something even worse eating your team's time? Let us know in the comments.

Build Long-Running AI Agents on Azure App Service with Microsoft Agent Framework
UPDATE 10/22/2025: An alternative implementation of this sample app has been added to this blog post. The alternate version uses a WebJob for background processing instead of an in-process hosted service. WebJobs are a great alternative for background processing in App Service, providing better separation of concerns, independent restarts, and dedicated logging. To learn more about WebJobs on App Service, see the Azure App Service WebJobs documentation. The AI landscape is evolving rapidly, and with the introduction of Microsoft Agent Framework, developers now have a powerful platform for building sophisticated AI agents that go far beyond simple chat completions. These agents can execute complex, multi-step workflows with persistent state, conversation threads, and structured execution—capabilities that are essential for production AI applications. Today, we're excited to share how Azure App Service provides an excellent platform for running Agent Framework workloads, especially those involving long-running operations. Let's explore why App Service is a great choice and walk through a practical example. 🔗 Quick link to sample app GitHub repo: https://github.com/Azure-Samples/app-service-agent-framework-travel-agent-dotnet 🔗 Quick link to WebJob sample app GitHub repo: https://github.com/Azure-Samples/app-service-agent-framework-travel-agent-dotnet-webjob The Challenge: Long-Running Agent Framework Flows Agent Framework enables AI agents to perform complex tasks that can take significant time to complete: Multi-turn reasoning: Iterative calls to large language models (LLMs) where each response informs the next prompt Tool integration: Function calling and external API interactions for real-time data Complex processing: Budget calculations, content optimization, multi-phase generation Persistent context: Maintaining conversation state across multiple interactions These workflows often take 30 seconds to several minutes to complete—far too long for synchronous HTTP request handling. Traditional web applications run into several constraints: ⏱️ Timeout Limitations: HTTP requests have timeout constraints (typically 30-230 seconds) ⚠️ Connection Issues: Clients may disconnect due to network interruptions or browser navigation 📈 Scalability Concerns: Long-running requests block worker threads and don't survive app restarts 🎯 Poor User Experience: Users see endless loading spinners with no progress feedback The Solution: Async Pattern with App Service Azure App Service provides a robust solution through the asynchronous request-reply pattern combined with background processing: API immediately returns (202 Accepted) with a task ID Background worker processes the Agent Framework workflow Client polls for status with real-time progress updates Durable state storage (Cosmos DB) maintains task status and results This pattern ensures: ✅ No HTTP timeouts—API responds in milliseconds ✅ Resilient to restarts—state survives deployments and scale events ✅ Progress tracking—users see real-time updates (10%, 45%, 100%) ✅ Better scalability—background workers process independently NOTE! This pattern can be implemented with either an in-process BackgroundService or as a separate WebJob process. Deployment Patterns: BackgroundService vs WebJob The following compares the two deployment options you have for this implementation. 
BackgroundService Pattern: ✅ Simpler deployment (single project) ✅ Shared process and memory ✅ Good for moderate workloads ⚠️ API and worker restart together WebJob Pattern (alternative): ✅ Separate processes (API + WebJob) ✅ Independent restart without API downtime ✅ Dedicated WebJob monitoring in portal ✅ Better for production operations ⚠️ Slightly more complex deployment (manual WebJob upload) Either of these options are a great way to help you get started with implementing long-running processes on App Service. To learn more about WebJobs on App Service, see the Azure App Service WebJobs documentation. Rapid Innovation Support The AI landscape is changing at an unprecedented pace. New models, frameworks, and capabilities are released constantly. Azure App Service's managed platform ensures your applications can adapt quickly without infrastructure rewrites: Framework Updates: Deploy new Agent Framework SDK versions like any application update Model Upgrades: Switch between GPT-4, GPT-4o, or future models with configuration changes Scaling Patterns: Start with combined API+worker, split into separate apps as needs grow New Capabilities: Integrate emerging AI services without changing hosting infrastructure App Service handles the platform complexity so you can focus on building great AI experiences. Sample Application: AI Travel Planner To demonstrate this pattern, we've built a Travel Planner application that uses Agent Framework to generate detailed, multi-day travel itineraries. The agent performs complex reasoning including: Researching destination attractions and activities Optimizing daily schedules based on location proximity Calculating detailed budget breakdowns Generating personalized travel tips and recommendations The entire application runs on a single P0v4 App Service with both the API and background worker combined—showcasing App Service's flexibility for hosting diverse workload patterns in one deployment. Key Architecture Components Azure App Service (P0v4 Premium) Hosts both REST API and background worker in a single app "Always On" feature keeps background worker running continuously Managed identity for secure, credential-less authentication Azure Service Bus Decouples API from long-running Agent Framework processing Reliable message delivery with automatic retries Dead letter queue for error handling Azure Cosmos DB Stores task status with real-time progress updates Automatic 24-hour TTL for cleanup Rich query capabilities for complex itinerary data Azure AI Foundry Hosts persistent agents with conversation threads Structured execution with Agent Framework runtime GPT-4o model for intelligent travel planning One of the powerful features of using Azure AI Foundry with Agent Framework is the ability to inspect agents and conversation threads directly in the Azure portal. This provides valuable visibility into what's happening during execution. Viewing Agents and Threads in Azure AI Foundry When you submit a travel plan request, the application creates an agent in Azure AI Foundry. You can navigate to your AI Foundry project in the Azure portal to see: Agents The application creates an agent for each request Important: Agents are **automatically deleted** after the itinerary is generated to keep your project clean Tip: You'll need to be quick! 
Navigate to Azure AI Foundry right after submitting a request to see the agent in action Once processing completes, the agent is removed as part of the cleanup process Conversation Threads Unlike agents, threads persist even after the agent completes You can view the complete conversation history at any time See the exact prompts sent to the model and the responses generated Useful for debugging, understanding agent behavior, and improving prompts The ephemeral nature of agents (created per request, deleted after completion) keeps your Azure AI Foundry project clean while the persistent threads give you full traceability of every interaction. Alternative Architecture: WebJob Pattern The alternate version of this app uses a WebJob for background processing instead of an in-process hosted service. However, just a single App Service is still required. WebJobs are a great alternative for background processing in App Service, providing better separation of concerns, independent restarts, and dedicated logging. To learn more about WebJobs on App Service, see the Azure App Service WebJobs documentation. Get Started Today The complete Travel Planner application is available as a reference implementation so you can quickly get started building your own apps with Agent Framework on App Service. Try one or both of these today! 🔗 GitHub Repository for background process version: https://github.com/Azure-Samples/app-service-agent-framework-travel-agent-dotnet 🔗 GitHub Repository for WebJob version: https://github.com/Azure-Samples/app-service-agent-framework-travel-agent-dotnet-webjob The repo includes: Complete .NET 9 source code with Agent Framework integration Infrastructure as Code (Bicep) for automated deployment Web UI with real-time progress tracking Comprehensive README with deployment instructions Deploy in minutes: git clone https://github.com/Azure-Samples/app-service-agent-framework-travel-agent-dotnet.git cd app-service-agent-framework-travel-agent-dotnet azd auth login azd up IMPORTANT! For the WebJob version, you will also need to manually deploy the WebJob. See the instructions in the README to learn how to do this. Key Takeaways ✅ Agent Framework enables sophisticated AI agents beyond simple chat completions ✅ Long-running workflows (30s-minutes) require async patterns to avoid timeouts ✅ App Service provides a simple, cost-effective platform for these workloads ✅ Async request-reply pattern with Service Bus + Cosmos DB ensures reliability ✅ Rapid innovation in AI is supported by App Service's adaptable platform Whether you're building travel planners, document processors, research assistants, or other AI-powered applications, Azure App Service gives you the flexibility and reliability you need—without the complexity of container orchestration or function programming models. What's Next? Build on This Foundation This Travel Planner is just the starting point—a foundation to help you understand the patterns and architecture. Agent Framework is designed to grow with your needs, making it easy to add sophisticated capabilities with minimal effort: 🛠️ Add Tool Calling Connect your agent to real-time APIs for weather, flight prices, hotel availability, and actual booking systems. Agent Framework's built-in tool calling makes this straightforward. 🤝 Implement Multi-Agent Systems Create specialized agents (flight expert, hotel specialist, activity planner) that collaborate to build comprehensive travel plans. Agent Framework handles the orchestration. 
🧠 Enhance with RAG
Add retrieval-augmented generation to give your agent deep knowledge of destinations, local customs, and insider tips from your own content library.

📊 Expand Functionality
- Real-time pricing and availability
- Interactive refinement based on user feedback
- Personalized recommendations from past trips
- Multi-language support for global users

The beauty of Agent Framework is that these advanced features integrate seamlessly into the pattern we've built. Start with this sample, explore the Agent Framework documentation, and unlock powerful AI capabilities for your applications!

Learn More
- Microsoft Agent Framework Documentation
- Azure App Service Documentation
- Async Request-Reply Pattern
- Azure App Service WebJobs documentation

Have you built AI agents on App Service? We'd love to hear about your experience! Share your thoughts in the comments below. Questions about Agent Framework on App Service? Drop a comment and our team will help you get started.