web apps
How to set up subdirectory Multisite in WordPress on Azure App Service
WordPress Multisite is a feature of WordPress that enables you to run and manage multiple WordPress websites using the same WordPress installation. Follow these steps to set up Multisite in your WordPress website on App Service...

Announcing the Public Preview of the New App Service Quota Self-Service Experience
Update 9/15/2025: The App Service Quota Self-Service experience has been temporarily taken offline to incorporate feedback received during this public preview. As this is public preview, availability and features are subject to change as we receive and incorporate feedback. We will post another update when the self-serve experience is available once more. In the meantime, if you require assistance, please file a support ticket following the guidance at the bottom of this post in the Filing a Support Ticket section. We appreciate your patience while we work to build the best experience possible for this scenario.

What’s New?

The updated experience introduces a dedicated App Service Quota blade in the Azure portal, offering a streamlined and intuitive interface to:

View current usage and limits across the various SKUs
Set custom quotas tailored to your App Service plan needs

This new experience empowers developers and IT admins to proactively manage resources, avoid service disruptions, and optimize performance.

Quick Reference - Start here!

If your deployment requires quota for ten or more subscriptions, then file a support ticket with problem type Quota following the instructions at the bottom of this post.
If any subscription included in your request requires zone redundancy, then file a support ticket with problem type Quota following the instructions at the bottom of this post.
Otherwise, leverage the new self-service experience to increase your quota automatically.

Self-service Quota Requests

For non-zone-redundant needs, quota alone is sufficient to enable App Service deployment or scale-out. Follow the provided steps to place your request.

1. Navigate to the Quotas resource provider in the Azure portal

2. Select App Service

Navigating the primary interface:

Each App Service VM size is represented as a separate SKU. If the intention is to be able to scale up or down within a specific offering (e.g., Premium v3), then an equivalent number of VMs needs to be requested for each applicable size of that offering (e.g., request 5 instances for both P1v3 and P3v3).
As with other quotas, you can filter by region, subscription, provider, or usage. You can also group the results by usage, quota (App Service VM type), or location (region).
Current usage is represented as App Service VMs. This allows you to quickly identify which SKUs are nearing their quota limits.
Adjustments can be made inline: no need to visit another page. This is covered in detail in the next section.

3. Request quota adjustments

Clicking the pen icon opens a flyout window to capture the quota request. The quota type (App Service SKU) is already populated, along with current usage. Note that your request is not incremental: you must specify the new limit that you wish to see reflected in the portal. For example, to request two additional instances of P1v2 VMs, you would file the request with the new total as the limit. Click submit to send the request for automatic processing.

How quota approvals work:

Immediately upon submitting a quota request, you will see a processing dialog. If the quota request can be automatically fulfilled, then no support request is needed; you should receive confirmation within a few minutes of submission. If the request cannot be automatically fulfilled, then you will be given the option to file a support request with the same information, for example when the requested new limit exceeds what can be automatically granted for the region.
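If you would rather check current usage and limits from a script before deciding whether a ticket is needed, the portal experience sits on top of the Quotas resource provider. Below is a minimal sketch using azure-identity and the ARM REST endpoint; the scope format for App Service SKUs and the api-version are assumptions while the preview's quota API documentation is still being drafted, so treat this as an outline rather than a supported recipe.

```python
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"   # placeholder
region = "westus3"                      # placeholder region

# Assumed scope for App Service quotas under the Microsoft.Quota provider;
# confirm against the forthcoming quota API documentation.
scope = f"subscriptions/{subscription_id}/providers/Microsoft.Web/locations/{region}"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
response = requests.get(
    f"https://management.azure.com/{scope}/providers/Microsoft.Quota/usages",
    params={"api-version": "2023-02-01"},
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
response.raise_for_status()

# Print each quota name alongside its reported usage.
for item in response.json().get("value", []):
    print(item.get("name"), item.get("properties", {}).get("usages"))
```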
4. If applicable, create support ticket

When creating a support ticket, you will need to repopulate the Region and App Service plan details; the new limit has already been populated for you. If you forget the region or SKU that was requested, you can reference them in your notifications pane. If you choose to create a support ticket, then you will interact with the capacity management team for that region. This is a 24x7 service, so requests may be created at any time. Once you have filed the support request, you can track its status via the Help + support dashboard.

Known issues

The self-service quota request experience for App Service is in public preview. Here are some caveats worth mentioning while the team finalizes the release for general availability:

Closing the quota request flyout window will stop meaningful notifications for that request. You can still view the outcome of your quota requests by checking actual quota, but if you want to rely on notifications for alerts, then we recommend leaving the quota request window open for the few minutes that it is processing.
Some SKUs are not yet represented in the quota dashboard. These will be added later in the public preview.
The Activity Log does not currently provide a meaningful summary of previous quota requests and their outcomes. This will also be addressed during the public preview.
As noted in the walkthrough, the new experience does not enable zone-redundant deployments. Quota is an inherently regional construct, and zone-redundant enablement requires a separate step that can only be taken in response to a support ticket being filed.
Quota API documentation is being drafted to enable bulk non-zone-redundant quota requests without requiring you to file a support ticket.

Filing a Support Ticket

If your deployment requires zone redundancy or contains many subscriptions, then we recommend filing a support ticket with issue type "Technical" and problem type "Quota".

We want your feedback!

If you notice any aspect of the experience that does not work as expected, or you have feedback on how to make it better, please use the comments below to share your thoughts!

Building Agent-to-Agent (A2A) Applications on Azure App Service
The world of AI agents is evolving rapidly, with new protocols and frameworks emerging to enable sophisticated multi-agent communication. Google's Agent-to-Agent (A2A) protocol represents one of the most promising approaches for building distributed AI systems that can coordinate tasks across different platforms and services. I'm excited to share how you can leverage Azure App Service to build, deploy, and scale A2A applications. Today, I'll walk you through a practical example that combines Microsoft Semantic Kernel with the A2A protocol to create an intelligent travel planning assistant.

What We Built: An A2A Travel Agent on App Service

I've taken an existing A2A travel planning sample and enhanced it to run seamlessly on Azure App Service. This demonstrates how A2A concepts can be adapted and hosted on one of Azure's platform-as-a-service offerings. What started as a sample implementation has been transformed into a full-featured web application with a modern interface, real-time streaming, and production-ready deployment automation.

🔗 View the complete source code on GitHub

Acknowledgments and Attribution

Before diving into the technical details, I want to give proper credit where it's due. This application was adapted and enhanced from excellent foundational work by the Microsoft Semantic Kernel team and the A2A project community:

Original inspiration: Microsoft DevBlogs - Semantic Kernel A2A Integration
Base implementation: A2A Samples - Semantic Kernel Python Agent

This contribution builds upon these samples to demonstrate how you can take A2A concepts and create a complete, deployable application that runs seamlessly on Azure App Service with enterprise-grade features like managed identity authentication, monitoring, and infrastructure as code.

Why A2A on Azure App Service?

Azure App Service provides the perfect foundation for A2A applications because it handles the infrastructure complexity while giving you the flexibility to implement cutting-edge AI protocols.
Here's what makes this combination powerful:

🚀 Rapid Deployment & Scaling
Deploy A2A agents with a single azd up command
Auto-scaling based on demand without managing servers
Built-in load balancing for high-availability agent endpoints

🔐 Enterprise Security
Managed identity authentication eliminates API key management
Built-in SSL/TLS termination for secure agent communication
Network isolation and private endpoint support for sensitive workloads

🔄 Real-time Capabilities
WebSocket support for streaming A2A protocol responses
Always-on availability for agent discovery and task coordination
Low-latency communication between distributed agents

📊 Observability & Monitoring
Application Insights integration for comprehensive telemetry
Built-in logging and diagnostics for debugging agent interactions
Performance monitoring to optimize multi-agent workflows

Understanding the A2A Travel Agent Architecture

Our sample demonstrates a multi-agent system where a main travel manager coordinates with specialized agents:

┌─────────────────────┐      ┌──────────────────────┐      ┌─────────────────────┐
│ Web Browser         │ ──── │ FastAPI App          │ ──── │ Semantic Kernel     │
│                     │      │                      │      │ Travel Agent        │
│ • Modern UI         │      │ • REST API           │      │                     │
│ • Chat Interface    │      │ • A2A Protocol       │      │ • Currency API      │
│ • Responsive        │      │ • Session Management │      │ • Activity Planning │
└─────────────────────┘      └──────────────────────┘      └─────────────────────┘
                                        │
                                        ▼
                             ┌──────────────────────┐
                             │ A2A Protocol         │
                             │                      │
                             │ • Agent Discovery    │
                             │ • Task Streaming     │
                             │ • Multi-Agent Coord  │
                             └──────────────────────┘

Key Components

TravelManagerAgent: The orchestrator that analyzes user requests and delegates to specialized agents
CurrencyExchangeAgent: Handles real-time currency conversion using the Frankfurter API
ActivityPlannerAgent: Creates personalized itineraries and activity recommendations
A2A Protocol Layer: Manages agent discovery, task coordination, and streaming responses

Practical Example: Multi-Agent Travel Planning

Let's see this in action with a real user scenario:

User Request: "I'm traveling to Seoul, South Korea for 2 days with a budget of $100 USD per day. How much is that in Korean Won, and what can I do and eat?"

A2A Workflow:

TravelManager receives the request and identifies it needs both currency and activity planning
CurrencyExchangeAgent is invoked to fetch live USD→KRW rates
ActivityPlannerAgent generates budget-friendly recommendations
Response Compilation combines results into a comprehensive travel plan
Streaming Delivery provides real-time updates to the user interface

Result: The user gets current exchange rates (~$100 USD = 130,000 KRW), daily budget breakdowns, recommended activities within budget, and restaurant suggestions—all coordinated seamlessly between multiple specialized agents.
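To give a flavour of what the CurrencyExchangeAgent's tooling can look like under the hood, here is a minimal sketch of a Semantic Kernel plugin that calls the free Frankfurter exchange-rate API. The class and function names are illustrative rather than taken from the sample, and it assumes the semantic-kernel and httpx Python packages plus my reading of the Frankfurter endpoint parameters:

```python
import httpx
from semantic_kernel.functions import kernel_function


class CurrencyPlugin:
    """Illustrative plugin a currency agent could expose to the kernel."""

    @kernel_function(description="Convert an amount between two currencies using live Frankfurter rates.")
    def convert(self, amount: float, from_currency: str, to_currency: str) -> str:
        # Frankfurter returns the converted amount under "rates" when an amount is supplied.
        response = httpx.get(
            "https://api.frankfurter.app/latest",
            params={"amount": amount, "from": from_currency, "to": to_currency},
            timeout=10,
        )
        response.raise_for_status()
        converted = response.json()["rates"][to_currency]
        return f"{amount} {from_currency} is approximately {converted} {to_currency}"
```

In the sample, the CurrencyExchangeAgent plays this role, so answers about budgets reflect live rates rather than model guesses.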
Implementation Highlights

Modern Web Interface
The application includes a responsive web interface built with modern HTML/CSS/JavaScript that provides:
Real-time chat with typing indicators
Streaming responses for immediate feedback
Mobile-responsive design
Session management for conversation context

A2A Protocol Compliance
Full implementation of Google's A2A specification including:
Agent Discovery: Structured Agent Cards advertising capabilities
Task Coordination: Multi-agent task delegation and handoffs
Streaming Support: Real-time progress updates during complex workflows
Session Management: Persistent conversation context

Azure-Native Features
Managed Identity: Secure authentication without API key management
Bicep Templates: Infrastructure as code for reproducible deployments
Azure Developer CLI: One-command deployment with azd up

Getting Started: Deploy Your Own A2A Agent

Ready to try it yourself? Here's how to deploy this A2A travel agent to Azure App Service:

Prerequisites
Azure CLI and Azure Developer CLI (azd)
Python 3.10+ for local development
An Azure subscription

Deployment Steps

1. Clone the repository:
git clone https://github.com/Azure-Samples/app-service-a2a-travel-agent
cd app-service-a2a-travel-agent

2. Authenticate with Azure:
azd auth login

3. Deploy to Azure:
azd up

That's it! The Azure Developer CLI will:
Create an Azure App Service and App Service Plan
Deploy an Azure OpenAI resource with GPT-4 model
Configure managed identity authentication
Deploy your application code
Provide the live application URL

Beyond This Example: A2A Possibilities

While Semantic Kernel was chosen for this sample, we recognize that developers have many options for building A2A applications. The A2A protocol is framework-agnostic, and Azure App Service can host agents built with:

LangChain for comprehensive LLM application development
LlamaIndex for data-augmented agent workflows
AutoGen for multi-agent conversation frameworks
Custom implementations using OpenAI, Anthropic, or other AI APIs
Any Python web framework (FastAPI, Django, Flask, etc.)
And many more!

The key insight is that Azure App Service provides a robust, scalable platform that adapts to whatever AI framework or protocol you choose.
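As a concrete illustration of that framework independence, an A2A agent's front door is usually just an agent card served from a well-known path. The minimal FastAPI sketch below shows the idea; the card's fields and the /.well-known/agent.json path follow early versions of the A2A spec and are assumptions to adapt to the spec version and SDK you target:

```python
from fastapi import FastAPI

app = FastAPI()

# Hypothetical agent card; field names loosely follow the A2A spec and may
# differ from the version your client libraries expect.
AGENT_CARD = {
    "name": "Travel Planner",
    "description": "Coordinates currency conversion and activity planning agents.",
    "url": "https://<your-app>.azurewebsites.net/",
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "plan_trip", "name": "Plan a trip", "description": "Builds a budget-aware itinerary."},
    ],
}


@app.get("/.well-known/agent.json")
async def agent_card() -> dict:
    # A2A clients discover an agent by fetching its card from this well-known path.
    return AGENT_CARD
```

Whichever framework generates the card, App Service simply sees another HTTP endpoint, which is what keeps the platform framework-agnostic.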
Why This Matters for the Future

The AI agent ecosystem is evolving rapidly. New protocols, frameworks, and integration patterns emerge regularly. What excites me most about Azure App Service in this context is our platform's adaptability:

🔄 Framework Flexibility: Host basically any AI framework or custom implementation
🌐 Protocol Support: WebSocket, HTTP/2, and custom protocols for agent communication
🔐 Security Evolution: Managed identity and certificate management that scales with new auth patterns
📈 Performance Optimization: Auto-scaling and performance monitoring that adapts to AI workload patterns
🛠️ DevOps Integration: CI/CD pipelines and deployment automation for rapid iteration

Looking Ahead

As A2A protocols mature and new agent frameworks emerge, Azure App Service will continue evolving to support the latest innovations in AI application development. Our goal is to provide a platform where you can focus on building intelligent agent experiences while we handle the infrastructure complexity. We're particularly excited about upcoming enhancements in:

Integration with Azure AI services for even richer agent capabilities
Streamlined deployment patterns for AI application architectures
Improved monitoring and observability for multi-agent workflows

Try It Today

The A2A travel agent sample is available now on GitHub and ready for deployment. Whether you're exploring multi-agent architectures, evaluating A2A protocols, or looking to modernize your AI applications, this sample provides a practical starting point.

🚀 Deploy the A2A Travel Agent

Update 9/16/2025: I created a .NET version of this sample. Feel free to check this one out too! https://github.com/Azure-Samples/app-service-a2a-travel-agent-dotnet

We'd love to hear about the A2A applications you're building on Azure App Service. Share your experiences, challenges, and innovations with the community—together, we're shaping the future of distributed AI systems.

Questions about this sample or Azure App Service for AI applications? Connect with us in the comments below.

Resources:
Azure App Service Documentation
Google A2A Protocol Specification
Microsoft Semantic Kernel
Azure Developer CLI

Unlocking Application Modernisation with GitHub Copilot
AI-driven modernisation is unlocking new opportunities you may not have even considered yet. It's also allowing organisations to re-evaluate previously discarded modernisation attempts that were considered too hard or too complex, or where they simply didn't have the skills or time. During Microsoft Build 2025, we were introduced to the concept of Agentic AI modernisation, and this post from Ikenna Okeke does a great job of summarising the topic - Reimagining App Modernisation for the Era of AI | Microsoft Community Hub. This blog post, however, explores the modernisation opportunities that you may not even have thought of yet, the business benefits, how to start preparing your organisation, empowering your teams, and identifying where GitHub Copilot can help.

I've spent the last 8 months working with customers exploring usage of GitHub Copilot, and want to share what my team members and I have discovered in terms of new opportunities to modernise and transform your applications, bringing some fun back into those migrations! Let's delve into how GitHub Copilot is helping teams update old systems, move processes to the cloud, and achieve results faster than ever before.

Background: The Modernisation Challenge (Then vs Now)

Modernising legacy software has always been hard. In the past, teams faced steep challenges: brittle codebases full of technical debt, outdated languages (think decades-old COBOL or VB6), sparse documentation, and original developers long gone. Integrating old systems with modern cloud services often required specialised skills that were in short supply – for example, check out this fantastic post from Arvi LiVigni (@arilivigni), which talks about migrating from COBOL: "the number of developers who can read and write COBOL isn't what it used to be," making those systems much harder to update. Common pain points included compatibility issues, data migrations, high costs, security vulnerabilities, and the constant risk that any change could break critical business functions. It's no wonder many modernisation projects stalled or were "put off" due to their complexity and risk.

So, what's different now (circa 2025) compared to two years ago? In a word: intelligent AI assistance. Tools like GitHub Copilot have emerged as AI pair programmers that dramatically lower the barriers to modernisation. Arvi's post talks about how only a couple of years ago, developers had to comb through documentation and Stack Overflow for clues when deciphering old code or upgrading frameworks. Today, GitHub Copilot can act like an expert co-developer inside your IDE, ready to explain mysterious code, suggest updates, and even rewrite legacy code in modern languages. This means less time fighting old code and more time implementing improvements. As Arvi says, "nine times out of 10 it gives me the right answer… That speed – and not having to break out of my flow – is really what's so impactful." In short, AI coding assistants have evolved from novel experiments to indispensable tools, reimagining how we approach software updates and cloud adoption. I'd also add from my own experience – the models we were using 12 months ago have already been superseded by far superior models with the ability to ingest larger context and tackle even further complexity.
It's easier to experiment, and fail, bringing more robust outcomes – with such speed to create those proof of concepts, experimentation and failing faster, this has also unlocked the ability to test out multiple hypothesis’ and get you to the most confident outcome in a much shorter space of time. Modernisation is easier now because AI reduces the heavy lifting. Instead of reading the 10,000-line legacy program alone, a developer can ask Copilot to explain what the code does or even propose a refactored version. Rather than manually researching how to replace an outdated library, they can get instant recommendations for modern equivalents. These advancements mean that tasks which once took weeks or months can now be done in days or hours – with more confidence and less drudgery - more fun! The following sections will dive into specific opportunities unlocked by GitHub Copilot across the modernisation journey which you may not even have thought of. Modernisation Opportunities Unlocked by Copilot Modernising an application isn’t just about updating code – it involves bringing everyone and everything up to speed with cloud-era practices. Below are several scenarios and how GitHub Copilot adds value, with the specific benefits highlighted: 1. AI-Assisted Legacy Code Refactoring and Upgrades Instant Code Comprehension: GitHub Copilot can explain complex legacy code in plain English, helping developers quickly understand decades-old logic without scouring scarce documentation. For example, you can highlight a cryptic COBOL or C++ function and ask Copilot to describe what it does – an invaluable first step before making any changes. This saves hours and reduces errors when starting a modernisation effort. Automated Refactoring Suggestions: The AI suggests modern replacements for outdated patterns and APIs, and can even translate code between languages. For instance, Copilot can help convert a COBOL program into JavaScript or C# by recognising equivalent constructs. It also uses transformation tools (like OpenRewrite for Java/.NET) to systematically apply code updates – e.g. replacing all legacy HTTP calls with a modern library in one sweep. Developers remain in control, but GitHub Copilot handles the tedious bulk edits. Bulk Code Upgrades with AI: GitHub Copilot’s App Modernisation capabilities can analyse an entire codebase and generate a detailed upgrade plan, then execute many of the code changes automatically. It can upgrade framework versions (say from .NET Framework 4.x to .NET 6, or Java 8 to Java 17) by applying known fix patterns and even fixing compilation errors after the upgrade. Teams can finally tackle those hundreds of thousand-line enterprise applications – a task that could take multiple years with GitHub Copilot handling the repetitive changes. Technical Debt Reduction: By cleaning up old code and enforcing modern best practices, GitHub Copilot helps chip away at years of technical debt. The modernised codebase is more maintainable and stable, which lowers the long-term risk hanging over critical business systems. Notably, the tool can even scan for known security vulnerabilities during refactoring as it updates your code. In short, each legacy component refreshed with GitHub Copilot comes out safer and easier to work on, instead of remaining a brittle black box. 2. Accelerating Cloud Migration and Azure Modernisation Guided Azure Migration Planning: GitHub Copilot can assess a legacy application’s cloud readiness and recommend target Azure services for each component. 
For instance, it might suggest migrating an on-premises database to Azure SQL, moving file storage to Azure Blob Storage, and converting background jobs to Azure Functions. This provides a clear blueprint to confidently move an app from servers to Azure PaaS. One-Click Cloud Transformations: GitHub Copilot comes with predefined migration tasksthat automate the code changes required for cloud adoption. With one click, you can have the AI apply dozens of modifications across your codebase. For example: File storage: Replace local file read/writes with Azure Blob Storage SDK calls. Email/Comms: Swap out SMTP email code for Azure Communication Services or SendGrid. Identity: Migrate authentication from Windows AD to Azure AD (Entra ID) libraries. Configuration: Remove hard-coded configurations and use Azure App Configuration or Key Vault for secrets. GitHub Copilot performs these transformations consistently, following best practices (like using connection strings from Azure settings). After applying the changes, it even fixes any compile errors automatically, so you’re not left with broken builds. What used to require reading countless Azure migration guides is now handled in minutes. Automated Validation & Deployment: Modernisation doesn’t stop at code changes. GitHub Copilot can also generate unit tests to validate that the application still behaves correctly after the migration. It helps ensure that your modernised, cloud-ready app passes all its checks before going live. When you’re ready to deploy, GitHub Copilot can produce the necessary Infrastructure-as-Code templates (e.g. Azure Resource Manager Bicep files or Terraform configs) and even set up CI/CD pipeline scripts for you. In other words, the AI can configure the Azure environment and deployment process end-to-end. This dramatically reduces manual effort and error, getting your app to the cloud faster and with greater confidence. Integrations: GitHub Copilot also helps tackle larger migration scenarios that were previously considered too complex. For example, many enterprises want to retire expensive proprietary integration platforms like MuleSoft or Apigee and use Azure-native services instead, but rewriting hundreds of integration workflows was daunting. Now, GitHub Copilot can assist in translating those workflows: for instance, converting an Apigee API proxy into an Azure API Management policy, or a MuleSoft integration into an Azure Logic App. Multi-Cloud Migrations: if you plan to consolidate from other clouds into Azure, GitHub Copilot can suggest equivalent Azure services and SDK calls to replace AWS or GCP-specific code. These AI-assisted conversions significantly cut down the time needed to reimplement functionality on Azure. The business impact can be substantial. By lowering the effort of such migrations, GitHub Copilot makes it feasible to pursue opportunities that deliver big cost savings and simplification. 3. Boosting Developer Productivity and Quality Instant Unit Tests (TDD Made Easy): Writing tests for old code can be tedious, but GitHub Copilot can generate unit test cases on the fly. Developers can highlight an existing function and ask Copilot to create tests; it will produce meaningful test methods covering typical and edge scenarios. This makes it practical to apply test-driven development practices even to legacy systems – you can quickly build a safety net of tests before refactoring. By catching bugs early through these AI-generated tests, teams gain confidence to modernise code without breaking things. 
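As a flavour of what that looks like in practice, here is the kind of scaffold Copilot might produce for a small legacy helper; both the helper and the tests are hypothetical and exist purely to illustrate the pattern:

```python
import pytest


def normalise_order_total(raw_total: str) -> float:
    """Hypothetical legacy helper: parses totals like '1.234,56' (EU format) into a float."""
    cleaned = raw_total.strip().replace(".", "").replace(",", ".")
    return round(float(cleaned), 2)


# The kind of safety-net tests Copilot can scaffold before any refactoring starts.
def test_parses_european_format():
    assert normalise_order_total("1.234,56") == 1234.56


def test_strips_surrounding_whitespace():
    assert normalise_order_total("  99,90 ") == 99.90


def test_rejects_empty_input():
    with pytest.raises(ValueError):
        normalise_order_total("")
```

With a handful of tests like these in place, the refactoring that follows is far less nerve-wracking.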
It essentially injects quality into the process from the start, which is crucial for successful modernisation. DevOps Automation: GitHub Copilot helps modernise your build and deployment process as well. It can draft CI/CD pipeline configurations, Dockerfiles, Kubernetes manifests, and other DevOps scripts by leveraging its knowledge of common patterns. For example, when setting up a GitHub Actions workflow to deploy your app, GitHub Copilot will autocomplete significant parts (like build steps, test runs, deployment jobs) based on the project structure. This not only saves time but also ensures best practices (proper caching, dependency installation, etc.) are followed by default. Microsoft even provides an extension where you can describe your Azure infrastructure needs in plain language and have GitHub Copilot generate the corresponding templates and pipeline YAML. By automating these pieces, teams can move to cloud-based, automated deployments much faster. Behaviour-Driven Development Support: Teams practicing BDD write human-readable scenarios (e.g. using Gherkin syntax) describing application behaviour. GitHub Copilot’s AI is adept at interpreting such descriptions and suggesting step definition code or test implementations to match. For instance, given a scenario “When a user with no items checks out, then an error message is shown,” GitHub Copilot can draft the code for that condition or the test steps required. This helps bridge the gap between non-technical specifications and actual code. It makes BDD more efficient and accessible, because even if team members aren’t strong coders, the AI can translate their intent into working code that developers can refine. Quality and Consistency: By using AI to handle boilerplate and repetitive tasks, developers can focus more on high-value improvements. GitHub Copilot’s suggestions are based on a vast corpus of code, which often means it surfaces well-structured, idiomatic patterns. Starting from these suggestions, developers are less likely to introduce errors or reinvent the wheel, which leads to more consistent code quality across the project. The AI also often reminds you of edge cases (for example, suggesting input validation or error handling code that might be missed), contributing to a more robust application. In practice, many teams find that adopting GitHub Copilot results in fewer bugs and quicker code reviews, as the code is cleaner on the first pass. It’s like having an extra set of eyes on every pull request, ensuring standards are met. Business Benefits of AI-Powered Modernisation Bringing together the technical advantages above, what’s the payoff for the business and stakeholders? Modernising with GitHub Copilot can yield multiple tangible and intangible benefits: Accelerated Time-to-Market: Modernisation projects that might have taken a year can potentially be completed in a few months, or an upgrade that took weeks can be done in days. This speed means you can deliver new features to customers sooner and respond faster to market changes. It also reduces downtime or disruption since migrations happen more swiftly. Cost Savings: By automating repetitive work and reducing the effort required from highly paid senior engineers, GitHub Copilot can trim development costs. Faster project completion also means lower overall project cost. Additionally, running modernised apps on cloud infrastructure (with updated code) often lowers operational costs due to more efficient resource usage and easier maintenance. 
There’s also an opportunity cost benefit: developers freed up by Copilot can work on other value-adding projects in parallel. Improved Quality & Reliability: GitHub Copilot’s contributions to testing, bug-fixing, and even security (like patching known vulnerabilities during upgrades) result in more robust applications. Modernised systems have fewer outages and security incidents than shaky legacy ones. Stakeholders will appreciate that with GitHub Copilot, modernisation doesn’t mean “trading one set of bugs for another” – instead, you can increase quality as you modernise (GitHub’s research noted higher code quality when using Copilot, as developers are less likely to introduce errors or skip tests). Business Agility: A modernised application (especially one refactored for cloud) is typically more scalable and adaptable. New integrations or features can be added much faster once the platform is up-to-date. GitHub Copilot helps clear the modernisation hurdle, after which the business can innovate on a solid, flexible foundation (for example, once a monolith is broken into microservices or moved to Azure PaaS, you can iterate on it much faster in the future). AI-assisted modernisation thus unlocks future opportunities (like easier expansion, integrations, AI features, etc.) that were impractical on the legacy stack. Employee Satisfaction and Innovation: Developer happiness is a subtle but important benefit. When tedious work is handled by AI, developers can spend more time on creative tasks – designing new features, improving user experience, exploring new technologies. This can foster a culture of innovation. Moreover, being seen as a company that leverages modern tools (like AI Co-pilots) helps attract and retain top tech talent. Teams that successfully modernise critical systems with Copilot will gain confidence to tackle other ambitious projects, creating a positive feedback loop of improvement. To sum up, GitHub Copilot acts as a force-multiplier for application modernisation. It enables organisations to do more with less: convert legacy “boat anchors” into modern, cloud-enabled assets rapidly, while improving quality and developer morale. This aligns IT goals with business goals – faster delivery, greater efficiency, and readiness for the future. Call to Action: Embrace the Future of Modernisation GitHub Copilot has proven to be a catalyst for transforming how we approach legacy systems and cloud adoption. If you’re excited about the possibilities, here are next steps and what to watch for: Start Experimenting: If you haven’t already, try GitHub Copilot on a sample of your code. Use Copilot or Copilot Chat to explain a piece of old code or generate a unit test. Seeing it in action on your own project can build confidence and spark ideas for where to apply it. Identify a Pilot Project: Look at your application portfolio for a candidate that’s ripe for modernisation – maybe a small legacy service that could be moved to Azure, or a module that needs a refactor. Use GitHub Copilot to assess and estimate the effort. Often, you’ll find tasks once deemed “too hard” might now be feasible. Early successes will help win support for larger initiatives. Stay Tuned for Our Upcoming Blog Series: This post is just the beginning. In forthcoming posts, we’ll dive deeper into: Setting Up Your Organisation for Copilot Adoption: Practical tips on preparing your enterprise environment – from licensing and security considerations to training programs. 
We'll discuss best practices (like running internal awareness campaigns, defining success metrics, and creating Copilot champions in your teams) to ensure a smooth rollout.
Empowering Your Colleagues: How to foster a culture that embraces AI assistance. This includes enabling continuous learning, sharing prompt techniques and knowledge bases, and addressing any scepticism. We'll cover strategies to support developers in using Copilot effectively, so that everyone from new hires to veteran engineers can amplify their productivity.
Identifying High-Impact Modernisation Areas: Guidance on spotting where GitHub Copilot can add the most value. We'll look at different domains – code, cloud, tests, data – and how to evaluate opportunities (for example, using telemetry or feedback to find repetitive tasks suited for AI, or legacy components with high ROI if modernised).
Engage and Share: As you start leveraging Copilot for modernisation, share your experiences and results. Success stories (even small wins like "GitHub Copilot helped reduce our code review times" or "we migrated a component to Azure in 1 sprint") can build momentum within your organisation and the broader community. We invite you to discuss and ask questions in the comments or in our tech community forums.

Take a look at the new App Modernisation Guidance—a comprehensive, step-by-step playbook designed to help organisations:

Understand what to modernise and why
Migrate and rebuild apps with AI-first design
Continuously optimise with built-in governance and observability

Modernisation is a journey, and AI is the new compass and co-pilot to guide the way. By embracing tools like GitHub Copilot, you position your organisation to break through modernisation barriers that once seemed insurmountable. The result is not just updated software, but a more agile, cloud-ready business and a happier, more productive development team. Now is the time to take that step. Empower your team with Copilot, and unlock the full potential of your applications and your developers. Stay tuned for more insights in our next posts, and let's modernise what's possible together!

Build Multi-Agent AI Systems on Azure App Service
Introduction: The Evolution of AI-Powered App Service Applications

Over the past few months, we've been exploring how to supercharge existing Azure App Service applications with AI capabilities. If you've been following along with this series, you've seen how we can quickly integrate AI Foundry agents with MCP servers and host remote MCP servers directly on App Service. Today, we're taking the next leap forward by demonstrating how to build sophisticated multi-agent systems that leverage connected agents, Model Context Protocol (MCP), and OpenAPI tools - all running on Azure App Service's Premium v4 tier with .NET Aspire for enhanced observability and cloud-native development experience.

💡 Want the full technical details? This blog provides an overview of the key concepts and capabilities. For comprehensive setup instructions, architecture deep-dives, performance considerations, debugging guidance, and detailed technical documentation, check out the complete README on GitHub.

What Makes This Sample Special?

This fashion e-commerce demo showcases several cutting-edge technologies working together:

🤖 Multi-Agent Architecture with Connected Agents
Unlike single-agent systems, this sample implements an orchestration pattern where specialized agents work together:
Main Orchestrator: Coordinates workflow and handles inventory queries via MCP tools
Cart Manager: Specialized in shopping cart operations via OpenAPI tools
Fashion Advisor: Provides expert styling recommendations
Content Moderator: Ensures safe, professional interactions

🔧 Advanced Tool Integration
MCP Tools: Real-time connection to external inventory systems using the Model Context Protocol
OpenAPI Tools: Direct agent integration with your existing App Service APIs
Connected Agent Tools: Seamless agent-to-agent communication with automatic orchestration

⚡ .NET Aspire Integration
Enhanced development experience with built-in observability
Simplified cloud-native application patterns
Real-time monitoring and telemetry (when developing locally)

🚀 Premium v4 App Service Tier
Latest App Service performance capabilities
Optimized for modern cloud-native workloads
Enhanced scalability for AI-powered applications

Key Technical Innovations

Connected Agent Orchestration: Your application communicates with a single main agent, which automatically coordinates with specialist agents as needed. No changes to your existing App Service code required. A sketch of this wiring appears below.
Dual Tool Integration: This sample demonstrates both MCP tools for external system connectivity and OpenAPI tools for direct API integration.
Zero-Infrastructure Overhead: Agents work directly with your existing App Service APIs and external endpoints - no additional infrastructure deployment needed.
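To make the connected-agent pattern concrete, here is a hedged sketch based on the Azure AI Agents Python SDK. The agent names mirror the sample, but the exact client calls and parameter names vary by SDK version, so treat this as an outline rather than the sample's actual setup script:

```python
import os
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.agents.models import ConnectedAgentTool

# Assumes PROJECT_ENDPOINT and MODEL_DEPLOYMENT_NAME point at your AI Foundry project.
project = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

# A specialist agent, for example the fashion advisor.
fashion_advisor = project.agents.create_agent(
    model=os.environ["MODEL_DEPLOYMENT_NAME"],
    name="fashion-advisor",
    instructions="Give concise styling advice for items in the catalog.",
)

# Expose the specialist to the orchestrator as a connected-agent tool.
advisor_tool = ConnectedAgentTool(
    id=fashion_advisor.id,
    name="fashion_advisor",
    description="Answers styling questions about catalog items.",
)

# The main orchestrator the web app talks to; it delegates to specialists as needed.
orchestrator = project.agents.create_agent(
    model=os.environ["MODEL_DEPLOYMENT_NAME"],
    name="main-orchestrator",
    instructions="Coordinate specialist agents to help shoppers.",
    tools=advisor_tool.definitions,
)
print("Orchestrator agent id:", orchestrator.id)
```

The web application then talks only to the orchestrator; the sample's own setup script and environment variable wiring are described later in this post.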
Why These Technologies Matter for Real Applications

The combination of these technologies isn't just about showcasing the latest features - it's about solving real business challenges. Let's explore how each component contributes to building production-ready AI applications.

.NET Aspire: Enhancing the Development Experience

This sample leverages .NET Aspire to provide enhanced observability and simplified cloud-native development patterns. While .NET Aspire is still in preview on App Service, we encourage you to start exploring its capabilities and keep an eye out for future updates planned for later this year. What's particularly exciting about Aspire is how it maintains the core principle we've emphasized throughout this series: making AI integration as simple as possible. You don't need to completely restructure your application to benefit from enhanced observability and modern development patterns.

Premium v4 App Service: Built for Modern AI Workloads

This sample is designed to run on Azure App Service Premium v4, which we recently announced is Generally Available. Premium v4 is the latest offering in the Azure App Service family, delivering enhanced performance, scalability, and cost efficiency.

From Concept to Implementation: Staying True to Our Core Promise

Throughout this blog series, we've consistently demonstrated that adding AI capabilities to existing applications doesn't require massive rewrites or complex architectural changes. This multi-agent sample continues that tradition - what might seem like a complex system is actually built using the same principles we've established:

✅ Incremental Enhancement: Build on your existing App Service infrastructure
✅ Simple Integration: Use familiar tools like azd up for deployment
✅ Production-Ready: Leverage mature Azure services you already trust
✅ Future-Proof: Easy to extend as new capabilities become available

Looking Forward: What's Coming Next

This sample represents just the beginning of what's possible with AI-powered App Service applications. Here's what we're working on next:

🔐 MCP Authentication Integration
Enhanced security patterns for production MCP server deployments, including Azure Entra ID integration.

🚀 New Azure AI Foundry Features
As Azure AI Foundry continues to evolve, we'll be updating this sample to showcase:
New agent capabilities
Enhanced tool integrations
Performance optimizations
Additional model support

📊 Advanced Analytics and Monitoring
Deeper integration with Azure Monitor for:
Agent performance analytics
Business intelligence from agent interactions

🔧 Additional Programming Language Support
Following our multi-language MCP server samples, we'll be adding support for other languages in samples that will be added to the App Service documentation.

Getting Started Today

Ready to add multi-agent capabilities to your existing App Service application? The process follows the same streamlined approach we've used throughout this series.

Quick Overview

Clone and Deploy: Use azd up for one-command infrastructure deployment
Create Your Agents: Run a Python setup script to configure the multi-agent system
Connect Everything: Add one environment variable to link your agents
Test and Explore: Try the sample conversations and see agent interactions

📚 For detailed step-by-step instructions, including prerequisites, troubleshooting tips, environment setup, and comprehensive configuration guidance, see the complete setup guide in the README.

Learning Resources

If you're new to this ecosystem, we recommend starting with these foundational resources:

Integrate AI into your Azure App Service applications - Comprehensive guide with language-specific tutorials for building intelligent applications on App Service
Supercharge Your App Service Apps with AI Foundry Agents Connected to MCP Servers - Learn the basics of integrating AI Foundry agents with MCP servers
Host Remote MCP Servers on App Service - Deploy and manage MCP servers on Azure App Service

Conclusion: The Future of AI-Powered Applications

This multi-agent sample represents the natural evolution of our App Service AI integration journey.
We started with basic agent integration, progressed through MCP server hosting, and now we're showcasing sophisticated multi-agent orchestration - all while maintaining our core principle that AI integration should enhance, not complicate, your existing applications. Whether you're just getting started with AI agents or ready to implement complex multi-agent workflows, the path forward is clear and incremental. As Azure AI Foundry adds new capabilities and App Service continues to evolve, we'll keep updating these samples and sharing new patterns. Stay tuned - the future of AI-powered applications is being built today, one agent at a time.

Additional Resources

🚀 Start Building
GitHub repository for this sample - Comprehensive setup guide, architecture details, troubleshooting, and technical deep-dives

📚 Learn More
Azure AI Foundry Documentation: Connected Agents Guide
MCP Tools Setup: Model Context Protocol Integration
.NET Aspire on App Service: Deployment Guide
Premium v4 App Service: General Availability Announcement

Have questions or want to share how you're using multi-agent systems in your applications? Join the conversation in the comments below. We'd love to hear about your AI-powered App Service success stories!

🚀 Bring Your Own License (BYOL) Support for JBoss EAP on Azure App Service
We're excited to announce that Azure App Service now supports Bring Your Own License (BYOL) for JBoss Enterprise Application Platform (EAP), enabling enterprise customers to deploy Java workloads with greater flexibility and cost efficiency. If you've evaluated Azure App Service in the past, now is the perfect time to take another look. With BYOL support, you can leverage your existing Red Hat subscriptions to optimize costs and align with your enterprise licensing strategy.

Azure App Service Premium v4 plan is now in public preview
Azure App Service Premium v4 plan is the latest offering in the Azure App Service family, designed to deliver enhanced performance, scalability, and cost efficiency. We are excited to announce the public preview of this major upgrade to one of our most popular services. Key benefits: Fully managed platform-as-a-service (PaaS) to run your favorite web stack, on both Windows and Linux. Built using next-gen Azure hardware for higher performance and reliability. Lower total cost of ownership with new pricing tailored for large-scale app modernization projects. and more to come! [Note: As of September 1st, 2025 Premium v4 is Generally Available on Azure App Service! See the launch blog for more details!] Fully managed platform-as-a-service (PaaS) As the next generation of one of the leading PaaS solutions, Premium v4 abstracts infrastructure management, allowing businesses to focus on application development rather than server maintenance. This reduces operational overhead, as tasks like patching, load balancing, and auto-scaling are handled automatically by Azure, saving time and IT resources. App Service’s auto-scaling optimizes costs by adjusting resources based on demand and saves you the cost and overhead of under- or over-provisioning. Modernizing applications with PaaS delivers a compelling economic impact by helping you eliminate legacy inefficiencies, decrease long-term costs, and increase your competitive agility through seamless cloud integration, CI/CD pipelines, and support for multiple languages. Higher performance and reliability Built on the latest Dadsv6 / Eadsv6 series virtual machines and NVMe based temporary storage, the App Service Premium v4 plan offers higher performance compared to previous Premium generations. According to preliminary measurements during private preview, you may expect to see: >25% performance improvement using Pmv4 plans, relative to the prior generation of memory-optimized Pmv3 plans >50% performance improvement using Pv4 plans, relative to the prior generation of non-memory-optimized Pv3 plans Please note that these features and metrics are preliminary and subject to change during public preview. Premium v4 provides a similar line-up to Premium v3, with four non-memory optimized options (P0v4 though P3v4) and five memory-optimized options (P1mv4 through P5mv4). Features like deployment slots, integrated monitoring, and enhanced global zone resilience further enhance the reliability and user experience, improving customer satisfaction. Lower total cost of ownership (TCO) Driven by the urgency to adopt generative AI and to stay competitive, application modernization has rapidly become one of the top priorities in boardrooms everywhere. Whether you are a large enterprise or a small shop running your web apps in the cloud, you will find App Service Premium v4 is designed to offer you the most compelling performance-per-dollar compared to previous generations, making it an ideal managed solution to run high-demand applications. Using the agentic AI GitHub Copilot app modernization tooling announced in preview at Microsoft Build 2025, you can save up to 24% when you upgrade and modernize your .NET web apps running on Windows Server to Azure App Service for Windows on Premium v4 compared with Premium v3. You will also be able to use deeper commitment-based discounts such as reserved instances and savings plan for Premium v4 when the service is generally available (GA). 
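If you want to script a test deployment onto the new tier, a hedged sketch with the azure-mgmt-web Python SDK follows; the "P1v4" SKU name and "PremiumV4" tier string are assumptions based on the naming used for earlier Premium tiers, so confirm them against the documentation and your region's availability before relying on them:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import AppServicePlan, SkuDescription

subscription_id = "<subscription-id>"   # placeholder
client = WebSiteManagementClient(DefaultAzureCredential(), subscription_id)

# Create (or update) a Linux Premium v4 plan in one of the preview regions.
poller = client.app_service_plans.begin_create_or_update(
    "my-rg",
    "my-premium-v4-plan",
    AppServicePlan(
        location="westus3",
        reserved=True,  # True targets a Linux plan
        sku=SkuDescription(name="P1v4", tier="PremiumV4", capacity=1),
    ),
)
plan = poller.result()
print(plan.name, plan.sku.name)
```

The same call with reserved=False would target a Windows plan instead.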
For more detailed pricing on the various CPU and memory options, see the pricing pages for Windows and Linux as well as the Azure Pricing Calculator.

Get started

The preview will roll out globally over the next few weeks. Premium v4 is currently available in the following regions [updated 08/22/2025]:

Australia East
Canada Central
Central US
East US
East US 2
France Central
Japan East
Korea Central
North Central US
North Europe
Norway East
Poland Central
Southeast Asia
Sweden Central
Switzerland North
UK South
West Central US
West Europe
West US
West US 3

App Service is continuing to expand the Premium v4 footprint with many additional regions planned to come online over the coming weeks and months. Customers can reference the product documentation for details on how to configure Premium v4 as well as a regularly updated list of regional availability. We encourage you to start assessing your apps using the partners and tools for Azure App Modernization, start using Premium v4 to better understand the benefits and capabilities, and build a plan to hit the ground running when the service is generally available. Watch this space for more information on GA!

Key Resources

Microsoft Build 2025 on-demand session: https://aka.ms/Build25/BRK200
Azure App Service documentation: https://aka.ms/AppService/Pv4docs
Azure App Service web page: https://www.azure.com/appservice
Join the Community Standups: https://aka.ms/video/AppService/community
Follow us on X: @AzAppService