Azure App Service
How to connect Azure SQL database from Python Function App using managed identity or access token
This blog demonstrates how to connect to an Azure SQL database from a Python Function App using a managed identity or an access token. If you are looking for how to implement this in a Windows App Service, you may refer to this post: https://techcommunity.microsoft.com/t5/apps-on-azure-blog/how-to-connect-azure-sql-database-from-azure-app-service-windows/ba-p/2873397. Note that the Azure Active Directory managed identity authentication method was added to the ODBC Driver in version 17.3.1.1 for both system-assigned and user-assigned identities. In the Azure blessed image for Python Functions, the ODBC Driver version is 17.8, which makes it possible to leverage this feature in Linux App Service. Briefly, this post provides step-by-step guidance with sample code and an introduction to the authentication workflow.

Steps:

1. Create a Linux Python Function App from the portal.

2. Set up the managed identity in the new Function App by enabling Identity and saving from the portal. It will generate an Object (principal) ID for you automatically.

3. Assign a role in the Azure SQL database. Search for your own account and save it as admin. Note: Alternatively, you can search for the function app's name and set it as admin; that function app would then own admin permission on the database and you can skip steps 4 and 5.

4. Go to the Query editor in the database and be sure to log in using the account set in the previous step rather than a username and password, or step 5 will fail with the exception below.
"Failed to execute query. Error: Principal 'xxxx' could not be created. Only connections established with Active Directory accounts can create other Active Directory users."

5. Run the queries below to create a user for the function app and alter roles. You can choose to alter only part of these roles per your demand.
CREATE USER "yourfunctionappname" FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER "yourfunctionappname";
ALTER ROLE db_datawriter ADD MEMBER "yourfunctionappname";
ALTER ROLE db_ddladmin ADD MEMBER "yourfunctionappname";

6. Leverage the sample code below to build your own project and deploy it to the function app.

Sample Code:

Below is sample code showing how to use an Azure access token when running locally and a managed identity when running in the Function App. The token part needs to be replaced with your own. Basically, it uses pyodbc.connect(connection_string + ';Authentication=ActiveDirectoryMsi') to authenticate with the managed identity. Also, the MSI_SECRET environment variable is used to tell whether we are running locally or in the Function App; it is created automatically when the function app is enabled with a managed identity. The complete demo project can be found at: https://github.com/kevin808/azure-function-pyodbc-MI

import logging
import os
import struct

import azure.functions as func
import pyodbc


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    server = "your-sqlserver.database.windows.net"
    database = "your_db"
    driver = "{ODBC Driver 17 for SQL Server}"
    query = "SELECT * FROM dbo.users"

    # Optional: use username and password for authentication
    # username = 'name'
    # password = 'pass'

    db_token = ''

    connection_string = 'DRIVER=' + driver + ';SERVER=' + server + ';DATABASE=' + database

    # When MSI is enabled
    if os.getenv("MSI_SECRET"):
        conn = pyodbc.connect(connection_string + ';Authentication=ActiveDirectoryMsi')
    # Used when run from local
    else:
        SQL_COPT_SS_ACCESS_TOKEN = 1256
        # Expand each token byte with a trailing zero byte and prefix the
        # total length, as the ODBC driver expects
        exptoken = b''
        for i in bytes(db_token, "UTF-8"):
            exptoken += bytes([i])
            exptoken += bytes(1)
        tokenstruct = struct.pack("=i", len(exptoken)) + exptoken
        conn = pyodbc.connect(connection_string,
                              attrs_before={SQL_COPT_SS_ACCESS_TOKEN: tokenstruct})

    # Uncomment the line below to use username and password for authentication
    # conn = pyodbc.connect('DRIVER=' + driver + ';SERVER=' + server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)

    cursor = conn.cursor()
    cursor.execute(query)
    row = cursor.fetchone()
    while row:
        print(row[0])
        row = cursor.fetchone()

    return func.HttpResponse('Success', status_code=200)

Workflow:

Below are the workflows for these two authentication methods; with them in mind, we can understand what happens under the hood.

Managed Identity: When we enable the managed identity for the function app, a service principal is generated automatically for it; it then follows the steps below to authenticate against the database.
Function App with managed identity -> sends request to database with service principal -> database checks the corresponding database user and its permissions -> authentication passes.

Access Token: The access token can be generated by executing 'az account get-access-token --resource=https://database.windows.net/ --query accessToken' locally; we then hold this token to authenticate. Please note that the default lifetime of the token is one hour, which means we need to retrieve it again when it expires.
az login -> az account get-access-token -> local function uses the token to authenticate against the SQL database -> DB checks whether the database user exists and the permissions are granted -> authentication passes.

Thanks for reading. I hope you enjoy it.
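As an alternative to pasting a token obtained from the az CLI, the token can be fetched and packed programmatically. The sketch below shows the byte layout pyodbc expects for SQL_COPT_SS_ACCESS_TOKEN; the azure-identity usage in the trailing comment is an assumption (it requires the azure-identity package) and is not part of the original sample.

```python
import struct

# pyodbc connection attribute for passing an Azure AD access token
SQL_COPT_SS_ACCESS_TOKEN = 1256


def pack_token_for_pyodbc(token: str) -> bytes:
    """Encode the token as UTF-16-LE and prefix its byte length.

    For an ASCII JWT this produces the same bytes as the per-character
    expansion loop in the sample above.
    """
    expanded = token.encode("utf-16-le")
    return struct.pack("=i", len(expanded)) + expanded


# Hypothetical usage with azure-identity instead of a hard-coded db_token:
# from azure.identity import DefaultAzureCredential
# token = DefaultAzureCredential().get_token("https://database.windows.net/.default").token
# conn = pyodbc.connect(connection_string,
#                       attrs_before={SQL_COPT_SS_ACCESS_TOKEN: pack_token_for_pyodbc(token)})
```

Fetching the token at connect time also sidesteps the one-hour expiry mentioned below, since a fresh token is obtained for each connection.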
Powering Observability: Dynatrace Integration with Linux App Service through Sidecars

In this blog we continue to dive into the world of observability with Azure App Service. If you've been following our recent updates, you'll know that we announced the Public Preview of the Sidecar Pattern for Linux App Service. Building upon this architectural pattern, we're going to demonstrate how you can leverage it to integrate Dynatrace, an Azure Native ISV Services partner, with your .NET custom container application. In this blog, we'll guide you through the process of harnessing Dynatrace's powerful monitoring capabilities, allowing you to gain invaluable insights into your application's metrics and traces.

Setting up your .NET application

To get started, you'll need to containerize your .NET application. This tutorial walks you through the process step by step. This is what a sample Dockerfile for a .NET 8 application looks like:

# Stage 1: Build the application
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /app

# Copy the project file and restore dependencies
COPY *.csproj ./
RUN dotnet restore

# Copy the remaining source code
COPY . .

# Build the application
RUN dotnet publish -c Release -o out

# Stage 2: Create a runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS runtime
WORKDIR /app

# Copy the build output from stage 1
COPY --from=build /app/out ./

# Set the entry point for the application
ENTRYPOINT ["dotnet", "<your app>.dll"]

You're now ready to build the image and push it to your preferred container registry, be it Azure Container Registry, Docker Hub, or a private registry.

Create your Linux Web App

Create a new Linux Web App from the portal and choose the options for Container and Linux. On the Container tab, make sure that Sidecar support is Enabled. Specify the details of your application image. Note: Typically, .NET uses port 8080, but you can change it in your project.

Set up your Dynatrace account

If you don't have a Dynatrace account, you can create an instance of Dynatrace in the Azure portal by following this Marketplace link.
You can choose the Free Trial plan to get a 30-day subscription.

AppSettings for Dynatrace Integration

You need to set the following app settings. You can get more details about the Dynatrace-related settings here.

DT_TENANT – the environment ID
DT_TENANTTOKEN – same as DT_API_TOKEN; this is the PaaS token for your environment
DT_CONNECTIONPOINT
DT_HOME – /home/dynatrace
LD_PRELOAD – /home/dynatrace/oneagent/agent/lib64/liboneagentproc.so
DT_LOGSTREAM – stdout
DT_LOGLEVELCON – INFO

We encourage you to store sensitive information like DT_TENANTTOKEN in Azure Key Vault: Use Key Vault references - Azure App Service | Microsoft Learn.

Add the Dynatrace Sidecar

Go to the Deployment Center for your application and add a sidecar container:

Image Source: Docker Hub and other registries
Image type: Public
Registry server URL: mcr.microsoft.com
Image and tag: appsvc/docs/sidecars/sample-experiment:dynatrace-dotnet
Port: <any port other than your main container port>

Once you have added the sidecar, you will need to restart your website to see data start flowing to the Dynatrace backend. Please note that this is an experimental container image for Dynatrace. We will be updating this blog with a new image soon.

Disclaimer: Dynatrace Image Usage. It's important to note that the Dynatrace image used here is sourced directly from Dynatrace and is provided 'as is.' Microsoft does not own or maintain this image. Therefore, its usage is subject to the terms of use outlined by Dynatrace.

Visualizing your observability data in Dynatrace

You are all set! You can now see your observability data flow to the Dynatrace backend. The Hosts tab gives you metrics about the VM hosting the application. Dynatrace also has a Services view which lets you look at application-specific information like response time, failed requests, and application traces. You can learn more about Dynatrace's observability capabilities by going through the documentation:
Observe and explore - Dynatrace Docs

Next Steps

As you've seen, the Sidecar Pattern for Linux App Service opens a world of possibilities for integrating powerful tools like Dynatrace into your Linux App Service-hosted applications. With Dynatrace being an Azure Native ISV Services partner, this integration marks just the beginning of a journey towards a closer and more simplified experience for Azure users. This is just the start. We're committed to providing even more guidance and resources to help you seamlessly integrate Dynatrace with your code-based Linux web applications and other language stacks. Stay tuned for upcoming updates and tutorials as we continue to empower you to make the most of your Azure environment. In the meantime, don't hesitate to explore further, experiment with different configurations, and leverage the full potential of observability with Dynatrace and Azure App Service.

How to set up subdirectory Multisite in WordPress on Azure App Service
WordPress Multisite is a feature of WordPress that enables you to run and manage multiple WordPress websites from the same WordPress installation. Follow these steps to set up Multisite in your WordPress website on App Service...

Announcing the Public Preview of the New App Service Quota Self-Service Experience
Update 9/15/2025: The App Service Quota Self-Service experience has been temporarily taken offline to incorporate feedback received during this public preview. As this is a public preview, availability and features are subject to change as we receive and incorporate feedback. We will post another update when the self-service experience is available once more. In the meantime, if you require assistance, please file a support ticket following the guidance at the bottom of this post in the Filing a Support Ticket section. We appreciate your patience while we work to build the best experience possible for this scenario.

What's New?

The updated experience introduces a dedicated App Service Quota blade in the Azure portal, offering a streamlined and intuitive interface to:

View current usage and limits across the various SKUs
Set custom quotas tailored to your App Service plan needs

This new experience empowers developers and IT admins to proactively manage resources, avoid service disruptions, and optimize performance.

Quick Reference - Start here!

If your deployment requires quota for ten or more subscriptions, file a support ticket with problem type Quota following the instructions at the bottom of this post.
If any subscription included in your request requires zone redundancy, file a support ticket with problem type Quota following the instructions at the bottom of this post.
Otherwise, leverage the new self-service experience to increase your quota automatically.

Self-Service Quota Requests

For non-zone-redundant needs, quota alone is sufficient to enable App Service deployment or scale-out. Follow the steps below to place your request.

1. Navigate to the Quotas resource provider in the Azure portal

2. Select App Service

Navigating the primary interface: Each App Service VM size is represented as a separate SKU.
If the intention is to be able to scale up or down within a specific offering (e.g., Premium v3), then an equivalent number of VMs needs to be requested for each applicable size of that offering (e.g., request 5 instances for both P1v3 and P3v3). As with other quotas, you can filter by region, subscription, provider, or usage. You can also group the results by usage, quota (App Service VM type), or location (region). Current usage is represented as App Service VMs. This allows you to quickly identify which SKUs are nearing their quota limits. Adjustments can be made inline: no need to visit another page. This is covered in detail in the next section.

3. Request quota adjustments

Clicking the pen icon opens a flyout window to capture the quota request. The quota type (App Service SKU) is already populated, along with current usage. Note that your request is not incremental: you must specify the new limit that you wish to see reflected in the portal. For example, to request two additional instances of P1v2 VMs, you would file the request with the new total as the limit. Click submit to send the request for automatic processing.

How quota approvals work

Immediately upon submitting a quota request, you will see a processing dialog. If the quota request can be automatically fulfilled, then no support request is needed; you should receive confirmation within a few minutes of submission. If the request cannot be automatically fulfilled, then you will be given the option to file a support request with the same information, for example when the requested new limit exceeds what can be automatically granted for the region.

4. If applicable, create a support ticket

When creating a support ticket, you will need to repopulate the Region and App Service plan details; the new limit has already been populated for you.
If you forget the region or SKU that was requested, you can reference them in your notifications pane. If you choose to create a support ticket, you will interact with the capacity management team for that region. This is a 24x7 service, so requests may be created at any time. Once you have filed the support request, you can track its status via the Help + support dashboard.

Known issues

The self-service quota request experience for App Service is in public preview. Here are some caveats worth mentioning while the team finalizes the release for general availability:

Closing the quota request flyout window will stop meaningful notifications for that request. You can still view the outcome of your quota requests by checking actual quota, but if you want to rely on notifications for alerts, we recommend leaving the quota request window open for the few minutes that it is processing.
Some SKUs are not yet represented in the quota dashboard. These will be added later in the public preview.
The Activity Log does not currently provide a meaningful summary of previous quota requests and their outcomes. This will also be addressed during the public preview.
As noted in the walkthrough, the new experience does not enable zone-redundant deployments. Quota is an inherently regional construct, and zone-redundant enablement requires a separate step that can only be taken in response to a support ticket being filed.
Quota API documentation is being drafted to enable bulk non-zone-redundant quota requests without requiring you to file a support ticket.

Filing a Support Ticket

If your deployment requires zone redundancy or contains many subscriptions, then we recommend filing a support ticket with issue type "Technical" and problem type "Quota".

We want your feedback!
If you notice any aspect of the experience that does not work as expected, or you have feedback on how to make it better, please use the comments below to share your thoughts!

Building Agent-to-Agent (A2A) Applications on Azure App Service
The world of AI agents is evolving rapidly, with new protocols and frameworks emerging to enable sophisticated multi-agent communication. Google's Agent-to-Agent (A2A) protocol represents one of the most promising approaches for building distributed AI systems that can coordinate tasks across different platforms and services. I'm excited to share how you can leverage Azure App Service to build, deploy, and scale A2A applications. Today, I'll walk you through a practical example that combines Microsoft Semantic Kernel with the A2A protocol to create an intelligent travel planning assistant.

What We Built: An A2A Travel Agent on App Service

I've taken an existing A2A travel planning sample and enhanced it to run seamlessly on Azure App Service. This demonstrates how A2A concepts can be adapted and hosted on one of Azure's platform-as-a-service offerings. What started as a sample implementation has been transformed into a full-featured web application with a modern interface, real-time streaming, and production-ready deployment automation.

🔗 View the complete source code on GitHub

Acknowledgments and Attribution

Before diving into the technical details, I want to give proper credit where it's due. This application was adapted and enhanced from excellent foundational work by the Microsoft Semantic Kernel team and the A2A project community:

Original inspiration: Microsoft DevBlogs - Semantic Kernel A2A Integration
Base implementation: A2A Samples - Semantic Kernel Python Agent

This contribution builds upon these samples to demonstrate how you can take A2A concepts and create a complete, deployable application that runs seamlessly on Azure App Service with enterprise-grade features like managed identity authentication, monitoring, and infrastructure as code.

Why A2A on Azure App Service?
Azure App Service provides the perfect foundation for A2A applications because it handles the infrastructure complexity while giving you the flexibility to implement cutting-edge AI protocols. Here's what makes this combination powerful:

🚀 Rapid Deployment & Scaling
Deploy A2A agents with a single azd up command
Auto-scaling based on demand without managing servers
Built-in load balancing for high-availability agent endpoints

🔐 Enterprise Security
Managed identity authentication eliminates API key management
Built-in SSL/TLS termination for secure agent communication
Network isolation and private endpoint support for sensitive workloads

🔄 Real-time Capabilities
WebSocket support for streaming A2A protocol responses
Always-on availability for agent discovery and task coordination
Low-latency communication between distributed agents

📊 Observability & Monitoring
Application Insights integration for comprehensive telemetry
Built-in logging and diagnostics for debugging agent interactions
Performance monitoring to optimize multi-agent workflows

Understanding the A2A Travel Agent Architecture

Our sample demonstrates a multi-agent system where a main travel manager coordinates with specialized agents:

┌─────────────────────┐      ┌──────────────────────┐      ┌─────────────────────┐
│   Web Browser       │ ──── │   FastAPI App        │ ──── │  Semantic Kernel    │
│                     │      │                      │      │   Travel Agent      │
│ • Modern UI         │      │ • REST API           │      │                     │
│ • Chat Interface    │      │ • A2A Protocol       │      │ • Currency API      │
│ • Responsive        │      │ • Session Management │      │ • Activity Planning │
└─────────────────────┘      └──────────────────────┘      └─────────────────────┘
                                        │
                                        ▼
                             ┌──────────────────────┐
                             │    A2A Protocol      │
                             │                      │
                             │ • Agent Discovery    │
                             │ • Task Streaming     │
                             │ • Multi-Agent Coord  │
                             └──────────────────────┘

Key Components

TravelManagerAgent: The orchestrator that analyzes user requests and delegates to specialized agents
CurrencyExchangeAgent: Handles real-time currency conversion using the Frankfurter API
ActivityPlannerAgent: Creates personalized itineraries and activity recommendations
A2A Protocol Layer: Manages agent discovery, task coordination, and streaming responses

Practical Example: Multi-Agent Travel Planning

Let's see this in action with a real user scenario:

User Request: "I'm traveling to Seoul, South Korea for 2 days with a budget of $100 USD per day. How much is that in Korean Won, and what can I do and eat?"

A2A Workflow:

1. TravelManager receives the request and identifies that it needs both currency and activity planning
2. CurrencyExchangeAgent is invoked to fetch live USD→KRW rates
3. ActivityPlannerAgent generates budget-friendly recommendations
4. Response Compilation combines results into a comprehensive travel plan
5. Streaming Delivery provides real-time updates to the user interface

Result: The user gets current exchange rates (~$100 USD = 130,000 KRW), daily budget breakdowns, recommended activities within budget, and restaurant suggestions—all coordinated seamlessly between multiple specialized agents.

Implementation Highlights

Modern Web Interface
The application includes a responsive web interface built with modern HTML/CSS/JavaScript that provides:
Real-time chat with typing indicators
Streaming responses for immediate feedback
Mobile-responsive design
Session management for conversation context

A2A Protocol Compliance
Full implementation of Google's A2A specification, including:
Agent Discovery: Structured Agent Cards advertising capabilities
Task Coordination: Multi-agent task delegation and handoffs
Streaming Support: Real-time progress updates during complex workflows
Session Management: Persistent conversation context

Azure-Native Features
Managed Identity: Secure authentication without API key management
Bicep Templates: Infrastructure as code for reproducible deployments
Azure Developer CLI: One-command deployment with azd up

Getting Started: Deploy Your Own A2A Agent

Ready to try it yourself?
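Before walking through deployment, the coordination pattern in the A2A Workflow above can be sketched in plain Python. This is an illustrative sketch only: no A2A SDK or Azure services are involved, the agent names mirror the sample, and the routing logic and exchange rate are assumptions made up for the example.

```python
# Toy version of the orchestration pattern: a "manager" routes parts of a
# request to specialist handlers and compiles their answers. Real A2A agents
# do this over HTTP with Agent Cards and task streaming.

def currency_exchange_agent(amount_usd: float) -> str:
    rate = 1300.0  # placeholder; the sample fetches live rates from the Frankfurter API
    return f"${amount_usd:.0f} USD is about {amount_usd * rate:,.0f} KRW"

def activity_planner_agent(city: str) -> str:
    return f"Suggested budget-friendly activities in {city}: street food tour, palace visit"

def travel_manager(request: dict) -> list[str]:
    """Decide which specialists a request needs and collect their responses."""
    responses = []
    if "budget_usd" in request:
        responses.append(currency_exchange_agent(request["budget_usd"]))
    if "city" in request:
        responses.append(activity_planner_agent(request["city"]))
    return responses

plan = travel_manager({"city": "Seoul", "budget_usd": 100})
print(plan)
```

The point of the pattern is that the caller only ever talks to the manager; which specialists run, and in what combination, is decided per request.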
Here's how to deploy this A2A travel agent to Azure App Service:

Prerequisites
Azure CLI and Azure Developer CLI (azd)
Python 3.10+ for local development
An Azure subscription

Deployment Steps

1. Clone the repository:

git clone https://github.com/Azure-Samples/app-service-a2a-travel-agent
cd app-service-a2a-travel-agent

2. Authenticate with Azure:

azd auth login

3. Deploy to Azure:

azd up

That's it! The Azure Developer CLI will:
Create an Azure App Service and App Service Plan
Deploy an Azure OpenAI resource with a GPT-4 model
Configure managed identity authentication
Deploy your application code
Provide the live application URL

Beyond This Example: A2A Possibilities

While Semantic Kernel was chosen for this sample, we recognize that developers have many options for building A2A applications. The A2A protocol is framework-agnostic, and Azure App Service can host agents built with:

LangChain for comprehensive LLM application development
LlamaIndex for data-augmented agent workflows
AutoGen for multi-agent conversation frameworks
Custom implementations using OpenAI, Anthropic, or other AI APIs
Any Python web framework (FastAPI, Django, Flask, etc.)
And many more!

The key insight is that Azure App Service provides a robust, scalable platform that adapts to whatever AI framework or protocol you choose.

Why This Matters for the Future

The AI agent ecosystem is evolving rapidly. New protocols, frameworks, and integration patterns emerge regularly.
What excites me most about Azure App Service in this context is our platform's adaptability:

🔄 Framework Flexibility: Host basically any AI framework or custom implementation
🌐 Protocol Support: WebSocket, HTTP/2, and custom protocols for agent communication
🔐 Security Evolution: Managed identity and certificate management that scales with new auth patterns
📈 Performance Optimization: Auto-scaling and performance monitoring that adapts to AI workload patterns
🛠️ DevOps Integration: CI/CD pipelines and deployment automation for rapid iteration

Looking Ahead

As A2A protocols mature and new agent frameworks emerge, Azure App Service will continue evolving to support the latest innovations in AI application development. Our goal is to provide a platform where you can focus on building intelligent agent experiences while we handle the infrastructure complexity. We're particularly excited about upcoming enhancements in:

Integration with Azure AI services for even richer agent capabilities
Streamlined deployment patterns for AI application architectures
Improved monitoring and observability for multi-agent workflows

Try It Today

The A2A travel agent sample is available now on GitHub and ready for deployment. Whether you're exploring multi-agent architectures, evaluating A2A protocols, or looking to modernize your AI applications, this sample provides a practical starting point.

🚀 Deploy the A2A Travel Agent

Update 9/16/2025: I created a .NET version of this sample. Feel free to check this one out too! https://github.com/Azure-Samples/app-service-a2a-travel-agent-dotnet

We'd love to hear about the A2A applications you're building on Azure App Service. Share your experiences, challenges, and innovations with the community—together, we're shaping the future of distributed AI systems. Questions about this sample or Azure App Service for AI applications? Connect with us in the comments below.
Resources:
Azure App Service Documentation
Google A2A Protocol Specification
Microsoft Semantic Kernel
Azure Developer CLI

Build Multi-Agent AI Systems on Azure App Service
Introduction: The Evolution of AI-Powered App Service Applications

Over the past few months, we've been exploring how to supercharge existing Azure App Service applications with AI capabilities. If you've been following along with this series, you've seen how we can quickly integrate AI Foundry agents with MCP servers and host remote MCP servers directly on App Service. Today, we're taking the next leap forward by demonstrating how to build sophisticated multi-agent systems that leverage connected agents, Model Context Protocol (MCP), and OpenAPI tools - all running on Azure App Service's Premium v4 tier with .NET Aspire for an enhanced observability and cloud-native development experience.

💡 Want the full technical details? This blog provides an overview of the key concepts and capabilities. For comprehensive setup instructions, architecture deep-dives, performance considerations, debugging guidance, and detailed technical documentation, check out the complete README on GitHub.

What Makes This Sample Special?
This fashion e-commerce demo showcases several cutting-edge technologies working together:

🤖 Multi-Agent Architecture with Connected Agents
Unlike single-agent systems, this sample implements an orchestration pattern where specialized agents work together:
Main Orchestrator: Coordinates the workflow and handles inventory queries via MCP tools
Cart Manager: Specialized in shopping cart operations via OpenAPI tools
Fashion Advisor: Provides expert styling recommendations
Content Moderator: Ensures safe, professional interactions

🔧 Advanced Tool Integration
MCP Tools: Real-time connection to external inventory systems using the Model Context Protocol
OpenAPI Tools: Direct agent integration with your existing App Service APIs
Connected Agent Tools: Seamless agent-to-agent communication with automatic orchestration

⚡ .NET Aspire Integration
Enhanced development experience with built-in observability
Simplified cloud-native application patterns
Real-time monitoring and telemetry (when developing locally)

🚀 Premium v4 App Service Tier
Latest App Service performance capabilities
Optimized for modern cloud-native workloads
Enhanced scalability for AI-powered applications

Key Technical Innovations

Connected Agent Orchestration
Your application communicates with a single main agent, which automatically coordinates with specialist agents as needed. No changes to your existing App Service code are required.

Dual Tool Integration
This sample demonstrates both MCP tools for external system connectivity and OpenAPI tools for direct API integration.

Zero-Infrastructure Overhead
Agents work directly with your existing App Service APIs and external endpoints - no additional infrastructure deployment needed.

Why These Technologies Matter for Real Applications

The combination of these technologies isn't just about showcasing the latest features - it's about solving real business challenges. Let's explore how each component contributes to building production-ready AI applications.
.NET Aspire: Enhancing the Development Experience

This sample leverages .NET Aspire to provide enhanced observability and simplified cloud-native development patterns. While .NET Aspire is still in preview on App Service, we encourage you to start exploring its capabilities and keep an eye out for future updates planned for later this year. What's particularly exciting about Aspire is how it maintains the core principle we've emphasized throughout this series: making AI integration as simple as possible. You don't need to completely restructure your application to benefit from enhanced observability and modern development patterns.

Premium v4 App Service: Built for Modern AI Workloads

This sample is designed to run on Azure App Service Premium v4, which we recently announced is Generally Available. Premium v4 is the latest offering in the Azure App Service family, delivering enhanced performance, scalability, and cost efficiency.

From Concept to Implementation: Staying True to Our Core Promise

Throughout this blog series, we've consistently demonstrated that adding AI capabilities to existing applications doesn't require massive rewrites or complex architectural changes. This multi-agent sample continues that tradition - what might seem like a complex system is actually built using the same principles we've established:

✅ Incremental Enhancement: Build on your existing App Service infrastructure
✅ Simple Integration: Use familiar tools like azd up for deployment
✅ Production-Ready: Leverage mature Azure services you already trust
✅ Future-Proof: Easy to extend as new capabilities become available

Looking Forward: What's Coming Next

This sample represents just the beginning of what's possible with AI-powered App Service applications. Here's what we're working on next:

🔐 MCP Authentication Integration
Enhanced security patterns for production MCP server deployments, including Azure Entra ID integration.
🚀 New Azure AI Foundry Features
As Azure AI Foundry continues to evolve, we'll be updating this sample to showcase:
New agent capabilities
Enhanced tool integrations
Performance optimizations
Additional model support

📊 Advanced Analytics and Monitoring
Deeper integration with Azure Monitor for:
Agent performance analytics
Business intelligence from agent interactions

🔧 Additional Programming Language Support
Following our multi-language MCP server samples, we'll be adding support for other languages in samples that will be added to the App Service documentation.

Getting Started Today

Ready to add multi-agent capabilities to your existing App Service application? The process follows the same streamlined approach we've used throughout this series.

Quick Overview
1. Clone and Deploy: Use azd up for one-command infrastructure deployment
2. Create Your Agents: Run a Python setup script to configure the multi-agent system
3. Connect Everything: Add one environment variable to link your agents
4. Test and Explore: Try the sample conversations and see agent interactions

📚 For detailed step-by-step instructions, including prerequisites, troubleshooting tips, environment setup, and comprehensive configuration guidance, see the complete setup guide in the README.

Learning Resources

If you're new to this ecosystem, we recommend starting with these foundational resources:
Integrate AI into your Azure App Service applications - Comprehensive guide with language-specific tutorials for building intelligent applications on App Service
Supercharge Your App Service Apps with AI Foundry Agents Connected to MCP Servers - Learn the basics of integrating AI Foundry agents with MCP servers
Host Remote MCP Servers on App Service - Deploy and manage MCP servers on Azure App Service

Conclusion: The Future of AI-Powered Applications

This multi-agent sample represents the natural evolution of our App Service AI integration journey.
We started with basic agent integration, progressed through MCP server hosting, and now we're showcasing sophisticated multi-agent orchestration - all while maintaining our core principle that AI integration should enhance, not complicate, your existing applications. Whether you're just getting started with AI agents or ready to implement complex multi-agent workflows, the path forward is clear and incremental. As Azure AI Foundry adds new capabilities and App Service continues to evolve, we'll keep updating these samples and sharing new patterns. Stay tuned - the future of AI-powered applications is being built today, one agent at a time.

Additional Resources

🚀 Start Building
GitHub repository for this sample - Comprehensive setup guide, architecture details, troubleshooting, and technical deep-dives

📚 Learn More
Azure AI Foundry Documentation: Connected Agents Guide
MCP Tools Setup: Model Context Protocol Integration
.NET Aspire on App Service: Deployment Guide
Premium v4 App Service: General Availability Announcement

Have questions or want to share how you're using multi-agent systems in your applications? Join the conversation in the comments below. We'd love to hear about your AI-powered App Service success stories!

🚀 Bring Your Own License (BYOL) Support for JBoss EAP on Azure App Service
We're excited to announce that Azure App Service now supports Bring Your Own License (BYOL) for JBoss Enterprise Application Platform (EAP), enabling enterprise customers to deploy Java workloads with greater flexibility and cost efficiency. If you've evaluated Azure App Service in the past, now is the perfect time to take another look. With BYOL support, you can leverage your existing Red Hat subscriptions to optimize costs and align with your enterprise licensing strategy.