Azure App Service
Powering Observability: Dynatrace Integration with Linux App Service through Sidecars
In this blog we continue our dive into the world of observability with Azure App Service. If you've been following our recent updates, you'll know that we announced the Public Preview of the Sidecar Pattern for Linux App Service. Building on this architectural pattern, we're going to demonstrate how you can leverage it to integrate Dynatrace, an Azure Native ISV Services partner, with your .NET custom container application. We'll guide you through the process of harnessing Dynatrace's powerful monitoring capabilities, allowing you to gain invaluable insights into your application's metrics and traces.

Setting up your .NET application
To get started, you'll need to containerize your .NET application. This tutorial walks you through the process step by step. Here is a sample Dockerfile for a .NET 8 application:

# Stage 1: Build the application
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /app

# Copy the project file and restore dependencies
COPY *.csproj ./
RUN dotnet restore

# Copy the remaining source code
COPY . .

# Build the application
RUN dotnet publish -c Release -o out

# Stage 2: Create a runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS runtime
WORKDIR /app

# Copy the build output from stage 1
COPY --from=build /app/out ./

# Set the entry point for the application
ENTRYPOINT ["dotnet", "<your app>.dll"]

You're now ready to build the image and push it to your preferred container registry, be it Azure Container Registry, Docker Hub, or a private registry.

Create your Linux Web App
Create a new Linux Web App from the portal and choose the options for Container and Linux. On the Container tab, make sure that Sidecar support is Enabled. Specify the details of your application image. Note: .NET typically uses port 8080, but you can change it in your project.

Set up your Dynatrace account
If you don't have a Dynatrace account, you can create an instance of Dynatrace in the Azure portal by following this Marketplace link.
You can choose the Free Trial plan to get a 30-day subscription.

AppSettings for Dynatrace Integration
You need to set the following app settings. You can get more details about the Dynatrace-related settings here.
• DT_TENANT – The environment ID
• DT_TENANTTOKEN – Same as DT_API_TOKEN. This is the PaaS token for your environment.
• DT_CONNECTIONPOINT
• DT_HOME – /home/dynatrace
• LD_PRELOAD – /home/dynatrace/oneagent/agent/lib64/liboneagentproc.so
• DT_LOGSTREAM – stdout
• DT_LOGLEVELCON – INFO
We encourage you to store sensitive information like DT_TENANTTOKEN in Azure Key Vault; see Use Key Vault references - Azure App Service | Microsoft Learn.

Add the Dynatrace Sidecar
Go to the Deployment Center for your application and add a sidecar container with the following details:
• Image Source: Docker Hub and other registries
• Image type: Public
• Registry server URL: mcr.microsoft.com
• Image and tag: appsvc/docs/sidecars/sample-experiment:dynatrace-dotnet
• Port: <any port other than your main container port>
Once you have added the sidecar, you'll need to restart your website to see data start flowing to the Dynatrace backend. Please note that this is an experimental container image for Dynatrace. We will be updating this blog with a new image soon.

Disclaimer: Dynatrace Image Usage
It's important to note that the Dynatrace image used here is sourced directly from Dynatrace and is provided 'as-is.' Microsoft does not own or maintain this image. Therefore, its usage is subject to the terms of use outlined by Dynatrace.

Visualizing your Observability data in Dynatrace
You're all set! You can now see your observability data flow to the Dynatrace backend. The Hosts tab gives you metrics about the VM hosting the application. Dynatrace also has a Services view that lets you look at application-specific information like response time, failed requests, and application traces. You can learn more about Dynatrace's observability capabilities by going through the documentation.
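Before restarting the site, it can help to sanity-check that every required setting from the AppSettings list above has a real value. This is just an illustrative Python helper, not part of Dynatrace's tooling; the angle-bracket values are placeholders you would replace with your own environment details.

```python
# Required app settings from this walkthrough. The angle-bracket values are
# placeholders -- substitute your environment ID, PaaS token (ideally a
# Key Vault reference), and connection endpoint.
DYNATRACE_SETTINGS = {
    "DT_TENANT": "<your-environment-id>",
    "DT_TENANTTOKEN": "@Microsoft.KeyVault(SecretUri=<your-secret-uri>)",
    "DT_CONNECTIONPOINT": "<your-connection-endpoint>",
    "DT_HOME": "/home/dynatrace",
    "LD_PRELOAD": "/home/dynatrace/oneagent/agent/lib64/liboneagentproc.so",
    "DT_LOGSTREAM": "stdout",
    "DT_LOGLEVELCON": "INFO",
}

def unconfigured(settings: dict) -> list:
    """Return the names of settings that are empty or still hold a
    '<placeholder>' value and therefore need to be filled in."""
    return [name for name, value in settings.items()
            if not value or "<" in value]

print(unconfigured(DYNATRACE_SETTINGS))
# -> ['DT_TENANT', 'DT_TENANTTOKEN', 'DT_CONNECTIONPOINT']
```

Running a check like this before deployment makes it obvious which values still need attention; once everything is set, the function returns an empty list.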
Observe and explore - Dynatrace Docs

Next Steps
As you've seen, the Sidecar Pattern for Linux App Service opens a world of possibilities for integrating powerful tools like Dynatrace into your Linux App Service-hosted applications. With Dynatrace being an Azure Native ISV Services partner, this integration marks just the beginning of a journey towards a closer and more simplified experience for Azure users. We're committed to providing even more guidance and resources to help you seamlessly integrate Dynatrace with your code-based Linux web applications and other language stacks. Stay tuned for upcoming updates and tutorials as we continue to empower you to make the most of your Azure environment. In the meantime, don't hesitate to explore further, experiment with different configurations, and leverage the full potential of observability with Dynatrace and Azure App Service.

How to set up subdirectory Multisite in WordPress on Azure App Service
WordPress Multisite is a feature of WordPress that enables you to run and manage multiple WordPress websites from the same WordPress installation. Follow these steps to set up Multisite in your WordPress website on App Service...

Announcing the Public Preview of the New App Service Quota Self-Service Experience
Update 9/15/2025: The App Service Quota Self-Service experience has been temporarily taken offline to incorporate feedback received during this public preview. As this is a public preview, availability and features are subject to change as we receive and incorporate feedback. We will post another update when the self-service experience is available once more. In the meantime, if you require assistance, please file a support ticket following the guidance at the bottom of this post in the Filing a Support Ticket section. We appreciate your patience while we work to build the best experience possible for this scenario.

What's New?
The updated experience introduces a dedicated App Service Quota blade in the Azure portal, offering a streamlined and intuitive interface to:
• View current usage and limits across the various SKUs
• Set custom quotas tailored to your App Service plan needs
This new experience empowers developers and IT admins to proactively manage resources, avoid service disruptions, and optimize performance.

Quick Reference - Start here!
• If your deployment requires quota for ten or more subscriptions, file a support ticket with problem type Quota, following the instructions at the bottom of this post.
• If any subscription included in your request requires zone redundancy, file a support ticket with problem type Quota, following the instructions at the bottom of this post.
• Otherwise, use the new self-service experience to increase your quota automatically.

Self-service Quota Requests
For non-zone-redundant needs, quota alone is sufficient to enable App Service deployment or scale-out. Follow these steps to place your request.

1. Navigate to the Quotas resource provider in the Azure portal
2. Select App Service
Navigating the primary interface: each App Service VM size is represented as a separate SKU.
If you intend to scale up or down within a specific offering (e.g., Premium v3), you need to request an equivalent number of VMs for each applicable size of that offering (e.g., request 5 instances for both P1v3 and P3v3). As with other quotas, you can filter by region, subscription, provider, or usage. You can also group the results by usage, quota (App Service VM type), or location (region). Current usage is represented as App Service VMs, allowing you to quickly identify which SKUs are nearing their quota limits. Adjustments can be made inline; there is no need to visit another page. This is covered in detail in the next section.

3. Request quota adjustments
Clicking the pen icon opens a flyout window to capture the quota request. The quota type (App Service SKU) is already populated, along with current usage. Note that your request is not incremental: you must specify the new limit that you wish to see reflected in the portal. For example, to request two additional instances of P1v2 VMs, you would specify the new total as the limit. Click Submit to send the request for automatic processing.

How quota approvals work
Immediately upon submitting a quota request, you will see a processing dialog. If the quota request can be automatically fulfilled, no support request is needed; you should receive confirmation within a few minutes of submission. If the request cannot be automatically fulfilled, you will be given the option to file a support request with the same information, for example when the requested new limit exceeds what can be automatically granted for the region.

4. If applicable, create a support ticket
When creating a support ticket, you will need to repopulate the Region and App Service plan details; the new limit has already been populated for you.
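Because a quota request specifies the absolute new limit rather than a delta, it's worth double-checking the arithmetic before submitting. A tiny illustrative helper (not an Azure API) makes the semantics explicit. Assuming, say, a current limit of 3 P1v2 instances and a need for 2 more:

```python
def new_quota_limit(current_limit: int, additional_instances: int) -> int:
    """Quota requests are not incremental: you submit the total new limit,
    not the number of extra instances you want on top of it."""
    if additional_instances < 0:
        raise ValueError("additional_instances must be non-negative")
    return current_limit + additional_instances

# Two additional P1v2 instances on top of a current limit of 3 means the
# flyout should be filed with a new limit of 5, not 2.
print(new_quota_limit(3, 2))  # -> 5
```

Filing the delta (2) instead of the total (5) would actually lower your limit, so this distinction matters.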
If you forget the region or SKU that was requested, you can reference them in your notifications pane. If you choose to create a support ticket, you will interact with the capacity management team for that region. This is a 24x7 service, so requests may be created at any time. Once you have filed the support request, you can track its status via the Help + support dashboard.

Known issues
The self-service quota request experience for App Service is in public preview. Here are some caveats worth mentioning while the team finalizes the release for general availability:
• Closing the quota request flyout window will stop meaningful notifications for that request. You can still view the outcome of your quota requests by checking actual quota, but if you want to rely on notifications for alerts, we recommend leaving the quota request window open for the few minutes that it is processing.
• Some SKUs are not yet represented in the quota dashboard. These will be added later in the public preview.
• The Activity Log does not currently provide a meaningful summary of previous quota requests and their outcomes. This will also be addressed during the public preview.
• As noted in the walkthrough, the new experience does not enable zone-redundant deployments. Quota is an inherently regional construct, and zone-redundant enablement requires a separate step that can only be taken in response to a support ticket being filed.
• Quota API documentation is being drafted to enable bulk non-zone-redundant quota requests without requiring you to file a support ticket.

Filing a Support Ticket
If your deployment requires zone redundancy or contains many subscriptions, we recommend filing a support ticket with issue type "Technical" and problem type "Quota".

We want your feedback!
If you notice any aspect of the experience that does not work as expected, or you have feedback on how to make it better, please use the comments below to share your thoughts!

Building Agent-to-Agent (A2A) Applications on Azure App Service
The world of AI agents is evolving rapidly, with new protocols and frameworks emerging to enable sophisticated multi-agent communication. Google's Agent-to-Agent (A2A) protocol represents one of the most promising approaches for building distributed AI systems that can coordinate tasks across different platforms and services. I'm excited to share how you can leverage Azure App Service to build, deploy, and scale A2A applications. Today, I'll walk you through a practical example that combines Microsoft Semantic Kernel with the A2A protocol to create an intelligent travel planning assistant.

What We Built: An A2A Travel Agent on App Service
I've taken an existing A2A travel planning sample and enhanced it to run seamlessly on Azure App Service. This demonstrates how A2A concepts can be adapted and hosted on one of Azure's platform-as-a-service offerings. What started as a sample implementation has been transformed into a full-featured web application with a modern interface, real-time streaming, and production-ready deployment automation.

🔗 View the complete source code on GitHub

Acknowledgments and Attribution
Before diving into the technical details, I want to give proper credit where it's due. This application was adapted and enhanced from excellent foundational work by the Microsoft Semantic Kernel team and the A2A project community:
• Original inspiration: Microsoft DevBlogs - Semantic Kernel A2A Integration
• Base implementation: A2A Samples - Semantic Kernel Python Agent
This contribution builds upon these samples to demonstrate how you can take A2A concepts and create a complete, deployable application that runs seamlessly on Azure App Service with enterprise-grade features like managed identity authentication, monitoring, and infrastructure as code.

Why A2A on Azure App Service?
Azure App Service provides the perfect foundation for A2A applications because it handles the infrastructure complexity while giving you the flexibility to implement cutting-edge AI protocols. Here's what makes this combination powerful:

🚀 Rapid Deployment & Scaling
• Deploy A2A agents with a single azd up command
• Auto-scaling based on demand without managing servers
• Built-in load balancing for high-availability agent endpoints

🔐 Enterprise Security
• Managed identity authentication eliminates API key management
• Built-in SSL/TLS termination for secure agent communication
• Network isolation and private endpoint support for sensitive workloads

🔄 Real-time Capabilities
• WebSocket support for streaming A2A protocol responses
• Always-on availability for agent discovery and task coordination
• Low-latency communication between distributed agents

📊 Observability & Monitoring
• Application Insights integration for comprehensive telemetry
• Built-in logging and diagnostics for debugging agent interactions
• Performance monitoring to optimize multi-agent workflows

Understanding the A2A Travel Agent Architecture
Our sample demonstrates a multi-agent system where a main travel manager coordinates with specialized agents:

┌─────────────────────┐      ┌──────────────────────┐      ┌─────────────────────┐
│    Web Browser      │ ──── │     FastAPI App      │ ──── │  Semantic Kernel    │
│                     │      │                      │      │  Travel Agent       │
│ • Modern UI         │      │ • REST API           │      │                     │
│ • Chat Interface    │      │ • A2A Protocol       │      │ • Currency API      │
│ • Responsive        │      │ • Session Management │      │ • Activity Planning │
└─────────────────────┘      └──────────────────────┘      └─────────────────────┘
                                        │
                                        ▼
                             ┌──────────────────────┐
                             │     A2A Protocol     │
                             │                      │
                             │ • Agent Discovery    │
                             │ • Task Streaming     │
                             │ • Multi-Agent Coord  │
                             └──────────────────────┘

Key Components
• TravelManagerAgent: The orchestrator that analyzes user requests and delegates to specialized agents
• CurrencyExchangeAgent: Handles real-time currency conversion using the Frankfurter API
• ActivityPlannerAgent: Creates personalized
itineraries and activity recommendations
• A2A Protocol Layer: Manages agent discovery, task coordination, and streaming responses

Practical Example: Multi-Agent Travel Planning
Let's see this in action with a real user scenario.

User Request: "I'm traveling to Seoul, South Korea for 2 days with a budget of $100 USD per day. How much is that in Korean Won, and what can I do and eat?"

A2A Workflow:
1. TravelManager receives the request and identifies that it needs both currency and activity planning
2. CurrencyExchangeAgent is invoked to fetch live USD→KRW rates
3. ActivityPlannerAgent generates budget-friendly recommendations
4. Response compilation combines the results into a comprehensive travel plan
5. Streaming delivery provides real-time updates to the user interface

Result: The user gets current exchange rates (~$100 USD = 130,000 KRW), daily budget breakdowns, recommended activities within budget, and restaurant suggestions, all coordinated seamlessly between multiple specialized agents.

Implementation Highlights

Modern Web Interface
The application includes a responsive web interface built with modern HTML/CSS/JavaScript that provides:
• Real-time chat with typing indicators
• Streaming responses for immediate feedback
• Mobile-responsive design
• Session management for conversation context

A2A Protocol Compliance
Full implementation of Google's A2A specification, including:
• Agent Discovery: structured Agent Cards advertising capabilities
• Task Coordination: multi-agent task delegation and handoffs
• Streaming Support: real-time progress updates during complex workflows
• Session Management: persistent conversation context

Azure-Native Features
• Managed Identity: secure authentication without API key management
• Bicep Templates: infrastructure as code for reproducible deployments
• Azure Developer CLI: one-command deployment with azd up

Getting Started: Deploy Your Own A2A Agent
Ready to try it yourself?
Here's how to deploy this A2A travel agent to Azure App Service.

Prerequisites
• Azure CLI and Azure Developer CLI (azd)
• Python 3.10+ for local development
• An Azure subscription

Deployment Steps
1. Clone the repository:
git clone https://github.com/Azure-Samples/app-service-a2a-travel-agent
cd app-service-a2a-travel-agent

2. Authenticate with Azure:
azd auth login

3. Deploy to Azure:
azd up

That's it! The Azure Developer CLI will:
• Create an Azure App Service and App Service Plan
• Deploy an Azure OpenAI resource with a GPT-4 model
• Configure managed identity authentication
• Deploy your application code
• Provide the live application URL

Beyond This Example: A2A Possibilities
While Semantic Kernel was chosen for this sample, we recognize that developers have many options for building A2A applications. The A2A protocol is framework-agnostic, and Azure App Service can host agents built with:
• LangChain for comprehensive LLM application development
• LlamaIndex for data-augmented agent workflows
• AutoGen for multi-agent conversation frameworks
• Custom implementations using OpenAI, Anthropic, or other AI APIs
• Any Python web framework (FastAPI, Django, Flask, etc.)
• And many more!
The key insight is that Azure App Service provides a robust, scalable platform that adapts to whatever AI framework or protocol you choose.

Why This Matters for the Future
The AI agent ecosystem is evolving rapidly. New protocols, frameworks, and integration patterns emerge regularly.
What excites me most about Azure App Service in this context is the platform's adaptability:
• 🔄 Framework Flexibility: host virtually any AI framework or custom implementation
• 🌐 Protocol Support: WebSocket, HTTP/2, and custom protocols for agent communication
• 🔐 Security Evolution: managed identity and certificate management that scales with new auth patterns
• 📈 Performance Optimization: auto-scaling and performance monitoring that adapts to AI workload patterns
• 🛠️ DevOps Integration: CI/CD pipelines and deployment automation for rapid iteration

Looking Ahead
As A2A protocols mature and new agent frameworks emerge, Azure App Service will continue evolving to support the latest innovations in AI application development. Our goal is to provide a platform where you can focus on building intelligent agent experiences while we handle the infrastructure complexity. We're particularly excited about upcoming enhancements in:
• Integration with Azure AI services for even richer agent capabilities
• Streamlined deployment patterns for AI application architectures
• Improved monitoring and observability for multi-agent workflows

Try It Today
The A2A travel agent sample is available now on GitHub and ready for deployment. Whether you're exploring multi-agent architectures, evaluating A2A protocols, or looking to modernize your AI applications, this sample provides a practical starting point.

🚀 Deploy the A2A Travel Agent

Update 9/16/2025: I created a .NET version of this sample. Feel free to check this one out too! https://github.com/Azure-Samples/app-service-a2a-travel-agent-dotnet

We'd love to hear about the A2A applications you're building on Azure App Service. Share your experiences, challenges, and innovations with the community; together, we're shaping the future of distributed AI systems. Questions about this sample or Azure App Service for AI applications? Connect with us in the comments below.
Resources:
• Azure App Service Documentation
• Google A2A Protocol Specification
• Microsoft Semantic Kernel
• Azure Developer CLI

Build Multi-Agent AI Systems on Azure App Service
Introduction: The Evolution of AI-Powered App Service Applications
Over the past few months, we've been exploring how to supercharge existing Azure App Service applications with AI capabilities. If you've been following along with this series, you've seen how we can quickly integrate AI Foundry agents with MCP servers and host remote MCP servers directly on App Service. Today, we're taking the next leap forward by demonstrating how to build sophisticated multi-agent systems that leverage connected agents, the Model Context Protocol (MCP), and OpenAPI tools, all running on Azure App Service's Premium v4 tier with .NET Aspire for enhanced observability and a cloud-native development experience.

💡 Want the full technical details? This blog provides an overview of the key concepts and capabilities. For comprehensive setup instructions, architecture deep-dives, performance considerations, debugging guidance, and detailed technical documentation, check out the complete README on GitHub.

What Makes This Sample Special?
This fashion e-commerce demo showcases several cutting-edge technologies working together.

🤖 Multi-Agent Architecture with Connected Agents
Unlike single-agent systems, this sample implements an orchestration pattern where specialized agents work together:
• Main Orchestrator: coordinates the workflow and handles inventory queries via MCP tools
• Cart Manager: specializes in shopping cart operations via OpenAPI tools
• Fashion Advisor: provides expert styling recommendations
• Content Moderator: ensures safe, professional interactions

🔧 Advanced Tool Integration
• MCP Tools: real-time connection to external inventory systems using the Model Context Protocol
• OpenAPI Tools: direct agent integration with your existing App Service APIs
• Connected Agent Tools: seamless agent-to-agent communication with automatic orchestration

⚡ .NET Aspire Integration
• Enhanced development experience with built-in observability
• Simplified cloud-native application patterns
• Real-time monitoring and telemetry (when developing locally)

🚀 Premium v4 App Service Tier
• Latest App Service performance capabilities
• Optimized for modern cloud-native workloads
• Enhanced scalability for AI-powered applications

Key Technical Innovations

Connected Agent Orchestration
Your application communicates with a single main agent, which automatically coordinates with specialist agents as needed. No changes to your existing App Service code are required.

Dual Tool Integration
This sample demonstrates both MCP tools for external system connectivity and OpenAPI tools for direct API integration.

Zero-Infrastructure Overhead
Agents work directly with your existing App Service APIs and external endpoints; no additional infrastructure deployment is needed.

Why These Technologies Matter for Real Applications
The combination of these technologies isn't just about showcasing the latest features; it's about solving real business challenges. Let's explore how each component contributes to building production-ready AI applications.
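To make the connected-agent orchestration described above concrete, here is a minimal, framework-free sketch of the delegation flow. The specialist "agents" are plain stub functions standing in for the real AI Foundry connected agents, and the keyword routing stands in for model-driven delegation, so treat this as an illustration of the pattern rather than the sample's actual code:

```python
def cart_manager(request: str) -> str:
    # Stub: the real agent performs cart operations via OpenAPI tools.
    return "cart: updated"

def fashion_advisor(request: str) -> str:
    # Stub: the real agent generates styling advice with an LLM.
    return "styling: suggested"

def content_moderator(request: str) -> str:
    # Stub: the real agent screens the request before anything else runs.
    return "blocked" if "spam" in request.lower() else "ok"

# Keyword routing stands in for the model-driven delegation the
# orchestrator performs in the actual sample.
SPECIALISTS = {
    ("cart", "checkout", "add"): cart_manager,
    ("style", "outfit", "wear"): fashion_advisor,
}

def main_orchestrator(request: str) -> list:
    """Moderate first, then delegate to every matching specialist
    and combine their responses."""
    if content_moderator(request) == "blocked":
        return ["request rejected by moderation"]
    return [agent(request) for keywords, agent in SPECIALISTS.items()
            if any(k in request.lower() for k in keywords)]

print(main_orchestrator("Add the blue jacket to my cart, and what would it wear well with?"))
# -> ['cart: updated', 'styling: suggested']
```

In the real sample, the main agent decides delegation with an LLM and the specialists call MCP and OpenAPI tools, but the shape of the flow (moderate, route, combine) is the same.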
.NET Aspire: Enhancing the Development Experience
This sample leverages .NET Aspire to provide enhanced observability and simplified cloud-native development patterns. While .NET Aspire is still in preview on App Service, we encourage you to start exploring its capabilities and keep an eye out for future updates planned for later this year. What's particularly exciting about Aspire is how it maintains the core principle we've emphasized throughout this series: making AI integration as simple as possible. You don't need to completely restructure your application to benefit from enhanced observability and modern development patterns.

Premium v4 App Service: Built for Modern AI Workloads
This sample is designed to run on Azure App Service Premium v4, which we recently announced is Generally Available. Premium v4 is the latest offering in the Azure App Service family, delivering enhanced performance, scalability, and cost efficiency.

From Concept to Implementation: Staying True to Our Core Promise
Throughout this blog series, we've consistently demonstrated that adding AI capabilities to existing applications doesn't require massive rewrites or complex architectural changes. This multi-agent sample continues that tradition: what might seem like a complex system is actually built using the same principles we've established.
✅ Incremental Enhancement: build on your existing App Service infrastructure
✅ Simple Integration: use familiar tools like azd up for deployment
✅ Production-Ready: leverage mature Azure services you already trust
✅ Future-Proof: easy to extend as new capabilities become available

Looking Forward: What's Coming Next
This sample represents just the beginning of what's possible with AI-powered App Service applications. Here's what we're working on next:

🔐 MCP Authentication Integration
Enhanced security patterns for production MCP server deployments, including Azure Entra ID integration.
🚀 New Azure AI Foundry Features
As Azure AI Foundry continues to evolve, we'll be updating this sample to showcase:
• New agent capabilities
• Enhanced tool integrations
• Performance optimizations
• Additional model support

📊 Advanced Analytics and Monitoring
Deeper integration with Azure Monitor for:
• Agent performance analytics
• Business intelligence from agent interactions

🔧 Additional Programming Language Support
Following our multi-language MCP server samples, we'll be adding support for other languages in samples that will be added to the App Service documentation.

Getting Started Today
Ready to add multi-agent capabilities to your existing App Service application? The process follows the same streamlined approach we've used throughout this series.

Quick Overview
1. Clone and Deploy: use azd up for one-command infrastructure deployment
2. Create Your Agents: run a Python setup script to configure the multi-agent system
3. Connect Everything: add one environment variable to link your agents
4. Test and Explore: try the sample conversations and see agent interactions

📚 For detailed step-by-step instructions, including prerequisites, troubleshooting tips, environment setup, and comprehensive configuration guidance, see the complete setup guide in the README.

Learning Resources
If you're new to this ecosystem, we recommend starting with these foundational resources:
• Integrate AI into your Azure App Service applications: a comprehensive guide with language-specific tutorials for building intelligent applications on App Service
• Supercharge Your App Service Apps with AI Foundry Agents Connected to MCP Servers: learn the basics of integrating AI Foundry agents with MCP servers
• Host Remote MCP Servers on App Service: deploy and manage MCP servers on Azure App Service

Conclusion: The Future of AI-Powered Applications
This multi-agent sample represents the natural evolution of our App Service AI integration journey.
We started with basic agent integration, progressed through MCP server hosting, and now we're showcasing sophisticated multi-agent orchestration, all while maintaining our core principle that AI integration should enhance, not complicate, your existing applications. Whether you're just getting started with AI agents or ready to implement complex multi-agent workflows, the path forward is clear and incremental. As Azure AI Foundry adds new capabilities and App Service continues to evolve, we'll keep updating these samples and sharing new patterns. Stay tuned: the future of AI-powered applications is being built today, one agent at a time.

Additional Resources

🚀 Start Building
• GitHub repository for this sample: comprehensive setup guide, architecture details, troubleshooting, and technical deep-dives

📚 Learn More
• Azure AI Foundry Documentation: Connected Agents Guide
• MCP Tools Setup: Model Context Protocol Integration
• .NET Aspire on App Service: Deployment Guide
• Premium v4 App Service: General Availability Announcement

Have questions or want to share how you're using multi-agent systems in your applications? Join the conversation in the comments below. We'd love to hear about your AI-powered App Service success stories!

🚀 Bring Your Own License (BYOL) Support for JBoss EAP on Azure App Service
We're excited to announce that Azure App Service now supports Bring Your Own License (BYOL) for JBoss Enterprise Application Platform (EAP), enabling enterprise customers to deploy Java workloads with greater flexibility and cost efficiency. If you've evaluated Azure App Service in the past, now is the perfect time to take another look. With BYOL support, you can leverage your existing Red Hat subscriptions to optimize costs and align with your enterprise licensing strategy.

Build an AI Image-Caption Generator on Azure App Service with Streamlit and GPT-4o-mini
This tiny app just does one thing: upload an image → get a natural one-line caption. Under the hood:
• Azure AI Vision extracts high-confidence tags from the image.
• Azure OpenAI (GPT-4o-mini) turns those tags into a fluent caption.
• Streamlit provides a lightweight, Python-native UI so you can ship fast.
All code + infra templates: image_caption_app in the App Service AI Samples repo: https://github.com/Azure-Samples/appservice-ai-samples/tree/main/image_caption_app

What are these components?
What is Streamlit? An open-source Python framework for building interactive data/AI apps with just a few lines of code; perfect for quick, clean UIs.
What is Azure AI Vision (Vision API)? A cloud service that analyzes images and returns rich signals like tags with confidence scores, which we use as grounded inputs for captioning.

How it works (at a glance)
1. The user uploads a photo in Streamlit.
2. The app calls Azure AI Vision → gets a list of tags (keeps only high-confidence ones).
3. The app sends those tags to GPT-4o-mini → generates a one-line caption.
4. The caption is shown instantly in the browser.

Prerequisites
• Azure subscription — https://azure.microsoft.com/en-us/pricing/purchase-options/azure-account
• Azure CLI — https://learn.microsoft.com/azure/cli/azure/install-azure-cli-linux
• Azure Developer CLI (azd) — https://learn.microsoft.com/azure/developer/azure-developer-cli/install-azd
• Python 3.10+ — https://www.python.org/downloads/
• Visual Studio Code (optional) — https://code.visualstudio.com/download
• Streamlit (optional for local runs) — https://docs.streamlit.io/get-started/installation
• Managed Identity on App Service (recommended) — https://learn.microsoft.com/azure/app-service/overview-managed-identity

Resources you'll deploy
You can create everything manually or with the provided azd template. What you need:
• Azure App Service (Linux) to host the Streamlit app.
• Azure AI Foundry/OpenAI with a gpt-4o-mini deployment for caption generation.
• Azure AI Vision (Computer Vision) for image tagging.
• Managed Identity enabled on the Web App, with RBAC grants so the app can call Vision and OpenAI without secrets.

One-command deploy with azd (recommended)
The sample includes infra under image_caption_app/infra, so azd up can provision + deploy in one go:

# 1) Clone and move into the sample
git clone https://github.com/Azure-Samples/appservice-ai-samples
cd appservice-ai-samples/image_caption_app

# 2) Log in and provision + deploy
azd auth login
azd up

Manual path (if you prefer doing it yourself)
1. Create Azure AI Vision and note the endpoint (custom subdomain).
2. Create Azure AI Foundry/OpenAI and deploy gpt-4o-mini.
3. Create an App Service (Linux, Python) and enable a System-Assigned Managed Identity.
4. Assign roles to the Web App's Managed Identity:
   • Cognitive Services OpenAI User on your OpenAI resource.
   • Cognitive Services User on your Vision resource.
5. Add app settings for endpoints and deployment names (see repo), deploy the code, and run.

Startup command (manual setting): if you're configuring the Web App yourself (instead of using the Bicep), set the Startup Command to:

streamlit run app.py --server.port 8000 --server.address 0.0.0.0

Portal path: App Service → Configuration → General settings → Startup Command.

CLI example:

az webapp config set \
  --name <your-webapp-name> \
  --resource-group <your-rg> \
  --startup-file "streamlit run app.py --server.port 8000 --server.address 0.0.0.0"

(The provided Bicep template already sets this for you.)
Code tour (the important bits)

Top-level flow (app.py)
First we get tags from Vision, then ask GPT-4o-mini for a one-liner:

tags = extract_tags(image_bytes)
caption = generate_caption(tags)

Vision call (utils/vision.py)
Call the Vision REST API, parse the JSON, and keep high-confidence tags (> 0.6):

response = requests.post(
    VISION_API_URL,
    headers=headers,
    params=PARAMS,
    data=image_bytes,
    timeout=30,
)
response.raise_for_status()
analysis = response.json()
tags = [
    t.get('name')
    for t in analysis.get('tags', [])
    if t.get('name') and t.get('confidence', 0) > 0.6
]

Caption generation (utils/openai_caption.py)
Join the tags and ask GPT-4o-mini for a natural caption:

tag_text = ", ".join(tags)
prompt = f"""
You are an assistant that generates vivid, natural-sounding captions for images.
Create a one-line caption for an image that contains the following: {tag_text}.
"""
response = client.chat.completions.create(
    model=DEPLOYMENT_NAME,
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": prompt.strip()}
    ],
    max_tokens=60,
    temperature=0.7
)
return response.choices[0].message.content.strip()

Security & auth: Managed Identity by default (recommended)
This sample ships configured to use Managed Identity on App Service—no keys in config. The Web App's Managed Identity authenticates to Vision and Azure OpenAI via Microsoft Entra ID. Prefer Managed Identity in production; if you need to test locally, you can switch to key-based auth by supplying the service keys in your environment.
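If you want to see the confidence filter from utils/vision.py in action without calling the service, you can run the same list comprehension against a mocked payload. The dict below only mimics the tag/confidence shape the code above expects; it is not a real API response:

```python
# Mocked Vision analysis payload with the same 'tags' shape the filter in
# utils/vision.py expects, so the 0.6 confidence cutoff can be tried offline.
analysis = {
    "tags": [
        {"name": "dog", "confidence": 0.98},
        {"name": "grass", "confidence": 0.91},
        {"name": "frisbee", "confidence": 0.42},  # dropped: below 0.6
        {"name": None, "confidence": 0.99},       # dropped: missing name
    ]
}

tags = [
    t.get("name")
    for t in analysis.get("tags", [])
    if t.get("name") and t.get("confidence", 0) > 0.6
]
print(tags)  # -> ['dog', 'grass']
```

Low-confidence and malformed entries are silently dropped, which keeps noisy guesses out of the caption prompt.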
Run it locally (optional) # From the sample folder python -m venv .venv && source .venv/bin/activate # Windows: .venv\Scripts\activate pip install -r requirements.txt # Set env vars for endpoints + deployment (and keys if not using MI locally) streamlit run app.py Repo map App + Streamlit UI + helpers: image_caption_app/ Bicep infrastructure (used by azd up): image_caption_app/infra/ What’s next — ways to extend this sample Richer vision signals: Add object detection, OCR, or brand detection; blend those into the prompt for sharper captions. Persistence & gallery: Save images to Blob Storage and captions/metadata to Cosmos DB or SQLite; add a Streamlit gallery. Performance & cost: Cache tags by image hash; cap image size; track tokens/latency. Observability: Wire up Application Insights with custom events (e.g., caption_generated). Looking for more Python samples? Check out the repo: https://github.com/Azure-Samples/appservice-ai-samples/tree/main For more Azure App Service AI samples and best practices, check out the Azure App Service AI integration documentation215Views0likes0Comments