Azure Functions
From Vibe Coding to Working App: How SRE Agent Completes the Developer Loop
The Most Common Challenge in Modern Cloud Apps

There's a category of bugs that drives engineers crazy: multi-layer infrastructure issues. Your app deploys successfully. Every Azure resource shows "Succeeded." But the app fails at runtime with a vague error like Login failed for user ''. Where do you even start? You're checking the Web App, the SQL Server, the VNet, the private endpoint, the DNS zone, the identity configuration... and each one looks fine in isolation. The problem is how they connect, and that's invisible in the portal.

Networking issues are especially brutal. The error says "Login failed," but the actual cause could be DNS, firewall, identity, or all three. The symptom and the root causes live in completely different resources. Without deep Azure networking knowledge, you're just clicking around hoping something jumps out.

Now imagine you vibe coded the infrastructure. You used AI to generate the Bicep, deployed it, and moved on. When it breaks, you're debugging code you didn't write and configuring resources you don't fully understand. This is where I wanted AI to help: not just to build, but to debug.

Enter SRE Agent + Coding Agent

Here's what I used:

| Layer | Tool | Purpose |
|-------|------|---------|
| Build | VS Code Copilot Agent Mode + Claude Opus | Generate code and Bicep, deploy |
| Debug | Azure SRE Agent | Diagnose infrastructure issues and create a developer issue with suggested fixes in source code (app code and IaC) |
| Fix | GitHub Coding Agent | Create PRs with code and IaC fixes from the GitHub issue created by SRE Agent |

Copilot builds. SRE Agent debugs. Coding Agent fixes.

What I Built

I used VS Code Copilot in Agent Mode with Claude Opus to create a .NET 8 Web App connected to Azure SQL via private endpoint:

- Private networking (no public exposure)
- Entra-only authentication
- Managed identity (no secrets)

Deployed with azd up. All green. Then I tested the health endpoint:

```
$ curl https://app-tsdvdfdwo77hc.azurewebsites.net/health/sql
{"status":"unhealthy","error":"Login failed for user ''.","errorType":"SqlException"}
```

Deployment succeeded. App failed. One error.

How I Fixed It: Step by Step

Step 1: Create SRE Agent with Azure Access
I created an SRE Agent with read access to my Azure subscription. You can scope it to specific resource groups. The agent builds a knowledge graph of your resources and their dependencies, visible in the Resource Mapping view below.

Step 2: Connect GitHub to SRE Agent using the GitHub MCP Server
I connected the GitHub MCP server so the agent could read my repository and create issues.

Step 3: Create a Sub-Agent to Analyze Source Code
I created a sub-agent for analyzing source code using GitHub MCP tools. This lets SRE Agent understand not just Azure resources, but also the Bicep and source code files that created them.

"you are expert in analyzing source code (bicep and app code) from github repos"

Step 4: Invoke the Sub-Agent to Analyze the Error
In the SRE Agent chat, I invoked the sub-agent to diagnose the error I received from my app endpoint. It correlated the runtime error with the infrastructure configuration.

Step 5: Watch the SRE Agent Think and Reason
SRE Agent analyzed the error by tracing code in Program.cs, Bicep configurations, and Azure resource relationships: Web App, SQL Server, VNet, private endpoint, DNS zone, and managed identity. Its reasoning process worked through each layer, eliminating possibilities one by one until it identified the root causes.
Step 6: Agent Creates a GitHub Issue
Based on its analysis, SRE Agent summarized the root causes and suggested fixes in a GitHub issue:

Root Causes:
- Private DNS Zone missing VNet link
- Managed identity not created as a SQL user

Suggested Fixes:
- Add a virtualNetworkLinks resource to the Bicep
- Add a SQL setup script to create the user with the db_datareader and db_datawriter roles

Step 7: Merge the PR from Coding Agent
Assign the GitHub issue to Coding Agent, which then creates a PR with the fixes. I just reviewed the fix. It made sense and I merged it. Redeployed with azd up and ran the SQL script (the shape of that script is sketched at the end of this post):

```
curl -s https://app-tsdvdfdwo77hc.azurewebsites.net/health/sql | jq .
{
  "status": "healthy",
  "database": "tododb",
  "server": "tcp:sql-tsdvdfdwo77hc.database.windows.net,1433",
  "message": "Successfully connected to SQL Server"
}
```

From error to fix in minutes, without manually debugging a single Azure resource.

Why This Matters
If you're a developer building and deploying apps to Azure, SRE Agent changes how you work:

You don't need to be a networking expert. SRE Agent understands the relationships between Azure resources: private endpoints, DNS zones, VNet links, managed identities. It connects dots you didn't know existed.
You don't need to guess. Instead of clicking through the portal hoping something looks wrong, the agent systematically eliminates possibilities like a senior engineer would.
You don't break your workflow. SRE Agent suggests fixes in your Bicep and source code, not portal changes. Everything stays version controlled and deployed through pipelines. No hot fixes at 2 AM.
You close the loop. AI helps you build fast. Now AI helps you debug fast too.

Try It Yourself
Do you vibe code your app, your infrastructure, or both? How do you debug when things break? Here's a challenge: Vibe code a todo app with a Web App, VNet, private endpoint, and SQL database. "Forget" to link the DNS zone to the VNet. Deploy it. Watch it fail. Then point SRE Agent at it and see how it identifies the root cause, creates a GitHub issue with the fix, and hands it off to Coding Agent for a PR. Share your experience. I'd love to hear how it goes.

Learn More
- Azure SRE Agent documentation
- Azure SRE Agent blogs
- Azure SRE Agent community
- Azure SRE Agent home page
- Azure SRE Agent pricing
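A footnote for anyone reproducing the second fix: the SQL setup script just creates a database user for the app's managed identity and grants it the two roles. Below is a minimal sketch of running those statements from Python. It is not the script the Coding Agent generated; it assumes the well-known pyodbc access-token pattern and reuses the server, database, and app names from the output above as placeholders.

```python
# Hedged sketch: create the web app's managed identity as a SQL user
# and grant it the reader/writer roles. Names are placeholders.
import struct
import pyodbc
from azure.identity import DefaultAzureCredential

SQL_COPT_SS_ACCESS_TOKEN = 1256  # ODBC driver attribute for Entra access tokens

token = DefaultAzureCredential().get_token("https://database.windows.net/.default").token
token_bytes = token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:sql-tsdvdfdwo77hc.database.windows.net,1433;Database=tododb",
    attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct},
)
cursor = conn.cursor()
# A system-assigned identity shares the web app's name.
for stmt in (
    "CREATE USER [app-tsdvdfdwo77hc] FROM EXTERNAL PROVIDER;",
    "ALTER ROLE db_datareader ADD MEMBER [app-tsdvdfdwo77hc];",
    "ALTER ROLE db_datawriter ADD MEMBER [app-tsdvdfdwo77hc];",
):
    cursor.execute(stmt)
conn.commit()
```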
Host ChatGPT apps on Azure Functions

This blog post is for developers learning and building ChatGPT apps. It provides an overview of how these apps work, why you'd build them, and how to host one on Azure Functions.

Chat with ChatGPT apps
OpenAI recently launched ChatGPT apps. These are apps you can chat with right inside ChatGPT, extending what ChatGPT can do beyond simple chats to actions. These apps can be invoked by starting a message with the app name, or they can be suggested by ChatGPT when relevant to the conversation. The following shows an example of invoking the Booking.com app to find hotels that meet certain criteria:

OpenAI calls these "a new generation of apps" that "blend familiar interactive elements...with new ways of interacting through conversation." For users, ChatGPT apps fit directly into an interface they're already familiar with and can use with little to no learning. For developers, building these apps is a great way to get them into the hands of ChatGPT's 800 million users without having to build custom frontends or worry about distribution and discovery. The following summarizes key benefits of ChatGPT apps:

Native Integration: Once connected, users can invoke apps with a simple @ mention.
Contextual Actions: Your app doesn't just "chat"; it does. It can fetch real-time data or execute actions.
Massive Distribution and Easy Discovery: ChatGPT has added an app directory and just announced that they're accepting submissions. Apps in the directory are exposed to ChatGPT's massive user base.

ChatGPT apps are remote MCP servers
ChatGPT apps are simply remote MCP servers that expose tools, but with two notable distinctions:

- Their tools use metadata to specify UI elements that should be rendered when a tool returns a result
- The UI elements are exposed as MCP resources

ChatGPT invokes tools the same way agents invoke tools on any MCP server. The difference is the added ability to render the tool results in a custom UI that's embedded in the chat as an iFrame. A UI can include buttons, text boxes, maps, and other components that users can interact with. Instead of calling a RESTful API, the UI can trigger additional tool calls in the MCP server as the user interacts with it. Learn more about building the custom UI.

For example, when the Zillow app returns results to the user's question, the results are home listings and a map that users can interact with:

Since ChatGPT apps are just MCP servers, any existing server you may have can be turned into a ChatGPT app. To do that, you must ensure the server uses the streamable HTTP transport if it doesn't already, and then find a place to host it.

Hosting remote MCP servers
While there are many hosting platforms available, Azure Functions is uniquely positioned to host remote MCP servers, as the platform provides several key benefits:

Scalable Infrastructure: ChatGPT apps can go viral. The Azure Functions Flex Consumption plan can handle bursty traffic during high-traffic times and scale back to zero when needed.
Built-in auth: Keep your server secured with the Azure Functions built-in server authentication and authorization feature.
Serverless billing: Pay only when the app runs instead of for idle time.

Learn more about remote MCP servers hosted on Azure Functions.

Create a ChatGPT app
Let's quickly create a sample ChatGPT app that returns the weather of a place.
Prerequisites
Ensure you have the following prerequisites before proceeding:

- Azure subscription for creating Azure Functions and related resources
- Azure Developer CLI for deploying the MCP server via infrastructure as code
- ChatGPT Plus subscription for testing the ChatGPT app in developer mode

Deploy the MCP server to Azure Functions

1. Clone this sample MCP server: `git clone https://github.com/Azure-Samples/chatgpt-app-azure-function-mcp`.
2. Open a terminal, run `azd auth login`, and complete the login flow in the browser.
3. Navigate to the sample root directory and run `azd up` to deploy the server and related resources. You'll be prompted with:
   - Enter a unique environment name: Enter a unique name. This is the name of the resource group where all deployed resources live.
   - Select an Azure Subscription: Pick your subscription.
   - Enter a value for the "location" infrastructure: East US
4. Once deployment completes, copy the app URL for the next step. It should look like: https://<your-app>.azurewebsites.net

Sample code walkthrough
The sample server is built using the Python FastMCP package. You can find more information and how to test the server locally in this repo. We'll walk through the code briefly here. In main.py, you find the `get_weather_widget` resource and `get_current_weather` tool (code abbreviated here):

```python
@mcp.resource("ui://widget/current-weather.html", mime_type="text/html+skybridge")
def get_weather_widget() -> str:
    """Interactive HTML widget to display current weather data in ChatGPT."""
    # some code...

@mcp.tool(
    annotations={
        "title": "Get Current Weather",
        "readOnlyHint": True,
        "openWorldHint": True,
    },
    meta={
        "openai/outputTemplate": "ui://widget/current-weather.html",
        "openai/toolInvocation/invoking": "Fetching weather data",
        "openai/toolInvocation/invoked": "Weather data retrieved"
    },
)
def get_current_weather(latitude: float, longitude: float) -> ToolResult:
    """Get current weather for a given latitude and longitude using Open-Meteo API."""
    # some code...
    return ToolResult(
        content=content_text,
        structured_content=data
    )
```

When you ask ChatGPT a question, it calls the MCP tool, which returns a `ToolResult` containing both human-readable content (for ChatGPT to understand) and machine-readable data (`structured_content`, raw data for the widget). Because the `get_current_weather` tool specifies an `outputTemplate` in the metadata, ChatGPT fetches the corresponding widget HTML from the `get_weather_widget` resource. To return results, it creates an iframe and injects the weather results (`structured_content`) into the widget's JavaScript environment (via `window.openai.toolOutput`). The widget's JavaScript then renders the weather data into a beautiful UI.

Test the ChatGPT app in developer mode

1. Turn on Developer mode in ChatGPT: Go to Settings → Connectors → Advanced → Developer mode.
2. In the chat, click + → More → Add sources.
3. The Add + button should show next to Sources. Click Add + → Connect more.
4. In the Enable apps window, look for Advanced settings. Click Create app. A form should open.
5. Fill out the form to create the new app:
   - Name: WeatherApp
   - MCP Server URL: Enter the MCP server endpoint, which is the app URL you previously saved with /mcp appended. Example: https://<your-app>.azurewebsites.net/mcp
   - Authentication: Choose No Auth
6. Check the box for "I understand and want to continue" and click Create.
7. Once connected, you should find the server listed under Enabled apps. Test by asking ChatGPT "@WeatherApp what's the temperature in NYC today?"
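If you want a quick smoke test before (or instead of) wiring the server into ChatGPT, you can also call the tool directly with an MCP client. Here's a minimal sketch using the FastMCP client; the server URL is the placeholder from the deployment step, and the exact client API may vary by FastMCP version.

```python
# Hedged sketch: call the deployed MCP server's weather tool directly.
import asyncio
from fastmcp import Client

async def main():
    # Streamable HTTP endpoint of the deployed server (placeholder URL).
    async with Client("https://<your-app>.azurewebsites.net/mcp") as client:
        tools = await client.list_tools()
        print([t.name for t in tools])  # expect "get_current_weather"
        result = await client.call_tool(
            "get_current_weather",
            {"latitude": 40.71, "longitude": -74.01},  # NYC
        )
        print(result)

asyncio.run(main())
```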
Submit to the ChatGPT App Directory
OpenAI recently opened app submissions. Submitting the app to the App Directory makes it accessible to all users on ChatGPT. You may want to read through the submission guidelines to ensure your app meets the requirements before submitting.

What's next
In this blog post, we gave an overview of ChatGPT apps and showed how to host one on Azure Functions. We'll dedicate the next blog post to configuring authentication and authorization for apps hosted on Azure Functions.

For users familiar with the Azure Functions MCP extension, we're working on support for MCP resources in the extension. You'll be able to build ChatGPT apps using the extension once that support is out. For now, you need to use the official MCP SDKs.

Closing thoughts
ChatGPT apps extend the ability of ChatGPT beyond chat by letting users take actions, like searching for an apartment, ordering groceries, or turning an outline into a slide deck, with just a mention of the app name in the chat. The directory OpenAI created where developers can submit their apps is reminiscent of the App Store on the iPhone. In hindsight, such a marketplace seems like a no-brainer there. Will the same prove true for ChatGPT? Do you think the introduction of these apps is a game changer? And are they useful for your scenarios? Share your thoughts with us!
Building Reliable AI Travel Agents with the Durable Task Extension for Microsoft Agent Framework

The durable task extension for Microsoft Agent Framework makes all of this possible. In this post, we'll walk through the AI Travel Planner, a C# application I built that demonstrates how to build reliable, scalable multi-agent applications using the durable task extension for Microsoft Agent Framework. While I work on the Python version, I've included code snippets that show the Python equivalent. If you haven't already seen the announcement post on the durable task extension for Microsoft Agent Framework, I suggest you read that first before continuing with this post: http://aka.ms/durable-extension-for-af-blog.

In brief, production AI agents face real challenges: crashes can lose conversation history, unpredictable behavior makes debugging difficult, human-in-the-loop workflows require waiting without wasting resources, and variable demand needs flexible scaling. The durable task extension addresses each of these:

- Serverless Hosting: Deploy agents on Azure Functions with auto-scaling from thousands of instances to zero, while retaining full control in a serverless architecture
- Automatic Session Management: Agents maintain persistent sessions with full conversation context that survives process crashes, restarts, and distributed execution across instances
- Deterministic Multi-Agent Orchestrations: Coordinate specialized durable agents with predictable, repeatable, code-driven execution patterns
- Human-in-the-Loop with Serverless Cost Savings: Pause for human input without consuming compute resources or incurring costs
- Built-in Observability with Durable Task Scheduler: Deep visibility into agent operations and orchestrations through the Durable Task Scheduler UI dashboard

AI Travel Planner Architecture Overview
The Travel Planner application takes user trip preferences and starts a workflow that orchestrates three specialized Agent Framework agents (a Destination Recommender, an Itinerary Planner, and a Local Recommender) to build a comprehensive, personalized travel plan. Once a travel plan is created, the workflow includes human-in-the-loop approval before booking the trip (mocked), showcasing how the durable task extension handles long-running operations easily.

Application Workflow
1. User Request: The user submits travel preferences via the React frontend
2. Orchestration Scheduled: The Azure Functions backend receives the request and schedules a deterministic agentic workflow using the durable task extension for Agent Framework
3. Destination Recommendation: The orchestrator first coordinates the Destination Recommender agent to analyze preferences and suggest destinations
4. Itinerary Planning and Local Recommendations: The orchestrator then parallelizes the invocation of the Itinerary Planner agent, which creates detailed day-by-day plans for the given destination, and the Local Recommendations agent, which adds insider tips and attractions
5. Storage: The created travel plan is saved to Azure Blob Storage
6. Approval: The user reviews and approves the plan (human-in-the-loop)
7. Booking: Upon approval, booking of the trip completes

Key Components
- Azure Static Web Apps: Hosts the React frontend
- Azure Functions (.NET 9): Serverless compute hosting the agents and workflow with automatic scaling
- Durable Task Extension for Microsoft Agent Framework: The AI agent SDK with the durable task extension
- Durable Task Scheduler: Manages state persistence, orchestration, and observability
- Azure OpenAI (GPT-4o-mini): Powers the AI agents

Now let's dive into the code.
Along the way, I'll highlight the value the durable task extension brings and patterns you can apply to your own applications.

Creating Durable Agents
Making standard Agent Framework agents durable is simple. Include the durable task extension package and register your agents within the ConfigureDurableAgents extension method, and you automatically get:

- Persistent conversation sessions that survive restarts
- HTTP endpoints for agent interactions
- Automatic state checkpointing that survives restarts
- Distributed execution across instances

C#:

```csharp
FunctionsApplication
    .CreateBuilder(args)
    .ConfigureDurableAgents(configure =>
    {
        configure.AddAIAgentFactory("DestinationRecommenderAgent", sp =>
            chatClient.CreateAIAgent(
                instructions: "You are a travel destination expert...",
                name: "DestinationRecommenderAgent",
                services: sp));

        configure.AddAIAgentFactory("ItineraryPlannerAgent", sp =>
            chatClient.CreateAIAgent(
                instructions: "You are a travel itinerary planner...",
                name: "ItineraryPlannerAgent",
                services: sp,
                tools: [AIFunctionFactory.Create(CurrencyConverterTool.ConvertCurrency)]));

        configure.AddAIAgentFactory("LocalRecommendationsAgent", sp =>
            chatClient.CreateAIAgent(
                instructions: "You are a local expert...",
                name: "LocalRecommendationsAgent",
                services: sp));
    });
```

Python:

```python
# Create the Azure OpenAI chat client
chat_client = AzureOpenAIChatClient(
    endpoint=endpoint,
    deployment_name=deployment_name,
    credential=DefaultAzureCredential()
)

# Destination Recommender Agent
destination_recommender_agent = chat_client.create_agent(
    name="DestinationRecommenderAgent",
    instructions="You are a travel destination expert..."
)

# Itinerary Planner Agent (with tools)
itinerary_planner_agent = chat_client.create_agent(
    name="ItineraryPlannerAgent",
    instructions="You are a travel itinerary planner...",
    tools=[get_exchange_rate, convert_currency]
)

# Local Recommendations Agent
local_recommendations_agent = chat_client.create_agent(
    name="LocalRecommendationsAgent",
    instructions="You are a local expert..."
)

# Configure the Function App with durable agents.
# AgentFunctionApp is where the magic happens.
app = AgentFunctionApp(agents=[
    destination_recommender_agent,
    itinerary_planner_agent,
    local_recommendations_agent
])
```

The Orchestration Programming Model
The durable task extension uses an intuitive async/await programming model for deterministic orchestration. You write orchestration logic as ordinary imperative code (if/else, try/catch), and the framework handles all the complexity of coordination, durability, retries, and distributed execution.
The Travel Planner Orchestration
Here's the actual orchestration from the application that coordinates all three agents, runs tasks in parallel, handles human approval, and books the trip:

C#:

```csharp
[Function(nameof(RunTravelPlannerOrchestration))]
public async Task<TravelPlanResult> RunTravelPlannerOrchestration(
    [OrchestrationTrigger] TaskOrchestrationContext context)
{
    var travelRequest = context.GetInput<TravelRequest>()!;

    // Get durable agents and create conversation threads
    DurableAIAgent destinationAgent = context.GetAgent("DestinationRecommenderAgent");
    DurableAIAgent itineraryAgent = context.GetAgent("ItineraryPlannerAgent");
    DurableAIAgent localAgent = context.GetAgent("LocalRecommendationsAgent");

    // Step 1: Get destination recommendations
    var destinations = await destinationAgent.RunAsync<DestinationRecommendations>(
        $"Recommend destinations for {travelRequest.Preferences}",
        destinationAgent.GetNewThread());
    var topDestination = destinations.Result.Recommendations.First();

    // Steps 2 & 3: Run itinerary and local recommendations IN PARALLEL
    var itineraryTask = itineraryAgent.RunAsync<TravelItinerary>(
        $"Create itinerary for {topDestination.Name}",
        itineraryAgent.GetNewThread());
    var localTask = localAgent.RunAsync<LocalRecommendations>(
        $"Local recommendations for {topDestination.Name}",
        localAgent.GetNewThread());
    await Task.WhenAll(itineraryTask, localTask);

    // Step 4: Save to blob storage
    await context.CallActivityAsync(nameof(SaveTravelPlanToBlob), travelPlan);

    // Step 5: Wait for human approval (NO COMPUTE COSTS while waiting!)
    var approval = await context.WaitForExternalEvent<ApprovalResponse>(
        "ApprovalEvent", TimeSpan.FromDays(7));

    // Step 6: Book if approved
    if (approval.Approved)
        await context.CallActivityAsync(nameof(BookTrip), travelPlan);

    return new TravelPlanResult(travelPlan, approval.Approved);
}
```

Python:

```python
@app.orchestration_trigger(context_name="context")
def travel_planner_orchestration(context: df.DurableOrchestrationContext):
    travel_request_data = context.get_input()
    travel_request = TravelRequest(**travel_request_data)

    # Get durable agents and create conversation threads
    destination_agent = app.get_agent(context, "DestinationRecommenderAgent")
    itinerary_agent = app.get_agent(context, "ItineraryPlannerAgent")
    local_agent = app.get_agent(context, "LocalRecommendationsAgent")

    # Step 1: Get destination recommendations
    destinations_result = yield destination_agent.run(
        messages=f"Recommend destinations for {travel_request.preferences}",
        thread=destination_agent.get_new_thread(),
        response_format=DestinationRecommendations
    )
    destinations = cast(DestinationRecommendations, destinations_result.value)
    top_destination = destinations.recommendations[0]

    # Steps 2 & 3: Run itinerary and local recommendations IN PARALLEL
    itinerary_task = itinerary_agent.run(
        messages=f"Create itinerary for {top_destination.destination_name}",
        thread=itinerary_agent.get_new_thread(),
        response_format=Itinerary
    )
    local_task = local_agent.run(
        messages=f"Local recommendations for {top_destination.destination_name}",
        thread=local_agent.get_new_thread(),
        response_format=LocalRecommendations
    )
    results = yield context.task_all([itinerary_task, local_task])
    itinerary = cast(Itinerary, results[0].value)
    local_recs = cast(LocalRecommendations, results[1].value)

    # Step 4: Save to blob storage
    yield context.call_activity("save_travel_plan_to_blob", travel_plan)

    # Step 5: Wait for human approval (NO COMPUTE COSTS while waiting!)
    approval_task = context.wait_for_external_event("ApprovalEvent")
    timeout_task = context.create_timer(
        context.current_utc_datetime + timedelta(days=7))
    winner = yield context.task_any([approval_task, timeout_task])
    if winner == approval_task:
        timeout_task.cancel()
        approval = approval_task.result

        # Step 6: Book if approved
        if approval.get("approved"):
            yield context.call_activity("book_trip", travel_plan)
        return TravelPlanResult(plan=travel_plan, approved=approval.get("approved"))

    return TravelPlanResult(plan=travel_plan, approved=False)
```

Notice how the orchestration combines:

- Agent calls (await agent.RunAsync(...)) for AI-driven decisions
- Parallel execution (Task.WhenAll) for running multiple agents concurrently
- Activity calls (await context.CallActivityAsync(...)) for non-intelligent business tasks
- Human-in-the-loop (await context.WaitForExternalEvent(...)) for approval workflows

The orchestration automatically checkpoints after each step. If a failure occurs, completed steps aren't re-executed. The orchestration resumes exactly where it left off, with no need for manual intervention.

Agent Patterns in Action

Agent Chaining: Sequential Handoffs
The Travel Planner demonstrates agent chaining, where the Destination Recommender's output feeds into both the Itinerary Planner and Local Recommendations agents:

C#:

```csharp
// Agent 1: Get destination recommendations
var destinations = await destinationAgent.RunAsync<DestinationRecommendations>(prompt, thread);
var topDestination = destinations.Result.Recommendations.First();

// Agent 2: Create itinerary based on Agent 1's output
var itinerary = await itineraryAgent.RunAsync<TravelItinerary>(
    $"Create itinerary for {topDestination.Name}", thread);
```

Python:

```python
# Agent 1: Get destination recommendations
destinations_result = yield destination_agent.run(
    messages=prompt, thread=thread,
    response_format=DestinationRecommendations
)
destinations = cast(DestinationRecommendations, destinations_result.value)
top_destination = destinations.recommendations[0]

# Agent 2: Create itinerary based on Agent 1's output
itinerary_result = yield itinerary_agent.run(
    messages=f"Create itinerary for {top_destination.destination_name}",
    thread=thread, response_format=Itinerary
)
itinerary = cast(Itinerary, itinerary_result.value)
```

Agent Parallelization: Concurrent Execution
The app runs the Itinerary Planner and Local Recommendations agents in parallel to reduce latency:

C#:

```csharp
// Launch both agent calls simultaneously
var itineraryTask = itineraryAgent.RunAsync<TravelItinerary>(itineraryPrompt, thread1);
var localTask = localAgent.RunAsync<LocalRecommendations>(localPrompt, thread2);

// Wait for both to complete
await Task.WhenAll(itineraryTask, localTask);
```

Python:

```python
# Launch both agent calls simultaneously
itinerary_task = itinerary_agent.run(
    messages=itinerary_prompt, thread=thread1, response_format=Itinerary
)
local_task = local_agent.run(
    messages=local_prompt, thread=thread2, response_format=LocalRecommendations
)

# Wait for both to complete
results = yield context.task_all([itinerary_task, local_task])
itinerary = cast(Itinerary, results[0].value)
local_recs = cast(LocalRecommendations, results[1].value)
```

Human-in-the-Loop: Approval Workflows
The Travel Planner includes a complete human-in-the-loop pattern. After generating the travel plan, the workflow pauses for user approval:

C#:

```csharp
// Send approval request notification
await context.CallActivityAsync(nameof(RequestApproval), travelPlan);

// Wait for approval - NO COMPUTE COSTS OR LLM TOKENS while waiting!
var approval = await context.WaitForExternalEvent<ApprovalResponse>(
    "ApprovalEvent", TimeSpan.FromDays(7));

if (approval.Approved)
    await context.CallActivityAsync(nameof(BookTrip), travelPlan);
```

Python:

```python
# Send approval request notification
yield context.call_activity("request_approval", travel_plan)

# Wait for approval - NO COMPUTE COSTS OR LLM TOKENS while waiting!
approval = yield context.wait_for_external_event("ApprovalEvent")

if approval.get("approved"):
    yield context.call_activity("book_trip", travel_plan)
```

The API endpoint to handle approval responses:

C#:

```csharp
[Function(nameof(HandleApprovalResponse))]
public async Task HandleApprovalResponse(
    [HttpTrigger("post", Route = "approve/{instanceId}")] HttpRequestData req,
    string instanceId,
    [DurableClient] DurableTaskClient client)
{
    var approval = await req.ReadFromJsonAsync<ApprovalResponse>();
    await client.RaiseEventAsync(instanceId, "ApprovalEvent", approval);
}
```

Python:

```python
@app.function_name(name="ApproveTravelPlan")
@app.route(route="travel-planner/approve/{instance_id}", methods=["POST"])
@app.durable_client_input(client_name="client")
async def approve_travel_plan(req: func.HttpRequest, client) -> func.HttpResponse:
    instance_id = req.route_params.get("instance_id")
    approval = req.get_json()
    await client.raise_event(instance_id, "ApprovalEvent", approval)
    return func.HttpResponse(
        json.dumps({"message": "Approval processed"}),
        status_code=200,
        mimetype="application/json"
    )
```

The workflow generates a complete travel plan and waits up to 7 days for user approval. During this entire waiting period, since we're hosting this application on the Functions Flex Consumption plan, the app scales down and zero compute resources or LLM tokens are consumed. When the user approves (or the timeout expires), the app scales back up and the orchestration automatically resumes with full context intact.

Real-Time Monitoring with the Durable Task Scheduler
Since we're using the Durable Task Scheduler as the backend for our durable agents, we get a built-in dashboard for monitoring our agents and orchestrations in real time.

Agent Thread Insights
- Conversation history: View complete conversation threads for each agent session, including all messages, tool calls, and agent decisions
- Task timing: Monitor how long specific tasks and agent interactions take to complete

Orchestration Insights
- Multi-agent visualization: See the execution flow across multiple agents with a visual representation of parallel executions and branching
- Real-time monitoring: Track active orchestrations, queued work items, and agent states
- Performance metrics: Monitor response times, token usage, and orchestration duration

Debugging Capabilities
- View structured inputs and outputs for activities, agents, and tool calls
- Trace tool invocations and their outcomes
- Monitor external event handling for human-in-the-loop scenarios

The dashboard lets you understand exactly what your agents and workflows are doing, diagnose issues quickly, and optimize performance, all without adding custom logging to your code.
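One piece the walkthrough doesn't show is the HTTP entry point that kicks off the orchestration when the frontend submits trip preferences. A minimal sketch follows, assuming the same Python v2 programming model as the snippets above; the route name is hypothetical and the actual sample may wire this up differently.

```python
import azure.functions as func

@app.function_name(name="StartTravelPlanner")
@app.route(route="travel-planner/start", methods=["POST"])
@app.durable_client_input(client_name="client")
async def start_travel_planner(req: func.HttpRequest, client) -> func.HttpResponse:
    # Schedule the orchestration with the user's trip preferences as input.
    travel_request = req.get_json()
    instance_id = await client.start_new(
        "travel_planner_orchestration", client_input=travel_request
    )
    # Return the standard status-query payload so the frontend can poll
    # (and later raise the "ApprovalEvent" against this instance ID).
    return client.create_check_status_response(req, instance_id)
```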
Try The Travel Planner Application
That's it! That's the gist of how the AI Travel Planner application is put together and some of the key components it took to build it. I'd love for you to try the application out for yourself. It's fully instrumented with the Azure Developer CLI and Bicep, so you can deploy it to Azure with a few simple CLI commands.

Click here to try the AI Travel Planner sample. Python version coming soon!

Demo Video

Learn More
- Durable Task Extension Overview
- Durable Agent Features
- Durable Task Scheduler
- AI Travel Planner GitHub Repository
Call Function App from Azure Data Factory with Managed Identity Authentication

Integrating Azure Function Apps into your Azure Data Factory (ADF) workflows is a common practice. To enhance security beyond the use of function API keys, managed identity authentication is strongly recommended. Because many existing guides have become outdated after recent updates to Azure services, this article provides a comprehensive, up-to-date walkthrough on configuring managed identity in ADF to securely call Function Apps. The methods described can also be adapted to other Azure services that need to call Function Apps with managed identity authentication.

The high-level process is:

1. Enable managed identity on the Data Factory
2. Configure Microsoft Entra sign-in on the Azure Function App
3. Configure a linked service in the Data Factory
4. Assign permissions to the Data Factory on the Azure Function App

Step 1: Enable Managed Identity on Data Factory
In the Data Factory portal, go to Managed Identities and enable a system-assigned managed identity.

Step 2: Configure Microsoft Entra Sign-in on Azure Function App
1. Go to the Function App portal and enable Authentication. Choose "Microsoft" as the identity provider.
2. Add an app registration to the app. It can be an existing one, or you can let the platform create a new registration.
3. Next, allow ADF to authenticate to the function app as a client application. This step is a new requirement not covered in older guides; if these settings are not correctly set, a 403 response is returned. Add the application ID of the ADF managed identity to Allowed client applications and the object ID of the ADF managed identity to Allowed identities. If requests should only be allowed from specific tenants, add the tenant ID of the managed identity in the last box.
4. This part sets the response the function app returns for unauthenticated requests. Set it to "HTTP 401 Unauthorized: recommended for APIs", since a sign-in page is not feasible for API calls from ADF.
5. Then click Next and use the default permission option.
6. After everything is set, click "Add" to complete the configuration. Copy the generated App (client) ID, as it is used in the Data Factory to handle authorization.

Step 3: Configure Linked Service in Data Factory
1. To use an Azure Function activity in a pipeline, follow the steps here: Create an Azure Function activity with UI.
2. Then edit or create an Azure Function linked service.
3. Change the authentication method to System Assigned Managed Identity, and paste the client ID of the function app's identity provider copied in Step 2 into Resource ID. This step is required; authorization does not work without it.

Step 4: Assign Permissions to the Data Factory in Azure Function
1. In the function app portal, go to Access control (IAM) and add a new role assignment.
2. Assign the Reader role.
3. Assign the Data Factory's managed identity to that role.

After everything is set, test that the function app can be called from Azure Data Factory successfully. (A quick way to simulate the call outside ADF is sketched after the references below.)

Reference:
https://prodata.ie/2022/06/16/enabling-managed-identity-authentication-on-azure-functions-in-data-factory/
https://learn.microsoft.com/en-us/azure/data-factory/control-flow-azure-function-activity
https://docs.azure.cn/en-us/app-service/overview-authentication-authorization
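As referenced in Step 4, here's a minimal sketch of simulating the call outside ADF: acquire a token for the function app's Entra app registration (the client ID copied in Step 2) and send it as a bearer token, which is essentially what ADF does for you. The URL and route are placeholders, and the exact scope string depends on how the app registration exposes its identifier URI; it may be "<client-id>/.default" rather than "api://<client-id>/.default".

```python
# Hedged sketch: call an Entra-protected function the way ADF's managed
# identity does - with a bearer token for the app registration.
import requests
from azure.identity import DefaultAzureCredential

APP_REG_CLIENT_ID = "<client-id-copied-in-step-2>"  # placeholder
FUNCTION_URL = "https://<your-function-app>.azurewebsites.net/api/<function>"  # placeholder

# DefaultAzureCredential picks up a managed identity when run in Azure,
# or your developer credentials when run locally.
credential = DefaultAzureCredential()
token = credential.get_token(f"api://{APP_REG_CLIENT_ID}/.default")

response = requests.post(
    FUNCTION_URL,
    headers={"Authorization": f"Bearer {token.token}"},
    json={"ping": "from-test-client"},
)
print(response.status_code, response.text)
```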
Industry-Wide Certificate Changes Impacting Azure App Service Certificates

Executive Summary
In early 2026, industry-wide changes mandated by browser applications and the CA/B Forum will affect both how TLS certificates are issued and their validity period. The CA/B Forum is a vendor body that establishes standards for securing websites and online communications through SSL/TLS certificates. Azure App Service is aligning with these standards for both App Service Managed Certificates (ASMC, free, DigiCert-issued) and App Service Certificates (ASC, paid, GoDaddy-issued). Most customers will experience no disruption. Action is required only if you pin certificates or use them for client authentication (mTLS).

Who Should Read This?
- App Service administrators
- Security and compliance teams
- Anyone responsible for certificate management or application security

Quick Reference: What's Changing & What To Do

| Topic | ASMC (Managed, free) | ASC (GoDaddy, paid) | Required Action |
|-------|----------------------|---------------------|-----------------|
| New Cert Chain | New chain (no action unless pinned) | New chain (no action unless pinned) | Remove certificate pinning |
| Client Auth EKU | Not supported (no action unless cert is used for mTLS) | Not supported (no action unless cert is used for mTLS) | Transition from mTLS |
| Validity | No change (already compliant) | Two overlapping certs issued for the full year | None (automated) |

If you do not pin certificates or use them for mTLS, no action is required.

Timeline of Key Dates

| Date | Change | Action Required |
|------|--------|-----------------|
| Mid-Jan 2026 and after | ASMC migrates to new chain; ASMC stops supporting client auth EKU | Remove certificate pinning if used; transition to alternative authentication if the certificate is used for mTLS |
| Mar 2026 and after | ASC validity shortened; ASC migrates to new chain; ASC stops supporting client auth EKU | Remove certificate pinning if used; transition to alternative authentication if the certificate is used for mTLS |

Actions Checklist

For All Users
Review your use of App Service certificates. If you do not pin these certificates and do not use them for mTLS, no action is required.

If You Pin Certificates (ASMC or ASC)
Remove all certificate or chain pinning before the respective key change dates to avoid service disruption. See Best Practices: Certificate Pinning.

If You Use Certificates for Client Authentication (mTLS)
Switch to an alternative authentication method before the respective key change dates to avoid service disruption, as the client authentication EKU will no longer be supported for these certificates. See Sunsetting the client authentication EKU from DigiCert public TLS certificates. See Set Up TLS Mutual Authentication - Azure App Service.

Details & Rationale

Why Are These Changes Happening?
These updates are required by major browser programs (e.g., Chrome) and apply to all public CAs. They are designed to enhance security and compliance across the industry. Azure App Service is automating updates to minimize customer impact.

What's Changing?

New Certificate Chain
Certificates will be issued from a new chain to maintain browser trust.
Impact: Remove any certificate pinning to avoid disruption.

Removal of Client Authentication EKU
Newly issued certificates will not support the client authentication EKU. This change aligns with Google Chrome's root program requirements to enhance security.
Impact: If you use these certificates for mTLS, transition to an alternate authentication method.

Shortening of Certificate Validity
Certificate validity is now limited to a maximum of 200 days.
Impact: ASMC is already compliant; ASC will automatically issue two overlapping certificates to cover one year. No billing impact.
Frequently Asked Questions (FAQs)

Will I lose coverage due to shorter validity?
No. For App Service Certificate, App Service will issue two certificates to span the full year you purchased.

Is this unique to DigiCert and GoDaddy?
No. This is an industry-wide change.

Do these changes impact certificates from other CAs?
Yes. These changes are industry-wide. We recommend you reach out to your certificates' CA for more information.

Do I need to act today?
If you do not pin or use these certs for mTLS, no action is required.

Glossary
- ASMC: App Service Managed Certificate (free, DigiCert-issued)
- ASC: App Service Certificate (paid, GoDaddy-issued)
- EKU: Extended Key Usage
- mTLS: Mutual TLS (client certificate authentication)
- CA/B Forum: Certification Authority/Browser Forum

Additional Resources
- Changes to the Managed TLS Feature
- Set Up TLS Mutual Authentication
- Azure App Service Best Practices: Certificate pinning
- DigiCert Root and Intermediate CA Certificate Updates 2023
- Sunsetting the client authentication EKU from DigiCert public TLS certificates

Feedback & Support
If you have questions or need help, please visit our official support channels or Microsoft Q&A, where our team and the community can assist you.
[Design Pattern] Handling race conditions and state in serverless data pipelines

Hello community,

I recently faced a tricky data engineering challenge involving a lot of Parquet files (about 2 million records) that needed to be ingested, transformed, and split into different entities. The hard part wasn't the volume, but the logic. We needed to generate globally unique, sequential IDs for specific columns while keeping the execution time under two hours. We were restricted to using only Azure Functions, ADF, and Storage.

This created a conflict: we needed parallel processing to meet the time limit, but parallel processing usually breaks sequential ID generation due to race conditions on the counters. I documented the three architecture patterns we tested to solve this:

1. Sequential processing with ADF (safe, but failed the 2-hour time limit).
2. Parallel processing with external locking/e-tags on Table Storage (too complex, and we still hit issues with inserts).
3. A "Fan-Out/Fan-In" pattern using Azure Durable Functions and Durable Entities.

We ended up going with Durable Entities. Since they act as stateful actors, they allowed us to handle the ID counter state sequentially in memory while the heavy lifting (transformation) ran in parallel. It solved the race condition issue without killing performance (a simplified sketch of the counter entity is at the end of this post).

I wrote a detailed breakdown of the logic and trade-offs here if anyone is interested in the implementation details: https://medium.com/@yahiachames/data-ingestion-pipeline-a-data-engineers-dilemma-and-azure-solutions-7c4b36f11351

I am curious whether others have used Durable Entities for this kind of ETL work, or if you usually rely on an external database sequence to handle ID generation in serverless setups?

Thanks,
Chameseddine
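P.S. For anyone curious, here's roughly what the counter entity looks like. This is a simplified sketch using the Python durable entities API; the operation and entity names are illustrative, and a real version would add error handling.

```python
import azure.durable_functions as df

app = df.DFApp()

# Sketch: a Durable Entity that hands out sequential ID ranges.
# Operations against one entity key run one at a time, so there is
# no race condition even with many orchestrations calling in parallel.
@app.entity_trigger(context_name="context")
def id_counter(context: df.DurableEntityContext):
    current = context.get_state(lambda: 0)  # last ID handed out so far
    if context.operation_name == "reserve_batch":
        batch_size = context.get_input()
        context.set_result(current + 1)       # first ID in the reserved range
        context.set_state(current + batch_size)
    elif context.operation_name == "get":
        context.set_result(current)
```

An orchestration would then reserve a block per parallel worker with something like `first_id = yield context.call_entity(df.EntityId("id_counter", "global"), "reserve_batch", 1000)`, so each worker stamps its records locally without touching shared state again.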
Pantone's Palette Generator enhances creative exploration with agentic AI on Azure

Color can be powerful. When creative professionals shape the mood and direction of their work, color plays a vital role because it provides context and cues for the end product or creation. For more than 60 years, creatives from all areas of design, including fashion, product, and digital, have turned to Pantone color guides to translate inspiration into precise, reproducible color choices. These guides offer a shared language for colors, as well as inspiration and communication across industries. Once rooted in physical tools, Pantone has evolved to meet the needs of modern creators through its trend forecasting, consulting services, and digital platform. Today, Pantone Connect and its multi-agent solution, the Pantone Palette Generator, seamlessly bring color inspiration and accuracy into everyday design workflows (as well as the New York City mayoral race). Simply by typing in a prompt, designers can generate palettes in seconds. Available in Pantone Connect, the tool uses Azure services like Microsoft Foundry, Azure AI Search, and Azure Cosmos DB to serve up the company's vast collection of trend and color research from the color experts at the Pantone Color Institute.

"...reached in seconds instead of days. Now, with Microsoft Foundry, creatives can use agents to get instant color palettes and suggestions based on human insights and trend direction."

Turning Pantone's color legacy into an AI offering
The Palette Generator accelerates the process of researching colors and helps designers find inspiration or validate their ideas through trend-backed research. "Pantone wants to be where our customers are," says Rohani Jotshi, Director of Software Engineering and Data at Pantone. "As workflows become increasingly digital, we wanted to give our customers a way to find inspiration while keeping the same level of accuracy and trust they expect from Pantone." The Palette Generator taps into thousands of articles from Pantone's Color Insider library, as well as trend guides and physical color books, in a way that preserves the company's color standards science while streamlining the creative process. Built entirely on Microsoft Foundry, the solution uses Azure AI Search for agentic retrieval-augmented generation (RAG) and Azure OpenAI in Foundry Models to reason over the data. It quickly serves up palette options in response to questions like "Show me soft pastels for an eco-friendly line of baby clothes" or "I want to see vibrant metallics for next spring."

Over the course of two months, the Pantone team built the initial proof of concept for the Palette Generator, using GitHub Copilot to streamline the process and save over 200 hours of work across multiple sprints. This allowed Pantone's engineers to focus on improving prompt engineering, adding new agent capabilities, and refining orchestration logic rather than writing repetitive code.

Building a multi-agent architecture that accelerates creativity
The Pantone team worked with Microsoft to develop the multi-agent architecture, which is made up of three connected agents. Using Microsoft Agent Framework, an open source development kit for building AI orchestration systems, it was a straightforward process to bring the agents together into one workflow. "The Microsoft team recommended Microsoft Agent Framework, and when we tried it, we saw how it was extremely fast and easy to create architectural patterns," says Kristijan Risteski, Solutions Architect at Pantone.
"With Microsoft Agent Framework, we can spin up a model in five lines of code to connect our agents."

When a user types in a question, they interact with an orchestrator agent that routes prompts and coordinates the more specialized agents. Behind the scenes, an additional agent retrieves contextually relevant insights from Pantone's proprietary Color Insider dataset. Using Azure AI Search with vectorized data indexing, this agent interprets the semantics of a user's query rather than relying solely on keywords. A third agent then applies rules from color science to assemble a balanced palette. This agent ensures the output is a color combination that meets harmony, contrast, and accessibility standards. The result is a set of Pantone-curated colors that match the emotional and aesthetic tone of the request. "All of this happens in seconds," says Risteski.

To manage conversation flow and achieve long-term data persistence, Pantone uses Azure Cosmos DB, which stores user sessions, prompts, and results. The database not only enables designers to revisit past palette explorations but also provides Pantone with valuable usage intelligence to refine the system over time. "We use Azure Cosmos DB to track inputs and outputs," says Risteski. "That data helps us fine-tune prompts, measure engagement, and plan how we'll train future models."

Improving accuracy and performance with Azure AI Search
With Azure AI Search, the Palette Generator can understand the nuance of color language. Instead of relying solely on keyword searches that might miss the complexity of words like "vibrant" or "muted," Pantone's team decided to use a vectorized index for more accurate palette results. Using the built-in vectorization capability of Azure AI Search, the team converted their color knowledge base, including text-based color psychology and trend articles, into numerical embeddings. "Overall, vector search gave us better results because it could understand the intent of the prompt, not just the words," says Risteski. "If someone types, 'Show me colors that feel serene and oceanic,' the system understands intent. It finds the right references across our color psychology and trend archives and delivers them instantly."

The team also found ways to reduce latency as they evolved their proof of concept. Initially, they encountered slow inference times and performance lags when retrieving search results. Switching from GPT-4.1 to GPT-5 improved latency, and using Azure AI Search to manage ranking and filtering of results helped reduce the number of calls to the large language model (LLM). "With Azure, we just get the articles, put them in a bucket, and say 'index it now,'" says Risteski. "It takes one or two minutes, and that's it. The results are so much better than traditional search."
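To make the pattern concrete, here's a minimal sketch of the kind of semantic query described above, using the Azure AI Search Python SDK's integrated vectorization (where the service embeds the query text itself). The endpoint, index, and field names are hypothetical, not Pantone's actual schema.

```python
# Hedged sketch: hybrid keyword + vector retrieval over a vectorized index.
from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizableTextQuery

client = SearchClient(
    endpoint="https://<your-search>.search.windows.net",  # placeholder
    index_name="color-insights",                          # placeholder
    credential=DefaultAzureCredential(),
)

query = "colors that feel serene and oceanic"
results = client.search(
    search_text=query,  # keyword leg of the hybrid query
    vector_queries=[
        VectorizableTextQuery(
            text=query,               # service vectorizes this at query time
            k_nearest_neighbors=5,
            fields="content_vector",  # placeholder vector field name
        )
    ],
    top=5,
)
for doc in results:
    print(doc["title"])  # assumes a 'title' field in the index
```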
Moving from inspiration to palettes faster
The Palette Generator has transformed how designers and color enthusiasts interact with Pantone's expertise. What once took weeks of research and review can now be done in seconds. "Typically, if someone wanted to develop a palette for a product launch, it might take many months of research," says Jotshi. "Now, they can type one sentence to describe their inspiration, then immediately find Pantone-backed insight and options. Human curation will still be hugely important, but a strong set of starting options can significantly accelerate the palette development process."

Expanding the palette: The next phase for Pantone's design agent
Rapidly launching the Palette Generator in beta has redefined what the Pantone engineering team thought was possible. "We're a small development team, but with Azure we built an enterprise-grade AI system in a matter of weeks," says Risteski. "That's a huge win for us." Next up, the team plans to migrate the entire orchestration layer to Azure Functions, moving to a fully scalable, serverless deployment. This will allow Pantone to run its agents more efficiently, handle variable workloads automatically, and integrate seamlessly with other Azure products such as Microsoft Foundry and Azure Cosmos DB. At the same time, Pantone plans to expand its multi-agent system with new specialized agents, including one focused on palette harmony and another on trend prediction.
Important Changes to App Service Managed Certificates: Is Your Certificate Affected?

Overview
As part of an upcoming industry-wide change, DigiCert, the Certificate Authority (CA) for Azure App Service Managed Certificates (ASMC), is required to migrate to a new validation platform to meet multi-perspective issuance corroboration (MPIC) requirements. While most certificates will not be impacted by this change, certain site configurations and setups may prevent certificate issuance or renewal starting July 28, 2025.

Update

December 8, 2025
We published an update in November about how App Service Managed Certificates can now be supported on sites that block public access. This reverses the limitation introduced in July 2025, as described in this blog. Note: This blog post reflects a point-in-time update and will not be revised. For the latest and most accurate details on App Service Managed Certificates, please refer to official documentation or subsequent updates. Learn more about the November 2025 update here: Follow-Up to "Important Changes to App Service Managed Certificates": November 2025 Update.

August 5, 2025
We've published a Microsoft Learn documentation page titled App Service Managed Certificate (ASMC) changes - July 28, 2025 that contains more in-depth mitigation guidance and a growing FAQ section to support the changes outlined in this blog post. While the blog currently contains the most complete overview, the documentation will soon be updated to reflect all blog content. Going forward, any new information or clarifications will be added to the documentation page, so we recommend bookmarking it for the latest guidance.

What Will the Change Look Like?
For most customers: No disruption. Certificate issuance and renewals will continue as expected for eligible site configurations.
For impacted scenarios: Certificate requests will fail (no certificate issued) starting July 28, 2025, if your site configuration is not supported. Existing certificates will remain valid until their expiration (up to six months after the last renewal).

Impacted Scenarios
You will be affected by this change if any of the following apply to your site configuration:

Your site is not publicly accessible: Public accessibility to your app is required. If your app is only accessible privately (e.g., requiring a client certificate for access, disabling public network access, using private endpoints or IP restrictions), you will not be able to create or renew a managed certificate. Other site configurations or setup methods not explicitly listed here that restrict public access, such as firewalls, authentication gateways, or any custom access policies, can also impact eligibility for managed certificate issuance or renewal.
Action: Ensure your app is accessible from the public internet. If you need to limit access to your app, you must acquire your own SSL certificate and add it to your site.

Your site uses Azure Traffic Manager "nested" or "external" endpoints: Only "Azure Endpoints" on Traffic Manager will be supported for certificate creation and renewal. "Nested endpoints" and "External endpoints" will not be supported.
Action: Transition to "Azure Endpoints". If you cannot, you must obtain a different SSL certificate for your domain and add it to your site.

Your site relies on a *.trafficmanager.net domain: Certificates for *.trafficmanager.net domains will not be supported for creation or renewal.
Action: Add a custom domain to your app and point the custom domain to your *.trafficmanager.net domain.
After that, secure the custom domain with a new SSL certificate. If none of the above applies, no further action is required.

How to Identify Impacted Resources?
To assist with the upcoming changes, you can use Azure Resource Graph (ARG) queries to help identify resources that may be affected under each scenario. Please note that these queries are provided as a starting point and may not capture every configuration. Review your environment for any unique setups or custom configurations.

Scenario 1: Sites Not Publicly Accessible
This ARG query retrieves a list of sites that either have the public network access property disabled or are configured to use client certificates. It then filters for sites that are using App Service Managed Certificates (ASMC) for their custom hostname SSL bindings. These certificates are the ones that could be affected by the upcoming changes. However, this query does not provide complete coverage, as there may be additional configurations impacting public access to your app that are not included here. It serves as a helpful guide, but a thorough review of your environment is recommended. You can copy this query, paste it into Azure Resource Graph Explorer, and then click "Run query" to view the results for your environment.

```kusto
// ARG Query: Identify App Service sites that commonly restrict public access and use ASMC for custom hostname SSL bindings
resources
| where type == "microsoft.web/sites"
// Extract relevant properties for public access and client certificate settings
| extend
    publicNetworkAccess = tolower(tostring(properties.publicNetworkAccess)),
    clientCertEnabled = tolower(tostring(properties.clientCertEnabled))
// Filter for sites that either have public network access disabled
// or have client certificates enabled (both can restrict public access)
| where publicNetworkAccess == "disabled" or clientCertEnabled != "false"
// Expand the list of SSL bindings for each site
| mv-expand hostNameSslState = properties.hostNameSslStates
| extend
    hostName = tostring(hostNameSslState.name),
    thumbprint = tostring(hostNameSslState.thumbprint)
// Only consider custom domains (exclude default *.azurewebsites.net) and sites with an SSL certificate bound
| where tolower(hostName) !endswith "azurewebsites.net" and isnotempty(thumbprint)
// Select key site properties for output
| project siteName = name, siteId = id, siteResourceGroup = resourceGroup, thumbprint, publicNetworkAccess, clientCertEnabled
// Join with certificates to find only those using App Service Managed Certificates (ASMC)
// ASMCs are identified by the presence of the "canonicalName" property
| join kind=inner (
    resources
    | where type == "microsoft.web/certificates"
    | extend
        certThumbprint = tostring(properties.thumbprint),
        canonicalName = tostring(properties.canonicalName) // Only ASMC uses the "canonicalName" property
    | where isnotempty(canonicalName)
    | project certName = name, certId = id, certResourceGroup = tostring(properties.resourceGroup), certExpiration = properties.expirationDate, certThumbprint, canonicalName
) on $left.thumbprint == $right.certThumbprint
// Final output: sites with restricted public access and using ASMC for custom hostname SSL bindings
| project siteName, siteId, siteResourceGroup, publicNetworkAccess, clientCertEnabled, thumbprint, certName, certId, certResourceGroup, certExpiration, canonicalName
```

Scenario 2: Traffic Manager Endpoint Types
For this scenario, please manually review your Traffic Manager profile configurations to ensure only "Azure Endpoints" are in use.
We recommend inspecting your Traffic Manager profiles directly in the Azure portal or using the relevant APIs to confirm your setup and ensure compliance with the new requirements.

Scenario 3: Certificates Issued to *.trafficmanager.net Domains
This ARG query helps you identify App Service Managed Certificates (ASMC) that were issued to *.trafficmanager.net domains. In addition, it checks whether any web apps are currently using those certificates for custom domain SSL bindings. You can copy this query, paste it into Azure Resource Graph Explorer, and then click "Run query" to view the results for your environment.

```kusto
// ARG Query: Identify App Service Managed Certificates (ASMC) issued to *.trafficmanager.net domains
// Also checks if any web apps are currently using those certificates for custom domain SSL bindings
resources
| where type == "microsoft.web/certificates"
// Extract the certificate thumbprint and canonicalName (ASMCs have a canonicalName property)
| extend
    certThumbprint = tostring(properties.thumbprint),
    canonicalName = tostring(properties.canonicalName) // Only ASMC uses the "canonicalName" property
// Filter for certificates issued to *.trafficmanager.net domains
| where canonicalName endswith "trafficmanager.net"
// Select key certificate properties for output
| project certName = name, certId = id, certResourceGroup = tostring(properties.resourceGroup), certExpiration = properties.expirationDate, certThumbprint, canonicalName
// Join with web apps to see if any are using these certificates for SSL bindings
| join kind=leftouter (
    resources
    | where type == "microsoft.web/sites"
    // Expand the list of SSL bindings for each site
    | mv-expand hostNameSslState = properties.hostNameSslStates
    | extend
        hostName = tostring(hostNameSslState.name),
        thumbprint = tostring(hostNameSslState.thumbprint)
    // Only consider bindings for *.trafficmanager.net custom domains with a certificate bound
    | where tolower(hostName) endswith "trafficmanager.net" and isnotempty(thumbprint)
    // Select key site properties for output
    | project siteName = name, siteId = id, siteResourceGroup = resourceGroup, thumbprint
) on $left.certThumbprint == $right.thumbprint
// Final output: ASMCs for *.trafficmanager.net domains and any web apps using them
| project certName, certId, certResourceGroup, certExpiration, canonicalName, siteName, siteId, siteResourceGroup
```

Ongoing Updates
We will continue to update this post with any new queries or important changes as they become available. Be sure to check back for the latest information.

Note on Comments
We hope this information helps you navigate the upcoming changes. To keep this post clear and focused, comments are closed. If you have questions, need help, or want to share tips or alternative detection methods, please visit our official support channels or Microsoft Q&A, where our team and the community can assist you.
Follow-Up to "Important Changes to App Service Managed Certificates": November 2025 Update

This post provides an update to the Tech Community article "Important Changes to App Service Managed Certificates: Is Your Certificate Affected?" and covers the latest changes introduced since July 2025. With the November 2025 update, ASMC now remains supported even if the site is not publicly accessible, provided all other requirements are met. Details on requirements, exceptions, and validation steps are included below.

Background Context to July 2025 Changes
As of July 2025, all ASMC certificate issuance and renewals use HTTP token validation. Previously, public access was required because DigiCert needed to reach the endpoint https://<hostname>/.well-known/pki-validation/fileauth.txt to verify the token before issuing the certificate. App Service automatically places this token during certificate creation and renewal. If DigiCert cannot access this endpoint, domain ownership validation fails and the certificate cannot be issued.

November 2025 Update
Starting November 2025, App Service allows DigiCert's requests to the https://<hostname>/.well-known/pki-validation/fileauth.txt endpoint even if the site blocks public access. When there's a request to create an App Service Managed Certificate (ASMC), App Service places the domain validation token at the validation endpoint. When DigiCert tries to reach the validation endpoint, App Service front ends present the token, and the request terminates at the front-end layer. DigiCert's request does not reach the workers running the application. This behavior is now the default for ASMC issuance, for both initial certificate creation and renewals. Customers do not need to explicitly allow DigiCert's IP addresses.

Exceptions and Unsupported Scenarios
This update addresses most scenarios that restrict public access, including App Service Authentication, disabled public access, IP restrictions, private endpoints, and client certificates. However, a public DNS record is still required. For example, sites using a private endpoint with a custom domain on a private DNS zone cannot validate domain ownership and obtain a certificate.

Even with all validation now relying on HTTP token validation and DigiCert's requests being allowed through, certain configurations are still not supported for ASMC:

- Sites configured as "Nested" or "External" endpoints behind Traffic Manager. Only "Azure" endpoints are supported.
- Certificates requested for domains ending in *.trafficmanager.net.

Testing
Customers can easily test whether their site's configuration supports ASMC by attempting to create one for the site. If the initial request succeeds, renewals should also work, provided all requirements continue to be met and the site is not in an unsupported scenario.
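For readers who want to script that test, creating an ASMC boils down to a PUT on a Microsoft.Web/certificates resource carrying the canonicalName property mentioned in the earlier post. Below is a hedged sketch using the Python management SDK; all names and IDs are placeholders, and the exact model fields and operation name should be checked against the azure-mgmt-web version you have installed.

```python
# Hedged sketch: request an App Service Managed Certificate via the
# management SDK to test whether your site's configuration is eligible.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import Certificate

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

cert = client.certificates.create_or_update(
    resource_group_name="<resource-group>",
    name="www.contoso.com-asmc",  # arbitrary resource name
    certificate_envelope=Certificate(
        location="<region>",
        canonical_name="www.contoso.com",  # the custom domain to validate
        server_farm_id=(
            "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
            "/providers/Microsoft.Web/serverfarms/<plan>"
        ),
    ),
)
print(cert.thumbprint)  # populated if issuance succeeded
```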