Azure Functions
Collaborative Function App Development Using Repo Branches
In this example, I demonstrate a Windows-based Function App using PowerShell, with deployment via Azure DevOps (ADO) and a Bicep template. Local development is done in VS Code.

Scenario: Your Function App project resides in a shared repository maintained by a team. Each developer works on a separate branch. Whenever a branch is updated, the Function App is deployed to a slot named after that branch. If the slot doesn't exist, it is created automatically.

How to use it:

1. Create a Function App. You can create it using any method of your choice.

2. Prepare a corresponding repo in Azure DevOps. Set up your repo structure for the Function App source code.

3. Create the Function App code using the VS Code wizard. In this example, we use PowerShell and create an anonymous HTTP trigger, then manually add three additional files: deploy.yml, deploy-master.bicep, and deploy.bicep.

deploy.yml:

```yaml
trigger:
  branches:
    include:
      - '*'

pool:
  vmImage: 'ubuntu-latest'

variables:
  azureSubscription: '<YOUR_CONNECTION_STRING_FROM_ADO>'
  functionAppName: '<YOUR_FUNCTION_APP_NAME>'
  resourceGroup: '<YOUR_RG_NAME>'
  location: '<YOUR_LOCATION_NAME>'

steps:
  - checkout: self

  - task: AzureCLI@2
    name: DeploySlotInfra
    inputs:
      azureSubscription: $(azureSubscription)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        BRANCH_NAME=$(Build.SourceBranchName)
        if [ "$BRANCH_NAME" = "master" ]; then
          echo "##[command]Deploying production infrastructure"
          az deployment group create \
            --resource-group $(resourceGroup) \
            --template-file deploy-master.bicep \
            --parameters functionAppName=$(functionAppName) location=$(location)
        else
          SLOT_NAME="$BRANCH_NAME"
          echo "##[command]Deploying slot: $SLOT_NAME"
          az deployment group create \
            --resource-group $(resourceGroup) \
            --template-file deploy.bicep \
            --parameters functionAppName=$(functionAppName) slotName=$SLOT_NAME location=$(location)
        fi

  - task: ArchiveFiles@2
    displayName: 'Package Function App as ZIP'
    inputs:
      rootFolderOrFile: '$(System.DefaultWorkingDirectory)/'
      includeRootFolder: false
      archiveType: zip
      archiveFile: '$(Build.ArtifactStagingDirectory)/functionapp.zip'
      replaceExistingArchive: true

  - task: AzureCLI@2
    name: ZipDeploy
    inputs:
      azureSubscription: $(azureSubscription)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        BRANCH_NAME=$(Build.SourceBranchName)
        if [ "$BRANCH_NAME" = "master" ]; then
          echo "##[command]Deploying code to production"
          az functionapp deployment source config-zip \
            --name $(functionAppName) \
            --resource-group $(resourceGroup) \
            --src "$(Build.ArtifactStagingDirectory)/functionapp.zip"
        else
          SLOT_NAME="$BRANCH_NAME"
          echo "##[command]Deploying code to slot: $SLOT_NAME"
          az functionapp deployment source config-zip \
            --name $(functionAppName) \
            --resource-group $(resourceGroup) \
            --slot $SLOT_NAME \
            --src "$(Build.ArtifactStagingDirectory)/functionapp.zip"
        fi
```

Replace all <YOUR_XXX> placeholders with values relevant to your environment. Additionally, update the two instances of "master" to match your repo's default branch name (e.g., main), as updates from this branch always deploy to the production slot.
deploy-master.bicep:

```bicep
@description('Function App Name')
param functionAppName string

@description('Function App location')
param location string

resource functionApp 'Microsoft.Web/sites@2022-09-01' existing = {
  name: functionAppName
}

resource appSettings 'Microsoft.Web/sites/config@2022-09-01' = {
  name: 'appsettings'
  parent: functionApp
  properties: {
    FUNCTIONS_EXTENSION_VERSION: '~4'
  }
}
```

deploy.bicep:

```bicep
@description('Function App Name')
param functionAppName string

@description('Slot Name (e.g., dev, test, feature-xxx)')
param slotName string

@description('Function App location')
param location string

resource functionApp 'Microsoft.Web/sites@2022-09-01' existing = {
  name: functionAppName
}

resource functionSlot 'Microsoft.Web/sites/slots@2022-09-01' = {
  name: slotName
  parent: functionApp
  location: location
  properties: {
    serverFarmId: functionApp.properties.serverFarmId
  }
}

resource slotAppSettings 'Microsoft.Web/sites/slots/config@2022-09-01' = {
  name: 'appsettings'
  parent: functionSlot
  properties: {
    FUNCTIONS_EXTENSION_VERSION: '~4'
  }
}
```

4. Deploy from the master branch. Once deployed, the HTTP trigger becomes active in the production slot and can be accessed via: https://<FUNCTION_APP_NAME>.azurewebsites.net/api/<TRIGGER_NAME>

5. Switch to a custom branch such as member1 and create a test HTTP trigger. After publishing, a new deployment slot named member1 is created (if it doesn't already exist). You can open it in the Azure portal and view its dedicated interface. The branch-specific HTTP trigger is then available at: https://<FUNCTION_APP_NAME>-<BRANCH_NAME>.azurewebsites.net/api/<TRIGGER_NAME>

Notice: Using deployment slots for collaborative development is subject to slot count and SKU limits; for example, the Premium SKU supports up to 20 slots. See Azure subscription and service limits, quotas, and constraints - Azure Resource Manager | Microsoft Learn for details. If you need to delete a slot after use, you can do so using PowerShell with the Remove-AzWebAppSlot command: Remove-AzWebAppSlot (Az.Websites) | Microsoft Learn

Azure API Management: Your Auth Gateway For MCP Servers
The Model Context Protocol (MCP) is quickly becoming the standard for integrating tools 🛠️ with agents 🤖, and Azure API Management is at the forefront, ready to support this open-source protocol 🚀.

You may have already encountered discussions about MCP, so let's clarify some key concepts:

- Model Context Protocol (MCP) is a standardized way (a protocol) for AI models to interact with external tools (reading data or performing actions) and to enrich context for any language model.
- AI Agents/Assistants are autonomous LLM-powered applications with the ability to use tools to connect to external services required to accomplish tasks on behalf of users.
- Tools are components made available to agents, allowing them to interact with external systems, perform computation, and take actions to achieve specific goals.
- Azure API Management: as a platform-as-a-service, API Management supports the complete API lifecycle, enabling organizations to create, publish, secure, and analyze APIs with built-in governance, security, analytics, and scalability.

New Cool Kid in Town - MCP

AI agents are becoming widely adopted due to enhanced Large Language Model (LLM) capabilities. However, even the most advanced models face limitations due to their isolation from external data. Each new data source traditionally required a custom implementation to extract, prepare, and make data accessible to the model(s): a lot of heavy lifting.

Anthropic developed an open-source standard, the Model Context Protocol (MCP), to connect your agents to external data sources, whether local (databases or computer files) or remote services (systems available over the internet, e.g., through APIs).

At its core, MCP follows a client-server architecture in which a host application can connect to multiple servers:

- MCP Hosts: LLM applications, such as chat apps or AI assistants in your IDE (like GitHub Copilot in VS Code), that need to access external capabilities
- MCP Clients: protocol clients inside the host application that maintain 1:1 connections with servers
- MCP Servers: lightweight programs that each expose specific capabilities and provide context, tools, and prompts to clients
- MCP Protocol: the transport layer in the middle

Whenever your MCP host or client needs a tool, it connects to the MCP server. The MCP server then connects to, for example, a database or an API. MCP hosts and servers communicate with each other through the MCP protocol. You can create your own custom MCP servers that connect to your own or organizational data sources. For a quick start, please visit our GitHub repository to learn how to build a remote MCP server using Azure Functions without authentication: https://aka.ms/mcp-remote

Remote vs. Local MCP Servers

The MCP standard supports two modes of operation:

- Remote MCP servers: MCP clients connect to MCP servers over the internet, establishing a connection using HTTP and Server-Sent Events (SSE), and authorizing the MCP client's access to resources on the user's account using OAuth.
- Local MCP servers: MCP clients connect to MCP servers on the same machine, using stdio as a local transport method.

Azure API Management as the AI Auth Gateway

Now that we know MCP servers can connect to remote services through an API, the question arises: how can we expose our remote MCP servers in a secure and scalable way? This is where Azure API Management comes in: a way to securely and safely expose tools as MCP servers.
Azure API Management provides:

- Security: AI agents often need to access sensitive data. API Management, acting as a remote MCP proxy, safeguards organizational data through authentication and authorization.
- Scalability: as the number of LLM interactions and external tool integrations grows, API Management ensures the system can handle the load.

Security remains a critical piece of building MCP servers, as agents need to securely connect to protected endpoints (tools) to perform certain actions or read protected data. When building remote MCP servers, you need a way to let users log in (authentication) and to let them grant the MCP client access to resources on their account (authorization).

MCP - Current Authorization Challenges (state: 4/10/2025)

Recent changes in MCP authorization have sparked significant debate within the community.

🔍 Key challenges with the authorization changes: the MCP server is now treated as both a resource server AND an authorization server. This dual role has fundamental implications for MCP server developers and runtime operations.

💡 Our solution: to address these challenges, we recommend using Azure API Management as your authorization gateway for remote MCP servers.

🔗 For an enterprise-ready solution, please check out our azd up sample repo to learn how to build a remote MCP server using Azure API Management as your authentication gateway: https://aka.ms/mcp-remote-apim-auth

The Authorization Flow

The workflow involves three core components: the MCP client, the APIM gateway, and the MCP server, with Microsoft Entra managing authentication (AuthN) and authorization (AuthZ). Using the OAuth protocol, the client starts by calling the APIM gateway, which redirects the user to Entra for login and consent. Once authenticated, Entra provides an access token to the gateway, which then exchanges a code with the client to generate an MCP server token. This token allows the client to communicate securely with the server via the gateway, ensuring user validation and scope verification. Finally, the MCP server establishes a session key for ongoing communication through a dedicated message endpoint.

Diagram source: https://aka.ms/mcp-remote-apim-auth-diagram

Conclusion

Azure API Management (APIM) is an essential tool for enterprise customers looking to integrate AI models with external tools using the Model Context Protocol (MCP). In this blog, we've emphasized the simplicity of connecting AI agents to various data sources through MCP, streamlining previously complex implementations. Given the critical role of secure access to platforms and services for AI agents, APIM offers robust solutions for managing OAuth tokens and ensuring secure access to protected endpoints, making it an invaluable asset for enterprises, despite the challenges of authentication.

API Management: An Enterprise Solution for Securing MCP Servers

Azure API Management is designed to help you securely expose your remote MCP servers. MCP servers are still very new, and as the technology evolves, API Management provides an enterprise-ready solution that will evolve with the latest technology. Stay tuned for further feature announcements soon!

Acknowledgments

This post and the work behind it were made possible thanks to the hard work and dedication of our incredible team.
Special thanks to Pranami Jhawar, Julia Kasper, Julia Muiruri, Annaji Sharma Ganti, Jack Pa, Chaoyi Yuan, and Alex Vieira for their invaluable contributions.

Additional Resources

MCP Client Server integration with APIM as AI gateway:
- Blog Post: https://aka.ms/remote-mcp-apim-auth-blog
- Sequence Diagram: https://aka.ms/mcp-remote-apim-auth-diagram
- APIM lab: https://aka.ms/ai-gateway-lab-mcp-client-auth
- Python: https://aka.ms/mcp-remote-apim-auth
- .NET: https://aka.ms/mcp-remote-apim-auth-dotnet
- On-Behalf-Of Authorization: https://aka.ms/mcp-obo-sample

3rd Party APIs – Backend Auth via Credential Manager:
- Blog Post: https://aka.ms/remote-mcp-apim-lab-blog
- APIM lab: https://aka.ms/ai-gateway-lab-mcp
- YouTube Video: https://aka.ms/ai-gateway-lab-demo

Superfast using Web App and Managed Identity to invoke Function App triggers
TOC
1. Introduction
2. Setup
3. References

1. Introduction

Many enterprises prefer not to use App Keys to invoke Function App triggers, out of concern that these fixed strings might be exposed. This method lets you invoke Function App triggers using a managed identity for enhanced security. I provide examples in both Bash and Node.js.

2. Setup

Step 1: Create a Linux Python 3.11 Function App

1.1. Configure Authentication to block unauthenticated callers while allowing the Web App's managed identity to authenticate.

| Setting | Value |
| --- | --- |
| Identity provider | Microsoft |
| Choose a tenant for your application and its users | Workforce |
| App registration type | Create |
| Name | [automatically generated] |
| Client secret expiration | [fit your business purpose] |
| Supported account type | Any Microsoft Entra directory - Multi-tenant |
| Client application requirement | Allow requests from any application |
| Identity requirement | Allow requests from any identity |
| Tenant requirement | Use default restrictions based on issuer |
| Token store | [checked] |

1.2. Create an anonymous trigger. Since your app is already protected by the app registration, additional Function App-level protection is unnecessary; otherwise, you would also need a function key to invoke it.

1.3. Once the Function App is configured, try accessing the endpoint directly. You should receive a 401 Unauthorized error, confirming that triggers cannot be invoked without proper managed identity authorization.

1.4. After making these changes, wait 10 minutes for the settings to take effect.

Step 2: Create a Linux Node.js 20 Web App, then obtain an access token and invoke the Function App trigger (Bash example)

2.1. Enable the system-assigned managed identity in the Web App settings.

2.2. Open the Kudu SSH console for the Web App.

2.3. Run the following commands, making the necessary modifications:
- subscriptionsID: replace with your subscription ID.
- resourceGroupsID: replace with your resource group ID.
- application_id_uri: replace with the Application ID URI from your Function App's app registration.
- https://az-9640-myfa.azurewebsites.net/api/my_test_trigger: replace with the corresponding Function App trigger URL.

```bash
# Please set up the target resource to yours
subscriptionsID="01d39075-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
resourceGroupsID="XXXX"

# Variable setting (no need to change)
identityEndpoint="$IDENTITY_ENDPOINT"
identityHeader="$IDENTITY_HEADER"
application_id_uri="api://9c0012ad-XXXX-XXXX-XXXX-XXXXXXXXXXXX"

# Install necessary tool
apt install -y jq

# Get access token
tokenUri="${identityEndpoint}?resource=${application_id_uri}&api-version=2019-08-01"
accessToken=$(curl -s -H "Metadata: true" -H "X-IDENTITY-HEADER: $identityHeader" "$tokenUri" | jq -r '.access_token')
echo "Access Token: $accessToken"

# Run trigger
response=$(curl -s -o response.json -w "%{http_code}" -X GET "https://az-9640-myfa.azurewebsites.net/api/my_test_trigger" -H "Authorization: Bearer $accessToken")
echo "HTTP Status Code: $response"
echo "Response Body:"
cat response.json
```

2.4. If everything is set up correctly, you should see a successful invocation result.
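As an editorial aside, the same managed identity flow can also be reproduced from a .NET app with the Azure.Identity library. The following is a minimal C# sketch (not part of the original post), reusing the Application ID URI and trigger URL placeholders from the Bash example above; the scope for a custom API is the Application ID URI with /.default appended.

```csharp
// Minimal sketch: acquire a token with the app's managed identity via
// DefaultAzureCredential and invoke the protected Function App trigger.
// The Application ID URI and trigger URL are the placeholders used above.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;

class Program
{
    static async Task Main()
    {
        // Scope for a custom API: the Application ID URI plus "/.default".
        const string scope = "api://9c0012ad-XXXX-XXXX-XXXX-XXXXXXXXXXXX/.default"; // Change here

        // On an Azure Web App with a system-assigned identity enabled,
        // DefaultAzureCredential resolves to that managed identity.
        var credential = new DefaultAzureCredential();
        AccessToken token = await credential.GetTokenAsync(new TokenRequestContext(new[] { scope }));

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token.Token);

        var response = await http.GetAsync("https://az-9640-myfa.azurewebsites.net/api/my_test_trigger"); // Change here
        Console.WriteLine($"HTTP Status Code: {(int)response.StatusCode}");
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```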
Step 3: Invoke the Function App trigger using the Web App (Node.js example)

Here is a Node.js example as well, which you can modify accordingly. Save it to /home/site/wwwroot/callFunctionApp.js and run:

```bash
cd /home/site/wwwroot/
vi callFunctionApp.js
npm init -y
npm install @azure/identity axios
node callFunctionApp.js
```

```javascript
// callFunctionApp.js
const { DefaultAzureCredential } = require("@azure/identity");
const axios = require("axios");

async function callFunctionApp() {
  try {
    const applicationIdUri = "api://9c0012ad-XXXX-XXXX-XXXX-XXXXXXXXXXXX"; // Change here

    const credential = new DefaultAzureCredential();
    console.log("Requesting token...");
    const tokenResponse = await credential.getToken(applicationIdUri);
    if (!tokenResponse || !tokenResponse.token) {
      throw new Error("Failed to acquire access token");
    }
    const accessToken = tokenResponse.token;
    console.log("Token acquired:", accessToken);

    const apiUrl = "https://az-9640-myfa.azurewebsites.net/api/my_test_trigger"; // Change here
    console.log("Calling the API now...");
    const response = await axios.get(apiUrl, {
      headers: {
        Authorization: `Bearer ${accessToken}`,
      },
    });

    console.log("HTTP Status Code:", response.status);
    console.log("Response Body:", response.data);
  } catch (error) {
    console.error(
      "Failed to call the function",
      error.response ? error.response.data : error.message
    );
  }
}

callFunctionApp();
```

3. References

- Tutorial: Managed Identity to Invoke Azure Functions | Microsoft Learn
- How to Invoke Azure Function App with Managed Identity | by Krizzia 🤖 | Medium
- Configure Microsoft Entra authentication - Azure App Service | Microsoft Learn

Azure Functions Flex Consumption is now generally available
We are excited to announce that Azure Functions Flex Consumption is now generally available. This hosting plan provides the highest performance for Azure Functions, with concurrency-based scaling for both HTTP and non-HTTP triggers, scale from zero to 1000 instances, and no cold start with the Always Ready feature. Flex Consumption also allows you to enjoy seamless integration with your virtual network at no extra cost, ensuring secure and private communication with no considerable impact on your app's scale-out performance. Learn more about how to achieve high HTTP scale with Azure Functions Flex Consumption, the engineering innovation behind it, and project Legion, the platform behind Flex Consumption.

In addition to fast scaling based on per-instance concurrency, you can choose between 2048 MB and 4096 MB instance sizes. As the function app receives requests, it automatically scales from zero to as many instances of that size as needed, based on per-instance concurrency, and back to zero for cost efficiency when there are no more requests to process. You can also take advantage of the built-in integration with Azure Load Testing and the Performance Optimizer to optimize your HTTP functions for performance and cost.

Flex Consumption is now generally available for .NET 8 on the isolated worker model, Java 11, Java 17, Node 20, PowerShell 7.4, Python 3.10, and Python 3.11 in Australia East, East Asia, East US, North Europe, Southeast Asia, Sweden Central, UK South, and West US 2, and in preview in East US 2, South Central US, and West US 3. By December 9th, 2024, .NET 9 will also be generally available in Australia East, East Asia, East US, North Europe, Southeast Asia, Sweden Central, and UK South.

Besides the currently supported DevOps and dev tools, like VS Code, Java tooling, Azure Pipeline tasks, and GitHub Actions, you can now use the latest Visual Studio 2022 v17.12 update or newer to create and publish to Flex Consumption apps.

The Flex Consumption plan offers competitive pricing with flexible options to fit your needs, with GA pricing taking effect on December 1, 2024. For detailed pricing information, please refer to the pricing page.

Customer adoption and scenarios

We have been working with several internal and external customers during the public preview period, with hundreds of external customers actively using Flex Consumption.

"At Yggdrasil, we immediately started adopting Flex Consumption functions when they went into public preview, as they offer the combination of cost-efficiency, scalability, and security features we need to run our company. We already have 100 Flex Consumption functions running in production, and expect to move at least another 50 functions now that the product has reached GA. We migrated to Flex from Consumption to get VNet integration and private endpoints." – Andreas Strandfelt, Partner & Senior Cloud Specialist at Yggdrasil Commodities ApS

"What really matters to us is that the app scales up and down based on demand. Azure Functions Flex Consumption is very appealing to us because of how it dynamically scales based on the number of messages that are queued up in Azure Event Hubs." – Stephan Miehe, GitHub Senior Director (public case study)

"We had a need to process a large queue, representing a significant volume of data with inconsistent availability.
Azure Functions Flex Consumption dramatically simplified the code footprint needed to perform this embarrassingly parallel task and helped us complete it in a much shorter timeframe than we had expected." – Craig Presti, Office of the CTO, Microsoft AI project

Going Forward

In the upcoming months we look forward to rolling out even more features for Flex Consumption, including:

- Availability zones: enabling availability zones will be possible for new and existing Flex Consumption apps.
- 512 MB instance size: we will introduce a new, smaller instance size for more granular control.
- Enhanced tooling support: PowerShell modules and Terraform AzureRM support.
- New language versions: support for the latest language versions, like Node 22, Python 3.12, and Java 21.
- Expanded regional availability: the number of regions will continue to expand in early 2025, with UAE North, Central US, West US 3, South Central US, East US 2, West US, Canada Central, France Central, and Norway East coming first.
- Metrics support: full Azure Monitor metrics support for Flex Consumption apps.
- Deployment improvements: zero-downtime deployment to ensure no disruption to running executions.
- More triggers: Kafka and SQL triggers.
- Closing features: addressing the limitations identified in Considerations. Please let us know which ones are most important to you!

Get Started!

Explore our reference samples, quickstarts, and comprehensive documentation to get started with the Azure Functions Flex Consumption hosting plan today!

Announcing Public Preview of Azure Functions Flex Consumption
We're thrilled to announce the public preview of Azure Functions Flex Consumption, a new Linux-based hosting plan that offers the features in Consumption you have been waiting for: networking and fast scaling features on a serverless billing model!

Can't access HTTP context user claims in Azure Function
Background: We created an Azure Function (.NET Core & C#) that will be consumed in a SharePoint Online (SPO) app. We created an Entra app registration for the Azure Function and added app roles to it, where the app roles use the "Users/Groups" member type, not "Application".

Issue: In the SPO app (deployed on an SPO page), we can see the user claims and the app registration's app role in the context of the user hitting the SPO page (through the Authorization header). However, in the Azure Function code, the req.HttpContext.User.Claims collection is empty. So what is required or missing, from a configuration perspective, in either the Azure Function or the app registration, to make this work?
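A hedged diagnostic sketch (an editorial aside, not a confirmed answer): when a function sits behind App Service Authentication (EasyAuth), the authenticated caller's identity is also forwarded in the X-MS-CLIENT-PRINCIPAL request header as Base64-encoded JSON. Decoding that header shows whether the claims, including app roles, are reaching the function at all, even when HttpContext.User is not being populated. The helper name below is illustrative.

```csharp
// Diagnostic sketch: dump the claims that App Service Authentication (EasyAuth)
// forwards in the X-MS-CLIENT-PRINCIPAL header. Helper name is illustrative.
using System;
using System.Text;
using System.Text.Json;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

public static class ClaimsDiagnostics
{
    public static void DumpClientPrincipal(HttpRequest req, ILogger log)
    {
        // EasyAuth forwards the authenticated identity as Base64-encoded JSON.
        if (!req.Headers.TryGetValue("X-MS-CLIENT-PRINCIPAL", out var encoded))
        {
            log.LogWarning("No X-MS-CLIENT-PRINCIPAL header; the request was not authenticated by EasyAuth.");
            return;
        }

        var json = Encoding.UTF8.GetString(Convert.FromBase64String(encoded.ToString()));
        using var doc = JsonDocument.Parse(json);

        // The payload carries a "claims" array of { "typ": ..., "val": ... } entries,
        // which should include any app roles granted to the caller.
        foreach (var claim in doc.RootElement.GetProperty("claims").EnumerateArray())
        {
            log.LogInformation("{Type} = {Value}",
                claim.GetProperty("typ").GetString(),
                claim.GetProperty("val").GetString());
        }
    }
}
```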
Use managed identity instead of AzureWebJobsStorage to connect a function app to a storage account

In a function app, we usually use the app setting AzureWebJobsStorage to connect to storage. This blog shows you how to configure a function app to use Azure Active Directory identities instead of secrets or connection strings, where possible. Using identities helps you avoid accidentally leaking sensitive secrets and can provide better visibility into how data is accessed. This will not work if the storage account is in a sovereign cloud or has a custom DNS.

IMPORTANT! When running in a Consumption or Elastic Premium plan, your app uses the WEBSITE_AZUREFILESCONNECTIONSTRING and WEBSITE_CONTENTSHARE settings when connecting to Azure Files on the storage account used by your function app. Azure Files doesn't support using managed identity when accessing the file share. That is to say, if your function app runs on a Consumption or Elastic Premium plan, the only way to avoid using the file share is to delete the function app and recreate it on an App Service plan. For more information, see Azure Files supported authentication scenarios.

Below are the configuration steps.

1. Enable the system-assigned identity in your function app and save the change.

2. Give your function app access to the storage account: search for the Storage Blob Data Owner role and select it.

3. If you are configuring a blob-triggered function app, repeat step 2 to add the Storage Account Contributor and Storage Queue Data Contributor roles, which are used by the blob trigger.

4. Return to Access Control (IAM), click Role assignments, and search for your function app name to confirm the roles were added successfully.

5. Navigate to your function app, select Configuration, and edit AzureWebJobsStorage. Change the name to AzureWebJobsStorage__accountName and the value to your storage account name. (The new setting uses a double underscore (__), which is a special character in application settings.) The sketch after these steps shows how a trigger consumes the renamed setting.

6. Delete the previous AzureWebJobsStorage setting. Your function app should continue to work as before.
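To see how the renamed setting is consumed, here is a minimal sketch of a blob-triggered function, assuming the .NET isolated worker model and the Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs package (the container name is illustrative). The trigger keeps referencing the logical connection name AzureWebJobsStorage; because the app now defines AzureWebJobsStorage__accountName rather than a connection string, the runtime resolves that connection through the managed identity and the roles assigned above.

```csharp
// Minimal sketch (isolated worker model): the trigger references the logical
// connection name "AzureWebJobsStorage". With AzureWebJobsStorage__accountName
// set instead of a connection string, this connection is identity-based.
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class BlobWatcher
{
    private readonly ILogger<BlobWatcher> _logger;

    public BlobWatcher(ILogger<BlobWatcher> logger) => _logger = logger;

    [Function(nameof(BlobWatcher))]
    public void Run(
        [BlobTrigger("samples-workitems/{name}", Connection = "AzureWebJobsStorage")] string content,
        string name)
    {
        _logger.LogInformation("Blob {Name} processed, {Length} bytes.", name, content.Length);
    }
}
```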
Building Durable and Deterministic Multi-Agent Orchestrations with Durable Execution

Durable Execution

Durable execution is a reliable approach to running code, designed to handle failures smoothly with automatic retries and state persistence. It is built on three core principles:

- Incremental execution: each operation runs independently and in order.
- State persistence: the output of each step is durably saved to ensure progress is not lost.
- Fault tolerance: if a step fails, the operation is retried from the last successful step, skipping previously completed steps.

Durable execution is particularly beneficial for scenarios requiring stateful chaining of operations, such as order-processing applications, data processing pipelines, ETL (extract, transform, load), and, as we'll get into in this post, intelligent applications with AI agents. Durable execution simplifies the implementation of complex, long-running, stateful, and fault-tolerant application patterns, and technologies like Durable Functions provide a programming model that makes implementing them straightforward. Common stateful application patterns that require stateful chaining and are easily implemented with durable execution include function chaining, fan-out/fan-in, and human interaction.

Durable Task Programming Model

Before solutions like Azure Durable Functions, developers had to manually coordinate operations and maintain state using infrastructure like message queues and state stores, which added complexity to the code and increased the operational maintenance burden. Durable Functions streamlines this process by providing a programming model backed by a durable state store, enabling developers to define a series of steps to be executed in a specific order. This is called an orchestrator function. Activity functions within the orchestration are the "steps", and the durable task runtime ensures each step is scheduled in order and executed on your compute of choice, with outputs persisted.

Durable for Orchestrating Agents

With the rapid advancements in AI, we are witnessing an increasing trend of scenarios that require orchestration, specifically when working with multiple AI agents within applications. These agents often work together to accomplish a larger task. Two emerging designs for these applications are deterministic agentic workflows and self-directed agentic workflows:

- Deterministic agentic workflows: agents work together through a series of predefined steps to accomplish a larger task, leading to a deterministic result. The workflow orchestrates the predefined steps, each calling sub-agents, to achieve a deterministic outcome.
- Self-directed agentic workflows: agents dynamically explore and determine the workflow plan as they proceed.

Each approach fits different business scenarios and requirements. However, as we're learning, many scenarios benefit from deterministic outcomes, and durable execution truly shines in the deterministic agentic workflow pattern. It excels at providing efficient and reliable deterministic outcomes by following a predefined path that maps to orchestration and activity functions. The programming model makes it extremely easy to call your agents independently and to implement common agent app patterns, such as prompt chaining via function chaining and parallelization with fan-out/fan-in. For more on this, please see this insightful blog post by my colleague Chris Gillum. Self-directed agentic workflows are advantageous for unpredictable, creative tasks where the agents can determine their plan during execution.
However, this can be less efficient and lead to non-deterministic outcomes, which may cause undesirable results. Using durable execution for your agent orchestration also enhances the resiliency of your agentic workflows: if any step fails, there's no need to start from the beginning. Given that requests to LLMs can be expensive and may yield different outcomes, durable execution ensures that your orchestrations recover right from their last success point.

Let's look at a specific example where I used durable execution, specifically Azure Durable Functions, to implement a multi-agent application that requires durability: the Travel Planner Assistant.

The Travel Planner Assistant

Travel planning inherently follows a structured sequence: selecting destinations, crafting itineraries, gathering local insights, and booking the trip. This makes it ideal for an agentic workflow with predefined steps, rather than a self-directed agentic workflow with exploration. The outcome must be deterministic: a complete travel itinerary and a fully booked trip.

The application exposes a durable function that schedules a predefined agentic workflow (orchestration) to create a travel plan, which is then used to book the trip. The orchestration interacts with specialized sub-agents for the first three steps (a minimal orchestrator sketch follows the wrap-up below):

- Destination Recommender Agent: provides global knowledge across thousands of locations.
- Itinerary Planner Agent: creates a daily itinerary based on a deep understanding of the specific location's logistics and seasonal considerations.
- Local Recommendations Agent: offers popular attractions to visit.

The orchestration calls these AI agent activities sequentially; each activity is executed as a separate function with its own context. By using Durable Functions to coordinate these specialized agents, the travel planner agent creates a more accurate and comprehensive travel plan than a single generalist agent could.

Once the travel plan has been created, Durable Functions orchestrations provide built-in support for human interaction, allowing human approval of the travel plan before the trip is booked. This can be crucial in some scenarios because, despite the advancements in agents and LLMs, there are still critical tasks that require human input. Relying solely on LLM decision-making for such important tasks can be risky, and human approval ensures accuracy and reliability. Seeking this approval can be a long-running operation that may encounter failures along the way. However, by leveraging Durable Functions, the application benefits from resiliency through built-in state persistence, ensuring the orchestration can resume in the event of a failure, such as a downstream dependency outage or an application restart while waiting for approval.

Demo Video

Wrap up

For orchestrating agents, I recommend using durable execution technologies like Azure Durable Functions, as they offer determinism, reliability, and efficiency. The programming model simplifies the orchestration of agents, ensuring predictable outcomes, and it enhances the resiliency of agentic workflows, allowing them to recover seamlessly from their last successful point. For evidence of customers using Durable in real-world production applications, take a look at this Toyota case study, where Durable Functions orchestrates their multi-agent application exactly as outlined above.
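To make the deterministic pattern concrete, here is a minimal C# sketch of an orchestrator shaped like the travel planner described above, using the Durable Functions isolated worker model: sequential activity calls for the three sub-agents, then a durable wait for a human approval event before booking. The activity and event names are illustrative placeholders, not the sample app's actual code.

```csharp
// Minimal sketch (isolated worker model): a deterministic agent orchestration
// with sequential sub-agent activities and human approval before booking.
// Activity and event names are illustrative placeholders.
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.DurableTask;

public static class TravelPlannerOrchestration
{
    [Function(nameof(RunOrchestrator))]
    public static async Task<string> RunOrchestrator(
        [OrchestrationTrigger] TaskOrchestrationContext context)
    {
        var request = context.GetInput<string>();

        // Each step is durably checkpointed; on failure, replay resumes
        // from the last successful activity instead of starting over.
        var destination = await context.CallActivityAsync<string>("DestinationRecommenderAgent", request);
        var itinerary   = await context.CallActivityAsync<string>("ItineraryPlannerAgent", destination);
        var attractions = await context.CallActivityAsync<string>("LocalRecommendationsAgent", destination);

        // Human-in-the-loop: durably pause until an approval event arrives.
        bool approved = await context.WaitForExternalEvent<bool>("ApprovalEvent");
        if (!approved)
        {
            return "Travel plan rejected by the user.";
        }

        // Only after approval does the orchestration book the trip.
        return await context.CallActivityAsync<string>("BookTripAgent", itinerary + "\n" + attractions);
    }
}
```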
If you have any questions or thoughts about this, please feel free to comment below. I'd love to hear whether you find this interesting or are already using durable execution in your agent applications.

Announcing General Availability: Azure Logic Apps Standard Custom Code with .NET 8
We're excited to announce the general availability (GA) of custom code support in Azure Logic Apps Standard with .NET 8. This release marks a significant step forward in enabling developers to build more powerful, flexible, and maintainable integration workflows using familiar .NET tools and practices. With this capability, developers can now embed custom .NET 8 code directly within their Logic Apps Standard workflows. This unlocks advanced logic scenarios, promotes code reuse, and allows seamless integration with existing .NET libraries and services, making it easier than ever to build enterprise-grade solutions on Azure.

What's New in GA

This GA release introduces several key enhancements that improve the development experience and expand the capabilities of custom code in Logic Apps.

Bring Your Own Packages

Developers can now include and manage their own NuGet packages within custom code projects, without having to resolve conflicts with the dependencies used by the language worker host. The update adds the ability to load the custom code project's assembly dependencies into a separate assembly load context, allowing you to bring any .NET 8-compatible versions of the dependent assemblies your project needs. There are only three exceptions to this rule:

- Microsoft.Extensions.Logging.Abstractions
- Microsoft.Extensions.DependencyInjection.Abstractions
- Microsoft.Azure.Functions.Extensions.Workflows.Abstractions

Native Dependency Injection Support

Custom code now supports native dependency injection (DI), enabling better separation of concerns and more testable, maintainable code. This aligns with modern .NET development patterns and simplifies service management within your custom logic. To enable dependency injection, provide a StartupConfiguration class defining the list of dependencies:

```csharp
public class StartupConfiguration : IConfigureStartup
{
    /// <summary>
    /// Configures services for the Azure Functions application.
    /// </summary>
    /// <param name="services">The service collection to configure.</param>
    public void Configure(IServiceCollection services)
    {
        // Register the routing and discount services with dependency injection
        services.AddSingleton<IRoutingService, OrderRoutingService>();
        services.AddSingleton<IDiscountService, DiscountService>();
    }
}
```

You will also need to have these services injected through your custom code class constructor:

```csharp
public class MySampleFunction
{
    private readonly ILogger<MySampleFunction> logger;
    private readonly IRoutingService routingService;
    private readonly IDiscountService discountService;

    public MySampleFunction(ILoggerFactory loggerFactory, IRoutingService routingService, IDiscountService discountService)
    {
        this.logger = loggerFactory.CreateLogger<MySampleFunction>();
        this.routingService = routingService;
        this.discountService = discountService;
    }

    // your function logic here
}
```
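To illustrate how the injected services might be used, here is a hypothetical local function body that could replace the "// your function logic here" placeholder above. The [WorkflowActionTrigger] attribute (from the Microsoft.Azure.Functions.Extensions.Workflows packages listed earlier) marks the parameters of a method that a workflow can invoke through its local-function action; the service method names are invented for this sketch.

```csharp
// Hypothetical method to add inside the MySampleFunction class shown above.
// The workflow calls it by name through the designer's local function action.
// IRoutingService.GetRoute and IDiscountService.GetDiscount are invented here.
[FunctionName("CalculateOrderDiscount")]
public Task<decimal> CalculateOrderDiscount([WorkflowActionTrigger] string customerId, decimal orderTotal)
{
    this.logger.LogInformation("Calculating discount for customer {CustomerId}", customerId);

    // Both services were registered in StartupConfiguration and injected
    // through the constructor shown above.
    var route = this.routingService.GetRoute(customerId);
    var discount = this.discountService.GetDiscount(route, orderTotal);

    return Task.FromResult(discount);
}
```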
Improved Authoring Experience

The development experience has been significantly enhanced with improved tooling and templates. Whether you're using Visual Studio or Visual Studio Code, you'll benefit from streamlined scaffolding, local debugging, and deployment workflows that make building and managing custom code faster and more intuitive. The following user experience improvements were added:

- Local function metadata is kept between VS Code sessions, so you don't receive validation errors when editing workflows that depend on local functions.
- Projects are built when the designer starts, so you don't have to manually update references.
- New context menu gestures let you create new local functions or build your functions project directly from the explorer area.
- A unified debugging experience makes it easier to debug: there is now a single task for debugging custom code and Logic Apps, which makes starting a new debug session as easy as pressing F5.

Learn More

To get started with custom code in Azure Logic Apps Standard, visit the official Microsoft Learn documentation: Create and run custom code in Azure Logic Apps Standard

You can also find example code for dependency injection at wsilveiranz/CustomCode-Dependency-Injection.