Moving the Logic Apps Designer Forward
Today, we're excited to announce a major redesign of the Azure Logic Apps designer experience, now entering Public Preview for Standard workflows. While these improvements are currently Standard-only, our vision is to quickly extend them across all Logic Apps surfaces and SKUs.

⚠️ Important: As this is a Public Preview release, we recommend using these features for development and testing workflows rather than production workloads. We're actively stabilizing the experience based on your feedback.

This Is Just the Beginning

This is not us declaring victory and moving on. This is Phase I of a multi-phase journey, and I'm committed to sharing our progress through regular blog posts as we continue iterating. More importantly, we want to hear from you. Your feedback drives these improvements, and it will continue to shape what comes next. This redesign comes from listening to you—our customers—watching how you actually work, and adapting the designer to better fit your workflows. We've seen the pain points, heard the frustrations, and we're addressing them systematically.

Our Roadmap: Three Phases

Phase I: Perfecting the Development Loop (what we're releasing today). We're focused on making it cleaner and faster to edit your workflow, test it, and see the results. The development loop should feel effortless, not cumbersome.

Phase II: Reimagining the Canvas. Next, we'll rethink how the canvas works—introducing new shortcuts and workflows that make modifications easier and more intuitive.

Phase III: Unified Experiences Across All Surfaces. We'll ensure VS Code, Consumption, and Standard all deliver similarly powerful flows, regardless of where you're working.

Beyond these phases, we have several standalone improvements planned: a better search experience, streamlined connection creation and management, and removing unnecessary overhead when creating new workflows. We're also tackling fundamental questions that shouldn't be barriers: What do stateful and stateless mean?
Why can't you switch between them? Why do you have to decide upfront if something is an agent? You shouldn't. We're working toward making these decisions dynamic—something you can change directly in the designer as you build, not rigid choices you're locked into at creation time. We want to make it easier to add agentic capabilities to any workflow, whenever you need them.

What's New in Phase I

Let me walk you through what we're shipping at Ignite.

Faster Onboarding: Get to Building Sooner

We're removing friction from the very beginning. When you create a new workflow, you'll get to the designer before having to choose stateful, stateless, or agentic. Eventually, we want to eliminate that upfront choice entirely—making it a decision you can defer until after your workflow is created. This one still needs a bit more work, but it's coming soon.

One View to Rule Them All

We've removed the side panel. Workflows now exist in a single, unified view with all the tooling you need. No more context switching. You can easily hop between run history, code view, or the visual editor, and change your settings inline—all without leaving your workflow.

Draft Mode: Auto-Save Without the Risk

Here's one of our biggest changes: draft mode with auto-save. We know the best practice is to edit locally in VS Code, store workflows in GitHub, and deploy properly to keep editing separate from production. But we also know that's not always possible or practical for everyone. It sucks to get your workflow into the perfect state, then lose everything if something goes wrong before you hit save. Now your workflow auto-saves every 10 seconds in draft mode. If you refresh the window, you're right back where you were—but your changes aren't live in production. There's now a separate Publish action that promotes your draft to production.
This means you can work, test your workflow against the draft using the designer tools, verify everything works, and then publish to production—even when editing directly on the resource. Another benefit: draft saves won't restart your app. Your app keeps running. Restarts only happen when you publish.

Smarter, Faster Search

We've reorganized how browsing works—no more getting dropped into an endless list of connectors. You now get proper guidance as you explore and can search directly for what you need. Even better, we're moving search to the backend in the coming weeks, which will eliminate the need to download information about thousands of connectors upfront and deliver instant results. Our goal: no search should ever feel slow.

Document Your Workflows with Notes

You can now add sticky notes anywhere in your workflow. Drop a post-it note, add markdown (yes, even YouTube videos), and document your logic right on the canvas. We have plans to improve this with node anchoring and better stability features, but for now, you can visualize and explain your workflows more clearly than ever.

Unified Monitoring and Run History

Making the development loop smoother means keeping everything in one place. Your run history now lives on the same page as your designer. Switch between runs without waiting for full blade reloads. We've also added the ability to view both draft and published runs—a powerful feature that lets you test and validate your changes before they go live. We know there's a balance between developer and operator personas. Developers need quick iteration and testing capabilities, while operators need reliable monitoring and production visibility. This unified view serves both: developers can test draft runs and iterate quickly, while the clear separation between draft and published runs ensures operators maintain full visibility into what's actually running in production.
New Timeline View for Better Debugging

We experimented with a timeline concept in Agentic Apps to explain handoff—Logic Apps' first foray into cyclic graphs. But it was confusing and didn't work well with other Logic App types. We've refined it. On the left-hand side, you'll now see a hierarchical view of every action your Logic App ran, in execution order. This makes navigation and debugging dramatically easier when you're trying to understand exactly what happened during a run.

What's Next

This is Phase I. We're shipping these improvements, but we're not stopping here. As we move into Phase II and beyond, I'll continue sharing updates through blog posts like this one.

How to Share Your Feedback

We're actively listening and want to hear from you:
- Use the feedback button in the Azure Portal designer
- Join the discussion on GitHub: https://github.com/Azure/LogicAppsUX
- Comment below with your thoughts and suggestions

Your input directly shapes our roadmap and priorities. Keep the feedback coming. It's what drives these changes, and it's what will shape the future of Azure Logic Apps. Let's build something great together.

Typical Storage access issues troubleshooting
We get a large number of cases where the Storage Account connection fails, and we often see that customers are not aware of the troubleshooting steps they can take to accelerate resolution. As such, we've compiled some scenarios and the usual troubleshooting steps we ask you to take. Always remember that if you have made changes to your infrastructure, consider rolling them back to ensure they are not the root cause. Even a small change that apparently has no effect may cause downtime on your application.

Common messages

The errors shown in the portal when the Storage Account connectivity is down are very similar, and they may not correctly indicate the cause.

Error messages that surface in the portal for Logic Apps Standard:
- System.Private.CoreLib: Access to the path 'C:\home\site\wwwroot\host.json' is denied
- Cannot reach host runtime. Error details, Code: 'BadRequest', Message: 'Encountered an error (InternalServerError) from host runtime.'
- System.Private.CoreLib: The format of the specified network name is invalid. : 'C:\\home\\site\\wwwroot\\host.json'
- System.Private.CoreLib: The user name or password is incorrect. : 'C:\home\site\wwwroot\host.json'
- Microsoft.Windows.Azure.ResourceStack: The SSL connection could not be established, see inner exception.
- System.Net.Http: The SSL connection could not be established, see inner exception.
- System.Net.Security: Authentication failed because the remote party has closed the transport stream
- Unexpected error occurred while loading workflow content and artifacts

The errors don't really indicate what the root cause is, but a broken connection with the Storage Account is very commonly behind them.

What to verify?
There are four major components to verify in these cases:
- Logic App environment variables and network settings
- Storage Account networking settings
- Network settings
- DNS settings

Logic App environment variables and Network

From an App Settings point of view, there is not much to verify, but these are important steps that are sometimes overlooked. At this time, all or nearly all Logic Apps have been migrated to the dotnet Functions_Worker_Runtime (under the Environment Variables tab), but this is good to confirm. It's also good to confirm that your Platform setting is set to 64 Bit (under Configuration tab > General settings). We've seen some deployments using old templates that set this to 32 Bit, which doesn't make full use of the available resources.

Check if the Logic App has one of the following environment variables set:
- WEBSITE_CONTENTOVERVNET set to 1, or
- WEBSITE_VNET_ROUTE_ALL set to 1, or
- vnetRouteAllEnabled set to 1.

See: Configure virtual network integration with application and configuration routing - Azure App Service | Microsoft Learn

These settings can also be replaced with the UI setting in the Virtual Network tab, when you select "Content Storage" in the Configuration routing. For better understanding, vnetContentShareEnabled takes precedence: if it is set (true/false), WEBSITE_CONTENTOVERVNET is ignored; only if vnetContentShareEnabled is null is WEBSITE_CONTENTOVERVNET taken into account. Also keep this in mind: Storage considerations for Azure Functions | Microsoft Learn

WEBSITE_CONTENTAZUREFILECONNECTIONSTRING and AzureWebJobsStorage must contain the connection string as shown in the Storage Account:
- WEBSITE_CONTENTAZUREFILECONNECTIONSTRING | App settings reference for Azure Functions | Microsoft Learn
- AzureWebJobsStorage | App settings reference for Azure Functions | Microsoft Learn

WEBSITE_CONTENTSHARE must contain the file share name:
- WEBSITE_CONTENTSHARE | App settings reference for Azure Functions | Microsoft Learn

These are the first points to validate.
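The precedence rule described above can be sketched as a small illustrative function. This is not product code; the parameter names simply mirror the site property (vnetContentShareEnabled) and the app setting (WEBSITE_CONTENTOVERVNET):

```python
# Illustrative sketch of the setting precedence described above:
# vnetContentShareEnabled (site property) wins whenever it is set;
# only when it is null does WEBSITE_CONTENTOVERVNET (app setting) apply.
from typing import Optional


def content_over_vnet(vnet_content_share_enabled: Optional[bool],
                      website_contentovervnet: Optional[str]) -> bool:
    """Resolve whether content-share traffic is routed over the VNet."""
    if vnet_content_share_enabled is not None:
        return vnet_content_share_enabled
    return website_contentovervnet == "1"
```

For example, setting vnetContentShareEnabled to false makes the app ignore WEBSITE_CONTENTOVERVNET = 1 entirely.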
Storage Account settings

If all of these are properly configured and the Logic App is still in error, we move to the next step: validating the Storage Account network settings. When the Storage Account does not have VNet integration enabled, there should be no issues, because the connection is made through the public endpoints. Even so, you must ensure that at least "Allow storage account key access" is enabled, because at this time the Logic App depends on the access key to connect to the Storage Account. Although you can set AzureWebJobsStorage to use Managed Identity, you can't fully disable storage account key access for Standard logic apps that use the Workflow Service Plan hosting option. With the ASE v3 hosting option, however, you can disable storage account key access after you finish the steps to set up managed identity authentication. See: Create example Standard workflow in Azure portal - Azure Logic Apps | Microsoft Learn

If this setting is enabled, you must check whether the Storage Account is behind a firewall. Access may be enabled for selected networks or fully disabled; both options require Service Endpoints or Private Endpoints to be configured. See: Deploying Standard Logic App to Storage Account behind Firewall using Service or Private Endpoints | Microsoft Community Hub

So check the Networking tab under the Storage Account and confirm the following: if you select the "selected networks" option, confirm that the VNet is the same one the Logic App is integrated with. Your Logic App and Storage may be hosted in different VNets, but you must ensure there is full connectivity between them: they must be peered, with HTTPS and SMB traffic allowed (explained further in the Network section). You can select "Disabled" network access as well. You should also confirm that the file share is created.
Usually this is created automatically with the Logic App, but if you use Terraform or ARM, the file share may not be created and you must create it manually. Confirm that all four Private Endpoints are created and approved (File, Table, Queue, and Blob). All these resources are used for different components of the Logic App. This is not fully documented, as it is internal engine documentation and not publicly available. For Azure Functions, the runtime base, it is partially documented in: Storage considerations for Azure Functions | Microsoft Learn

If a Private Endpoint is missing, create it and link it to the VNet as a shared resource. Not having all Private Endpoints created may result in runtime errors, connection errors, or trigger failures. For example, if a workflow does not generate its URL even though it saves correctly, the Table and Queue Private Endpoints may be missing, as we've seen many times with customers.

You can read a bit more about the integration of the Logic App with a firewall-secured Storage Account and the needed configuration in these articles:
- Secure traffic between Standard workflows and virtual networks - Azure Logic Apps | Microsoft Learn
- Deploy Standard logic apps to private storage accounts - Azure Logic Apps | Microsoft Learn

You can use the Kudu console (Advanced Tools tab) to further troubleshoot the connection with the Storage Account using some network troubleshooting commands. If the Kudu console is not available, we recommend using a VM in the same VNet as the Logic App to mimic the scenario.

- nslookup [hostname or IP] [DNS HOST IP]
- tcpping [hostname or IP]:[PORT]
- Test-NetConnection [hostname] -Port [PORT]

If you have a custom DNS, nslookup will not return results from your DNS unless you specify its IP address as a parameter. Instead, you can use the nameresolver command, which uses the VNet DNS settings to resolve the endpoint name:
nameresolver [endpoint hostname or IP address]

See: Networking Related Commands for Azure App Services | Microsoft Community Hub

VNet configuration

Configuring a Private Endpoint for the Logic App will not affect traffic to the Storage Account, because the Private Endpoint covers only inbound traffic. Storage communication is considered outbound traffic, since it's the Logic App that actively communicates with the Storage Account. See: Secure traffic between Standard workflows and virtual networks - Azure Logic Apps | Microsoft Learn

The link between these resources must not be interrupted. Keep in mind that the Logic App uses both HTTPS and SMB protocols to communicate with the Storage Account, meaning that traffic on ports 443 and 445 needs to be fully allowed in your VNet. If you have a Network Security Group associated with the Logic App subnet, you need to confirm that its rules allow this traffic; you may need to explicitly create rules for it.

| Source port | Destination port | Source | Destination | Protocol | Purpose |
|---|---|---|---|---|---|
| * | 443 | Subnet integrated with Standard logic app | Storage account | TCP | Storage account |
| * | 445 | Subnet integrated with Standard logic app | Storage account | TCP | Server Message Block (SMB) file share |

If you have forced routing to a Network Virtual Appliance (i.e., a firewall), you must also ensure that this resource is not filtering or blocking the traffic. TLS inspection must also be disabled in your firewall for Logic App traffic. In short, this is because the firewall replaces the certificate in the message, so the Logic App does not recognize the returned certificate, invalidating the message.
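If you prefer scripting the port checks from a jump-box VM, a rough Python equivalent of Test-NetConnection looks like the sketch below. The storage hostname is a placeholder; substitute your own storage endpoints:

```python
# Rough equivalent of Test-NetConnection for verifying that the Logic App
# subnet can reach the storage endpoints over HTTPS (443) and SMB (445).
import socket


def tcp_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, refusals, and timeouts
        return False


if __name__ == "__main__":
    # Placeholder hostname -- replace with your storage account's file endpoint.
    storage = "mystorageaccount.file.core.windows.net"
    for port in (443, 445):
        print(f"{storage}:{port} reachable: {tcp_check(storage, port)}")
```

Note that many networks block outbound port 445 by default, so a failure on 445 from outside Azure does not necessarily indicate a misconfiguration in your VNet.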
You can read more about TLS inspection here: Azure Firewall Premium features | Microsoft Learn

DNS

If you are using Azure DNS, this section should not apply, because all records are created automatically when you create the resources. If you're using a custom DNS, however, when you create an Azure resource (e.g., a Storage Private Endpoint), the IP address won't be registered in your DNS, so you must register it manually. You must ensure that all A records are created and maintained, keeping in mind that they need to point to the correct IP and name. If there are mismatches, you may see communication severed between the Logic App and other resources, such as the Storage Account. So double-check all DNS records and confirm that everything is in its proper state and place.

To make it even easier, with the help of my colleague Mohammed_Barqawi, this information is now translated into an easy-to-understand flowchart.

If you continue to have issues after verifying all these steps, I suggest you open a case with us so that we can validate what else may be happening: either a step may have been missed, or some other issue may be occurring.
Duplicate Detection in Logic App Trigger

Overview

In certain scenarios, a Logic App may process the same item multiple times. Common examples include:
- SharePoint polling: the trigger might fire twice because of subsequent edits. (Reference: Microsoft SharePoint Connector for Power Automate | Microsoft Learn)
- Dataverse webhook: this can occur when a field is updated by plugins.

Solution Approach

To address this issue, I implemented a solution using the Logic Apps REST API: Workflow Runs - List - REST API (Azure Logic Apps) | Microsoft Learn. The idea is simple: each time the Logic App is triggered, it searches all available runs within a specified time window for the same clientTrackingId.

Implementation Details

Step 1: Extract clientTrackingId. For the SharePoint trigger example, I used the file name as the clientTrackingId with the following expression:

@string(trigger()['outputs']['body']['value'][0]['{FilenameWithExtension}'])

Step 2: Pass the payload to the Duplicate Detector flow. The payload includes:
- clientTrackingId (the chosen identifier)
- Resource URI for the SharePoint flow
- Time window to check against

Duplicate Detector Flow Logic

The flow retrieves the run history and checks whether the number of runs is greater than 1. If so, the same file was processed before, and the HTTP response returns the status given by:

if(greater(length(body('HTTP-Get_History')['value']), 1), '400', '200')

Important Notes
- The Duplicate Detector flow must have the Logic Apps Standard Reader (Preview) permission on the resource group.
- The solution is built on Logic Apps Standard, but it can be adapted for Logic Apps Consumption or Power Automate.

GitHub link: logicappfiles/Logic Apps Duplicate Detector at main · mbarqawi/logicappfiles

🎉Announcing the General Availability of the XML Parse and Compose Actions in Azure Logic Apps
The XML Operations connector

We have recently added two actions to the XML Operations connector: Parse XML with schema and Compose XML with schema. With this addition, Logic Apps customers can now interact with the token picker at design time. The tokens are generated from the XML schema provided by the customer. As a result, the XML document and its contained properties can be easily accessed, created, and manipulated in the workflow.

XML Parse with schema

The Parse XML with schema action allows customers to parse XML data using an XSD file (an XML schema file). XSD files need to be uploaded to the Logic App schemas artifacts or an Integration Account. Once they have been uploaded, you need to enter your XML content, the source of the schema, and the name of the schema file. The XML content may either be provided inline or selected from previous operations in the workflow using the token picker. For instance, the following is a parsed XML:

XML Compose with schema

The Compose XML with schema action allows customers to generate XML data using an XSD file. XSD files need to be uploaded to the Logic App schemas artifacts or an Integration Account. Once they have been uploaded, select the XSD file and enter the JSON root element or elements of your input XML schema. The JSON input elements are dynamically generated based on the selected XML schema. For instance, the following is a composed file:

Learnings from the Transition from Public Preview to General Availability

Based on feedback received from multiple customers, we would love to share the following recommendations and considerations, which will help you maximize the reliability, flexibility, and internationalization capabilities of the XML Parse and Compose actions in Azure Logic Apps.

Handling Array Inputs in XML

Switch to array input mode when mapping arrays. By default, the Logic App Designer expects individual array items for XML elements with maxOccurs > 1.
If you want to assign an entire array token, use the array input mode icon in the Designer. This avoids unnecessary For Each loops and streamlines your workflow. For instance:
- Click "Switch to input entire array"
- Enter your array token.

Managing Non-UTF-8 Encoded XML

Leverage the encoding parameter in XML Compose. Customers can specify the desired character encoding (e.g., iso-2022-jp for Japanese). This controls both the .NET XML writer settings and the output encoding, allowing for broader internationalization support. Example configuration: use the XML writer settings property to set the encoding as needed.

Safe Transport of Binary and Non-UTF-8 Content

Utilize the Logic App content envelope. The XML Compose action outputs content in a safe envelope, enabling transport of binary and non-UTF-8 content within the UTF-8 JSON payload. Downstream actions (e.g., HTTP Request) can consume this envelope directly.

Content-Type Header Management

XML Compose now specifies the exact character set in the Content-Type header. This ensures downstream systems receive the correct encoding information. For example, application/xml; charset=iso-2022-jp will be set for Japanese-encoded XML.

Consuming XML Output in HTTP Actions

Reference the XML output property directly in HTTP actions. The envelope's content-type is promoted to the HTTP header, and the base64-encoded content is decoded and sent as the raw HTTP body. This preserves encoding and binary fidelity.

Documentation and External References

Consult the official documentation for advanced scenarios:
- Support non-Unicode character encoding in Azure Logic Apps
- Content-Type and Content-Encoding, for clarifying header usage. Do not confuse Content-Type with Content-Encoding: Content-Type specifies the character set (e.g., UTF-8, ISO-2022-JP), while Content-Encoding refers to compression (e.g., gzip).

Check this short video to learn more.

Clone a Consumption Logic App to a Standard Workflow
Azure Logic Apps now supports cloning existing Consumption Logic Apps into Standard workflows. This feature simplifies migration by allowing you to reuse your existing workflows without starting from scratch. It's designed to help organizations transition to the Standard model, which offers improved performance, flexibility, and advanced capabilities.

Why Does It Matter?

Migrating from Consumption to Standard has been a common challenge for teams looking to modernize their integration solutions. Standard workflows provide:
- Better performance through single-tenant architecture.
- Enhanced developer experience with local development and VS Code integration.
- Advanced features like built-in connectors, stateful/stateless workflows, and integration with Azure Functions.

This new cloning capability reduces friction, accelerates adoption, and ensures continuity for existing business processes. And with the announcement of Agent Loop support in Logic Apps Consumption, the ability to clone agentic workflows to a Logic Apps Standard application gives you a graduation strategy for your agents.

Key Highlights
- Direct cloning: convert an existing Consumption Logic App into a Standard workflow with minimal manual effort.
- Preserves workflow design: your triggers, actions, and configurations are carried over.
- Supports modernization: enables migration to a model that supports advanced features like custom connectors and private endpoints.
- Improved governance: Standard workflows allow better control over deployment and scaling.

How to Get Started
1. Navigate to your Consumption Logic App in the Azure portal.
2. Select Clone to Standard from the available options.
3. Choose the target Logic App (Standard) environment and configure settings.
4. Validate and deploy the cloned workflow.

Limitations
- Infrastructure settings aren't cloned (e.g., integration account configurations).
- Connections and credentials must be reconfigured after cloning.
- Secure parameters aren't copied; placeholders are created and need updating from Key Vault.
- Connectors default to shared even if built-in versions exist.
- Unsupported items include:
  - Integration account references
  - XML/flat file transformations
  - EDIFACT and X12 actions
  - Nested workflows
  - Azure Function calls

Learn More

To learn more about this new feature, check the official documentation on Microsoft Docs.

🎉Announcing the HL7 connector for Azure Logic Apps Standard and Hybrid (Public Preview)
We're excited to announce the Public Preview of the HL7 connector for Azure Logic Apps, empowering healthcare organizations to seamlessly integrate clinical applications and automate data exchange using industry-standard HL7 protocols.

Why HL7 Integration Matters

Healthcare organizations are complex ecosystems, with departments such as admissions, laboratories, nursing stations, and billing all generating and consuming critical patient data. Efficient, secure, and standardized data exchange is essential for delivering quality care and streamlining operations.

HL7 Connector: Key Capabilities
- Simplified healthcare application integration with HL7-specific adapters and schemas.
- Standardized clinical data interchange between diverse medical applications.
- Automated communication processes to minimize manual intervention.
- End-to-end business process support for scenarios like admissions, discharge, and transfer, using publisher/subscriber patterns.

Typical Scenarios
- End-to-end integration: publishers (e.g., Admission, Discharge and Transfer systems) send HL7 messages to subscribers (Hospital Information System, Pharmacy System). Azure Logic Apps processes, validates, reformats, and routes these messages, with acknowledgments sent back to publishers.
- Interrogative (query/response): line-of-business applications can query other systems (e.g., requesting lab results from the Hospital Information System), with Azure Logic Apps managing routing and responses.

HL7 Encode/Decode actions

The HL7 connector provides native parsing and serialization of HL7 messages. The Decode action receives an HL7 message in flat-file format and produces an XML file; the Encode action receives an XML message with a header and produces a flat file.

Decode
- Input: flat-file HL7 message.
- Output: decoded XML message content and XML header content.

Encode
- Input: XML message and header content.
- Output: encoded HL7 flat-file message.
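To make the flat-file format concrete, here is a minimal, hypothetical HL7 v2 ADT^A01 message of the kind the Decode action would accept. All segment values (facility names, IDs, patient data) are invented for illustration:

```
MSH|^~\&|ADT_SYSTEM|HOSPITAL|HIS|RECEIVER|20250101123000||ADT^A01|MSG00001|P|2.3
EVN|A01|20250101123000
PID|1||12345^^^HOSPITAL^MR||DOE^JOHN||19800101|M
PV1|1|I|WARD1^101^1
```

Each line is a segment (MSH, EVN, PID, PV1), with fields delimited by `|` and components by `^`; the Decode action turns this structure into XML according to the uploaded HL7 v2.X schema.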
Important Considerations
- HL7 message schemas: supports HL7 v2.X flat-file and XML schemas.
- Schema management: upload BizTalk schemas directly into Integration Accounts for seamless migration.
- Hybrid deployment: Integration Accounts connect to Azure.
- Message processing: Logic Apps currently supports single-message processing, with batching planned for future updates.
- Multipart handling: Logic Apps exposes header, body, and custom segments as first-class outputs, simplifying orchestration compared to BizTalk's multipart pipelines.
- Connector availability: HL7 is available for both hybrid and Logic Apps Standard.
- Integration Accounts: currently, schemas can only be uploaded to Integration Accounts.

Announcing MLLP Private Preview

We're pleased to announce that MLLP support is now available as a private preview. The MLLP adapter enables HL7 messaging over the Minimal Lower Layer Protocol (MLLP). MLLP is available only in hybrid deployments due to port restrictions in App Service. Customers interested in early access and providing feedback can sign up via this survey: Private bundle for MLLP request

Get Started

To learn more about the HL7 connector and how it can transform your healthcare integration workflows, visit our documentation: Integrate Healthcare Systems with HL7 in Standard Workflows | Microsoft Learn. A video explaining the features of the new HL7 connector is also available.

📢 Announcing Foundry Control Plane support for Logic Apps Agent Loop (Preview)
At Ignite 2025, Microsoft announced the Foundry Control Plane as a way to build, operate, and govern every agent from a single location. In line with this vision, Logic Apps Agent Loop implementations participate in this holistic governance experience. This ensures that Logic Apps customers are included in the broader Microsoft agent ecosystem, without trade-offs.

Agent governance isn't just a technical safeguard; it's the foundation for trust and scalability in the agentic era. As AI agents gain autonomy to make decisions and execute tasks, governance ensures they operate within ethical, secure, and compliant boundaries. Without clear guardrails, agents can introduce risks like unauthorized data access, regulatory violations, and costly errors that undermine business confidence. The Foundry Control Plane seeks to address all of these enterprise needs.

Public Preview Launch

No additional action is required to have your Agent Loop instances show up in the Foundry Control Plane; this support is built in, without any customer intervention. To access the Foundry Control Plane:
1. Navigate to this link and toggle on the new experience.
2. Once in the new experience, select a Foundry project, then click the Operate link in the upper right-hand corner.
3. Click Assets, select the appropriate Subscription, and choose Logic Apps Agent Loop from the Source dropdown.

You will now see your Agent Loop instances. For a selected row, you will also have access to additional capabilities such as stopping/starting the agent and deep linking to that particular instance.

Coming Soon

A capability we are actively working on is making Agent Loop telemetry available to the Foundry Control Plane as well. Note: the images below represent pre-release capabilities. A list of conversations will be displayed; for each conversation you will see important information like token usage (inputs/outputs).
To dive deeper into a specific conversation, click a specific trace instance to further explore that execution. Once you have clicked on a specific trace ID, you can see rich telemetry for that particular execution.

Additional Content

To see a demo of the Foundry Control Plane and Logic Apps in action, please check out the following video.

🔐Enabling API Key Authentication for Logic Apps MCP Servers
We previously announced support for building Logic Apps MCP servers. This capability comes in a couple of forms: via the wizard hosted in API Center, and through manual configuration inside an existing Logic App. When we initially shipped this feature, we offered two authentication options:
- OAuth2 (default)
- Anonymous (opt-in)

Based on customer feedback and the need for greater interoperability with non-Microsoft agent frameworks, we have now enabled API Key authentication. API Key authentication is part of the default security configuration, which means that when you use API Center (or AI Foundry) to generate an MCP server, we will set up OAuth and API Key by default.

How can I configure my authentication settings?

In our product documentation, we discuss the role of host.json, a key configuration file for enabling a Logic App as an MCP server. We have recently introduced an authentication node which includes a type property. This property accepts three valid values:
- OAuth2
- ApiKey
- Anonymous

By default, you won't see a type property in host.json, which means you have the default settings: OAuth2 and ApiKey. If you explicitly set one of these values, only that authentication scheme will be implemented.

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  },
  "extensions": {
    "workflow": {
      "McpServerEndpoints": {
        "enable": true,
        "authentication": {
          "type": "ApiKey"
        }
      }
    }
  }
}
```

Note: We will be adding a user experience for accessing key information, but for now you can access it by calling our backend APIs.

Pre-requisites

To call the related APIs, we need a Bearer token to authenticate our requests. A simple way to obtain this token is to log into the Azure Portal and open a Cloud Shell session.
Issue the following command:

```shell
az account get-access-token --resource https://management.azure.com/
```

You will see a response that includes an accessToken. Copy this value.

Retrieving the API Key

To obtain an API key, use your favorite REST client and call the following endpoint with the query parameter getApikey=true:

REST Endpoint: POST /listMcpServers

```
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{RGName}/providers/Microsoft.Web/sites/{LAName}/hostruntime/runtime/webhooks/workflow/api/management/listMcpServers?api-version=2021-02-01&getApikey=true
```

You should receive a response that includes all MCP servers for that Logic App, their related tools, and a value named X-API-Key. This is the key you can use to connect to your MCP server.

Note: The available API keys, both primary and secondary, apply to the entire Logic App, not to individual MCP servers.

If you prefer a CLI, you can issue this command instead (the URL is quoted so the shell does not interpret the ampersand):

```shell
az rest --method post --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{RGName}/providers/Microsoft.Web/sites/{LAName}/hostruntime/runtime/webhooks/workflow/api/management/listMcpServers?api-version=2021-02-01&getApikey=true"
```

Note: A request body is only required if you want additional control over the behavior of the API. For example, you can choose between the primary and secondary key with keyType, and control the expiry date with notAfter.

```json
{
  "keyType": "primary",               // Optional: "primary" or "secondary". Defaults to "primary" if not provided.
  "notAfter": "2026-09-04T10:04:24Z"  // Optional: UTC timestamp for API key expiry. Defaults to 24 hours if not provided.
}
```

Tools

As you may have noticed, one benefit of the listMcpServers API is that it also displays any preconfigured tools for an MCP server. This is a good way to discover what tools are available in the MCP server.
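If you are scripting this call, the management URL can be assembled programmatically. The sketch below is illustrative: only the URL shape and query parameters come from the endpoint above, while the helper name and its parameters are mine. POST the resulting URL with an `Authorization: Bearer <accessToken>` header using any HTTP client.

```python
from urllib.parse import urlencode

ARM = "https://management.azure.com"

def list_mcp_servers_url(subscription_id: str, resource_group: str,
                         logic_app: str, get_api_key: bool = True) -> str:
    """Build the listMcpServers management URL (hypothetical helper).

    Only the path and query parameters are taken from the article;
    everything else here is an illustrative convention.
    """
    path = (
        f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Web/sites/{logic_app}"
        "/hostruntime/runtime/webhooks/workflow/api/management/listMcpServers"
    )
    query = {"api-version": "2021-02-01"}
    if get_api_key:
        # getApikey=true asks the API to include the X-API-Key value.
        query["getApikey"] = "true"
    return f"{ARM}{path}?{urlencode(query)}"
```

For example, `list_mcp_servers_url("mySub", "myRG", "myApp")` produces the same URL as the REST endpoint shown above, with the placeholders filled in.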
Regenerating the API Key

To regenerate an API key, call a different endpoint:

Endpoint: POST /regenerateMcpServerAccessKey

```
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{RGName}/providers/Microsoft.Web/sites/{LAName}/hostruntime/runtime/webhooks/workflow/api/management/regenerateMcpServerAccessKey?api-version=2021-02-01
```

CLI: Alternatively, if you have the CLI installed, you can use this command:

```shell
az rest --method post --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{RGName}/providers/Microsoft.Web/sites/{LAName}/hostruntime/runtime/webhooks/workflow/api/management/regenerateMcpServerAccessKey?api-version=2021-02-01"
```

When making these calls, you must provide a keyType in the request body:

```json
{
  "keyType": "primary" // Required: "primary" or "secondary"
}
```

A successful call returns a 200 OK HTTP response. To retrieve the new key value, call the listMcpServers API discussed earlier in this article.

Usage

You can use API Key MCP server authentication from any MCP client that supports it. We recently introduced Agent Loop support for MCP servers, so we can use that here. With the MCP server URL and API key, we can configure a connection that calls the MCP server using API Key authentication.

Start by building a conversational agent (this also works with autonomous agents) and click Add an MCP server (preview). Add a new connection and select Key as your Authentication Type. From there, insert your MCP server URL and key. For the Key Header Name, provide a value of X-API-KEY. Save and run your agent.

Demo

To see a demo of how to set up API Key authentication, please see the following video.

🔐 Secure AI Agent Knowledge Retrieval - Introducing Security Filters in Agent Loop
Building secure, permission-aware AI agents with Agent Loop

We're excited to introduce a new capability in Azure Logic Apps that enables document-level authorization for Retrieval-Augmented Generation (RAG) workflows. With security filters, you can now ensure that agents only retrieve and respond with information users are authorized to view.

Why Security Trimming Matters

In RAG-enabled workflows, agents often retrieve knowledge from indexed documents. Without proper filtering, users may receive responses based on documents they shouldn't access. Security trimming ensures:

- Responses are contextually appropriate based on user permissions
- Sensitive data is protected
- AI interactions remain compliant and secure

The Challenge: Securing AI Agent Knowledge Bases

AI agents are transforming how organizations interact with their data, but they introduce a critical security challenge: how do you ensure an agent only retrieves and shares information the requesting user is permitted to see? Without proper security controls, an AI agent with access to a corporate knowledge base could inadvertently expose confidential documents, financial records, or sensitive HR information to unauthorized users. Traditional approaches required developers to:

- Manually implement complex security filters in every retrieval operation
- Maintain parallel permission systems alongside existing access controls
- Handle edge cases like nested group memberships and dynamic role changes
- Risk security vulnerabilities from custom code errors

The Solution: Agent Loop + AI Search with Native ACL Support

The Azure Logic Apps Agent Loop now integrates seamlessly with Azure AI Search's document-level access control capabilities, providing a secure-by-default approach to AI agent knowledge retrieval. This integration combines the conversational power of AI agents with enterprise-grade security enforcement.
How It Works: Two-Phase Security Architecture

Phase I: Permission-Aware Indexing

During the ingestion phase, you index your documents in Azure AI Search with a custom UserIds field that maps each document to the Object IDs of the users allowed to access it. Azure AI Search indexes documents along with their permission metadata natively:

- ADLS Gen2 Indexer (Pull Model): The enhanced indexer automatically retrieves ACL assignments from Azure Data Lake Storage containers and directories, computing effective permissions for each file
- Push API (Push Model): Developers can manually push documents with permission metadata (user IDs or group IDs) using the REST API or Azure SDKs

Pro Tip: Use group IDs instead of individual user IDs for easier management. When a user's role changes, you simply update their group membership rather than reindexing documents.

Phase II: Filtered Retrieval via Agent Loop

This is where the magic happens. In your Logic Apps workflow, you configure the Azure AI Search action so the agent automatically applies security filters during vector search.

For User-Based Filtering: Configure the agent to apply a filter condition during vector search:

```
UserIds/any(u: u eq '@{currentRequest()['headers']['X-MS-CLIENT-PRINCIPAL-ID']}')
```

This ensures that agents only generate responses from documents the user is permitted to access.
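The workflow expression above resolves to a plain OData filter string at runtime. If you are testing the same trimming outside a workflow, you can build the equivalent string yourself and pass it as the `filter` argument when querying the index. A minimal sketch, assuming an index with a UserIds collection field as described above (the helper name is mine):

```python
def user_security_filter(principal_id: str) -> str:
    """Build the user-based OData filter from the article.

    Single quotes are doubled per OData string-literal escaping;
    principal IDs are GUIDs, so this is mostly defensive.
    """
    escaped = principal_id.replace("'", "''")
    return f"UserIds/any(u: u eq '{escaped}')"

# Illustrative usage with the azure-search-documents SDK (not executed here):
#   results = search_client.search(search_text=query,
#                                  filter=user_security_filter(principal_id))
```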
This filter expression:

- Extracts the authenticated user's principal ID from the incoming request headers
- Applies it as a filter condition during the AI Search query
- Ensures only documents with matching user permissions are retrieved
- Happens automatically before results reach the LLM for response generation

For Group-Based Filtering: For more flexible permission management, developers can leverage group-based access control:

1. Extract the user's principal ID from the request headers
2. Query Microsoft Entra to retrieve the user's group memberships
3. Apply a filter using group IDs instead:

```
GroupIds/any(g: g in ('@{variables('userGroups')}'))
```

This approach provides significant advantages:

- Easier maintenance: Update group memberships without reindexing
- Hierarchical permissions: Support nested groups and organizational structures
- Role-based access: Align with existing RBAC patterns in your organization

The Complete Agent Loop Flow

1. The user sends a query to the AI agent through your application
2. The Logic Apps Agent Loop receives the request with the user's authentication token
3. A security filter is applied via the Azure AI Search action, using the user's principal ID or group memberships
4. Azure AI Search performs natural language or vector search and returns only authorized documents
5. The LLM generates a response grounded exclusively in the user's permitted data
6. The agent returns the answer with full confidence that no unauthorized information was accessed

Example: HR Knowledge Assistant

Imagine an HR AI agent built with Agent Loop that helps employees find information about benefits, policies, and procedures:

- Executive team members can ask about confidential compensation strategies and merger discussions
- People managers can inquire about performance review guidelines and team-specific policies
- All employees can access general benefits information and company-wide policies

With the Agent Loop + AI Search integration, the same AI agent serves all these user types securely—automatically
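The group-based expression above interpolates a workflow variable holding the user's group IDs. Outside a workflow, one common way to express "matches any of these group IDs" in an Azure AI Search OData filter is the search.in function; the sketch below builds that string, assuming a GroupIds collection field in the index (the helper name is mine, and this is an equivalent form rather than the exact expression the workflow emits):

```python
def group_security_filter(group_ids: list[str]) -> str:
    """Build a group-based trimming filter using search.in.

    search.in(field, 'v1,v2', ',') matches when the field value equals
    any entry in the comma-separated list; quotes are doubled per
    OData string-literal escaping.
    """
    joined = ",".join(g.replace("'", "''") for g in group_ids)
    return f"GroupIds/any(g: search.in(g, '{joined}', ','))"
```

As with the user-based filter, the resulting string would be passed as the `filter` argument of the AI Search query, after resolving the user's group memberships from Microsoft Entra.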
filtering knowledge retrieval based on each user's permissions. No separate agents, no custom code, no security gaps.

The Bottom Line

The integration of Agent Loop with Azure AI Search's ACL support transforms how organizations build secure AI agents. What once required complex custom security implementations now works through simple configuration in Logic Apps workflows. By combining conversational AI capabilities with document-level access control, this solution enables organizations to deploy AI agents that users can trust—knowing every response respects their permissions and organizational security policies. For developers, this means faster time-to-market for AI agent applications. For security teams, it means enforceable, auditable access controls. For end users, it means confident interaction with AI systems that understand boundaries.

Learn More

For a step-by-step guide on setting up security filters, indexing documents, and configuring your Logic App workflow, visit the full tutorial: Add security filters for agent knowledge trimming. For more information about document-level access control, refer to: https://learn.microsoft.com/en-us/azure/search/search-document-level-access-overview

🤖 Agent Loop Demos 🤖
We announced the public preview of Agent Loop at Build 2025. Agent Loop is a new feature in Logic Apps for building AI agents for use cases that span industry domains and patterns. Here are some resources to learn more about it:

- Logic Apps Labs - https://aka.ms/lalabs
- Agent in a Day workshop - https://aka.ms/la-agent-in-a-day

In this article, we share use cases implemented in Logic Apps using Agent Loop and other features.

This video shows a conversational agent that answers questions about health plans and their coverage. The demo features document ingestion of policy documents into AI Search, then retrieving them in Agent Loop using natural language.

This video shows an autonomous agent that generates sales reports. The demo features the Python Code Interpreter, which analyzes Excel data and aggregates it for the LLM to reason over.

This video shows a conversational agent that helps recruiters with candidate screening and interview scheduling. The demo features OBO (On-Behalf-Of), where the agent uses tools in the security context of the user.

This video shows a conversational agent for a utility company. The demo features multi-agent orchestration using handoff, plus a native chat client that supports multi-turn conversations and streaming and is secured via Entra for user authentication.

This video shows an autonomous Loan Approval Agent that handles auto loans for a bank. The demo features an AI agent that uses an Azure OpenAI model, the company's policies, and several tools to process loan applications. For edge cases, a human is involved via the Teams connector.

This video shows an autonomous Product Return Agent for the Fourth Coffee company. Returns are processed by the agent based on company policy and other criteria. Here too, a human is involved when decisions fall outside the agent's boundaries.

This video shows a commercial agent that grants credit for purchases of groceries and other products for Northwind Stores.
The agent extracts financial information from an IBM mainframe and an IBM i system to assess each requestor, and updates the internal Northwind systems with the approved customers' information. This is a multi-agent scenario that includes both codeful and declarative methods of implementation. Note: This is pre-release functionality and is subject to change. If you are interested in further discussing Logic Apps codeful agents, please fill out the following feedback form.

Operations Agent (part 1): In this conversational agent, we perform Logic Apps operations such as repair and resubmit to keep our integration platform healthy and processing transactions. For compliance, all operational activities are logged in ServiceNow.

Operations Agent (part 2): In this autonomous agent, we perform the same Logic Apps operations, repair and resubmit, to keep our integration platform healthy and processing transactions. For compliance, all operational activities are logged in ServiceNow.