Azure API Management: Your Auth Gateway For MCP Servers
The Model Context Protocol (MCP) is quickly becoming the standard for integrating Tools 🛠️ with Agents 🤖, and Azure API Management is at the forefront, ready to support this open-source protocol 🚀. You may have already encountered discussions about MCP, so let's clarify some key concepts:

Model Context Protocol (MCP) is a standardized way (a protocol) for AI models to interact with external tools, either to read data or perform actions, and to enrich context for ANY language model.
AI Agents/Assistants are autonomous LLM-powered applications with the ability to use tools to connect to external services required to accomplish tasks on behalf of users.
Tools are components made available to Agents, allowing them to interact with external systems, perform computation, and take actions to achieve specific goals.
Azure API Management is a platform-as-a-service that supports the complete API lifecycle, enabling organizations to create, publish, secure, and analyze APIs with built-in governance, security, analytics, and scalability.

New Cool Kid in Town - MCP

AI Agents are becoming widely adopted thanks to enhanced Large Language Model (LLM) capabilities. However, even the most advanced models face limitations due to their isolation from external data: each new data source requires a custom implementation to extract, prepare, and make data accessible to the model(s) - a lot of heavy lifting. Anthropic developed an open-source standard, the Model Context Protocol (MCP), to connect your agents to external data sources, whether local (databases or computer files) or remote (systems available over the internet, for example through APIs).

MCP Hosts: LLM applications, such as chat apps or AI assistants in your IDE (like GitHub Copilot in VS Code), that need to access external capabilities
MCP Clients: Protocol clients inside the host application that maintain 1:1 connections with servers
MCP Servers: Lightweight programs that each expose specific capabilities and provide context, tools, and prompts to clients
MCP Protocol: The transport layer in the middle

At its core, MCP follows a client-server architecture in which a host application can connect to multiple servers. Whenever your MCP host or client needs a tool, it connects to the MCP server, which in turn connects to, for example, a database or an API. MCP hosts and servers communicate with each other through the MCP protocol. You can create your own custom MCP servers that connect to your own or organizational data sources. For a quick start, please visit our GitHub repository to learn how to build a remote MCP server using Azure Functions without authentication: https://aka.ms/mcp-remote

Remote vs. Local MCP Servers

The MCP standard supports two modes of operation:

Remote MCP servers: MCP clients connect to MCP servers over the Internet, establishing a connection using HTTP and Server-Sent Events (SSE), and authorizing the MCP client's access to resources on the user's account using OAuth.
Local MCP servers: MCP clients connect to MCP servers on the same machine, using stdio as the local transport.

Azure API Management as the AI Auth Gateway

Now that we know MCP servers can connect to remote services through APIs, the question arises: how can we expose our remote MCP servers in a secure and scalable way? This is where Azure API Management comes in - it gives us a way to securely and safely expose tools as MCP servers.
Azure API Management provides:

Security: AI agents often need to access sensitive data. API Management, acting as a remote MCP proxy, safeguards organizational data through authentication and authorization.
Scalability: As the number of LLM interactions and external tool integrations grows, API Management ensures the system can handle the load.

Security remains a critical piece of building MCP servers, as agents need to securely connect to protected endpoints (tools) to perform certain actions or read protected data. When building remote MCP servers, you need a way to allow users to log in (authentication) and to let them grant the MCP client access to resources on their account (authorization).

MCP - Current Authorization Challenges (state: 4/10/2025)

Recent changes in MCP authorization have sparked significant debate within the community.

🔍 Key challenges with the authorization changes: The MCP server is now treated as both a resource server AND an authorization server. This dual role has fundamental implications for MCP server developers and runtime operations.

💡 Our solution: To address these challenges, we recommend using Azure API Management as your authorization gateway for remote MCP servers.

🔗 For an enterprise-ready solution, please check out our azd up sample repo to learn how to build a remote MCP server using Azure API Management as your authentication gateway: https://aka.ms/mcp-remote-apim-auth

The Authorization Flow

The workflow involves three core components: the MCP client, the APIM gateway, and the MCP server, with Microsoft Entra managing authentication (AuthN) and authorization (AuthZ). Using the OAuth protocol, the client starts by calling the APIM gateway, which redirects the user to Entra for login and consent. Once authenticated, Entra provides an access token to the gateway, which then exchanges a code with the client to generate an MCP server token. This token allows the client to communicate securely with the server via the gateway, ensuring user validation and scope verification. Finally, the MCP server establishes a session key for ongoing communication through a dedicated message endpoint. Diagram source: https://aka.ms/mcp-remote-apim-auth-diagram
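To make the client-to-gateway hop concrete, here is a minimal sketch of what a client call looks like once the flow above has completed. The gateway hostname, path, and token variable are hypothetical placeholders; the actual endpoints and policies are wired up by the sample repo linked above.

# Hypothetical gateway hostname and path; the OAuth flow above yields the token.
MCP_SERVER_TOKEN="<token issued via the APIM/Entra flow>"

# Open the SSE channel through the APIM gateway; the gateway validates the token
# and scopes before proxying the stream to the MCP server backend.
curl --no-buffer \
  -H "Authorization: Bearer $MCP_SERVER_TOKEN" \
  -H "Accept: text/event-stream" \
  "https://contoso-apim.azure-api.net/mcp/sse"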
Conclusion

Azure API Management (APIM) is an essential tool for enterprise customers looking to integrate AI models with external tools using the Model Context Protocol (MCP). In this blog, we've emphasized how MCP simplifies connecting AI agents to various data sources, streamlining previously complex implementations. Given the critical role of secure access to platforms and services for AI agents, APIM offers robust solutions for managing OAuth tokens and ensuring secure access to protected endpoints, making it an invaluable asset for enterprises despite the challenges of authentication.

API Management: An Enterprise Solution for Securing MCP Servers

Azure API Management is designed to help you securely expose your remote MCP servers. MCP servers are still very new, and as the technology evolves, API Management provides an enterprise-ready solution that will evolve with the latest technology. Stay tuned for further feature announcements soon!

Acknowledgments

This post and work were made possible thanks to the hard work and dedication of our incredible team. Special thanks to Pranami Jhawar, Julia Kasper, Julia Muiruri, Annaji Sharma Ganti, Jack Pa, Chaoyi Yuan and Alex Vieira for their invaluable contributions.

Additional Resources

MCP Client Server integration with APIM as AI gateway:
Blog Post: https://aka.ms/remote-mcp-apim-auth-blog
Sequence Diagram: https://aka.ms/mcp-remote-apim-auth-diagram
APIM lab: https://aka.ms/ai-gateway-lab-mcp-client-auth
Python: https://aka.ms/mcp-remote-apim-auth
.NET: https://aka.ms/mcp-remote-apim-auth-dotnet
On-Behalf-Of Authorization: https://aka.ms/mcp-obo-sample

3rd Party APIs - Backend Auth via Credential Manager:
Blog Post: https://aka.ms/remote-mcp-apim-lab-blog
APIM lab: https://aka.ms/ai-gateway-lab-mcp
YouTube Video: https://aka.ms/ai-gateway-lab-demo

🎉 Announcing General Availability of AI & RAG Connectors in Logic Apps (Standard)
We're excited to share that a comprehensive set of AI and Retrieval-Augmented Generation (RAG) capabilities is now Generally Available in Azure Logic Apps (Standard). This release brings native support for document processing, semantic retrieval, embeddings, and grounded reasoning directly into the Logic Apps workflow engine.

🔌 Available AI Connectors in Logic Apps Standard

Logic Apps (Standard) had previously previewed four AI-focused connectors that open the door to a new generation of intelligent automation across the enterprise. Whether you're processing large volumes of documents, enriching operational data with intelligence, or enabling employees to interact with systems using natural language, these connectors provide the foundation for building solutions that are smarter, faster, and more adaptable to business needs. These connectors are now GA. They allow teams to move from routine workflow automation to AI-assisted decisioning, contextual responses, and multi-step orchestration that reflects real business intent. Below is the full set of built-in connectors and their actions as they appear in the designer.

1. Azure OpenAI
Actions: Get an embedding; Get chat completions; Get chat completions using Prompt Template; Get completion; Get multiple chat completions; Get multiple embeddings
What this unlocks: Bring natural language reasoning and structured AI responses directly into workflows. Common scenarios include guided decisioning, user-facing assistants, classification and routing, or preparing embeddings for semantic search and RAG workflows.

2. Azure AI Search
Actions: Delete a document; Delete multiple documents; Get agentic retrieval output (Preview); Index a document; Index multiple documents; Merge document; Search vectors; Search vectors with natural language
What this unlocks: Add vector, hybrid semantic, and natural language search directly to workflow logic. Ideal for retrieving relevant content from enterprise data, powering search-driven workflows, and grounding AI responses with context from your own documents.

3. Azure AI Document Intelligence
Action: Analyze document
What this unlocks: Document Intelligence serves as the entry point for document-heavy scenarios. It extracts structured information from PDFs, images, and forms, allowing workflows to validate documents, trigger downstream processes, or feed high-quality data into search and embeddings pipelines.

4. AI Operations
Actions: Chunk text with metadata; Parse document with metadata
What this unlocks: Transform unstructured files into enriched, structured content. Enables token-aware chunking, page-level metadata, and clean preparation of content for embeddings and semantic search at scale.

🤖 Advanced AI & Agentic Workflows with Agent Loop

Logic Apps (Standard) also supports Agent Loop (also Generally Available), allowing AI models to use workflow actions as tools and iterate until the task is complete. Combined with chunking, embeddings, and natural language search, this opens the door to advanced agentic scenarios such as document intelligence agents, RAG-based assistants, and iterative evaluators.

Conclusion

With these capabilities now built into Logic Apps Standard, teams can bring AI directly into their integration workflows without additional infrastructure or complexity. Whether you're streamlining document-heavy processes, enabling richer search experiences, or exploring more advanced agentic patterns, these capabilities provide a strong foundation to start building today.

Logic Apps Aviators Newsletter - December 2025
In this issue:
Ace Aviator of the Month
News from our product group
Community Playbook
News from our community

Ace Aviator of the Month

December's Ace Aviator: Daniel Jonathan

What's your role and title? What are your responsibilities?
I'm an Azure Integration Architect at Cnext, helping organizations modernize and migrate their integrations to Azure Integration Services. I design and build solutions using Logic Apps, Azure Functions, Service Bus, and API Management. I also work on AI solutions using Semantic Kernel and LangChain to bring intelligence into business processes.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?
My day usually begins by attending to customer requests and handling recent deployments. Most of my time goes into designing integration patterns, building Logic Apps, mentoring the team, and helping customers with technical solutions. Lately, I've also been integrating AI capabilities into workflows.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?
The community is open, friendly, and full of knowledge. I enjoy sharing ideas, writing posts, and helping others solve real-world challenges. It's great to learn and grow together.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?
Start small and stay consistent. Learn the basics well - like messaging, retries, and error handling - before diving into complex tools. Keep learning and share what you know.

What has helped you grow professionally?
Hands-on experience, teamwork, and continuous learning. Working across different projects taught me how to design reliable and scalable systems. Exploring AI with Semantic Kernel and LangChain has also helped me think beyond traditional integrations.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
I'd add an "Overview Page" in Logic Apps containing the HTTP URLs for each workflow, so developers can quickly access and test them from one place. It would save time and make working with multiple workflows much easier.

News from our product group

Logic Apps Community Day 2025 Playlist
Did you miss or want to catch up on individual sessions from Logic Apps Community Day 2025? Here is the full playlist - choose your favorite sessions and have fun!

The future of integration is here and it's agentic
Missed Kent Weare and Divya Swarnkar's session at Ignite? It is here for you to watch on demand. Enterprise integration is being reimagined. It's no longer just about connecting systems, but about enabling adaptive, agentic workflows that unify apps, data, and systems. In this session, discover how to modernize integration, migrate from BizTalk, and adopt AI-driven patterns that deliver agility and intelligence. Through customer stories and live demos, see how to bring these workflows to life with Agent Loop in Azure Logic Apps.

Public Preview: Azure Logic Apps Connectors as MCP Tools in Microsoft Foundry
Unlock secure enterprise connectivity with Azure Logic Apps connectors as MCP tools in Microsoft Foundry. Agents can now use hundreds of connectors natively - no custom code required. Learn how to configure and register MCP servers for seamless integration.

Announcing AI Foundry Agent Service Connector v2 (Preview)
AI Foundry Agent Service Connector v2 (Preview) is here!
Azure Logic Apps can now securely invoke Foundry agents, enabling low-code AI integration, multi-agent workflows, and faster time-to-value. Explore new operations for orchestration and monitoring.

Announcing the General Availability of the XML Parse and Compose Actions in Azure Logic Apps
XML Parse and Compose actions are now GA in Azure Logic Apps! Easily handle XML with XSD schemas, streamline workflows, and support internationalization. Learn best practices for arrays, encoding, and safe transport of content.

Clone a Consumption Logic App to a Standard Workflow
Clone your Consumption Logic Apps into Standard workflows with ease! This new feature accelerates migration, preserves design, and unlocks advanced capabilities for modern integration solutions.

Announcing the HL7 connector for Azure Logic Apps Standard and Hybrid (Public Preview)
Connect healthcare systems effortlessly! The new HL7 connector for Azure Logic Apps (Standard & Hybrid) enables secure, standardized data exchange and automation using HL7 protocols - now in Public Preview.

Announcing Foundry Control Plane support for Logic Apps Agent Loop (Preview)
Foundry Control Plane now supports Logic Apps Agent Loop (Preview)! Manage, govern, and observe agents at scale with built-in integration - no extra steps required. Ensure trust, compliance, and scalability in the agentic era.

Announcing General Availability of Agent Loop in Azure Logic Apps
Agent Loop transforms Logic Apps into a multi-agent automation platform, enabling AI agents to collaborate with workflows and humans. Build secure, enterprise-ready agentic solutions for business automation at scale.

Agent Loop Ignite Update - New Set of AI Features Arrive in Public Preview
We are releasing a broad set of new and powerful AI-first Agent Loop capabilities in Public Preview that dramatically expand what developers can build: run agents in the Consumption SKU, bring your own models through APIM AI Gateway, call any tool through MCP, deploy agents directly into Teams, secure RAG with document-level permissions, onboard with Okta, and build in a completely redesigned workflow designer.

Announcing MCP Server Support for Logic Apps Agent Loop
Agent Loop in Azure Logic Apps now supports the Model Context Protocol (MCP), enabling secure, standardized tool integration. Bring your own MCP connector, use Azure-managed servers, or build custom connectors for enterprise workflows.

Enabling API Key Authentication for Logic Apps MCP Servers
Logic Apps MCP Servers now support API key authentication alongside OAuth2 and Anonymous options. Configure keys via host.json or Azure APIs, retrieve and regenerate keys easily, and connect MCP clients securely for agentic workflows.

Announcing Public Preview of Agent Loop in Azure Logic Apps Consumption
Agent Loop now brings AI-powered automation to Logic Apps Consumption with a frictionless, pay-as-you-go model. Build autonomous and conversational agents using 1,400+ connectors - no dedicated infrastructure required. Ideal for rapid prototyping and enterprise workflows.

Moving the Logic Apps Designer Forward
Major redesign of the Azure Logic Apps designer enters Public Preview for Standard workflows. Phase I focuses on faster onboarding, unified views, draft mode with auto-save, improved search, and enhanced debugging. Feedback will shape future phases for a seamless development experience.
Announcing the General Availability of the RabbitMQ Connector
The RabbitMQ Connector for Azure Logic Apps is now generally available, enabling reliable message exchange for Standard and Hybrid workflows. It supports triggers, publishing, and advanced routing, with global rollout underway for robust, scalable integration scenarios.

Duplicate Detection in Logic App Trigger
Prevent duplicate processing in Logic Apps triggers with a REST API-based solution. It checks recent runs using clientTrackingId to avoid reprocessing items caused by edits or webhook updates. Works with Logic Apps Standard and is adaptable for Consumption or Power Automate.

Announcing the BizTalk Server 2020 Cumulative Update 7
BizTalk Server 2020 Cumulative Update 7 is out, adding support for Visual Studio 2022, Windows Server 2022, SQL Server 2022, and Windows 11. It includes all prior fixes. Upgrade from older versions or consider migrating to Azure Logic Apps for modernization.

News from our community

Logic Apps Local Development Series
Post by Daniel Jonathan
Last month I shared an article from Daniel about debugging XSLT in VS Code. This month, I bumped into not one, but five articles in a series about Build, Test and Run Logic Apps Standard locally - definitely worth the read!

Working with sessions in Agentic Workflows
Post by Simon Stender
Build AI-powered chat experiences with session-based agentic workflows in Azure Logic Apps. Learn how they enable dynamic, stateful interactions, integrate with APIs and apps, and avoid common pitfalls like workflows stuck in "running" forever.

Integration Love Story with Mimmi Gullberg
Video by Ahmed Bayoumy and Robin Wilde
Meet Mimmi Gullberg, Green Cargo's new integration architect driving smarter, sustainable rail logistics. With experience from BizTalk to Azure, she blends tech and business insight to create real value. Her mantra: understand the problem first, then choose the right tools - Logic Apps, Functions, or AI.

Integration Love Story with Jenny Andersson
Video by Ahmed Bayoumy and Robin Wilde
Discover Jenny Andersson's inspiring journey from skepticism to creativity in tech. In this episode, she shares insights on life as an integration architect, tackling system challenges, listening to customers, and how AI is shaping the future of integration.

You Can Get an XPath value in Logic Apps without returning an array
Post by Luis Rigueira
Working with XML in Azure Logic Apps? The xpath() function always returns an array - even for a single node. Or does it? Find out how to return just the values you want in this Friday Fact from Luis Rigueira.

Set up Azure Standard Logic App Connectors as MCP Server
Video by Srikanth Gunnala
Expose your Azure Logic Apps integrations as secure tools for AI assistants. Learn how to turn connectors like SAP, SQL, and Jira into MCP tools, protect them with Entra ID/OAuth, and test in GitHub Copilot Chat for safe, action-ready AI workflows.

Making Logic Apps Speak Business
Post by Al Ghoniem
Stop forcing Logic Apps to look like business diagrams. With Business Process Tracking, you can keep workflows technically sound while giving business users clear, stage-based visibility into processes - decoupled, visual, and KPI-driven.

Moving the Logic Apps Designer Forward
Today, we're excited to announce a major redesign of the Azure Logic Apps designer experience, now entering Public Preview for Standard workflows. While these improvements are currently Standard-only, our vision is to quickly extend them across all Logic Apps surfaces and SKUs.

⚠️ Important: As this is a Public Preview release, we recommend using these features for development and testing workflows rather than production workloads. We're actively stabilizing the experience based on your feedback.

This Is Just the Beginning

This is not us declaring victory and moving on. This is Phase I of a multi-phase journey, and I'm committed to sharing our progress through regular blog posts as we continue iterating. More importantly, we want to hear from you. Your feedback drives these improvements, and it will continue to shape what comes next. This redesign comes from listening to you, our customers, watching how you actually work, and adapting the designer to better fit your workflows. We've seen the pain points, heard the frustrations, and we're addressing them systematically.

Our Roadmap: Three Phases

Phase I: Perfecting the Development Loop (what we're releasing today). We're focused on making it cleaner and faster to edit your workflow, test it, and see the results. The development loop should feel effortless, not cumbersome.

Phase II: Reimagining the Canvas. Next, we'll rethink how the canvas works, introducing new shortcuts and workflows that make modifications easier and more intuitive.

Phase III: Unified Experiences Across All Surfaces. We'll ensure VS Code, Consumption, and Standard all deliver similarly powerful flows, regardless of where you're working.

Beyond these phases, we have several standalone improvements planned: a better search experience, streamlined connection creation and management, and removing unnecessary overhead when creating new workflows. We're also tackling fundamental questions that shouldn't be barriers: What do stateful and stateless mean? Why can't you switch between them? Why do you have to decide upfront if something is an agent? You shouldn't. We're working toward making these decisions dynamic - something you can change directly in the designer as you build, not rigid choices you're locked into at creation time. We want to make it easier to add agentic capabilities to any workflow, whenever you need them.

What's New in Phase I

Let me walk you through what we're shipping at Ignite.

Faster Onboarding: Get to Building Sooner

We're removing friction from the very beginning. When you create a new workflow, you'll get to the designer before having to choose stateful, stateless, or agentic. Eventually, we want to eliminate that upfront choice entirely, making it a decision you can defer until after your workflow is created. This one still needs a bit more work, but it's coming soon.

One View to Rule Them All

We've removed the side panel. Workflows now exist in a single, unified view with all the tooling you need. No more context switching. You can easily hop between run history, code view, or the visual editor, and change your settings inline - all without leaving your workflow.

Draft Mode: Auto-Save Without the Risk

Here's one of our biggest changes: draft mode with auto-save. We know the best practice is to edit locally in VS Code, store workflows in GitHub, and deploy properly to keep editing separate from production. But we also know that's not always possible or practical for everyone.
It sucks to get your workflow into the perfect state, then lose everything if something goes wrong before you hit save. Now your workflow auto-saves every 10 seconds in draft mode. If you refresh the window, you're right back where you were - but your changes aren't live in production. There's now a separate Publish action that promotes your draft to production. This means you can work, test your workflow against the draft using the designer tools, verify everything works, and then publish to production - even when editing directly on the resource. Another benefit: draft saves won't restart your app. Your app keeps running. Restarts only happen when you publish.

Smarter, Faster Search

We've reorganized how browsing works - no more getting dropped into an endless list of connectors. You now get proper guidance as you explore and can search directly for what you need. Even better, we're moving search to the backend in the coming weeks, which will eliminate the need to download information about thousands of connectors upfront and deliver instant results. Our goal: no search should ever feel slow.

Document Your Workflows with Notes

You can now add sticky notes anywhere in your workflow. Drop a post-it note, add markdown (yes, even YouTube videos), and document your logic right on the canvas. We have plans to improve this with node anchoring and better stability features, but for now, you can visualize and explain your workflows more clearly than ever.

Unified Monitoring and Run History

Making the development loop smoother means keeping everything in one place. Your run history now lives on the same page as your designer. Switch between runs without waiting for full blade reloads. We've also added the ability to view both draft and published runs - a powerful feature that lets you test and validate your changes before they go live. We know there's a balance between developer and operator personas. Developers need quick iteration and testing capabilities, while operators need reliable monitoring and production visibility. This unified view serves both: developers can test draft runs and iterate quickly, while the clear separation between draft and published runs ensures operators maintain full visibility into what's actually running in production.

New Timeline View for Better Debugging

We experimented with a timeline concept in Agentic Apps to explain handoff - Logic Apps' first foray into cyclic graphs. But it was confusing and didn't work well with other Logic App types. We've refined it. On the left-hand side, you'll now see a hierarchical view of every action your Logic App ran, in execution order. This makes navigation and debugging dramatically easier when you're trying to understand exactly what happened during a run.

What's Next

This is Phase I. We're shipping these improvements, but we're not stopping here. As we move into Phase II and beyond, I'll continue sharing updates through blog posts like this one.

How to Share Your Feedback

We're actively listening and want to hear from you:
Use the feedback button in the Azure Portal designer
Join the discussion in GitHub/Community Forum - https://github.com/Azure/LogicAppsUX
Comment below with your thoughts and suggestions

Your input directly shapes our roadmap and priorities. Keep the feedback coming. It's what drives these changes, and it's what will shape the future of Azure Logic Apps. Let's build something great together.

Typical Storage access issues troubleshooting
We get a large number of cases where the Storage Account connection fails, and we often see that customers aren't aware of the troubleshooting steps they can take to accelerate resolution. As such, we've compiled some scenarios and the usual troubleshooting steps we ask you to take. Always remember that if you have made changes to your infrastructure, consider rolling them back to ensure they are not the root cause. Even a small change that apparently has no effect may cause downtime on your application.

Common messages

The errors shown in the portal when the Storage Account connectivity is down are very similar, and they may not correctly indicate the cause. Error messages that surface in the portal for Logic Apps Standard include:

System.Private.CoreLib: Access to the path 'C:\home\site\wwwroot\host.json' is denied
Cannot reach host runtime. Error details, Code: 'BadRequest', Message: 'Encountered an error (InternalServerError) from host runtime.'
System.Private.CoreLib: The format of the specified network name is invalid. : 'C:\\home\\site\\wwwroot\\host.json'.'
System.Private.CoreLib: The user name or password is incorrect. : 'C:\home\site\wwwroot\host.json'.
Microsoft.Windows.Azure.ResourceStack: The SSL connection could not be established, see inner exception.
System.Net.Http: The SSL connection could not be established, see inner exception.
System.Net.Security: Authentication failed because the remote party has closed the transport stream
Unexpected error occurred while loading workflow content and artifacts

The errors don't really indicate what the root cause is, but a broken connection with the Storage Account is very commonly behind them.

What to verify?

There are four major components to verify in these cases:
Logic App environment variables and network settings
Storage Account networking settings
Network settings
DNS settings

Logic App environment variables and network

From an app settings point of view there is not much to verify, but these are important steps that are sometimes overlooked. At this time, all or nearly all Logic Apps have been migrated to the dotnet FUNCTIONS_WORKER_RUNTIME (under the Environment variables tab), but this is good to confirm. It's also good to confirm that your Platform setting is set to 64 Bit (under Configuration tab / General Settings). We've seen that some deployments use old templates that set this to 32 Bit, which doesn't make full use of the available resources.

Check that the Logic App has the following environment variables with values:
WEBSITE_CONTENTOVERVNET set to 1, OR
WEBSITE_VNET_ROUTE_ALL set to 1, OR
vnetRouteAllEnabled set to 1.
(See: Configure virtual network integration with application and configuration routing - Azure App Service | Microsoft Learn)

These settings can also be replaced with the UI setting in the Virtual Network tab, when you select "Content Storage" in the Configuration routing. For better understanding, vnetContentShareEnabled takes precedence. In other words, if it is set (true/false), WEBSITE_CONTENTOVERVNET is ignored. Only if vnetContentShareEnabled is null is WEBSITE_CONTENTOVERVNET taken into account.
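As a quick way to audit these values, here is a minimal Azure CLI sketch with assumed resource names. Logic Apps Standard is built on the Functions runtime, so the functionapp commands apply; the block lists the settings discussed above plus the storage-related ones detailed next.

RG="my-rg"; APP="my-logicapp-standard"   # assumed names

# Inspect the relevant settings in one pass
az functionapp config appsettings list -g "$RG" -n "$APP" \
  --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='WEBSITE_CONTENTOVERVNET' || name=='WEBSITE_CONTENTSHARE' || name=='AzureWebJobsStorage' || name=='WEBSITE_CONTENTAZUREFILECONNECTIONSTRING']" -o table

# Route content-share traffic through the VNet
az functionapp config appsettings set -g "$RG" -n "$APP" \
  --settings WEBSITE_CONTENTOVERVNET=1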
Also keep this in mind: Storage considerations for Azure Functions | Microsoft Learn

WEBSITE_CONTENTAZUREFILECONNECTIONSTRING and AzureWebJobsStorage must contain the connection string as shown in the Storage Account
(Website_contentazurefileconnectionstring | App settings reference for Azure Functions | Microsoft Learn; Azurewebjobsstorage | App settings reference for Azure Functions | Microsoft Learn)
WEBSITE_CONTENTSHARE must contain the file share name
(Website_contentshare | App settings reference for Azure Functions | Microsoft Learn)

These are the first points to validate.

Storage Account settings

If all of these match and are properly configured and the Logic App is still in error, we move to the next step: validating the Storage Account network settings. When the Storage Account does not have VNet integration enabled, there should be no issues, because the connection is made through the public endpoints. Still, even then, you must ensure that at least "Allow storage account key access" is enabled. This is because, at this time, the Logic App depends on the access key to connect to the Storage Account. Although you can set AzureWebJobsStorage to run with Managed Identity, you can't fully disable storage account key access for Standard logic apps that use the Workflow Service Plan hosting option. With the ASE v3 hosting option, however, you can disable storage account key access after you finish the steps to set up managed identity authentication. (Create example Standard workflow in Azure portal - Azure Logic Apps | Microsoft Learn)

If this setting is enabled, check whether the Storage Account is behind a firewall. Access may be enabled for selected networks or fully disabled. Both options require Service Endpoints or Private Endpoints to be configured. (Deploying Standard Logic App to Storage Account behind Firewall using Service or Private Endpoints | Microsoft Community Hub)

So check the Networking tab under the Storage Account and confirm the following:

If you select the "selected networks" option, confirm that the VNet is the same one the Logic App is integrated with. Your Logic App and Storage may be hosted in different VNets, but you must ensure there is full connectivity between them: they must be peered, with HTTPS and SMB traffic allowed (more on this in the Network section).
You can select "Disabled" network access as well.
Confirm that the file share is created. Usually it is created automatically with the creation of the Logic App, but if you use Terraform or ARM, the deployment may not create the file share and you must do it manually.
Confirm that all four Private Endpoints are created and approved (File, Table, Queue and Blob). All these resources are used by different components of the Logic App. This is not fully documented, as it is internal engine behavior and not publicly available. For Azure Functions, the runtime base, this is partially documented in: Storage considerations for Azure Functions | Microsoft Learn
If a Private Endpoint is missing, create it and link it to the VNet as a shared resource. Not having all Private Endpoints created may result in runtime errors, connection errors, or trigger failures. For example, if a workflow does not generate its URL even though it saves correctly, the Table and Queue Private Endpoints may be missing, as we've seen many times with customers.

A CLI sketch of these checks follows below.
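To verify those points without clicking through the portal, a minimal Azure CLI sketch (assumed names) could look like this:

RG="my-rg"; SA="mystorageaccount"   # assumed names

# Does the content file share exist? Its name must match WEBSITE_CONTENTSHARE.
az storage share-rm list --storage-account "$SA" -g "$RG" -o table

# Are all four private endpoint connections (blob, file, queue, table) approved?
az storage account show -g "$RG" -n "$SA" \
  --query "privateEndpointConnections[].{endpoint:privateEndpoint.id, status:privateLinkServiceConnectionState.status}" -o table

# Is key-based access still allowed? Required for the Workflow Service Plan option.
az storage account show -g "$RG" -n "$SA" --query allowSharedKeyAccess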
You can read more about the integration of the Logic App with a firewall-secured Storage Account and the needed configuration in these articles:
Secure traffic between Standard workflows and virtual networks - Azure Logic Apps | Microsoft Learn
Deploy Standard logic apps to private storage accounts - Azure Logic Apps | Microsoft Learn

You can use the Kudu console (Advanced Tools tab) to further troubleshoot the connection with the Storage Account using some network troubleshooting commands. If the Kudu console is not available, we recommend using a VM in the same VNet as the Logic App to mimic the scenario.

Nslookup [hostname or IP] [DNS HOST IP]
TCPPing [hostname or IP]:[PORT]
Test-NetConnection [hostname] -Port [PORT]

If you have a custom DNS, the nslookup command will not return the results from your DNS unless you specify its IP address as a parameter. Instead, you can use the nameresolver command, which uses the VNet DNS settings to check endpoint name resolution.

nameresolver [endpoint hostname or IP address]

(See: Networking Related Commands for Azure App Services | Microsoft Community Hub)

VNet configuration

Having a Private Endpoint configured for the Logic App will not affect traffic to the Storage Account. This is because the Private Endpoint covers only inbound traffic. The storage communication is outbound traffic, as it's the Logic App that actively communicates with the Storage Account. (Secure traffic between Standard workflows and virtual networks - Azure Logic Apps | Microsoft Learn)

So consider that the link between these resources must not be interrupted. The Logic App uses both HTTPS and SMB protocols to communicate with the Storage Account, meaning traffic on ports 443 and 445 needs to be fully allowed in your VNet. If you have a Network Security Group associated with the Logic App subnet, you need to confirm that its rules allow this traffic. You may need to explicitly create rules such as the following (a CLI sketch appears at the end of this section):

Storage account (HTTPS): source port *, destination port 443, source: subnet integrated with the Standard logic app, destination: storage account, protocol: TCP
Server Message Block (SMB) file share: source port *, destination port 445, source: subnet integrated with the Standard logic app, destination: storage account, protocol: TCP

If you have forced routing to a Network Virtual Appliance (i.e., a firewall), you must also ensure that this appliance is not filtering or blocking the traffic. TLS inspection, if enabled in your firewall, must be disabled for the Logic App traffic. In short, this is because the firewall replaces the certificate in the message, so the Logic App does not recognize the returned certificate, invalidating the message. You can read more about TLS inspection here: Azure Firewall Premium features | Microsoft Learn

DNS

If you are using Azure DNS, this section should not apply, because all records are created automatically when you create the resources. But if you're using a custom DNS, when you create an Azure resource (e.g., a Storage Private Endpoint), the IP address won't be registered in your DNS, so you must register it manually. You must ensure that all A records are created and maintained, and that they point to the correct IP and name. If there are mismatches, communications may be severed between the Logic App and other resources, such as the Storage Account. So double-check all DNS records and confirm that everything is in its proper state and place.
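Here is the CLI sketch referenced above: the two NSG rules expressed with service tags, plus a custom-DNS check for the file endpoint's A record. Names are assumed; adjust the prefixes to your topology.

RG="my-rg"; NSG="logicapp-subnet-nsg"   # assumed names

# Allow HTTPS (443) and SMB (445) from the integrated subnet to Azure Storage
az network nsg rule create -g "$RG" --nsg-name "$NSG" \
  -n "Allow-Storage-Https-Smb" --priority 200 --direction Outbound --access Allow \
  --protocol Tcp --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes Storage --destination-port-ranges 443 445

# Custom DNS only: confirm the A record for the file private endpoint exists
az network private-dns record-set a list -g "$RG" \
  --zone-name "privatelink.file.core.windows.net" -o table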
And to make it even easier, with the help of my colleague Mohammed_Barqawi, this information is now translated into an easy-to-understand flowchart. If you continue to have issues after all these steps are verified, I suggest you open a case with us so that we can validate what else may be happening, because either a step may have been missed or some other issue may be occurring.

Duplicate detection in Logic App Trigger
Overview

In certain scenarios, a Logic App may process the same item multiple times for various reasons. Common examples include:

SharePoint polling: The trigger might fire twice because of subsequent edits. (Reference: Microsoft SharePoint Connector for Power Automate | Microsoft Learn)
Dataverse webhook: This can occur when a field is updated by plugins.

Solution Approach

To address this issue, I implemented a solution using the Logic Apps REST API: Workflow Runs - List - REST API (Azure Logic Apps) | Microsoft Learn. The idea is simple: each time the Logic App is triggered, it searches all available runs within a specified time window for the same clientTrackingId.

Implementation Details

Step 1: Extract clientTrackingId
For the SharePoint trigger example, I used the file name as the clientTrackingId with the following expression:

@string(trigger()['outputs']['body']['value'][0]['{FilenameWithExtension}'])

Step 2: Pass Payload to Duplicate Detector Flow
The payload includes:
clientTrackingId (chosen identifier)
Resource URI for the SharePoint flow
Time window to check against

Duplicate Detector Flow Logic

The flow retrieves the run history and checks whether the number of runs is greater than 1. If so, the same file was processed before, and the HTTP response returns the status produced by the expression below (a raw REST sketch of this lookup appears after the notes):

if(greater(length(body('HTTP-Get_History')['value']), 1), '400', '200')

Important Notes

The Duplicate Detector flow must have the Logic Apps Standard Reader (Preview) permission on the resource group.
The solution is built on Logic Apps Standard, but it can be adapted for Logic Apps Consumption or Power Automate.

GitHub Link: logicappfiles/Logic Apps Duplicate Detector at main · mbarqawi/logicappfiles
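For reference, here is a sketch of the run-history lookup as a raw management call for a Standard workflow. The hostruntime path and the correlation property shape are assumptions drawn from the Standard management API; verify them against the REST reference linked above.

SUB="<subscription-id>"; RG="my-rg"; SITE="my-logicapp"; WF="sharepoint-flow"   # assumed names

# List recent runs and keep only those matching the clientTrackingId in question
az rest --method get --url \
  "https://management.azure.com/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.Web/sites/$SITE/hostruntime/runtime/webhooks/workflow/api/management/workflows/$WF/runs?api-version=2018-11-01" \
  --query "value[?properties.correlation.clientTrackingId=='invoice-2024.pdf'].{run:name, status:properties.status}"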
Clone a Consumption Logic App to a Standard Workflow

Azure Logic Apps now supports cloning existing Consumption Logic Apps into Standard workflows. This feature simplifies migration by allowing you to reuse your existing workflows without starting from scratch. It's designed to help organizations transition to the Standard model, which offers improved performance, flexibility, and advanced capabilities.

Why Does It Matter?

Migrating from Consumption to Standard has been a common challenge for teams looking to modernize their integration solutions. Standard workflows provide:
Better performance through single-tenant architecture.
An enhanced developer experience with local development and VS Code integration.
Advanced features like built-in connectors, stateful/stateless workflows, and integration with Azure Functions.

This new cloning capability reduces friction, accelerates adoption, and ensures continuity for existing business processes. And with the announcement of Agent Loop support in Logic Apps Consumption, the ability to clone agentic workflows to a Logic Apps Standard application gives you a graduation strategy for your agents.

Key Highlights

Direct cloning: Convert an existing Consumption Logic App into a Standard workflow with minimal manual effort.
Preserves workflow design: Your triggers, actions, and configurations are carried over.
Supports modernization: Enables migration to a model that supports advanced features like custom connectors and private endpoints.
Improved governance: Standard workflows allow better control over deployment and scaling.

How to Get Started

Navigate to your Consumption Logic App in the Azure portal.
Select Clone to Standard from the available options.
Choose the target Logic App (Standard) environment and configure settings.
Validate and deploy the cloned workflow.

Limitations

Infrastructure settings aren't cloned (e.g., integration account configurations).
Connections and credentials must be reconfigured after cloning.
Secure parameters aren't copied; placeholders are created and need updating from Key Vault.
Connectors default to shared even if built-in versions exist.
Unsupported items include: integration account references, XML/flat file transformations, EDIFACT and X12 actions, nested workflows, and Azure Function calls.

Learn more

To learn more about this new feature, check the official documentation on Microsoft Docs.

Announcing the General Availability (GA) of the Premium v2 tier of Azure API Management
Superior capacity, the highest entity limits, unlimited included calls, and the most comprehensive set of features set the Premium v2 tier apart from other API Management tiers. Customers rely on the Premium v2 tier for running enterprise-wide API programs at scale, with high availability and performance.

The Premium v2 tier has a new architecture that eliminates management traffic from the customer VNet, making private networking much more secure and easier to set up. During the creation of a Premium v2 instance, you can choose between the VNet injection and VNet integration (introduced in the Standard v2 tier) options. In addition, today we are also adding three new features to Premium v2:

Inbound Private Link: You can now enable private endpoint connectivity to restrict inbound access to your Premium v2 instance. It can be enabled along with VNet injection or VNet integration, or without a VNet.
Availability zone support: Premium v2 now supports availability zones (zone redundancy) to enhance the reliability and resilience of your API gateway.
Custom CA certificates: The Azure API Management v2 gateway can now validate TLS connections with the backend service using custom CA certificates.

New and improved VNet injection

Using VNet injection in Premium v2 no longer requires configuring routes or service endpoints. Customers can secure their API workloads without impacting API Management dependencies, while Microsoft can secure the infrastructure without interfering with customer API workloads. In short, the new VNet injection implementation enables both parties to manage network security and configuration settings independently and without affecting each other. You can now configure your APIs with complete networking flexibility: force-tunnel all outbound traffic to on-premises, send all outbound traffic through an NVA, or add a WAF device to monitor all inbound traffic to your API Management Premium v2 instance - all without constraints.

Inbound Private Link

Customers can now configure an inbound private endpoint for their API Management Premium v2 instance to let API consumers securely access the API Management gateway over Azure Private Link. The private endpoint uses an IP address from the Azure virtual network in which it's hosted. Network traffic between a client on your private network and API Management traverses the virtual network and a Private Link on the Microsoft backbone network, eliminating exposure to the public internet. Further, you can configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address.

With a private endpoint and Private Link, you can:
Create multiple Private Link connections to an API Management instance.
Use the private endpoint to send inbound traffic on a secure connection.
Apply different API Management policies based on whether traffic comes from the private endpoint.
Limit incoming traffic to private endpoints only, preventing data exfiltration.
Combine with inbound virtual network injection or outbound virtual network integration to provide end-to-end network isolation of your API Management clients and backend services.

More details can be found here. Today, only the API Management instance's Gateway endpoint supports inbound Private Link connections. Each API Management instance can support at most 100 Private Link connections. A CLI sketch of creating such an endpoint follows below.
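For illustration, here is a minimal Azure CLI sketch (assumed names) of creating that inbound private endpoint. Using "Gateway" as the group id is an assumption; check the documentation linked above for the exact sub-resource name.

RG="my-rg"; APIM="my-apim-v2"; VNET="my-vnet"; SUBNET="pe-subnet"   # assumed names

APIM_ID=$(az apim show -g "$RG" -n "$APIM" --query id -o tsv)

az network private-endpoint create -g "$RG" -n "${APIM}-pe" \
  --vnet-name "$VNET" --subnet "$SUBNET" \
  --private-connection-resource-id "$APIM_ID" \
  --group-id Gateway --connection-name "${APIM}-plink"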
Availability zones

Azure API Management Premium v2 now supports availability zone (AZ) redundancy to enhance the reliability and resilience of your API gateway. When deploying an API Management instance in an AZ-enabled region, users can choose to enable zone redundancy. This distributes the service's units, including the gateway, management plane, and developer portal, across multiple physically separate AZs within that region. Learn how to enable AZs here.

CA certificates

If the API Management gateway needs to connect to backends secured with TLS certificates issued by private certificate authorities (CAs), you need to configure custom CA certificates in the API Management instance. Custom CA certificates can be added and managed as authorization credentials in the Backend entities. The Backend entity has been extended with new properties allowing customers to specify a list of certificate thumbprints, or subject name + issuer thumbprint pairs, that the gateway should trust when establishing a TLS connection with the associated backend endpoint. More details can be found here.

Region availability

The Premium v2 tier is now generally available in six public regions (Australia East, East US 2, Germany West Central, Korea Central, Norway East and UK South), with additional regions coming soon. For pricing information and regional availability, please visit the API Management pricing page.

Learn more

API Management v2 tiers FAQ
API Management v2 tiers documentation
API Management overview documentation

Preview: Govern, Secure, and Observe A2A APIs with Azure API Management
Today, we're announcing preview support for A2A (Agent2Agent) APIs in Azure API Management. With this capability, organizations can now manage and govern agent APIs alongside AI model APIs, Model Context Protocol (MCP) tools, and traditional APIs such as REST, SOAP, GraphQL, WebSocket, and gRPC - all within a single, consistent API management plane.

Extending API Governance into the Agentic Ecosystem

As organizations adopt agentic systems, the need for consistent governance, security, and observability grows. With A2A API support, Azure API Management enables you to extend established API practices into the agentic world, ensuring secure access, consistent policy enforcement, and complete visibility for AI agents.

A2A APIs in Azure API Management:
Mediate JSON-RPC runtime operations with policy support
Expose and manage agent cards for users, clients, or other agents
Support OpenTelemetry GenAI semantic conventions when logging traces to Application Insights, including the "gen_ai.agent.id" and "gen_ai.agent.name" attributes

How It Works

When you import an A2A API, API Management mediates runtime calls to your agent backend (JSON-RPC only) and exposes the agent card as an operation within the same API. The agent card is transformed automatically to represent the A2A API managed by API Management: the hostname is replaced by API Management's gateway address, security schemes are converted to the authentication configured in API Management, and unsupported interfaces are removed. When integrated with Application Insights, API Management enriches traces with GenAI-compliant telemetry attributes, allowing easy identification of the agent and deep correlation between API and agent execution traces for monitoring and debugging.

Try It Out

To import an A2A API:
Navigate to the APIs page in the Azure portal and select the A2A Agent tile.
Enter your agent card URL. If accessible, the portal will automatically populate relevant settings.
Configure the remaining properties, such as the API path in API Management.

This functionality is currently available only in the v2 tiers of API Management and will continue to roll out to all tiers in the coming months.

Start Managing Your Agent APIs

With A2A support in Azure API Management, you can now bring agent APIs under the same governance and security umbrella as your existing APIs, strengthening control, security, and observability across your AI and API ecosystems. Learn more about A2A API support in Azure API Management.

Announcing Public Preview of Agent Loop in Azure Logic Apps Consumption
We're excited to announce a major leap forward in democratizing AI-powered business process automation: Agent Loop is now available in Azure Logic Apps Consumption, bringing advanced AI agent capabilities to a broader audience with a frictionless, pay-as-you-go experience.

NOTE: This feature is being rolled out and is expected to be in all planned regions by the end of the week.

What's New?

Agent Loop, previously available only in Logic Apps Standard, is now available in Consumption logic apps, providing developers, small and medium-sized businesses, startups, and enterprise teams with the ability to create autonomous and conversational AI agents without having to provision or manage dedicated AI infrastructure. With Agent Loop, customers can develop both autonomous and conversational agents, seamlessly transforming any workflow into an intelligent workflow using the agent loop action. These agents are powered by knowledge and tools through access to over 1,400 connectors and MCPs (to be introduced soon).

Why Does This Matter?

By extending Agent Loop to Logic Apps Consumption, we're making AI agent capabilities accessible to everyone - from individual developers to large enterprises - without barriers. This move supports rapid prototyping, experimentation, and production workloads, all while maintaining the flexibility to upgrade as requirements evolve.

Key highlights:

Hosted on Behalf Of (HOBO) model: With this model, customers can harness the power of advanced Foundry models directly within their Logic Apps, without the need to provision or manage AI resources themselves. Microsoft handles all the underlying large language model (LLM) infrastructure, preserving the serverless, low-overhead nature of Consumption Logic Apps and letting you focus purely on building intelligent workflows.
Frictionless entry point: With Microsoft hosting and managing the Foundry model, customers only need an Azure subscription to set up an agentic workflow. This dramatically reduces entry barriers and enables anyone with access to Azure to leverage powerful AI agent automation right away.
Pay-as-you-go billing: You're billed based on the number of tokens used for each agentic iteration, making experimentation and scaling cost-effective. No fixed infrastructure costs or complex setup.
Extensive connector ecosystem: Access to an extensive ecosystem of over 1,400 connectors facilitates seamless integration with a broad range of enterprise systems, APIs, and data sources.
Enterprise-grade upgrade path: As your needs grow - whether for higher performance, compliance, or custom model hosting - you can seamlessly graduate to Logic Apps Standard, bringing your own model and unlocking advanced features like VNet support and local development. See https://learn.microsoft.com/en-us/azure/logic-apps/clone-consumption-logic-app-to-standard-workflow
Security and tenant isolation: The HOBO model ensures strong tenant isolation and security boundaries, so your data and workflows remain protected.
Chat client authentication: Setting up the chat client is straightforward, with built-in security provided using OAuth policies.

How to Get Started?

Check out the video below to see examples of conversational and autonomous agent workflows in Consumption Logic Apps. For detailed instructions on creating agentic workflows, visit Overview | Logic Apps Labs. Refer to the official documentation for more information on this feature: Workflows with AI Agents and Models - Azure Logic Apps | Microsoft Learn.
Limitations:

Local development capabilities and VNet integration are not supported with Consumption Logic Apps.
Regional data residency isn't guaranteed for the agentic actions. If you have any GDPR (General Data Protection Regulation) concerns, use Logic Apps Standard.
Nested agents and MCP tools are currently unavailable but will be added soon. If you need these features, use Logic Apps Standard.
Currently, West Europe and West US are the supported regions; additional regions will be available soon.