Azure API Management: Your Auth Gateway for MCP Servers
The Model Context Protocol (MCP) is quickly becoming the standard for integrating Tools 🛠️ with Agents 🤖, and Azure API Management is at the forefront, ready to support this open-source protocol 🚀. You may have already encountered discussions about MCP, so let's clarify some key concepts:

- Model Context Protocol (MCP): a standardized protocol that lets AI models interact with external tools, read data or perform actions, and enrich context for any language model.
- AI Agents/Assistants: autonomous LLM-powered applications that use tools to connect to the external services required to accomplish tasks on behalf of users.
- Tools: components made available to agents that allow them to interact with external systems, perform computation, and take actions to achieve specific goals.
- Azure API Management: a platform-as-a-service that supports the complete API lifecycle, enabling organizations to create, publish, secure, and analyze APIs with built-in governance, security, analytics, and scalability.

New Cool Kid in Town - MCP

AI agents are becoming widely adopted thanks to enhanced Large Language Model (LLM) capabilities. However, even the most advanced models face limitations because they are isolated from external data: each new data source has traditionally required a custom implementation to extract, prepare, and make data accessible to the model(s) - a lot of heavy lifting. Anthropic developed an open-source standard, the Model Context Protocol (MCP), to connect your agents to external data sources, whether local (databases or computer files) or remote (systems available over the internet, e.g. through APIs).

- MCP Hosts: LLM applications, such as chat apps or AI assistants in your IDE (like GitHub Copilot in VS Code), that need to access external capabilities.
- MCP Clients: protocol clients inside the host application that maintain 1:1 connections with servers.
- MCP Servers: lightweight programs that each expose specific capabilities and provide context, tools, and prompts to clients.
- MCP Protocol: the transport layer in the middle.

At its core, MCP follows a client-server architecture in which a host application can connect to multiple servers. Whenever your MCP host or client needs a tool, it connects to an MCP server, and the MCP server in turn connects to, for example, a database or an API. Hosts and servers communicate with each other through the MCP protocol. You can create your own custom MCP servers that connect to your own or organizational data sources. For a quick start, please visit our GitHub repository to learn how to build a remote MCP server using Azure Functions without authentication: https://aka.ms/mcp-remote

Remote vs. Local MCP Servers

The MCP standard supports two modes of operation:

- Remote MCP servers: MCP clients connect to MCP servers over the internet, establishing a connection using HTTP and Server-Sent Events (SSE) and authorizing the MCP client's access to resources on the user's account using OAuth.
- Local MCP servers: MCP clients connect to MCP servers on the same machine, using stdio as the local transport.

Azure API Management as the AI Auth Gateway

Now that we have seen that MCP servers can connect to remote services through an API, the question arises: how can we expose our remote MCP servers in a secure and scalable way? This is where Azure API Management comes in, giving us a way to securely and safely expose tools as MCP servers.
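Before diving into the gateway setup, it helps to see what a tool invocation actually looks like on the wire. The following is a minimal sketch of the two core message frames; MCP messages follow JSON-RPC 2.0, but the tool name and arguments here are purely illustrative, and exact field names should be checked against the MCP specification.

```jsonc
// Two separate JSON-RPC 2.0 messages, shown together for illustration.
// tools/list asks the server which tools it exposes:
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// tools/call invokes one of those tools; ids correlate responses to requests:
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "list_issues",
    "arguments": { "repository": "contoso/sample-repo" }
  }
}
```

Over the remote transport, messages like these ride on HTTP, with server-to-client traffic delivered via SSE, which is exactly the surface the gateway patterns below are securing.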
Azure API Management provides:

- Security: AI agents often need to access sensitive data. API Management, acting as a remote MCP proxy, safeguards organizational data through authentication and authorization.
- Scalability: as the number of LLM interactions and external tool integrations grows, API Management ensures the system can handle the load.

Security remains a critical piece of building MCP servers, as agents need to securely connect to protected endpoints (tools) to perform actions or read protected data. When building remote MCP servers, you need a way to let users log in (authentication) and to let them grant the MCP client access to resources on their account (authorization).

MCP - Current Authorization Challenges (status as of 4/10/2025)

Recent changes in MCP authorization have sparked significant debate within the community.

🔍 Key challenges with the authorization changes: the MCP server is now treated as both a resource server AND an authorization server. This dual role has fundamental implications for MCP server developers and runtime operations.

💡 Our solution: to address these challenges, we recommend using Azure API Management as your authorization gateway for remote MCP servers.

🔗 For an enterprise-ready solution, please check out our azd up sample repo to learn how to build a remote MCP server using Azure API Management as your authentication gateway: https://aka.ms/mcp-remote-apim-auth

The Authorization Flow

The workflow involves three core components: the MCP client, the APIM gateway, and the MCP server, with Microsoft Entra managing authentication (AuthN) and authorization (AuthZ). Using the OAuth protocol, the client starts by calling the APIM gateway, which redirects the user to Entra for login and consent. Once authenticated, Entra provides an access token to the gateway, which then exchanges a code with the client to generate an MCP server token. This token allows the client to communicate securely with the server via the gateway, ensuring user validation and scope verification. Finally, the MCP server establishes a session key for ongoing communication through a dedicated message endpoint. (A minimal code sketch of the token-exchange step appears at the end of this post.) Diagram source: https://aka.ms/mcp-remote-apim-auth-diagram

Conclusion

Azure API Management (APIM) is an essential tool for enterprise customers looking to integrate AI models with external tools using the Model Context Protocol (MCP). In this blog, we've emphasized the simplicity of connecting AI agents to various data sources through MCP, streamlining previously complex implementations. Given how critical secure access to platforms and services is for AI agents, APIM offers robust solutions for managing OAuth tokens and ensuring secure access to protected endpoints, making it an invaluable asset for enterprises despite the challenges of authentication.

API Management: An Enterprise Solution for Securing MCP Servers

Azure API Management is designed to help you securely expose your remote MCP servers. MCP servers are still very new, and as the technology evolves, API Management provides an enterprise-ready solution that will evolve with it. Stay tuned for further feature announcements soon!

Acknowledgments

This post and work were made possible thanks to the hard work and dedication of our incredible team.
Special thanks to Pranami Jhawar, Julia Kasper, Julia Muiruri, Annaji Sharma Ganti, Jack Pa, Chaoyi Yuan, and Alex Vieira for their invaluable contributions.

Additional Resources

MCP Client Server integration with APIM as AI gateway:
- Blog post: https://aka.ms/remote-mcp-apim-auth-blog
- Sequence diagram: https://aka.ms/mcp-remote-apim-auth-diagram
- APIM lab: https://aka.ms/ai-gateway-lab-mcp-client-auth
- Python: https://aka.ms/mcp-remote-apim-auth
- .NET: https://aka.ms/mcp-remote-apim-auth-dotnet
- On-Behalf-Of authorization: https://aka.ms/mcp-obo-sample

3rd-party APIs - backend auth via Credential Manager:
- Blog post: https://aka.ms/remote-mcp-apim-lab-blog
- APIM lab: https://aka.ms/ai-gateway-lab-mcp
- YouTube video: https://aka.ms/ai-gateway-lab-demo
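As promised in the authorization-flow section above, here is a minimal sketch of the authorization-code-for-token exchange that sits at the heart of that flow. It targets the Microsoft Entra v2.0 token endpoint; the tenant ID, client ID, and redirect URI are placeholder assumptions, and in the APIM gateway pattern this exchange is performed by the gateway itself rather than by your own code.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;

// Minimal sketch: exchange an authorization code for tokens at the
// Microsoft Entra v2.0 token endpoint. Every identifier is a placeholder;
// in the APIM gateway pattern, the gateway performs this exchange for you.
var authorizationCode = "<code-from-login-redirect>";
var pkceVerifier = "<pkce-code-verifier>";

var http = new HttpClient();
var response = await http.PostAsync(
    "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token",
    new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["client_id"] = "<app-registration-client-id>",
        ["grant_type"] = "authorization_code",
        ["code"] = authorizationCode,
        ["redirect_uri"] = "https://contoso.example/callback",
        ["code_verifier"] = pkceVerifier
    }));

response.EnsureSuccessStatusCode();
// The JSON body carries access_token and refresh_token; the gateway stores
// them so the MCP client only ever holds its own MCP server token.
Console.WriteLine(await response.Content.ReadAsStringAsync());
```

Keeping this exchange inside the gateway is the design point: the backend tokens never reach the MCP client, which only receives the scoped MCP server token described above.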
Feedback Opportunity: SRE Agent + Logic Apps

Azure SRE Agent was recently announced at Microsoft Build in May 2025 and allows organizations to scale their operations through automation, proactive management, and AI. Using AI agents and Large Language Models (LLMs), SRE Agent can analyze logs and metrics to enable rapid root-cause analysis and issue mitigation. To see a demo of SRE Agent in action, please see the following video.

Feedback Opportunity

The Logic Apps team is looking to make further investments in SRE Agent to deliver a streamlined experience for agent and integration customers. We would love to learn more about your operational challenges and how we can better address your needs through this platform. Please share your perspectives through the following form: https://aka.ms/logicapps/sre-agent and I would be happy to have a follow-up conversation with you.

Enhancing AI Integrations with MCP and Azure API Management
As AI agents and assistants become increasingly central to modern applications and experiences, the need for seamless, secure integration with external tools and data sources is more critical than ever. The Model Context Protocol (MCP) is emerging as a key open standard enabling these integrations, allowing AI models to interact with APIs, databases, and other services in a consistent, scalable way.

Understanding MCP

MCP utilizes a client-host-server architecture built upon JSON-RPC 2.0 for messaging. Communication between clients and servers occurs over defined transport layers, primarily:

- stdio: standard input/output, suitable for efficient communication when the client and server run on the same machine.
- HTTP with Server-Sent Events (SSE): uses HTTP POST for client-to-server messages and SSE for server-to-client messages, enabling communication over networks, including to remote servers.

Why MCP Matters

While Large Language Models (LLMs) are powerful, their utility is often limited by their inability to access real-time or proprietary data. Traditionally, integrating new data sources or tools required custom connectors or implementations and significant engineering effort. MCP addresses this by providing a unified protocol for connecting agents to both local and remote data sources, unifying and streamlining integrations.

Leveraging Azure API Management for remote MCP servers

Azure API Management is a fully managed platform for publishing, securing, and monitoring APIs. By treating MCP server endpoints like any other backend API, organizations can apply familiar governance, security, and operational controls. With MCP adoption, the need for robust management of these backend services will only intensify. API Management retains a vital role in governing these underlying assets by:

- Applying security controls to protect backend resources.
- Ensuring reliability.
- Enabling effective monitoring and troubleshooting by tracing requests and context flow.

In this blog post, I will walk you through a practical example: hosting an MCP server behind Azure API Management, configuring credential management, and connecting with GitHub Copilot.

A Practical Example: Automating Issue Triage

To follow along with this scenario, please check out our Model Context Protocol (MCP) lab, available at AI-Gateway/labs/model-context-protocol. Let's move from theory to practice by exploring how MCP, Azure API Management (APIM), and GitHub Copilot can transform a common engineering workflow. Imagine you're an engineering manager aiming to streamline your team's issue triage process, reducing manual steps and improving efficiency.

Example workflow:

- Engineers log bugs and feature requests as GitHub issues.
- Following a manual review, a corresponding incident ticket is generated in ServiceNow.

This manual handoff is inefficient and error-prone. Let's see how we can automate this process, securely connecting GitHub and ServiceNow and enabling an AI agent (GitHub Copilot in VS Code) to handle triage tasks on your behalf. A significant challenge in this integration is securely managing delegated access to backend APIs, like GitHub and ServiceNow, from your MCP server. Azure API Management's credential manager solves this by centralizing secure credential storage and facilitating the secure creation of connections to your third-party backend APIs.
Build and deploy your MCP server(s)

We'll start by building two MCP servers:

- GitHub Issues MCP Server: provides tools to authenticate with GitHub (authorize_github), retrieve user information (get_user), and list issues for a specified repository (list_issues).
- ServiceNow Incidents MCP Server: provides tools to authenticate with ServiceNow (authorize_servicenow), list existing incidents (list_incidents), and create new incidents (create_incident).

We are using Azure API Management to secure and protect both MCP servers, which are built using Azure Container Apps. Azure API Management's credential manager centralizes secure credential storage and facilitates the secure creation of connections to your backend third-party APIs.

Client auth: you can leverage API Management subscriptions to generate subscription keys, enabling client access to these APIs. Optionally, to further secure the /sse and /messages endpoints, we apply the validate-jwt policy to ensure that only clients presenting a valid JWT can access these endpoints, preventing unauthorized access (see AI-Gateway/labs/model-context-protocol/src/github/apim-api/auth-client-policy.xml; an illustrative sketch of such a policy appears at the end of this post). After registering OAuth applications in GitHub and ServiceNow, we update APIM's credential manager with the respective client IDs and client secrets. This enables APIM to perform OAuth flows on behalf of users, securely storing and managing tokens for backend calls to GitHub and ServiceNow.

Connecting your MCP Server in VS Code

With your MCP servers deployed and secured behind Azure API Management, the next step is to connect them to your development workflow. Visual Studio Code now supports MCP, enabling GitHub Copilot's agent mode to connect to any MCP-compatible server and extend its capabilities.

- Open the Command Palette and type MCP: Add Server...
- Select the server type HTTP (HTTP or Server-Sent Events).
- Paste in the server URL.
- Provide a server ID.

This process automatically updates your settings.json with the MCP server configuration. Once added, GitHub Copilot can connect to your MCP servers and access the defined tools, enabling agentic workflows such as issue triage and automation. You can repeat these steps to add the ServiceNow MCP server.

Understanding Authentication and Authorization with Credential Manager

When a user initiates an authentication workflow (e.g., via the authorize_github tool), GitHub Copilot triggers the MCP server to generate an authorization request and a unique login URL. The user is redirected to a consent page, where the registered OAuth application requests permission to access their GitHub account. Azure API Management acts as a secure intermediary, managing the OAuth flow and token storage.

Flow of authorize_github:

Step 1 - Connection initiation: the GitHub Copilot agent initiates an SSE connection to API Management via the MCP client (VS Code).
Step 2 - Tool discovery: APIM forwards the request to the GitHub MCP server, which responds with the available tools.
Step 3 - Authorization request: GitHub Copilot selects and executes the authorize_github tool. The MCP server generates an authorization_id for the chat session.
Step 4 - User consent: if it's the first login, APIM requests a login redirect URL from the MCP server. The MCP server sends the login URL to the client, prompting the user to authenticate with GitHub. Upon successful login, GitHub redirects the client with an authorization code.
Step 5 - Token exchange and storage: the MCP client sends the authorization code to API Management. APIM exchanges the code for access and refresh tokens from GitHub, securely stores the tokens, and creates an Access Control List (ACL) for the service principal.
Step 6 - Confirmation: APIM confirms successful authentication to the MCP client, and the user can now perform authenticated actions, such as accessing private repositories.

Check out the Python logic for how to implement it: AI-Gateway/labs/model-context-protocol/src/github/mcp-server/mcp-server.py

Understanding Tool Calling with underlying APIs in API Management

Using the list_issues tool:

- Connection confirmed: APIM confirms the connection to the MCP client.
- Issue retrieval: the MCP client requests issues from the MCP server. The MCP server attaches the authorization_id as a header and forwards the request to APIM. The list of issues is returned to the agent.

You can use the same process to add the ServiceNow MCP server. With both servers connected, the GitHub Copilot agent can extract issues from a private repo in GitHub and create new incidents in ServiceNow, automating your triage workflow. You can define additional tools, such as suggest_assignee, assign_engineer, update_incident_status, notify_engineer, and request_feedback, to demonstrate a truly closed-loop, automated engineering workflow, from issue creation to resolution and feedback. Take a look at this brief demo showcasing the entire end-to-end process.

Summary

Azure API Management (APIM) is an essential tool for enterprise customers looking to integrate AI models with external tools using the Model Context Protocol (MCP). In this blog, we demonstrated how Azure API Management's credential manager enables the secure creation of connections to your backend APIs. By integrating MCP servers with VS Code and leveraging APIM for OAuth flows and token management, you can enable secure, agentic automation across your engineering tools. This approach not only streamlines workflows like issue triage and incident creation but also ensures enterprise-grade security and governance for all APIs.

Additional Resources

Using credential manager will help with managing OAuth 2.0 tokens to backend services.

Client auth for remote MCP servers:
- AZD up: https://aka.ms/mcp-remote-apim-auth
- AI lab client auth: AI-Gateway/labs/mcp-client-authorization/mcp-client-authorization.ipynb
- Blog post: https://aka.ms/remote-mcp-apim-auth-blog

If you have any questions or would like to learn more about how MCP and Azure API Management can benefit your organization, feel free to reach out to us. We are always here to help and provide further insights. Connect with us on LinkedIn (Julia Kasper & Julia Muiruri) and follow for more updates, insights, and discussions on AI integrations and API management.
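As referenced in the client-auth section above, here is a minimal sketch of what a validate-jwt policy protecting the /sse and /messages endpoints might look like. The tenant, audience, and issuer values are placeholder assumptions; the lab's auth-client-policy.xml is the authoritative version.

```xml
<!-- Illustrative APIM inbound policy fragment: reject requests that do not
     carry a valid JWT. All identifiers below are placeholders. -->
<inbound>
    <validate-jwt header-name="Authorization"
                  failed-validation-httpcode="401"
                  failed-validation-error-message="Unauthorized: valid JWT required">
        <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
        <audiences>
            <audience>api://{your-app-id}</audience>
        </audiences>
        <issuers>
            <issuer>https://login.microsoftonline.com/{tenant-id}/v2.0</issuer>
        </issuers>
    </validate-jwt>
    <base />
</inbound>
```

Combined with subscription keys, this gives the MCP endpoints two layers of client auth: a key that identifies the caller and a token that proves the user's identity.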
Introducing Agent in a Day

Looking for a jumpstart on how to build agents? Confused by the plethora of options when building agents? You have come to the right place. In May 2025, the Logic Apps team introduced Agent Loop, which provides the ability to build autonomous or conversational agents in Logic Apps. This gives customers an easy-to-use agent-building design surface, the ability to deploy your agent to Azure, and integration with Azure AI Foundry. Azure represents an enterprise-ready platform that addresses your organizational requirements, including VNET integration, Private Endpoint support, and Managed Identity, and gives you several scaling options.

Sounds great? It does, but how can I get started? This is where Logic Apps Agent in a Day comes in. We have recently published a step-by-step lab guide that will help you build an IT support agent that uses ServiceNow as the IT Service Management tool. The guide is available here: https://aka.ms/la-agent-in-a-day and we have included an instructor slide deck in Module 1.

Agent in a Day represents a fantastic opportunity for customers to participate in hackathon-style contests where attendees learn how to build agents and can then apply them to their unique business use cases. For partners, Agent in a Day represents a great way to engage your customers by building agents with them and uncovering new use cases. Have any feedback or ideas on how to make this better? Feel free to send me a DM and we can discuss further.

Announcing the General Availability of the Azure Logic Apps Rules Engine
This week we announced agent loop, a groundbreaking new capability in Azure Logic Apps to build AI agents into your enterprise workflows. With agent loop, you can embed advanced AI decision-making directly into your processes, enabling your apps and automation to not just follow predefined steps, but to reason, adapt, and act autonomously towards goals.

Now, we are announcing the General Availability of the Azure Logic Apps Rules Engine: a deterministic rules engine runtime, based on the RETE algorithm, that allows in-memory execution, prioritization, and reevaluation of business rules in Azure Logic Apps. The Azure Logic Apps Rules Engine is a decision-management inference engine in Azure Logic Apps that lets customers build Standard workflows which integrate readable, declarative, and semantically rich rules operating on multiple data sources. The native data sources available today for the Rules Engine are XML and .NET objects. These data sources are called "facts" and are used to construct rules from small building blocks of business logic, or "rulesets". To create rules, you need the Rules Composer, which can be downloaded from the download center. The Rules Engine can also interact with the data exchanged by all the connectors available to Standard logic app resources. This design pattern promotes code reuse, design simplicity, and business logic modularity. The Rules Engine uses a VS Code experience to create Logic Apps projects with Rules Engine support. For more information on how to create projects with the Rules Engine, visit here.

Now, what can I do with it?

In a world of AI that essentially follows a probabilistic approach, rules engines are vital because they provide consistency, clarity, and compliance across different business goals. When you use rules with a workflow in Azure Logic Apps, you can define the logic, constraints, and policies that govern how to process, validate, and exchange data across systems, while avoiding AI hallucinations. Rules also help you make sure that applications follow the regulations and standards of their respective industries and markets. By using a rules engine, you can manage and update your workflow's business logic independently from the code and without having to alter your workflow. This approach helps you reduce the complexity and maintenance costs of your applications and increase their agility and scalability.

From a technical perspective, the Azure Logic Apps Rules Engine allows you to do forward chaining, or forward reasoning: a re-evaluation of rules triggered by changes in the facts that result from a rule's execution. This is one of the scenarios where a rules engine is unique; instead of writing complex code or creating complex "state-machine" workflows, the Logic Apps Rules Engine performs this task with an instruction called "Update".

Getting started

In the example below, I will show how to use the Logic Apps Rules Engine to ground an AI workflow loop. For this scenario, I am adding a Rules Engine workflow to an existing agent loop workflow and using it to correct rates and provide a "cross-sell" recommendation. First, I need to deploy the workflow from VS Code to Azure. As the Rules Engine currently only supports XML and .NET Framework objects, I create an XSD schema (using Copilot if you don't have an existing one) and use it with a "Compose XML with schema" action to create the XML fact that is needed.
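For illustration only, a fact document produced by the "Compose XML with schema" action might look something like the following; the schema namespace and field names are hypothetical stand-ins for whatever your own XSD defines.

```xml
<!-- Hypothetical XML fact for a rate/cross-sell scenario; the actual shape
     is dictated by the XSD you generate for your own data. -->
<LoanApplication xmlns="http://contoso.example/schemas/loan">
  <CustomerId>C-10042</CustomerId>
  <CreditScore>712</CreditScore>
  <RequestedRate>5.9</RequestedRate>
  <Product>HomeEquity</Product>
</LoanApplication>
```

Rules authored in the Rules Composer can then test and modify fields such as RequestedRate, and an "Update" on the fact triggers re-evaluation of any rules that depend on it, which is the forward chaining described above.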
To obtain the returned data, I use the "Parse XML with schema" action as well. After the logic app was deployed, I added it as a tool in the Logic Apps agent loop workflow, using a "Call workflow in this logic app" action. I then pass the values the Rules Engine needs as parameters and leave the Rules Engine return values empty. Next, I updated my system prompt to indicate how I want the Rules Engine to be used; the agent loop will find the right tool for the right job. Once the system prompt has been updated, I run the workflow with a payload. Highlighted in red in the agent chat are the guardrails imposed by the Rules Engine. Those rules are used to make sure that the AI responses fall within the company's internal compliance and cross-sell criteria. Some business rules can have different priorities and might require recalculation for accuracy; the Logic Apps Rules Engine takes care of this without coding or adding complex business logic through additional workflows. Further adjustments to the rules using the Rules Composer will ground the agent's results even more.

What else can I do with it?

You can use a rules engine in any context. In fact, decision management, which falls under intelligent business process automation, is growing among customers who want flexibility, governance, and compliance in their cloud workloads. Another well-known scenario is BizTalk migrations to Azure Logic Apps, for customers who have implemented the BizTalk BRE for decision management, content redirection, SWIFT, or the .NET Framework.

Demo

Please watch the following short demo of this sample.

How to use it

If you are running the public preview version of the Rules Engine, we recommend recreating your Rules Engine project to get the latest Rules Engine NuGet package. If you cannot recreate your project, take the following steps:

1. Update the csproj file by adding the Rules Engine NuGet package and updating the WebJobs SDK package as follows:

```xml
<PackageReference Include="Microsoft.Azure.Workflows.RulesEngine" Version="1.0.0" />
<PackageReference Include="Microsoft.Azure.Workflows.WebJobs.Sdk" Version="1.2.0" />
```

2. Update the code so that the rule explorer is created in the constructor:

```csharp
public user_function_class(ILoggerFactory loggerFactory)
{
    logger = loggerFactory.CreateLogger<user_function_class>();
    // Create the file-store rule explorer once, at construction time.
    this.ruleExplorer = new FileStoreRuleExplorer(loggerFactory);
}
```

This rule explorer is then used to retrieve any ruleset in the RunRules method:

```csharp
// Resolve the ruleset by name through the rule explorer.
var ruleSet = this.ruleExplorer.GetRuleSet(ruleSetName);
```

3. Open a terminal and run dotnet restore.
4. Run dotnet build.

Contact Us

Have feedback or questions about the Rules Engine? We'd love to hear from you. Reply directly to this blog post or reach out to us through this form. Your input helps shape the future of Logic Apps and the Rules Engine.

Expose REST APIs as MCP servers with Azure API Management and API Center (now in preview)
As AI-powered agents and large language models (LLMs) become central to modern application experiences, developers and enterprises need seamless, secure ways to connect these models to real-world data and capabilities. Today, we're excited to introduce two powerful preview capabilities in the Azure API Management platform:

- Expose REST APIs in Azure API Management as remote Model Context Protocol (MCP) servers
- Discover and manage MCP servers using API Center as a centralized enterprise registry

Together, these updates help customers securely operationalize APIs for AI workloads and improve how APIs are managed and shared across organizations.

Unlocking the value of AI through secure API integration

While LLMs are incredibly capable, they are stateless and isolated unless connected to external tools and systems. Model Context Protocol (MCP) is an open standard designed to bridge this gap by allowing agents to invoke tools, such as APIs, via a standardized JSON-RPC-based interface. With this release, Azure empowers you to operationalize your APIs for AI integration: securely, observably, and at scale.

1. Expose REST APIs as MCP servers with Azure API Management

An MCP server exposes selected API operations to AI clients over JSON-RPC via HTTP or Server-Sent Events (SSE). These operations, referred to as "tools," can be invoked by AI agents through natural-language prompts. With this new capability, you can expose your existing REST APIs in Azure API Management as MCP servers, without rebuilding or rehosting them.

Addressing common challenges

Before this capability, customers faced several challenges when implementing MCP support:

- Duplicated development effort: building MCP servers from scratch often led to unnecessary work when existing REST APIs already provided much of the needed functionality.
- Security concerns: server trust (malicious servers could impersonate trusted ones) and credential management (self-hosted MCP implementations often had to manage sensitive credentials like OAuth tokens).
- Registry and discovery: without a centralized registry, discovering and managing MCP tools was manual and fragmented, making it hard to scale securely across teams.

API Management now addresses these concerns by serving as a managed, policy-enforced hosting surface for MCP tools, offering centralized control, observability, and security.

Benefits of using Azure API Management with MCP

By exposing MCP servers through Azure API Management, customers gain:

- Centralized governance for API access, authentication, and usage policies
- Secure connectivity using OAuth 2.0 and subscription keys
- Granular control over which API operations are exposed to AI agents as tools
- Built-in observability through APIM's monitoring and diagnostics features

How it works

- MCP servers: in your API Management instance, navigate to MCP servers.
- Choose an API: select + Create a new MCP Server and pick the REST API you wish to expose.
- Configure the MCP server: select the API operations you want to expose as tools. These can be all or a subset of your API's methods.
- Test and integrate: use tools like MCP Inspector or Visual Studio Code (in agent mode) to connect, test, and invoke the tools from your AI host (a minimal connection sketch appears at the end of this post).

Getting started and availability

This feature is now in public preview and is being gradually rolled out to early-access customers.
To use the MCP server capability in Azure API Management:

Prerequisites

- Your APIM instance must be on a SKUv1 tier: Premium, Standard, or Basic.
- Your service must be enrolled in the AI Gateway early update group (activation may take up to 2 hours).
- Use the Azure portal with a feature flag: append ?Microsoft_Azure_ApiManagement=mcp to your portal URL to access the MCP server configuration experience.

Note: support for SKUv2 and broader availability will follow in upcoming updates. Full setup instructions and test guidance can be found via aka.ms/apimdocs/exportmcp.

2. Centralized MCP registry and discovery with Azure API Center

As enterprises adopt MCP servers at scale, the need for a centralized, governed registry becomes critical. Azure API Center now provides this capability, serving as a single, enterprise-grade system of record for managing MCP endpoints. With API Center, teams can:

- Maintain a comprehensive inventory of MCP servers.
- Track version history, ownership, and metadata.
- Enforce governance policies across environments.
- Simplify compliance and reduce operational overhead.

API Center also addresses enterprise-grade security by allowing administrators to define who can discover, access, and consume specific MCP servers, ensuring only authorized users can interact with sensitive tools. To support developer adoption, API Center includes:

- Semantic search and a modern discovery UI.
- Easy filtering based on capabilities, metadata, and usage context.
- Tight integration with Copilot Studio and GitHub Copilot, enabling developers to use MCP tools directly within their coding workflows.

These capabilities reduce duplication, streamline workflows, and help teams securely scale MCP usage across the organization.

Getting started

This feature is now in preview and accessible to customers: https://aka.ms/apicenter/docs/mcp | AI Gateway Lab | MCP Registry

3. What's next

These new previews are just the beginning. We're already working on:

- Azure API Management (APIM): passthrough MCP server support. We're enabling APIM to act as a transparent proxy between your APIs and AI agents, with no custom server logic needed. This will simplify onboarding and reduce operational overhead.
- Azure API Center (APIC): deeper integration with Copilot Studio and VS Code. Today, developers must perform manual steps to surface API Center data in Copilot workflows. We're working to make this experience more visual and seamless, allowing developers to discover and consume MCP servers directly from familiar tools like VS Code and Copilot Studio.

For questions or feedback, reach out to your Microsoft account team or visit the Azure API Management documentation and the Azure API Center documentation.

— The Azure API Management & API Center Teams
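Following up on the "test and integrate" step above, here is a minimal sketch of opening the SSE channel of an APIM-hosted MCP server from code. The gateway URL is a placeholder, the Ocp-Apim-Subscription-Key header is standard APIM subscription auth, and a real client (such as MCP Inspector or VS Code) additionally handles the JSON-RPC message exchange that follows.

```csharp
using System;
using System.IO;
using System.Net.Http;

// Minimal sketch: open the SSE stream of an APIM-hosted MCP server.
// The URL and subscription key are placeholders.
var http = new HttpClient();
var request = new HttpRequestMessage(
    HttpMethod.Get, "https://contoso.azure-api.net/my-mcp/sse");
request.Headers.Add("Accept", "text/event-stream");
request.Headers.Add("Ocp-Apim-Subscription-Key", "<your-subscription-key>");

using var response = await http.SendAsync(
    request, HttpCompletionOption.ResponseHeadersRead);
response.EnsureSuccessStatusCode();

// Print server-sent events as they arrive; with the SSE transport, the
// first event typically tells the client where to POST JSON-RPC messages.
using var reader = new StreamReader(await response.Content.ReadAsStreamAsync());
string? line;
while ((line = await reader.ReadLineAsync()) != null)
    Console.WriteLine(line);
```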
Forrester Study Finds 315% ROI with Azure API Management and a Path to AI Readiness

APIs are the engines of modern digital experiences, powering everything from mobile apps to AI copilots. As AI reshapes products, experiences, and workflows, APIs have become essential infrastructure. But without the right management strategy, managing APIs at scale can introduce complexity, security risks, and inefficiencies.

That's where Azure API Management comes in. It's a fully managed service that helps you publish, secure, monitor, and scale APIs across clouds, on-premises, and hybrid environments. With deep integration across the Microsoft ecosystem, including Azure OpenAI Service, GitHub Copilot, and Microsoft Defender for APIs, it delivers unmatched efficiency and value. Bottom line? A 315% ROI, according to a Forrester study.

To quantify the business impact, in 2025 Microsoft commissioned Forrester Consulting to conduct a Total Economic Impact™ (TEI) study. The study evaluated the costs, benefits, and risks of adopting Azure API Management, based on interviews with decision-makers from seven organizations across the financial services, manufacturing, financial technology, and consumer goods sectors. Forrester created a composite organization modeled after these interviews, a $1B global enterprise based in North America, and the results were compelling: 315% ROI over three years, driven by faster time to market, reduced legacy costs, improved developer productivity, and preparation for AI and the future.

From legacy bottlenecks to agile innovation

Before adopting Azure API Management, the organizations faced common challenges:

- Complex, siloed systems
- High integration and maintenance costs
- Low API reuse and poor discoverability
- Inefficient development cycles
- Starting API programs with limited internal support

Azure API Management helped them turn these challenges into opportunities.

Key quantified benefits over three years

Based on interviews and Forrester's financial model, the composite organization experienced the following top benefits:

30% more efficient API development and 50% more efficient policy configuration. Developers saved more than a week per API and an hour per policy, enabling them to reallocate time to higher-value tasks. Total value: $679,000.

"[With our prior solution,] it took three to four weeks of development. With Azure API Management, it is one week of development. That is the difference." - Principal architect, manufacturing

5%+ improvement in API and policy reuse. With Azure API Management and Azure API Center, the composite organization increases its reuse of APIs and API policies. It consolidates and tracks all the APIs it creates, which improves discovery and allows its developers to reuse APIs and API policies instead of creating new ones. Over three years, this translates to $352,000 for the composite organization.

"Azure API Management is a good tool. It does what it is supposed to do well, and it works well with the Azure ecosystem. We are happy, and we are investing heavily to use it more." - Director of data platforms and services, financial technology

80% boost in API management and support productivity. Compared to legacy systems, platform engineers recaptured over 12 hours per API per year, enabling a shift to more strategic work.
Total value: $370,000.

"Every hour saved with Azure API Management is a leverage for a new business case." - Senior engineering manager, financial services

50% faster time to market, accelerating operating profit. Instead of taking three months to bring an API initiative to market, the composite organization does this twice as fast, in 1.5 months, with Azure API Management. Because the developers and IT professionals work more efficiently, the composite earns additional months of recurring revenue and achieves its API-related business goals more quickly. Over three years, this benefit is worth $1.5 million to the composite organization.

$190K+ saved annually from retiring legacy infrastructure. By adopting Azure API Management and modernizing, the composite organization progressively retires and consolidates legacy hardware and software, yielding cost savings it can reinvest in other initiatives. Over three years, this benefit is worth $568,000 to the composite organization.

Strategic advantages beyond the numbers

While the quantifiable results are impressive, customers also shared unquantified but critical benefits:

- Stronger security and governance across hybrid and multi-cloud environments
- Greater API resilience, with less downtime and smoother experiences
- Improved developer experience and higher satisfaction across engineering teams
- AI lifecycle governance, with Azure API Management acting as a centralized AI gateway
- Seamless integration across the Microsoft ecosystem, boosting innovation and productivity

Improved AI governance and visibility with a centralized gateway

As generative AI becomes core to digital transformation, organizations must not only innovate quickly; they must also govern AI usage with precision and responsibility. Interviewees emphasized how Azure API Management acted as a centralized AI gateway, helping them securely manage the rising complexity of AI-driven apps. With Azure API Management, they gained:

- Full visibility into generative AI usage across teams and apps
- Rate limiting and throttling, preventing unexpected costs or misuse
- Centralized logging and monitoring for compliance and oversight
- Consistent performance and low latency for AI-powered user experiences

"We have enforced rate-limiting in nonproduction environment. The rate limit helps us to control the usage pattern so that there is no excess usage with respect to APIs. Azure API Management provides visibility and traceability. That helps us to see the usage pattern and take proactive steps to communicate to the consumers [of AI]." - Cloud data architect, global consumer goods company

Strengthened security posture and simplified compliance

Modern digital infrastructure, especially with AI workloads, demands secure, reliable API exposure across hybrid and multi-cloud environments. Organizations in the study reported that Azure API Management significantly improved their security posture and reduced operational risk. Key benefits included:

- A single secure gateway to all APIs, reducing the attack surface
- Always-on protection with automated patching, threat detection, and identity integration
- Built-in integrations with Microsoft security offerings, including Microsoft Sentinel, enabling seamless SIEM visibility and faster incident response

"We created a single door to get to the APIs. It increased our security posture... [We are] twice as secure." - Director of data platforms, financial technology firm

"[The cybersecurity team] is very excited about Azure API Management, and it's rare to say [that team] is excited about a solution. ... Microsoft is responsible for keeping security patched ... and the logs going to our SIEM solution are more seamless ..." - Senior engineering manager, financial services company

These capabilities didn't just make APIs safer. They also freed up time for engineering and security teams to focus on innovation instead of operational overhead.

A platform built for the future

Whether modernizing legacy infrastructure or launching a new API strategy, organizations used Azure API Management to leapfrog challenges and build intelligent, composable applications. The result? Faster time to value. Improved governance. Better developer experience. And a platform ready for the AI era.

"The main value is flexibility. Azure API Management is a very scalable and resilient solution. It integrates well with the Azure ecosystem, and it supports all the modern paradigms." - Director of data platforms, financial technology

Ready to realize the impact?

Whether you're modernizing legacy systems, building new digital experiences, or governing AI workloads, Azure API Management provides a trusted, enterprise-grade platform to support your strategy. See how Azure API Management delivered 315% ROI: read the full Forrester TEI study. Start building with Azure API Management: try it free today!

Logic Apps Aviators Newsletter - June 25
In this issue:

- Ace Aviator of the Month
- News from our product group
- News from our community

Ace Aviator of the Month

April's Ace Aviator: Andrew Wilson

What's your role and title? What are your responsibilities?

I am the Chief Consultancy Officer at Black Marble, a multi-award-winning software company with a big focus on the Microsoft stack. I work with a talented team of consultants to help our customers get the most out of Azure. My role is all about enabling organisations to modernise, integrate, and optimise their systems, always with an eye on DevOps best practices. I'm involved across most of the software development lifecycle, but my focus tends to lean toward consultations, gathering requirements, and architecting solutions that solve real-world problems. I work across a range of areas including application modernisation, BizTalk to Azure Integration Services (AIS) migrations, system integrations, and cloud optimisation. Over time, I've developed a strong focus on Azure, especially around AIS. In short, I help bridge the gap between technical possibilities and business needs, making sure the solutions we design are both practical and future-ready.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?

No two days are quite the same, which keeps things interesting! I usually kick things off with a quick plan for the day (and a bit of reshuffling for the week ahead) to make sure we're focused on what matters most for both customers and the team. My time is a mix of customer-facing work, sales conversations with new prospects, and supporting existing clients, whether that's through solution design, quick fixes, or hands-on consultancy. I'm often reviewing or writing proposals and architectures, and jumping in to support the team on delivery when needed. There's always some active learning in the mix too: reading, experimenting, or spinning up quick ideas to explore better ways of doing things. We don't work in silos at Black Marble, so I'll often jump in where I can add value, whether or not I'm directly on the project. It's a real team effort, and that collaboration is a big part of what makes the role so rewarding.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?

I've always enjoyed the challenge of bringing systems and applications together; there's something really satisfying about seeing everything click into place and knowing it's driving real business value. What makes the Aviators and the wider Microsoft community special is that everyone shares that same excitement. It's a group of people who genuinely care about solving problems, pushing technology forward, and learning from one another. Being part of that kind of community is motivating in itself: we're all collaborating, sharing ideas, and helping shape a better, more connected future. It's hard not to be inspired when you're surrounded by people who are just as passionate about the work as you are.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?

Stay curious, always ask "why," and don't be afraid to get things wrong, because you will, and that's how you learn. Some of the best breakthroughs come after a few missteps (and maybe a bit of head-scratching). It's easy to look around and feel like others have it all figured out; don't let that discourage you.
Everyone's journey is different, and what looks effortless on the outside often has a lot of trial and error behind it. One of the best things about STEM is its diversity: there are so many different roles, paths, and people in this space. Whether you're hands-on with code, designing systems, or solving data challenges, there's a place for you. It's not one-size-fits-all, and that's what makes it exciting. Most importantly, share what you learn. Even if something's been "done," your take on it might be exactly what someone else needs to see to help them get started. And yes, imposter syndrome is real, but don't let it silence you. You belong here just as much as anyone else.

What has helped you grow professionally?

A big part of my growth has come from simply committing to continuous learning, whether that's diving into new tech, attending conferences like Integrate, or being part of user groups where ideas (and challenges) get shared openly. I've also learned to say yes to opportunities, even when they've felt a bit daunting at first. Pushing through the unknown, especially with the support of a great team and community, has led to some of my most rewarding experiences. And finally, I try to approach everything with the mindset that I'm someone others can count on. That sense of responsibility has helped me stay focused, accountable, and constantly improving.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?

Wow, what an exciting question! If I had a magic wand, the first thing I'd add is the option to throw exceptions that can be caught by try-catch scope blocks; this would bring much-needed clarity and flexibility to error handling. It's a feature that would really help build more resilient and maintainable solutions. Then, the ability to break or continue loops: sometimes you need that fine-tuned control to keep your workflows running smoothly without extra workarounds. And lastly, full GA support for unit and integration testing, because testing is the backbone of reliable software, and having that baked in would save so much time and stress down the line.

News from our product group

Logic Apps Live May 2025
Missed Logic Apps Live in May? You can watch it here. We focused on the big Logic Apps announcements from Microsoft Build 2025. There are a lot of great things to check!

Announcing agent loop: Build AI Agents in Azure Logic Apps
The era of intelligent business processes has arrived! Today, we are excited to announce agent loop, a groundbreaking new capability in Azure Logic Apps to build AI agents into your enterprise workflows. With agent loop, you can embed advanced AI decision-making directly into your processes, enabling your apps and automation to not just follow predefined steps, but to reason, adapt, and act autonomously towards goals.

Agent Loop Demos
We announced the public preview of agent loop at Build 2025. Agent loop is a new feature in Logic Apps for building AI agents for use cases that span industry domains and patterns. In this article, we share use cases implemented in Logic Apps using agent loop and other features.

Announcement: Azure Logic Apps Document Indexer in Azure Cosmos DB
We're excited to announce the public preview of Azure Logic Apps as a document indexer for Azure Cosmos DB! With this release, you can now use Logic Apps connectors and templates to ingest documents directly into Cosmos DB's vector store, powering AI workloads like Retrieval-Augmented Generation (RAG) with ease.
Announcement: Logic Apps connectors in Azure AI Search for Integrated Vectorization
We're excited to announce that Azure Logic Apps connectors are now supported within AI Search as data sources for ingestion into Azure AI Search vector stores. This unlocks the ability to ingest unstructured documents from a variety of systems, including SharePoint, Amazon S3, Dropbox, and many more, into your vector index using a low-code experience.

Announcement: Power your Agents in Azure AI Foundry Agent Service with Azure Logic Apps
We're excited to announce the public preview of two major integrations that bring the power of Azure Logic Apps to AI agents in Foundry: Logic Apps as tools, and the AI Agent Service connector. Learn more in our announcement post!

Codeful Workflows: A New Authoring Model for Logic Apps Standard
Codeful Workflows expand the authoring and execution models of Logic Apps Standard, offering developers the ability to implement, test, and run workflows using an imperative programming model, both locally and in the cloud.

Announcing the General Availability of the Azure Logic Apps Rules Engine
We are announcing the General Availability of the Azure Logic Apps Rules Engine: a deterministic rules engine runtime, based on the RETE algorithm, that allows in-memory execution, prioritization, and reevaluation of business rules in Azure Logic Apps.

Integration Environment Update: Unified experience to create and manage alerts
We're excited to announce the next milestone in our journey to simplify monitoring across Azure Integration Services. As a follow-up to our earlier preview release on unified monitoring and dashboards, we're now making it easier than ever to configure alerts for your integration applications.

Automate Invoice data extraction with Logic Apps and Document Intelligence
This blog post demonstrates how you can use Azure Logic Apps, the new Analyze Document Details action, and Azure OpenAI to automatically convert invoice images into structured data and store them in Azure Cosmos DB.

Log Ingestion to Azure Log Analytics Workspace with Logic App Standard
Discover how to send logs to Azure Log Analytics Workspace using Logic App Standard for VNet integration. Learn about shared-key authentication and HTTP action configuration for seamless log ingestion.

Generating Webhook Action Callback URL with Primary or Secondary Access Key
Learn how to manage Webhook action callback URLs in Azure Logic Apps when regenerating access keys. Discover how to use the accessKeyType property to ensure seamless workflow execution and maintain security.

Announcing the Public Preview of the Applications feature in Azure API Management
Discover the new Applications feature in Azure API Management, enabling OAuth-based access to APIs and products. Streamline secure API access with built-in OAuth 2.0 application-based authorization.

GA: Inbound private endpoint for Standard v2 tier of Azure API Management
Today, we are excited to announce the general availability of inbound private endpoints for the Azure API Management Standard v2 tier. Securely connect clients in your private network to the API Management gateway using Azure Private Link.

Announcing the open Public Preview of the Premium v2 tier of Azure API Management
Announcing the public preview of the Azure API Management Premium v2 tier. Experience superior capacity, the highest entity limits, and unlimited calls with enhanced security and networking flexibility.
Announcing Federated Logging in Azure API Management
Announcing federated logging in Azure API Management. Gain centralized monitoring for platform teams and autonomy for API teams, streamlining API management with robust security and operational visibility.

Introducing Workspace Gateway Metrics and Autoscale in Azure API Management
Introducing workspace gateway metrics and autoscale in Azure API Management. Efficiently monitor and scale your gateway infrastructure with real-time insights and automated scaling for enhanced reliability and cost efficiency.

Introducing Model Logging, Import from AI Foundry, and extended model support in AI Gateway
Learn about model logging, importing models from AI Foundry, and extended model support in the AI Gateway.

Expose REST APIs as MCP servers with Azure API Management and API Center (now in preview)
Discover how to expose REST APIs as MCP servers with Azure API Management and API Center, now in preview. Enhance AI integration with secure, observable, and scalable API operations.

Now in Public Preview: System events for data-plane in API Management gateway
Announcing the public preview of new data-plane system events in Azure Event Grid for the Azure API Management managed gateway. Gain near-real-time visibility into critical operations, automate responses, and prevent disruptions.

News from our community

Agentic AI: A Potential Black Swan Moment in System Integration
Video by Ahmed Bayoumy
Discover how agentic Logic Apps are revolutionizing system integration with AI-driven workflows. Learn how this innovative approach transforms business processes by understanding goals, deciding actions, and using predefined tools for smart orchestration.

Microsoft Build: Behind the Scenes with Agent Loop Workflow - A New Phase in AI Evolution
Video by Ahmed Bayoumy
Explore how agent loop brings "human in the loop" control to enterprise workflows in this video by Ahmed, sharing insights directly from Microsoft Build 2025 in a chat with Kent Weare and Divya Swarnkar.

Microsoft Build 2025: Azure Logic Apps is Now Your AI Agent Superpower!
Post by Sagar Sharma
Discover how Azure Logic Apps is transforming AI agent development with new capabilities unveiled at Microsoft Build 2025. Learn about agent loop, AI Foundry integration, Document Indexer, and more for intelligent, adaptive workflows.

Everyone is talking about AI Agents — Here's how to actually build one that works
Post by Mateusz Partyka
Learn how to build effective AI agents with practical strategies and insights. Discover tips on choosing the right tech stack, prototyping fast, managing model costs, and prompt engineering for optimal results.

Agent Loop | Azure Logic Apps Just Got Smarter
Post by Andrew Wilson
Discover agent loop in Azure Logic Apps, now in preview: a revolutionary AI-powered integration feature. Enhance workflows with advanced decision-making, context retention, and adaptive actions for smarter automation.

Step-by-Step Guide to Azure Logic Apps Agent Loop
Post by Stephen W. Thomas
Dive into the step-by-step guide for creating AI agents with Azure Logic Apps agent loop, now in preview. Learn to leverage 1300+ connectors, set up OpenAI models, and build intelligent workflows with no-code integration.
You can also follow Stephen's video tutorial.

Confessions of a Control Freak: How I Learned to Love Low Code (with Logic Apps)
Post by Peter Mugisha
Discover how a self-confessed control freak learned to embrace low-code development with Azure Logic Apps. From skepticism to advocacy, explore the journey of efficient integration and streamlined workflows.

Logic Apps Standard vs. Large Files: Common Hurdles and How to Beat Them
Post by Şahin Özdemir
Learn how to overcome common hurdles when handling large files in Logic Apps Standard. Discover strategies for scaling, offloading memory-intensive operations, and optimizing performance for efficient integration.

There is a new-new Data Mapper for Logic App Standard
Post by Sandro Pereira
Discover the new Data Mapper for Logic App Standard, now in public preview. Enjoy a modern BizTalk-style mapper with a code-first, schema-aware experience, supporting XSLT 3.0, XSD, and JSON schemas for efficient data mapping! A Friday Fact from Sandro Pereira.

The name of the "When a HTTP request is received" trigger affects the workflow URL
Post by Sandro Pereira
Discover how the name of the "When a HTTP request is received" trigger affects the workflow URL in Azure Logic Apps. Learn best practices to avoid integration issues and ensure consistent endpoint paths.

Changing APIM Operations Doesn't Update their PathTemplate
Post by Luis Rigueira
Learn how to handle PathTemplate issues in Azure Logic Apps Standard when switching APIM operations. Ensure correct endpoint paths to avoid misleading results and streamline your workflow. It is a Friday Fact, brought to you by Luis Rigueira!

📢 Announcing agent loop: Build AI Agents in Azure Logic Apps 🤖
This post is written in collaboration with Kent Weare and Rohitha Hewawasam.

The era of intelligent business processes has arrived! Today, we are excited to announce agent loop, a groundbreaking new capability in Azure Logic Apps to build AI agents into your enterprise workflows. With agent loop, you can embed advanced AI decision-making directly into your processes, enabling your apps and automation to not just follow predefined steps, but to reason, adapt, and act autonomously towards goals.

Agent loop is central to AI agent development: it is a new action type that brings together your AI model of choice, domain-specific tools, and enterprise knowledge sources. Whether you're building an autonomous agent to process loan approvals, a conversational agent to support customers, or a multi-agent system that coordinates tasks such as sales report generation across agents, agent loop enables your workflows to go beyond static steps, making decisions, adapting to context, and delivering outcomes. Agent loop is implemented using the kernel object from Semantic Kernel: the kernel object, along with an LLM, creates the plan for what needs to be done, while the Logic Apps runtime handles execution of that plan.

Agent loop is highly configurable, enabling you to build agents with diverse capabilities:

- Conversational or autonomous agents: with Logic Apps' extensive gallery of connectors, you can build fully autonomous agents that respond to real-time events, like new records in a database, files added to a share, or messages in a queue. Agent loop also supports conversational agents via channels, allowing agents to interact with users through the Azure portal or custom chat clients.
- Bring your own model: associate your AI agent with any Azure OpenAI model of your choice. As new models become available, you can easily switch or upgrade without re-architecting the solution.
- Define agent goals and guardrails: specify your agent's objective and behavioral boundaries through system prompts and user instructions. Using connectors like Outlook or Teams, you can easily introduce human-in-the-loop interactions for approvals or overrides, enabling safe, controlled autonomy.
- Tools and knowledge, built in: leverage hundreds of out-of-the-box connectors to equip agents with access to enterprise systems, APIs, and business data. Enrich their reasoning with knowledge from vector stores, structured databases, or unstructured files, and empower them to take meaningful actions across your environment.

AI Agents in Action

Here are some examples of AI agents in action that highlight the value and efficiency of these agents across different domains and solution areas:

- A product return agent verifies order details, return eligibility, and refund rules, then processes the return or requests additional information from the customer.
- A loan approval agent evaluates credit score, income, and risk profile, applies business rules, and auto-approves or routes applications for review.
- A recruiting agent screens resumes, summarizes qualifications, and drafts personalized outreach to top candidates, streamlining early hiring stages.
- A sales report generation workflow uses a writer agent to draft content, a reviewer agent to verify accuracy, and a publisher agent to format and distribute the report.
- An IT operations agent triages alerts, checks recent changes, and either resolves common issues or escalates to on-call engineers when needed.
Why agent loop matters

Modern businesses thrive on agility and intelligence. Traditional workflows remain essential for deterministic tasks – especially those involving structured data or high-risk decisions. But when processes involve unstructured data, changing context, or adaptive decision-making, AI agents excel: they can reason, act in real time, and dynamically sequence steps to meet goals. This is exactly the purpose agent loop serves.

What makes agent loop especially powerful is its deep integration with the Logic Apps ecosystem. Logic Apps comes with more than 1,400 connectors for Microsoft and third-party services – from databases and ERP systems to SaaS applications and custom APIs. Agents can also invoke custom code and scripts, making it easy to tap into homegrown capabilities. The agent isn’t limited to the information in its prompt; it can actively retrieve knowledge, perform transactions, and effect change in the real world via these connectors. Logic Apps is uniquely positioned to let customers leverage their API and connector ecosystem cohesively across both workflows and AI agents to build agentic applications.

Equally important, agent loop is designed for flexibility. You can orchestrate single-agent workflows or coordinate multiple agents working in tandem towards a common goal. Agent loop can even involve humans in the loop when needed – for instance, pausing to get a manager’s approval or to ask for clarification – leveraging Logic Apps’ human workflow capabilities. All of this is handled within the familiar, visual Logic Apps designer, so you get a high-level view of the entire orchestration.

How agent loop works

At a high level, agent loop works by pairing the reasoning capabilities of large-scale AI models with the robust action framework of Logic Apps. Built on top of Semantic Kernel, agent loop operates in iterative cycles, allowing the agent to think, act, and learn from each step:

Reasoning (Think): The agent – powered under the hood by an LLM such as an Azure OpenAI model, running on Semantic Kernel – examines its goal and the current context. It decides what needs to be done next, whether that’s gathering more information, calling a specific connector, or formulating an answer. This step is essentially the AI “planning” its next action based on the goal you’ve provided and the data it has so far.

Action (Act): The agent then carries out the decided action by invoking a tool or connector through Logic Apps. This could be anything from querying a database, calling a REST API, or sending an email, to running a calculation. Thanks to Logic Apps’ extensive connector library, the agent has a rich toolbox at its disposal. Each action is executed as a Logic Apps step, meaning it’s secure, managed, and logged like any other workflow action.

Reflection (Learn): After the action, the agent receives the results (e.g. data retrieved, outcome of the API call, user input). It then evaluates: did this bring it closer to the goal? Does the plan need adjusting? The agent updates its understanding based on the new information. This reflection is what lets the agent handle complex, open-ended tasks – it can correct course if needed, try alternative approaches, or conclude when the goal has been satisfied.

These steps repeat in a loop. The agent loop action manages this cycle automatically – calling the AI model to reason, executing the chosen connector operations, feeding results back, and iterating.
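In pseudocode terms, the cycle looks roughly like the sketch below. Both helpers (ask_model and run_tool) are hypothetical stand-ins for the model call and the connector invocations that the Logic Apps runtime performs for you; the sketch only illustrates the think/act/learn control flow, under the assumption of a simple tool-call-or-answer decision format.

```python
def ask_model(goal: str, history: list[dict]) -> dict:
    # Hypothetical stand-in for the LLM planning call: a real agent sends the
    # goal plus the action/observation history and gets back the next decision.
    if not history:
        return {"type": "tool_call", "tool": "lookup_order", "arguments": {"order_id": "1234"}}
    return {"type": "final_answer", "content": "Order 1234 is eligible for return."}


def run_tool(name: str, args: dict) -> str:
    # Hypothetical stand-in for a connector operation executed by Logic Apps.
    return f"{name}({args}) -> order found, purchased 10 days ago"


def agent_loop(goal: str, max_iterations: int = 10) -> str:
    history: list[dict] = []
    for _ in range(max_iterations):
        # Think: the model plans the next step from the goal and context so far.
        decision = ask_model(goal, history)
        if decision["type"] == "final_answer":
            return decision["content"]  # goal satisfied, loop ends
        # Act: execute the chosen tool (a connector operation in Logic Apps).
        observation = run_tool(decision["tool"], decision["arguments"])
        # Learn: feed the result back so the next pass can adjust the plan.
        history.append({"action": decision, "observation": observation})
    raise RuntimeError("Agent did not converge within the iteration budget.")


print(agent_loop("Can order 1234 be returned?"))
```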
Why Build AI Agents in Logic Apps?

Building AI agents is an emerging frontier in automation, but doing it from the ground up can be daunting – especially when organizations build them in large numbers. Agent loop in Logic Apps makes this dramatically easier and more scalable, for several reasons:

Declarative Orchestration: Logic Apps provides a visual workflow canvas and a serverless runtime. The agent loop action plugs into this, and the platform handles the sequence of steps and iterations, so you can focus on defining the goal and selecting the connectors (tools) the agent can use.

Code Extensibility: Logic Apps supports both declarative and code-first approaches to building agents. You can combine the two – using the visual designer for orchestration and injecting code where needed through extensibility points. Write custom logic in C#, PowerShell, or JavaScript, or use inline scripts for lightweight processing. Python support is coming soon, enabling even more flexibility.

1,400+ Integrated Tools: With the rich connector ecosystem at its disposal, your agent can seamlessly tap into your enterprise systems and SaaS applications. Your entire ecosystem of connectors, APIs, custom code, and agents can be used by both deterministic workflows and agents to solve business problems.

Observability: Logic Apps offers full traceability into each agent’s decisions and actions. Every run is logged in the workflow history, with data stored within the customer’s own network and storage boundaries. The agent chat view provides insight into the agent’s reasoning, tool invocations, and goal progress, and developers can revisit these logs at any time for debugging, auditing, or analysis.

Enterprise-Grade Governance: Because it runs on Azure Logic Apps, agent loop inherits all the robust monitoring, logging, security, and compliance capabilities of the platform. You can secure connections with managed identities and leverage built-in rate limiting, retries, and exception handling. Your AI agents run with the same enterprise-ready guardrails as any mission-critical workflow.

Human-in-the-Loop & Multi-Agent Coordination: Logic Apps makes it straightforward to involve people at key decision points or to coordinate multiple agents. You can chain agent loop actions or have agents invoke other workflows, enabling collaborative problem-solving that would be difficult to implement from scratch (see the sketch at the end of this section). The result is a system where AI and humans can smoothly interact and complement each other.

Faster Time to Value: By eliminating the boilerplate work of building an agent architecture (managing memory, planning logic, connecting to services, etc.), agent loop lets developers and architects concentrate on high-value logic and business goals, accelerating how you bring AI-driven improvements to your business processes.

In short, agent loop combines the brains of generative AI with the brawn of Azure’s integration platform. It offers a turnkey way to build sophisticated AI-driven automation without reinventing the wheel. Companies no longer have to choose between the flexibility of custom AI solutions and the convenience of a managed workflow service – with Logic Apps and agent loop, you get both.
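As a flavor of what chaining agents can look like, here is a minimal sketch of the writer/reviewer/publisher pattern from the sales report example above. The run_agent helper is hypothetical – in Logic Apps each call would be an agent loop action with its own model, instructions, and tools – and the approval check is deliberately simplistic.

```python
def run_agent(role: str, task: str) -> str:
    # Hypothetical stand-in: one agent loop running to completion with its own
    # model, system prompt, and tools. Here it simply echoes what it was asked.
    return f"[{role}] approved output for: {task[:60]}"


def sales_report_pipeline(data_summary: str) -> str:
    # Writer agent drafts the report from the raw figures.
    draft = run_agent("writer", f"Draft a sales report from: {data_summary}")
    # Reviewer agent checks the draft; its feedback drives one revision pass.
    feedback = run_agent("reviewer", f"Verify the accuracy of: {draft}")
    if "approved" not in feedback.lower():
        draft = run_agent("writer", f"Revise: {draft}\nFeedback: {feedback}")
    # Publisher agent formats and distributes the final version.
    return run_agent("publisher", f"Format and distribute: {draft}")


print(sales_report_pipeline("Q2 revenue up 12%, churn down 3%"))
```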
Getting Started

Agent loop is available in Logic Apps Standard starting today! Here are some resources to help you begin:

Documentation: Explore the agent loop concepts and the detailed guide with step-by-step instructions on how to configure and use agent loop.
Samples & Demos: Watch pre-recorded demos showcasing both conversational and autonomous agent scenarios built with agent loop. You'll also get a preview of exciting features coming soon.

Looking Ahead

Agent loop opens up a new realm of possibilities for what you can achieve with Azure Logic Apps. It blurs the line between application integration and AI, allowing workflows to evolve from static sequences into adaptive, self-directed processes. We can’t wait to see what you will build with agent loop!

This is just the beginning. We’re actively investing in new capabilities planned for release soon:

Multi-agent Hand-off Support: Hand-off capabilities enable different agent loops to collaborate by transferring tasks to one another based on expertise or context – crucial for building agentic applications that can dynamically adapt to complex, evolving goals and user needs.

A2A (Agent-to-Agent) Protocol Support: A2A is a communication standard that defines how autonomous agents exchange messages, share context, and coordinate actions in a secure and structured way. It is especially important for agentic applications because it ensures interoperability, enables seamless hand-offs between agents, and maintains context integrity across agents working toward a shared goal. This will allow Logic Apps agents to seamlessly integrate with other agentic platforms.

OBO Auth for Logic Apps Agents: On-Behalf-Of (OBO) authentication support would allow Logic Apps agents to use the logged-in user’s identity when invoking Logic Apps connectors during agent loop execution. This will enable conversational applications to dynamically perform OAuth flows that obtain the logged-in user’s consent to invoke connectors on their behalf.

Contact Us

Have feedback or questions about agent loop? We’d love to hear from you. Reply directly to this blog post or reach out to us through this form. Your input helps shape the future of Logic Apps and agentic automation.

🤖 Agent Loop Demos 🤖
We announced the public preview of agent loop at Build 2025. Agent loop is a new feature in Logic Apps for building AI agents that address use cases across industry domains and patterns. Here are some resources to learn more:

Agent loop concepts
Agent loop how-to
Agent loop public preview announcement

In this article, we share use cases implemented in Logic Apps using agent loop and other features.

This video shows an autonomous loan approval agent that handles auto loans for a bank. The demo features an AI agent that uses an Azure OpenAI model, the company’s policies, and several tools to process loan applications. For edge cases, a human is involved via the Teams connector.

This video shows an autonomous product return agent for the Fourth Coffee company. Returns are processed by the agent based on company policy and other criteria. Here too, a human is involved when decisions fall outside the agent’s boundaries.

This video shows a commercial agent that grants credit for purchases of groceries and other products for Northwind Stores. The agent extracts financial information from an IBM Mainframe and an IBM i system to assess each requestor, and updates the internal Northwind systems with the approved customers’ information.

Multi-agent scenario including both a codeful and a declarative method of implementation. Note: this is pre-release functionality and is subject to change. If you are interested in further discussing Logic Apps codeful agents, please fill out the following feedback form.

Operations Agent (part 1): In this conversational agent, we perform Logic Apps operations such as repair and resubmit to keep our integration platform healthy and processing transactions. To ensure compliance, all operational activities are logged in ServiceNow.

Operations Agent (part 2): In this autonomous agent, we perform Logic Apps operations such as repair and resubmit to keep our integration platform healthy and processing transactions. To ensure compliance, all operational activities are logged in ServiceNow.