Logic Apps - MCP Demos
We recently announced the ability to create MCP servers using Logic Apps connectors. In this post we share some demo videos that will help you get started and give you ideas on how to build MCP servers that address your agent connectivity needs.

API Center + Logic Apps MCP Server Demos

Getting Started - Salesforce Sales MCP Server
In this video, we leverage Azure API Center to create an MCP server using Logic Apps connectors. Our solution allows an end user to manage their Salesforce Contacts, Accounts, and Opportunities.

Building a Dataverse MCP Server
In this video, we leverage Azure API Center to create a Dataverse MCP server using Logic Apps connectors. Our solution allows an end user to gain insights on product returns and log an action for a quality control manager.

Building a SharePoint MCP Server
In this video, we leverage Azure API Center to create a SharePoint MCP server using Logic Apps connectors. Our solution allows an end user to gain insights on product feedback and log new feedback on how to improve the product.

Calling Logic Apps MCP Server from Copilot Studio
In this video, we use API Center and Azure Logic Apps to expose an MCP server that can be called securely from Copilot Studio.

Logic Apps MCP Server Demos

Getting Started - ServiceNow Incident MCP Server
In this video, we take an existing Logic App (Standard) instance and enable it as an MCP server. Our MCP server exposes tools that help users assign IT incident tickets in ServiceNow.

Resources
Looking for more resources? Check out our product documentation: API Center and Logic Apps MCP server | Logic Apps as an MCP server

Build. Secure. Launch Your Private MCP Registry with Azure API Center.
We are thrilled to embrace a new era in the world of MCP registries. As organizations increasingly build and consume MCP servers, the need for a secure, governed, robust, and easily discoverable tools catalog has become critical. Today, we are excited to show you how to do just that with MCP Center, a live example demonstrating how Azure API Center (APIC) can serve as a private, enterprise-ready MCP registry. The registry puts your MCP servers just one click away for developers, with no setup fuss and a direct path to coding.

Why a private registry? 🤔

Public OSS registries have been instrumental in driving growth and innovation across the MCP ecosystem. But as adoption scales, so does the need for tighter security, governance, and control. This is where private MCP registries step in, and where Azure API Center can help: it offers a powerful, centralized approach to MCP discovery and governance across diverse teams and services within an organization. Let's look at the key benefits of a private MCP registry built on Azure API Center.

Security and Trust: The Foundation of AI Adoption

Review and Verification: Public registries, by their open nature, accept submissions from a wide range of developers. This can introduce risks from tools with limited security practices or even malicious intent. A private registry empowers your organization to thoroughly review and verify every MCP server before it becomes accessible to internal developers or AI agents (like Copilot Studio and AI Foundry). This eliminates the risk of introducing random, potentially vulnerable first- or third-party tools into your ecosystem.

Reduced Attack Surface: By controlling which MCP servers are accessible, organizations significantly shrink their potential attack surface. When your AI agents interact solely with known and secure internal tools, the likelihood of external attackers exploiting vulnerabilities in unvetted solutions is drastically reduced.
Enterprise-Grade Authentication and Authorization: Private registries let you enforce your existing enterprise authentication and authorization mechanisms (e.g., OAuth 2.0) across all MCP servers. Public registries, in contrast, may have varying or less stringent authentication requirements.

Enforced AI Gateway Control (Azure API Management): Beyond vetting, a private registry lets organizations route all MCP server traffic through an AI gateway such as Azure API Management. This ensures that every interaction, whether internal or external, adheres to strict security policies, including centralized authentication, authorization, rate limiting, and threat protection, creating a secure front for your AI services.

Governance and Control: Navigating the AI Landscape with Confidence

Centralized Oversight and "Single Source of Truth": A private registry provides a centralized "single source of truth" for all AI-related tools and data connections within your organization. This enables comprehensive oversight of AI initiatives, clearly identifying ownership and accountability for each MCP server.

Preventing "Shadow AI": Without a formal registry, individual teams might independently develop or integrate AI tools, leading to "shadow AI": unmanaged and unmonitored AI deployments that can pose significant risks. A private registry encourages a standardized approach, bringing all AI tools under central governance and visibility.

Tailored Tool Development: Organizations can develop and host MCP servers specifically tailored to their unique needs and requirements. This means optimized efficiency and utility, providing specialized tools you won't typically find in broader public registries.

Simplified Integration and Accelerated Development: A well-managed private registry simplifies the discovery and integration of internal tools for your AI developers.
This significantly accelerates the development and deployment of AI-powered applications, fostering innovation.

Good news! Azure API Center can be created for free in any Azure subscription. You can find a detailed guide to help you get started: Inventory and Discover MCP Servers in Your API Center - Azure API Center

Get involved 💡

Your remote MCP server can be discoverable on API Center's MCP Discovery page today! Bring your MCP server and reach Azure customers! These Microsoft partners are shaping the future of the MCP ecosystem by making their remote MCP servers discoverable via API Center's MCP Discovery page.

Early Partners:
- Atlassian – Connect to Jira and Confluence for issue tracking and documentation
- Box – Use Box to securely store, manage, and share your photos, videos, and documents in the cloud
- Neon – Manage and query Neon Postgres databases with natural language
- Pipedream – Add 1000s of APIs with built-in authentication and 10,000+ tools to your AI assistant or agent (coming soon)
- Stripe – Payment processing and financial infrastructure tools

If you would like your remote MCP server to be featured in our Discovery panel, reach out to us at GitHub/mcp-center by commenting under the following GitHub issue: MCP Server Onboarding Request

Ready to Get Started? 🚀

Modernize your AI strategy and empower your teams with enhanced discovery, security, and governance of agentic tools. Now's the time to explore creating your own private enterprise MCP registry. Check out MCP Center, a public showcase demonstrating how you can build your own enterprise MCP registry (MCP Center - Build Your Own Enterprise MCP Registry), or go ahead and create your Azure API Center today!

Announcing Parse & Chunk with Metadata in Logic Apps: Build Context-Aware RAG Agents
The new Parse document with metadata and Chunk text with metadata actions in Logic Apps bring powerful improvements to how you handle documents. Unlike the previously released parse and chunk actions, these new versions provide rich metadata alongside the text:

- pageNumber – the page a chunk came from
- totalPages – the total number of pages in the document
- sentencesAreComplete – ensures chunks end on full sentences, avoiding broken fragments

This means you don't just get raw text; you also get the context you need for citations, navigation, and downstream processing. You can also adjust your chunking strategy based on these metadata fields.

Once documents are parsed and chunked with metadata, you can embed and index them in Azure AI Search, and then use an Agent Loop in Logic Apps that calls Vector Search as a tool to answer questions with precise, page-level references.

In this blog, we'll walk through a scenario where we index two enterprise contracts (a Master Service Agreement and a Procurement Agreement) and then use an Agent Loop to answer natural-language questions with citations.

Pre-requisites
- Azure Blob Storage for your documents
- Azure AI Search with an index
- Azure OpenAI deployment (embeddings + chat model)
- Logic App (Standard) with the new AI actions

Here is a sample demo on GitHub you can provision to follow along.

Step 1: Ingestion flow

Goal: Convert raw PDFs into sentence-aware chunks with metadata, then index them.

📸 Workflow overview

1. When a blob is added or modified (container with your contracts)
📸 Blob trigger

2. Read blob content
📸 Read blob content action

3. Parse document with metadata
Input: file content from the previous step
📸 Parse document with metadata action

4. Chunk text with metadata
Input: the entire parsed text items array
📸 Chunk text with metadata action

5. Get multiple embeddings
Input: the embedding model and the text chunks for which vectors will be generated
📸 Get multiple embeddings action

6. Select index objects
Input: raw text content, embeddings, documentName, and a unique ID to be passed into the index
📸 Select array action

7. Index multiple documents
Input: the array output from the previous Select step
📸 Index documents action

Step 2: Agent flow with Vector Search as a tool

Goal: Let the agent answer natural-language questions grounded in your indexed contracts.

Conversational workflow creation: From the portal, create a new Conversational workflow type.
📸 Conversational flow creation

Agent action
Model: gpt-4.1 (Note: please use this model instead of gpt-4 or gpt-4o)

System instructions:

You are a helpful assistant, answering questions about specific documents. When a question is asked, follow these steps in order:
1. Use the agent parameter body prompt to pass the user's question to the Document search tool.
2. Use this tool to do a vector search of the user's question; the output of the vector search tool will have the related information to answer the question.
3. The output will be in the form of a JSON array. Each array object will have a "content" property; use the "content" property to generate an answer.
4. Use only this information to answer the user's question, and cite the source using the page number you found it on. No other data or information should be used to answer the question.

💡 One of the coolest parts is how you can create an Agent Parameter that automatically carries the chat input into the tool call. In this case, our body prompt parameter brings the user's question straight into the tool.
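To make the tool contract concrete, here is a plain-Python sketch of the shape the vector search tool returns and what the agent extracts from it. The snippet text and the presence of a pageNumber field next to content are illustrative assumptions; your indexed fields determine the actual payload.

```python
import json

# Illustrative tool response: a JSON array where each object carries a
# "content" property (per the system instructions above). The snippet
# text and the pageNumber field are hypothetical sample values.
tool_response = json.dumps([
    {"content": "Invoices are payable within 30 days of receipt.", "pageNumber": 12},
    {"content": "Late payments accrue interest at 1.5% per month.", "pageNumber": 13},
])

# What the agent effectively does: read each "content" value (and the
# page number, so it can cite the source) from the array.
snippets = [(m["content"], m["pageNumber"]) for m in json.loads(tool_response)]
for text, page in snippets:
    print(f"{text} (p. {page})")
```

Because the answer text lives in a predictable property, the agent can ground its reply and cite the page without any workflow expressions.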
💡 Another nice touch: because the tool's response comes back in content, the agent extracts it automatically, with no expressions required. It's declarative and effortless.

📸 Agent action

Tool: Overview
Input description: details on what the tool achieves
Agent parameter: body prompt, to pass context from the chat prompt to the tool
📸 Tool action

Tool: Search vectors with natural language
Input index name: the name of the AI Search index
Search text: the body prompt parameter containing the query from the prompt
Nearest neighbors: the number of matches to be returned
📸 Tool: Search vectors action

Step 3: Try it out (example end-to-end)

Indexing is automatic whenever a file is added to your storage container. The Storage trigger fires, and the document is read, parsed, chunked, embedded, and indexed into AI Search. You can confirm this end-to-end in the run history for the indexing Logic App, where the Parse and Chunk outputs clearly show pageNumber, totalPages, and sentencesAreComplete values.

📸 Screenshot: Indexing flow run history with Parse/Chunk metadata outputs

Now let's see it in action using the Chat experience to validate the retrieval flow.

Example question: "What is the standard payment timeline?"

📸 Answer

The answer contains detailed information along with page-number citations, powered by the metadata from the new actions.

📸 Run history view of Agent

You can also trace the path the agent followed, with its inputs and outputs, to streamline debugging and ensure the agent responds reliably.

Conclusion

With Parse & Chunk with Metadata, you don't just split text; you gain page numbers, total pages, and sentence completeness that make answers trustworthy and easy to cite. Combined with Agent Loop and Vector Search as a tool, this unlocks production-ready contract Q&A in just a few steps.

Calling Logic Apps MCP Server from Copilot Studio
We recently released MCP server capabilities in Azure Logic Apps that unlock enterprise data assets for agentic solutions. One of the questions we received was: how can I connect to a Logic Apps MCP server from Copilot Studio? In this post and video, we describe exactly how to set that up.

Why Azure Logic Apps?

Azure Logic Apps provides an enterprise-ready solution for building MCP servers, including the following capabilities:

- Strong security posture. Our MCP servers support Entra ID authentication (aka Easy Auth), which provides enterprise authentication and authorization capabilities.
- Logic Apps (Standard) is a single-tenant offering, giving customers dedicated compute, storage, and networking.
- Managed identity is supported in Logic Apps, allowing you to connect to Azure resources securely and efficiently, without keys or secrets.
- Logic Apps supports over 1,400 connectors, including popular SaaS connectors. Logic Apps also has built-in connectors that allow for high-throughput scenarios, including on-premises connectivity without needing an on-premises gateway.

Pre-requisites

In this video, we use the following services:

- API Center - provides a wizard experience for building our MCP server. It also offers customers a governance and discoverability surface for organizations looking to scale their MCP offerings. In this video, I used the free tier.
- Logic Apps Standard - provides our MCP capabilities by exposing workflows and connectors as MCP tools.
- Copilot Studio - provides our conversational capabilities and will consume our MCP server.

Setup

1. Use API Center to build a Logic Apps MCP server. Note: This can be done without API Center as well, but API Center provides value-add for customers interested in governance and discoverability. During this process, an App registration will be created. We need to obtain some values from this App registration and make some minor changes.
2. Add the MCP server to Copilot Studio.
3. Use your Copilot Studio agent to call the MCP server.

🚀 Azure Logic Apps: Ushering in the Era of Multi-Agentic Business Process Automation
We've reached another exciting milestone in our automation journey, after introducing Agent loop at Build this year. We're excited to announce enhancements that make Azure Logic Apps the multiagentic business process automation platform that empowers you to build intelligent, collaborative automation solutions. This isn't just about automating tasks; it's about creating an ecosystem where agents, workflows, and humans work together seamlessly to drive exceptional business outcomes.

The key highlights of this release include support for Agent loop in any workflow, new or existing; a Python code interpreter; support for Foundry Agent Service in Agent loop; rich conversational capabilities in our agentic workflows; and multiagent patterns in workflows. Along with these new features, we are also introducing Azure Logic Apps Labs, your hub for AI labs, workshops, and tutorials.

Customer Momentum & Use Cases

Agent loop has been received with tremendous excitement since its introduction. Customers across industries (Healthcare, Retail, Energy, Financial Services, and more) are embracing it to reimagine how agents collaborate with humans, tools, and workflows. Today, thousands of customers, from startups to large enterprises, are building agentic workflows on the Logic Apps platform that power both everyday tasks and mission-critical business processes. Customers are building agentic workflows that drive impact across a wide range of scenarios:

- Developer Productivity: Write code, generate unit tests, create workflows, map data between systems, and automate source control, deployment, and release pipelines.
- IT Operations: Incident management, ticket and issue handling, policy review and enforcement, triage, resource management, cost optimization, issue remediation, and more.
- Business Process Automation: Empower sales specialists, retail assistants, order processing/approval flows, and healthcare assistants for intake and scheduling.
- Customer & Stakeholder Support: Project planning and estimation, generating content, automating communication, and streamlining customer service workflows.

Agent loop is also powering the Logic Apps team's own operations, demonstrating its versatility and real-world impact:

- Release & Deployment Agent: Streamlines deployment and release management for the Logic Apps platform.
- Incident Management Agent: An extension of our SRE Agent, leveraging Agent loop to accelerate incident response and remediation.
- Analyst Agent: Assists teams in exploring product usage and health data, generating insights directly from analytics.

Evolving Automation: Extending workflows with intelligent Agents

Workflows remain the backbone of reliable business automation, essential for governed, regulated, and strictly defined processes where consistency and auditability are paramount. Yet in today's fast-moving environment, not every process fits into rigid rules. Some must adapt in real time, apply reasoning, and collaborate across multiple participants to achieve outcomes. That's where our new agentic workflow capabilities come in: not to replace traditional workflows, but to complement them. Workflows deliver structure and reliability for repeatable processes, while agentic workflows powered by Agent loop add adaptability, reasoning, and collaboration for dynamic scenarios. With Logic Apps, you can orchestrate workflows, agents, and human experts, preserving compliance where needed and enabling intelligence where it matters most.

Every workflow is now Agentic

Every workflow in Logic Apps is now an agentic workflow. This means you can seamlessly add AI intelligence to any existing business process with our Agent loop capability. Whether it's a simple approval workflow or a complex multi-step process, you can now infuse AI-powered decision-making and adaptability without rebuilding from scratch.
Agent loop backed by Foundry Agent Service

Azure Logic Apps now supports creating Agent loops backed by Foundry Agent Service, giving you access to the full spectrum of models in Microsoft's Foundry, including third-party options, plus powerful built-in tools like Code Interpreter. You get the best of Microsoft's AI stack: use Logic Apps to build and orchestrate your agentic workflows, while Foundry serves as your centralized catalog for agents, models, and built-in tools.

Conversational Agents in workflows built on A2A standards

We're excited to announce that Logic Apps now supports conversational agents in workflows, a major expansion beyond autonomous agents. Our conversational agents are built on the A2A (Agent-to-Agent) standard, making them fully interoperable within the broader A2A ecosystem of agents and applications. This standards-based approach ensures your Logic Apps agents can seamlessly participate in multi-vendor agent networks while maintaining enterprise-grade security.

Chat experience

The out-of-box A2A chat client delivers a rich conversational experience with:

- Real-time streaming for responsive, natural interactions
- Multiturn conversations that maintain context across complex interactions
- Multiple session management, allowing users to maintain separate conversation threads
- Designer integration for testing and development directly within Logic Apps
- An open-source external client option that organizations can fully customize and brand to match their specific requirements
- Per-user chat sessions that support an in-chat consent flow
- Full security and isolation across user chats and sessions

The chat client is open source, so you can customize it for your organizational needs.

Enterprise security by default

Security isn't an afterthought; it's built into the foundation. Our conversational agents leverage Azure App Service's built-in Easy Auth capabilities, an out-of-the-box authentication layer that supports federated identity providers like Microsoft Entra. With no SDKs or code changes required, the platform automatically handles token validation, session management, and user identity injection, making your agents secure by default.

Agents act on behalf of users

Our agents operate with full user-aware context through per-user connections using the On-Behalf-Of (OBO) authentication flow. This important capability means that when an agent needs to call a tool, access data, or take an action, it does so using the specific user's identity and permissions, not a shared service account. This ensures that the access rights, permissions, and security policies applied to the user are consistently enforced across all downstream services, preventing unauthorized access. This user-aware approach transforms agents from simple chatbots into true collaborative partners that can take meaningful action while maintaining the security and governance standards your organization requires.

Advanced multiagent orchestration

Logic Apps now serves as a powerful workflow orchestration engine that enables sophisticated collaboration between multiple AI agents. Built on proven patterns used in production systems worldwide, our multiagent capabilities let you build automation workflows ranging from simple agent handoffs to complex hierarchical systems where agents coordinate, delegate tasks, and work together to solve problems that would not be feasible for any single agent to handle alone.
State machine powered handoffs: Logic Apps now functions as a sophisticated state machine, enabling you to define precise handoff conditions between agents. This creates dynamic, powerful applications that can tackle complex problems by seamlessly transferring context and control between specialized agents.

Nested agent architecture: Build sophisticated patterns like supervisor-agent hierarchies, where agents can utilize other agents as tools. This enables powerful architectural patterns that break down complex challenges into manageable, specialized tasks.

Python Code Interpreter: Extensible agent tools

With Python Code Interpreter support, agents can now think computationally, processing complex problems through code execution. Developers gain unlimited extensibility by bringing custom Python code as agent tools, either writing the code themselves or letting the agent generate it dynamically. This empowers agents to tackle large datasets, perform complex calculations, and execute custom business logic, giving developers the freedom to build specialized tools that go far beyond standard capabilities.

Comprehensive Observability and Transparency

Logic Apps provides complete run history for full transparency and auditability. We're now introducing task timeline visualization in run history, which makes it easy to follow agent and task execution through an intuitive timeline view. The newly added task timeline captures the entire A2A communication flow, showing how tasks are initiated, delegated between agents, and completed, along with the tools used and all messages exchanged throughout the process. This gives you full visibility into your multiagent workflows, letting you track task handoffs, monitor agent interactions, and understand the complete execution path for debugging and compliance needs.
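The handoff pattern described above can be pictured as a small state machine: each state is an agent, and each agent's outcome decides which agent receives control (and the accumulated context) next. A minimal sketch in plain Python follows; the agent names, handoff conditions, and ticket shape are all hypothetical illustrations, not Logic Apps APIs.

```python
# Minimal handoff state machine: each state is an agent, and each agent
# returns (next_state, updated_context). Control and context transfer
# between agents until a terminal "done" state is reached.
# All names and conditions here are hypothetical.

def triage_agent(ticket):
    # Decide which specialized agent should handle the ticket.
    if "refund" in ticket["text"]:
        return "billing", {**ticket, "category": "billing"}
    return "support", {**ticket, "category": "general"}

def billing_agent(ticket):
    return "done", {**ticket, "resolution": "refund issued"}

def support_agent(ticket):
    return "done", {**ticket, "resolution": "answered"}

AGENTS = {"triage": triage_agent, "billing": billing_agent, "support": support_agent}

def run(ticket, start="triage"):
    state = start
    while state != "done":
        state, ticket = AGENTS[state](ticket)  # handoff carries context forward
    return ticket

result = run({"text": "please process my refund"})
# result carries both the handoff decision and the final resolution
```

A supervisor-agent hierarchy is the same idea nested: one agent's "tool call" is itself a run of another state machine.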
The platform that grows with your ambitions

Logic Apps as a multiagent business process automation platform isn't just about today's needs; it's about future-proofing your automation strategy. As your business evolves and new AI capabilities emerge, your agents and workflows can evolve too, without requiring complete system overhauls. The beauty of this approach lies in its accessibility. Developers familiar with Logic Apps can immediately begin building agentic applications using familiar tools and patterns, while gradually exploring more sophisticated multiagent architectures as their needs grow.

The future is built on collaboration!

The future of automation is about creating intelligent systems where AI agents, automated processes, and human expertise work together seamlessly. Logic Apps now provides the platform to make this vision a reality. Built with security, isolation, scale, and governance, Logic Apps runs anywhere, giving you everything you need for production-ready applications. Welcome to the era of collaborative intelligence. Welcome to Azure Logic Apps as your intelligent automation platform!

Explore Logic Apps Labs

The best way to learn is by building. Ready to get started? Introducing Azure Logic Apps Labs, your hub for AI labs, workshops, and tutorials. Whether you're exploring agent basics, building autonomous or conversational agents, or designing advanced multi-agent patterns for agentic workflows, this is the perfect place to begin. We're continuously expanding these capabilities and welcome your feedback at https://aka.ms/AgentLoopFeedback

📢 Announcement! Python Code Interpreter in Logic Apps is now in Public Preview
As AI agents evolve, they increasingly need to do more than just respond to text; they must analyze structured data, reason over complex patterns, and perform custom computations on demand. This is especially true in real-world scenarios where users upload large CSV files and expect agents to perform tasks like exploratory data analysis or generating insights, all from natural-language prompts.

Why This Matters

The image above captures why this matters. Behind this need lies a real challenge that many businesses face today. Data is diverse, fragmented, and large. It often comes in the form of CSV files, Excel spreadsheets, or JSON, containing thousands or even millions of rows. But this raw data is rarely useful on its own. It typically requires:

- Cleaning and transformation
- Custom logic to extract insights
- Visualizations or summaries that make the data actionable

These steps are often manual, error-prone, and time-consuming, especially for users without data science or engineering expertise.

Introducing Python Code Interpreter in Logic Apps Agent Loop

We're excited to announce support for Python code execution, powered by Azure Container Apps (ACA) session pools. This capability enables Logic Apps developers to use the Python Code Interpreter in their workflows and as a tool in Agent loop. You can author the code yourself or let an LLM write it for you. As a code interpreter tool, it can:

- Accept natural-language instructions
- Automatically generate Python code
- Execute that code securely on uploaded datasets (like CSV or JSON)
- Return insights, visualizations, or next-step data back to the user

This brings the power of a code interpreter, similar to ChatGPT's advanced data analysis tool, right into the Logic Apps runtime.
Instead of writing code or manually manipulating spreadsheets, users can now describe their intent in natural language, for example:

- "Find the top 5 products by revenue"
- "Forecast demand by region for the next quarter"
- "Highlight customer segments based on purchase patterns"

Under the hood, Logic Apps interprets the instruction, generates Python code, executes it securely in an isolated environment, and returns usable results (summaries, forecasts, or data transformations) within the same workflow.

Real-World Use Cases

This opens up a wide range of possibilities for businesses looking to embed intelligence into their automation:

- Sales & Marketing: Upload raw sales data and get on-the-fly summaries, forecasts, or regional comparisons.
- Finance: Analyze expense reports, detect anomalies, or generate quarterly breakdowns from Excel exports.
- Operations: Clean large log files, surface exceptions, and generate insights to improve reliability.
- Data Exploration: Let business users ask questions like "Which region had the highest YoY growth?" without writing a single line of code.

How It Works

The action to execute Python code is powered by an Azure Container Apps (ACA) session pool. ACA dynamic sessions provide fast and scalable access to a code interpreter. Each code interpreter session is fully isolated by a Hyper-V boundary and is designed to run untrusted code. By enabling network isolation on ACA, your data never leaves the defined network boundaries.

1. In Logic Apps, choose the action to execute Python code. You need to create a connection to the ACA session pool before you use the action.
2. The code to execute can be authored by the developer or generated by the agent.
3. Optionally, upload a file to the ACA session, which can then be referenced as a data source in the Python code.
4. Run the workflow to get insights and results from the action execution.

Getting Started

We can't wait to see developers use this feature to build powerful agents!
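As an illustration, here is the kind of code the interpreter might generate and run for a prompt like "Find the top 5 products by revenue". The CSV data is inlined so the sketch is self-contained; in an actual session the uploaded file would be read instead, and the code an LLM generates would naturally vary.

```python
# Illustrative code a code interpreter might generate for the prompt
# "Find the top 5 products by revenue". Sample data is inlined here;
# in an ACA session the uploaded dataset would be read instead.
import csv
import io
from collections import defaultdict

raw = """product,units,price
Widget,120,9.50
Gadget,45,40.00
Sprocket,300,2.25
Gizmo,80,15.00
Doodad,10,99.00
Whatsit,500,1.00
"""

# Aggregate revenue per product (units * price), then rank descending.
revenue = defaultdict(float)
for row in csv.DictReader(io.StringIO(raw)):
    revenue[row["product"]] += int(row["units"]) * float(row["price"])

top5 = sorted(revenue.items(), key=lambda kv: kv[1], reverse=True)[:5]
for name, total in top5:
    print(f"{name}: {total:,.2f}")
# Gadget leads at 1,800.00; Whatsit (500.00) falls outside the top 5.
```

The interpreter returns results like `top5` (or a chart) back into the workflow, where the agent can summarize them for the user.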
You can find all the details about the feature, and step-by-step guidance for using this capability, in our MS Learn document. If you have any questions, comments, or feedback, please reach out to us via this form: http://aka.ms/la/feedback

Introducing Logic Apps MCP servers (Public Preview)
Using Logic Apps (Standard) as MCP servers transforms the way organizations build and manage agents by turning connectors into modular, reusable MCP tools. This approach allows each connector, whether it's for data access, messaging, or workflow orchestration, to act as a specialized capability within the MCP framework. By dynamically composing these tools into Logic Apps, developers can rapidly construct agents that are both scalable and adaptable to complex enterprise scenarios. The benefits include reduced development overhead, enhanced reusability, and a streamlined path to integrating diverse systems, all while maintaining the flexibility and power of the Logic Apps platform.

Starting today, we support creating Logic Apps MCP servers in the following ways:

Registering Logic Apps connectors as MCP servers using Azure API Center

This approach provides a streamlined experience for building MCP servers based on Azure Logic Apps connectors. The experience includes selecting a managed connector and one or more of its actions to create an MCP server and its related tools. It also automates the creation of the Logic Apps workflows and wires up Easy Auth authentication for you in a matter of minutes. Beyond the streamlined experience, any MCP server created this way is registered in your API Center enterprise catalogue. For admins, this means they can manage their MCP servers across the enterprise. For developers, it offers a centralized catalog where MCP servers can be discovered and quickly onboarded into agent solutions. To get started, please refer to our product documentation or our demo videos.

Enabling Logic Apps as remote MCP servers

For customers who have existing Logic Apps (Standard) investments, or who want additional control over how their MCP tools are created, we also offer the ability to enable a Logic App as an MCP server.
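Whichever route you take, the resulting endpoint speaks the standard MCP wire protocol: a client discovers the server's tools with a tools/list request and invokes one with tools/call. A sketch of a tools/call message (JSON-RPC 2.0, per the Model Context Protocol specification) follows; the tool name and arguments are hypothetical, since the real tool names come from the connector actions or workflows you expose.

```python
import json

# A tools/call request as defined by the Model Context Protocol
# (JSON-RPC 2.0). The tool name and arguments below are hypothetical;
# actual tool names come from the connector actions or workflows
# exposed by your Logic Apps MCP server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_servicenow_incident",
        "arguments": {"shortDescription": "VPN outage", "urgency": "high"},
    },
}

wire_payload = json.dumps(request)  # what an MCP client sends over HTTP
```

The server replies with a JSON-RPC result containing the tool's output, which the agent then folds into its reasoning.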
For a Logic App to be eligible to become an MCP server, it must have the following characteristics:

- One or more workflows with an HTTP Request trigger and a corresponding HTTP Response action
- A description on the trigger, and a request payload schema with meaningful property descriptions (recommended)
- A host.json file configured to enable MCP capabilities
- An app registration in Microsoft Entra, with Easy Auth configured on your Logic App

To get started, please refer to our product documentation or our demo videos.

Feedback

Both of these capabilities are now available in public preview worldwide. If you have any questions or feedback on these MCP capabilities, we would love to hear from you. Please fill out the following form and I will follow up with you.

Expose REST APIs as MCP servers with Azure API Management and API Center (now in preview)
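As a rough illustration of the eligibility requirements for Logic Apps MCP servers, here is a minimal workflow definition sketch with an HTTP Request trigger (carrying a description and a described request schema, which presumably surface as the tool's metadata) and a matching Response action. The workflow name, schema fields, and response body are illustrative assumptions, not a prescribed shape; the host.json and Easy Auth configuration are separate steps covered in the product documentation.

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "triggers": {
      "manual": {
        "type": "Request",
        "kind": "Http",
        "description": "Creates an IT incident ticket from a short summary and severity.",
        "inputs": {
          "schema": {
            "type": "object",
            "properties": {
              "title": { "type": "string", "description": "Short summary of the incident" },
              "severity": { "type": "string", "description": "One of: low, medium, high" }
            },
            "required": [ "title" ]
          }
        }
      }
    },
    "actions": {
      "Response": {
        "type": "Response",
        "kind": "Http",
        "runAfter": {},
        "inputs": {
          "statusCode": 200,
          "body": { "ticketId": "@{workflow().run.name}" }
        }
      }
    },
    "outputs": {}
  },
  "kind": "Stateful"
}
```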
As AI-powered agents and large language models (LLMs) become central to modern application experiences, developers and enterprises need seamless, secure ways to connect these models to real-world data and capabilities. Today, we're excited to introduce two powerful preview capabilities in the Azure API Management platform:

- Expose REST APIs in Azure API Management as remote Model Context Protocol (MCP) servers
- Discover and manage MCP servers using API Center as a centralized enterprise registry

Together, these updates help customers securely operationalize APIs for AI workloads and improve how APIs are managed and shared across organizations.

Unlocking the value of AI through secure API integration

While LLMs are incredibly capable, they are stateless and isolated unless connected to external tools and systems. Model Context Protocol (MCP) is an open standard designed to bridge this gap by allowing agents to invoke tools, such as APIs, via a standardized, JSON-RPC-based interface. With this release, Azure empowers you to operationalize your APIs for AI integration: securely, observably, and at scale.

1. Expose REST APIs as MCP servers with Azure API Management

An MCP server exposes selected API operations to AI clients over JSON-RPC via HTTP or Server-Sent Events (SSE). These operations, referred to as "tools," can be invoked by AI agents through natural language prompts. With this new capability, you can expose your existing REST APIs in Azure API Management as MCP servers, without rebuilding or rehosting them.

Addressing common challenges

Before this capability, customers faced several challenges when implementing MCP support:

- Duplicating development efforts: Building MCP servers from scratch often led to unnecessary work when existing REST APIs already provided much of the needed functionality.
- Security concerns:
  - Server trust: Malicious servers could impersonate trusted ones.
  - Credential management: Self-hosted MCP implementations often had to manage sensitive credentials like OAuth tokens.
- Registry and discovery: Without a centralized registry, discovering and managing MCP tools was manual and fragmented, making it hard to scale securely across teams.

API Management now addresses these concerns by serving as a managed, policy-enforced hosting surface for MCP tools, offering centralized control, observability, and security.

Benefits of using Azure API Management with MCP

By exposing MCP servers through Azure API Management, customers gain:

- Centralized governance for API access, authentication, and usage policies
- Secure connectivity using OAuth 2.0 and subscription keys
- Granular control over which API operations are exposed to AI agents as tools
- Built-in observability through APIM's monitoring and diagnostics features

How it works

1. MCP servers: In your API Management instance, navigate to MCP servers.
2. Choose an API: Select + Create a new MCP Server and pick the REST API you wish to expose.
3. Configure the MCP server: Select the API operations you want to expose as tools. These can be all or a subset of your API's methods.
4. Test and integrate: Use tools like MCP Inspector or Visual Studio Code (in agent mode) to connect, test, and invoke the tools from your AI host.

Getting started and availability

This feature is now in public preview and being gradually rolled out to early access customers. To use the MCP server capability in Azure API Management:

Prerequisites:
- Your APIM instance must be on a SKUv1 tier: Premium, Standard, or Basic
- Your service must be enrolled in the AI Gateway early update group (activation may take up to 2 hours)
- Use the Azure portal with a feature flag: append ?Microsoft_Azure_ApiManagement=mcp to your portal URL to access the MCP server configuration experience

Note: Support for SKUv2 and broader availability will follow in upcoming updates. Full setup instructions and test guidance can be found via aka.ms/apimdocs/exportmcp.
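Under the hood, an MCP host such as MCP Inspector or VS Code invokes an exposed tool by sending a JSON-RPC 2.0 `tools/call` message. The sketch below builds such a payload so the wire format is visible when debugging a gateway policy; the tool name and arguments are illustrative assumptions, not part of any specific API.

```python
import json

def build_tools_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message string."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": tool_name,       # the API operation exposed as an MCP tool
            "arguments": arguments,  # must match the tool's input schema
        },
    }
    return json.dumps(payload)

# Example: invoke a hypothetical "list-orders" tool exposed from a REST API.
message = build_tools_call("list-orders", {"customerId": "C-1001"})
print(message)
```

You would not normally hand-craft these messages (the MCP client library does it for you), but recognizing the shape helps when tracing requests through APIM diagnostics.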
2. Centralized MCP registry and discovery with Azure API Center

As enterprises adopt MCP servers at scale, the need for a centralized, governed registry becomes critical. Azure API Center now provides this capability, serving as a single, enterprise-grade system of record for managing MCP endpoints.

With API Center, teams can:

- Maintain a comprehensive inventory of MCP servers
- Track version history, ownership, and metadata
- Enforce governance policies across environments
- Simplify compliance and reduce operational overhead

API Center also addresses enterprise-grade security by allowing administrators to define who can discover, access, and consume specific MCP servers, ensuring only authorized users can interact with sensitive tools.

To support developer adoption, API Center includes:

- Semantic search and a modern discovery UI
- Easy filtering based on capabilities, metadata, and usage context
- Tight integration with Copilot Studio and GitHub Copilot, enabling developers to use MCP tools directly within their coding workflows

These capabilities reduce duplication, streamline workflows, and help teams securely scale MCP usage across the organization.

Getting started

This feature is now in preview and accessible to customers:
- https://aka.ms/apicenter/docs/mcp
- AI Gateway Lab | MCP Registry

3. What's next

These new previews are just the beginning. We're already working on:

Azure API Management (APIM): passthrough MCP server support. We're enabling APIM to act as a transparent proxy between your APIs and AI agents, with no custom server logic needed. This will simplify onboarding and reduce operational overhead.

Azure API Center (APIC): deeper integration with Copilot Studio and VS Code. Today, developers must perform manual steps to surface API Center data in Copilot workflows. We're working to make this experience more visual and seamless, allowing developers to discover and consume MCP servers directly from familiar tools like VS Code and Copilot Studio.
For questions or feedback, reach out to your Microsoft account team or visit:

- Azure API Management documentation
- Azure API Center documentation

— The Azure API Management & API Center Teams

🚀 New in Azure API Management: MCP in v2 SKUs + external MCP-compliant server support
Your APIs are becoming tools. Your users are becoming agents. Your platform needs to adapt. Azure API Management is becoming the secure, scalable control plane for connecting agents, tools, and APIs, with governance built in.

Today, we're announcing two major updates that bring the power of the Model Context Protocol (MCP) in Azure API Management to more environments and scenarios:

- MCP support in v2 SKUs, now in public preview
- Expose existing MCP-compliant servers through API Management

These features make it easier than ever to connect APIs and agents with enterprise-grade control, without rewriting your backends.

Why MCP?

MCP is an open protocol that enables AI agents, like GitHub Copilot, ChatGPT, and Azure OpenAI, to discover and invoke APIs as tools. It turns traditional REST APIs into structured, secure tools that agents can call during execution, powering real-time, context-aware workflows.

Why API Management for MCP?

Azure API Management is the single, secure control plane for exposing and governing MCP capabilities, whether they come from your REST APIs, Azure-hosted services, or external MCP-compliant runtimes. With built-in support for:

- Security using OAuth 2.1, Microsoft Entra ID, API keys, IP filtering, and rate limiting
- Outbound token injection via Credential Manager with policy-based routing
- Monitoring and diagnostics using Azure Monitor, Logs, and Application Insights
- Discovery and reuse with Azure API Center integration
- A comprehensive policy engine for request/response transformation, caching, validation, header manipulation, throttling, and more

…you get end-to-end governance for both inbound and outbound agent interactions, with no new infrastructure or code rewrites.

✨ What's New?
MCP support in v2 SKUs

Previously available only in classic tiers (Basic, Standard, Premium), MCP support is now in public preview for v2 SKUs (Basic v2, Standard v2, and Premium v2) with no prerequisites or manual enablement required. You can now:

- Expose any REST API as an MCP server in v2 SKUs
- Protect it with Microsoft Entra ID, keys, or tokens
- Register tools in Azure API Center

Expose existing MCP-compliant servers (pass-through scenario)

Already using tools hosted in Logic Apps, Azure Functions, LangChain, or custom runtimes? Now you can govern those external tool servers by exposing them through API Management. Use API Management to:

- Secure external MCP servers with OAuth, rate limits, and Credential Manager
- Monitor and log usage with Azure Monitor and Application Insights
- Unify discovery with internal tools via Azure API Center

🔒 You bring the tools. API Management brings the governance.

🧭 What's Next

We're actively expanding MCP capabilities in API Management:

- Tool-level access policies for granular governance
- Support for MCP resources and prompts to expand beyond tools

🚀 Get Started

- 📘 Expose APIs as MCP servers
- 📘 Connect external MCP servers
- 📘 Secure access to MCP servers
- 📘 Discover tools in API Center

Summary

Azure API Management is your single control plane for agents, tools, and APIs, whether you're building internal copilots or connecting external toolchains. This preview unlocks more flexibility, less friction, and a secure foundation for the next wave of agent-powered applications. No new infrastructure. Secure by default. Built for the future.
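When an MCP server sits behind API Management, clients can authenticate with a subscription key alongside (or instead of) OAuth tokens. The sketch below prepares such a call using only the Python standard library; the gateway URL, MCP path, and key are placeholder assumptions for a fictional instance, while `Ocp-Apim-Subscription-Key` is APIM's standard subscription key header.

```python
import json
import urllib.request

def build_mcp_request(gateway_url: str, subscription_key: str, payload: dict) -> urllib.request.Request:
    """Prepare a JSON-RPC POST to an APIM-fronted MCP endpoint with a subscription key."""
    body = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(gateway_url, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    # APIM's subscription key header; an OAuth bearer token could be added the same way.
    req.add_header("Ocp-Apim-Subscription-Key", subscription_key)
    return req

# Placeholder gateway URL and key, for illustration only.
req = build_mcp_request(
    "https://contoso-apim.azure-api.net/my-mcp/mcp",
    "<your-subscription-key>",
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
)
# urllib.request.urlopen(req) would send the call; omitted here because the endpoint is fictional.
```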