Enterprise Integration

Azure API Management: Your Auth Gateway For MCP Servers
The Model Context Protocol (MCP) is quickly becoming the standard for integrating Tools 🛠️ with Agents 🤖, and Azure API Management is at the forefront, ready to support this open-source protocol 🚀. You may have already encountered discussions about MCP, so let's clarify some key concepts:

Model Context Protocol (MCP): a standardized way (a protocol) for AI models to interact with external tools, either reading data or performing actions, and to enrich context for any language model.
AI Agents/Assistants: autonomous LLM-powered applications with the ability to use tools to connect to external services required to accomplish tasks on behalf of users.
Tools: components made available to Agents, allowing them to interact with external systems, perform computation, and take actions to achieve specific goals.
Azure API Management: a platform-as-a-service that supports the complete API lifecycle, enabling organizations to create, publish, secure, and analyze APIs with built-in governance, security, analytics, and scalability.

New Cool Kid in Town - MCP

AI Agents are being widely adopted thanks to enhanced Large Language Model (LLM) capabilities. However, even the most advanced models face limitations due to their isolation from external data: each new data source requires a custom implementation to extract, prepare, and make data accessible to the model(s). A lot of heavy lifting. Anthropic developed an open-source standard, the Model Context Protocol (MCP), to connect your agents to external data sources, whether local (databases or computer files) or remote (systems available over the internet, e.g., through APIs).

MCP Hosts: LLM applications, such as chat apps or AI assistants in your IDE (like GitHub Copilot in VS Code), that need to access external capabilities
MCP Clients: protocol clients inside the host application that maintain 1:1 connections with servers
MCP Servers: lightweight programs that each expose specific capabilities and provide context, tools, and prompts to clients
MCP Protocol: the transport layer in the middle

At its core, MCP follows a client-server architecture in which a host application can connect to multiple servers. Whenever your MCP host or client needs a tool, it connects to the MCP server; the MCP server then connects to, for example, a database or an API. MCP hosts and servers communicate with each other through the MCP protocol. You can create your own custom MCP servers that connect to your own or organizational data sources. For a quick start, visit our GitHub repository to learn how to build a remote MCP server using Azure Functions without authentication: https://aka.ms/mcp-remote

Remote vs. Local MCP Servers

The MCP standard supports two modes of operation:

Remote MCP servers: MCP clients connect to MCP servers over the Internet, establishing a connection using HTTP and Server-Sent Events (SSE), and authorizing the MCP client's access to resources on the user's account using OAuth.
Local MCP servers: MCP clients connect to MCP servers on the same machine, using stdio as the local transport method. A minimal sketch of a local, stdio-based server follows below.
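To make the local mode concrete, here is a minimal sketch of a stdio-based MCP server in C#, using the preview ModelContextProtocol SDK. Treat it as illustrative only: the tool itself is hypothetical, and the exact SDK surface may differ between preview versions.

// Requires the (preview) NuGet packages: ModelContextProtocol and Microsoft.Extensions.Hosting.
using System.ComponentModel;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol.Server;

var builder = Host.CreateApplicationBuilder(args);

// Register an MCP server that talks to its client over stdio (the local transport)
// and exposes every [McpServerTool] found in this assembly.
builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()
    .WithToolsFromAssembly();

await builder.Build().RunAsync();

// A hypothetical tool the host's agent can discover and call.
[McpServerToolType]
public static class WeatherTools
{
    [McpServerTool, Description("Returns a canned weather forecast for a city.")]
    public static string GetForecast(string city) => $"Sunny in {city}";
}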
Azure API Management as the AI Auth Gateway

Now that we know MCP servers can connect to remote services through an API, the question arises: how can we expose our remote MCP servers in a secure and scalable way? This is where Azure API Management comes in, as a way to securely and safely expose tools as MCP servers. Azure API Management provides:

Security: AI agents often need to access sensitive data. API Management, acting as a remote MCP proxy, safeguards organizational data through authentication and authorization.
Scalability: As the number of LLM interactions and external tool integrations grows, API Management ensures the system can handle the load.

Security remains a critical piece of building MCP servers, as agents need to securely connect to protected endpoints (tools) to perform certain actions or read protected data. When building remote MCP servers, you need a way to let users log in (Authentication) and to let them grant the MCP client access to resources on their account (Authorization).

MCP - Current Authorization Challenges

State: 4/10/2025

Recent changes in MCP authorization have sparked significant debate within the community.

🔍 𝗞𝗲𝘆 𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲𝘀 with the Authorization Changes: The MCP server is now treated as both a resource server AND an authorization server. This dual role has fundamental implications for MCP server developers and runtime operations.

💡 𝗢𝘂𝗿 𝗦𝗼𝗹𝘂𝘁𝗶𝗼𝗻: To address these challenges, we recommend using 𝗔𝘇𝘂𝗿𝗲 𝗔𝗣𝗜 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁 as your authorization gateway for remote MCP servers.

🔗 For an enterprise-ready solution, please check out our azd up sample repo to learn how to build a remote MCP server using Azure API Management as your authentication gateway: https://aka.ms/mcp-remote-apim-auth

The Authorization Flow

The workflow involves three core components: the MCP client, the APIM gateway, and the MCP server, with Microsoft Entra managing authentication (AuthN) and authorization (AuthZ). Using the OAuth protocol, the client starts by calling the APIM gateway, which redirects the user to Entra for login and consent. Once authenticated, Entra provides an access token to the gateway, which then exchanges a code with the client to generate an MCP server token. This token allows the client to communicate securely with the server via the gateway, ensuring user validation and scope verification. Finally, the MCP server establishes a session key for ongoing communication through a dedicated message endpoint. A policy sketch of the token-validation step follows below.

Diagram source: https://aka.ms/mcp-remote-apim-auth-diagram
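To illustrate what the gateway does at runtime, here is a minimal, hypothetical inbound policy fragment that validates the Entra-issued bearer token before a request reaches the MCP server. The tenant ID, audience, and scope name are placeholders; the sample repo linked above wires this up end to end.

<policies>
    <inbound>
        <base />
        <!-- Reject calls that do not carry a valid Entra-issued JWT for this MCP server. -->
        <validate-jwt header-name="Authorization" failed-validation-httpcode="401"
                      failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
            <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
            <audiences>
                <audience>api://your-mcp-server-app-id</audience>
            </audiences>
            <required-claims>
                <!-- 'mcp.tools.invoke' is a hypothetical scope name for illustration. -->
                <claim name="scp" match="any">
                    <value>mcp.tools.invoke</value>
                </claim>
            </required-claims>
        </validate-jwt>
    </inbound>
</policies>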
Conclusion

Azure API Management (APIM) is an essential tool for enterprise customers looking to integrate AI models with external tools using the Model Context Protocol (MCP). In this blog, we've emphasized the simplicity of connecting AI agents to various data sources through MCP, streamlining previously complex implementations. Given how critical secure access to platforms and services is for AI agents, APIM offers robust solutions for managing OAuth tokens and ensuring secure access to protected endpoints, making it an invaluable asset for enterprises despite the challenges of authentication.

API Management: An Enterprise Solution for Securing MCP Servers

Azure API Management is designed to help you securely expose your remote MCP servers. MCP servers are still very new, and as the technology evolves, API Management provides an enterprise-ready solution that will evolve with the latest technology. Stay tuned for further feature announcements soon!

Acknowledgments

This post and work were made possible thanks to the hard work and dedication of our incredible team. Special thanks to Pranami Jhawar, Julia Kasper, Julia Muiruri, Annaji Sharma Ganti, Jack Pa, Chaoyi Yuan and Alex Vieira for their invaluable contributions.

Additional Resources

MCP Client Server integration with APIM as AI gateway:
Blog Post: https://aka.ms/remote-mcp-apim-auth-blog
Sequence Diagram: https://aka.ms/mcp-remote-apim-auth-diagram
APIM lab: https://aka.ms/ai-gateway-lab-mcp-client-auth
Python: https://aka.ms/mcp-remote-apim-auth
.NET: https://aka.ms/mcp-remote-apim-auth-dotnet
On-Behalf-Of Authorization: https://aka.ms/mcp-obo-sample

3rd Party APIs – Backend Auth via Credential Manager:
Blog Post: https://aka.ms/remote-mcp-apim-lab-blog
APIM lab: https://aka.ms/ai-gateway-lab-mcp
YouTube Video: https://aka.ms/ai-gateway-lab-demo

Data Mapper Test Executor: A New Addition to Logic Apps Standard Test Framework
Why Does It Matter?

Testing complex data transformations has always been a challenge for Logic Apps developers. With workflows relying on XSLT, Liquid templates, and custom maps, validating these transformations often required manual steps or external tools. The Data Mapper Test Executor changes that by bringing first-class support for map testing directly into the Logic Apps Standard Automated Test Framework. This means faster feedback loops, improved reliability, and a more streamlined developer experience.

Key Highlights

Native Support for Data Mapper Testing: Execute unit tests for XSLT 1.0/2.0/3.0 and Logic Apps Data Mapper (.lml) files without leaving your development environment.
Integrated with the Automated Test Framework SDK: Leverages the latest SDK capabilities for consistent test execution and reporting.
Enhanced Validation: Supports schema validation and transformation checks, ensuring your maps behave as expected across scenarios.
Performance Optimizations: Built-in caching and resource management for efficient test runs.

How to Get Started

Install and reference the latest Automated Test Framework SDK. Make sure you're referencing version 1.0.1 or higher of the framework to access the new executor class. You can find the latest version in the NuGet gallery.

<PackageReference Include="Microsoft.Azure.Workflows.WebJobs.Tests.Extension" Version="1.0.1" />
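If you prefer the CLI to editing the project file by hand, the same reference can be added from the test project's directory. The package name comes from the reference above; the latest available version may be newer than 1.0.1:

dotnet add package Microsoft.Azure.Workflows.WebJobs.Tests.Extension --version 1.0.1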
Add the Data Mapper Test Executor to Your Test Project

Include the new class in your unit test suite for Logic Apps workflows.

Write Your First Test

Check the sample code below, which highlights the different options that can be used with the DataMapTestExecutor:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Workflows.UnitTesting.Definitions;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using la1.Tests.Mocks.Stateful1;
using Microsoft.Azure.Workflows.UnitTesting;
using System.IO;
using System.Text;
using Microsoft.Azure.Workflows.Data.Entities;
using Newtonsoft.Json.Linq;

namespace la1.Tests
{
    /// <summary>
    /// The unit test class.
    /// </summary>
    [TestClass]
    public class DataMapTest
    {
        /// <summary>
        /// The map to test.
        /// </summary>
        public const string MapName = "source_order_to_canonical_order";

        public string PathToMapDefinition { get; private set; }

        public string PathToCompiledXslt { get; private set; }

        public string PathToXsltTestInputs { get; private set; }

        /// <summary>
        /// The data map test executor.
        /// </summary>
        public TestExecutor TestExecutor;

        [TestInitialize]
        public void Setup()
        {
            this.TestExecutor = new TestExecutor("Stateful1/testSettings.config");
            this.PathToMapDefinition = Path.Combine(this.TestExecutor.rootDirectory, this.TestExecutor.logicAppName, "Artifacts", "MapDefinitions", $"{DataMapTest.MapName}.lml");
            this.PathToCompiledXslt = Path.Combine(this.TestExecutor.rootDirectory, this.TestExecutor.logicAppName, "Artifacts", "Maps", $"{DataMapTest.MapName}.xslt");
            this.PathToXsltTestInputs = Path.Combine(this.TestExecutor.rootDirectory, "Tests", "la1", "Stateful1", "DataMapTest", "test-input.json");
        }

        /// <summary>
        /// Sample comparing compiled XSLT to generated XSLT using map name.
        /// </summary>
        [TestMethod]
        public async Task DataMapTest_GenerateXslt()
        {
            var dataMapTestExecutor = this.TestExecutor.CreateMapExecutor();

            var xsltBytes = await dataMapTestExecutor
                .GenerateXslt(mapName: DataMapTest.MapName)
                .ConfigureAwait(continueOnCapturedContext: false);

            Assert.IsNotNull(xsltBytes);
            Assert.AreEqual(expected: File.ReadAllText(this.PathToCompiledXslt), actual: Encoding.UTF8.GetString(xsltBytes));
        }

        /// <summary>
        /// Sample comparing compiled XSLT to generated XSLT using map content.
        /// </summary>
        [TestMethod]
        public async Task DataMapTest_GenerateXsltWithMapContent()
        {
            var mapContent = File.ReadAllText(this.PathToMapDefinition);
            var generateXsltInput = new GenerateXsltInput
            {
                MapContent = mapContent
            };

            var dataMapTestExecutor = this.TestExecutor.CreateMapExecutor();

            var xsltBytes = await dataMapTestExecutor
                .GenerateXslt(generateXsltInput: generateXsltInput)
                .ConfigureAwait(continueOnCapturedContext: false);

            Assert.IsNotNull(xsltBytes);
            Assert.AreEqual(expected: File.ReadAllText(this.PathToCompiledXslt), actual: Encoding.UTF8.GetString(xsltBytes));
        }

        /// <summary>
        /// Sample running data map using map name and test inputs.
        /// </summary>
        [TestMethod]
        public async Task DataMapTest_RunMap()
        {
            var dataMapTestExecutor = this.TestExecutor.CreateMapExecutor();

            var mapOutput = await dataMapTestExecutor
                .RunMapAsync(
                    mapName: DataMapTest.MapName,
                    inputContent: File.ReadAllBytes(this.PathToXsltTestInputs))
                .ConfigureAwait(continueOnCapturedContext: false);

            Assert.IsNotNull(mapOutput);
            Assert.IsTrue(mapOutput.Type == JTokenType.Object);
            Assert.AreEqual(expected: "FC-20250603-001", actual: mapOutput["orderId"]);
            Assert.AreEqual(expected: "VIP-789456", actual: mapOutput["customerId"]);
            Assert.AreEqual(expected: "NEW", actual: mapOutput["status"]);
        }

        /// <summary>
        /// Sample running data map using generated XSLT content and test inputs.
        /// </summary>
        [TestMethod]
        public async Task DataMapTest_RunMapWithXsltContentBytes()
        {
            var dataMapTestExecutor = this.TestExecutor.CreateMapExecutor();

            var mapContent = File.ReadAllText(this.PathToMapDefinition);
            var generateXsltInput = new GenerateXsltInput
            {
                MapContent = mapContent
            };

            var xsltContent = await dataMapTestExecutor
                .GenerateXslt(generateXsltInput: generateXsltInput)
                .ConfigureAwait(continueOnCapturedContext: false);

            var mapOutput = await dataMapTestExecutor
                .RunMapAsync(
                    xsltContent: xsltContent,
                    inputContent: File.ReadAllBytes(this.PathToXsltTestInputs))
                .ConfigureAwait(continueOnCapturedContext: false);

            Assert.IsNotNull(mapOutput);
            Assert.IsTrue(mapOutput.Type == JTokenType.Object);
            Assert.AreEqual(expected: "FC-20250603-001", actual: mapOutput["orderId"]);
            Assert.AreEqual(expected: "VIP-789456", actual: mapOutput["customerId"]);
            Assert.AreEqual(expected: "NEW", actual: mapOutput["status"]);
        }
    }
}

The default TestExecutor should be updated with a new method, so you can use the same class to create UnitTestExecutor and DataMapTestExecutor instances:

public DataMapTestExecutor CreateMapExecutor()
{
    // Set the path for workflow-related input files in the workspace and build the full paths to the required JSON files.
    var appDirectoryPath = Path.Combine(this.rootDirectory, this.logicAppName);
    return new DataMapTestExecutor(appDirectoryPath: appDirectoryPath);
}

Note: This feature is available starting with the latest SDK release. Update your dependencies before making changes to your code to get full IntelliSense support.

Limitations

Loop Structures: Dynamic execution within repeating structures is not yet supported.
Non-Mocked Connectors: All actions in the execution path should be mocked.
Unsupported Actions: Integration Account maps, custom code actions, and EDI encode/decode remain out of scope for this release.
Preview Caveats: If you're using private preview components, expect limited testing compared to GA releases.

Learn More

Logic Apps Standard Automated Test Framework SDK

🎉 Announcing General Availability of AI & RAG Connectors in Logic Apps (Standard)
We’re excited to share that a comprehensive set of AI and Retrieval-Augmented Generation (RAG) capabilities is now Generally Available in Azure Logic Apps (Standard). This release brings native support for document processing, semantic retrieval, embeddings, and grounded reasoning directly into the Logic Apps workflow engine.

🔌 Available AI Connectors in Logic Apps Standard

Logic Apps (Standard) previously previewed four AI-focused connectors that open the door to a new generation of intelligent automation across the enterprise. Whether you're processing large volumes of documents, enriching operational data with intelligence, or enabling employees to interact with systems using natural language, these connectors provide the foundation for building solutions that are smarter, faster, and more adaptable to business needs. These connectors are now generally available. They allow teams to move from routine workflow automation to AI-assisted decisioning, contextual responses, and multi-step orchestration that reflects real business intent. Below is the full set of built-in connectors and their actions as they appear in the designer.

1. Azure OpenAI

Actions:
Get an embedding
Get chat completions
Get chat completions using Prompt Template
Get completion
Get multiple chat completions
Get multiple embeddings

What this unlocks: Bring natural language reasoning and structured AI responses directly into workflows. Common scenarios include guided decisioning, user-facing assistants, classification and routing, or preparing embeddings for semantic search and RAG workflows.

2. Azure AI Search

Actions:
Delete a document
Delete multiple documents
Get agentic retrieval output (Preview)
Index a document
Index multiple documents
Merge document
Search vectors
Search vectors with natural language

What this unlocks: Add vector, hybrid semantic, and natural language search directly to workflow logic. Ideal for retrieving relevant content from enterprise data, powering search-driven workflows, and grounding AI responses with context from your own documents.

3. Azure AI Document Intelligence

Action:
Analyze document

What this unlocks: Document Intelligence serves as the entry point for document-heavy scenarios. It extracts structured information from PDFs, images, and forms, allowing workflows to validate documents, trigger downstream processes, or feed high-quality data into search and embeddings pipelines.

4. AI Operations

Actions:
Chunk text with metadata
Parse document with metadata

What this unlocks: Transform unstructured files into enriched, structured content. Enables token-aware chunking, page-level metadata, and clean preparation of content for embeddings and semantic search at scale.

🤖 Advanced AI & Agentic Workflows with Agent Loop

Logic Apps (Standard) also supports Agent Loop (also Generally Available), allowing AI models to use workflow actions as tools and iterate until the task is complete. Combined with chunking, embeddings, and natural language search, this opens the door to advanced agentic scenarios such as document intelligence agents, RAG-based assistants, and iterative evaluators.

Conclusion

With these capabilities now built into Logic Apps Standard, teams can bring AI directly into their integration workflows without additional infrastructure or complexity. Whether you're streamlining document-heavy processes, enabling richer search experiences, or exploring more advanced agentic patterns, these capabilities provide a strong foundation to start building today.

Logic Apps Aviators Newsletter - December 2025
In this issue:

Ace Aviator of the Month
News from our product group
Community Playbook
News from our community

Ace Aviator of the Month

December's Ace Aviator: Daniel Jonathan

What's your role and title? What are your responsibilities?
I'm an Azure Integration Architect at Cnext, helping organizations modernize and migrate their integrations to Azure Integration Services. I design and build solutions using Logic Apps, Azure Functions, Service Bus, and API Management. I also work on AI solutions using Semantic Kernel and LangChain to bring intelligence into business processes.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?
My day usually begins with attending to customer requests and handling recent deployments. Most of my time goes into designing integration patterns, building Logic Apps, mentoring the team, and helping customers with technical solutions. Lately, I've also been integrating AI capabilities into workflows.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?
The community is open, friendly, and full of knowledge. I enjoy sharing ideas, writing posts, and helping others solve real-world challenges. It's great to learn and grow together.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?
Start small and stay consistent. Learn the basics well—like messaging, retries, and error handling—before diving into complex tools. Keep learning and share what you know.

What has helped you grow professionally?
Hands-on experience, teamwork, and continuous learning. Working across different projects taught me how to design reliable and scalable systems. Exploring AI with Semantic Kernel and LangChain has also helped me think beyond traditional integrations.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
I'd add an "Overview Page" in Logic Apps containing the HTTP URLs for each workflow, so developers can quickly access them for testing from one place. It would save time and make working with multiple workflows much easier.

News from our product group

Logic Apps Community Day 2025 Playlist
Did you miss, or want to catch up on, individual sessions from Logic Apps Community Day 2025? Here is the full playlist – choose your favorite sessions and have fun!

The future of integration is here and it's agentic
Missed Kent Weare and Divya Swarnkar's session at Ignite? It is here for you to watch on demand. Enterprise integration is being reimagined. It's no longer just about connecting systems, but about enabling adaptive, agentic workflows that unify apps, data, and systems. In this session, discover how to modernize integration, migrate from BizTalk, and adopt AI-driven patterns that deliver agility and intelligence. Through customer stories and live demos, see how to bring these workflows to life with Agent Loop in Azure Logic Apps.

Public Preview: Azure Logic Apps Connectors as MCP Tools in Microsoft Foundry
Unlock secure enterprise connectivity with Azure Logic Apps connectors as MCP tools in Microsoft Foundry. Agents can now use hundreds of connectors natively—no custom code required. Learn how to configure and register MCP servers for seamless integration.

Announcing AI Foundry Agent Service Connector v2 (Preview)
AI Foundry Agent Service Connector v2 (Preview) is here!
Azure Logic Apps can now securely invoke Foundry agents, enabling low-code AI integration, multi-agent workflows, and faster time-to-value. Explore new operations for orchestration and monitoring.

Announcing the General Availability of the XML Parse and Compose Actions in Azure Logic Apps
XML Parse and Compose actions are now GA in Azure Logic Apps! Easily handle XML with XSD schemas, streamline workflows, and support internationalization. Learn best practices for arrays, encoding, and safe transport of content.

Clone a Consumption Logic App to a Standard Workflow
Clone your Consumption Logic Apps into Standard workflows with ease! This new feature accelerates migration, preserves design, and unlocks advanced capabilities for modern integration solutions.

Announcing the HL7 connector for Azure Logic Apps Standard and Hybrid (Public Preview)
Connect healthcare systems effortlessly! The new HL7 connector for Azure Logic Apps (Standard & Hybrid) enables secure, standardized data exchange and automation using HL7 protocols—now in Public Preview.

Announcing Foundry Control Plane support for Logic Apps Agent Loop (Preview)
Foundry Control Plane now supports Logic Apps Agent Loop (Preview)! Manage, govern, and observe agents at scale with built-in integration—no extra steps required. Ensure trust, compliance, and scalability in the agentic era.

Announcing General Availability of Agent Loop in Azure Logic Apps
Agent Loop transforms Logic Apps into a multi-agent automation platform, enabling AI agents to collaborate with workflows and humans. Build secure, enterprise-ready agentic solutions for business automation at scale.

Agent Loop Ignite Update - New Set of AI Features Arrive in Public Preview
We are releasing a broad set of new and powerful AI-first Agent Loop capabilities in Public Preview that dramatically expand what developers can build: run agents in the Consumption SKU, bring your own models through APIM AI Gateway, call any tool through MCP, deploy agents directly into Teams, secure RAG with document-level permissions, onboard with Okta, and build in a completely redesigned workflow designer.

Announcing MCP Server Support for Logic Apps Agent Loop
Agent Loop in Azure Logic Apps now supports the Model Context Protocol (MCP), enabling secure, standardized tool integration. Bring your own MCP connector, use Azure-managed servers, or build custom connectors for enterprise workflows.

Enabling API Key Authentication for Logic Apps MCP Servers
Logic Apps MCP Servers now support API key authentication alongside OAuth2 and Anonymous options. Configure keys via host.json or Azure APIs, retrieve and regenerate keys easily, and connect MCP clients securely for agentic workflows.

Announcing Public Preview of Agent Loop in Azure Logic Apps Consumption
Agent Loop now brings AI-powered automation to Logic Apps Consumption with a frictionless, pay-as-you-go model. Build autonomous and conversational agents using 1,400+ connectors—no dedicated infrastructure required. Ideal for rapid prototyping and enterprise workflows.

Moving the Logic Apps Designer Forward
A major redesign of the Azure Logic Apps designer enters Public Preview for Standard workflows. Phase I focuses on faster onboarding, unified views, draft mode with auto-save, improved search, and enhanced debugging. Feedback will shape future phases for a seamless development experience.
Announcing the General Availability of the RabbitMQ Connector
The RabbitMQ connector for Azure Logic Apps is now generally available, enabling reliable message exchange for Standard and Hybrid workflows. It supports triggers, publishing, and advanced routing, with a global rollout underway for robust, scalable integration scenarios.

Duplicate Detection in Logic App Trigger
Prevent duplicate processing in Logic Apps triggers with a REST API-based solution. It checks recent runs using clientTrackingId to avoid reprocessing items caused by edits or webhook updates. Works with Logic Apps Standard and is adaptable for Consumption or Power Automate.

Announcing the BizTalk Server 2020 Cumulative Update 7
BizTalk Server 2020 Cumulative Update 7 is out, adding support for Visual Studio 2022, Windows Server 2022, SQL Server 2022, and Windows 11. It includes all prior fixes. Upgrade from older versions or consider migrating to Azure Logic Apps for modernization.

News from our community

Logic Apps Local Development Series
Post by Daniel Jonathan
Last month I shared an article from Daniel about debugging XSLT in VS Code. This month, I bumped into not one but five articles in a series about building, testing, and running Logic Apps Standard locally – definitely worth the read!

Working with sessions in Agentic Workflows
Post by Simon Stender
Build AI-powered chat experiences with session-based agentic workflows in Azure Logic Apps. Learn how they enable dynamic, stateful interactions, integrate with APIs and apps, and avoid common pitfalls like workflows stuck in "running" forever.

Integration Love Story with Mimmi Gullberg
Video by Ahmed Bayoumy and Robin Wilde
Meet Mimmi Gullberg, Green Cargo's new integration architect driving smarter, sustainable rail logistics. With experience from BizTalk to Azure, she blends tech and business insight to create real value. Her mantra: understand the problem first, then choose the right tools—Logic Apps, Functions, or AI.

Integration Love Story with Jenny Andersson
Video by Ahmed Bayoumy and Robin Wilde
Discover Jenny Andersson's inspiring journey from skepticism to creativity in tech. In this episode, she shares insights on life as an integration architect, tackling system challenges, listening to customers, and how AI is shaping the future of integration.

You Can Get an XPath value in Logic Apps without returning an array
Post by Luis Rigueira
Working with XML in Azure Logic Apps? The xpath() function always returns an array—even for a single node. Or does it? Find out how to return just the values you want in this Friday Fact from Luis Rigueira.

Set up Azure Standard Logic App Connectors as MCP Server
Video by Srikanth Gunnala
Expose your Azure Logic Apps integrations as secure tools for AI assistants. Learn how to turn connectors like SAP, SQL, and Jira into MCP tools, protect them with Entra ID/OAuth, and test in GitHub Copilot Chat for safe, action-ready AI workflows.

Making Logic Apps Speak Business
Post by Al Ghoniem
Stop forcing Logic Apps to look like business diagrams. With Business Process Tracking, you can keep workflows technically sound while giving business users clear, stage-based visibility into processes—decoupled, visual, and KPI-driven.

Clone a Consumption Logic App to a Standard Workflow
Azure Logic Apps now supports cloning existing Consumption Logic Apps into Standard workflows. This feature simplifies migration by allowing you to reuse your existing workflows without starting from scratch. It's designed to help organizations transition to the Standard model, which offers improved performance, flexibility, and advanced capabilities.

Why Does It Matter?

Migrating from Consumption to Standard has been a common challenge for teams looking to modernize their integration solutions. Standard workflows provide:

Better performance through single-tenant architecture.
Enhanced developer experience with local development and VS Code integration.
Advanced features like built-in connectors, stateful/stateless workflows, and integration with Azure Functions.

This new cloning capability reduces friction, accelerates adoption, and ensures continuity for existing business processes. And with the announcement of Agent Loop support in Logic Apps Consumption, the ability to clone agentic workflows to a Logic Apps Standard application gives you a graduation strategy for your agents.

Key Highlights

Direct cloning: Convert an existing Consumption Logic App into a Standard workflow with minimal manual effort.
Preserves workflow design: Your triggers, actions, and configurations are carried over.
Supports modernization: Enables migration to a model that supports advanced features like custom connectors and private endpoints.
Improved governance: Standard workflows allow better control over deployment and scaling.

How to Get Started

1. Navigate to your Consumption Logic App in the Azure portal.
2. Select Clone to Standard from the available options.
3. Choose the target Logic App (Standard) environment and configure settings (see the CLI sketch at the end of this article for one way to create a target app).
4. Validate and deploy the cloned workflow.

Limitations

Infrastructure settings aren't cloned (e.g., integration account configurations).
Connections and credentials must be reconfigured after cloning.
Secure parameters aren't copied; placeholders are created and need updating from Key Vault.
Connectors default to shared even if built-in versions exist.

Unsupported items include:
Integration account references
XML/flat file transformations
EDIFACT and X12 actions
Nested workflows
Azure Function calls

Learn more

To learn more about this new feature, check the official documentation on Microsoft Docs.
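If you don't already have a Standard logic app to clone into, one plausible way to create an empty target is with the Azure CLI. All names here are placeholders, and the portal flow above can also create or select the target for you:

# Create an empty Logic Apps Standard app as a clone target (hypothetical names).
az logicapp create \
  --name contoso-standard-la \
  --resource-group contoso-rg \
  --plan contoso-ws-plan \
  --storage-account contosostorage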
Announcing the General Availability (GA) of the Premium v2 tier of Azure API Management

Superior capacity, the highest entity limits, unlimited included calls, and the most comprehensive feature set distinguish the Premium v2 tier from the other API Management tiers. Customers rely on the Premium v2 tier for running enterprise-wide API programs at scale, with high availability and performance.

The Premium v2 tier has a new architecture that eliminates management traffic from the customer VNet, making private networking much more secure and easier to set up. During the creation of a Premium v2 instance, you can choose between the VNet injection and VNet integration (introduced in the Standard v2 tier) options. In addition, today we are also adding three new features to Premium v2:

Inbound Private Link: You can now enable private endpoint connectivity to restrict inbound access to your Premium v2 instance. It can be enabled along with VNet injection, with VNet integration, or without a VNet.
Availability zone support: Premium v2 now supports availability zones (zone redundancy) to enhance the reliability and resilience of your API gateway.
Custom CA certificates: The Azure API Management v2 gateway can now validate TLS connections with the backend service using custom CA certificates.

New and improved VNet injection

Using VNet injection in Premium v2 no longer requires configuring routes or service endpoints. Customers can secure their API workloads without impacting API Management dependencies, while Microsoft can secure the infrastructure without interfering with customer API workloads. In short, the new VNet injection implementation enables both parties to manage network security and configuration settings independently and without affecting each other. You can now configure your APIs with complete networking flexibility: force tunnel all outbound traffic to on-premises, send all outbound traffic through an NVA, or add a WAF device to monitor all inbound traffic to your API Management Premium v2—all without constraints.

Inbound Private Link

Customers can now configure an inbound private endpoint for their API Management Premium v2 instance to let API consumers securely access the API Management gateway over Azure Private Link. The private endpoint uses an IP address from the Azure virtual network in which it's hosted. Network traffic between a client on your private network and API Management traverses the virtual network and a Private Link on the Microsoft backbone network, eliminating exposure to the public internet. Further, you can configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address.

With a private endpoint and Private Link, you can:

Create multiple Private Link connections to an API Management instance.
Use the private endpoint to send inbound traffic on a secure connection.
Apply different API Management policies based on whether traffic comes from the private endpoint.
Limit incoming traffic only to private endpoints, preventing data exfiltration.
Combine with inbound virtual network injection or outbound virtual network integration to provide end-to-end network isolation of your API Management clients and backend services.

More details can be found here. Today, only the API Management instance's Gateway endpoint supports inbound Private Link connections, and each API Management instance can support at most 100 Private Link connections. A CLI sketch for creating such an endpoint follows below.
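For illustration, this is one plausible way to create the inbound private endpoint with the Azure CLI. The resource names are placeholders, the Gateway group ID follows from the note above, and flag spellings can vary slightly between CLI versions; check the linked documentation for the authoritative steps.

# Look up the resource ID of the Premium v2 API Management instance (hypothetical names).
apimId=$(az apim show --name contoso-apim-v2 --resource-group contoso-rg --query id --output tsv)

# Create a private endpoint in your VNet that targets the instance's Gateway endpoint.
az network private-endpoint create \
  --name contoso-apim-pe \
  --resource-group contoso-rg \
  --vnet-name contoso-vnet \
  --subnet pe-subnet \
  --private-connection-resource-id "$apimId" \
  --group-id Gateway \
  --connection-name contoso-apim-pe-conn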
Availability zones

Azure API Management Premium v2 now supports availability zone (AZ) redundancy to enhance the reliability and resilience of your API gateway. When deploying an API Management instance in an AZ-enabled region, users can choose to enable zone redundancy. This distributes the service's units, including the gateway, management plane, and developer portal, across multiple, physically separate AZs within that region. Learn how to enable AZs here.

Custom CA certificates

If the API Management gateway needs to connect to backends secured with TLS certificates issued by private certificate authorities (CAs), you need to configure custom CA certificates in the API Management instance. Custom CA certificates can be added and managed as authorization credentials in the Backend entities. The Backend entity has been extended with new properties allowing customers to specify a list of certificate thumbprints or subject name + issuer thumbprint pairs that the gateway should trust when establishing a TLS connection with the associated backend endpoint. More details can be found here.

Region availability

The Premium v2 tier is now generally available in six public regions (Australia East, East US 2, Germany West Central, Korea Central, Norway East, and UK South), with additional regions coming soon. For pricing information and regional availability, please visit the API Management pricing page.

Learn more

API Management v2 tiers FAQ
API Management v2 tiers documentation
API Management overview documentation

Preview: Govern, Secure, and Observe A2A APIs with Azure API Management
Today, we're announcing preview support for A2A (Agent2Agent) APIs in Azure API Management. With this capability, organizations can now manage and govern agent APIs alongside AI model APIs, Model Context Protocol (MCP) tools, and traditional APIs such as REST, SOAP, GraphQL, WebSocket, and gRPC — all within a single, consistent API management plane.

Extending API Governance into the Agentic Ecosystem

As organizations adopt agentic systems, the need for consistent governance, security, and observability grows. With A2A API support, Azure API Management enables you to extend established API practices into the agentic world — ensuring secure access, consistent policy enforcement, and complete visibility for AI agents.

A2A APIs in Azure API Management:

Mediate JSON-RPC runtime operations with policy support
Expose and manage agent cards for users, clients, or other agents
Support OpenTelemetry GenAI semantic conventions when logging traces to Application Insights — including "gen_ai.agent.id" and "gen_ai.agent.name" attributes

How It Works

When you import an A2A API, API Management mediates runtime calls to your agent backend (JSON-RPC only) and exposes the agent card as an operation within the same API. The agent card is transformed automatically to represent the A2A API managed by API Management — with the hostname replaced by API Management's gateway address, security schemes converted to the authentication configured in API Management, and unsupported interfaces removed. When integrated with Application Insights, API Management enriches traces with GenAI-compliant telemetry attributes — allowing easy identification of the agent and deep correlation between API and agent execution traces for monitoring and debugging. A query along these lines is sketched below.
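As an illustration of the telemetry angle, a Kusto query of roughly this shape could surface per-agent call volumes in Application Insights. It assumes the GenAI attributes land in the customDimensions bag of request telemetry, which is an assumption about how your instance is wired up rather than a documented contract:

requests
| where timestamp > ago(24h)
| extend agentId = tostring(customDimensions["gen_ai.agent.id"]),
         agentName = tostring(customDimensions["gen_ai.agent.name"])
| where isnotempty(agentId)
| summarize calls = count() by agentId, agentName
| order by calls desc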
Try It Out

To import an A2A API:

1. Navigate to the APIs page in the Azure portal and select the A2A Agent tile.
2. Enter your agent card URL. If accessible, the portal will automatically populate relevant settings.
3. Configure the remaining properties, such as the API path in API Management.

This functionality is currently available only in the v2 tiers of API Management and will continue to roll out to all tiers in the coming months.

Start Managing Your Agent APIs

With A2A support in Azure API Management, you can now bring agent APIs under the same governance and security umbrella as your existing APIs — strengthening control, security, and observability across your AI and API ecosystems. Learn more about A2A API support in Azure API Management.

Hybrid Logic Apps deployment on Rancher K3s Kubernetes cluster

K3s is a lightweight Kubernetes distribution, certified by the Cloud Native Computing Foundation (CNCF) and originally developed by Rancher. It is optimized for on-premises environments with limited resources, making it ideal for edge computing and lightweight hybrid scenarios. Unlike a full Kubernetes distribution, K3s reduces overhead while maintaining full Kubernetes API compatibility. This makes K3s an ideal choice for hosting Logic Apps Standard near your data sources—such as an on-premises SQL Server or local file shares—when you have lightweight workloads.

Five steps are needed to set up Hybrid Logic Apps, including the infrastructure, as illustrated in the following diagram. Most of these steps are the same as discussed in the Hybrid Logic Apps documentation, except for the K3s setup: Set up your own infrastructure for Standard logic app workflows - Azure Logic Apps | Microsoft Learn.

Step 1: Prepare the K3s Cluster

Docker Desktop setup: In this case, the host machine runs Windows 11, so Docker with WSL2 is used to run the containers. Install Docker Desktop using WSL2 (Docker Desktop: The #1 Containerization Tool for Developers | Docker), then install K3s and create a single-node cluster using k3d:

# Install choco, kubectl and Helm
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))

choco install kubernetes-cli -y
choco install kubernetes-helm -y
choco install k3d -y

# Create the cluster (open a new PowerShell window first)
k3d cluster create "k3d-rancher"

# Delete the default Traefik load balancer, as it conflicts with ports 80 and 443
# (the load balancer can be configured to use other ports if needed)
kubectl delete svc traefik -n kube-system
kubectl delete deployment traefik -n kube-system

The next two steps are the same as given in Set up your own infrastructure for Standard logic app workflows - Azure Logic Apps | Microsoft Learn:

Step 2: Connect the Kubernetes cluster to Azure Arc

Step 3: Set up the Azure Container Apps extension and environment

You can skip the CoreDNS setup that is required for Azure Local, as given in Update CoreDNS.

Step 4: Conduct the Storage Configuration for SQL and SMB

SQL database (runtime store): Hybrid Logic Apps use a SQL database for runtime operations and run history. In this scenario, an on-premises SQL Server with SQL authentication is used: SQL Server 2022 was set up on the Windows host machine, SQL Server authentication was enabled, and a new SQL admin user was added. Please follow the link for more details.
The SQL connection string can be validated using the following PowerShell script:

$connectionString = "Server=<server IP address>;Initial Catalog=<databaseName>;Persist Security Info=False;User ID=<sqluser>;Password=<password>;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;"

try {
    $connection = New-Object System.Data.SqlClient.SqlConnection
    $connection.ConnectionString = $connectionString
    $connection.Open()
    Write-Host "✅ Connection successful"
    $connection.Close()
} catch {
    Write-Host "❌ Connection failed: $($_.Exception.Message)"
}

SMB is used as the local file share on the Windows host machine; it is advisable to create a new user for the Windows SMB share:

$Username = "k3suser"
$Password = ConvertTo-SecureString "<complex password>" -AsPlainText -Force
$FullName = "K3s user"
$Description = "Created via PowerShell"

# Create the user
New-LocalUser -Name $Username -Password $Password -FullName $FullName -Description $Description
Add-LocalGroupMember -Group "Users" -Member $Username

Once the user is created, create an Artifacts folder on the Windows host machine and grant the user read and write access. Please follow the link for more details.

Step 5: Create your Logic App (Hybrid)

With all prerequisites and infrastructure in place for creating Hybrid Logic Apps, the next step is to build the Logic Apps using the specified connection string and SMB share path. This can be accomplished through the Azure portal, as outlined below. Now you can create Logic Apps workflows using the designer and execute them.

Logic Apps Aviators Newsletter - November 2025
In this issue:

Ace Aviator of the Month
News from our product group
News from our community

Ace Aviator of the Month

November's Ace Aviator: Al Ghoniem

What's your role and title? What are your responsibilities?
As a Senior Integration Consultant, I design and deliver enterprise-grade integration on Microsoft Azure, primarily using Logic Apps Standard, API Management, Service Bus, Event Grid and Azure Functions. My remit covers reference architectures, "golden" templates, governance and FinOps guardrails, CI/CD automation (Bicep and YAML), and production-ready patterns for reliability, observability and cost efficiency. Alongside my technical work, I lead teams of consultants and engineers, helping them adopt standardised delivery models, mentoring through code reviews and architectural walkthroughs, and ensuring we deliver consistent, high-quality outcomes across projects. I also help teams apply decisioning patterns (embedded versus external rules) and integrate AI responsibly within enterprise workflows.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?

Architecture and patterns: refining solution designs, sequence diagrams and rules models for new and existing integrations.
Build and automation: evolving reusable Logic App Standard templates, Bicep modules and pipelines, embedding monitoring, alerts and identity-first security.
Problem-solving: addressing performance tuning, transient fault handling, poison/DLQ flows and "design for reprocessing."
Leadership and enablement: mentoring consultants, facilitating technical discussions, and ensuring knowledge is shared across teams.
Community and writing: publishing articles and examples to demystify real-world integration trade-offs.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?
The community continuously turns hard-won lessons into reusable practices. Sharing patterns (and anti-patterns) saves others time and incidents, while learning from peers strengthens my own work. Microsoft's product teams also listen closely, and seeing customer feedback directly shape the platform is genuinely rewarding.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?

Optimise for learning speed, not titles. Choose problems that stretch you and deliver in small, measurable increments.
Master the fundamentals. Naming, idempotency, retries and observability are not glamorous but make systems dependable.
Document everything. Diagrams, runbooks and ADRs multiply your impact.
Understand trade-offs. Every decision buys something and costs something; acknowledge both sides clearly.
Value collaboration over heroics. Ask questions, share knowledge and give credit freely.

What has helped you grow professionally?

Reusable scaffolding: creating golden templates and reference repositories that capture best practice once and reuse it everywhere.
Feedback loops: leveraging telemetry, post-incident reviews and peer critique to improve.
Teaching and mentoring: explaining concepts to others brings clarity and strengthens leadership.
Cross-disciplinary curiosity: combining architecture, DevOps, FinOps and AI to address problems holistically.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
"Stateful Sessions and Decisions” as a first-class capability: Built-in session state across multiple workflows, durable correlation and resumable orchestrations without external storage. A native decisioning activity with versioned decision tables and rule auditing (“why this rule fired”). A local-first developer experience with fast testing and contract validation for confident iteration. This would simplify complex, human-in-the-loop and event-driven scenarios, reduce custom plumbing, and make advanced orchestration patterns accessible to a wider audience. News from our product group Logic Apps Community Day 2025 Did you miss or want to catch up again on your favorite Logic Apps Community Day videos – jump back into action on this four hours long learning session, with 10 sessions from our Community Experts. And stay tuned for individual sessions being shared throughout the week. Announcing Parse & Chunk with Metadata in Logic Apps: Build Context-Aware RAG Agents New Parse & Chunk actions add metadata like page numbers and sentence completeness—perfect for context-aware document Q&A using Azure AI Search and Agent Loop. Introducing the RabbitMQ Connector (Public Preview) The new connector (Public Preview) lets you send and receive messages with RabbitMQ in Logic Apps Standard and Hybrid—ideal for scalable, reliable messaging across industries. News from our community EventGrid And Entra Auth In Logic Apps Standard Post by Riccardo Viglianisi Learn how to use Entra Auth for webhook authentication, ditch SAS tokens, and configure private endpoints with public access rules—perfect for secure, scalable integrations. Debugging XSLT Made Easy in VS Code: .NET-Based Debugging for Logic Apps Post by Daniel Jonathan A new .NET-based extension brings real debugging to XSLT for Logic Apps. Set breakpoints, step through transformations, and inspect variables—making XSLT development clear and productive. This is the 3 rd post in a 5 part series, so worth checking out the other posts too. Modifying the Logic App Azure Workbook: Custom Views for Multi Workflow Monitoring Post by Jeff Wessling Learn how to tailor dashboards with KQL, multi-workflow views, and context panes—boosting visibility, troubleshooting speed, and operational efficiency across your integrations. Azure AI Agents in Logic Apps: A Guide to Automate Decisions Post by Imashi Kinigama Discover how GPT-powered agents, created using Logic Apps Agent Loop, automate decisions, extract data, and adapt in real time. Build intelligent workflows with minimal effort—no hardcoding, just instructions and tools. How to Turn Logic App Connectors into MCP Servers (Step-by-Step Guide) Post by Stephen W. Thomas Learn how to expose connectors like Google Drive or Salesforce as MCP endpoints using Azure API Center—giving AI agents secure, real-time access to 1,400+ services directly from VS Code. Custom SAP MCP Server with Logic Apps Post by Sebastian Meyer Learn how to turn Logic Apps into AI-accessible tools using MCP. From workflow descriptions to Easy Auth setup and VS Code integration—this guide unlocks SAP automation with Copilot. How Azure Logic Apps as MCP Servers Accelerate AI Agent Development Post by Monisha S Turn 1,400+ connectors into AI tools with Logic Apps Standard. Build agents fast, integrate with legacy systems, and scale intelligent workflows across your organization. 
Designing Business Rules in Azure Logic Apps: When to Go Embedded vs External
Post by Al Ghoniem
Learn when to use Logic Apps' native Rules Engine or offload to Azure Functions with NRules or JSON RulesEngine. Discover hybrid patterns for scalable, testable decision automation.

Syncing SharePoint with Azure Blob Storage using Logic Apps & Azure Functions for Azure AI Search
Post by Daniel Jonathan
Solve folder-delete issues by tagging blobs with SharePoint metadata. Use Logic Apps and a custom Azure Function to clean up orphaned files and keep Azure AI Search in sync.

Step-by-Step Guide: Building a Conversational Agent in Azure Logic Apps
Post by Stephen W. Thomas
Use Azure AI Foundry and Logic Apps Standard to create chatbots that shuffle cards, answer questions, and embed into websites—no code required, just smart workflows and EasyAuth.

You can hide sensitive data from the Logic App run history
Post by Francisco Leal
Learn how to protect sensitive data in Logic Apps, such as authentication tokens, credentials, and personal information, so it doesn't appear in the run history, where it could pose security and privacy risks.

Introducing native Service Bus message publishing from Azure API Management (Preview)
We're excited to announce a preview capability in Azure API Management (APIM): you can now send messages directly to Azure Service Bus from your APIs using a built-in policy. This enhancement, currently in public preview, simplifies how you connect your API layer with event-driven and asynchronous systems, helping you build more scalable, resilient, and loosely coupled architectures across your enterprise.

Why this matters

Modern applications increasingly rely on asynchronous communication and event-driven designs. With this new integration:

Any API hosted in API Management can publish to Service Bus — no SDKs, custom code, or middleware required.
Partners, clients, and IoT devices can send data through standard HTTP calls, even if they don't support AMQP natively.
You stay in full control, with authentication, throttling, and logging managed centrally in API Management.
Your systems scale more smoothly by decoupling front-end requests from backend processing.

How it works

The new send-service-bus-message policy allows API Management to forward payloads from API calls directly into Service Bus queues or topics.

High-level flow:

1. A client sends a standard HTTP request to your API endpoint in API Management.
2. The policy executes and sends the payload as a message to Service Bus.
3. Downstream consumers such as Logic Apps, Azure Functions, or microservices process those messages asynchronously.

All configuration happens in API Management — no code changes or new infrastructure are required.

Getting started

You can try it out in minutes:

1. Set up a Service Bus namespace and create a queue or topic.
2. Enable a managed identity (system-assigned or user-assigned) on your API Management instance.
3. Grant the identity the "Service Bus data sender" role in Azure RBAC, scoped to your queue/topic.
4. Add the policy to your API operation:

<send-service-bus-message queue-name="orders">
    <payload>@(context.Request.Body.As<string>())</payload>
</send-service-bus-message>

Once saved, each API call publishes its payload to the Service Bus queue or topic, as the sample call below shows. 📖 Learn more.
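For instance, with the policy above attached to a hypothetical POST operation on an /orders API, any plain HTTP client can enqueue a message. The hostname, path, and subscription key are placeholders for your own instance:

curl -X POST "https://contoso-apim.azure-api.net/orders" \
  -H "Ocp-Apim-Subscription-Key: <your-subscription-key>" \
  -H "Content-Type: application/json" \
  -d '{"orderId": "12345", "item": "widget", "quantity": 2}'

The request body becomes the Service Bus message payload, while the caller gets an immediate HTTP response from the gateway and processing continues asynchronously downstream.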
Common use cases

This capability makes it easy to integrate your APIs into event-driven workflows:

Order processing – queue incoming orders for fulfillment or billing.
Event notifications – trigger internal workflows across multiple applications.
Telemetry ingestion – forward IoT or mobile app data to Service Bus for analytics.
Partner integrations – offer REST-based endpoints for external systems while maintaining policy-based control.

Each of these scenarios benefits from simplified integration, centralized governance, and improved reliability.

Secure and governed by design

The integration uses managed identities for secure communication between API Management and Service Bus — no secrets required. You can further apply enterprise-grade controls:

Enforce rate limits, quotas, and authorization through APIM policies.
Gain API-level logging and tracing for each message sent.
Use Service Bus metrics to monitor downstream processing.

Together, these tools help you maintain a consistent security posture across your APIs and messaging layer.

Build modern, event-driven architectures

With this feature, API Management can serve as a bridge to your event-driven backbone. Start small by queuing a single API's workload, or extend to enterprise-wide event distribution using topics and subscriptions. You'll reduce architectural complexity while enabling more flexible, scalable, and decoupled application patterns.

Learn more: Get the full walkthrough and examples in the documentation 👉 here.