📢 Announcement! Python Code Interpreter in Logic Apps is now in Public Preview
As AI agents evolve, they increasingly need to do more than just respond to text—they must analyze structured data, reason over complex patterns, and perform custom computations on demand. This is especially true in real-world scenarios where users upload large CSV files and expect agents to perform tasks like exploratory data analysis or generating insights—all from natural language prompts.

Why This Matters

Behind this need lies a real challenge that many businesses face today. Data is diverse, fragmented, and large. It often comes in the form of CSV files, Excel spreadsheets, or JSON—containing thousands or even millions of rows. But this raw data is rarely useful on its own. It typically requires:

- Cleaning and transformation
- Custom logic to extract insights
- Visualizations or summaries that make the data actionable

These steps are often manual, error-prone, and time-consuming—especially for users without data science or engineering expertise.

Introducing Python Code Interpreter in Logic Apps Agent Loop

We're excited to announce support for Python code execution, powered by Azure Container Apps (ACA) session pools. This capability enables Logic Apps developers to use the Python Code Interpreter in their workflows and also as a tool in agent loop. You can author the code yourself or use an LLM to write it for you. As a code interpreter tool, it can:

- Accept natural language instructions
- Automatically generate Python code
- Execute that code securely on uploaded datasets (like CSV or JSON)
- Return insights, visualizations, or next-step data back to the user

This brings the power of a code interpreter—similar to ChatGPT's advanced data analysis tool—right into the Logic Apps runtime. Instead of writing code or manually manipulating spreadsheets, users can now describe their intent in natural language—for example:

- "Find the top 5 products by revenue"
- "Forecast demand by region for the next quarter"
- "Highlight customer segments based on purchase patterns"

Under the hood, Logic Apps interprets the instruction, generates Python code, executes it securely in an isolated environment, and returns usable results—summaries, forecasts, or data transformations—within the same workflow.

Real-World Use Cases

This opens up a wide range of possibilities for businesses looking to embed intelligence into their automation:

- Sales & Marketing: Upload raw sales data and get on-the-fly summaries, forecasts, or regional comparisons.
- Finance: Analyze expense reports, detect anomalies, or generate quarterly breakdowns from Excel exports.
- Operations: Clean large log files, surface exceptions, and generate insights to improve reliability.
- Data Exploration: Let business users ask questions like "Which region had the highest YoY growth?" without writing a single line of code.

How It Works

The action to execute Python code is powered by an Azure Container Apps (ACA) session pool. Azure Container Apps dynamic sessions provide fast and scalable access to a code interpreter. Each code interpreter session is fully isolated by a Hyper-V boundary and is designed to run untrusted code. By enabling network isolation on ACA, your data never leaves the defined network boundaries.

1. In Logic Apps, choose the action to execute Python code. You need to create a connection to the ACA session before you use the action.
2. Author the code yourself, or let the agent generate it.
3. Optionally, upload a file to the ACA session, which can then be referenced as a data source in the Python code.
4. Run the workflow to get insights and results from the action execution.
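To make the flow concrete, here is the kind of Python the agent might generate for a prompt like "Find the top 5 products by revenue"—a minimal pandas sketch. The file name and column names are illustrative assumptions, not part of the feature:

```python
import pandas as pd

# Load the dataset uploaded to the code interpreter session.
# The path and column names here are hypothetical examples.
df = pd.read_csv("sales.csv")

# Compute revenue per row, then aggregate by product.
df["revenue"] = df["units"] * df["unit_price"]
top5 = (
    df.groupby("product")["revenue"]
      .sum()
      .sort_values(ascending=False)
      .head(5)
)

# The interpreter returns this result to the workflow/agent as the action output.
print(top5)
```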
Getting Started

We can't wait to see developers use this feature to build powerful agents! You can find all the details about the feature, plus step-by-step guidance on using this capability, in our MS Learn documentation. If you have any questions, comments, or feedback, please reach out to us via this form: http://aka.ms/la/feedback
Introducing Logic Apps MCP servers (Public Preview)

Using Logic Apps (Standard) as MCP servers transforms the way organizations build and manage agents by turning connectors into modular, reusable MCP tools. This approach allows each connector—whether it's for data access, messaging, or workflow orchestration—to act as a specialized capability within the MCP framework. By dynamically composing these tools into Logic Apps, developers can rapidly construct agents that are both scalable and adaptable to complex enterprise scenarios. The benefits include reduced development overhead, enhanced reusability, and a streamlined path to integrating diverse systems—all while maintaining the flexibility and power of the Logic Apps platform.

Starting today, we support creating Logic Apps MCP servers in the following ways:

Registering Logic Apps connectors as MCP servers using Azure API Center

This approach provides a streamlined experience for building MCP servers based on Azure Logic Apps connectors. The experience includes selecting a managed connector and one or more of its actions to create an MCP server and its related tools. It also automates the creation of the Logic Apps workflows and wires up Easy Auth authentication for you in a matter of minutes.

Beyond the streamlined setup, any MCP server created this way is registered within your API Center enterprise catalogue. For admins, this means they can manage their MCP servers across the enterprise. For developers, it offers a centralized catalog where MCP servers can be discovered and quickly onboarded in agent solutions. To get started, please refer to our product documentation or our demo videos.

Enabling Logic Apps as remote MCP servers

For customers who have existing Logic Apps (Standard) investments, or who want additional control over how their MCP tools are created, we also offer the ability to enable a Logic App as an MCP server. For a Logic App to be eligible to become an MCP server, it must have the following characteristics:

- One or more workflows with an HTTP Request trigger and a corresponding HTTP Response action
- A description on the trigger, and a request payload schema with meaningful descriptions (recommended)
- A host.json file configured to enable MCP capabilities
- An App registration in Microsoft Entra, with Easy Auth configured in your Logic App

To get started, please refer to our product documentation or our demo videos.

Feedback

Both of these capabilities are now available in public preview, worldwide. If you have any questions or feedback on these MCP capabilities, we would love to hear from you. Please fill out the following form and I will follow up with you.
Expose REST APIs as MCP servers with Azure API Management and API Center (now in preview)

As AI-powered agents and large language models (LLMs) become central to modern application experiences, developers and enterprises need seamless, secure ways to connect these models to real-world data and capabilities. Today, we're excited to introduce two powerful preview capabilities in the Azure API Management platform:

- Expose REST APIs in Azure API Management as remote Model Context Protocol (MCP) servers
- Discover and manage MCP servers using API Center as a centralized enterprise registry

Together, these updates help customers securely operationalize APIs for AI workloads and improve how APIs are managed and shared across organizations.

Unlocking the value of AI through secure API integration

While LLMs are incredibly capable, they are stateless and isolated unless connected to external tools and systems. Model Context Protocol (MCP) is an open standard designed to bridge this gap by allowing agents to invoke tools—such as APIs—via a standardized, JSON-RPC-based interface. With this release, Azure empowers you to operationalize your APIs for AI integration—securely, observably, and at scale.

1. Expose REST APIs as MCP servers with Azure API Management

An MCP server exposes selected API operations to AI clients over JSON-RPC via HTTP or Server-Sent Events (SSE). These operations, referred to as "tools," can be invoked by AI agents through natural language prompts. With this new capability, you can expose your existing REST APIs in Azure API Management as MCP servers—without rebuilding or rehosting them.

Addressing common challenges

Before this capability, customers faced several challenges when implementing MCP support:

- Duplicating development efforts: Building MCP servers from scratch often led to unnecessary work when existing REST APIs already provided much of the needed functionality.
- Security concerns: Malicious servers could impersonate trusted ones, and self-hosted MCP implementations often had to manage sensitive credentials like OAuth tokens.
- Registry and discovery: Without a centralized registry, discovering and managing MCP tools was manual and fragmented, making it hard to scale securely across teams.

API Management now addresses these concerns by serving as a managed, policy-enforced hosting surface for MCP tools—offering centralized control, observability, and security.

Benefits of using Azure API Management with MCP

By exposing MCP servers through Azure API Management, customers gain:

- Centralized governance for API access, authentication, and usage policies
- Secure connectivity using OAuth 2.0 and subscription keys
- Granular control over which API operations are exposed to AI agents as tools
- Built-in observability through APIM's monitoring and diagnostics features

How it works

1. MCP servers: In your API Management instance, navigate to MCP servers.
2. Choose an API: Select + Create a new MCP Server and pick the REST API you wish to expose.
3. Configure the MCP server: Select the API operations you want to expose as tools. These can be all or a subset of your API's methods.
4. Test and integrate: Use tools like MCP Inspector or Visual Studio Code (in agent mode) to connect, test, and invoke the tools from your AI host.
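Under the hood, a tool invocation is an ordinary JSON-RPC 2.0 request. As a rough illustration of what an AI host sends to the MCP endpoint, here is a hedged Python sketch—the endpoint URL, subscription-key header, and tool name are placeholders, and a real MCP client also performs an initialization handshake and may use SSE rather than a plain POST:

```python
import requests

# Hypothetical MCP endpoint exposed by API Management; replace with your own.
MCP_URL = "https://contoso.azure-api.net/my-api-mcp/mcp"

# JSON-RPC 2.0 "tools/call" request, as defined by the MCP specification.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_order_status",          # illustrative tool name
        "arguments": {"orderId": "12345"},   # illustrative arguments
    },
}

resp = requests.post(
    MCP_URL,
    json=payload,
    headers={"Ocp-Apim-Subscription-Key": "<your-apim-key>"},
    timeout=30,
)
print(resp.json())
```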
Getting started and availability

This feature is now in public preview and is being gradually rolled out to early access customers. To use the MCP server capability in Azure API Management:

Prerequisites

- Your APIM instance must be on a SKUv1 tier: Premium, Standard, or Basic
- Your service must be enrolled in the AI Gateway early update group (activation may take up to 2 hours)
- Use the Azure Portal with a feature flag: append ?Microsoft_Azure_ApiManagement=mcp to your portal URL to access the MCP server configuration experience

Note: Support for SKUv2 and broader availability will follow in upcoming updates. Full setup instructions and test guidance can be found via aka.ms/apimdocs/exportmcp.

2. Centralized MCP registry and discovery with Azure API Center

As enterprises adopt MCP servers at scale, the need for a centralized, governed registry becomes critical. Azure API Center now provides this capability—serving as a single, enterprise-grade system of record for managing MCP endpoints. With API Center, teams can:

- Maintain a comprehensive inventory of MCP servers
- Track version history, ownership, and metadata
- Enforce governance policies across environments
- Simplify compliance and reduce operational overhead

API Center also addresses enterprise-grade security by allowing administrators to define who can discover, access, and consume specific MCP servers—ensuring only authorized users can interact with sensitive tools.

To support developer adoption, API Center includes:

- Semantic search and a modern discovery UI
- Easy filtering based on capabilities, metadata, and usage context
- Tight integration with Copilot Studio and GitHub Copilot, enabling developers to use MCP tools directly within their coding workflows

These capabilities reduce duplication, streamline workflows, and help teams securely scale MCP usage across the organization.

Getting started

This feature is now in preview and accessible to customers: https://aka.ms/apicenter/docs/mcp (see also the AI Gateway Lab MCP Registry).

3. What's next

These new previews are just the beginning. We're already working on:

- Azure API Management (APIM): passthrough MCP server support. We're enabling APIM to act as a transparent proxy between your APIs and AI agents—no custom server logic needed. This will simplify onboarding and reduce operational overhead.
- Azure API Center (APIC): deeper integration with Copilot Studio and VS Code. Today, developers must perform manual steps to surface API Center data in Copilot workflows. We're working to make this experience more visual and seamless, allowing developers to discover and consume MCP servers directly from familiar tools like VS Code and Copilot Studio.

For questions or feedback, reach out to your Microsoft account team or visit the Azure API Management documentation and the Azure API Center documentation.

— The Azure API Management & API Center Teams
🚀 New in Azure API Management: MCP in v2 SKUs + external MCP-compliant server support

Your APIs are becoming tools. Your users are becoming agents. Your platform needs to adapt. Azure API Management is becoming the secure, scalable control plane for connecting agents, tools, and APIs—with governance built in.

Today, we're announcing two major updates that bring the power of the Model Context Protocol (MCP) in Azure API Management to more environments and scenarios:

- MCP support in v2 SKUs—now in public preview
- Expose existing MCP-compliant servers through API Management

These features make it easier than ever to connect APIs and agents with enterprise-grade control—without rewriting your backends.

Why MCP?

MCP is an open protocol that enables AI agents—like GitHub Copilot, ChatGPT, and Azure OpenAI—to discover and invoke APIs as tools. It turns traditional REST APIs into structured, secure tools that agents can call during execution—powering real-time, context-aware workflows.

Why API Management for MCP?

Azure API Management is the single, secure control plane for exposing and governing MCP capabilities—whether from your REST APIs, Azure-hosted services, or external MCP-compliant runtimes. It comes with built-in support for:

- Security using OAuth 2.1, Microsoft Entra ID, API keys, IP filtering, and rate limiting
- Outbound token injection via Credential Manager with policy-based routing
- Monitoring and diagnostics using Azure Monitor, Logs, and Application Insights
- Discovery and reuse with Azure API Center integration
- A comprehensive policy engine for request/response transformation, caching, validation, header manipulation, throttling, and more

The result is end-to-end governance for both inbound and outbound agent interactions—with no new infrastructure or code rewrites.

✅ What's New?

1. MCP support in v2 SKUs

Previously available only in classic tiers (Basic, Standard, Premium), MCP support is now in public preview for v2 SKUs—Basic v2, Standard v2, and Premium v2—with no prerequisites or manual enablement required. You can now:

- Expose any REST API as an MCP server in v2 SKUs
- Protect it with Microsoft Entra ID, keys, or tokens
- Register tools in Azure API Center

2. Expose existing MCP-compliant servers (pass-through scenario)

Already using tools hosted in Logic Apps, Azure Functions, LangChain, or custom runtimes? Now you can govern those external tool servers by exposing them through API Management. Use API Management to:

- Secure external MCP servers with OAuth, rate limits, and Credential Manager
- Monitor and log usage with Azure Monitor and Application Insights
- Unify discovery with internal tools via Azure API Center

🔗 You bring the tools. API Management brings the governance.

🧭 What's Next

We're actively expanding MCP capabilities in API Management:

- Tool-level access policies for granular governance
- Support for MCP resources and prompts to expand beyond tools

📚 Get Started

- 📘 Expose APIs as MCP servers
- 🌐 Connect external MCP servers
- 🔐 Secure access to MCP servers
- 🔎 Discover tools in API Center

Summary

Azure API Management is your single control plane for agents, tools, and APIs—whether you're building internal copilots or connecting external toolchains. This preview unlocks more flexibility, less friction, and a secure foundation for the next wave of agent-powered applications. No new infrastructure. Secure by default.
Built for the future.

📢 Announcing agent loop: Build AI Agents in Azure Logic Apps 🤖
This post is written in collaboration with Kent Weare and Rohitha Hewawasam.

The era of intelligent business processes has arrived! Today, we are excited to announce agent loop, a groundbreaking new capability in Azure Logic Apps to build AI agents into your enterprise workflows. With agent loop, you can embed advanced AI decision-making directly into your processes—enabling your apps and automation to not just follow predefined steps, but to reason, adapt, and act autonomously towards goals.

Agent loop is central to AI agent development—it's a new action type that brings together your AI model of choice, domain-specific tools, and enterprise knowledge sources. Whether you're building an autonomous agent to process loan approvals, a conversational agent to support customers, or a multi-agent system that coordinates tasks such as sales report generation across agents, agent loop enables your workflows to go beyond static steps—making decisions, adapting to context, and delivering outcomes.

Agent loop is implemented using the kernel object from Semantic Kernel. The kernel object, along with an LLM, creates the plan for what needs to be done, while the Logic Apps runtime handles execution of that plan. Agent loop is highly configurable, enabling you to build agents with diverse capabilities:

- Conversational or autonomous agents: With Logic Apps' extensive gallery of connectors, you can build fully autonomous agents that respond to real-time events—like new records in a database, files added to a share, or messages in a queue. Agent loop also supports conversational agents via channels, allowing agents to interact with users through the Azure portal or custom chat clients.
- Bring your own model: Associate your AI agent with any Azure OpenAI model of your choice. As new models become available, you can easily switch or upgrade without re-architecting the solution.
- Define agent goals and guardrails: Specify your agent's objective and behavioral boundaries through system prompts and user instructions. Using connectors like Outlook or Teams, you can easily introduce human-in-the-loop interactions for approvals or overrides—enabling safe, controlled autonomy.
- Tools and knowledge, built in: Leverage hundreds of out-of-the-box connectors to equip agents with access to enterprise systems, APIs, and business data. Enrich their reasoning with knowledge from vector stores, structured databases, or unstructured files, and empower them to take meaningful actions across your environment.

AI Agents in Action

Here are some examples that highlight the value and efficiencies of these agents across different domains and solution areas:

- A product return agent verifies order details, return eligibility, and refund rules, then processes the return or requests additional info from the customer.
- A loan approval agent evaluates credit score, income, and risk profile, applies business rules, and auto-approves or routes applications for review.
- A recruiting agent screens resumes, summarizes qualifications, and drafts personalized outreach to top candidates, streamlining early hiring stages.
- A sales report generation workflow uses a writer agent to draft content, a reviewer agent to verify accuracy, and a publisher agent to format and distribute the report.
- An IT operations agent triages alerts, checks recent changes, and either resolves common issues or escalates to on-call engineers when needed.
- A multi-agent retail supply chain solution combines inventory and logistics agents to ensure timely restocks and optimize fulfillment routes.

Why agent loop matters

Modern businesses thrive on agility and intelligence. Traditional workflows remain essential for deterministic tasks—especially those involving structured data or high-risk decisions. But when processes involve unstructured data, changing context, or adaptive decision-making, AI agents excel: they can reason, act in real time, and dynamically sequence steps to meet goals. Agent loop serves exactly this purpose.

What makes agent loop especially powerful is its deep integration with the Logic Apps ecosystem. Logic Apps comes with over 1,400 connectors for Microsoft and third-party services—from databases and ERP systems to SaaS applications and custom APIs. Workflows can also invoke custom code and scripts, making it easy to tap into homegrown capabilities. The agent isn't limited to information in its prompt; it can actively retrieve knowledge, perform transactions, and effect change in the real world via these connectors. Logic Apps is uniquely positioned to let customers leverage their API and connector ecosystem cohesively across workflows and AI agents to build agentic applications.

Equally important, agent loop is designed for flexibility. You can orchestrate single-agent workflows or coordinate multiple agents working in tandem towards a common goal. Agent loop can even involve humans when needed—for instance, pausing to get a manager's approval or to ask for clarification—leveraging Logic Apps' human workflow capabilities. All of this is handled within the familiar, visual Logic Apps designer, so you get a high-level view of the entire orchestration.

How agent loop works

At a high level, agent loop pairs the reasoning capabilities of large-scale AI models with the robust action framework of Logic Apps. Built on top of Semantic Kernel, agent loop operates in iterative cycles, allowing the agent to think, act, and learn from each step:

1. Reasoning (Think): The agent—powered under the hood by an LLM such as the Azure OpenAI Service, running on Semantic Kernel—examines its goal and the current context. It decides what needs to happen next, whether that's gathering more information, calling a specific connector, or formulating an answer. This step is essentially the AI "planning" its next action based on the goal you've provided and the data it has so far.
2. Action (Act): The agent then carries out the decided action by invoking a tool or connector through Logic Apps. This could be anything from querying a database, calling a REST API, or sending an email, to running a calculation. Thanks to Logic Apps' extensive connector library, the agent has a rich toolbox at its disposal. Each action is executed as a Logic Apps step, meaning it's secure, managed, and logged like any other workflow action.
3. Reflection (Learn): After the action, the agent receives the results (e.g., data retrieved, outcome of the API call, user input). It then evaluates: Did this bring it closer to the goal? Does the plan need adjusting? The agent updates its understanding based on new information. This reflection is what lets the agent handle complex, open-ended tasks—it can correct course if needed, try alternative approaches, or conclude when the goal has been satisfied.

These steps repeat in a loop.
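For intuition only, the think/act/reflect cycle can be compressed into a few lines of Python. This is a conceptual sketch, not the Semantic Kernel or Logic Apps implementation; the `decide_next_action` method and the shapes of `plan` and `tools` are invented for illustration:

```python
def agent_loop(goal, tools, llm, max_iterations=10):
    """Conceptual sketch of a reason-act-reflect cycle (illustrative only)."""
    context = [f"Goal: {goal}"]
    for _ in range(max_iterations):
        # Think: the model plans the next step from the goal and context so far.
        plan = llm.decide_next_action(context, tools)   # hypothetical API
        if plan.is_final_answer:
            return plan.answer
        # Act: invoke the chosen tool/connector with the model-supplied arguments.
        result = tools[plan.tool_name](**plan.arguments)
        # Reflect: feed the observation back so the next iteration can adjust course.
        context.append(f"Called {plan.tool_name}, observed: {result}")
    return "Stopped: iteration budget exhausted before the goal was met."
```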
The agent loop action manages this cycle automatically—calling the AI model to reason, executing the chosen connector operations, feeding results back, and iterating.

Why Build AI Agents in Logic Apps?

Building AI agents is an emerging frontier in automation, but doing it from the ground up can be daunting—especially when organizations build them in large numbers. Agent loop in Logic Apps makes this dramatically easier and more scalable for several reasons:

- Declarative orchestration: Logic Apps provides a visual workflow canvas and a serverless runtime. The agent loop action plugs into this, and the platform handles the sequence of steps and iterations, so you can focus on defining the goal and selecting the connectors (tools) the agent can use.
- Code extensibility: Logic Apps supports both declarative and code-first approaches to building agents. You can combine the two—using the visual designer for orchestration and injecting code where needed through extensibility points. Write custom logic in C#, PowerShell, or JavaScript, or use inline scripts for lightweight processing. Python support is coming soon, enabling even more flexibility.
- 1,400+ integrated tools: With the rich connector ecosystem at its disposal, your agent can seamlessly tap into your enterprise systems and SaaS applications. Your entire ecosystem of connectors, APIs, custom code, and agents can be used by deterministic workflows and agents alike to solve business problems.
- Observability: Logic Apps offers full traceability into each agent's decisions and actions. Every run is logged in the workflow history, with data stored within the customer's own network and storage boundaries. The agent chat view provides insight into the agent's reasoning, tool invocations, and goal progress. Developers can easily revisit these logs for debugging, auditing, or analysis.
- Enterprise-grade governance: Because it runs on Azure Logic Apps, agent loop inherits the robust monitoring, logging, security, and compliance capabilities of the platform. You can secure connections with managed identities and leverage built-in rate limiting, retries, and exception handling. Your AI agents run with the same enterprise-ready guardrails as any mission-critical workflow.
- Human-in-the-loop and multi-agent coordination: Logic Apps makes it straightforward to involve people at key decision points or to coordinate multiple agents. You can chain agent loop actions or have agents invoke other workflows, enabling collaborative problem-solving that would be difficult to implement from scratch. The result is a system where AI and humans can smoothly interact and complement each other.
- Faster time to value: By eliminating the boilerplate of building an agent architecture (managing memory, planning logic, connecting to services, etc.), agent loop lets developers and architects concentrate on high-value logic and business goals, accelerating how you bring AI-driven improvements to your business processes.

In short, agent loop combines the brains of generative AI with the brawn of Azure's integration platform. It offers a turnkey way to build sophisticated AI-driven automation without reinventing the wheel. Companies no longer have to choose between the flexibility of custom AI solutions and the convenience of a managed workflow service—with Logic Apps and agent loop, you get both.

Getting Started

Agent loop is available in Logic Apps Standard starting today!
Here are some resources to help you begin:

- Documentation: Explore the agent loop concepts and the detailed guide with step-by-step instructions on how to configure and use agent loop.
- Samples & demos: Watch pre-recorded demos showcasing both conversational and autonomous agent scenarios built with agent loop. You'll also get a preview of exciting features coming soon.

Looking Ahead

Agent loop opens up a new realm of possibilities for what you can achieve with Azure Logic Apps. It blurs the line between application integration and AI, allowing workflows to evolve from static sequences into adaptive, self-directed processes. We can't wait to see what you will build with agent loop!

This is just the beginning. We're actively investing in new capabilities planned for release soon:

- Multi-agent hand-off support: A multi-agent application with hand-off capabilities enables different agent loops to collaborate by transferring tasks between one another based on expertise or context—crucial for building agentic applications that can dynamically adapt to complex, evolving goals and user needs.
- A2A (Agent-to-Agent) protocol support: A2A is a communication standard that defines how autonomous agents exchange messages, share context, and coordinate actions in a secure and structured way. It ensures interoperability, enables seamless hand-offs between agents, and maintains context integrity across agents working toward a shared goal. This will allow Logic Apps agents to seamlessly integrate with other agentic platforms.
- OBO auth for Logic Apps agents: On-behalf-of auth support would allow Logic Apps agents to use the logged-in user's identity for authentication when invoking Logic Apps connectors during agent loop execution. This enables conversational applications to dynamically perform OAuth flows that fetch consent from logged-in users to invoke connectors on their behalf.

Contact Us

Have feedback or questions about agent loop? We'd love to hear from you. Reply directly to this blog post or reach out to us through this form. Your input helps shape the future of Logic Apps and agentic automation.

🤖 Agent Loop Demos 🤖
We announced the public preview of agent loop at Build 2025. Agent loop is a new feature in Logic Apps for building AI agents for use cases that span industry domains and patterns. Here are some resources to learn more:

- Agent loop concepts
- Agent loop how-to
- Agent loop public preview announcement

In this article, we share use cases implemented in Logic Apps using agent loop and other features.

- Loan Approval Agent: This video shows an autonomous agent that handles auto loans for a bank. The demo features an AI agent that uses an Azure OpenAI model, the company's policies, and several tools to process loan applications. For edge cases, a human is involved via the Teams connector.
- Product Return Agent: This video shows an autonomous agent for the Fourth Coffee company. Returns are processed by the agent based on company policy and other criteria. Here too, a human is involved when decisions fall outside the agent's boundaries.
- Commercial Agent: This video shows an agent that grants credit for purchases of groceries and other products for Northwind Stores. The agent extracts financial information from an IBM Mainframe and an IBM i system to assess each requestor and updates the internal Northwind systems with the approved customers' information.
- Multi-agent scenario: Includes both a codeful and a declarative method of implementation. Note: this is pre-release functionality and is subject to change. If you are interested in further discussing Logic Apps codeful agents, please fill out the following feedback form.
- Operations Agent (part 1): In this conversational agent, we perform Logic Apps operations such as repair and resubmit to keep our integration platform healthy and processing transactions. To ensure compliance, all operational activities are logged in ServiceNow.
- Operations Agent (part 2): In this autonomous agent, we perform the same Logic Apps operations—repair and resubmit—without human intervention, again logging all operational activities in ServiceNow for compliance.

📢 Announcement: Power your Agents in Azure AI Foundry Agent Service with Azure Logic Apps
We're excited to announce the public preview of two major integrations that bring the power of Azure Logic Apps to AI agents in Foundry:

- Logic Apps as tools: You can now use Logic Apps workflows—and their 1,400+ connectors—as tools within the Azure AI Foundry Agent Service. This unlocks seamless integration between AI agents and enterprise-grade automation, enabling agents to reason and act through Logic Apps.
- AI Agent Service connector: A new Logic Apps connector for the AI Agent Service lets you build workflows that trigger agents based on events across hundreds of applications. This enables your agents to respond proactively and continuously—bringing event-driven autonomy to your AI solutions.

Check out the blog post for these announcements from Foundry as well.

Logic Apps as tools for Agents in Foundry

Logic Apps now powers the tool layer for AI agents in the Foundry Agent Service—bringing together the strengths of business process automation and intelligent reasoning. AI agents need more than powerful models to be effective—they need the ability to act and the context to act appropriately. Tools play a critical role here: they don't just let agents perform actions—they provide the inputs, signals, and structure that anchor the agent's reasoning and guide consistent behavior. Well-designed tools help ensure that agents make decisions based on reliable, real-world data and aligned business rules.

With over 1,400 connectors, Logic Apps lets agents tap into real-world enterprise systems—such as reading records from a SQL database, retrieving order data from an ERP system like SAP, managing support tickets in ServiceNow, or triggering actions in CRM platforms like Dynamics or Salesforce. This integration transforms agents from passive responders into intelligent actors that can take meaningful, context-aware action across your organization.

Requirements for using Logic Apps as tools

To use a Logic App as a tool within the AI Agent Service, your workflow must meet the following criteria:

- Consumption SKU: Currently, only Logic Apps on the Consumption plan are supported.
- Request trigger: The workflow must begin with a Request trigger so that it can be invoked by the agent via a REST call (a sketch of such a call appears at the end of this section).
- Tool description: Each workflow should include a clear, concise description to help the agent understand its purpose and appropriate usage.

Getting started with Logic Apps in AI Foundry

There are two ways to bring Logic Apps into your agents' toolset. You can find step-by-step instructions in the docs. To summarize:

Use prebuilt Microsoft-authored templates: Select from a library of curated Logic Apps templates designed for agent scenarios. After selecting a template, configure the tool's name and description, authenticate any services used in the workflow, and set the required parameters. Once configured, the workflow is deployed to your selected subscription and resource group, ready to be used by your agent.

Import existing workflows: If you already have Logic Apps powering key operations in your business, you can import them directly—go to the Your Actions tab, select your existing workflow, and provide a name and description for agent usage. This makes it easy to extend your existing APIs and business logic to AI agents—no need to start from scratch.

Tool calling demo

In this demo video we build an AI agent that can answer questions about GitHub issues and send an email report about them. The opportunities to unlock scenarios are endless, and we can't wait to hear from you.
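Because a tool workflow is fronted by a Request trigger, the agent's invocation ultimately reduces to an HTTPS POST against the workflow's callback URL. As a rough sketch of that call (the URL shape, query parameters, and payload below are placeholders—copy the real callback URL from your own workflow's trigger):

```python
import requests

# Hypothetical callback URL of a Consumption Logic App's Request trigger.
# The real URL (including the SAS "sig" parameter) comes from the Azure portal.
CALLBACK_URL = (
    "https://prod-00.westus.logic.azure.com/workflows/<workflow-id>"
    "/triggers/manual/paths/invoke?api-version=2016-10-01&sig=<sas-token>"
)

# Example input matching the Request trigger's JSON schema (illustrative).
payload = {"repository": "contoso/sample-repo", "label": "bug"}

resp = requests.post(CALLBACK_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.status_code, resp.text)
```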
Logic Apps as a trigger for Agents in Foundry

We're excited to launch the AI Agent Service connector in Logic Apps—making it easier than ever to bring autonomy to your business processes. With this connector, you can now use any Logic Apps trigger—an HTTP request, a Service Bus message, a file drop, or a scheduled event—to kick off a workflow that invokes an AI agent. This means your agents can respond to real-world events in near real time, making decisions and taking actions based on dynamic context. Whether it's processing a new order, reviewing a document, or triaging support tickets, Logic Apps + AI Agent Service gives you the power to build truly autonomous, intelligent workflows.

Start Building

Ready to try it out? Check out the documentation for step-by-step guidance on using Logic Apps as tools in the AI Agent Service. We'd love to hear what you build! Try the feature and share your feedback—your input helps shape the future of AI-powered automation in Azure.

🧾 Automate Invoice data extraction with Logic Apps and Document Intelligence
📘 Scenario: Modernizing invoice processing with AI

In many organizations, invoices still arrive as scanned documents, email attachments, or paper-based handoffs. Extracting data from these formats—invoice number, vendor, total amount, line items—often involves manual effort, custom scripts, or brittle OCR logic. This scenario demonstrates how you can use Azure Logic Apps, the new Analyze Document Details action, and Azure OpenAI to automatically convert invoice images into structured data and store them in Azure Cosmos DB.

💡 What's new and why it matters

The key enabler here is the Analyze Document Details action—now available in Logic Apps. With this action, you can:

- Send any document image (JPG, PNG, PDF)
- Receive a clean markdown-style output of all recognized content
- Combine that with Azure OpenAI to extract structured fields without training a custom model

This simplifies what used to be a complex task: reading data from invoices and inserting usable records into systems like Cosmos DB, SQL, or ERP platforms like Dynamics.

🔭 What this Logic App does

With just a few built-in actions, you can turn unstructured invoice documents into structured, searchable records.

📸 Logic App Overview

✅ Pre-requisites

To try this walkthrough, make sure you have the following set up:

- An Azure Logic Apps Standard workflow
- An Azure Cosmos DB for NoSQL database + container
- An Azure OpenAI deployment (we used gpt-4o)
- A Blob Storage container (where invoice files will be dropped)

💡 Try it yourself: 👉 Sample logic app

🧠 Step-by-Step: Inside the Logic App

Here's what each action in the Logic App does, and how it's configured:

⚡ Trigger: When a blob is added or updated

Starts the workflow when a new invoice image is dropped into a Blob container. Blob path: the name of the blob container.

📸 Blob trigger configuration

🔍 Read blob content

Reads the raw image or PDF content to pass into the AI models. Container: invoices. Blob name: dynamically fetched from the trigger output.

📸 Read blob configuration

🧠 Analyze document details (✨ New!)

This is the core of the scenario—and the feature we're excited to highlight. The new "Analyze Document Details" action in Logic Apps sends any document image (JPG, PNG, PDF) to Azure Document Intelligence and returns a textual markdown representation of its contents—without needing to build a custom model.

📸 Example invoice (Source: InvoiceSample)

💡 This action is ideal when you want to extract high-quality text from messy, unstructured images—including scanned receipts, handwritten forms, or photographed documents—and immediately work with it downstream as markdown.

- Model: prebuilt-invoice
- Content: file content from blob
- Output: text (or markdown) block containing all detected invoice fields and layout information

📸 Analyze document details configuration

✂️ Parse document

Extracts the "text" field from the Document Intelligence output. This becomes the prompt input for the next step.

📸 Parse document configuration

💬 Get chat completions

This step calls your Azure OpenAI deployment (in this case, gpt-4o) to extract clean, structured JSON from the text generated earlier.

System message:

You are an intelligent invoice parser. Given the following invoice text, extract the key fields as JSON. Return only the JSON in proper notation, do not add any markdown text or anything extra. Fields: invoice_number, vendor, invoice_date, due_date, total_amount, and line_items if available.

User message: the parsed text from the "Parse a document" step (referenced as Parsed result text in your logic app).

Temperature: 0, to ensure consistent, reliable output from the model.

📤 The model returns a clean JSON response, ready to be parsed and inserted into a database.

📸 Get chat completions configuration
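If you want to iterate on this extraction prompt outside the workflow, a small script against the same deployment is handy. A minimal sketch using the openai Python SDK, assuming environment variables for the endpoint and key (the variable names and deployment name are illustrative):

```python
import os
from openai import AzureOpenAI

# Connection details are assumptions; point these at your own resource.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

SYSTEM = (
    "You are an intelligent invoice parser. Given the following invoice text, "
    "extract the key fields as JSON. Return only the JSON in proper notation. "
    "Fields: invoice_number, vendor, invoice_date, due_date, total_amount, "
    "and line_items if available."
)

invoice_text = "..."  # markdown output from the Analyze Document Details action

resp = client.chat.completions.create(
    model="gpt-4o",          # your deployment name
    temperature=0,           # deterministic output, matching the workflow setting
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": invoice_text},
    ],
)
print(resp.choices[0].message.content)
```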
📦 Parse JSON

Converts the raw OpenAI response string into a JSON object.

- Content: chat completion outputs
- Schema: use a sample schema that matches your expected invoice fields to generate a sample payload

📸 Parse JSON configuration

🧱 Compose – format for Cosmos DB

Use the dynamic outputs from the Parse JSON action to construct the JSON body to be passed into Cosmos DB.

📸 Compose action configuration

🗃️ Create or update item

Inserts the structured document into Cosmos DB.

- Database ID: InvoicesDB
- Container ID: Invoices
- Partition Key: @{body('Parse_JSON')?['invoice_number']}
- Item: @outputs('Compose')
- Is Upsert: true

📸 CosmosDB action configuration

✅ Test output

As shown below, you'll see a successful end-to-end run—starting from the file upload trigger, through OpenAI extraction, all the way to inserting the final structured document into Cosmos DB.

📸 Logic App workflow run output

💬 Feedback

Let us know what other kinds of demos and content you would like to see in the comments.
🤖 AI Procurement assistant using prompt templates in Standard Logic Apps

📘 Introduction

Answering procurement-related questions doesn't have to be a manual process. With the new Chat Completions using Prompt Template action in Logic Apps (Standard), you can build an AI-powered assistant that understands context, reads structured data, and responds like a knowledgeable teammate.

🏢 Scenario: AI assistant for IT procurement

Imagine an employee wants to know: "When did we last order laptops for new hires in IT?" Instead of forwarding this to the procurement team, a Logic App can:

- Accept the question
- Look up catalog details and past orders
- Pass all the info to a prompt template
- Generate a polished, AI-powered response

🧠 What Are Prompt Templates?

Prompt templates are reusable text templates that use Jinja2 syntax to dynamically inject data at runtime. In Logic Apps, this means you can:

- Define a prompt with placeholders like {{ customer.orders }}
- Automatically populate it with outputs from earlier actions
- Generate consistent, structured prompts with minimal effort

✨ Benefits of Using Prompt Templates in Logic Apps

- Consistency: Centralized prompt logic instead of embedding prompt strings in each action.
- Reusability: Easily apply the same prompt across multiple workflows.
- Maintainability: Tweak prompt logic in one place without editing the entire flow.
- Dynamic control: Logic Apps inputs (e.g., values from a form, database, or API) flow right into the template.

This lets you create powerful, adaptable AI-driven flows without duplicating effort—perfect for scalable enterprise automation.

💡 Try it Yourself

Grab the sample prompt template and sample inputs from our GitHub repo and follow along. 👉 Sample logic app

🧰 Prerequisites

To get started, make sure you have:

- A Logic App (Standard) resource in Azure
- An Azure OpenAI resource with a deployed GPT model (e.g., GPT-3.5 or GPT-4)

💡 You'll configure your OpenAI API connection during the workflow setup.

🔧 Build the Logic App workflow

Here's how to build the flow in Logic Apps using the Prompt Template action. This setup assumes you're simulating procurement data with test inputs.

📌 Step 0: Start by creating a Stateful workflow in your Logic App (Standard) resource. Choose "Stateful" when prompted during workflow creation, so that run history and variables are preserved for testing.

📸 Creating a new Stateful Logic App (Standard) workflow

📌 Trigger: "When an HTTP request is received"

📌 Step 1: Add three Compose actions to store your test data.

documents: stores your internal product catalog entries.

[
  {
    "id": "1",
    "title": "Dell Latitude 5540 Laptop",
    "content": "Intel i7, 16GB RAM, 512GB SSD, standard issue for IT new hire onboarding"
  },
  {
    "id": "2",
    "title": "Docking Station",
    "content": "Dell WD19S docking stations for dual monitor setup"
  }
]

📸 Compose action for documents input

question: holds the employee's natural-language question.

[
  {
    "role": "user",
    "content": "When did we last order laptops for new hires in IT?"
  }
]
📸 Compose action for question input

customer: includes the employee profile and past procurement orders.

{
  "firstName": "Alex",
  "lastName": "Taylor",
  "department": "IT",
  "employeeId": "E12345",
  "orders": [
    {
      "name": "Dell Latitude 5540 Laptop",
      "description": "Ordered 15 units for Q1 IT onboarding",
      "date": "2024/02/20"
    },
    {
      "name": "Docking Station",
      "description": "Bulk purchase of 20 Dell WD19S docking stations",
      "date": "2024/01/10"
    }
  ]
}

📸 Compose action for customer input

📌 Step 2: Add the "Chat Completions using Prompt Template" action

📸 OpenAI connector view

💡 Tip: Always prefer the in-app (built-in) connector over the managed version when choosing the Azure OpenAI operation. Built-in connectors allow better control over authentication and reduce latency by running natively inside the Logic App runtime.

📌 Step 3: Connect to Azure OpenAI

Navigate to your Azure OpenAI resource and click on Keys and Endpoint to connect using key-based authentication.

📸 Create Azure OpenAI connection

📝 Prompt template: Building the message for chat completions

Once you've added the Get chat completions using Prompt Template action, here's how to set it up:

1. Deployment Identifier: Enter the name of your deployed Azure OpenAI model (e.g., gpt-4o). 📌 This should match exactly what you configured in your Azure OpenAI resource.

2. Prompt Template: This is the structured instruction the model will use. Here's the full template used in the action—note that the variable names exactly match the Compose action names in your Logic App: documents, question, and customer.

system:
You are an AI assistant for Contoso's internal procurement team. You help employees get quick answers about previous orders and product catalog details. Be brief, professional, and use markdown formatting when appropriate. Include the employee's name in your response for a personal touch.

# Product Catalog
Use this documentation to guide your response. Include specific item names and any relevant descriptions.
{% for item in documents %}
Catalog Item ID: {{item.id}}
Name: {{item.title}}
Description: {{item.content}}
{% endfor %}

# Order History
Here is the employee's procurement history to use as context when answering their question.
{% for item in customer.orders %}
Order Item: {{item.name}}
Details: {{item.description}} — Ordered on {{item.date}}
{% endfor %}

# Employee Info
Name: {{customer.firstName}} {{customer.lastName}}
Department: {{customer.department}}
Employee ID: {{customer.employeeId}}

# Question
The employee has asked the following:
{% for item in question %}
{{item.role}}: {{item.content}}
{% endfor %}

Based on the product documentation and order history above, please provide a concise and helpful answer to their question. Do not fabricate information beyond the provided inputs.

📸 Prompt template action view

3. Add your prompt template variables: Scroll down to Advanced parameters → switch the dropdown to Prompt Template Variable. Then add a new item for each Compose action and reference it dynamically from previous outputs: documents, question, customer.

📸 Prompt template variable references
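Because the template is standard Jinja2, you can sanity-check it locally before wiring it into the workflow. A small test harness, assuming the test payloads above are pasted in as Python literals (this is just for local iteration—Logic Apps renders the template itself at runtime):

```python
from jinja2 import Template

# The same test data as the Compose actions above (abbreviated).
documents = [
    {"id": "1", "title": "Dell Latitude 5540 Laptop",
     "content": "Intel i7, 16GB RAM, 512GB SSD, standard issue for IT new hire onboarding"},
    {"id": "2", "title": "Docking Station",
     "content": "Dell WD19S docking stations for dual monitor setup"},
]
question = [{"role": "user",
             "content": "When did we last order laptops for new hires in IT?"}]
customer = {
    "firstName": "Alex", "lastName": "Taylor", "department": "IT",
    "employeeId": "E12345",
    "orders": [{"name": "Dell Latitude 5540 Laptop",
                "description": "Ordered 15 units for Q1 IT onboarding",
                "date": "2024/02/20"}],
}

# Paste the full prompt template from the article between the triple quotes;
# a short excerpt is shown here to keep the sketch compact.
template = Template("""{% for item in documents %}Catalog Item ID: {{item.id}}
{% endfor %}Question: {{ question[0].content }}""")

print(template.render(documents=documents, question=question, customer=customer))
```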
🔍 How the template works

- {{ customer.firstName }} {{ customer.lastName }} — displays the employee name
- {{ customer.department }} — adds department context
- {{ question[0].content }} — injects the user's question from the Compose action named question
- {% for item in documents %} — loops through catalog data from the Compose action named documents
- {% for item in customer.orders %} — loops through the employee's order history from customer

Each of these values is dynamically pulled from your Logic App Compose actions—no code, no external services needed. You can apply the exact same approach to reference data from any connector, like a SharePoint list, SQL row, email body, or even AI Search results. Just map those outputs into the prompt template and let Logic Apps do the rest.

✅ Final Output

When you run the flow, the model might respond with something like:

"The last order for Dell Latitude 5540 laptops was placed on February 20, 2024 — 15 units were procured for IT new hire onboarding."

This is based entirely on the structured context passed in through your Logic App—no extra fine-tuning required.

📸 Output from run history

💬 Feedback

Let us know what other kinds of demos and content you would like to see using this form.

Introducing GenAI Gateway Capabilities in Azure API Management
We are thrilled to announce GenAI Gateway capabilities in Azure API Management—a set of features designed specifically for GenAI use cases.

Azure OpenAI offers a diverse set of tools, providing access to advanced models from GPT-3.5 Turbo to GPT-4 and GPT-4 Vision, enabling developers to build intelligent applications that can understand, interpret, and generate human-like text and images. One of the main resources in Azure OpenAI is tokens. Azure OpenAI assigns quota to your model deployments, expressed in tokens-per-minute (TPM), which is then distributed across your model consumers—different applications, developer teams, departments within the company, and so on.

Starting with a single application integration, Azure makes it easy to connect your app to Azure OpenAI: your intelligent application connects directly using an API key, with a TPM limit configured at the model deployment level. However, as your application portfolio grows, you end up with multiple apps calling one or more Azure OpenAI endpoints, deployed as pay-as-you-go or Provisioned Throughput Units (PTU) instances. That comes with certain challenges:

- How can we track token usage across multiple applications?
- How can we cross-charge multiple applications/teams that use Azure OpenAI models?
- How can we make sure that a single app does not consume the whole TPM quota, leaving other apps with no option to use Azure OpenAI models?
- How can we make sure that the API key is securely distributed across multiple applications?
- How can we distribute load across multiple Azure OpenAI endpoints?
- How can we make sure that PTUs are used first before falling back to pay-as-you-go instances?

To tackle these operational and scalability challenges, Azure API Management has built a set of GenAI Gateway capabilities:

- Azure OpenAI Token Limit policy
- Azure OpenAI Emit Token Metric policy
- Load balancer and circuit breaker
- Import Azure OpenAI as an API
- Azure OpenAI Semantic Caching policy (in public preview)

Azure OpenAI Token Limit Policy

The Azure OpenAI Token Limit policy allows you to manage and enforce limits per API consumer based on Azure OpenAI token usage. With this policy you can set limits expressed in tokens-per-minute (TPM). It provides the flexibility to assign token-based limits on any counter key, such as subscription key, IP address, or any arbitrary key defined through a policy expression. The policy can also pre-calculate prompt tokens on the Azure API Management side, minimizing unnecessary requests to the Azure OpenAI backend if the prompt already exceeds the limit. Learn more about this policy here.

Azure OpenAI Emit Token Metric Policy

This policy lets you send token usage metrics to Azure Application Insights, providing an overview of the utilization of Azure OpenAI models across multiple applications or API consumers. It captures prompt, completion, and total token usage and sends them to the Application Insights namespace of your choice. Moreover, you can configure or select from pre-defined dimensions to split token usage metrics, enabling granular analysis by subscription ID, IP address, or any custom dimension of your choice. Learn more about this policy here.
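For intuition about what "pre-calculating prompt tokens" means, you can reproduce the count client-side with the tiktoken library. A hedged sketch—this only approximates what the gateway meters, the exact tokenizer depends on the model, and the real enforcement happens inside the API Management policy:

```python
import tiktoken

TPM_LIMIT = 10_000  # example tokens-per-minute budget for one consumer

def estimate_prompt_tokens(prompt: str, model: str = "gpt-4") -> int:
    """Approximate the token count the gateway would meter for this prompt."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(prompt))

prompt = "Summarize last quarter's sales by region."
tokens = estimate_prompt_tokens(prompt)

# Reject locally before ever hitting the backend, mirroring the policy's
# pre-calculation behavior.
if tokens > TPM_LIMIT:
    raise RuntimeError(f"Prompt ({tokens} tokens) exceeds the {TPM_LIMIT} TPM budget")
print(f"Prompt uses ~{tokens} tokens")
```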
Load Balancer and Circuit Breaker

The load balancer and circuit breaker features allow you to spread load across multiple Azure OpenAI endpoints. With support for round-robin, weighted (new), and priority-based (new) load balancing, you can define a load distribution strategy that matches your specific requirements. Define priorities within the load balancer configuration to ensure optimal utilization of specific Azure OpenAI endpoints, particularly those purchased as PTUs. In the event of a disruption, a circuit breaker mechanism kicks in, seamlessly transitioning to lower-priority instances based on predefined rules. The updated circuit breaker features dynamic trip duration, using values from the retry-after header provided by the backend. This ensures precise and timely recovery of the backends, maximizing the utilization of your priority backends. Learn more about the load balancer and circuit breaker here.

Import Azure OpenAI as an API

The new Import Azure OpenAI as an API experience in Azure API Management provides a single-click way to import your existing Azure OpenAI endpoints as APIs. It streamlines onboarding by automatically importing the OpenAPI schema for Azure OpenAI and setting up authentication to the endpoint using managed identity, removing the need for manual configuration. Within the same experience, you can pre-configure Azure OpenAI policies, such as token limit and emit token metric, enabling swift and convenient setup. Learn more about Import Azure OpenAI as an API here.

Azure OpenAI Semantic Caching policy

The Azure OpenAI Semantic Caching policy lets you optimize token usage by caching completions for prompts with similar meaning. The semantic caching mechanism uses Azure Redis Enterprise, or any other external cache compatible with RediSearch, onboarded to Azure API Management. Using the Azure OpenAI Embeddings model, this policy identifies semantically similar prompts and stores their respective completions in the cache. Reusing completions reduces token consumption and improves response performance. Learn more about the semantic caching policy here.

Get Started with GenAI Gateway Capabilities in Azure API Management

We're excited to introduce these GenAI Gateway capabilities in Azure API Management, designed to empower developers to efficiently manage and scale applications that leverage Azure OpenAI. Get started today and bring your intelligent application development to the next level with Azure API Management.