🧾 Automate Invoice data extraction with Logic Apps and Document Intelligence
📘 Scenario: Modernizing invoice processing with AI

In many organizations, invoices still arrive as scanned documents, email attachments, or paper-based handoffs. Extracting data from these formats — invoice number, vendor, total amount, line items — often involves manual effort, custom scripts, or brittle OCR logic. This scenario demonstrates how you can use Azure Logic Apps, the new Analyze Document Details action, and Azure OpenAI to automatically convert invoice images into structured data and store them in Azure Cosmos DB.

💡 What's new and why it matters

The key enabler here is the Analyze Document Details action — now available in Logic Apps. With this action, you can:
- Send any document image (JPG, PNG, PDF)
- Receive a clean markdown-style output of all recognized content
- Combine that with Azure OpenAI to extract structured fields without training a custom model

This simplifies what used to be a complex task: reading data from invoices and inserting it into systems like Cosmos DB, SQL, or ERP platforms like Dynamics.

🔭 What this Logic App does

With just a few built-in actions, you can turn unstructured invoice documents into structured, searchable records. Here's what the flow looks like:

📸 Logic App Overview

✅ Prerequisites

To try this walkthrough, make sure you have the following set up:
- An Azure Logic Apps Standard workflow
- An Azure Cosmos DB for NoSQL database + container
- An Azure OpenAI deployment (we used gpt-4o)
- A Blob Storage container (where invoice files will be dropped)

💡 Try it yourself
👉 Sample logic app

🧠 Step-by-Step: Inside the Logic App

Here's what each action in the Logic App does, and how it's configured:

⚡ Trigger: When a blob is added or updated

Starts the workflow when a new invoice image is dropped into a Blob container.
- Blob path: the name of the blob container

📸 Blob trigger configuration

🔍 Read blob content

Reads the raw image or PDF content to pass into the AI models.
- Container: invoices
- Blob name: dynamically fetched from the trigger output

📸 Read blob configuration

🧠 Analyze document details (✨ New!)

This is the core of the scenario — and the feature we're excited to highlight. The new "Analyze Document Details" action in Logic Apps allows you to send any document image (JPG, PNG, PDF) to Azure Document Intelligence and receive a textual markdown representation of its contents — without needing to build a custom model.

📸 Example invoice (Source: InvoiceSample)

💡 This action is ideal for scenarios where you want to extract high-quality text from messy, unstructured images — including scanned receipts, handwritten forms, or photographed documents — and immediately work with it downstream using markdown.
- Model: prebuilt-invoice
- Content: file content from blob
- Output: text (or markdown) block containing all detected invoice fields and layout information

📸 Analyze document details configuration

✂️ Parse document

Extracts the "text" field from the Document Intelligence output. This becomes the prompt input for the next step.

📸 Parse document configuration
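For a sense of what this extracted text looks like, a simple invoice might come back roughly as the sketch below. This is an illustrative, made-up sample (hypothetical vendor and amounts), not actual Document Intelligence output; the real result follows the layout detected in your document.

```markdown
# INVOICE

Contoso Ltd.
Invoice number: INV-100
Invoice date: 2024-03-01
Due date: 2024-03-31

| Description         | Qty | Unit price | Amount  |
|---------------------|-----|------------|---------|
| Consulting services | 10  | 100.00     | 1000.00 |
| Travel expenses     | 1   | 234.56     | 234.56  |

Total: 1234.56
```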
💬 Get chat completions

This step calls your Azure OpenAI deployment (in this case, gpt-4) to extract clean, structured JSON from the text generated earlier.

System Message:
You are an intelligent invoice parser. Given the following invoice text, extract the key fields as JSON. Return only the JSON in proper notation, do not add any markdown text or anything extra.
Fields: invoice_number, vendor, invoice_date, due_date, total_amount, and line_items if available

User Message: Uses the parsed text from the "Parse a document" step (referenced as Parsed result text in your logic app)

Temperature: 0 (ensures consistent, reliable output from the model)

📤 The model returns a clean JSON response, ready to be parsed and inserted into a database.

📸 Get chat completions configuration

📦 Parse JSON

Converts the raw OpenAI response string into a JSON object.
- Content: Chat completion outputs
- Schema: Use a sample schema that matches your expected invoice fields to generate a sample payload.

📸 Parse JSON configuration

🧱 Compose – format for Cosmos DB

Use the dynamic outputs from the Parse JSON action to construct the JSON body that will be passed into Cosmos DB.

📸 Compose action configuration

🗃️ Create or update item

Inserts the structured document into Cosmos DB.
- Database ID: InvoicesDB
- Container ID: Invoices
- Partition Key: @{body('Parse_JSON')?['invoice_number']}
- Item: @outputs('Compose')
- Is Upsert: true

📸 CosmosDB action configuration
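To illustrate the end result, the item upserted into Cosmos DB could look something like the hypothetical payload below. The field names follow the prompt above; the id property and all values are made up, and the exact shape is whatever your Compose action assembles. Note that the container's partition key path needs to line up with the field used as the partition key value (invoice_number in this walkthrough).

```json
{
  "id": "INV-100",
  "invoice_number": "INV-100",
  "vendor": "Contoso Ltd.",
  "invoice_date": "2024-03-01",
  "due_date": "2024-03-31",
  "total_amount": 1234.56,
  "line_items": [
    { "description": "Consulting services", "quantity": 10, "unit_price": 100.00, "amount": 1000.00 },
    { "description": "Travel expenses", "quantity": 1, "unit_price": 234.56, "amount": 234.56 }
  ]
}
```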
✅ Test output

As shown below, you'll see a successful end-to-end run — starting from the file upload trigger, through OpenAI extraction, all the way to inserting the final structured document into Cosmos DB.

📸 Logic App workflow run output

💬 Feedback

Let us know what other kinds of demos and content you would like to see in the comments.

🤖 AI Procurement assistant using prompt templates in Standard Logic Apps

📘 Introduction

Answering procurement-related questions doesn't have to be a manual process. With the new Chat Completions using Prompt Template action in Logic Apps (Standard), you can build an AI-powered assistant that understands context, reads structured data, and responds like a knowledgeable teammate.

🏢 Scenario: AI assistant for IT procurement

Imagine an employee wants to know: "When did we last order laptops for new hires in IT?" Instead of forwarding this to the procurement team, a Logic App can:
- Accept the question
- Look up catalog details and past orders
- Pass all the info to a prompt template
- Generate a polished, AI-powered response

🧠 What Are Prompt Templates?

Prompt Templates are reusable text templates that use Jinja2 syntax to dynamically inject data at runtime. In Logic Apps, this means you can:
- Define a prompt with placeholders like {{ customer.orders }}
- Automatically populate it with outputs from earlier actions
- Generate consistent, structured prompts with minimal effort

✨ Benefits of Using Prompt Templates in Logic Apps

- Consistency: Centralized prompt logic instead of embedding prompt strings in each action.
- Reusability: Easily apply the same prompt across multiple workflows.
- Maintainability: Tweak prompt logic in one place without editing the entire flow.
- Dynamic control: Logic Apps inputs (e.g., values from a form, database, or API) flow right into the template.

This allows you to create powerful, adaptable AI-driven flows without duplicating effort — making it perfect for scalable enterprise automation.

💡 Try it Yourself

Grab the sample prompt template and sample inputs from our GitHub repo and follow along.
👉 Sample logic app

🧰 Prerequisites

To get started, make sure you have:
- A Logic App (Standard) resource in Azure
- An Azure OpenAI resource with a deployed GPT model (e.g., GPT-3.5 or GPT-4)

💡 You'll configure your OpenAI API connection during the workflow setup.

🔧 Build the Logic App workflow

Here's how to build the flow in Logic Apps using the Prompt Template action. This setup assumes you're simulating procurement data with test inputs.

📌 Step 0: Start by creating a Stateful Workflow in your Logic App (Standard) resource. Choose "Stateful" when prompted during workflow creation. This allows the run history and variables to be preserved for testing.

📸 Creating a new Stateful Logic App (Standard) workflow

📌 Trigger: "When an HTTP request is received"

📌 Step 1: Add three Compose actions to store your test data.

documents: This stores your internal product catalog entries.

    [
      {
        "id": "1",
        "title": "Dell Latitude 5540 Laptop",
        "content": "Intel i7, 16GB RAM, 512GB SSD, standard issue for IT new hire onboarding"
      },
      {
        "id": "2",
        "title": "Docking Station",
        "content": "Dell WD19S docking stations for dual monitor setup"
      }
    ]

📸 Compose action for documents input

question: This holds the employee's natural language question.

    [
      {
        "role": "user",
        "content": "When did we last order laptops for new hires in IT?"
      }
    ]
📸 Compose action for question input

customer: This includes the employee profile and past procurement orders.

    {
      "firstName": "Alex",
      "lastName": "Taylor",
      "department": "IT",
      "employeeId": "E12345",
      "orders": [
        {
          "name": "Dell Latitude 5540 Laptop",
          "description": "Ordered 15 units for Q1 IT onboarding",
          "date": "2024/02/20"
        },
        {
          "name": "Docking Station",
          "description": "Bulk purchase of 20 Dell WD19S docking stations",
          "date": "2024/01/10"
        }
      ]
    }

📸 Compose action for customer input

📌 Step 2: Add the "Chat Completions using Prompt Template" action

📸 OpenAI connector view

💡 Tip: Always prefer the in-app connector (built-in) over the managed version when choosing the Azure OpenAI operation. Built-in connectors allow better control over authentication and reduce latency by running natively inside the Logic App runtime.

📌 Step 3: Connect to Azure OpenAI

Navigate to your Azure OpenAI resource and click on Keys and Endpoint for connecting using key-based authentication.

📸 Create Azure OpenAI connection

📝 Prompt template: Building the message for chat completions

Once you've added the Get chat completions using Prompt Template action, here's how to set it up:

1. Deployment Identifier

Enter the name of your deployed Azure OpenAI model here (e.g., gpt-4o).
📌 This should match exactly with what you configured in your Azure OpenAI resource.

2. Prompt Template

This is the structured instruction that the model will use. Here's the full template used in the action — note that the variable names exactly match the Compose action names in your Logic App: documents, question, and customer.

    system:
    You are an AI assistant for Contoso's internal procurement team. You help employees get quick answers about previous orders and product catalog details. Be brief, professional, and use markdown formatting when appropriate. Include the employee's name in your response for a personal touch.

    # Product Catalog
    Use this documentation to guide your response. Include specific item names and any relevant descriptions.
    {% for item in documents %}
    Catalog Item ID: {{item.id}}
    Name: {{item.title}}
    Description: {{item.content}}
    {% endfor %}

    # Order History
    Here is the employee's procurement history to use as context when answering their question.
    {% for item in customer.orders %}
    Order Item: {{item.name}}
    Details: {{item.description}} — Ordered on {{item.date}}
    {% endfor %}

    # Employee Info
    Name: {{customer.firstName}} {{customer.lastName}}
    Department: {{customer.department}}
    Employee ID: {{customer.employeeId}}

    # Question
    The employee has asked the following:
    {% for item in question %}
    {{item.role}}: {{item.content}}
    {% endfor %}

    Based on the product documentation and order history above, please provide a concise and helpful answer to their question. Do not fabricate information beyond the provided inputs.

📸 Prompt template action view
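To make the substitution concrete, here is a rough sketch of what the prompt could look like once Logic Apps renders the template with the three test inputs above. It is only meant to show how the Jinja2 placeholders and loops expand; the exact whitespace and formatting of the real rendered message may differ.

```text
system:
You are an AI assistant for Contoso's internal procurement team. [...]

# Product Catalog
Use this documentation to guide your response. Include specific item names and any relevant descriptions.
Catalog Item ID: 1
Name: Dell Latitude 5540 Laptop
Description: Intel i7, 16GB RAM, 512GB SSD, standard issue for IT new hire onboarding
Catalog Item ID: 2
Name: Docking Station
Description: Dell WD19S docking stations for dual monitor setup

# Order History
Here is the employee's procurement history to use as context when answering their question.
Order Item: Dell Latitude 5540 Laptop
Details: Ordered 15 units for Q1 IT onboarding — Ordered on 2024/02/20
Order Item: Docking Station
Details: Bulk purchase of 20 Dell WD19S docking stations — Ordered on 2024/01/10

# Employee Info
Name: Alex Taylor
Department: IT
Employee ID: E12345

# Question
The employee has asked the following:
user: When did we last order laptops for new hires in IT?

Based on the product documentation and order history above, please provide a concise and helpful answer to their question. Do not fabricate information beyond the provided inputs.
```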
3. Add your prompt template variables

Scroll down to Advanced parameters → switch the dropdown to Prompt Template Variable. Then add a new item for each Compose action and reference it dynamically from previous outputs:
- documents
- question
- customer

📸 Prompt template variable references

🔍 How the template works

| Template element | What it does |
| --- | --- |
| {{ customer.firstName }} {{ customer.lastName }} | Displays employee name |
| {{ customer.department }} | Adds department context |
| {{ question[0].content }} | Injects the user's question from the Compose action named question |
| {% for doc in documents %} | Loops through catalog data from the Compose action named documents |
| {% for order in customer.orders %} | Loops through employee's order history from customer |

Each of these values is dynamically pulled from your Logic App Compose actions — no code, no external services needed. You can apply the exact same approach to reference data from any connector, like a SharePoint list, SQL row, email body, or even AI Search results. Just map those outputs into the Prompt Template and let Logic Apps do the rest.

✅ Final Output

When you run the flow, the model might respond with something like:

"The last order for Dell Latitude 5540 laptops was placed on February 20, 2024 — 15 units were procured for IT new hire onboarding."

This is based entirely on the structured context passed in through your Logic App — no extra fine-tuning required.

📸 Output from run history

💬 Feedback

Let us know what other kinds of demos and content you would like to see using this form.
Concurrency support for Service Bus built-in connector in Logic Apps Standard

In this post, we'll cover the recent enhancements in the built-in (in-app) Service Bus connector in Logic Apps Standard. Specifically, we'll cover the support for concurrency for the Service Bus trigger...
Collect ETW trace in Logic App Standard

ETW stands for Event Tracing for Windows. It's a powerful, built-in tracing mechanism within the Windows operating system that allows developers and system administrators to capture detailed information about the behavior of applications, the system itself, and even kernel-mode drivers.

Usually we can use the Logman tool to collect ETW traces, but this tool is not allowed in the Logic App Kudu environment, so the solution was to come up with a code-based approach.

How to collect the ETW using C#?

In a previous article I built a small tool that collects ETW events from HTTP operations. The whole idea is to implement a class of type EventListener.

How this tool works

Download the ETW collector flow and the C# file from GitHub. In the Compose action, modify the URL to your test-case URL, where you are calling an HTTP endpoint, SFTP, SMTP, etc. Then run the collector flow. After the flow has finished, open Kudu, and in the log folder you will find the log file.

What the C# code does

It creates the ETW listener and then calls the child logic app, which is the logic app that we will test. Some events are ignored because they are very noisy, and some of them will cause a stack overflow error, like System.Data.DataCommonEventSource:

    private readonly Hashtable ignoredList = new Hashtable
    {
        { "Private.InternalDiagnostics.System.Net.Http_decrypt", null },
        { "System.Data.DataCommonEventSource_Trace", null },
        { "System.Buffers.ArrayPoolEventSource", null },
        { "System.Threading.Tasks.TplEventSource", null },
        { "Microsoft-ApplicationInsights-Data", null }
    };

The tool then puts all the logs in a DataTable and finally converts this data table to a text file.

How to open the log file?

I recommend using something like Klogg (variar/klogg) or plain VS Code.
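For readers who want a feel for the pattern, here is a heavily simplified, hypothetical sketch of an EventListener-based collector. It is not the actual tool from the GitHub repo (the real collector also filters the noisy providers listed above, buffers events into a DataTable, and calls the child logic app), but it shows the core idea of subscribing to event sources and writing what they emit to a text file.

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics.Tracing;
using System.IO;

// Minimal sketch: subscribe to every EventSource in the process (except a few
// noisy providers), buffer the events in memory, and dump them to a text file.
public sealed class SimpleEtwListener : EventListener
{
    private static readonly string[] IgnoredSources =
    {
        "System.Threading.Tasks.TplEventSource",
        "System.Buffers.ArrayPoolEventSource"
    };

    private readonly ConcurrentQueue<string> lines = new ConcurrentQueue<string>();

    protected override void OnEventSourceCreated(EventSource eventSource)
    {
        // Skip providers that flood the trace or cause recursion issues.
        if (Array.IndexOf(IgnoredSources, eventSource.Name) >= 0)
            return;

        EnableEvents(eventSource, EventLevel.Verbose);
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        string payload = eventData.Payload == null ? "" : string.Join(",", eventData.Payload);
        lines.Enqueue($"{DateTime.UtcNow:o} {eventData.EventSource.Name}/{eventData.EventName}: {payload}");
    }

    // Call this after the scenario under test has completed.
    public void Flush(string path) => File.WriteAllLines(path, lines);
}
```

Creating the listener before invoking the scenario under test, and calling Flush afterwards with a path under the Logic App's log folder, would produce a text file similar to the one the collector flow writes.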
Scaling Logic Apps Standard – Sustained Message Processing System

In the previous blog of this series, we discussed how Logic Apps Standard can be used to process high-throughput event data at a sustained rate over long periods of time. In this blog, we will see how Logic Apps Standard can be used to process high-throughput message data, which can facilitate the decoupling of applications and services. We simulate a real-life use case where messages are sent to a Service Bus queue at a sustained rate for processing, and we use a templated Logic App workflow to process the messages in real time. The business logic in the templated workflow can easily be replaced by the customer with actions that encompass their unique processing of the relevant messaging information. To better showcase the message processing capabilities, we will discuss two scaling dimensions: vertical scaling (varying the performance of service plans) and horizontal scaling (varying the number of service plan instances).

Vertical scaling capabilities of the Logic App Standard with Built-In Service Bus Connector

In this section, we will investigate the vertical scaling capabilities of the Logic App Service Bus connector, conducting experiments to find the maximum message throughput supported by each of the Standard Logic App SKUs from WS1 to WS3. The workflow uses the Service Bus built-in trigger, so messages are promptly picked up and processed at par with the ingress rate, using a templated workflow like the one shown below, available at our Template Gallery. Customers can replace the Business Logic and Compensation Logic to handle their business scenarios.

For this investigation, we used the out-of-the-box Logic Apps Standard configuration for scaling:
- 1 always-ready instance
- 20 maximum burst instances

We also used the default trigger batch size of 50.

Experiment Methodology

For each experiment we selected one of the available SKUs (WS1, WS2, WS3) and supplied a steady influx of X messages per minute to the connected Service Bus queue. We conducted multiple experiments for each SKU and gradually increased X until the Logic App could no longer process all the messages immediately. For each experiment, we pushed enough messages (1 million in total) to the queue to ensure that each workflow reached a steady-state processing rate at its maximum scaling.

Environment Configuration

The experiment setup is summarized in the table below:

| Tests setup | Single Stamp Logic App |
| --- | --- |
| Number of workflows | 1 (templated) |
| Triggers | Service Bus |
| Trigger batch size | 50 |
| Actions | Service Bus, Scope, Condition, Compose |
| Number of storage accounts | 1 |
| Prewarmed instances | 1 |
| Max scale settings | 20 |
| Message size | 1 KB |
| Service Bus queue max size | 2 GB |
| Service Bus queue message lock duration | 5 minutes |
| Service Bus queue message max delivery count | 10 |

Experiment results

We summarize the experiment results in the table below. If the default maximum scaling of 20 instances is adopted, then the throughput we measured here serves as a good reference for the upper bound of message processing power:

| WS Plan | Message Throughput | Time to process 1M messages |
| --- | --- | --- |
| WS1 | 9000 messages/minute | 120 minutes |
| WS2 | 19000 messages/minute | 60 minutes |
| WS3 | 24000 messages/minute | 50 minutes |

In all the experiments, the Logic App scaled out to 20 instances at steady state.

📝 Complex business logic, which requires more actions and/or longer processing times, can change those values.
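As a rough sanity check on these figures: at the measured steady-state rates, 1 million messages would take about 1,000,000 ÷ 9,000 ≈ 111 minutes on WS1, ≈ 53 minutes on WS2, and ≈ 42 minutes on WS3. The reported end-to-end times (120, 60, and 50 minutes) are slightly higher, which is consistent with some ramp-up time before the app reaches its full 20-instance scale-out (an interpretation, not a separately measured breakdown).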
Findings

Understand the scaling and bottlenecks

In the vertical scaling experiments, we limited the maximum instance count to 20. Under this setting, we sometimes observe "dead-letter" messages being generated. With Service Bus, messages become dead-letters if they are not processed within the lock duration across all delivery attempts. This means that the workflow takes more than 5 minutes to complete the scope/business logic for some messages. The root cause is that the Service Bus trigger fetches messages faster than the workflow actions can process them. As we can see in the following figure, the Service Bus trigger can fetch as many as 60k messages per minute, but the workflow can only process fewer than 30k messages per minute.

Recommendations

We recommend going with the default scaling settings if your workload is well below the published message throughput, and increasing the maximum burst when a heavier workload is expected.

Horizontal scaling capabilities of the Logic App Service Bus connector

In this section, we probe into the horizontal scaling of Logic App message handling capabilities with varying instance counts. We conduct experiments on the most performant and widely used WS3 SKU.

Experiment Methodology

For each experiment we varied the number of prewarmed instances and maximum burst instances and supplied a steady influx of X messages per minute to the connected Service Bus queue, gradually increasing X until the Logic App could no longer process all the messages immediately. We pushed enough messages (4 million) to the queue for each experiment to ensure that each workflow reached a steady-state processing rate.

Environment configuration

The experiment setup is summarized in the table below:

| Tests setup | Multi Stamp Logic App |
| --- | --- |
| Number of workflows | 1 (templated) |
| Triggers | Service Bus |
| Trigger batch size | 50 |
| Actions | Service Bus, Scope, Condition, Compose |
| Number of storage accounts | 3 |
| Message size | 1 KB |
| Service Bus queue max size | 4 GB |
| Service Bus queue message lock duration | 5 minutes |
| WS Plan | WS3 |
| Service Bus queue message max delivery count | 10 |

Experiment results

The experiment results are summarized in the table below:

| Prewarmed Instances | Max Burst Instances | Message Throughput |
| --- | --- | --- |
| 1 | 20 | 24000 messages/minute |
| 1 | 60 | 65000 messages/minute |
| 5 | 60 | 65000 messages/minute |
| 10 | 60 | 65000 messages/minute |
| 10 | 100 | 85000 messages/minute |

In all the experiments, the Logic App scaled out to the maximum burst instances allowed at steady state.

Editor's Note: The actual business logic can affect the number of machines the app scales out to. The performance might also vary based on the complexity of the workflow logic.

Findings

Understand the scaling and bottlenecks

In the horizontal scaling experiments, when the max burst instance count is 60 or above, we no longer observe "dead-letters" being generated. In these cases, the Service Bus trigger can only fetch messages as fast as the workflow actions can process them. As we can observe in the following figure, all messages are processed immediately after they are fetched.

Does the scaling speed affect the workload?

As we can see below, a Standard Logic App with a prewarmed instance count of 5 can scale out to its maximum scaling of 60 in under 10 minutes. The message fetching and message processing abilities scale out together, preventing the generation of "dead-letters." Also, from the results of our horizontal scaling experiments, we see that having more prewarmed instances does not affect the steady-state throughput of the workflow.

Recommendations

With these two findings, we recommend keeping the minimum instance number small for cost saving, without any impact on your peak performance.
If a use case requires a higher throughput, the maximum burst instances setting can be set higher to accommodate that. For production workflows, we still recommend having at least two always-ready instances, as they would reduce any potential downtime from reboots.
🎉 Built for Enterprise: Integration Account Premium SKU Hits General Availability

We are thrilled to announce the General Availability (GA) of the Premium SKU for Integration Account! Since the Preview release, we've incorporated enhancements to deliver greater reliability and enable Integration Accounts to seamlessly operate within VNETs—a critical requirement for enterprise workloads. These updates provide a secure, scalable foundation tailored for your B2B and EDI integrations.

What's New in GA?

Availability Zone Support
Integration Accounts now support Availability Zones by default, ensuring enhanced reliability and resilience. Customers can enable AZ support for the underlying storage to achieve a fully AZ-enabled architecture.

VNET Support
Integration Accounts can now operate within a VNET, enabling you to:
- Host both the Integration Account and its underlying storage in a secure VNET environment.
- Strengthen the security and isolation of your enterprise workflows.

These features, combined with unlimited artifacts, ensure that Integration Account provides a secure, reliable, and scalable platform for running mission-critical B2B and EDI workloads. There are no changes to the pricing: the Premium SKU pricing will be like Standard Account pricing and will be billed at the rate of $1.37 per hour.

Don't Miss: EDI Actions in Logic Apps Standard

Complementing the Integration Account, EDI actions for different EDI standards (AS2, X12, EDIFACT) are available as built-in actions in Logic Apps Standard. These actions empower you to:
- Process messages individually or in batches.
- Handle payloads up to 100 MB by default, with support for even larger sizes using higher compute tiers.
- Achieve secure and scalable message processing for your enterprise workflows.

Do you have business requirements for handling even larger payloads? We'd love to hear from you!

Use Cases

Here are just a few examples of how businesses can leverage these new capabilities:
- Retail and Supply Chain: Automate purchase orders, shipping notices, and invoices using X12.
- Healthcare: Process claims and remittances securely with HIPAA-compliant EDIFACT transactions.
- Manufacturing: Optimize production schedules with automated order exchanges using AS2.

Try It Today

Start leveraging the Integration Account Premium SKU and EDI actions for your enterprise integrations either through the Azure portal or VS Code. For detailed documentation and best practices, check out our documentation page.
Built-in Oracle DB - using JKS keystore to support certificate validation

With the help of a colleague (anonymous), I would like to share a new idea to support certificate validation by using a JKS keystore in the Logic Apps Standard JDBC (built-in) action when connecting to Oracle DB.