Logic Apps Agentic Workflows with SAP - Part 1: Infrastructure
When you integrate Azure Logic Apps with SAP, the “hello world” part is usually easy. The part that bites you later is data quality. In SAP-heavy flows, validation isn’t a nice-to-have — it’s what makes the downstream results meaningful. If invalid data slips through, it can get expensive fast: you may create incorrect business documents, trigger follow-up processes, and end up in a cleanup path that’s harder (and more manual) than building validation upfront. And in “all-or-nothing” transactional patterns, things get even more interesting: one bad record can force a rollback strategy, compensating actions, or a whole replay/reconciliation story you didn’t want to own. See for instance Handling Errors in SAP BAPI Transactions | Microsoft Community Hub to get an idea of the complexity in a BizTalk context. That’s the motivation for this post: a practical starter pattern that you can adapt to many data shapes and domains for validating data in a Logic Apps + SAP integration. Note: For the full set of assets used here, see the companion GitHub repository (workflows, schemas, SAP ABAP code, and sample files). 1. Introduction Scenario overview The scenario is intentionally simple, but it mirrors what shows up in real systems: A Logic App workflow sends CSV documents to an SAP endpoint. SAP forwards the payload to a second Logic App workflow that performs: rule-based validation (based on pre-defined rules) analysis/enrichment (market trends, predictions, recommendations) The workflow either: returns validated results (or validation errors) to the initiating workflow, or persists outputs for later use For illustration, I’m using fictitious retail data. The content is made up, but the mechanics are generic: the same approach works for orders, inventory, pricing, master data feeds, or any “file in → decision out” integration. You’ll see sample inputs and outputs below to keep the transformations concrete. What this post covers This walkthrough focuses on the integration building blocks that tend to matter in production: Calling SAP RFCs from Logic App workflows, and invoking Logic App workflows from SAP function modules Using the Logic Apps SAP built-in trigger Receiving and processing IDocs Returning responses and exceptions back to SAP in a structured, actionable way Data manipulation patterns in Logic Apps, including: parsing and formatting inline scripts XPath (where it fits, and where it becomes painful). Overall Implementation A high-level view of the implementation is shown below. The source workflow handles end-to-end ingestion—file intake, transformation, SAP integration, error handling, and notifications—using Azure Logic Apps. The destination workflows focus on validation and downstream processing, including AI-assisted analysis and reporting, with robust exception handling across multiple technologies. I’ll cover the AI portion in a follow-up post. Note on AI-assisted development Most of the workflow “glue” in this post—XPath, JavaScript snippets, and Logic Apps expressions—was built with help from Copilot and the AI assistant in the designer (see Get AI-assisted help for Standard workflows - Azure Logic Apps | Microsoft Learn). In my experience, this is exactly where AI assistance pays off: generating correct scaffolding quickly, then iterating based on runtime behavior. I’ve also included SAP ABAP snippets for the SAP-side counterpart. You don’t need advanced ABAP knowledge to follow along; the snippets are deliberately narrow and integration-focused. 
I include them because it’s hard to design robust integrations if you only understand one side of the contract. When you understand how SAP expects to receive data, how it signals errors, and where transactional boundaries actually are, you end up with cleaner workflows and fewer surprises. 2. Source Workflow This workflow is a small, end‑to‑end “sender” pipeline: it reads a CSV file from Azure Blob Storage, converts the rows into the SAP table‑of‑lines XML shape expected by an RFC, calls Z_GET_ORDERS_ANALYSIS via the SAP connector, then extracts analysis or error details from the RFC response and emails a single consolidated result. At a high level: Input: an HTTP request (used to kick off the run) + a blob name. Processing: CSV → array of rows → XML (…) → RFC call Output: one email containing either: the analysis (success path), or a composed error summary (failure path). The diagram below summarizes the sender pipeline: HTTP trigger → Blob CSV (header included) → rows → SAP RFC → parse response → email. Two design choices are doing most of the work here. First, the workflow keeps the CSV transport contract stable by sending the file as a verbatim list of lines—including the header—wrapped into … elements under IT_CSV . Second, it treats the RFC response as the source of truth: EXCEPTIONMSG and RETURN/MESSAGE drive a single Has errors gate, which determines whether the email contains the analysis or a consolidated failure summary. Step-by-step description Phase 0 — Trigger Trigger — When_an_HTTP_request_is_received The workflow is invoked via an HTTP request trigger (stateful workflow). Phase 1 — Load and split the CSV Read file — Read_CSV_orders_from_blob Reads the CSV from container onlinestoreorders using the blob name from @parameters('DataFileName'). Split into rows — Extract_rows Splits the blob content on \r\n, producing an array of CSV lines. Design note: Keeping the header row is useful when downstream validation or analysis wants column names, and it avoids implicit assumptions in the sender workflow. Phase 2 — Shape the RFC payload Convert CSV rows to SAP XML — Transform_CSV_to_XML Uses JavaScript to wrap each CSV row (including the header line) into the SAP line structure and XML‑escape special characters. The output is an XML fragment representing a table of ZTY_CSV_LINE rows. Phase 3 — Call SAP and extract response fields Call the RFC — [RFC]_ Z_GET_ORDERS_ANALYSIS Invokes Z_GET_ORDERS_ANALYSIS with an XML body containing … built from the transformed rows. Extract error/status — Save_EXCEPTION_message and Save_RETURN_message Uses XPath to pull: EXCEPTIONMSG from the RFC response, and the structured RETURN / MESSAGE field. Phase 4 — Decide success vs failure and notify Initialize output buffer — Initialize_email_body Creates the EmailBody variable used by both success and failure cases. Gate — Has_errors Determines whether to treat the run as failed based on: EXCEPTIONMSG being different from "ok", or RETURN / MESSAGE being non‑empty. Send result — Send_an_email_(V2) Emails either: the extracted ANALYSIS (success), or a concatenated error summary including RETURN / MESSAGE plus message details ( MESSAGE_V1 … MESSAGE_V4 ) and EXCEPTIONMSG . Note: Because the header row is included in IT_CSV , the SAP-side parsing/validation treats the first line as column titles (or simply ignores it). The sender workflow stays “schema-agnostic” by design. 
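For reference, the Has_errors gate boils down to a two-clause condition. A minimal sketch of the expression, assuming the action names used in this post (Save_EXCEPTION_message and Save_RETURN_message) hold the two extracted strings:

@or(
  not(equals(outputs('Save_EXCEPTION_message'), 'ok')),
  not(empty(outputs('Save_RETURN_message'))))

In the designer you would typically express the same thing as two condition rows joined by OR rather than a single expression; either way, these two fields are the only inputs the gate needs.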
Useful snippets

Snippet 1 — Split the CSV into rows

split(string(body('Read_CSV_orders_from_blob')?['content']), '\r\n')

Tip: If your CSV has a header row you don’t want to send to SAP, switch back to:

@skip(split(string(body('Read_CSV_orders_from_blob')?['content']), '\r\n'), 1)

Snippet 2 — JavaScript transform: “rows → SAP table‑of‑lines XML”

const lines = workflowContext.actions.Extract_rows.outputs;

// Escape XML special characters so each CSV line can be embedded safely in an element.
function xmlEscape(value) {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&apos;");
}

// NOTE: we don't want to keep empty lines (which can be produced by reading the blobs),
// because if the recipient uses a schema to validate the XML,
// it may reject the payload if the schema does not allow empty nodes.
const xml = lines
  .filter(line => line && line.trim() !== '') // keep only non-empty lines
  .map(line => `<zty_csv_line><line>${xmlEscape(line)}</line></zty_csv_line>`)
  .join('');

return { xml };

Snippet 3 — XPath extraction of response fields (namespace-robust)

EXCEPTIONMSG:

@xpath(body('[RFC]_Call_Z_GET_ORDERS_ANALYSIS')?['content'],
  'string(/*[local-name()="Z_GET_ORDERS_ANALYSISResponse"]/*[local-name()="EXCEPTIONMSG"])')

RETURN/MESSAGE:

@xpath(body('[RFC]_Call_Z_GET_ORDERS_ANALYSIS')?['content'],
  'string(/*[local-name()="Z_GET_ORDERS_ANALYSISResponse"]/*[local-name()="RETURN"]/*[local-name()="MESSAGE"])')

Snippet 4 — Failure email body composition

concat(
  'Error message: ', outputs('Save_RETURN_message'),
  ', details: ',
  xpath(body('[RFC]_Call_Z_GET_ORDERS_ANALYSIS')?['content'], 'string(//*[local-name()=\"MESSAGE_V1\"])'),
  xpath(body('[RFC]_Call_Z_GET_ORDERS_ANALYSIS')?['content'], 'string(//*[local-name()=\"MESSAGE_V2\"])'),
  xpath(body('[RFC]_Call_Z_GET_ORDERS_ANALYSIS')?['content'], 'string(//*[local-name()=\"MESSAGE_V3\"])'),
  xpath(body('[RFC]_Call_Z_GET_ORDERS_ANALYSIS')?['content'], 'string(//*[local-name()=\"MESSAGE_V4\"])'),
  '; ',
  'Exception message: ', outputs('Save_EXCEPTION_message'), '.')

3. SAP Support

To make the SAP/Logic Apps boundary simple, I model the incoming CSV as a table of “raw lines” on the SAP side. The function module Z_GET_ORDERS_ANALYSIS exposes a single table parameter, IT_CSV, typed using a custom line structure.

Figure: IT_CSV is a table of CSV lines (ZTY_CSV_LINE), with a single LINE field (CHAR2048).

IT_CSV uses the custom structure ZTY_CSV_LINE, which contains a single component LINE (CHAR2048). This keeps the SAP interface stable: the workflow can send CSV lines without SAP having to know the schema up front, and the parsing/validation logic can evolve independently.

The diagram below shows the plumbing that connects SAP to Azure Logic Apps in two common patterns: SAP sending IDocs to a workflow, and SAP calling a remote-enabled endpoint via an RFC destination. I’m showing all three pieces together—the ABAP call site, the SM59 RFC destination, and the Logic Apps SAP built-in trigger—because most “it doesn’t work” problems come down to a small set of mismatched configuration values rather than workflow logic.

The key takeaway is that both patterns hinge on the same contract: Program ID plus the SAP Gateway host/service. In SAP, those live in SM59 (TCP/IP destination, registered server program). In Logic Apps, the SAP built-in trigger listens using the same Program ID and gateway settings, while the trigger configuration (for example, IDoc format and degree of parallelism) controls how messages are interpreted and processed.
Once these values line up, the rest of the implementation becomes “normal workflow engineering”: validation, predictable error propagation, and response shaping. Before diving into workflow internals, I make the SAP-side contract explicit. The function module interface below shows the integration boundary: CSV lines come in as IT_CSV , results come back as ANALYSIS , and status/error information is surfaced both as a human-readable EXCEPTIONMSG and as a structured RETURN ( BAPIRET2 ). I also use a dedicated exception ( SENDEXCEPTIONTOSAPSERVER ) to signal workflow-raised failures cleanly. Contract (what goes over RFC): Input: IT_CSV (CSV lines) Outputs: ANALYSIS (analysis payload), EXCEPTIONMSG (human-readable status) Return structure: RETURN ( BAPIRET2 ) for structured SAP-style success/error Custom exception: SENDEXCEPTIONTOSAPSERVER for workflow-raised failures Here is the ABAP wrapper that calls the remote implementation and normalizes the result. FUNCTION z_get_orders_analysis. *"---------------------------------------------------------------------- *" This module acts as a caller wrapper. *" Important: the remote execution is determined by DESTINATION. *" Even though the function name is the same, this is not recursion: *" the call runs in the remote RFC server registered under DESTINATION "DEST". *"---------------------------------------------------------------------- *" Contract: *" TABLES it_csv "CSV lines *" IMPORTING analysis "Result payload *" EXPORTING exceptionmsg "Human-readable status / error *" CHANGING return "BAPIRET2 return structure *" EXCEPTIONS sendexceptiontosapserver *"---------------------------------------------------------------------- CALL FUNCTION 'Z_GET_ORDERS_ANALYSIS' DESTINATION dest IMPORTING analysis = analysis TABLES it_csv = it_csv CHANGING return = return EXCEPTIONS sendexceptiontosapserver = 1 system_failure = 2 MESSAGE exceptionmsg communication_failure = 3 MESSAGE exceptionmsg OTHERS = 4. CASE sy-subrc. WHEN 0. exceptionmsg = 'ok'. "Optional: normalize success into RETURN for callers that ignore EXCEPTIONMSG IF return-type IS INITIAL. return-type = 'S'. return-message = 'OK'. ENDIF. WHEN 1. exceptionmsg = |Exception from workflow: SENDEXCEPTIONTOSAPSERVER { sy-msgv1 }{ sy-msgv2 }{ sy-msgv3 }{ sy-msgv4 }|. return-type = 'E'. return-message = exceptionmsg. WHEN 2 OR 3. "system_failure / communication_failure usually already populate exceptionmsg IF exceptionmsg IS INITIAL. exceptionmsg = |RFC system/communication failure.|. ENDIF. return-type = 'E'. return-message = exceptionmsg. WHEN OTHERS. exceptionmsg = |Error in workflow: { sy-msgv1 }{ sy-msgv2 }{ sy-msgv3 }{ sy-msgv4 }|. return-type = 'E'. return-message = exceptionmsg. ENDCASE. ENDFUNCTION. The wrapper is intentionally small: it forwards the payload to the remote implementation via the RFC destination and then normalizes the outcome into a predictable shape. The point isn’t fancy ABAP — it’s reliability. With a stable contract ( IT_CSV, ANALYSIS, RETURN, EXCEPTIONMSG ) the Logic Apps side can evolve independently while SAP callers still get consistent success/error semantics. Important: in 'CALL FUNCTION 'Z_GET_ORDERS_ANALYSIS' DESTINATION dest' the name of the called function should be the same as the name of the ABAP wrapper function module, the reason being that the SAP built-in trigger in the logic app uses the function module signature as the contract (i.e. metadata). Z_GET_ORDERS_ANALYSIS . 
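To make the contract concrete, here is roughly what the RFC request body sent by the source workflow looks like on the wire. Treat this as a sketch: the element names follow the snippets in this post, the namespace is the SAP connector RFC namespace shown later for Z_CREATE_ONLINEORDER_IDOC, and the exact casing should be verified against the schema generated for your system.

<Z_GET_ORDERS_ANALYSIS xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
  <IT_CSV>
    <ZTY_CSV_LINE>
      <LINE>header line with the CSV column titles</LINE>
    </ZTY_CSV_LINE>
    <ZTY_CSV_LINE>
      <LINE>first CSV data row, comma-separated</LINE>
    </ZTY_CSV_LINE>
    <!-- one ZTY_CSV_LINE element per non-empty CSV line -->
  </IT_CSV>
</Z_GET_ORDERS_ANALYSIS>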
To sum up, the integration is intentionally shaped around three outputs: the raw input table ( IT_CSV ), a standardized SAP return structure ( RETURN / BAPIRET2 ), and a readable status string ( EXCEPTIONMSG ). The custom exception ( SENDEXCEPTIONTOSAPSERVER ) gives me a clean way to surface workflow failures back into SAP without burying them inside connector-specific error payloads. This is depicted in the figure below. 4. Destination Workflow The diagram below shows the destination workflow at a high level. I designed it as a staged pipeline: guard early, normalize input, validate, and then split the workload into two paths—operational handling of invalid records (notifications and optional IDoc remediation) and analysis of the validated dataset. Importantly, the SAP response is intentionally narrow: SAP receives only the final analysis (or a structured error), while validation details are delivered out-of-band via email. How to read this diagram Guardrail: Validate requested action ensures the workflow only handles the expected request. Normalize: Create CSV payload converts the inbound content into a consistent CSV representation. Validate: Data Validation Agent identifies invalid records (and produces a summary). Operational handling (invalid data): invalid rows are reported by email and may optionally be turned into IDocs (right-hand block). Analyze (valid data): Analyze data runs only on the validated dataset (invalid IDs excluded). Outputs: users receive Email analysis, while SAP receives only the analysis (or a structured error) via Respond to SAP server. Figure: Destination workflow with staged validation, optional IDoc remediation, and an SAP response . Reading the workflow top-to-bottom, the main design choice is separation of concerns. Validation is used to filter and operationalize bad records (notify humans, optionally create IDocs), while the SAP-facing response stays clean and predictable: SAP receives the final analysis for the validated dataset, or an error if the run can’t complete. This keeps the SAP contract stable even as validation rules and reporting details evolve. Step‑by‑step walkthrough Phase 0 — Entry and routing Trigger — When a message is received The workflow starts when an inbound SAP message is delivered to the Logic Apps SAP built‑in trigger. Guardrail — Validate requested action (2 cases) The workflow immediately checks whether the inbound request is the operation it expects (for example, the function/action name equals Z_GET_ORDERS_ANALYSIS ). If the action does not match: the workflow sends an exception back to SAP describing the unexpected action and terminates early (fail fast). If the action matches: processing continues. Phase 1 — Normalize input into a workflow‑friendly payload Prepare input — Create CSV payload The workflow extracts CSV lines from the inbound (XML) SAP payload and normalizes them into a consistent CSV text payload that downstream steps can process reliably. Initialize validation state — Initialize array of invalid order ids The workflow creates an empty array variable to capture order IDs that fail validation. This becomes the “validation output channel” used later for reporting, filtering, and optional remediation. Phase 2 — Validate the dataset (AI agent loop) Validate — Data Validation Agent (3 cases) This stage performs rule‑based validation using an agent pattern (backed by Azure OpenAI). 
Conceptually, it does three things (as shown in the diagram’s expanded block): Get validation rules: retrieves business rules from a SharePoint‑hosted validation document. Get CSV payload: loads the normalized CSV created earlier. Summarize CSV payload review: evaluates the CSV against the rules and produces structured validation outputs. Outputs produced by validation: A list of invalid order IDs The corresponding invalid CSV rows A human‑readable validation summary Note: The detailed AI prompt/agent mechanics are covered in Part 2. In Part 1, the focus is on the integration flow and how data moves. Phase 3 — Operational handling of invalid records (email + optional SAP remediation) After validation, the workflow treats invalid records as an operational concern: they are reported to humans and can optionally be routed into an SAP remediation path. This is shown in the right‑hand “Create IDocs” block. Notify — Send verification summary The workflow sends an email report (Office 365) to configured recipients containing: the validation summary the invalid order IDs the invalid CSV payload (or the subset of invalid rows) Transform — Transform CSV to XML The workflow converts the invalid CSV lines into an XML format that is suitable for SAP processing. Optional remediation — [RFC] Create all IDocs (conditional) If the workflow parameter (for example, CreateIDocs) is enabled, the workflow calls an SAP RFC (e.g., Z_CREATE_ONLINEORDER_IDOC ) to create IDocs from the transformed invalid data. Why this matters: Validation results are made visible (email) and optionally actionable (IDocs), without polluting the primary analysis response that SAP receives. Phase 4 — Analyze only the validated dataset (AI analysis) The workflow runs AI analysis on the validated dataset, explicitly excluding invalid order IDs discovered during the validation phase. The analysis prompt instructs the model to produce outputs such as trends, predictions, and recommendations. Note: The AI analysis prompt design and output shaping are covered in Part 2. Phase 5 — Post‑process the AI response and publish outputs Package results — Process analysis results (Scope) The workflow converts the AI response into a format suitable for email and for SAP consumption: Parse the OpenAI JSON response Extract the analysis content Convert markdown → HTML using custom JavaScript formatting Outputs Email analysis: sends the formatted analysis to recipients. Respond to SAP server: returns only the analysis (and errors) to SAP. Key design choice: SAP receives a clean, stable contract—analysis on success, structured error on failure. Validation details are handled out‑of‑band via email (and optionally via IDoc creation). Note: the analysis email sent by the destination workflow is there for testing purposes, to verify that the html content remains the same as it is sent back to the source workflow. Useful snippets Snippet 1 - Join each CSV line in the XML to make a CSV table: join( xpath( xml(triggerBody()?['content']), '/*[local-name()=\"Z_GET_ORDERS_ANALYSIS\"] /*[local-name()=\"IT_CSV\"] /*[local-name()=\"ZTY_CSV_LINE\"] /*[local-name()=\"LINE\"]/text()' ), '\r\n') Note: For the sake of simplicity, XPath is used here and throughout all places where XML is parsed. In the general case however, the Parse XML with schema action is the better and recommended way to strictly enforce the data contract between senders and receivers. More information about Parse XML with schema is provided in Appendix 1. 
Snippet 2 — Format markdown to HTML (simplified):

const raw = workflowContext.actions.Extract_analysis.outputs;

// Basic HTML escaping for safety (keeps <code> blocks clean)
const escapeHtml = s => s.replace(/[&<>"]/g, c => ({'&':'&amp;','<':'&lt;','>':'&gt;','"':'&quot;'}[c]));

// Normalize line endings
let md = raw; // raw.replace(/\r\n/g, '\n').trim();

// Convert code blocks (``` ... ```)
md = md.replace(/```([\s\S]*?)```/g, (m, p1) => `<pre><code>${escapeHtml(p1)}</code></pre>`);

// Horizontal rules (---)
md = md.replace(/(?:^|\n)---+(?:\n|$)/g, '<hr/>');

// Headings ###### to #
for (let i = 6; i >= 1; i--) {
  const re = new RegExp(`(?:^|\\n)${'#'.repeat(i)}\\s+(.+?)\\s*(?=\\n|$)`, 'g');
  md = md.replace(re, (m, p1) => `<h${i}>${p1.trim()}</h${i}>`);
}

// Bold and italic
md = md.replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>');
md = md.replace(/\*([^*]+)\*/g, '<em>$1</em>');

// Unordered lists (lines starting with -, *, +)
md = md.replace(/(?:^|\n)([-*+]\s.+(?:\n[-*+]\s.+)*)/g, (m) => {
  const items = m.trim().split(/\n/).map(l => l.replace(/^[-*+]\s+/, '').trim());
  return '\n<ul>' + items.map(i => `<li>${i}</li>`).join('\n') + '</ul>';
});

// Paragraphs: wrap remaining text blocks in <p>...</p>
const blocks = md.split(/\n{2,}/).map(b => {
  if (/^<h\d>|^<ul>|^<pre>|^<hr\/>/.test(b.trim())) return b;
  return `<p>${b.replace(/\n/g, '<br/>')}</p>`;
});

const html = blocks.join('');
return { html };

5. Exception Handling

To illustrate exception handling, assume that multiple workflows may listen on the same Program ID (by design or unexpectedly) and could therefore receive messages that were meant for others. The first thing the destination workflow does, then, is validate that the requested function name is the expected one, as shown below.

In this section I show three practical ways to surface workflow failures back to SAP using the Logic Apps action “Send exception to SAP server”, and the corresponding ABAP patterns used to handle them. The core idea is the same in all three: Logic Apps raises an exception on the SAP side, SAP receives it as an RFC exception, and your ABAP wrapper converts that into something predictable (for example, a readable EXCEPTIONMSG, a populated RETURN, or both). The differences are in how much control you want over the exception identity and whether you want to leverage SAP message classes for consistent, localized messages.

5.1 Default exception

This first example shows the default behavior of Send exception to SAP server. When the action runs without a custom exception name configuration, the connector raises a pre-defined exception that can be handled explicitly in ABAP. On the Logic Apps side, the action card “Send exception to SAP server” sends an Exception Error Message (for example, “Unexpected action in request: …”). On the ABAP side, the RFC call lists SENDEXCEPTIONTOSAPSERVER = 1 under EXCEPTIONS, and the code uses CASE sy-subrc to map that exception to a readable message.

The key takeaway is that you get a reliable “out-of-the-box” exception path: ABAP can treat sy-subrc = 1 as the workflow‑raised failure and generate a consistent EXCEPTIONMSG. This is the simplest option and works well when you don’t need multiple exception names—just one clear “workflow failed” signal.

5.2 Message exception

If you want more control than the default, you can configure the action to raise a named exception declared in your ABAP function module interface. This makes it easier to route different failure types without parsing free-form text.
The picture shows Advanced parameters under the Logic Apps action, including “Exception Name” with helper text indicating it must match an exception declared in the ABAP function module definition. This option is useful when you want to distinguish workflow error categories (e.g., validation vs. routing vs. downstream failures) using exception identity, not just message text. The contract stays explicit: Logic Apps raises a named exception, and ABAP can branch on that name (or on sy-subrc mapping) with minimal ambiguity. 5.3 Message class exception The third approach uses SAP’s built-in message class mechanism so that the exception raised by the workflow can map cleanly into SAP’s message catalog ( T100 ). This is helpful when you want consistent formatting and localization aligned with standard SAP patterns. On the Logic Apps side, the action shows advanced fields including Message Class, Message Number, and an Is ABAP Message toggle, with helper text stating the message class can come from message maintenance ( SE91 ) or be custom. On the ABAP side, the code highlights an error-handling block that calls using sy-msgid , sy-msgno , and variables sy-msgv1 … sy-msgv4 , then stores the resulting text in EXCEPTIONMSG . This pattern is ideal when you want workflow exceptions to look and behave like “native” SAP messages. Instead of hard-coding strings, you rely on the message catalog and let ABAP produce a consistent final message via FORMAT_MESSAGE . The result is easier to standardize across teams and environments—especially if you already manage message classes as part of your SAP development process. Refer to Appendix 2 for further information on FORMAT_MESSAGE . 5.4 Choosing an exception strategy that SAP can act on Across these examples, the goal is consistent: treat workflow failures as first‑class outcomes in SAP, not as connector noise buried in run history. The Logic Apps action Send exception to SAP server gives you three increasingly structured ways to do that, and the “right” choice depends on how much semantics you want SAP to understand. Default exception (lowest ceremony): Use this when you just need a reliable “workflow failed” signal. The connector raises a pre-defined exception name (for example, SENDEXCEPTIONTOSAPSERVER ), and ABAP can handle it with a simple EXCEPTIONS … = 1 mapping and a sy-subrc check. This is the fastest way to make failures visible and deterministic. Named exception(s) (more routing control): Use this when you want SAP to distinguish failure types without parsing message text. By raising an exception name declared in the ABAP function module interface, you can branch cleanly in ABAP (or map to different return handling) and keep the contract explicit and maintainable. Message class + number (most SAP-native): Use this when you want errors to look and behave like standard SAP messages—consistent wording, centralized maintenance, and better alignment with SAP operational practices. In this mode, ABAP can render the final localized string using FORMAT_MESSAGE and return it as EXCEPTIONMSG (and optionally BAPIRET2 - MESSAGE ), which makes the failure both human-friendly and SAP-friendly. A practical rule of thumb: start with the default exception while you stabilize the integration, move to named exceptions when you need clearer routing semantics, and adopt message classes when you want SAP-native error governance (standardization, maintainability, and localization). 
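To make the named-exception option concrete, here is a minimal ABAP sketch. The exception names VALIDATION_FAILED and ROUTING_FAILED are hypothetical: they would need to be declared in the function module interface and supplied as the Exception Name in the Send exception to SAP server action.

" Hypothetical named exceptions (must exist in the FM interface and in the
" Logic Apps action configuration); this is a sketch, not the post's wrapper code.
CALL FUNCTION 'Z_GET_ORDERS_ANALYSIS' DESTINATION dest
  IMPORTING analysis = analysis
  TABLES    it_csv   = it_csv
  CHANGING  return   = return
  EXCEPTIONS
    validation_failed     = 1
    routing_failed        = 2
    system_failure        = 3 MESSAGE exceptionmsg
    communication_failure = 4 MESSAGE exceptionmsg
    OTHERS                = 5.

CASE sy-subrc.
  WHEN 0. exceptionmsg = 'ok'.
  WHEN 1. exceptionmsg = |Validation failed: { sy-msgv1 }{ sy-msgv2 }{ sy-msgv3 }{ sy-msgv4 }|.
  WHEN 2. exceptionmsg = |Routing failed: { sy-msgv1 }{ sy-msgv2 }{ sy-msgv3 }{ sy-msgv4 }|.
  WHEN OTHERS. exceptionmsg = |RFC call failed (sy-subrc={ sy-subrc }).|.
ENDCASE.

Branching on sy-subrc is what gives you routing by exception identity instead of parsing free-form message text.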
Regardless of the option, the key is to end with a predictable SAP-side contract: a clear success path, and a failure path that produces a structured return and a readable message. 6. Response Handling This section shows how the destination workflow returns either a successful analysis response or a workflow exception back to SAP, and how the source (caller) workflow interprets the RFC response structure to produce a single, human‑readable outcome (an email body). The key idea is to keep the SAP-facing contract stable: SAP always returns a Z_GET_ORDERS_ANALYSISResponse envelope, and the caller workflow decides between success and error using just two fields: EXCEPTIONMSG and RETURN / MESSAGE . To summarize the steps: Destination workflow either: sends a normal response via Respond to SAP server, or raises an exception via Send exception to SAP server (with an error message). SAP server exposes those outcomes through the RFC wrapper: sy-subrc = 0 → success ( EXCEPTIONMSG = 'ok') sy-subrc = 1 → workflow exception ( SENDEXCEPTIONTOSAPSERVER ) sy-subrc = 2/3 → system/communication failures Source workflow calls the RFC, extracts: EXCEPTIONMSG RETURN / MESSAGE and uses an Has errors gate to choose between a success email body (analysis) or a failure email body (error summary). The figure below shows the full return path for results and failures. On the right, the destination workflow either responds normally (Respond to SAP server) or raises a workflow exception (Send exception to SAP server). SAP then maps that into the RFC outcome ( sy-subrc and message fields). On the left, the source workflow parses the RFC response structure and populates a single EmailBody variable using two cases: failure (error details) or success (analysis text). Figure: Response/exception flow Two things make this pattern easy to operationalize. First, the caller workflow does not need to understand every SAP field—only EXCEPTIONMSG and RETURN / MESSAGE are required to decide success vs failure. Second, the failure path intentionally aggregates details ( MESSAGE_V1 … MESSAGE_V4 plus the exception text) into a single readable string so errors don’t get trapped in run history. Callout: The caller workflow deliberately treats EXCEPTIONMSG != "ok" or RETURN / MESSAGE present as the single source of truth for failure, which keeps the decision logic stable even if the response schema grows. Detailed description Phase 1 — Destination workflow: choose “response” vs “exception” Respond to SAP server returns the normal response payload back to SAP. Send exception to SAP server raises a workflow failure with an Exception Error Message (the screenshot shows an example beginning with “Unexpected action in request:” and a token for Function Name). Outcome: SAP receives either a normal response or a raised exception for the RFC call. Phase 2 — SAP server: map workflow outcomes to RFC results The SAP-side wrapper code shown in the figure calls: CALL FUNCTION ' Z_GET_ORDERS_ANALYSIS ' DESTINATION DEST ... It declares exception mappings including: SENDEXCEPTIONTOSAPSERVER = 1 system_failure = 2 MESSAGE EXCEPTIONMSG communication_failure = 3 MESSAGE EXCEPTIONMSG OTHERS = 4 Then it uses CASE sy-subrc . to normalize outcomes (the figure shows WHEN 0. setting EXCEPTIONMSG = 'ok'., and WHEN 1. building a readable message for the workflow exception). Outcome: regardless of why it failed, SAP can provide a consistent set of fields back to the caller: a return structure and an exception/status message. 
Phase 3 — Source workflow: parse response and build one “email body” After the RFC action ([RFC] Call Z GET ORDERS ANALYSIS ) the source workflow performs: Save EXCEPTION message Extracts EXCEPTIONMSG from the response XML using XPath. Save RETURN message Extracts RETURN / MESSAGE from the response XML using XPath. Initialize email body Creates EmailBody once, then sets it in exactly one of two cases. Has errors (two cases) The condition treats the run as “error” if either: EXCEPTIONMSG is not equal to "ok", or RETURN / MESSAGE is not empty. Set email body (failure) / Set email body (success) Failure: builds a consolidated string containing RETURN / MESSAGE , message details ( MESSAGE_V1 ..V4), and EXCEPTIONMSG . Success: sets EmailBody to the ANALYSIS field extracted from the response. Outcome: the caller produces a single artifact (EmailBody) that is readable and actionable, without requiring anyone to inspect the raw RFC response. 7. Destination Workflow #2: Persisting failed rows as custom IDocs In this section I zoom in on the optional “IDoc persistence” branch at the end of the destination workflow. After the workflow identifies invalid rows (via the Data Validation Agent) and emails a verification summary, it can optionally call a second SAP RFC to save the failed rows as IDocs for later processing. This is mainly included to showcase another common SAP integration scenario—creating/handling IDocs—and to highlight that you can combine “AI-driven validation” with traditional enterprise workflows. The deeper motivation for invoking this as part of the agent tooling is covered in Part 2; here, the goal is to show the connector pattern and the custom RFC used to create IDocs from CSV input. The figure below shows the destination workflow at two levels: a high-level overview at the top, and a zoomed view of the post-validation remediation steps at the bottom. The zoom starts from Data Validation Agent → Summarize CSV payload review and then expands the sequence that runs after Send verification summary: Transform CSV to XML followed by an SAP RFC call that creates IDocs from the failed data. The key point is that this branch is not the main “analysis response” path. It’s a practical remediation option: once invalid rows are identified and reported, the workflow can persist them into SAP using a dedicated RFC ( Z_CREATE_ONLINEORDER_IDOC ) and a simple IT_CSV payload. This keeps the end-to-end flow modular: analysis can remain focused on validated data, while failed records can be routed to SAP for follow-up processing on their own timeline. Callout: This branch exists to showcase an IDoc-oriented connector scenario. The “why this is invoked from the agent tooling” context is covered in Part 2; here the focus is the mechanics of calling Z_CREATE_ONLINEORDER_IDOC with IT_CSV and receiving ET_RETURN / ET_DOCNUMS . The screenshot shows an XML body with the RFC root element and an SAP namespace: <z_create_onlineorder_idoc xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/"> <iv_direction>...</iv_direction> <iv_sndptr>...</iv_sndptr> <iv_sndprn>...</iv_sndprn> <iv_rcvptr>...</iv_rcvptr> <iv_rcvprn>...</iv_rcvprn> <it_csv> @{ ...Outputs... } </it_csv> <et_return></et_return> <et_docnums></et_docnums> </z_create_onlineorder_idoc> What to notice: the workflow passes invalid CSV rows in IT_CSV , and SAP returns a status table ( ET_RETURN ) and created document numbers ( ET_DOCNUMS ) for traceability. 
The payload includes standard-looking control fields ( IV_DIRECTION , IV_SNDPTR , IV_SNDPRN , IV_RCVPTR , IV_RCVPRN ) and the actual failed-row payload as IT_CSV . IT_CSV is populated via a Logic Apps expression (shown as @{ ...Outputs... } in the screenshot), which is the bridge between the prior transform step and the RFC call. The response side indicates table-like outputs: ET_RETURN and ET_DOCNUMS . 7.1 From CSV to IDocs I’ll cover the details of Destination workflow #2 in Part 2. In this post (Part 1), I focus on the contract and the end-to-end mechanics: what the RFC expects, what it returns, and how the created IDocs show up in the receiving workflow. Before looking at the RFC itself, it helps to understand the payload we’re building inside the IDoc. The screenshot below shows the custom segment definition used by the custom IDoc type. This segment is intentionally shaped to mirror the columns of the CSV input so the mapping stays direct and easy to reason about. Figure: Custom segment ZONLINEORDER000 (segment type ZONLINEORDER ) This segment definition is the contract anchor: it makes the CSV-to-IDoc mapping explicit and stable. Each CSV record becomes one segment instance with the same 14 business fields. That keeps the integration “boringly predictable,” which is exactly what you want when you’re persisting rejected records for later processing. The figure below shows the full loop for persisting failed rows as IDocs. The source workflow calls the custom RFC and sends the invalid CSV rows as XML. SAP converts each row into the custom segment and creates outbound IDocs. Those outbound IDocs are then received by Destination workflow #2, which processes them asynchronously (one workflow instance per IDoc) and appends results into shared storage for reporting. This pattern deliberately separates concerns: the first destination workflow identifies invalid rows and decides whether to persist them, SAP encapsulates the mechanics of IDoc creation behind a stable RFC interface, and a second destination workflow processes those IDocs asynchronously (one per IDoc), which is closer to how IDoc-driven integrations typically operate in production. Destination workflow #2 is included here to show the end-to-end contract and the “receipt” side of the connector scenario: Triggered by the SAP built-in trigger and checks FunctionName = IDOC_INBOUND_ASYNCHRONOUS extracts DOCNUM from the IDoc control record ( EDI_DC40 / DOCNUM ) reconstructs a CSV payload from the IDoc data segment (the fields shown match the segment definition) appends a “verification info” line to shared storage for reporting The implementation details of that workflow (including why it is invoked from the agent tooling) are covered in Part 2. 7.2 Z_CREATE_ONLINEORDER_IDOC - Contract overview The full source code for Z_CREATE_ONLINEORDER_IDOC is included in the supporting material. It’s too long to reproduce inline, so this post focuses on the contract—the part you need to call the RFC correctly and interpret its results. A quick note on authorship: most of the implementation was generated with Copilot, with manual review and fixes to resolve build errors and align the behavior with the intended integration pattern. The contract is deliberately generic because the goal was to produce an RFC that’s reusable across more than one scenario, rather than tightly coupled to a single workflow. 
At a high level, the RFC is designed to support: Both inbound and outbound IDoc creation It can either write IDocs to the SAP database (inbound-style persistence) or create/distribute IDocs outbound. Multiple IDoc/message/segment combinations IDoc type ( IDOCTYP ), message type ( MESTYP ), and segment type ( SEGTP ) are configurable so the same RFC can be reused. Explicit partner/port routing control Optional sender/receiver partner/port fields can be supplied when routing matters. Traceability of created artifacts The RFC returns created IDoc numbers so the caller can correlate “these failed rows” to “these IDocs.” Contract: Inputs (import parameters) IV_DIRECTION (default: 'O') — 'I' for inbound write-to-db, 'O' for outbound distribute/dispatch IV_IDOCTYP (default: ZONLINEORDERIDOC ) IV_MESTYP (default: ZONLINEORDER ) IV_SEGTP (default: ZONLINEORDER ) Optional partner/port routing fields: IV_SNDPRT , IV_SNDPRN , IV_RCVPRT , IV_RCVPRN , IV_RCVPOR Tables IT_CSV (structure ZTY_CSV_LINE ) — each row is one CSV line (the “table-of-lines” pattern) ET_RETURN (structure BAPIRET2 ) — success/warning/error messages (per-row and/or aggregate) ET_DOCNUMS (type ZTY_DOCNUM_TT ) — list of created IDoc numbers for correlation/traceability Outputs EV_DOCNUM — a convenience “primary / last created” DOCNUM value returned by the RFC 8. Concluding Remarks Part 1 established a stable SAP ↔ Logic Apps integration baseline: CSV moves end‑to‑end using explicit contracts, and failures are surfaced predictably. The source workflow reads CSV from Blob, wraps rows into the IT_CSV table‑of‑lines payload, calls Z_GET_ORDERS_ANALYSIS , and builds one outcome using two fields from the RFC response: EXCEPTIONMSG and RETURN / MESSAGE . The destination workflow gates requests, validates input, and returns only analysis (or errors) back to SAP while handling invalid rows operationally (notification + optional persistence). On the error path, we covered three concrete patterns to raise workflow failures back into SAP: the default connector exception ( SENDEXCEPTIONTOSAPSERVER ), named exceptions (explicit ABAP contract), and message‑class‑based errors (SAP‑native formatting via FORMAT_MESSAGE ). On the remediation side, we added a realistic enterprise pattern: persist rejected rows as custom IDocs via Z_CREATE_ONLINEORDER_IDOC ( IT_CSV in, ET_RETURN + ET_DOCNUMS out), using the custom segment ZONLINEORDER000 as the schema anchor and enabling downstream receipt in Destination workflow #2 (one run per IDoc, correlated via DOCNUM ). Part 2 is separate because it tackles a different problem: the AI layer. With contracts and error semantics now fixed, Part 2 can focus on the agent/tooling details that tend to iterate—rule retrieval, structured validation outputs, prompt constraints, token/history controls, and how the analysis output is generated and shaped—without muddying the transport story. Appendix 1: Parse XML with schema In this section I consider the CSV payload creation as an example, but parsing XML with schema applies in every place where we get an XML input to process, such as when receiving SAP responses, exceptions, or request/responses from other RFCs. 
Strong contract The Create_CSV_payload step in the shown implementation uses an xpath() + join() expression to extract LINE values from the incoming XML: join( xpath( xml(triggerBody()?['content']), '/*[local-name()="Z_GET_ORDERS_ANALYSIS"] /*[local-name()="IT_CSV"] /*[local-name()="ZTY_CSV_LINE"] /*[local-name()="LINE"]/text()' ), '\r\n' ) That approach works, but it’s essentially a “weak contract”: it assumes the message shape stays stable and that your XPath continues to match. By contrast, the Parse XML with schema action turns the XML payload into structured data based on an XSD, which gives you a “strong contract” and enables downstream steps to bind to known fields instead of re-parsing XML strings. The figure below compares two equivalent ways to build the CSV payload from the RFC input. On the left is the direct xpath() compose (labeled “weak contract”). On the right is the schema-based approach (labeled “strong contract”), where the workflow parses the request first and then builds the CSV payload by iterating over typed rows. What’s visible in the diagram is the key tradeoff: XPath compose path (left): the workflow creates the CSV payload directly using join(xpath(...), '\r\n') , with the XPath written using local-name() selectors. This is fast to prototype, but the contract is implicit—your workflow “trusts” the XML shape and your XPath accuracy. Parse XML with schema path (right): the workflow inserts a Parse XML with schema step (“ Parse Z GET ORDERS ANALYSIS request ”), initializes variables, loops For each CSV row, and Appends to CSV payload, then performs join(variables('CSVPayload'), '\r\n') . Here, the contract is explicit—your XSD defines what IT_CSV and LINE mean, and downstream steps bind to those fields rather than re-parsing XML. A good rule of thumb is: XPath is great for lightweight extraction, while Parse XML with schema is better when you want contract enforcement and long-term maintainability, especially in enterprise integration / BizTalk migration scenarios where schemas are already part of the integration culture. Implementation details The next figure shows the concrete configuration for Parse XML with schema and how its outputs flow into the “For each CSV row” loop. This is the “strong contract” version of the earlier XPath compose. This screenshot highlights three practical implementation details: The Parse action is schema-backed. In the Parameters pane, the action uses: Content: the incoming XML Response Schema source: LogicApp Schema name: Z_GET_ORDERS_ANALYSIS The code view snippet shows the same idea: type: "XmlParse" with content: " @triggerBody()?['content'] " and schema: { source: "LogicApp", name: "Z_GET_ORDERS_ANALYSIS.xsd" }. The parsed output becomes typed “dynamic content.” The loop input is shown as “ JSON Schema for element 'Z_GET_ORDERS_ANALYSIS: IT_CSV' ”. This is the key benefit: you are no longer scraping strings—you are iterating over a structured collection that was produced by schema-based parsing. The LINE extraction becomes trivial and readable. The “Append to CSV payload” step appends @item()?['LINE'] to the CSVpayload variable (as shown in the code snippet). Then the final Create CSV payload becomes a simple join(variables('CSVPayload'), '\r\n') . This is exactly the kind of “workflow readability” benefit you get once XML parsing is schema-backed. Schema generation The Parse action requires XSD schemas, which can be stored in the Logic App (or via a linked Integration Account). 
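For orientation, a hand-written XSD for this request could look roughly like the sketch below. This is illustrative only: the element names are taken from this post, while the namespace, optionality, and absence of length restrictions are assumptions. In practice you would generate the schema rather than author it by hand, as described next.

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://Microsoft.LobServices.Sap/2007/03/Rfc/"
           xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/"
           elementFormDefault="qualified">
  <xs:element name="Z_GET_ORDERS_ANALYSIS">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="IT_CSV">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="ZTY_CSV_LINE" minOccurs="0" maxOccurs="unbounded">
                <xs:complexType>
                  <xs:sequence>
                    <!-- LINE mirrors the single CHAR2048 field of ZTY_CSV_LINE -->
                    <xs:element name="LINE" type="xs:string"/>
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>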
The final figure shows a few practical ways to obtain and manage those XSDs: Generate Schema (SAP connector): a “Generate Schema” action with Operation Type = RFC and an RFC Name field, which is a practical way to bootstrap schema artifacts when you already know the RFC you’re calling. Run Diagnostics / Fetch RFC Metadata: a “Run Diagnostics” action showing Operation type = Fetch RFC Metadata and RFC Name, which is useful to confirm the shape of the RFC interface and reconcile it with your XSD/contract. If you don’t want to rely solely on connector-side schema generation, there are also classic “developer tools” approaches: Infer XSD from a sample XML using .NET’s XmlSchemaInference (good for quick starting points). Generate XSD from an XML instance using xsd.exe (handy when you already have representative sample payloads) or by asking your favorite AI prompt. When to choose XPath vs Parse XML with schema (practical guidance) Generally speaking, choose XPath when… You need a quick extraction and you’re comfortable maintaining a single XPath. You don’t want to manage schema artifacts yet (early prototypes). Choose Parse XML with schema when… You want a stronger, explicit contract (XSD defines what the payload is). You want the designer to expose structured outputs (“JSON Schema for element …”) so downstream steps are readable and less brittle. You expect the message shape to evolve over time and prefer schema-driven changes over XPath surgery. Appendix 2: Using FORMAT_MESSAGE to produce SAP‑native error text When propagating failures from Logic Apps back into SAP (for example via Send exception to SAP server), I want the SAP side to produce a predictable, human‑readable message without forcing callers to parse connector‑specific payloads. ABAP’s FORMAT_MESSAGE is ideal for this because it converts SAP’s message context—message class, message number, and up to four variables—into the final message text that SAP would normally display, but without raising a UI message. What FORMAT_MESSAGE does FORMAT_MESSAGE formats a message defined in SAP’s message catalog ( T100 / maintained via SE91 ) using the values in sy-msgid , sy-msgno , and sy-msgv1 … sy-msgv4 . Conceptually, it answers the question: “Given message class + number + variables, what is the rendered message string?” This is particularly useful after an RFC call fails, where ABAP may have message context available even if the exception itself is not a clean string. Why this matters in an RFC wrapper In the message class–based exception configuration, the workflow can provide message metadata (class/number/type) so that SAP can behave “natively”: ABAP receives a failure ( sy-subrc <> 0), formats the message using FORMAT_MESSAGE , and returns the final text in a field like EXCEPTIONMSG (and/or in BAPIRET2 - MESSAGE ). The result is: consistent wording across systems and environments easier localization (SAP selects language-dependent text) separation of concerns: code supplies variables; message content lives in message maintenance A robust pattern After the RFC call, I use this order of precedence: Use any explicit text already provided (for example via system_failure … MESSAGE exceptionmsg), because it’s already formatted. If that’s empty but SAP message context exists ( sy-msgid / sy-msgno ), call FORMAT_MESSAGE to produce the final string. If neither is available, fall back to a generic message that includes sy-subrc . Here is a compact version of that pattern: DATA: lv_text TYPE string. 
CALL FUNCTION 'Z_GET_ORDERS_ANALYSIS' DESTINATION dest
  IMPORTING
    analysis = analysis
  TABLES
    it_csv = it_csv
  CHANGING
    return = return
  EXCEPTIONS
    sendexceptiontosapserver = 1
    system_failure           = 2 MESSAGE exceptionmsg
    communication_failure    = 3 MESSAGE exceptionmsg
    OTHERS                   = 4.

IF sy-subrc <> 0.
  "Prefer explicit message text if it already exists
  IF exceptionmsg IS INITIAL.
    "Otherwise format SAP message context into a string
    IF sy-msgid IS NOT INITIAL AND sy-msgno IS NOT INITIAL.
      CALL FUNCTION 'FORMAT_MESSAGE'
        EXPORTING
          id = sy-msgid
          no = sy-msgno
          v1 = sy-msgv1
          v2 = sy-msgv2
          v3 = sy-msgv3
          v4 = sy-msgv4
        IMPORTING
          msg = lv_text.
      exceptionmsg = lv_text.
    ELSE.
      exceptionmsg = |RFC failed (sy-subrc={ sy-subrc }).|.
    ENDIF.
  ENDIF.

  "Optionally normalize into BAPIRET2 for structured consumption
  return-type = 'E'.
  return-message = exceptionmsg.
ENDIF.

Common gotchas

FORMAT_MESSAGE only helps if sy-msgid and sy-msgno are set. If the failure did not originate from an SAP message (or message mapping is disabled), these fields may be empty—so keep a fallback.

Message numbers are typically 3-digit strings (e.g., 001, 012), matching how messages are stored in the catalog.

FORMAT_MESSAGE formats text; it does not raise or display a message. That makes it safe to use in RFC wrappers and background processing.

Bottom line: FORMAT_MESSAGE is a simple tool that helps workflow‑originated failures “land” in SAP as clean, SAP‑native messages—especially when using message classes to standardize and localize error text.

References

Logic Apps Agentic Workflows with SAP - Part 2: AI Agents
Handling Errors in SAP BAPI Transactions | Microsoft Community Hub
Access SAP from workflows | Microsoft Learn
Create common SAP workflows | Microsoft Learn
Generate Schemas for SAP Artifacts via Workflows | Microsoft Learn
Parse XML using Schemas in Standard workflows - Azure Logic Apps | Microsoft Learn
Announcing XML Parse and Compose for Azure Logic Apps GA
Exception Handling | ABAP Keyword Documentation
Handling and Propagating Exceptions - ABAP Keyword Documentation
SAP .NET Connector 3.1 Overview
SAP .NET Connector 3.1 Programming Guide

All supporting content for this post may be found in the companion GitHub repository.

🎉 Announcing General Availability of AI & RAG Connectors in Logic Apps (Standard)
We’re excited to share that a comprehensive set of AI and Retrieval-Augmented Generation (RAG) capabilities is now Generally Available in Azure Logic Apps (Standard). This release brings native support for document processing, semantic retrieval, embeddings, and grounded reasoning directly into the Logic Apps workflow engine.

🔌 Available AI Connectors in Logic Apps Standard

Logic Apps (Standard) had previously previewed four AI-focused connectors that open the door to a new generation of intelligent automation across the enterprise. Whether you're processing large volumes of documents, enriching operational data with intelligence, or enabling employees to interact with systems using natural language, these connectors provide the foundation for building solutions that are smarter, faster, and more adaptable to business needs. These connectors are now generally available. They allow teams to move from routine workflow automation to AI-assisted decisioning, contextual responses, and multi-step orchestration that reflects real business intent. Below is the full set of built-in connectors and their actions as they appear in the designer.

1. Azure OpenAI

Actions: Get an embedding, Get chat completions, Get chat completions using Prompt Template, Get completion, Get multiple chat completions, Get multiple embeddings

What this unlocks: Bring natural language reasoning and structured AI responses directly into workflows. Common scenarios include guided decisioning, user-facing assistants, classification and routing, or preparing embeddings for semantic search and RAG workflows.

2. Azure AI Search

Actions: Delete a document, Delete multiple documents, Get agentic retrieval output (Preview), Index a document, Index multiple documents, Merge document, Search vectors, Search vectors with natural language

What this unlocks: Add vector, hybrid semantic, and natural language search directly to workflow logic. Ideal for retrieving relevant content from enterprise data, powering search-driven workflows, and grounding AI responses with context from your own documents.

3. Azure AI Document Intelligence

Action: Analyze document

What this unlocks: Document Intelligence serves as the entry point for document-heavy scenarios. It extracts structured information from PDFs, images, and forms, allowing workflows to validate documents, trigger downstream processes, or feed high-quality data into search and embeddings pipelines.

4. AI Operations

Actions: Chunk text with metadata, Parse document with metadata

What this unlocks: Transform unstructured files into enriched, structured content. Enables token-aware chunking, page-level metadata, and clean preparation of content for embeddings and semantic search at scale.

🤖 Advanced AI & Agentic Workflows with AgentLoop

Logic Apps (Standard) also supports AgentLoop (also Generally Available), allowing AI models to use workflow actions as tools and iterate until the task is complete. Combined with chunking, embeddings, and natural language search, this opens the door to advanced agentic scenarios such as document intelligence agents, RAG-based assistants, and iterative evaluators.

Conclusion

With these capabilities now built into Logic Apps Standard, teams can bring AI directly into their integration workflows without additional infrastructure or complexity. Whether you’re streamlining document-heavy processes, enabling richer search experiences, or exploring more advanced agentic patterns, these capabilities provide a strong foundation to start building today.

Logic Apps Aviators Newsletter - December 2025
In this issue:

Ace Aviator of the Month
News from our product group
Community Playbook
News from our community

Ace Aviator of the Month

December’s Ace Aviator: Daniel Jonathan

What's your role and title? What are your responsibilities?
I’m an Azure Integration Architect at Cnext, helping organizations modernize and migrate their integrations to Azure Integration Services. I design and build solutions using Logic Apps, Azure Functions, Service Bus, and API Management. I also work on AI solutions using Semantic Kernel and LangChain to bring intelligence into business processes.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?
My day usually begins by attending to customer requests and handling recent deployments. Most of my time goes into designing integration patterns, building Logic Apps, mentoring the team, and helping customers with technical solutions. Lately, I’ve also been integrating AI capabilities into workflows.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?
The community is open, friendly, and full of knowledge. I enjoy sharing ideas, writing posts, and helping others solve real-world challenges. It’s great to learn and grow together.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?
Start small and stay consistent. Learn the basics well—like messaging, retries, and error handling—before diving into complex tools. Keep learning and share what you know.

What has helped you grow professionally?
Hands-on experience, teamwork, and continuous learning. Working across different projects taught me how to design reliable and scalable systems. Exploring AI with Semantic Kernel and LangChain has also helped me think beyond traditional integrations.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
I’d add an “Overview Page” in Logic Apps containing the HTTP URLs for each workflow, so developers can quickly access and test them from one place. It would save time and make working with multiple workflows much easier.

News from our product group

Logic Apps Community Day 2025 Playlist
Did you miss or want to catch up on individual sessions from Logic Apps Community Day 2025? Here is the full playlist – choose your favorite sessions and have fun!

The future of integration is here and it's agentic
Missed Kent Weare and Divya Swarnkar’s session at Ignite? It is here for you to watch on demand. Enterprise integration is being reimagined. It’s no longer just about connecting systems, but about enabling adaptive, agentic workflows that unify apps, data, and systems. In this session, discover how to modernize integration, migrate from BizTalk, and adopt AI-driven patterns that deliver agility and intelligence. Through customer stories and live demos, see how to bring these workflows to life with Agent Loop in Azure Logic Apps.

Public Preview: Azure Logic Apps Connectors as MCP Tools in Microsoft Foundry
Unlock secure enterprise connectivity with Azure Logic Apps connectors as MCP tools in Microsoft Foundry. Agents can now use hundreds of connectors natively—no custom code required. Learn how to configure and register MCP servers for seamless integration.

Announcing AI Foundry Agent Service Connector v2 (Preview)
AI Foundry Agent Service Connector v2 (Preview) is here!
Azure Logic Apps can now securely invoke Foundry agents, enabling low-code AI integration, multi-agent workflows, and faster time-to-value. Explore new operations for orchestration and monitoring.

Announcing the General Availability of the XML Parse and Compose Actions in Azure Logic Apps
XML Parse and Compose actions are now GA in Azure Logic Apps! Easily handle XML with XSD schemas, streamline workflows, and support internationalization. Learn best practices for arrays, encoding, and safe transport of content.

Clone a Consumption Logic App to a Standard Workflow
Clone your Consumption Logic Apps into Standard workflows with ease! This new feature accelerates migration, preserves design, and unlocks advanced capabilities for modern integration solutions.

Announcing the HL7 connector for Azure Logic Apps Standard and Hybrid (Public Preview)
Connect healthcare systems effortlessly! The new HL7 connector for Azure Logic Apps (Standard & Hybrid) enables secure, standardized data exchange and automation using HL7 protocols—now in Public Preview.

Announcing Foundry Control Plane support for Logic Apps Agent Loop (Preview)
Foundry Control Plane now supports Logic Apps Agent Loop (Preview)! Manage, govern, and observe agents at scale with built-in integration—no extra steps required. Ensure trust, compliance, and scalability in the agentic era.

Announcing General Availability of Agent Loop in Azure Logic Apps
Agent Loop transforms Logic Apps into a multi-agent automation platform, enabling AI agents to collaborate with workflows and humans. Build secure, enterprise-ready agentic solutions for business automation at scale.

Agent Loop Ignite Update - New Set of AI Features Arrive in Public Preview
We are releasing a broad set of new and powerful AI-first Agent Loop capabilities in Public Preview that dramatically expand what developers can build: run agents in the Consumption SKU, bring your own models through APIM AI Gateway, call any tool through MCP, deploy agents directly into Teams, secure RAG with document-level permissions, onboard with Okta, and build in a completely redesigned workflow designer.

Announcing MCP Server Support for Logic Apps Agent Loop
Agent Loop in Azure Logic Apps now supports Model Context Protocol (MCP), enabling secure, standardized tool integration. Bring your own MCP connector, use Azure-managed servers, or build custom connectors for enterprise workflows.

Enabling API Key Authentication for Logic Apps MCP Servers
Logic Apps MCP Servers now support API Key authentication alongside OAuth2 and Anonymous options. Configure keys via host.json or Azure APIs, retrieve and regenerate keys easily, and connect MCP clients securely for agentic workflows.

Announcing Public Preview of Agent Loop in Azure Logic Apps Consumption
Agent Loop now brings AI-powered automation to Logic Apps Consumption with a frictionless, pay-as-you-go model. Build autonomous and conversational agents using 1,400+ connectors—no dedicated infrastructure required. Ideal for rapid prototyping and enterprise workflows.

Moving the Logic Apps Designer Forward
A major redesign of the Azure Logic Apps designer enters Public Preview for Standard workflows. Phase I focuses on faster onboarding, unified views, draft mode with auto-save, improved search, and enhanced debugging. Feedback will shape future phases for a seamless development experience.
Announcing the General Availability of the RabbitMQ Connector
The RabbitMQ Connector for Azure Logic Apps is now generally available, enabling reliable message exchange for Standard and Hybrid workflows. It supports triggers, publishing, and advanced routing, with global rollout underway for robust, scalable integration scenarios.

Duplicate Detection in Logic App Trigger
Prevent duplicate processing in Logic Apps triggers with a REST API-based solution. It checks recent runs using clientTrackingId to avoid reprocessing items caused by edits or webhook updates. Works with Logic App Standard and is adaptable for Consumption or Power Automate.

Announcing the BizTalk Server 2020 Cumulative Update 7
BizTalk Server 2020 Cumulative Update 7 is out, adding support for Visual Studio 2022, Windows Server 2022, SQL Server 2022, and Windows 11. It includes all prior fixes. Upgrade from older versions or consider migrating to Azure Logic Apps for modernization.

News from our community

Logic Apps Local Development Series
Post by Daniel Jonathan
Last month I shared an article from Daniel about debugging XSLT in VS Code. This month, I bumped into not one, but five articles in a series about Build, Test and Run Logic Apps Standard locally – definitely worth the read!

Working with sessions in Agentic Workflows
Post by Simon Stender
Build AI-powered chat experiences with session-based agentic workflows in Azure Logic Apps. Learn how they enable dynamic, stateful interactions, integrate with APIs and apps, and avoid common pitfalls like workflows stuck in “running” forever.

Integration Love Story with Mimmi Gullberg
Video by Ahmed Bayoumy and Robin Wilde
Meet Mimmi Gullberg, Green Cargo’s new integration architect driving smarter, sustainable rail logistics. With experience from BizTalk to Azure, she blends tech and business insight to create real value. Her mantra: understand the problem first, then choose the right tools—Logic Apps, Functions, or AI.

Integration Love Story with Jenny Andersson
Video by Ahmed Bayoumy and Robin Wilde
Discover Jenny Andersson’s inspiring journey from skepticism to creativity in tech. In this episode, she shares insights on life as an integration architect, tackling system challenges, listening to customers, and how AI is shaping the future of integration.

You Can Get an XPath value in Logic Apps without returning an array
Post by Luis Rigueira
Working with XML in Azure Logic Apps? The xpath() function always returns an array—even for a single node. Or does it? Find out how to return just the values you want in this Friday Fact from Luis Rigueira (a minimal expression sketch follows at the end of this newsletter section).

Set up Azure Standard Logic App Connectors as MCP Server
Video by Srikanth Gunnala
Expose your Azure Logic Apps integrations as secure tools for AI assistants. Learn how to turn connectors like SAP, SQL, and Jira into MCP tools, protect them with Entra ID/OAuth, and test in GitHub Copilot Chat for safe, action-ready AI workflows.

Making Logic Apps Speak Business
Post by Al Ghoniem
Stop forcing Logic Apps to look like business diagrams. With Business Process Tracking, you can keep workflows technically sound while giving business users clear, stage-based visibility into processes—decoupled, visual, and KPI-driven.
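As a quick illustration of the kind of trick the XPath post above discusses: this is my own minimal sketch, not necessarily Luis's exact approach, and the element names (Order, CustomerName) are made up. Wrapping the path in the XPath string() function makes xpath() evaluate to a scalar rather than a node array. In a Standard workflow definition, the two forms might look like this:

{
  "Get_name_as_array": {
    "type": "Compose",
    "inputs": "@xpath(xml(triggerBody()), '/Order/CustomerName')"
  },
  "Get_name_as_value": {
    "type": "Compose",
    "inputs": "@xpath(xml(triggerBody()), 'string(/Order/CustomerName)')",
    "runAfter": { "Get_name_as_array": ["Succeeded"] }
  }
}

The first Compose returns an array of matching nodes that you would still need to index into; the second returns the string value directly, which is usually what you want for a single element.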
Introducing Logic Apps MCP servers (Public Preview)

Using Logic Apps (Standard) as MCP servers transforms the way organizations build and manage agents by turning connectors into modular, reusable MCP tools. This approach allows each connector—whether it's for data access, messaging, or workflow orchestration—to act as a specialized capability within the MCP framework. By dynamically composing these tools into Logic Apps, developers can rapidly construct agents that are both scalable and adaptable to complex enterprise scenarios. The benefits include reduced development overhead, enhanced reusability, and a streamlined path to integrating diverse systems—all while maintaining the flexibility and power of the Logic Apps platform.

Starting today, we now support creating Logic Apps MCP servers in the following ways:

Registering Logic Apps connectors as MCP servers using Azure API Center

Using this approach provides a streamlined experience when building MCP servers based upon Azure Logic Apps connectors. This new experience includes selecting a managed connector and one or more of its actions to create an MCP server and its related tools. This experience also automates the creation of Logic Apps workflows and wires up Easy Auth authentication for you in a matter of minutes.

Beyond the streamlined experience that we provide, customers also benefit from any MCP server created using this experience being registered within their API Center enterprise catalog. For admins, this means they can manage their MCP servers across the enterprise. For developers, it offers a centralized catalog where MCP servers can be discovered and quickly onboarded in agent solutions.

To get started, please refer to our product documentation or our demo videos.

Enabling Logic Apps as remote MCP servers

For customers who have existing Logic Apps (Standard) investments, or who want additional control over how their MCP tools are created, we are also offering the ability to enable a Logic App as an MCP server. For a Logic App to be eligible to become an MCP server, it must have the following characteristics:

- One or more workflows that have an HTTP Request trigger and a corresponding HTTP Response action
- It is recommended that your trigger has a description and your request payload has a schema that includes meaningful descriptions
- Your host.json file has been configured to enable MCP capabilities (a rough, illustrative host.json sketch follows at the end of this post)
- You have created an App registration in Microsoft Entra and have configured Easy Auth in your Logic App

To get started, please refer to our product documentation or our demo videos.

Feedback

Both of these capabilities are now available, in public preview, worldwide. If you have any questions or feedback on these MCP capabilities, we would love to hear from you. Please fill out the following form and I will follow-up with you.
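For the host.json eligibility item above, here is a minimal, illustrative sketch only. The property names under "extensions" (McpServerEndpoints, enable) are an assumption on my part and may differ by runtime version, so treat the product documentation as the source of truth, and note that JSON itself has no comments, so the annotation line would need to be removed in a real file:

{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  },
  "extensions": {
    "workflow": {
      // Assumed setting name for exposing the app's workflows as MCP tools; verify against the docs
      "McpServerEndpoints": {
        "enable": true
      }
    }
  }
}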
Announcing Public Preview of API Management WordPress plugin to build customized developer portals

The Azure API Management WordPress plugin enables our customers to leverage the power of WordPress to build their own unique developer portal. API managers and administrators can bring up a new developer portal in a matter of minutes and customize the theme and layout, add stylesheets, or localize the portal into different languages.
🎙️ Announcement: Logic Apps connectors in Azure AI Search for Integrated Vectorization

We’re excited to announce that Azure Logic Apps connectors are now supported within AI Search as data sources for ingestion into Azure AI Search vector stores. This unlocks the ability to ingest unstructured documents from a variety of systems—including SharePoint, Amazon S3, Dropbox and many more—into your vector index using a low-code experience.

This new capability is powered by Logic Apps templates, which orchestrate the entire ingestion pipeline—from extracting documents to embedding generation and indexing—so you can build Retrieval-Augmented Generation (RAG) applications with ease.

Grounding AI with RAG: Why Document Ingestion Matters

Retrieval-Augmented Generation (RAG) has become a cornerstone technique for building grounded and trustworthy AI systems. Instead of generating answers from the model’s pretraining alone, RAG applications fetch relevant information from external knowledge bases—giving LLMs access to accurate and up-to-date enterprise data.

To power RAG, enterprises need a scalable way to ingest and index documents into a vector store. Whether you're working with policy documents, legal contracts, support tickets, or financial reports, getting this content into a searchable, semantic format is step one.

Simplified Ingestion with Integrated Vectorization

Azure AI Search’s Integrated Vectorization capability automates the process of turning raw content into semantically indexed vectors:

- Chunking: Documents are split into meaningful text segments
- Embedding: Each chunk is transformed into a vector using an embedding model like text-embedding-3-small or a custom model
- Indexing: Vectors and associated metadata are written into a searchable vector store
- Projection: Metadata is preserved to enable filtering, ranking, and hybrid queries

This eliminates the need to build or maintain custom pipelines, making it significantly easier to adopt RAG in production environments.

Ingest from Anywhere: Logic Apps + AI Search

With today’s release, we’re extending ingestion to a variety of new data sources by integrating Logic Apps connectors directly with AI Search. This allows you to retrieve unstructured content from enterprise systems and seamlessly ingest it into the vector store. Here’s how the ingestion process works with Logic Apps:

Connect to Source Systems
Using prebuilt connectors, Logic Apps can fetch content from various data sources, including SharePoint document libraries, messages from Service Bus or Azure Queues, files from OneDrive or an SFTP server, and more. You can trigger ingestion on demand or on a schedule.

Parse and Chunk Documents
Next, Logic Apps uses built-in AI-powered document parsing actions to extract raw text. This is followed by the “Chunk Document” action, which:
- Tokenizes the document based on language model-friendly units
- Splits the content into semantically coherent chunks
This ensures optimal chunk size for downstream embedding and retrieval.
Note – Currently we default to a chunk size of 5000 in the workflows created for document ingestion. We’ll be updating the default chunk size to a smaller number in our next release. Meanwhile, you can update it in the workflow if you need a smaller chunk size.

Generate Embeddings with Azure OpenAI
The chunked text is then passed to the Azure OpenAI connector, where the text-embedding-3-small or another configured embedding model is used to generate high-dimensional vector representations. These vectors capture the semantic meaning of the content and are key to enabling accurate retrieval in RAG applications.
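To make the embedding step concrete, here is a rough sketch of the kind of request and response the Azure OpenAI embeddings endpoint works with. This is an illustration, not the connector's internal implementation; the resource name, deployment name, and API version are placeholders, and the returned vector is truncated (real embeddings have hundreds or thousands of dimensions).

Request body sent to https://<your-resource>.openai.azure.com/openai/deployments/<embedding-deployment>/embeddings?api-version=2024-02-01 :

{
  "input": "Employees may book economy class for flights under six hours..."
}

Abridged response:

{
  "data": [
    { "index": 0, "embedding": [0.0123, -0.0456, 0.0789] }
  ],
  "model": "text-embedding-3-small",
  "usage": { "prompt_tokens": 14, "total_tokens": 14 }
}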
Write to Azure AI Search
Finally, the embeddings, along with any relevant metadata (e.g., document title, tags, timestamps), are written into the AI Search index. The index schema is created for you and can include fields for filtering, sorting, and semantic ranking. (A rough sketch of what one indexed chunk might look like appears at the end of this post.)

Logic Apps Templates: Fast Start, Flexible Design

To help you get started, we’ve created Logic Apps templates specifically for RAG ingestion. These templates:
- Include all the steps mentioned above
- Are customizable if you want to update the default configuration

Whether you’re ingesting thousands of PDFs from SharePoint or syncing files from an Amazon S3 bucket, these templates provide a production-grade foundation for building your pipeline.

Getting Started

Here is step-by-step detailed documentation to get started using Integrated Vectorization with Logic Apps data sources:
👉 Get started with Logic Apps data sources for AI Search ingestion
👉 Learn more about Integrated Vectorization in Azure AI Search

We'd Love Your Feedback

We're just getting started. Tell us:
- What other data sources would you like to ingest?
- What enhancements would make ingestion easier for your use case?
- Are there specific industry templates or formats we should support?

👉 Reply to this post or share your ideas through our feedback form

We’re building this with you—so your feedback helps shape the future of AI-powered automation and RAG.
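As referenced in the "Write to Azure AI Search" step above, here is a rough, illustrative sketch of what a single indexed chunk could look like. The field names are assumptions for illustration only, since the actual schema is generated by the template, and the vector is truncated for readability:

{
  "id": "contoso-travel-policy-pdf-chunk-0042",
  "content": "Employees may book economy class for flights under six hours...",
  "content_vector": [0.0123, -0.0456, 0.0789],
  "title": "Contoso Travel Policy.pdf",
  "source": "sharepoint",
  "page_number": 7,
  "last_modified": "2025-01-15T09:30:00Z"
}

Fields such as title, source, and page_number are the kind of metadata that makes filtering, sorting, and semantic ranking possible at query time.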
🤖 AI Procurement assistant using prompt templates in Standard Logic Apps

📘 Introduction

Answering procurement-related questions doesn't have to be a manual process. With the new Chat Completions using Prompt Template action in Logic Apps (Standard), you can build an AI-powered assistant that understands context, reads structured data, and responds like a knowledgeable teammate.

🏢 Scenario: AI assistant for IT procurement

Imagine an employee wants to know: "When did we last order laptops for new hires in IT?"

Instead of forwarding this to the procurement team, a Logic App can:
- Accept the question
- Look up catalog details and past orders
- Pass all the info to a prompt template
- Generate a polished, AI-powered response

🧠 What Are Prompt Templates?

Prompt Templates are reusable text templates that use Jinja2 syntax to dynamically inject data at runtime. In Logic Apps, this means you can:
- Define a prompt with placeholders like {{ customer.orders }}
- Automatically populate it with outputs from earlier actions
- Generate consistent, structured prompts with minimal effort

✨ Benefits of Using Prompt Templates in Logic Apps

- Consistency: Centralized prompt logic instead of embedding prompt strings in each action.
- Reusability: Easily apply the same prompt across multiple workflows.
- Maintainability: Tweak prompt logic in one place without editing the entire flow.
- Dynamic control: Logic Apps inputs (e.g., values from a form, database, or API) flow right into the template.

This allows you to create powerful, adaptable AI-driven flows without duplicating effort — making it perfect for scalable enterprise automation.

💡 Try it Yourself

Grab the sample prompt template and sample inputs from our GitHub repo and follow along.
👉 Sample logic app

🧰 Prerequisites

To get started, make sure you have:
- A Logic App (Standard) resource in Azure
- An Azure OpenAI resource with a deployed GPT model (e.g., GPT-3.5 or GPT-4)

💡 You’ll configure your OpenAI API connection during the workflow setup.

🔧 Build the Logic App workflow

Here’s how to build the flow in Logic Apps using the Prompt Template action. This setup assumes you're simulating procurement data with test inputs.

📌 Step 0: Start by creating a Stateful Workflow in your Logic App (Standard) resource. Choose "Stateful" when prompted during workflow creation. This allows the run history and variables to be preserved for testing.

📸 Creating a new Stateful Logic App (Standard) workflow

📌 Trigger: "When an HTTP request is received"

📌 Step 1: Add three Compose actions to store your test data.

documents: This stores your internal product catalog entries

[
  {
    "id": "1",
    "title": "Dell Latitude 5540 Laptop",
    "content": "Intel i7, 16GB RAM, 512GB SSD, standard issue for IT new hire onboarding"
  },
  {
    "id": "2",
    "title": "Docking Station",
    "content": "Dell WD19S docking stations for dual monitor setup"
  }
]

📸 Compose action for documents input

question: This holds the employee’s natural language question.

[
  {
    "role": "user",
    "content": "When did we last order laptops for new hires in IT?"
  }
]

📸 Compose action for question input

customer: This includes the employee profile and past procurement orders

{
  "firstName": "Alex",
  "lastName": "Taylor",
  "department": "IT",
  "employeeId": "E12345",
  "orders": [
    {
      "name": "Dell Latitude 5540 Laptop",
      "description": "Ordered 15 units for Q1 IT onboarding",
      "date": "2024/02/20"
    },
    {
      "name": "Docking Station",
      "description": "Bulk purchase of 20 Dell WD19S docking stations",
      "date": "2024/01/10"
    }
  ]
}

📸 Compose action for customer input

📌 Step 2: Add the "Chat Completions using Prompt Template" action

📸 OpenAI connector view

💡 Tip: Always prefer the in-app connector (built-in) over the managed version when choosing the Azure OpenAI operation. Built-in connectors allow better control over authentication and reduce latency by running natively inside the Logic App runtime.

📌 Step 3: Connect to Azure OpenAI

Navigate to your Azure OpenAI resource and click on Keys and Endpoint for connecting using key-based authentication.

📸 Create Azure OpenAI connection

📝 Prompt template: Building the message for chat completions

Once you've added the Get chat completions using Prompt Template action, here's how to set it up:

1. Deployment Identifier
Enter the name of your deployed Azure OpenAI model here (e.g., gpt-4o).
📌 This should match exactly with what you configured in your Azure OpenAI resource.

2. Prompt Template
This is the structured instruction that the model will use. Here’s the full template used in the action — note that the variable names exactly match the Compose action names in your Logic App: documents, question, and customer.

system:
You are an AI assistant for Contoso's internal procurement team. You help employees get quick answers about previous orders and product catalog details. Be brief, professional, and use markdown formatting when appropriate. Include the employee’s name in your response for a personal touch.

# Product Catalog
Use this documentation to guide your response. Include specific item names and any relevant descriptions.
{% for item in documents %}
Catalog Item ID: {{item.id}}
Name: {{item.title}}
Description: {{item.content}}
{% endfor %}

# Order History
Here is the employee's procurement history to use as context when answering their question.
{% for item in customer.orders %}
Order Item: {{item.name}}
Details: {{item.description}} — Ordered on {{item.date}}
{% endfor %}

# Employee Info
Name: {{customer.firstName}} {{customer.lastName}}
Department: {{customer.department}}
Employee ID: {{customer.employeeId}}

# Question
The employee has asked the following:
{% for item in question %}
{{item.role}}: {{item.content}}
{% endfor %}

Based on the product documentation and order history above, please provide a concise and helpful answer to their question. Do not fabricate information beyond the provided inputs.

📸 Prompt template action view

3. Add your prompt template variables
Scroll down to Advanced parameters → switch the dropdown to Prompt Template Variable.
Then, add a new item for each Compose action and reference it dynamically from previous outputs:
- documents
- question
- customer

📸 Prompt template variable references

🔍 How the template works

Template element | What it does
{{ customer.firstName }} {{ customer.lastName }} | Displays employee name
{{ customer.department }} | Adds department context
{{ question[0].content }} | Injects the user’s question from the Compose action named question
{% for doc in documents %} | Loops through catalog data from the Compose action named documents
{% for order in customer.orders %} | Loops through employee’s order history from customer

Each of these values is dynamically pulled from your Logic App Compose actions — no code, no external services needed. You can apply the exact same approach to reference data from any connector, like a SharePoint list, SQL row, email body, or even AI Search results. Just map those outputs into the Prompt Template and let Logic Apps do the rest.

✅ Final Output

When you run the flow, the model might respond with something like:

"The last order for Dell Latitude 5540 laptops was placed on February 20, 2024 — 15 units were procured for IT new hire onboarding."

This is based entirely on the structured context passed in through your Logic App — no extra fine-tuning required.

📸 Output from run history

💬 Feedback

Let us know what other kinds of demos and content you would like to see using this form.
Concurrency support for Service Bus built-in connector in Logic Apps Standard

In this post, we'll cover the recent enhancements in the built-in, or InApp, Service Bus connector in Logic Apps Standard. Specifically, we'll cover the support for concurrency for the Service Bus trigger... (a rough configuration sketch follows below).
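As a rough illustration of where trigger concurrency is typically expressed in a Standard workflow definition: this is my own sketch, under the assumption that the built-in Service Bus trigger honors the standard runtimeConfiguration.concurrency setting; the trigger name and queue are made up, the trigger inputs are elided, and the exact options supported by this connector should be verified in the post and product documentation.

{
  "triggers": {
    "When_messages_are_available_in_a_queue": {
      "type": "ServiceProvider",
      "inputs": { "note": "Service Bus trigger inputs (queue name, connection) elided for brevity" },
      "runtimeConfiguration": {
        "concurrency": {
          "runs": 10
        }
      }
    }
  }
}

The intent of a setting like "runs": 10 is to cap how many trigger instances the workflow processes in parallel, which is the knob that matters when a queue can burst faster than downstream systems can absorb.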