Logic Apps Agentic Workflows with SAP - Part 2: AI Agents
Part 2 focuses on the AI-shaped portion of the destination workflows: how the Logic Apps Agent is configured, how it pulls business rules from SharePoint, and how its outputs are converted into concrete workflow artifacts. In Destination workflow #1, the agent produces three structured outputs—an HTML validation summary, a CSV list of InvalidOrderIds, and an Invalid CSV payload—which drive (1) a verification email, (2) an optional RFC call to persist failed rows as IDocs, and (3) a filtered dataset used for the separate analysis step that returns only analysis (or errors) back to SAP. In Destination workflow #2, the same approach is applied to inbound IDocs: the workflow reconstructs CSV from the custom segment, runs AI validation against the same SharePoint rules, and safely appends results to an append blob using a lease-based write pattern for concurrency.

1. Introduction

In Part 1, the goal was to make the integration deterministic: stable payload shapes, stable response shapes, and predictable error propagation across SAP and Logic Apps. Concretely, Part 1 established:

- how SAP reaches Logic Apps (Gateway/Program ID plumbing)
- the RFC contracts (IT_CSV, response envelope, RETURN/MESSAGE, EXCEPTIONMSG)
- how the source workflow interprets RFC responses (success vs error)
- how invalid rows can be persisted into SAP as custom IDocs (Z_CREATE_ONLINEORDER_IDOC)
- and how the second destination workflow receives those IDocs asynchronously

With that foundation in place, Part 2 zooms in on the part that is not just plumbing: the agent loop, the tool boundaries, and the output schemas that make AI results usable inside a workflow rather than "generated text you still need to interpret."

The diagram below highlights the portion of the destination workflow where AI is doing real work. The red-circled section is the validation agent loop (rules in, structured validation outputs out), which then fans out into operational actions like email notification, optional IDoc persistence, and filtering for the analysis step.

What matters here is the shape of the agent outputs and how they are consumed by the rest of the workflow. The agent is not treated as a black box; it is forced to emit typed, workflow-friendly artifacts (summary + invalid IDs + filtered CSV). Those artifacts are then used deterministically: invalid rows are reported (and optionally persisted as IDocs), while valid rows flow into the analysis stage and ultimately back to SAP.

What this post covers

In this post, I focus on five practical topics:

- Agent loop design in Logic Apps: tools, message design, and output schemas that make the agent's results deterministic enough to automate.
- External rule retrieval: pulling validation rules from SharePoint and applying them consistently to incoming payloads.
- Structured validation outputs → workflow actions: producing InvalidOrderIds and a filtered CSV payload that directly drive notifications and SAP remediation.
- Two-model pattern: a specialized model for validation (agent) and a separate model call for analysis, with a clean handoff between the two.
- Output shaping for consumption: converting AI output into HTML for email and into the SAP response envelope (analysis/errors only).

(Everything else—SAP plumbing, RFC wiring, and response/exception patterns—was covered in Part 1 and is assumed here.)

Next, I'll break down the agent loop itself—the tool sequence, the required output fields, and the exact points where the workflow turns AI output into variables, emails, and SAP actions.
Huge thanks to KentWeareMSFT for helping me understand agent loops and design the validation agent structure. And thanks to everyone in 🤖 Agent Loop Demos 🤖 | Microsoft Community Hub for making such great material available.

Note: For the full set of assets used here, see the companion GitHub repository (workflows, schemas, SAP ABAP code, and sample files).

2. Validation Agent Loop

In this solution, the Data Validation Agent runs inside the destination workflow after the inbound SAP payload has been normalized into a single CSV string. The agent is invoked as a single Logic Apps Agent action, configured with an Azure OpenAI deployment and a short set of instructions. Its inputs are deliberately simple at this stage: the CSV payload (the dataset to validate), and the ValidationRules reference (where the rule document lives), shown in the instructions as a parameter token.

The figure below shows the validation agent configuration used in the destination workflow. The top half is the Agent action configuration (model + instructions), and the bottom half shows the toolset that the agent is allowed to use. The key design choice is that the agent is not "free-form chat": it's constrained by a small number of tools and a workflow-friendly output contract.

What matters most in this configuration is the separation between instructions and tools. The instructions tell the agent what to do ("follow business process steps 1–3"), while the tools define how the agent can interact with external systems and workflow state. This keeps the agent modular: you can change rules in SharePoint or refine summarization expectations without rewriting the overall SAP integration mechanics.

Purpose

This agent's job is narrowly scoped: validate the CSV payload from SAP against externally stored business rules and produce outputs that the workflow can use deterministically. In other words, it turns "validation as reasoning" into workflow artifacts (summary + invalid IDs + invalid payload), instead of leaving validation as unstructured prose.

In Azure Logic Apps terms, this is an agent loop: an iterative process where an LLM follows instructions and selects from available tools to complete a multi-step task. Logic Apps agent workflows explicitly support this "agent chooses tools to complete tasks" model (see Agent Workflows Concepts).

Tools

In Logic Apps agent workflows, a tool is a named sequence that contains one or more actions the agent can invoke to accomplish part of its task (see Agent Workflows Concepts). In the screenshot, the agent is configured with three tools, explicitly labeled Get validation rules, Get CSV payload, and Summarize CSV payload review. These tool names match the business process in the "Instructions for agent" box (steps 1–3). The next sections of the post go deeper into what each tool does internally; at this level, the important point is simply that the agent is constrained to a small, explicit toolset.

Agent execution

The screenshot shows the agent configured with:

- AI model: gpt-5-3 (gpt-5)
- A connection line: "Connected to … (Azure OpenAI)"
- Instructions for agent that define the agent's role and a 3-step business process:
  1. Get validation rules (via the ValidationRules reference)
  2. Get CSV payload
  3. Summarize the CSV payload review, using the validation document

This pattern is intentional:

- The instructions provide the agent's "operating procedure" in plain language.
- The tools give the agent controlled ways to fetch the rule document, access the CSV input, and return structured results.
Because the workflow consumes the agent's outputs downstream, the instruction text is effectively part of your workflow contract (it must remain stable enough that later actions can trust the output shape).

Note: If a reader wants to recreate this pattern, the fastest path is:

1. Start with the official overview of agent workflows (Workflows with AI Agents and Models - Azure Logic Apps).
2. Follow a hands-on walkthrough for building an agent workflow and connecting it to an Azure OpenAI deployment (Logic Apps Labs is a good step-by-step reference).
3. Use the Azure OpenAI connector reference to understand authentication options and operations available in Logic Apps Standard (see Built-in OpenAI Connector).
4. If you're using Foundry for resource management, review how Foundry connections are created and used, especially when multiple resources/tools are involved (see How to connect to AI foundry).

2.1 Tool 1: Get validation rules

The first tool in the validation agent loop is Get validation rules. Its job is to load the business validation rules that will be applied to the incoming CSV payload from SAP. I keep these rules outside the workflow (in a document) so they can be updated without redeploying the Logic App. In this example, the rules are stored in SharePoint, and the tool simply retrieves the document content at runtime.

Get validation rules is implemented as a single action called Get validation document. In the designer, you can see it:

- uses a SharePoint Online connection (SharePoint icon and connector action)
- calls GetFileContentByPath (shown by the "File Path" input)
- reads the rule file from the configured Site Address
- uses the workflow parameter token ValidationRules for the File Path (so the exact rule file location is configurable per environment)

The output of this tool is the raw rule document content, which the Data Validation Agent uses in the next steps to validate the CSV payload.

The bottom half of the figure shows an excerpt of the rules document. The format is simple and intentionally human-editable: each rule is expressed as FieldName : condition. For example, the visible rules include:

- PaymentMethod : value must exist
- PaymentMethod : value cannot be "Cash"
- OrderStatus : value must be different from "Cancelled"
- CouponCode : value must have at least 1 character
- OrderID : value must be unique in the CSV array
- A scope note: "Do not validate the Date field."

These rules are the "source of truth" for validation. The workflow does not hardcode them into expressions; instead, it retrieves them from SharePoint and passes them into the agent loop so the validation logic remains configurable and auditable (you can always point to the exact rule document used for a given run).

A small but intentional rule in the document is "Do not validate the Date field." That line is there for a practical reason: in an early version of the source workflow, the date column was being corrupted during CSV generation. The validation agent still tried to validate dates (even though date validation wasn't part of the original intent), and the result was predictable: every row failed validation, leaving nothing to analyze. The upstream issue is fixed now, but I kept this rule in the demo to illustrate an important point: validation is only useful when it's aligned with the data contract you can actually guarantee at that point in the pipeline.

Note: The rules shown here assume the CSV includes a header row (field names in the first line) so the agent can interpret each column by name.
If you want the agent to be schema-agnostic, you can extend the rules with an explicit column mapping, for example:

Column 1: Order ID
Column 2: Date
Column 3: Customer ID
…

This makes the contract explicit even when headers are missing or unreliable.

With the rules loaded, the next tool provides the second input the agent needs: the CSV payload that will be validated against this document.

2.2 Tool 2: Get CSV payload

The second tool in the validation agent loop is Get CSV payload. Its purpose is to make the dataset-to-validate explicit: it defines exactly what the agent should treat as "the CSV payload," rather than relying on implicit workflow context. In this workflow, the CSV is already constructed earlier (as Create_CSV_payload), and this tool acts as the narrow bridge between that prepared string and the agent's validation step.

Figure: Tool #2 ("Get CSV payload") defines a single agent parameter and binds it to the workflow's generated CSV.

The figure shows two important pieces:

- The tool parameter contract ("Agent Parameters")
On the right, the tool defines an agent parameter named CSV Payload with type String, and the description (highlighted in yellow) makes the intent explicit: "The CSV payload received from SAP and that we validate based on the validation rules." This parameter is the tool's interface: it documents what the agent is supposed to provide/consume when using this tool, and it anchors the rest of the validation process to a single, well-defined input. Tools in Logic Apps agent workflows exist specifically to constrain and structure what an agent can do and what data it operates on (see Agent Workflows Concepts).

- Why there is an explicit Compose action ("CSV payload")
In the lower-right "Code view," the tool's internal action is shown as a standard Compose:

{
  "type": "Compose",
  "inputs": "@outputs('Create_CSV_payload')"
}

This is intentional. Even though the CSV already exists in the workflow, the tool still needs a concrete action that produces the value it returns to the agent. The Compose step:

- pins the tool output to a single source of truth (Create_CSV_payload), and
- creates a stable boundary: "this is the exact CSV string the agent validates," independent of other workflow state.

Put simply: the Compose action isn't there because Logic Apps can't access the CSV—it's there to make the agent/tool interface explicit, repeatable, and easy to troubleshoot.

What "tool parameters" are (in practical terms)

In Logic Apps agent workflows, a tool is a named sequence of one or more actions that the agent can invoke while executing its instructions. A tool parameter is the tool's input/output contract exposed to the agent. In this screenshot, that contract is defined under Agent Parameters, where you specify:

- Name: CSV Payload
- Type: String
- Description: "The CSV payload received from SAP…"

This matters because it clarifies (for both the model and the human reader) what the tool represents and what data it is responsible for supplying.

With Tool #1 providing the rules document and Tool #2 providing the CSV dataset, Tool #3 is where the agent produces workflow-ready outputs (summary + invalid IDs + filtered payload) that the downstream steps can act on.

2.3 Tool 3: Summarize CSV payload review

The third tool, Summarize CSV payload review, is where the agent stops being "an evaluator" and becomes a producer of workflow-ready outputs. It does most of the heavy lifting, so let's go into the details.
Instead of returning one blob of prose, the tool defines three explicit agent parameters—each with a specific format and purpose—so the workflow can reliably consume the results in downstream actions. In Logic Apps agent workflows, tools are explicitly defined tasks the agent can invoke, and each tool can be structured around actions and schemas that keep the loop predictable (see Agent Workflows Concepts).

Figure: Tool #3 ("Summarize CSV payload review") defines three structured agent outputs.

Description is not just documentation—it's the contract the model is expected to satisfy, and it strongly shapes what the agent considers "relevant" when generating outputs. The parameters are:

Validation summary (String)
Goal: a human-readable summary that can be dropped straight into email. In the screenshot, the description is very explicit about shape and content:
- "expected format is an HTML table"
- "create a list of all orderids that have failed"
- "create a CSV document… only for the orderid values that failed… each row on a separate line"
- "include title row only in the email body"
This parameter is designed for presentation: it's the thing you want humans to read first.

InvalidOrderIds (String, CSV format)
Goal: a machine-friendly list of identifiers the workflow can use deterministically. The key part of the description (highlighted in the image) is: "The format is CSV." That single sentence is doing a lot of work: it tells the model to emit a comma-separated list, which you then convert into an array in the workflow using split(...).

Invalid CSV payload (String, one row per line)
Goal: the failed rows extracted from the original dataset, in a form that downstream steps can reuse. The description constrains the output tightly:
- "original CSV rows… for the orderid values that failed validation"
- "each row must be on a separate line"
- "keep the title row only for the email body and remove it otherwise"
This parameter is designed for automation: it becomes input to remediation steps (like transforming rows to XML and creating IDocs), not just a report.

What "agent parameters" do here (and why they matter)

A useful way to think about agent parameters is: they are the "typed return values" of a tool. Tools in agent workflows exist to structure work into bounded tasks the agent can perform, and a schema/parameter contract makes the results consumable by the rest of the workflow (see Agent Workflows Concepts). In this tool, the parameters serve two purposes at once:

- They guide the agent toward salient outputs. The descriptions explicitly name what matters: "failed orderids," "HTML table," "CSV format," "one row per line," "header row rules." That phrasing makes it much harder for the model to "wander" into irrelevant commentary.
- They align with how the workflow will parse and use the results. By stating "InvalidOrderIds is CSV," you make it trivially parseable (split), and by stating "Invalid CSV payload is one row per line," you make it easy to feed into later transformations.

Why the wording works (and what wording tends to work best)

What's interesting about the parameter descriptions is that they combine three kinds of constraints:

Output format constraints (make parsing deterministic)
- "expected format is an HTML table"
- "The format is CSV."
- "each row must be on a separate line"
These format cues help the agent decide what to emit and help you avoid brittle parsing later.
Output selection constraints (force relevance)
- "only for the orderid values that failed validation"
- "Create a list of all orderids that have failed"
This tells the agent what to keep and what to ignore.

Output operational constraints (tie outputs to downstream actions)
- "Include title row only in the email body"
- "remove it otherwise"
This explicitly anticipates downstream usage (email vs remediation), which is exactly the kind of detail models often miss unless you state it.

Rule of thumb: wording works best when it describes what to produce, in what format, with what filtering rules, and why the workflow needs it.

How these parameters tie directly to the downstream actions

The next picture makes the design intent very clear: each parameter is immediately "bound" to a normal workflow value via Compose actions and then used by later steps. This is the pattern we want: agent output → Compose → (optional) normalization → reused by deterministic workflow actions. It's the opposite of "read the model output and hope."

This is the reusable pattern:

1. Decide the minimal set of outputs the workflow needs.
2. Specify formats that are easy to parse.
3. Write parameter descriptions that encode both selection and formatting constraints.
4. Immediately bind outputs to workflow variables via Compose/SetVariable actions.

The main takeaway from this tool is that the agent is being forced into a structured contract: three outputs with explicit formats and clear intent. That contract is what makes the rest of the workflow deterministic—Compose actions can safely read @agentParameters(...), the workflow can safely split(...) the invalid IDs, and downstream actions can treat the "invalid payload" as real data rather than narrative. I'll show later how this same "parameter-first" design scales to other agent tools.

2.4 Turning agent outputs into a verification email

Once the agent has produced structured outputs (Validation summary, InvalidOrderIds, and Invalid CSV payload), the next goal is to make those outputs operational: humans need a quick summary of what failed, and the workflow needs machine-friendly values it can reuse downstream. The design here is intentionally straightforward: the workflow converts each agent parameter into a first-class workflow output (via Compose actions and one variable assignment), then binds those values directly into the Office 365 email body. The result is an email that is both readable and actionable—without anyone needing to open run history.

The figure below shows how the outputs of Summarize CSV payload review are mapped into the verification email. On the left, the tool produces three values via subsequent actions (Summary, Invalid order ids, and Invalid CSV payload), and the workflow also normalizes the invalid IDs into an array (Save invalid order ids). On the right, the Send verification summary action composes the email body using those same values as dynamic content tokens.

Figure: Mapping agent outputs to the verification email

The important point is that the email is not constructed by "re-prompting" or "re-summarizing." It is assembled from already-structured outputs. This mapping is intentionally direct: each piece of the email corresponds to one explicit output from the agent tool. The workflow doesn't interpret or transform the summary beyond basic formatting—its job is to preserve the agent's structured outputs and present them consistently.
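To make the binding concrete, here is a minimal code-view sketch of that pattern for the invalid IDs. The action, parameter, and variable names follow the designer labels above, but the exact JSON in the companion repository may differ:

"Invalid_order_ids": {
  "type": "Compose",
  "inputs": "@agentParameters('InvalidOrderIds')"
},
"Save_invalid_order_ids": {
  "type": "SetVariable",
  "inputs": {
    "name": "ArrayOfInvalidOrderIDs",
    "value": "@split(outputs('Invalid_order_ids'), ',')"
  }
}

The Compose pins the agent parameter to a named workflow output, and the SetVariable turns the comma-separated string into an array that later filtering steps can iterate over.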
The only normalization step happens for InvalidOrderIds, where the workflow converts the CSV string into an array (ArrayOfInvalidOrderIDs) for later filtering and analysis steps.

The next figure shows a sample verification email produced by this pipeline. It illustrates the three-part structure: an HTML validation summary table, the raw invalid order ID list, and the extracted invalid CSV rows:

Figure: Sample verification email — validation summary table + invalid order IDs + invalid CSV rows.

The extracted artifacts InvalidOrderIds and Invalid CSV payload are used in the downstream actions that persist failed rows as IDocs for later processing, which were presented in Part 1. I will get back to this later to talk about reusing the validation agent. Next, however, I will go over the data analysis part of the AI integration.

3. Analysis Phase: from validated dataset to HTML output

After the validation agent loop finishes, the workflow enters a second AI phase: analysis. The validation phase is deliberately about correctness (what to exclude and why). The analysis phase is about insight, and it runs on the remaining dataset after invalid rows are filtered out.

At a high level, this phase has three steps:

1. Call Azure OpenAI to analyze the CSV dataset while explicitly excluding invalid OrderIDs.
2. Extract the model's text output from the OpenAI response object.
3. Convert the model's markdown output into HTML so it renders cleanly in email (and in the SAP response envelope).

3.1 OpenAI component: the "Analyze data" call

The figure below shows the Analyze data action that drives the analysis phase. This action is executed after the Data Validation Agent completes, and it uses three messages: a system instruction that defines the task, the CSV dataset as input, and a second user message that enumerates the OrderIDs to exclude (the invalid IDs produced by validation).

Figure: Azure OpenAI analysis call.

The analysis call is structured as:

system: define the task and constraints
user:   provide the dataset
user:   provide exclusions derived from validation

which in this workflow looks like:

system: Analyze dataset; provide trends/predictions; exclude specified orderids.
user:   <CSV payload>
user:   Excluded orderids: <comma-separated invalid ids>

Two design choices are doing most of the work here:

- The model is given the dataset and the exclusions separately. This avoids ambiguity: the dataset is one message, and the "do not include these OrderIDs" constraint is another.
- The exclusion list is derived from validation output, not re-discovered during analysis. The analysis step doesn't re-validate; it consumes the validation phase's results and focuses purely on trends/predictions.

3.2 Processing the response

The next figure shows how the workflow turns the Azure OpenAI response into a single string that can be reused for email and for the SAP response. The workflow does three things in sequence: it parses the response JSON, extracts the model's text content, and then passes that text into an HTML formatter.

Figure: Processing the OpenAI response.

This is the only part of the OpenAI response you need to understand for this workflow:

Analyze_data response
└─ choices[] (array)
   └─ [0] (object)
      └─ message (object)
         └─ content (string)   <-- analysis text

Everything else in the OpenAI response (filters, indexes, metadata) is useful for auditing but not required to build the final user-facing output.
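In expression form, the extraction step can be as small as a single Compose. A minimal sketch, assuming the parsed response is available from an action named Parse_analysis_response (the parse action name is illustrative; the Extract_analysis name matches the later formatting script, but the exact expression in the repository may differ):

"Extract_analysis": {
  "type": "Compose",
  "inputs": "@first(body('Parse_analysis_response')?['choices'])?['message']?['content']"
}

Using first() avoids hard-coding the array index, and the ? operators keep the expression from failing outright if a level of the response shape is missing.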
3.3 Crafting the output to HTML

The model's output is plain text and often includes lightweight markdown structures (headings, lists, separators). To make the analysis readable in email (and safe to embed in the SAP response envelope), the workflow converts the markdown into HTML. The script was generated with Copilot; the source code snippet can be found in Part 1.

The next figure shows what the formatted analysis looks like when rendered. Note the explicit reference to the excluded OrderIDs and the summary of the remaining dataset before the trend observations are listed.

Figure: Example analysis output after formatting.

4. Closing the loop: persisting invalid rows as IDocs

In Part 1, I introduced an optional remediation branch: when validation finds bad rows, the workflow can persist them into SAP as custom IDocs for later handling. In Part 2, after unpacking the agent loop, I want to reconnect those pieces and show the "end of the story": the destination workflow creates IDocs for invalid data, and a second destination workflow receives those IDocs and produces a consolidated audit trail in Blob Storage.

This final section is intentionally pragmatic. It shows:

- where the IDoc creation call happens,
- how the created IDocs arrive downstream, and
- how to safely handle many concurrent workflow instances writing to the same storage artifact (one instance per IDoc).

4.1 From "verification summary" to "Create all IDocs"

The figure below shows the tail end of the verification summary flow. Once the agent produces the structured validation outputs, the workflow first emails the human-readable summary, then converts the invalid CSV rows into an SAP-friendly XML shape, and finally calls the RFC that creates IDocs from those rows.

Figure: End of the validation/remediation branch.

This is deliberately a "handoff point." After this step, the invalid rows are no longer just text in an email—they become durable SAP artifacts (IDocs) that can be routed, retried, and processed independently of the original workflow run.

4.2 Z_CREATE_ONLINEORDER_IDOC and the downstream receiver

The next figure is the same overview from Part 1. I'm reusing it here because it captures the full loop: the workflow calls Z_CREATE_ONLINEORDER_IDOC, SAP converts the invalid rows into custom IDocs, and Destination workflow #2 receives those IDocs asynchronously (one workflow run per IDoc).

Figure 2: Invalid rows persisted as custom IDocs.

This pattern is intentionally modular:

- Destination workflow #1 decides which rows are invalid and optionally persists them.
- SAP encapsulates the IDoc creation mechanics behind a stable RFC (Z_CREATE_ONLINEORDER_IDOC).
- Destination workflow #2 processes each incoming IDoc independently, which matches how IDoc-driven integrations typically behave in production.

4.3 Two phases in Destination workflow #2: AI agent + Blob Storage logging

In the receiver workflow, there are two distinct phases:

- AI agent phase (per-IDoc): reconstruct a CSV view from the incoming IDoc payload and (optionally) run the same validation logic.
- Blob storage phase (shared output): append a normalized "verification line" into a shared blob in a concurrency-safe way.

It's worth calling out: in this demo, the IDocs being received were created from already-validated outputs upstream, so you could argue the second validation is redundant.
I keep it anyway for two reasons: it demonstrates that the agent tooling is reusable with minimal changes, and in a general integration, Destination workflow #2 may receive IDocs from multiple sources, not only from this pipeline—so "validate on receipt" can still be valuable.

4.3.1 AI agent phase

The figure below shows the validation agent used in Destination workflow #2. The key difference from the earlier agent loop is the output format: instead of producing an HTML summary + invalid lists, this agent writes a single "audit line" that includes the IDoc correlation key (DOCNUM) along with the order ID and the failed rules.

Figure: Destination workflow #2 agent configuration.

The reusable part here is the tooling structure: rules still come from the same validation document, the dataset is still supplied as CSV, and the summarization tool outputs a structured value the workflow can consume deterministically. The only meaningful change is "what shape do I want the output to take," which is exactly what the agent parameter descriptions control.

The next figure zooms in on the summarization tool parameter in Destination workflow #2. Instead of three outputs, this tool uses a single parameter (VerificationInfo) whose description forces a consistent line format anchored on DOCNUM.

Figure 4: VerificationInfo parameter.

This is the same design principle as Tool #3 in the first destination workflow: describe the output as a contract, not as a vague request. The parameter description tells the agent exactly what must be present (DOCNUM + OrderId + failed rules) and therefore makes it straightforward to append the output to a shared log without additional parsing.

Interesting snippets

Extracting DOCNUM from the IDoc control record and carrying it through the run:

xpath(xml(triggerBody()?['content']),
  'string(/*[local-name()="Receive"]
    /*[local-name()="idocData"]
    /*[local-name()="EDI_DC40"]
    /*[local-name()="DOCNUM"])')

4.3.2 Blob Storage phase

Destination workflow #2 runs one instance per inbound IDoc. That means multiple runs can execute at the same time, all trying to write to the same "ValidationErrorsYYYYMMDD.txt" artifact. The figure below shows the resulting appended output: one line per IDoc, each line beginning with DOCNUM, which becomes the stable correlation key.

Because multiple instances can attempt to write to the same daily "validation errors" append blob at the same time, the next figure shows the concurrency control pattern I used to make those writes safe: a short lease acquisition loop that retries until it owns the blob lease, then appends the verification line(s), and finally releases the lease.

Figure: Concurrency-safe append pattern.

Reading the diagram top-to-bottom, the workflow uses a simple lease → append → release pattern to make concurrent writes safe. Each instance waits briefly (Delay), attempts to acquire a blob lease (Acquire validation errors blob lease), and loops until it succeeds (Set status code → Until lease is acquired). Once a lease is obtained, the workflow stores the lease ID (Save lease id), appends its verification output under that lease (Append verification results), and then releases the lease (Release the lease) so the next workflow instance can write.

Implementation note: the complete configuration for this concurrency pattern (including the HTTP actions, headers, retries, and loop conditions) is included in the attached artifacts, in the workflow JSON for Destination workflow #2.
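For orientation, the three storage calls behind that diagram look roughly like the following HTTP action inputs. This is a sketch only: the blob URL, API version, variable names, and authentication (SAS token or managed identity) are placeholders or assumptions, and the Delay/Until retry wiring is omitted; the authoritative version is the workflow JSON in the repository.

"Acquire_validation_errors_blob_lease": {
  "type": "Http",
  "inputs": {
    "method": "PUT",
    "uri": "https://<account>.blob.core.windows.net/<container>/ValidationErrorsYYYYMMDD.txt?comp=lease",
    "headers": { "x-ms-lease-action": "acquire", "x-ms-lease-duration": "15", "x-ms-version": "2021-08-06" }
  }
},
"Append_verification_results": {
  "type": "Http",
  "inputs": {
    "method": "PUT",
    "uri": "https://<account>.blob.core.windows.net/<container>/ValidationErrorsYYYYMMDD.txt?comp=appendblock",
    "headers": { "x-ms-lease-id": "@{variables('LeaseId')}", "x-ms-version": "2021-08-06" },
    "body": "@{variables('VerificationInfo')}\n"
  }
},
"Release_the_lease": {
  "type": "Http",
  "inputs": {
    "method": "PUT",
    "uri": "https://<account>.blob.core.windows.net/<container>/ValidationErrorsYYYYMMDD.txt?comp=lease",
    "headers": { "x-ms-lease-action": "release", "x-ms-lease-id": "@{variables('LeaseId')}", "x-ms-version": "2021-08-06" }
  }
}

The acquire call returns the lease ID in the x-ms-lease-id response header (which the Save lease id step stores); a 409 Conflict means another instance currently holds the lease, so the Until loop simply retries; and releasing promptly keeps the wait short for the other instances.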
5. Concluding remarks

Part 2 zoomed in on the AI boundary inside the destination workflows and made it concrete: what the agent sees, what it is allowed to do, what it must return, and how those outputs drive deterministic workflow actions. The practical outcomes of Part 2 are:

- A tool-driven validation agent that produces workflow artifacts, not prose. The validation loop is constrained by tools and parameter schemas so its outputs are immediately consumable: an email-friendly validation summary, a machine-friendly InvalidOrderIds list, and an invalid-row payload that can be remediated.
- A clean separation between validation and analysis. Validation decides what not to trust (invalid IDs / rows) and analysis focuses on what is interesting in the remaining dataset. The analysis prompt makes the exclusion rule explicit by passing the dataset and excluded IDs as separate messages.
- A repeatable response-processing pipeline. You extract the model's text from a stable response path (choices[0].message.content), then shape it into HTML once (markdown → HTML) so the same formatted output can be reused for email and the SAP response envelope.
- A "reuse with minimal changes" pattern across workflows. Destination workflow #2 shows the same agent principles applied to IDoc reception, but with a different output contract optimized for logging: DOCNUM + OrderId + FailedRules. This demonstrates that the real reusable asset is the tool + parameter contract design.

Putting It All Together

We have a full integration story where SAP, Logic Apps, AI, and IDocs are connected with explicit contracts and predictable behavior:

Part 1 established the deterministic integration foundation:
- SAP ↔ Logic Apps connectivity (gateway/program wiring)
- RFC payload/response contracts (IT_CSV, response envelope, error semantics)
- predictable exception propagation back into SAP
- an optional remediation branch that persists invalid rows as IDocs via a custom RFC (Z_CREATE_ONLINEORDER_IDOC)
- and the end-to-end response handling pattern in the caller workflow.

Part 2 layered AI on top without destabilizing the contracts:
- agent loop + tools for rule retrieval and validation
- output schemas that convert "reasoning" into workflow artifacts
- a separate analysis step that consumes validated data and produces formatted results
- and an asynchronous IDoc receiver that logs outcomes safely under concurrency.

The reason it works as a two-part series is that the two layers evolve at different speeds:
- The integration layer (Part 1) should change slowly. It defines interoperability: payload shapes, RFC names, error contracts, and IDoc interfaces.
- The AI layer (Part 2) is expected to iterate. Prompts, rule documents, output formatting, and agent tool design will evolve as you tune behavior and edge cases.
References

- Logic Apps Agentic Workflows with SAP - Part 1: Infrastructure
- 🤖 Agent Loop Demos 🤖 | Microsoft Community Hub
- Agent Workflows Concepts
- Workflows with AI Agents and Models - Azure Logic Apps
- Built-in OpenAI Connector
- How to connect to AI foundry
- Create Autonomous AI Agent Workflows - Azure Logic Apps
- Handling Errors in SAP BAPI Transactions
- Access SAP from workflows
- Create common SAP workflows
- Generate Schemas for SAP Artifacts via Workflows
- Exception Handling | ABAP Keyword Documentation
- Handling and Propagating Exceptions - ABAP Keyword Documentation
- SAP .NET Connector 3.1 Overview
- SAP .NET Connector 3.1 Programming Guide
- Connect to Azure AI services from Workflows

All supporting content for this post may be found in the companion GitHub repository.

Logic Apps Agentic Workflows with SAP - Part 1: Infrastructure
When you integrate Azure Logic Apps with SAP, the "hello world" part is usually easy. The part that bites you later is data quality. In SAP-heavy flows, validation isn't a nice-to-have — it's what makes the downstream results meaningful.

If invalid data slips through, it can get expensive fast: you may create incorrect business documents, trigger follow-up processes, and end up in a cleanup path that's harder (and more manual) than building validation upfront. And in "all-or-nothing" transactional patterns, things get even more interesting: one bad record can force a rollback strategy, compensating actions, or a whole replay/reconciliation story you didn't want to own. See for instance Handling Errors in SAP BAPI Transactions | Microsoft Community Hub to get an idea of the complexity in a BizTalk context.

That's the motivation for this post: a practical starter pattern that you can adapt to many data shapes and domains for validating data in a Logic Apps + SAP integration.

Note: For the full set of assets used here, see the companion GitHub repository (workflows, schemas, SAP ABAP code, and sample files).

1. Introduction

Scenario overview

The scenario is intentionally simple, but it mirrors what shows up in real systems:

- A Logic App workflow sends CSV documents to an SAP endpoint.
- SAP forwards the payload to a second Logic App workflow that performs:
  - rule-based validation (based on pre-defined rules)
  - analysis/enrichment (market trends, predictions, recommendations)
- The workflow either:
  - returns validated results (or validation errors) to the initiating workflow, or
  - persists outputs for later use

For illustration, I'm using fictitious retail data. The content is made up, but the mechanics are generic: the same approach works for orders, inventory, pricing, master data feeds, or any "file in → decision out" integration. You'll see sample inputs and outputs below to keep the transformations concrete.

What this post covers

This walkthrough focuses on the integration building blocks that tend to matter in production:

- Calling SAP RFCs from Logic App workflows, and invoking Logic App workflows from SAP function modules
- Using the Logic Apps SAP built-in trigger
- Receiving and processing IDocs
- Returning responses and exceptions back to SAP in a structured, actionable way
- Data manipulation patterns in Logic Apps, including:
  - parsing and formatting
  - inline scripts
  - XPath (where it fits, and where it becomes painful)

Overall Implementation

A high-level view of the implementation is shown below. The source workflow handles end-to-end ingestion—file intake, transformation, SAP integration, error handling, and notifications—using Azure Logic Apps. The destination workflows focus on validation and downstream processing, including AI-assisted analysis and reporting, with robust exception handling across multiple technologies. I'll cover the AI portion in a follow-up post.

Note on AI-assisted development

Most of the workflow "glue" in this post—XPath, JavaScript snippets, and Logic Apps expressions—was built with help from Copilot and the AI assistant in the designer (see Get AI-assisted help for Standard workflows - Azure Logic Apps | Microsoft Learn). In my experience, this is exactly where AI assistance pays off: generating correct scaffolding quickly, then iterating based on runtime behavior.

I've also included SAP ABAP snippets for the SAP-side counterpart. You don't need advanced ABAP knowledge to follow along; the snippets are deliberately narrow and integration-focused.
I include them because it's hard to design robust integrations if you only understand one side of the contract. When you understand how SAP expects to receive data, how it signals errors, and where transactional boundaries actually are, you end up with cleaner workflows and fewer surprises.

2. Source Workflow

This workflow is a small, end-to-end "sender" pipeline: it reads a CSV file from Azure Blob Storage, converts the rows into the SAP table-of-lines XML shape expected by an RFC, calls Z_GET_ORDERS_ANALYSIS via the SAP connector, then extracts analysis or error details from the RFC response and emails a single consolidated result.

At a high level:

- Input: an HTTP request (used to kick off the run) + a blob name.
- Processing: CSV → array of rows → XML (IT_CSV table of lines) → RFC call
- Output: one email containing either the analysis (success path), or a composed error summary (failure path).

The diagram below summarizes the sender pipeline: HTTP trigger → Blob CSV (header included) → rows → SAP RFC → parse response → email.

Two design choices are doing most of the work here. First, the workflow keeps the CSV transport contract stable by sending the file as a verbatim list of lines—including the header—wrapped into ZTY_CSV_LINE/LINE elements under IT_CSV. Second, it treats the RFC response as the source of truth: EXCEPTIONMSG and RETURN/MESSAGE drive a single Has errors gate, which determines whether the email contains the analysis or a consolidated failure summary.

Step-by-step description

Phase 0 — Trigger

Trigger — When_an_HTTP_request_is_received
The workflow is invoked via an HTTP request trigger (stateful workflow).

Phase 1 — Load and split the CSV

Read file — Read_CSV_orders_from_blob
Reads the CSV from container onlinestoreorders using the blob name from @parameters('DataFileName').

Split into rows — Extract_rows
Splits the blob content on \r\n, producing an array of CSV lines.

Design note: Keeping the header row is useful when downstream validation or analysis wants column names, and it avoids implicit assumptions in the sender workflow.

Phase 2 — Shape the RFC payload

Convert CSV rows to SAP XML — Transform_CSV_to_XML
Uses JavaScript to wrap each CSV row (including the header line) into the SAP line structure and XML-escape special characters. The output is an XML fragment representing a table of ZTY_CSV_LINE rows.

Phase 3 — Call SAP and extract response fields

Call the RFC — [RFC] Call Z_GET_ORDERS_ANALYSIS
Invokes Z_GET_ORDERS_ANALYSIS with an XML body containing the IT_CSV table built from the transformed rows.

Extract error/status — Save_EXCEPTION_message and Save_RETURN_message
Uses XPath to pull EXCEPTIONMSG from the RFC response, and the structured RETURN/MESSAGE field.

Phase 4 — Decide success vs failure and notify

Initialize output buffer — Initialize_email_body
Creates the EmailBody variable used by both success and failure cases.

Gate — Has_errors
Determines whether to treat the run as failed based on EXCEPTIONMSG being different from "ok", or RETURN/MESSAGE being non-empty.

Send result — Send_an_email_(V2)
Emails either the extracted ANALYSIS (success), or a concatenated error summary including RETURN/MESSAGE plus message details (MESSAGE_V1 … MESSAGE_V4) and EXCEPTIONMSG.

Note: Because the header row is included in IT_CSV, the SAP-side parsing/validation treats the first line as column titles (or simply ignores it). The sender workflow stays "schema-agnostic" by design.
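Putting Phases 2 and 3 together, the request body sent to the RFC ends up shaped roughly as follows. This is an illustrative sketch: the sample rows are made up, and the xmlns shown is the RFC namespace typically used by the Logic Apps SAP connector, so verify it against the schema generated for your system.

<Z_GET_ORDERS_ANALYSIS xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
  <IT_CSV>
    <ZTY_CSV_LINE><LINE>OrderID,Date,CustomerID,PaymentMethod,OrderStatus,CouponCode</LINE></ZTY_CSV_LINE>
    <ZTY_CSV_LINE><LINE>1001,2025-05-01,C-042,CreditCard,Shipped,SPRING10</LINE></ZTY_CSV_LINE>
  </IT_CSV>
</Z_GET_ORDERS_ANALYSIS>

The first ZTY_CSV_LINE carries the header row, matching the design note above about keeping the file schema-agnostic on the sender side.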
Useful snippets

Snippet 1 — Split the CSV into rows

split(string(body('Read_CSV_orders_from_blob')?['content']), '\r\n')

Tip: If your CSV has a header row you don't want to send to SAP, switch back to:

@skip(split(string(body('Read_CSV_orders_from_blob')?['content']), '\r\n'), 1)

Snippet 2 — JavaScript transform: "rows → SAP table-of-lines XML"

const lines = workflowContext.actions.Extract_rows.outputs;

function xmlEscape(value) {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&apos;");
}

// NOTE: we don't want to keep empty lines (which can be produced by reading the blobs),
// the reason being that if the recipient uses a schema to validate the xml,
// it may reject it if it does not allow empty nodes.
const xml = lines
  .filter(line => line && line.trim() !== '') // keep only non-empty lines
  .map(line => `<ZTY_CSV_LINE><LINE>${xmlEscape(line)}</LINE></ZTY_CSV_LINE>`)
  .join('');

return { xml };

Snippet 3 — XPath extraction of response fields (namespace-robust)

EXCEPTIONMSG:

@xpath(body('[RFC]_Call_Z_GET_ORDERS_ANALYSIS')?['content'],
  'string(
    /*[local-name()="Z_GET_ORDERS_ANALYSISResponse"]
    /*[local-name()="EXCEPTIONMSG"])')

RETURN/MESSAGE:

@xpath(body('[RFC]_Call_Z_GET_ORDERS_ANALYSIS')?['content'],
  'string(
    /*[local-name()="Z_GET_ORDERS_ANALYSISResponse"]
    /*[local-name()="RETURN"]
    /*[local-name()="MESSAGE"])')

Snippet 4 — Failure email body composition

concat(
  'Error message: ', outputs('Save_RETURN_message'),
  ', details: ',
  xpath(body('[RFC]_Call_Z_GET_ORDERS_ANALYSIS')?['content'], 'string(//*[local-name()=\"MESSAGE_V1\"])'),
  xpath(body('[RFC]_Call_Z_GET_ORDERS_ANALYSIS')?['content'], 'string(//*[local-name()=\"MESSAGE_V2\"])'),
  xpath(body('[RFC]_Call_Z_GET_ORDERS_ANALYSIS')?['content'], 'string(//*[local-name()=\"MESSAGE_V3\"])'),
  xpath(body('[RFC]_Call_Z_GET_ORDERS_ANALYSIS')?['content'], 'string(//*[local-name()=\"MESSAGE_V4\"])'),
  '; ',
  'Exception message: ', outputs('Save_EXCEPTION_message'), '.')

3. SAP Support

To make the SAP/Logic Apps boundary simple, I model the incoming CSV as a table of "raw lines" on the SAP side. The function module Z_GET_ORDERS_ANALYSIS exposes a single table parameter, IT_CSV, typed using a custom line structure.

Figure: IT_CSV is a table of CSV lines (ZTY_CSV_LINE), with a single LINE field (CHAR2048).

IT_CSV uses the custom structure ZTY_CSV_LINE, which contains a single component LINE (CHAR2048). This keeps the SAP interface stable: the workflow can send CSV lines without SAP having to know the schema up front, and the parsing/validation logic can evolve independently.

The diagram below shows the plumbing that connects SAP to Azure Logic Apps in two common patterns: SAP sending IDocs to a workflow and SAP calling a remote-enabled endpoint via an RFC destination. I'm showing all three pieces together—the ABAP call site, the SM59 RFC destination, and the Logic Apps SAP built-in trigger—because most "it doesn't work" problems come down to a small set of mismatched configuration values rather than workflow logic.

The key takeaway is that both patterns hinge on the same contract: Program ID plus the SAP Gateway host/service. In SAP, those live in SM59 (TCP/IP destination, registered server program). In Logic Apps, the SAP built-in trigger listens using the same Program ID and gateway settings, while the trigger configuration (for example, IDoc format and degree of parallelism) controls how messages are interpreted and processed.
Once these values line up, the rest of the implementation becomes "normal workflow engineering": validation, predictable error propagation, and response shaping.

Before diving into workflow internals, I make the SAP-side contract explicit. The function module interface below shows the integration boundary: CSV lines come in as IT_CSV, results come back as ANALYSIS, and status/error information is surfaced both as a human-readable EXCEPTIONMSG and as a structured RETURN (BAPIRET2). I also use a dedicated exception (SENDEXCEPTIONTOSAPSERVER) to signal workflow-raised failures cleanly.

Contract (what goes over RFC):

- Input: IT_CSV (CSV lines)
- Outputs: ANALYSIS (analysis payload), EXCEPTIONMSG (human-readable status)
- Return structure: RETURN (BAPIRET2) for structured SAP-style success/error
- Custom exception: SENDEXCEPTIONTOSAPSERVER for workflow-raised failures

Here is the ABAP wrapper that calls the remote implementation and normalizes the result.

FUNCTION z_get_orders_analysis.
*"----------------------------------------------------------------------
*" This module acts as a caller wrapper.
*" Important: the remote execution is determined by DESTINATION.
*" Even though the function name is the same, this is not recursion:
*" the call runs in the remote RFC server registered under DESTINATION "DEST".
*"----------------------------------------------------------------------
*" Contract:
*"   TABLES     it_csv        "CSV lines
*"   IMPORTING  analysis      "Result payload
*"   EXPORTING  exceptionmsg  "Human-readable status / error
*"   CHANGING   return        "BAPIRET2 return structure
*"   EXCEPTIONS sendexceptiontosapserver
*"----------------------------------------------------------------------

  CALL FUNCTION 'Z_GET_ORDERS_ANALYSIS'
    DESTINATION dest
    IMPORTING
      analysis = analysis
    TABLES
      it_csv = it_csv
    CHANGING
      return = return
    EXCEPTIONS
      sendexceptiontosapserver = 1
      system_failure           = 2 MESSAGE exceptionmsg
      communication_failure    = 3 MESSAGE exceptionmsg
      OTHERS                   = 4.

  CASE sy-subrc.
    WHEN 0.
      exceptionmsg = 'ok'.
      "Optional: normalize success into RETURN for callers that ignore EXCEPTIONMSG
      IF return-type IS INITIAL.
        return-type = 'S'.
        return-message = 'OK'.
      ENDIF.
    WHEN 1.
      exceptionmsg = |Exception from workflow: SENDEXCEPTIONTOSAPSERVER { sy-msgv1 }{ sy-msgv2 }{ sy-msgv3 }{ sy-msgv4 }|.
      return-type = 'E'.
      return-message = exceptionmsg.
    WHEN 2 OR 3.
      "system_failure / communication_failure usually already populate exceptionmsg
      IF exceptionmsg IS INITIAL.
        exceptionmsg = |RFC system/communication failure.|.
      ENDIF.
      return-type = 'E'.
      return-message = exceptionmsg.
    WHEN OTHERS.
      exceptionmsg = |Error in workflow: { sy-msgv1 }{ sy-msgv2 }{ sy-msgv3 }{ sy-msgv4 }|.
      return-type = 'E'.
      return-message = exceptionmsg.
  ENDCASE.

ENDFUNCTION.

The wrapper is intentionally small: it forwards the payload to the remote implementation via the RFC destination and then normalizes the outcome into a predictable shape. The point isn't fancy ABAP — it's reliability. With a stable contract (IT_CSV, ANALYSIS, RETURN, EXCEPTIONMSG) the Logic Apps side can evolve independently while SAP callers still get consistent success/error semantics.

Important: in CALL FUNCTION 'Z_GET_ORDERS_ANALYSIS' DESTINATION dest, the name of the called function should be the same as the name of the ABAP wrapper function module (Z_GET_ORDERS_ANALYSIS), the reason being that the SAP built-in trigger in the Logic App uses the function module signature as the contract (i.e., metadata).
To sum up, the integration is intentionally shaped around three outputs: the raw input table (IT_CSV), a standardized SAP return structure (RETURN / BAPIRET2), and a readable status string (EXCEPTIONMSG). The custom exception (SENDEXCEPTIONTOSAPSERVER) gives me a clean way to surface workflow failures back into SAP without burying them inside connector-specific error payloads. This is depicted in the figure below.

4. Destination Workflow

The diagram below shows the destination workflow at a high level. I designed it as a staged pipeline: guard early, normalize input, validate, and then split the workload into two paths—operational handling of invalid records (notifications and optional IDoc remediation) and analysis of the validated dataset. Importantly, the SAP response is intentionally narrow: SAP receives only the final analysis (or a structured error), while validation details are delivered out-of-band via email.

How to read this diagram

- Guardrail: Validate requested action ensures the workflow only handles the expected request.
- Normalize: Create CSV payload converts the inbound content into a consistent CSV representation.
- Validate: Data Validation Agent identifies invalid records (and produces a summary).
- Operational handling (invalid data): invalid rows are reported by email and may optionally be turned into IDocs (right-hand block).
- Analyze (valid data): Analyze data runs only on the validated dataset (invalid IDs excluded).
- Outputs: users receive Email analysis, while SAP receives only the analysis (or a structured error) via Respond to SAP server.

Figure: Destination workflow with staged validation, optional IDoc remediation, and an SAP response.

Reading the workflow top-to-bottom, the main design choice is separation of concerns. Validation is used to filter and operationalize bad records (notify humans, optionally create IDocs), while the SAP-facing response stays clean and predictable: SAP receives the final analysis for the validated dataset, or an error if the run can't complete. This keeps the SAP contract stable even as validation rules and reporting details evolve.

Step-by-step walkthrough

Phase 0 — Entry and routing

Trigger — When a message is received
The workflow starts when an inbound SAP message is delivered to the Logic Apps SAP built-in trigger.

Guardrail — Validate requested action (2 cases)
The workflow immediately checks whether the inbound request is the operation it expects (for example, the function/action name equals Z_GET_ORDERS_ANALYSIS).
- If the action does not match: the workflow sends an exception back to SAP describing the unexpected action and terminates early (fail fast).
- If the action matches: processing continues.

Phase 1 — Normalize input into a workflow-friendly payload

Prepare input — Create CSV payload
The workflow extracts CSV lines from the inbound (XML) SAP payload and normalizes them into a consistent CSV text payload that downstream steps can process reliably.

Initialize validation state — Initialize array of invalid order ids
The workflow creates an empty array variable to capture order IDs that fail validation. This becomes the "validation output channel" used later for reporting, filtering, and optional remediation.

Phase 2 — Validate the dataset (AI agent loop)

Validate — Data Validation Agent (3 cases)
This stage performs rule-based validation using an agent pattern (backed by Azure OpenAI).
Conceptually, it does three things (as shown in the diagram's expanded block):

- Get validation rules: retrieves business rules from a SharePoint-hosted validation document.
- Get CSV payload: loads the normalized CSV created earlier.
- Summarize CSV payload review: evaluates the CSV against the rules and produces structured validation outputs.

Outputs produced by validation:

- A list of invalid order IDs
- The corresponding invalid CSV rows
- A human-readable validation summary

Note: The detailed AI prompt/agent mechanics are covered in Part 2. In Part 1, the focus is on the integration flow and how data moves.

Phase 3 — Operational handling of invalid records (email + optional SAP remediation)

After validation, the workflow treats invalid records as an operational concern: they are reported to humans and can optionally be routed into an SAP remediation path. This is shown in the right-hand "Create IDocs" block.

Notify — Send verification summary
The workflow sends an email report (Office 365) to configured recipients containing:
- the validation summary
- the invalid order IDs
- the invalid CSV payload (or the subset of invalid rows)

Transform — Transform CSV to XML
The workflow converts the invalid CSV lines into an XML format that is suitable for SAP processing.

Optional remediation — [RFC] Create all IDocs (conditional)
If the workflow parameter (for example, CreateIDocs) is enabled, the workflow calls an SAP RFC (e.g., Z_CREATE_ONLINEORDER_IDOC) to create IDocs from the transformed invalid data.

Why this matters: Validation results are made visible (email) and optionally actionable (IDocs), without polluting the primary analysis response that SAP receives.

Phase 4 — Analyze only the validated dataset (AI analysis)

The workflow runs AI analysis on the validated dataset, explicitly excluding invalid order IDs discovered during the validation phase. The analysis prompt instructs the model to produce outputs such as trends, predictions, and recommendations.

Note: The AI analysis prompt design and output shaping are covered in Part 2.

Phase 5 — Post-process the AI response and publish outputs

Package results — Process analysis results (Scope)
The workflow converts the AI response into a format suitable for email and for SAP consumption:
- Parse the OpenAI JSON response
- Extract the analysis content
- Convert markdown → HTML using custom JavaScript formatting

Outputs
- Email analysis: sends the formatted analysis to recipients.
- Respond to SAP server: returns only the analysis (and errors) to SAP.

Key design choice: SAP receives a clean, stable contract—analysis on success, structured error on failure. Validation details are handled out-of-band via email (and optionally via IDoc creation).

Note: the analysis email sent by the destination workflow is there for testing purposes, to verify that the HTML content matches what is sent back to the source workflow.

Useful snippets

Snippet 1 - Join each CSV line in the XML to make a CSV table:

join(
  xpath(
    xml(triggerBody()?['content']),
    '/*[local-name()=\"Z_GET_ORDERS_ANALYSIS\"]
      /*[local-name()=\"IT_CSV\"]
      /*[local-name()=\"ZTY_CSV_LINE\"]
      /*[local-name()=\"LINE\"]/text()'
  ),
  '\r\n')

Note: For the sake of simplicity, XPath is used here and throughout all places where XML is parsed. In the general case however, the Parse XML with schema action is the better and recommended way to strictly enforce the data contract between senders and receivers. More information about Parse XML with schema is provided in Appendix 1.
Snippet 2 - Format markdown to html (simplified):

const raw = workflowContext.actions.Extract_analysis.outputs;

// Basic HTML escaping for safety (keeps <code> blocks clean)
const escapeHtml = s => s.replace(/[&<>"]/g, c => ({'&':'&amp;','<':'&lt;','>':'&gt;','"':'&quot;'}[c]));

// Normalize line endings
let md = raw; // raw.replace(/\r\n/g, '\n').trim();

// Convert code blocks (``` ... ```)
md = md.replace(/```([\s\S]*?)```/g, (m, p1) => `<pre><code>${escapeHtml(p1)}</code></pre>`);

// Horizontal rules --- or ***
md = md.replace(/(?:^|\n)---+(?:\n|$)/g, '<hr/>');

// Headings ###### to #
for (let i = 6; i >= 1; i--) {
  const re = new RegExp(`(?:^|\\n)${'#'.repeat(i)}\\s+(.+?)\\s*(?=\\n|$)`, 'g');
  md = md.replace(re, (m, p1) => `<h${i}>${p1.trim()}</h${i}>`);
}

// Bold and italic
md = md.replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>');
md = md.replace(/\*([^*]+)\*/g, '<em>$1</em>');

// Unordered lists (lines starting with -, *, +)
md = md.replace(/(?:^|\n)([-*+]\s.+(?:\n[-*+]\s.+)*)/g, (m) => {
  const items = m.trim().split(/\n/).map(l => l.replace(/^[-*+]\s+/, '').trim());
  return '\n<ul>' + items.map(i => `<li>${i}</li>`).join('\n') + '</ul>';
});

// Paragraphs: wrap remaining text blocks in <p>...</p>
const blocks = md.split(/\n{2,}/).map(b => {
  if (/^<h\d>|^<ul>|^<pre>|^<hr\/>/.test(b.trim())) return b;
  return `<p>${b.replace(/\n/g, '<br/>')}</p>`;
});

const html = blocks.join('');

return { html };

5. Exception Handling

To illustrate exception handling, suppose that multiple workflows may listen to the same program ID (by design or unexpectedly) and could therefore receive messages that were meant for others. So the first thing the destination workflow does is validate that the function name is as expected, as shown below.

In this section I show three practical ways to surface workflow failures back to SAP using the Logic Apps action "Send exception to SAP server", and the corresponding ABAP patterns used to handle them. The core idea is the same in all three: Logic Apps raises an exception on the SAP side, SAP receives it as an RFC exception, and your ABAP wrapper converts that into something predictable (for example, a readable EXCEPTIONMSG, a populated RETURN, or both). The differences are in how much control you want over the exception identity and whether you want to leverage SAP message classes for consistent, localized messages.

5.1 Default exception

This first example shows the default behavior of Send exception to SAP server. When the action runs without a custom exception name configuration, the connector raises a pre-defined exception that can be handled explicitly in ABAP.

On the Logic Apps side, the action card "Send exception to SAP server" sends an Exception Error Message (for example, "Unexpected action in request: …"). On the ABAP side, the RFC call lists SENDEXCEPTIONTOSAPSERVER = 1 under EXCEPTIONS, and the code uses CASE sy-subrc to map that exception to a readable message.

The key takeaway is that you get a reliable "out-of-the-box" exception path: ABAP can treat sy-subrc = 1 as the workflow-raised failure and generate a consistent EXCEPTIONMSG. This is the simplest option and works well when you don't need multiple exception names—just one clear "workflow failed" signal.

5.2 Message exception

If you want more control than the default, you can configure the action to raise a named exception declared in your ABAP function module interface. This makes it easier to route different failure types without parsing free-form text.
The picture shows Advanced parameters under the Logic Apps action, including “Exception Name” with helper text indicating it must match an exception declared in the ABAP function module definition. This option is useful when you want to distinguish workflow error categories (e.g., validation vs. routing vs. downstream failures) using exception identity, not just message text. The contract stays explicit: Logic Apps raises a named exception, and ABAP can branch on that name (or on sy-subrc mapping) with minimal ambiguity. 5.3 Message class exception The third approach uses SAP’s built-in message class mechanism so that the exception raised by the workflow can map cleanly into SAP’s message catalog ( T100 ). This is helpful when you want consistent formatting and localization aligned with standard SAP patterns. On the Logic Apps side, the action shows advanced fields including Message Class, Message Number, and an Is ABAP Message toggle, with helper text stating the message class can come from message maintenance ( SE91 ) or be custom. On the ABAP side, the code highlights an error-handling block that calls using sy-msgid , sy-msgno , and variables sy-msgv1 … sy-msgv4 , then stores the resulting text in EXCEPTIONMSG . This pattern is ideal when you want workflow exceptions to look and behave like “native” SAP messages. Instead of hard-coding strings, you rely on the message catalog and let ABAP produce a consistent final message via FORMAT_MESSAGE . The result is easier to standardize across teams and environments—especially if you already manage message classes as part of your SAP development process. Refer to Appendix 2 for further information on FORMAT_MESSAGE . 5.4 Choosing an exception strategy that SAP can act on Across these examples, the goal is consistent: treat workflow failures as first‑class outcomes in SAP, not as connector noise buried in run history. The Logic Apps action Send exception to SAP server gives you three increasingly structured ways to do that, and the “right” choice depends on how much semantics you want SAP to understand. Default exception (lowest ceremony): Use this when you just need a reliable “workflow failed” signal. The connector raises a pre-defined exception name (for example, SENDEXCEPTIONTOSAPSERVER ), and ABAP can handle it with a simple EXCEPTIONS … = 1 mapping and a sy-subrc check. This is the fastest way to make failures visible and deterministic. Named exception(s) (more routing control): Use this when you want SAP to distinguish failure types without parsing message text. By raising an exception name declared in the ABAP function module interface, you can branch cleanly in ABAP (or map to different return handling) and keep the contract explicit and maintainable. Message class + number (most SAP-native): Use this when you want errors to look and behave like standard SAP messages—consistent wording, centralized maintenance, and better alignment with SAP operational practices. In this mode, ABAP can render the final localized string using FORMAT_MESSAGE and return it as EXCEPTIONMSG (and optionally BAPIRET2 - MESSAGE ), which makes the failure both human-friendly and SAP-friendly. A practical rule of thumb: start with the default exception while you stabilize the integration, move to named exceptions when you need clearer routing semantics, and adopt message classes when you want SAP-native error governance (standardization, maintainability, and localization). 
Regardless of the option, the key is to end with a predictable SAP-side contract: a clear success path, and a failure path that produces a structured return and a readable message. 6. Response Handling This section shows how the destination workflow returns either a successful analysis response or a workflow exception back to SAP, and how the source (caller) workflow interprets the RFC response structure to produce a single, human‑readable outcome (an email body). The key idea is to keep the SAP-facing contract stable: SAP always returns a Z_GET_ORDERS_ANALYSISResponse envelope, and the caller workflow decides between success and error using just two fields: EXCEPTIONMSG and RETURN / MESSAGE . To summarize the steps: Destination workflow either: sends a normal response via Respond to SAP server, or raises an exception via Send exception to SAP server (with an error message). SAP server exposes those outcomes through the RFC wrapper: sy-subrc = 0 → success ( EXCEPTIONMSG = 'ok') sy-subrc = 1 → workflow exception ( SENDEXCEPTIONTOSAPSERVER ) sy-subrc = 2/3 → system/communication failures Source workflow calls the RFC, extracts: EXCEPTIONMSG RETURN / MESSAGE and uses an Has errors gate to choose between a success email body (analysis) or a failure email body (error summary). The figure below shows the full return path for results and failures. On the right, the destination workflow either responds normally (Respond to SAP server) or raises a workflow exception (Send exception to SAP server). SAP then maps that into the RFC outcome ( sy-subrc and message fields). On the left, the source workflow parses the RFC response structure and populates a single EmailBody variable using two cases: failure (error details) or success (analysis text). Figure: Response/exception flow Two things make this pattern easy to operationalize. First, the caller workflow does not need to understand every SAP field—only EXCEPTIONMSG and RETURN / MESSAGE are required to decide success vs failure. Second, the failure path intentionally aggregates details ( MESSAGE_V1 … MESSAGE_V4 plus the exception text) into a single readable string so errors don’t get trapped in run history. Callout: The caller workflow deliberately treats EXCEPTIONMSG != "ok" or RETURN / MESSAGE present as the single source of truth for failure, which keeps the decision logic stable even if the response schema grows. Detailed description Phase 1 — Destination workflow: choose “response” vs “exception” Respond to SAP server returns the normal response payload back to SAP. Send exception to SAP server raises a workflow failure with an Exception Error Message (the screenshot shows an example beginning with “Unexpected action in request:” and a token for Function Name). Outcome: SAP receives either a normal response or a raised exception for the RFC call. Phase 2 — SAP server: map workflow outcomes to RFC results The SAP-side wrapper code shown in the figure calls: CALL FUNCTION ' Z_GET_ORDERS_ANALYSIS ' DESTINATION DEST ... It declares exception mappings including: SENDEXCEPTIONTOSAPSERVER = 1 system_failure = 2 MESSAGE EXCEPTIONMSG communication_failure = 3 MESSAGE EXCEPTIONMSG OTHERS = 4 Then it uses CASE sy-subrc . to normalize outcomes (the figure shows WHEN 0. setting EXCEPTIONMSG = 'ok'., and WHEN 1. building a readable message for the workflow exception). Outcome: regardless of why it failed, SAP can provide a consistent set of fields back to the caller: a return structure and an exception/status message. 
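Phase 3 below walks through the caller-side steps in detail. As a preview, here is a minimal sketch of what those expressions can look like; the action, variable, and XPath names are assumptions made for illustration, not the exact ones from the workflow shown in the figures:

Save EXCEPTION message (Set variable):
xpath(xml(body('RFC_Call_Z_GET_ORDERS_ANALYSIS')), 'string(//*[local-name()="EXCEPTIONMSG"])')

Save RETURN message (Set variable):
xpath(xml(body('RFC_Call_Z_GET_ORDERS_ANALYSIS')), 'string(//*[local-name()="RETURN"]/*[local-name()="MESSAGE"])')

Has errors (condition expression):
@or(not(equals(variables('ExceptionMsg'), 'ok')), not(empty(variables('ReturnMessage'))))

Passing a string(...) XPath to the xpath() function returns a scalar value, which keeps the Set variable steps simple. Depending on how the SAP connector surfaces the RFC response, you may need to reach into a specific property of the action output (similar to the triggerBody()?['content'] pattern used elsewhere in this post) before applying xml().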
Phase 3 — Source workflow: parse response and build one “email body” After the RFC action ([RFC] Call Z GET ORDERS ANALYSIS ) the source workflow performs: Save EXCEPTION message Extracts EXCEPTIONMSG from the response XML using XPath. Save RETURN message Extracts RETURN / MESSAGE from the response XML using XPath. Initialize email body Creates EmailBody once, then sets it in exactly one of two cases. Has errors (two cases) The condition treats the run as “error” if either: EXCEPTIONMSG is not equal to "ok", or RETURN / MESSAGE is not empty. Set email body (failure) / Set email body (success) Failure: builds a consolidated string containing RETURN / MESSAGE , message details ( MESSAGE_V1 ..V4), and EXCEPTIONMSG . Success: sets EmailBody to the ANALYSIS field extracted from the response. Outcome: the caller produces a single artifact (EmailBody) that is readable and actionable, without requiring anyone to inspect the raw RFC response. 7. Destination Workflow #2: Persisting failed rows as custom IDocs In this section I zoom in on the optional “IDoc persistence” branch at the end of the destination workflow. After the workflow identifies invalid rows (via the Data Validation Agent) and emails a verification summary, it can optionally call a second SAP RFC to save the failed rows as IDocs for later processing. This is mainly included to showcase another common SAP integration scenario—creating/handling IDocs—and to highlight that you can combine “AI-driven validation” with traditional enterprise workflows. The deeper motivation for invoking this as part of the agent tooling is covered in Part 2; here, the goal is to show the connector pattern and the custom RFC used to create IDocs from CSV input. The figure below shows the destination workflow at two levels: a high-level overview at the top, and a zoomed view of the post-validation remediation steps at the bottom. The zoom starts from Data Validation Agent → Summarize CSV payload review and then expands the sequence that runs after Send verification summary: Transform CSV to XML followed by an SAP RFC call that creates IDocs from the failed data. The key point is that this branch is not the main “analysis response” path. It’s a practical remediation option: once invalid rows are identified and reported, the workflow can persist them into SAP using a dedicated RFC ( Z_CREATE_ONLINEORDER_IDOC ) and a simple IT_CSV payload. This keeps the end-to-end flow modular: analysis can remain focused on validated data, while failed records can be routed to SAP for follow-up processing on their own timeline. Callout: This branch exists to showcase an IDoc-oriented connector scenario. The “why this is invoked from the agent tooling” context is covered in Part 2; here the focus is the mechanics of calling Z_CREATE_ONLINEORDER_IDOC with IT_CSV and receiving ET_RETURN / ET_DOCNUMS . The screenshot shows an XML body with the RFC root element and an SAP namespace: <z_create_onlineorder_idoc xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/"> <iv_direction>...</iv_direction> <iv_sndptr>...</iv_sndptr> <iv_sndprn>...</iv_sndprn> <iv_rcvptr>...</iv_rcvptr> <iv_rcvprn>...</iv_rcvprn> <it_csv> @{ ...Outputs... } </it_csv> <et_return></et_return> <et_docnums></et_docnums> </z_create_onlineorder_idoc> What to notice: the workflow passes invalid CSV rows in IT_CSV , and SAP returns a status table ( ET_RETURN ) and created document numbers ( ET_DOCNUMS ) for traceability. 
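As a sketch of how the it_csv element can be populated (an assumption for illustration, not necessarily how the workflow in the screenshot builds it): a Select action, here called Select_invalid_rows, maps each invalid CSV row into the line structure expected by the RFC, and a join() stitches the results together inside the XML body. The ZTY_CSV_LINE / LINE element names come from the table-of-lines structure described in this post; the variable and action names are hypothetical.

Select_invalid_rows (Select action, text map):
From: @variables('InvalidCsvRows')
Map: <ZTY_CSV_LINE><LINE>@{item()}</LINE></ZTY_CSV_LINE>

Inside the it_csv element of the RFC request body:
<it_csv>@{join(body('Select_invalid_rows'), '')}</it_csv>

If the CSV rows can contain characters that are special in XML (&, <, >), they would need to be escaped before being embedded this way.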
The payload includes standard-looking control fields ( IV_DIRECTION , IV_SNDPTR , IV_SNDPRN , IV_RCVPTR , IV_RCVPRN ) and the actual failed-row payload as IT_CSV . IT_CSV is populated via a Logic Apps expression (shown as @{ ...Outputs... } in the screenshot), which is the bridge between the prior transform step and the RFC call. The response side indicates table-like outputs: ET_RETURN and ET_DOCNUMS . 7.1 From CSV to IDocs I’ll cover the details of Destination workflow #2 in Part 2. In this post (Part 1), I focus on the contract and the end-to-end mechanics: what the RFC expects, what it returns, and how the created IDocs show up in the receiving workflow. Before looking at the RFC itself, it helps to understand the payload we’re building inside the IDoc. The screenshot below shows the custom segment definition used by the custom IDoc type. This segment is intentionally shaped to mirror the columns of the CSV input so the mapping stays direct and easy to reason about. Figure: Custom segment ZONLINEORDER000 (segment type ZONLINEORDER ) This segment definition is the contract anchor: it makes the CSV-to-IDoc mapping explicit and stable. Each CSV record becomes one segment instance with the same 14 business fields. That keeps the integration “boringly predictable,” which is exactly what you want when you’re persisting rejected records for later processing. The figure below shows the full loop for persisting failed rows as IDocs. The source workflow calls the custom RFC and sends the invalid CSV rows as XML. SAP converts each row into the custom segment and creates outbound IDocs. Those outbound IDocs are then received by Destination workflow #2, which processes them asynchronously (one workflow instance per IDoc) and appends results into shared storage for reporting. This pattern deliberately separates concerns: the first destination workflow identifies invalid rows and decides whether to persist them, SAP encapsulates the mechanics of IDoc creation behind a stable RFC interface, and a second destination workflow processes those IDocs asynchronously (one per IDoc), which is closer to how IDoc-driven integrations typically operate in production. Destination workflow #2 is included here to show the end-to-end contract and the “receipt” side of the connector scenario: Triggered by the SAP built-in trigger and checks FunctionName = IDOC_INBOUND_ASYNCHRONOUS extracts DOCNUM from the IDoc control record ( EDI_DC40 / DOCNUM ) reconstructs a CSV payload from the IDoc data segment (the fields shown match the segment definition) appends a “verification info” line to shared storage for reporting The implementation details of that workflow (including why it is invoked from the agent tooling) are covered in Part 2. 7.2 Z_CREATE_ONLINEORDER_IDOC - Contract overview The full source code for Z_CREATE_ONLINEORDER_IDOC is included in the supporting material. It’s too long to reproduce inline, so this post focuses on the contract—the part you need to call the RFC correctly and interpret its results. A quick note on authorship: most of the implementation was generated with Copilot, with manual review and fixes to resolve build errors and align the behavior with the intended integration pattern. The contract is deliberately generic because the goal was to produce an RFC that’s reusable across more than one scenario, rather than tightly coupled to a single workflow. 
At a high level, the RFC is designed to support: Both inbound and outbound IDoc creation It can either write IDocs to the SAP database (inbound-style persistence) or create/distribute IDocs outbound. Multiple IDoc/message/segment combinations IDoc type ( IDOCTYP ), message type ( MESTYP ), and segment type ( SEGTP ) are configurable so the same RFC can be reused. Explicit partner/port routing control Optional sender/receiver partner/port fields can be supplied when routing matters. Traceability of created artifacts The RFC returns created IDoc numbers so the caller can correlate “these failed rows” to “these IDocs.” Contract: Inputs (import parameters) IV_DIRECTION (default: 'O') — 'I' for inbound write-to-db, 'O' for outbound distribute/dispatch IV_IDOCTYP (default: ZONLINEORDERIDOC ) IV_MESTYP (default: ZONLINEORDER ) IV_SEGTP (default: ZONLINEORDER ) Optional partner/port routing fields: IV_SNDPRT , IV_SNDPRN , IV_RCVPRT , IV_RCVPRN , IV_RCVPOR Tables IT_CSV (structure ZTY_CSV_LINE ) — each row is one CSV line (the “table-of-lines” pattern) ET_RETURN (structure BAPIRET2 ) — success/warning/error messages (per-row and/or aggregate) ET_DOCNUMS (type ZTY_DOCNUM_TT ) — list of created IDoc numbers for correlation/traceability Outputs EV_DOCNUM — a convenience “primary / last created” DOCNUM value returned by the RFC 8. Concluding Remarks Part 1 established a stable SAP ↔ Logic Apps integration baseline: CSV moves end‑to‑end using explicit contracts, and failures are surfaced predictably. The source workflow reads CSV from Blob, wraps rows into the IT_CSV table‑of‑lines payload, calls Z_GET_ORDERS_ANALYSIS , and builds one outcome using two fields from the RFC response: EXCEPTIONMSG and RETURN / MESSAGE . The destination workflow gates requests, validates input, and returns only analysis (or errors) back to SAP while handling invalid rows operationally (notification + optional persistence). On the error path, we covered three concrete patterns to raise workflow failures back into SAP: the default connector exception ( SENDEXCEPTIONTOSAPSERVER ), named exceptions (explicit ABAP contract), and message‑class‑based errors (SAP‑native formatting via FORMAT_MESSAGE ). On the remediation side, we added a realistic enterprise pattern: persist rejected rows as custom IDocs via Z_CREATE_ONLINEORDER_IDOC ( IT_CSV in, ET_RETURN + ET_DOCNUMS out), using the custom segment ZONLINEORDER000 as the schema anchor and enabling downstream receipt in Destination workflow #2 (one run per IDoc, correlated via DOCNUM ). Part 2 is separate because it tackles a different problem: the AI layer. With contracts and error semantics now fixed, Part 2 can focus on the agent/tooling details that tend to iterate—rule retrieval, structured validation outputs, prompt constraints, token/history controls, and how the analysis output is generated and shaped—without muddying the transport story. Appendix 1: Parse XML with schema In this section I consider the CSV payload creation as an example, but parsing XML with schema applies in every place where we get an XML input to process, such as when receiving SAP responses, exceptions, or request/responses from other RFCs. 
Strong contract The Create_CSV_payload step in the shown implementation uses an xpath() + join() expression to extract LINE values from the incoming XML: join( xpath( xml(triggerBody()?['content']), '/*[local-name()="Z_GET_ORDERS_ANALYSIS"] /*[local-name()="IT_CSV"] /*[local-name()="ZTY_CSV_LINE"] /*[local-name()="LINE"]/text()' ), '\r\n' ) That approach works, but it’s essentially a “weak contract”: it assumes the message shape stays stable and that your XPath continues to match. By contrast, the Parse XML with schema action turns the XML payload into structured data based on an XSD, which gives you a “strong contract” and enables downstream steps to bind to known fields instead of re-parsing XML strings. The figure below compares two equivalent ways to build the CSV payload from the RFC input. On the left is the direct xpath() compose (labeled “weak contract”). On the right is the schema-based approach (labeled “strong contract”), where the workflow parses the request first and then builds the CSV payload by iterating over typed rows. What’s visible in the diagram is the key tradeoff: XPath compose path (left): the workflow creates the CSV payload directly using join(xpath(...), '\r\n') , with the XPath written using local-name() selectors. This is fast to prototype, but the contract is implicit—your workflow “trusts” the XML shape and your XPath accuracy. Parse XML with schema path (right): the workflow inserts a Parse XML with schema step (“ Parse Z GET ORDERS ANALYSIS request ”), initializes variables, loops For each CSV row, and Appends to CSV payload, then performs join(variables('CSVPayload'), '\r\n') . Here, the contract is explicit—your XSD defines what IT_CSV and LINE mean, and downstream steps bind to those fields rather than re-parsing XML. A good rule of thumb is: XPath is great for lightweight extraction, while Parse XML with schema is better when you want contract enforcement and long-term maintainability, especially in enterprise integration / BizTalk migration scenarios where schemas are already part of the integration culture. Implementation details The next figure shows the concrete configuration for Parse XML with schema and how its outputs flow into the “For each CSV row” loop. This is the “strong contract” version of the earlier XPath compose. This screenshot highlights three practical implementation details: The Parse action is schema-backed. In the Parameters pane, the action uses: Content: the incoming XML Response Schema source: LogicApp Schema name: Z_GET_ORDERS_ANALYSIS The code view snippet shows the same idea: type: "XmlParse" with content: " @triggerBody()?['content'] " and schema: { source: "LogicApp", name: "Z_GET_ORDERS_ANALYSIS.xsd" }. The parsed output becomes typed “dynamic content.” The loop input is shown as “ JSON Schema for element 'Z_GET_ORDERS_ANALYSIS: IT_CSV' ”. This is the key benefit: you are no longer scraping strings—you are iterating over a structured collection that was produced by schema-based parsing. The LINE extraction becomes trivial and readable. The “Append to CSV payload” step appends @item()?['LINE'] to the CSVpayload variable (as shown in the code snippet). Then the final Create CSV payload becomes a simple join(variables('CSVPayload'), '\r\n') . This is exactly the kind of “workflow readability” benefit you get once XML parsing is schema-backed. Schema generation The Parse action requires XSD schemas, which can be stored in the Logic App (or via a linked Integration Account). 
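Before moving on to schema generation, it may help to see the pieces above consolidated in code view. The sketch below follows the action names from the figure and the fragments quoted above; the exact path into the parsed output (the foreach expression) is an assumption, and in practice it is easiest to pick from the designer's dynamic content once the parse action is in place:

"Parse_Z_GET_ORDERS_ANALYSIS_request": {
  "type": "XmlParse",
  "inputs": {
    "content": "@triggerBody()?['content']",
    "schema": { "source": "LogicApp", "name": "Z_GET_ORDERS_ANALYSIS.xsd" }
  }
},
"For_each_CSV_row": {
  "type": "Foreach",
  "foreach": "@body('Parse_Z_GET_ORDERS_ANALYSIS_request')?['Z_GET_ORDERS_ANALYSIS']?['IT_CSV']?['ZTY_CSV_LINE']",
  "actions": {
    "Append_to_CSV_payload": {
      "type": "AppendToArrayVariable",
      "inputs": { "name": "CSVPayload", "value": "@item()?['LINE']" }
    }
  }
},
"Create_CSV_payload": {
  "type": "Compose",
  "inputs": "@join(variables('CSVPayload'), '\r\n')"
}

The sketch is abbreviated: the runAfter dependencies and the Initialize variable action that creates CSVPayload (an array variable) are omitted for brevity.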
The final figure shows a few practical ways to obtain and manage those XSDs: Generate Schema (SAP connector): a “Generate Schema” action with Operation Type = RFC and an RFC Name field, which is a practical way to bootstrap schema artifacts when you already know the RFC you’re calling. Run Diagnostics / Fetch RFC Metadata: a “Run Diagnostics” action showing Operation type = Fetch RFC Metadata and RFC Name, which is useful to confirm the shape of the RFC interface and reconcile it with your XSD/contract. If you don’t want to rely solely on connector-side schema generation, there are also classic “developer tools” approaches: Infer XSD from a sample XML using .NET’s XmlSchemaInference (good for quick starting points). Generate XSD from an XML instance using xsd.exe (handy when you already have representative sample payloads) or by asking your favorite AI prompt. When to choose XPath vs Parse XML with schema (practical guidance) Generally speaking, choose XPath when… You need a quick extraction and you’re comfortable maintaining a single XPath. You don’t want to manage schema artifacts yet (early prototypes). Choose Parse XML with schema when… You want a stronger, explicit contract (XSD defines what the payload is). You want the designer to expose structured outputs (“JSON Schema for element …”) so downstream steps are readable and less brittle. You expect the message shape to evolve over time and prefer schema-driven changes over XPath surgery. Appendix 2: Using FORMAT_MESSAGE to produce SAP‑native error text When propagating failures from Logic Apps back into SAP (for example via Send exception to SAP server), I want the SAP side to produce a predictable, human‑readable message without forcing callers to parse connector‑specific payloads. ABAP’s FORMAT_MESSAGE is ideal for this because it converts SAP’s message context—message class, message number, and up to four variables—into the final message text that SAP would normally display, but without raising a UI message. What FORMAT_MESSAGE does FORMAT_MESSAGE formats a message defined in SAP’s message catalog ( T100 / maintained via SE91 ) using the values in sy-msgid , sy-msgno , and sy-msgv1 … sy-msgv4 . Conceptually, it answers the question: “Given message class + number + variables, what is the rendered message string?” This is particularly useful after an RFC call fails, where ABAP may have message context available even if the exception itself is not a clean string. Why this matters in an RFC wrapper In the message class–based exception configuration, the workflow can provide message metadata (class/number/type) so that SAP can behave “natively”: ABAP receives a failure ( sy-subrc <> 0), formats the message using FORMAT_MESSAGE , and returns the final text in a field like EXCEPTIONMSG (and/or in BAPIRET2 - MESSAGE ). The result is: consistent wording across systems and environments easier localization (SAP selects language-dependent text) separation of concerns: code supplies variables; message content lives in message maintenance A robust pattern After the RFC call, I use this order of precedence: Use any explicit text already provided (for example via system_failure … MESSAGE exceptionmsg), because it’s already formatted. If that’s empty but SAP message context exists ( sy-msgid / sy-msgno ), call FORMAT_MESSAGE to produce the final string. If neither is available, fall back to a generic message that includes sy-subrc . Here is a compact version of that pattern: DATA: lv_text TYPE string. 
CALL FUNCTION 'Z_GET_ORDERS_ANALYSIS'
  DESTINATION dest
  IMPORTING
    analysis = analysis
  TABLES
    it_csv = it_csv
  CHANGING
    return = return
  EXCEPTIONS
    sendexceptiontosapserver = 1
    system_failure           = 2 MESSAGE exceptionmsg
    communication_failure    = 3 MESSAGE exceptionmsg
    OTHERS                   = 4.

IF sy-subrc <> 0.
  "Prefer explicit message text if it already exists
  IF exceptionmsg IS INITIAL.
    "Otherwise format SAP message context into a string
    IF sy-msgid IS NOT INITIAL AND sy-msgno IS NOT INITIAL.
      CALL FUNCTION 'FORMAT_MESSAGE'
        EXPORTING
          id = sy-msgid
          no = sy-msgno
          v1 = sy-msgv1
          v2 = sy-msgv2
          v3 = sy-msgv3
          v4 = sy-msgv4
        IMPORTING
          msg = lv_text.
      exceptionmsg = lv_text.
    ELSE.
      exceptionmsg = |RFC failed (sy-subrc={ sy-subrc }).|.
    ENDIF.
  ENDIF.

  "Optionally normalize into BAPIRET2 for structured consumption
  return-type = 'E'.
  return-message = exceptionmsg.
ENDIF.

Common gotchas

FORMAT_MESSAGE only helps if sy-msgid and sy-msgno are set. If the failure did not originate from an SAP message (or message mapping is disabled), these fields may be empty—so keep a fallback.
Message numbers are typically 3-digit strings (e.g., 001, 012), matching how messages are stored in the catalog.
FORMAT_MESSAGE formats text; it does not raise or display a message. That makes it safe to use in RFC wrappers and background processing.

Bottom line: FORMAT_MESSAGE is a simple tool that helps workflow‑originated failures “land” in SAP as clean, SAP‑native messages—especially when using message classes to standardize and localize error text.

References

Logic Apps Agentic Workflows with SAP - Part 2: AI Agents
Handling Errors in SAP BAPI Transactions | Microsoft Community Hub
Access SAP from workflows | Microsoft Learn
Create common SAP workflows | Microsoft Learn
Generate Schemas for SAP Artifacts via Workflows | Microsoft Learn
Parse XML using Schemas in Standard workflows - Azure Logic Apps | Microsoft Learn
Announcing XML Parse and Compose for Azure Logic Apps GA
Exception Handling | ABAP Keyword Documentation
Handling and Propagating Exceptions - ABAP Keyword Documentation
SAP .NET Connector 3.1 Overview
SAP .NET Connector 3.1 Programming Guide

All supporting content for this post may be found in the companion GitHub repository.

Microsoft BizTalk Server Product Lifecycle Update
For more than 25 years, Microsoft BizTalk Server has supported mission-critical integration workloads for organizations around the world. From business process automation and B2B messaging to connectivity across industries such as financial services, healthcare, manufacturing, and government, BizTalk Server has played a foundational role in enterprise integration strategies. To help customers plan confidently for the future, Microsoft is sharing an update to the BizTalk Server product lifecycle and long-term support timelines. BizTalk Server 2020 will be the final version of BizTalk Server. Guidance to support long-term planning for mission-critical workloads This announcement does not change existing support commitments. Customers can continue to rely on BizTalk Server for many years ahead, with a clear and predictable runway to plan modernization at a pace that aligns with their business and regulatory needs. Lifecycle Phase End Date What’s Included Mainstream Support April 11, 2028 Security + non-security updates and Customer Service & Support (CSS) support Extended Support April 9, 2030 CSS support, Security updates, and paid support for fixes (*) End of Support April 10, 2030 No further updates or support (*) Paid Extended Support will be available for BizTalk Server 2020 between April 2028 and April 2030 for customers requiring hotfixes for non-security updates. CSS will continue providing their typical support. BizTalk Server 2016 is already out of mainstream support, and we recommend those customers evaluate a direct modernization path to Azure Logic Apps. Continued Commitment to Enterprise Integration Microsoft remains fully committed to supporting mission-critical integration, including hybrid connectivity, future-ready orchestration, and B2B/EDI modernization. Azure Logic Apps, part of Azure Integration Services — which includes API Management, Service Bus, and Event Grid — delivers the comprehensive integration platform for the next decade of enterprise connectivity. Host Integration Server: Continued Support for Mainframe Workloads Host Integration Server (HIS) has long provided essential connectivity for organizations with mainframe and midrange systems. To ensure continued support for those workloads, Host Integration Server 2028 will ship as a standalone product with its own lifecycle, decoupled from BizTalk Server. This provides customers with more flexibility and a longer planning horizon. Recognizing Mainframe modernization customers might be looking to integrate with their mainframes from Azure, Microsoft provides Logic Apps connectors for mainframe and midrange systems, and we are keen on adding more connectors in this space. Let us know about your HIS plans, and if you require specific features for Mainframe and midranges integration from Logic Apps at: https://aka.ms/lamainframe Azure Logic Apps: The Successor to BizTalk Server Azure Logic Apps, part of Azure Integration Services, is the modern integration platform that carries forward what customers value in BizTalk while unlocking new innovation, scale, and intelligence. With 1,400+ out-of-box connectors supporting enterprise, SaaS, legacy, and mainframe systems, organizations can reuse existing BizTalk maps, schemas, rules, and custom code to accelerate modernization while preserving prior investments including B2B/EDI and healthcare transactions. Logic Apps delivers elastic scalability, enterprise-grade security and compliance, and built-in cost efficiency without the overhead of managing infrastructure. 
Modern DevOps tooling, Visual Studio Code support, and infrastructure-as-code (ARM/Bicep) ensure consistent, governed deployments with end-to-end observability using Azure Monitor and OpenTelemetry. Modernizing Logic Apps also unlocks agentic business processes, enabling AI-driven routing, predictive insights, and context-aware automation without redesigning existing integrations. Logic Apps adapts to business and regulatory needs, running fully managed in Azure, hybrid via Arc-enabled Kubernetes, or evaluated for air-gapped environments. Throughout this lifecycle transition, customers can continue to rely on the BizTalk investments they have made while moving toward a platform ready for the next decade of integration and AI-driven business. Charting Your Modernization Path Microsoft remains fully committed to supporting customers through this transition. We recognize that BizTalk systems support highly customized and mission-critical business operations. Modernization requires time, planning, and precision. We hope to provide: Proven guidance and recommended design patterns A growing ecosystem of tooling supporting artifact reuse Unified Support engagements for deep migration assistance A strong partner ecosystem specializing in BizTalk modernization Potential incentive programs to help facilitate migration for eligible customers (details forthcoming) Customers can take a phased approach — starting with new workloads while incrementally modernizing existing BizTalk deployments. We’re Here to Help Migration resources are available today: Overview: https://aka.ms/btmig Best practices: https://aka.ms/BizTalkServerMigrationResources Video series: https://aka.ms/btmigvideo Feature request survey: https://aka.ms/logicappsneeds Reactor session: Modernizing BizTalk: Accelerate Migration with Logic Apps - YouTube Migration tool: A BizTalk Migration Tool: From Orchestrations to Logic Apps Workflows | Microsoft Community Hub We encourage customers to engage their Microsoft accounts team early to assess readiness, identify modernization opportunities, and explore assistance programs. Your Modernization Journey Starts Now BizTalk Server has played a foundational role in enterprise integration success for more than two decades. As you plan ahead, Microsoft is here to partner with you every step of the way, ensuring operational continuity today while unlocking innovation tomorrow. To begin your transition, please contact your Microsoft account team or visit our migration hub. Thank you for your continued trust in Microsoft and BizTalk Server. We look forward to partnering closely with you as you plan the future of your integration platforms. Frequently Asked Questions Do I need to migrate now? No. BizTalk Server 2020 is fully supported through April 11, 2028, with paid Extended Support available through April 9, 2030, for non-security hotfixes. CSS will continue providing their typical support. You have a long and predictable runway to plan your transition. Will there be a new BizTalk Server version? No. BizTalk Server 2020 is the final version of the product. What happens after April 9, 2030? BizTalk Server will reach End of Support, and security updates or technical assistance will no longer be provided. Workloads will continue running but without Microsoft servicing. Is paid support available past 2028? Yes. Paid extended support will be available through April 2030 for BizTalk Server 2020 customers looking for non-security hotfixes. CSS will continue to provide the typical support. 
What about BizTalk Server 2016 or earlier versions?
Those versions are already out of mainstream support. We strongly encourage moving directly to Logic Apps rather than upgrading to BizTalk Server 2020.

Will Host Integration Server continue?
Yes. Host Integration Server (HIS) 2028 will be released as a standalone product with its own lifecycle and support commitments.

Can I reuse BizTalk Server artifacts in Logic Apps?
Yes. Most BizTalk maps, schemas, rules, assemblies, and custom code can be reused with minimal effort using Microsoft and partner migration tooling. We welcome feature requests here: https://aka.ms/logicappsneeds

Does modernization require moving fully to the cloud?
No. Logic Apps supports hybrid deployments for scenarios that require local processing or regulatory compliance, and fully disconnected environments are under evaluation. More information on the Hybrid deployment model is available here: https://aka.ms/lahybrid.

Does modernization unlock AI capabilities?
Yes. Logic Apps enables AI-driven automations through Agent Loop, improving routing, decisioning, and operational intelligence.

Where do I get planning support?
Your Microsoft account team can assist with assessment and planning. Migration resources are also linked in this announcement to help you get started.

Microsoft Corporation

A BizTalk Migration Tool: From Orchestrations to Logic Apps Workflows
As organizations move toward cloud-native architecture, this project addresses one of the most challenging aspects of modernization: converting existing BizTalk artifacts into their Azure Logic Apps equivalents while preserving business logic and integration patterns. Architecture and Components The BizTalk Migration Starter is available here: haroldcampos/BizTalkMigrationStarter and consists of four main projects and a test project, each targeting a specific aspect of BizTalk migration: BTMtoLMLMigrator - BizTalk Map Converter The BTMtoLMLMigrator is a tool that converts BizTalk Maps (.btm files) to the Logic Apps Mapping Language (.lml files). BizTalk maps define data transformations between different message formats, using XSLT and functoids to implement complex mapping logic. Input: Output: Key Capabilities: Parses BizTalk map XML structure to extract transformation logic. Translates BizTalk functoids (string manipulation, mathematical operations, logical operations, date/time functions, etc.) to equivalent LML syntax Preserves source and target schema references Generates Logic Apps-compatible Liquid maps that can be directly deployed to Azure Integration Accounts Core Components: BtmParser.cs: Extracts map structure, functoid definitions, and link connections from BTM files FunctoidTranslator.cs: Converts BizTalk functoid operations to Logic Apps Maps template equivalents LmlGenerator.cs: Generates the final LML output BtmMigrator.cs: Orchestrates the entire conversion process Models.cs: Defines data structures for representing maps, functoids, and links To convert a single map: BTMtoLMLMigrator.exe -btm "C:\BizTalkMaps\OrderToInvoice.btm" -source "C:\Schemas\Order.xsd" -target "C:\Schemas\Invoice.xsd" -output "C:\Output\OrderToInvoice.lml" To Convert all maps in a directory: Be mindful of having the right naming for schemas in the BizTalk maps to avoid the tool picking up the wrong schemas: BTMtoLMLMigrator.exe -batchDir "C:\BizTalkMaps" -schemasDir "C:\Schemas" -outputDir "C:\Output\LMLMaps" Recommendations and Troubleshooting: Make sure to use the real schemas, source and destination, and the corresponding map. Most BizTalk functoids are supported, however for those who don’t, like scripting, it will add the code into the lml file, expecting you to conduct a re-design of the scenario. Currently the Data Mapper does not have a direct function that replaces scripting functoids. We are exploring alternatives for this. Use GHCP agent to troubleshoot the tool if you run into issues. ODXtoWFMigrator - Orchestration to Workflow Converter The ODXtoWFMigrator tackles one of the most complex aspects of BizTalk migration: converting BizTalk Orchestrations (.odx files) to Azure Logic Apps workflow definitions. Orchestrations represent business processes with sequential, parallel, and conditional logic flows. It requires the orchestration and bindings file, exported from the BizTalk Application. If you don't have orchestration, for Content Routing Based scenarios, it uses the bindings file only. Go to BizTalk Central Admin. Select your BizTalk Application: Export all bindings. You can have multiple orchestrations in one binding file, so is important to export all the information available. Input: Output: Key Capabilities: Parses BizTalk orchestration XML to extract process flows, shapes, and connections Maps BizTalk orchestration shapes (Receive, Send, Decide, Parallel, Loop, etc.) 
to Logic Apps actions and control structures Generates connector configurations for common integration patterns Creates comprehensive migration reports documenting the conversion process and any limitations Produces standard Logic Apps JSON workflow definitions ready for deployment Core Components: BizTalkOrchestrationParser.cs: Analyzes orchestration structure and extracts workflow patterns LogicAppsMapper.cs: Maps BizTalk orchestration shapes to Logic Apps equivalents LogicAppJSONGenerator.cs: Generates the final Logic Apps workflow JSON OrchestrationReportGenerator.cs: Creates detailed migration reports. Schemas/Connectors/connector-registry.json: Registry of connector mappings and configurations Recommendations and Troubleshooting: Most BizTalk shapes are supported, however for those who don’t, it will default to compose actions and inject the code or a comment. It supports most BizTalk adapters. If you need to add support to a new Logic Apps connector/service provider, you can update the connector-registry.json file by adding the trigger or action following the pattern for the other entries. This tool has been tested with multiple patterns and orchestrations. Use GHCP agent to troubleshoot the tool if you run into issues. The following are some of the supported commands. Please run the command line and review the README files to see all supported commands. Command Sample Usage MIGRATE / CONVERT With output file ODXtoWFMigrator.exe convert "C:\BizTalk\InventorySync.odx" "C:\BizTalk\BindingInfo.xml" "C:\Output\InventorySync.workflow.json" With refactored generator ODXtoWFMigrator.exe migrate "C:\BizTalk\MessageRouter.odx" "C:\BizTalk\BindingInfo.xml" --refactor BINDINGS-ONLY Basic bindings-only ODXtoWFMigrator.exe bindings-only "C:\BizTalk\ProductionBindings.xml" With output directory ODXtoWFMigrator.exe bindings-only "C:\BizTalk\BindingInfo.xml" "C:\LogicApps\BindingsWorkflows" With refactored generator ODXtoWFMigrator.exe bindings-only "C:\BizTalk\BindingInfo.xml" --refactor REPORT / ANALYZE Basic HTML report ODXtoWFMigrator.exe report "C:\BizTalk\OrderProcessing.odx" With output file ODXtoWFMigrator.exe report "C:\BizTalk\OrderProcessing.odx" --output "C:\Reports\OrderProcessing_Analysis.html" BATCH REPORT Process directory ODXtoWFMigrator.exe batch report --directory "C:\BizTalk\Orchestrations" Short directory flag ODXtoWFMigrator.exe batch report -d "C:\BizTalk\ContosoProject\Orchestrations" BATCH CONVERT Basic batch convert ODXtoWFMigrator.exe batch convert --directory "C:\BizTalk\Orchestrations" --bindings "C:\BizTalk\BindingInfo.xml" Alternative migrate syntax ODXtoWFMigrator.exe batch migrate -d "C:\BizTalk\AllOrchestrations" -b "C:\BizTalk\BindingInfo.xml" Specific files ODXtoWFMigrator.exe batch convert --files "C:\BizTalk\Order.odx,C:\BizTalk\Invoice.odx" --bindings "C:\BizTalk\BindingInfo.xml" With output directory ODXtoWFMigrator.exe batch convert -d "C:\BizTalk\Orchestrations" -b "C:\BizTalk\BindingInfo.xml" -o "C:\LogicApps\Workflows" With refactored generator ODXtoWFMigrator.exe batch convert -d "C:\BizTalk\Orchestrations" -b "C:\BizTalk\BindingInfo.xml" --refactor GENERATE-PACKAGE Basic package generation ODXtoWFMigrator.exe generate-package "C:\BizTalk\OrderProcessing.odx" "C:\BizTalk\BindingInfo.xml" With output directory ODXtoWFMigrator.exe generate-package "C:\BizTalk\OrderProcessing.odx" "C:\BizTalk\BindingInfo.xml" "C:\Deploy\OrderProcessing" With refactored generator ODXtoWFMigrator.exe package "C:\BizTalk\CloudIntegration.odx" "C:\BizTalk\BindingInfo.xml" 
--refactor ANALYZE-ODX / GAP-ANALYSIS Analyze directory ODXtoWFMigrator.exe analyze-odx "C:\BizTalk\LegacyOrchestrations" With output report ODXtoWFMigrator.exe analyze-odx "C:\BizTalk\Orchestrations" --output "C:\Reports\gap_analysis.json" LEGACY MODE Legacy basic ODXtoWFMigrator.exe "C:\BizTalk\OrderProcessing.odx" "C:\BizTalk\BindingInfo.xml" "C:\Output\OrderProcessing.json" BTPtoLA - Pipeline to Logic Apps Converter BTPtoLA handles the conversion of BizTalk pipelines to Logic Apps components. BizTalk pipelines process messages as they enter or leave the messaging engine, performing operations like validation, decoding, and transformation. Key Capabilities: Converts BizTalk receive and send pipelines to Logic Apps processing patterns Maps pipeline components (decoders, validators, disassemblers, etc.) to Logic Apps actions Preserves pipeline stage configurations and component properties Generates appropriate connector configurations for pipeline operations Input: Output: Core Components: Pipeline parsing and analysis logic Connector registry (Schemas/Connectors/pipeline-connector-registry.json) for mapping pipeline components Logic Apps workflow generation for pipeline equivalents To convert a receive pipeline: BTPtoLA.exe -pipeline "C:\Pipelines\ReceiveOrderPipeline.btp" -type receive -output "C:\Output\ReceiveOrderPipeline.json" To Convert a send pipeline: BTPtoLA.exe -pipeline "C:\Pipelines\SendInvoicePipeline.btp" -type send -output "C:\Output\SendInvoicePipeline.json" BizTalktoLogicApps.MCP - Model Context Protocol Server The MCP (Model Context Protocol) server provides a standardized interface for AI-assisted migration workflows. This component enables integration with AI tools and assistants to provide intelligent migration suggestions and automation. Key Capabilities: Exposes migration tools through a standardized MCP interface Enables AI-driven migration assistance and recommendations Provides tool handlers for map conversion and other migration operations Facilitates interactive migration workflows with AI assistance Core Components: McpServer.cs: Main MCP server implementation Server/ToolHandlers/MapConversionToolHandler.cs: Handler for map conversion operations test-requests.json: Test request definitions for validating MCP functionality BizTalktoLogicApps.Tests - Test Project A complete test project ensuring the reliability and accuracy of all migration tools, with integration tests that validate end-to-end conversion scenarios. Key Capabilities: Integration tests for BTM to LML conversion across multiple map files Schema validation and error handling tests Batch processing tests for converting multiple artifacts Output verification and quality assurance Where to upload your data: Upload your BizTalk artifacts in the Data directory, and run your tests using the Test explorer. For a complete demonstration, watch the video below:306Views0likes0CommentsLogic Apps Aviators Newsletter - February 2026
In this issue: Ace Aviator of the Month News from our product group News from our community Ace Aviator of the Month February 2026’s Ace Aviator: Camilla Bielk What's your role and title? What are your responsibilities? I’m a developer and solution architect, working as a consultant (at XLENT) in teams involved across the integration lifecycle—from understanding business needs and building new solutions to operations. Previously I worked a lot with BizTalk and with customers using both Azure and BizTalk. In my current role, I work exclusively with Azure. Can you give us some insights into your day-to-day activities and what a typical day in your role looks like? A typical day starts by syncing operational status with the team to ensure everything is running as expected (Logic Apps, Service Bus, Functions, API Management, etc.) and discussing any findings from the previous day. Then I continue with ongoing work, ranging from stakeholder meetings and designing new integration flows to C# development, operations, and maintenance of the existing landscape. I enjoy being involved across the entire chain. Working closely with operational issues provides valuable input into design decisions, and vice versa, which strengthens me as a designer and architect. What motivates and inspires you to be an active member of the Aviators/Microsoft community? The openness to feedback and knowledge sharing from both community developers and the product team motivates me to stay active, continuously learn, and become the best consultant I can be. Passing knowledge to new team members is rewarding and completes the circle. Integration conferences are inspiring - networking with amazing people and discussing the technology we use every day. That’s also why we started the Nordic Integration Summit: to give more people the chance to take part in such an inspiring event. Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology? Your social skills are more valuable than you might think. Teamwork is everything, and understanding business needs—even from non-technical colleagues—is just as important as technology. Listen to your heart and find environments where you can thrive. Programming languages change over time; the real takeaway is mastering common concepts and patterns and developing the ability to learn continuously. Stay curious and embrace new things as they come! What has helped you grow professionally? 100% of my professional growth comes from working with kind, humble, and collaborative people—teams where knowledge is shared freely in all directions, mistakes are allowed, and learning from them is encouraged. If you had a magic wand that could create a feature in Logic Apps, what would it be and why? Better cost transparency—clearly showing which actions drive costs (executions, retries, data volume, logging) with visibility before production deployment. Design-time guidance that warns about expensive operations, anti-patterns, and inefficient configurations. News from our product group Introducing Unit Test Agent Profiles for Logic Apps & Data Maps Focused unit test agent profiles help Logic Apps Standard teams discover workflows and data maps, write reusable specifications, generate typed mocks/test data, and implement MSTest suites against the Automated Test SDK. The approach promotes spec-first testing, consistent artifacts, and reliable validations across scenarios (happy path, error handling, boundaries). 
The project demonstrates how GitHub Copilot custom agents and prompts can accelerate unit test authoring while enforcing constraints for maintainability in enterprise integrations. Automated Test Framework - Missing Tests in Test Explorer If Logic Apps Standard tests disappear from VS Code’s Test Explorer, the cause is typically a MSTest version mismatch introduced by a recent C# DevKit update. The fix is to update package references in the project (MSTest, Test SDK, and related dependencies) to supported versions. After adjusting packages and restarting VS Code, tests reappear and run as expected. The extension is being updated, but existing projects should apply the manual changes to restore a stable testing experience. Upcoming Agentic Azure Logic Apps Workshops Free workshops introduce Agentic Business Processes in Azure Logic Apps, including MCP Server integrations and agent loop patterns. Sessions cover connecting the enterprise with MCP Servers from Copilot Studio and building agentic workflows in a day using Logic Apps (Standard). Registration links are provided, with dates set in January. These events help practitioners explore agent tools, orchestration patterns, and practical scenarios for intelligent automation in modern integration landscapes. News from our community Enterprise AI ≠ Copilot Post by Al Ghoniem, MBA Deploying Copilot is not the same as building an enterprise AI strategy. This article distinguishes between adopting a product versus developing a core organizational capability. It explores why many AI rollouts stall when treated as point solutions and outlines what it takes to embed AI as a strategic, scalable function across the enterprise. For leaders evaluating their AI roadmaps, this piece offers a framework for thinking beyond tools toward sustainable transformation. BizTalk Server and WinSCP Error Post by Sandro Pereira A common SFTP adapter issue resurfaces when BizTalk Server cumulative updates are applied. This troubleshooting guide explains why the WinSCPnet assembly fails to load after upgrading from CU5 to CU6 and provides a version compatibility matrix for BizTalk Server 2020. It includes step-by-step instructions to resolve the error by copying the correct WinSCP binaries to the BizTalk installation folder, along with an automated PowerShell script for streamlined remediation. More on finding application registrations used by Logic Apps Post by Mikael Sand Building on earlier work around API connection discovery, this post extends KQL queries to find Logic Apps that authenticate via Client ID and secret in HTTP actions. Using Azure Resource Graph Explorer, the technique scans workflow parameters to identify application registration references. While the approach assumes parameters are used for credentials, it provides a practical method for auditing which Logic Apps depend on a given app registration across subscriptions. MCP Servers in Azure Logic Apps Agent Loops (Step-by-Step) Video by Stephen W. Thomas This walkthrough demonstrates how to move tool logic out of Logic App Agent loops and into reusable Model Context Protocol servers. The video covers setting up an MCP server, registering tools, and invoking them from within an agent workflow. By decoupling tool definitions from orchestration logic, developers can build modular, maintainable agentic systems that scale across multiple workflows and scenarios. 
From Rigid Choreography to Intelligent Collaboration: Agentic Orchestration as the Evolution of SOA Post by Steef-Jan Wiggers Integration has evolved from rigid, deterministic workflows to adaptive, goal-driven orchestrations powered by intelligent agents. This article traces the journey from BizTalk-era SOA composites to modern agentic patterns, explaining how AI-enabled orchestrators dynamically plan, self-correct, and communicate intent rather than follow fixed sequences. With examples from Azure Logic Apps Agent Loop, it shows how integration professionals can leverage existing connectors and APIs as tools within reasoning-based architectures. Azure Integrations That Actually Work in Production Post by Devarajan Gurusamy Architecture diagrams often look perfect in presentations but fail under real-world conditions. This article examines what it takes to build Azure integrations that survive production workloads, covering resilience patterns, error handling strategies, and operational considerations that separate proof-of-concepts from enterprise-grade solutions. It offers practical guidance for teams moving beyond demos toward reliable, maintainable integration implementations. Configuring BizTalk Server Backup Jobs for Azure Blob Storage Post by Sandro Pereira Modernizing BizTalk infrastructure can start with small, targeted improvements. This guide shows how to redirect native BizTalk backup jobs to Azure Blob Storage using SQL Server’s BACKUP TO URL feature. It covers generating SAS tokens, creating SQL credentials, updating job steps, and validating backups. The approach improves disaster recovery posture with geo-redundant cloud storage while keeping the BizTalk environment unchanged—no new components or custom code required. Building an AI Agent with Logic Apps Consumption and Agent Loop Post by Stephen W. Thomas Logic Apps Consumption with AgentLoop offers a fast path to practical AI agents, particularly for experimentation and low-volume workloads. This post walks through building a stateful poker-playing agent, demonstrating tool orchestration, agent instructions, and execution tracing. It compares Consumption and Standard tiers, explores cost efficiency, and shows how to offload reasoning to external models. At roughly a dollar for 25 runs, it’s an accessible entry point for agentic development. Microsoft Agent Framework Post by Al Ghoniem, MBA Microsoft’s open-source Agent Framework provides a comprehensive toolkit for building, orchestrating, and deploying AI agents in Python and .NET. It features graph-based workflows with streaming and checkpointing, multi-agent patterns, built-in observability via OpenTelemetry, and support for multiple LLM providers. The repository includes quickstart samples, migration guides from Semantic Kernel and AutoGen, and experimental labs for benchmarking and reinforcement learning. Storage Account: You do not have permissions to list the data using your user account with Entra ID Post by Sandro Pereira Being an Owner of an Azure Storage Account does not automatically grant access to its data. This common confusion trips up many developers working with Logic Apps and blob storage. The article explains the distinction between management roles and data-plane permissions, then walks through assigning Storage Blob Data roles via Access Control (IAM) to resolve the “You do not have permissions” error when authenticating with Microsoft Entra ID. 
Using Office 365 Outlook Connector in Logic Apps Standard Post by Srikanth Gunnala When configuring Office 365 Outlook (V2) with Managed Identity in Logic Apps Standard, the workflow designer may fail to auto-bind existing API connections—even when runtime works correctly. This post details the issue, explaining how a null referenceName in the workflow JSON causes the trigger to appear unbound. The fix involves manually setting the connection reference name in the workflow definition, after which the designer immediately recognizes the connection. Inside Logic Apps Standard: Understanding Compute Units (CU) for Storage Scaling Post by Daniel Jonathan High-volume Logic Apps can hit Azure Storage throttling limits. This deep-dive explains how Compute Units enable horizontal storage scaling by distributing workflow execution data across up to 32 storage accounts. It covers CU routing via Run ID suffixes, table distribution patterns, and configuration via host.json. Understanding this architecture is essential for building monitoring tools, querying run histories programmatically, or troubleshooting performance at scale. E59 - Roles & Personas Video by Sebastian Meyer This episode recaps highlights from SAP TechEd and Microsoft Ignite, with a focus on the SAP Innovation Guide and Logic Apps announcements. It explores roles and personas in the integration space, discussing how different team members contribute to enterprise integration projects. The conversation provides context for practitioners following both SAP and Microsoft ecosystems and offers insights into recent platform developments relevant to hybrid integration scenarios. Architecting Trust: Leveraging Microsoft Foundry to Solve AI Governance Challenges Post by Kent Weare As organizations scale AI deployments, governance becomes critical. This article explores how Microsoft Foundry addresses common AI governance challenges including model management, access controls, auditability, and compliance. It outlines architectural patterns for building trust into AI systems from the ground up and provides guidance for teams navigating the balance between innovation velocity and responsible AI practices in enterprise environments.Logic Apps Aviators Newsletter - January 2026
In this issue:
Ace Aviator of the Month
News from our product group
Community Playbook
News from our community

Ace Aviator of the Month

January's Ace Aviator: Sagar Sharma

What's your role and title? What are your responsibilities?
I'm a cross-domain Business Solution Architect specializing in delivering new business capabilities to customers. I design end-to-end architectures, especially on Azure platforms and in the integration domain using Azure Integration Services. My role involves making architectural decisions, defining standards, ensuring platform reliability, guiding teams, and helping organizations transition from legacy integration systems to modern cloud-native patterns.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?
My day usually blends architecture work with hands-on collaboration. I review integration designs, refine patterns, help teams troubleshoot integration flows, and ensure deployments run smoothly through DevOps pipelines. A good part of my time is spent translating business needs into integration patterns and making sure the overall platform stays secure, scalable, and maintainable.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?
The community has shaped a big part of my career. Many of my early breakthroughs came from blogs, samples, and talks shared by others. Contributing back feels like closing the loop. I enjoy sharing real-world lessons, learning from peers, and helping others adopt integration patterns with confidence. The energy of the community and the conversations it creates keep me inspired.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?
Focus on core concepts—messaging, APIs, security, and distributed systems—because tools evolve, but fundamentals stay relevant. Share your learning early, even if it feels small. Be curious about the "why" behind patterns. Build side projects, not just follow tutorials. And don't fear a nonlinear career path—diverse experience is an asset in technology.

What has helped you grow professionally?
Hands-on customer work, strong mentors, and consistent learning habits have been key. Community involvement—writing, speaking, and collaborating—has pushed me to structure my knowledge and stay current. And working in environments that encourage experimentation has helped me develop faster and with more confidence.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
I'd love to see a unified, out-of-the-box business transaction tracing experience across Logic Apps, Service Bus, APIM, Functions, and downstream services. Something that automatically correlates events, visualizes the full journey of a transaction, and simplifies root-cause analysis. This would make operational troubleshooting dramatically easier in enterprise environments.

News from our product group

Microsoft BizTalk Server Product Lifecycle Update
BizTalk Server 2020 will be the final release, with support extending through 2030. Microsoft encourages a gradual transition to Azure Logic Apps, offering migration tooling, hybrid deployment options, and reuse of existing BizTalk artifacts. Customers can modernize at their own pace while maintaining operational continuity.
Data Mapper Test Executor: A New Addition to Logic Apps Standard Test Framework
The Data Mapper Test Executor adds native support for testing XSLT and Data Mapper transformations directly within the Logic Apps Standard test framework. It streamlines validation, improves feedback cycles, and integrates with the latest SDK to enable reliable, automated testing of map generation and execution.

Announcing General Availability of AI & RAG Connectors in Logic Apps (Standard)
Logic Apps Standard AI and RAG connectors are now GA, enabling native document processing, semantic search, embeddings, and agentic workflows. These capabilities let teams build intelligent, context-aware automations using their own data, reducing complexity and enhancing decisioning across enterprise integrations.

Logic Apps Labs
Logic Apps Labs introduces the Azure Logic Apps agentic workflows learning path, offering guided modules on building conversational and autonomous agents, extending capabilities with MCP tools, and orchestrating multi-agent workflows. It serves as a starting point for hands-on labs covering design, deployment, and advanced patterns for intelligent automation.

News from our community

Handling Empty SQL Query Results in Azure Logic Apps
Post by Anitha Eswaran
If a SQL stored procedure returns no rows, you can detect the empty result set in Logic Apps by checking whether the output's ResultSets object is {}. When empty, the workflow can be cleanly terminated or used to trigger alerts, ensuring predictable behavior and more resilient integrations.

Azure Logic Apps MCP Server
Post by Laveesh Bansal
Our own Laveesh Bansal spent some time creating an Azure Logic Apps MCP Server that enables natural-language debugging, workflow inspection, and updates without using the portal. It supports both Standard and Consumption apps, integrates with AI clients like Copilot and Claude, and offers tools for local or cloud-hosted setups, testing, and workflow lifecycle operations.

Azure Logic Apps: Change Detection in JSON Objects and Arrays
Post by Suraj Somani
Logic Apps offers native functions to detect changes in JSON objects and arrays without worrying about field or item order. Using equals() for objects and intersection() for arrays, you can determine when data has truly changed and trigger workflows only when updates occur, improving efficiency and reducing unnecessary processing.

Logic Apps Standard: A clever hack to use JSON schemas in your Artifacts folder for JSON message validation (Part 1)
Post by Şahin Özdemir
Şahin outlines a workaround for using JSON schemas stored in the Artifacts folder to validate messages in Logic Apps Standard. It revisits integration needs from BizTalk migrations and shows how to bring structured validation into modern workflows without relying on Integration Accounts. This is a two-part series and you can find part two here.

Let's integrate SAP with Microsoft
Video by Sebastian Meyer
Sebastian has a new video out, and in this episode he and Martin Pankraz break down SAP GROW and RISE for Microsoft integration developers, covering key differences and integration options across IDoc, RFC, BAPI, SOAP, HTTPS, and OData, giving a concise overview of today's SAP landscape and what it means for building integrations on Azure.

Logic Apps Initialize variables action has a max limit of 20 variables
Post by Sandro Pereira
Logic Apps allows only 20 variables per Initialize variables action, and exceeding it triggers a validation error.
This limit applies per action, not per workflow. Using objects, parameters, or Compose actions often reduces the need for many scalars and leads to cleaner, more maintainable workflows. Did you know that? It is a Friday Fact!

Announcing the BizTalk Server 2020 Cumulative Update 7
The BizTalk Server product team has released Cumulative Update 7 for BizTalk Server 2020. Cumulative Update 7 contains all released functional and security fixes for customer-reported issues in BizTalk Server 2020. CU7 also adds support for the following new Microsoft platforms:

Microsoft Visual Studio 2022
Microsoft Windows Server 2022
Microsoft SQL Server 2022
Microsoft Windows 11

BizTalk Server 2016 is out of mainstream support and reaches its end of life in 2027. If you are running BizTalk 2016, or earlier versions of the product, you must upgrade to BizTalk Server 2020 CU6 or strongly consider migrating to Azure Logic Apps. Please fill out this survey: https://aka.ms/biztalklogicapps.

More information about CU7:
This is an optional update, needed only if you require Visual Studio 2022. If you don't need VS 2022, you can continue running on BizTalk Server 2020 CU6.
CU7 requires that you re-create BizTalk groups for BizTalk Server instances that were already part of a BizTalk group before the installation. Existing BizTalk groups can't have different instances at different cumulative update levels.
We will provide support to our CU6 and CU7 customers.
You can obtain the software from the Microsoft 365 admin center or the Visual Studio Subscriber site.

For more information about BizTalk Server 2020 CU7, read the Microsoft Knowledge Base article posted at https://aka.ms/BTS2020CU7KB.

Logic Apps Aviators Newsletter - December 2025
In this issue:
Ace Aviator of the Month
News from our product group
Community Playbook
News from our community

Ace Aviator of the Month

December's Ace Aviator: Daniel Jonathan

What's your role and title? What are your responsibilities?
I'm an Azure Integration Architect at Cnext, helping organizations modernize and migrate their integrations to Azure Integration Services. I design and build solutions using Logic Apps, Azure Functions, Service Bus, and API Management. I also work on AI solutions using Semantic Kernel and LangChain to bring intelligence into business processes.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?
My day usually begins by attending to customer requests and handling recent deployments. Most of my time goes into designing integration patterns, building Logic Apps, mentoring the team, and helping customers with technical solutions. Lately, I've also been integrating AI capabilities into workflows.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?
The community is open, friendly, and full of knowledge. I enjoy sharing ideas, writing posts, and helping others solve real-world challenges. It's great to learn and grow together.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?
Start small and stay consistent. Learn the basics well—like messaging, retries, and error handling—before diving into complex tools. Keep learning and share what you know.

What has helped you grow professionally?
Hands-on experience, teamwork, and continuous learning. Working across different projects taught me how to design reliable and scalable systems. Exploring AI with Semantic Kernel and LangChain has also helped me think beyond traditional integrations.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
I'd add an "Overview Page" in Logic Apps containing the HTTP URLs for each workflow, so developers can quickly access them for testing from one place. It would save time and make working with multiple workflows much easier.

News from our product group

Logic Apps Community Day 2025 Playlist
Did you miss or want to catch up on individual sessions from Logic Apps Community Day 2025? Here is the full playlist – choose your favorite sessions and have fun!

The future of integration is here and it's agentic
Missed Kent Weare and Divya Swarnkar's session at Ignite? It is here for you to watch on demand. Enterprise integration is being reimagined. It's no longer just about connecting systems, but about enabling adaptive, agentic workflows that unify apps, data, and systems. In this session, discover how to modernize integration, migrate from BizTalk, and adopt AI-driven patterns that deliver agility and intelligence. Through customer stories and live demos, see how to bring these workflows to life with Agent Loop in Azure Logic Apps.

Public Preview: Azure Logic Apps Connectors as MCP Tools in Microsoft Foundry
Unlock secure enterprise connectivity with Azure Logic Apps connectors as MCP tools in Microsoft Foundry. Agents can now use hundreds of connectors natively—no custom code required. Learn how to configure and register MCP servers for seamless integration.

Announcing AI Foundry Agent Service Connector v2 (Preview)
AI Foundry Agent Service Connector v2 (Preview) is here!
Azure Logic Apps can now securely invoke Foundry agents, enabling low-code AI integration, multi-agent workflows, and faster time-to-value. Explore new operations for orchestration and monitoring.

Announcing the General Availability of the XML Parse and Compose Actions in Azure Logic Apps
XML Parse and Compose Actions are now GA in Azure Logic Apps! Easily handle XML with XSD schemas, streamline workflows, and support internationalization. Learn best practices for arrays, encoding, and safe transport of content.

Clone a Consumption Logic App to a Standard Workflow
Clone your Consumption Logic Apps into Standard workflows with ease! This new feature accelerates migration, preserves design, and unlocks advanced capabilities for modern integration solutions.

Announcing the HL7 connector for Azure Logic Apps Standard and Hybrid (Public Preview)
Connect healthcare systems effortlessly! The new HL7 connector for Azure Logic Apps (Standard & Hybrid) enables secure, standardized data exchange and automation using HL7 protocols—now in Public Preview.

Announcing Foundry Control Plane support for Logic Apps Agent Loop (Preview)
Foundry Control Plane now supports Logic Apps Agent Loop (Preview)! Manage, govern, and observe agents at scale with built-in integration—no extra steps required. Ensure trust, compliance, and scalability in the agentic era.

Announcing General Availability of Agent Loop in Azure Logic Apps
Agent Loop transforms Logic Apps into a multi-agent automation platform, enabling AI agents to collaborate with workflows and humans. Build secure, enterprise-ready agentic solutions for business automation at scale.

Agent Loop Ignite Update - New Set of AI Features Arrive in Public Preview
We are releasing a broad set of new and powerful AI-first Agent Loop capabilities in Public Preview that dramatically expand what developers can build: run agents in the Consumption SKU, bring your own models through APIM AI Gateway, call any tool through MCP, deploy agents directly into Teams, secure RAG with document-level permissions, onboard with Okta, and build in a completely redesigned workflow designer.

Announcing MCP Server Support for Logic Apps Agent Loop
Agent Loop in Azure Logic Apps now supports Model Context Protocol (MCP), enabling secure, standardized tool integration. Bring your own MCP connector, use Azure-managed servers, or build custom connectors for enterprise workflows.

Enabling API Key Authentication for Logic Apps MCP Servers
Logic Apps MCP Servers now support API Key authentication alongside OAuth2 and Anonymous options. Configure keys via host.json or Azure APIs, retrieve and regenerate keys easily, and connect MCP clients securely for agentic workflows.

Announcing Public Preview of Agent Loop in Azure Logic Apps Consumption
Agent Loop now brings AI-powered automation to Logic Apps Consumption with a frictionless, pay-as-you-go model. Build autonomous and conversational agents using 1,400+ connectors—no dedicated infrastructure required. Ideal for rapid prototyping and enterprise workflows.

Moving the Logic Apps Designer Forward
A major redesign of the Azure Logic Apps designer enters Public Preview for Standard workflows. Phase I focuses on faster onboarding, unified views, draft mode with auto-save, improved search, and enhanced debugging. Feedback will shape future phases for a seamless development experience.
Announcing the General Availability of the RabbitMQ Connector
The RabbitMQ Connector for Azure Logic Apps is now generally available, enabling reliable message exchange for Standard and Hybrid workflows. It supports triggers, publishing, and advanced routing, with global rollout underway for robust, scalable integration scenarios.

Duplicate Detection in Logic App Trigger
Prevent duplicate processing in Logic Apps triggers with a REST API-based solution. It checks recent runs using clientTrackingId to avoid reprocessing items caused by edits or webhook updates. It works with Logic App Standard and is adaptable for Consumption or Power Automate.

Announcing the BizTalk Server 2020 Cumulative Update 7
BizTalk Server 2020 Cumulative Update 7 is out, adding support for Visual Studio 2022, Windows Server 2022, SQL Server 2022, and Windows 11. It includes all prior fixes. Upgrade from older versions or consider migrating to Azure Logic Apps for modernization.

News from our community

Logic Apps Local Development Series
Post by Daniel Jonathan
Last month I shared an article from Daniel about debugging XSLT in VS Code. This month, I bumped into not one, but five articles in a series about Build, Test and Run Logic Apps Standard locally – definitely worth the read!

Working with sessions in Agentic Workflows
Post by Simon Stender
Build AI-powered chat experiences with session-based agentic workflows in Azure Logic Apps. Learn how they enable dynamic, stateful interactions, integrate with APIs and apps, and avoid common pitfalls like workflows stuck in "running" forever.

Integration Love Story with Mimmi Gullberg
Video by Ahmed Bayoumy and Robin Wilde
Meet Mimmi Gullberg, Green Cargo's new integration architect driving smarter, sustainable rail logistics. With experience from BizTalk to Azure, she blends tech and business insight to create real value. Her mantra: understand the problem first, then choose the right tools—Logic Apps, Functions, or AI.

Integration Love Story with Jenny Andersson
Video by Ahmed Bayoumy and Robin Wilde
Discover Jenny Andersson's inspiring journey from skepticism to creativity in tech. In this episode, she shares insights on life as an integration architect, tackling system challenges, listening to customers, and how AI is shaping the future of integration.

You Can Get an XPath value in Logic Apps without returning an array
Post by Luis Rigueira
Working with XML in Azure Logic Apps? The xpath() function always returns an array—even for a single node. Or does it? Find out how to return just the values you want in this Friday Fact from Luis Rigueira.

Set up Azure Standard Logic App Connectors as MCP Server
Video by Srikanth Gunnala
Expose your Azure Logic Apps integrations as secure tools for AI assistants. Learn how to turn connectors like SAP, SQL, and Jira into MCP tools, protect them with Entra ID/OAuth, and test in GitHub Copilot Chat for safe, action-ready AI workflows.

Making Logic Apps Speak Business
Post by Al Ghoniem
Stop forcing Logic Apps to look like business diagrams. With Business Process Tracking, you can keep workflows technically sound while giving business users clear, stage-based visibility into processes—decoupled, visual, and KPI-driven.

Logic Apps Aviators Newsletter - October 25
In this issue:
Ace Aviator of the Month
News from our product group
News from our community

Ace Aviator of the Month

October Ace Aviator: Robin Wilde
Business and Marketing Manager @Contica

What's your role and title? What are your responsibilities?
My role is Business and Marketing Manager at Contica. My main responsibility is helping our customers translate business challenges into technical solutions. I come from a technical background as a BizTalk and Azure developer and architect, so I have one foot in the technical world and the other in the business side.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?
On any given day, I'm improving our customer offerings, diving into project challenges with colleagues, and exploring new technologies with Ahmed Bayoumy to find better ways of working. If I'm lucky, we're recording a new episode of Integration Love Story with a community member/friend and learning from their life and tech experience.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?
The community itself. Since I started working with integration, I've always felt that people genuinely want to share their knowledge and experience. The people behind the blog posts that helped me grow as a junior integration developer have turned out to be some of the most humble and generous individuals I've met. Everyone is open to sharing their experiences in a kind and respectful way, and that's incredibly motivating.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?
Everyone has been a beginner. Don't be afraid to reach out, ask for help and most importantly, stay curious and lean into new technology. "The more you know, the more you realize you don't know." And that's the beauty of it!

What has helped you grow professionally?
Curiosity and the drive to keep learning. I've been something of a "Yes man" when it comes to new challenges. And being in an environment where the people around you accept you for who you are, every day, that's a recipe for real growth.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
A built-in, AI-powered "next action suggestion" in your Logic App workflow, based on other designs in your resource group or predefined business processes in the same Logic App Standard you are working on. A feature that would help me be more productive!

News from our product group

Logic Apps Live September 2025
Missed Logic Apps Live in September? You can watch it here. We shared our latest updates in AI, including the latest refresh on Agent Loop and the introduction of Logic Apps MCP support.

Logic Apps Community Day 2025
We are bringing Logic Apps Community Day again this year, on October 30, 2025 (Pacific Time), and we want you to join us as we host a full day of learning where you will be the star! Check the agenda and register for this must-watch event.

Logic Apps - MCP Demos
Explore how Azure Logic Apps and API Center simplify building MCP servers to integrate with Salesforce, Dataverse, SharePoint, ServiceNow, and even Copilot Studio.

Introducing Logic Apps MCP servers (Public Preview)
Microsoft has launched public preview support for building MCP servers using Azure Logic Apps (Standard). This enables developers to turn connectors into modular, reusable tools for scalable agent creation.
Two approaches are supported: streamlined setup via Azure API Center and custom configuration for advanced control. Both methods simplify integration across enterprise systems.

Announcement! Python Code Interpreter in Logic Apps is now in Public Preview
Logic Apps now supports Python code execution via Azure Container Apps, enabling users to analyze data, generate insights, and automate tasks using natural language. This feature empowers business users to explore data without writing code, streamlining workflows across sales, finance, and operations.

Azure Logic Apps: Ushering in the Era of Multi-Agentic Business Process Automation
Microsoft has transformed Azure Logic Apps into a multi-agentic automation platform, introducing features like Agent Loop for AI-driven workflows, Python Code Interpreter for custom logic, and Foundry Agent Service for model integration. These enhancements enable intelligent, collaborative automation with conversational agents, advanced orchestration, and enterprise-grade security and observability.

Calling Logic Apps MCP Server from Copilot Studio
Microsoft's Azure Logic Apps now supports MCP Server integration with Copilot Studio, enabling secure and scalable access to enterprise data assets. Using API Center and Logic Apps Standard, developers can expose workflows and connectors as MCP tools, authenticated via Entra ID and Managed Identity. This setup allows conversational agents in Copilot Studio to interact with enterprise systems efficiently and securely.

News from our community

What Every Developer Needs to Know about AI!
Post by Stephen W. Thomas
AI is rapidly transforming integration development, and this guide offers a practical roadmap for developers to stay relevant. It emphasizes three focus areas: building foundational AI knowledge, boosting productivity with tools like GitHub Copilot, and mastering enterprise-ready skills through APIs, Microsoft tools, MCP, and agents. With curated resources and expert recommendations, developers can confidently navigate the evolving AI landscape and apply it effectively in business scenarios.

BizTalk to Logic Apps with Harold Campos: Standard or Hybrid, what changes & what maps
Video by Ahmed Bayoumy
Migrating from BizTalk to Azure? In this conversation with Harold Campos, Principal PM - Azure Logic Apps at Microsoft, Ahmed unpacks Logic Apps Hybrid and the broader Logic Apps options: what it is, when to use it, and how it helps teams modernize without losing on-prem control.

Determinism vs Nondeterminism
Post by Håkan Åkerblom
Determinism or nondeterminism, two concepts that shape everything from science to philosophy. But what do they mean for system integration?

Track Down and Delete Unused Logic Apps in Azure
Post by Dieter Gobeyn
Azure Logic Apps (Consumption) charges per execution, but even inactive apps can still cost money. Triggers like polling or timers may continue to run, leading to hidden costs. Cleaning up unused Logic Apps improves governance and supports FinOps goals.

Handling "When a Blob is Added or Modified" Trigger Limitation and Pick Files from Subfolders with File Extension Filters
Post by Prashant Singh
Learn how to overcome Logic Apps' limitations with blob triggers in subfolders and file filters in this post from Prashant. Learn about trigger conditions and Event Grid options to make your workflows smarter and more scalable.
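To make the filtering idea concrete, here is a minimal Python sketch that applies the same subfolder-prefix and file-extension rules against a container using the Azure Blob SDK. It is not taken from Prashant's post (which builds the filter with trigger conditions inside the workflow), and the account, container, and folder names are placeholders.

```python
# Minimal sketch (assumed names, not from the original post): list only .csv files
# under a given subfolder, mirroring the prefix + extension filter the post builds
# with trigger conditions. Requires: pip install azure-identity azure-storage-blob
from azure.identity import DefaultAzureCredential
from azure.storage.blob import ContainerClient

ACCOUNT_URL = "https://<storage-account-name>.blob.core.windows.net"  # placeholder
CONTAINER = "inbound"        # placeholder container
SUBFOLDER = "orders/2026/"   # placeholder virtual folder prefix
EXTENSION = ".csv"

def matching_blobs() -> list[str]:
    container = ContainerClient(ACCOUNT_URL, CONTAINER, credential=DefaultAzureCredential())
    # name_starts_with narrows the listing to the subfolder; the endswith check
    # plays the role of the file-extension trigger condition.
    return [
        blob.name
        for blob in container.list_blobs(name_starts_with=SUBFOLDER)
        if blob.name.lower().endswith(EXTENSION)
    ]

if __name__ == "__main__":
    for name in matching_blobs():
        print(name)
```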
Consume an MCP Endpoint from Azure Logic Apps — with an Agent Loop
Post by Daniel Jonathan
In this post, Daniel shows how to use Azure Logic Apps with an Agent Loop to discover MCP tools, select the right one, and invoke it—all in one flow. Perfect for building smart, dynamic integrations with MCP servers.

Logic Apps ❤️ MCP — Expose Arithmetic Tools (Add & Subtract)
Post by Daniel Jonathan
And Daniel is on a roll. In this post, Daniel shows how to expose simple arithmetic operations—like add and subtract—as MCP tools using Logic Apps. With Postman support, testing these agent-ready workflows is now easier than ever.

Logic Apps & MCP - Leverage Your Existing Integration Platform
Post by Pim Simons and Michel Pauwels
Learn how to turn existing Logic Apps into MCP servers, enabling secure access for AI agents to your APIs, workflows, and data—without changing your architecture. A smart way to future-proof your integrations.

Building Approval Workflows with Logic Apps, Adaptive Cards, and Microsoft Teams
Post by Saroj Kumar
Learn how to build secure, reusable approval workflows using Azure Logic Apps, Adaptive Cards, and Microsoft Teams. Approvers stay in Teams, while automation handles the rest—fast, transparent, and scalable.

Microsoft Introduces Logic Apps as MCP Servers in Public Preview
Post by Steef-Jan Wiggers
In this InfoQ article, Steef-Jan introduces the public preview support for Azure Logic Apps as MCP servers, enabling scalable, secure integration with AI agents and enterprise systems. Build reusable tools and workflows that agents can discover and invoke with ease.

The 56 Resubmits Trap in Logic App Standard
Post by Luis Rigueira
Logic App Standard limits bulk resubmissions to 56 runs every 5 minutes. This post shows how to bypass that using callback URLs for faster, parallel recovery—ideal when dealing with thousands of failed runs.

Stop Using Azure Logic Apps for Data Integration
Post by Al Ghoniem
Workflow tools like Azure Logic Apps aren't built for heavy data processing. This post explains why ETL/ELT tools are better for data integration—and how to combine both for scalable, reliable pipelines.

Integration Love Story with Tom Canter
Video by Ahmed Bayoumy and Robin Wilde
In this episode of 'Integration Love Story,' Ahmed and Robin interview the legendary Tom Canter. He discusses his early involvement and long-standing focus on integration, dating back to 1998. Tom emphasizes the community-centric nature of the integration field, recounting personal stories that highlight the value of trust and collaboration among peers.

Logic Apps Aviators Newsletter - September 25
In this issue:
Ace Aviator of the Month
News from our product group
Community Playbook
News from our community

Ace Aviator of the Month

September's Ace Aviator: Kritika Singh
Integration Architect & Sr. Consultant at Capgemini Norge AS

What's your role and title? What are your responsibilities?
I work as an Integration Architect & Sr. Consultant at Capgemini Norge AS. In my role I assist clients in addressing a wide range of integration challenges, with a particular emphasis on modernizing legacy systems, such as BizTalk, by transitioning them to cloud-native Azure iPaaS solutions. I'm responsible for architecting secure and scalable integration landscapes, designing and developing solutions, mentoring team members, and engaging with stakeholders and cross-functional teams. I work extensively with technologies such as Azure Logic Apps, Azure Functions, API Management, Service Bus, App Service Environment (ASEv3), Virtual Networks, and CI/CD pipelines using GitHub. One of my proudest achievements was successfully delivering a complex BizTalk modernization project to Azure that required deep technical expertise and strategic coordination.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?
As part of a distributed team across different locations and countries, my day starts with stand-ups involving both offshore and onshore team members to review progress and assign tasks. I spend time with developers, helping them navigate technical challenges and mentoring them through dedicated sessions—this has helped improve delivery quality and team confidence. I collaborate closely with cross-functional teams and stakeholders to align on requirements and solution design. Throughout the day, I work on resolving issues, improving existing solutions, working on innovation ideas, and managing tasks.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?
I am deeply passionate about technology—having worked with BizTalk throughout my career and, for the past 5+ years, diving into Azure iPaaS. Microsoft products evolve constantly, and that sparks my curiosity to explore, learn, and innovate every day. What truly drives me is the opportunity to give back to the community by sharing my learnings, challenges, and even failures. It's all about growing together and inspiring others along the way.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?
Curiosity and consistency matter more than perfection. Don't be afraid to ask questions, experiment, and fail—that's where the real learning happens. Also, find a community that supports you; sharing your journey, both wins and setbacks, can inspire others and help you grow faster.

What has helped you grow professionally?
My professional growth has been shaped by a blend of curiosity, courage, and the right opportunities. Starting with BizTalk and evolving into Azure iPaaS, I've embraced every challenge as a chance to learn. What's made the biggest difference is having the boldness to take on responsibility, the willingness to take risks, and the drive to keep growing. Sharing my journey with the community has not only helped others but also deepened my own learning.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
If I had a magic wand to enhance Logic Apps, I'd bring in four powerful features to supercharge developer productivity and solution resilience:

Seamless Version Control & Rollback
Git-like capabilities built into Logic Apps—track every change, compare versions, and roll back instantly when needed. This would empower teams to experiment confidently and collaborate more effectively without fear of breaking production workflows.

Effortless Disaster Recovery Setup
Setting up DR should be as simple as a few clicks. A built-in, automated DR configuration for AIS would ensure business continuity, reduce downtime, and give developers peace of mind—especially in mission-critical environments.

Native JSON Mapper (Not Liquid)
A visual, intuitive JSON mapping tool would simplify complex data transformations, reduce manual coding, and speed up development. This would be a game-changer for integration scenarios, especially when working with dynamic schemas and APIs.

Simplified Authorization like ClaimChecks for Logic Apps Standard (Beyond EasyAuth)
A more developer-friendly authorization setup that minimizes manual configurations and integrates seamlessly with identity providers. This would make securing Logic Apps faster, easier, and more consistent across environments.

News from our product group

Logic Apps Live August 2025
Missed Logic Apps Live in August? You can watch it here. We had a recap on Logic Apps Hybrid, our special guest Kritika Singh talking about her learnings with BizTalk migration to AIS, and updates on Data Mapper GA and Logic Apps Standard Deployment Center.

Logic Apps Community Day 2025
We are bringing Logic Apps Community Day again this year, on October 30, 2025 (Pacific Time), and we want you to join us as we host a full day of learning where you will be the star! Call for Speakers is still open until September 07, 2025 – so hurry and submit your session!

General Availability: Enhanced Data Mapper Experience in Logic Apps (Standard)
We're excited to announce the General Availability (GA) of the redesigned Data Mapper UX in the Azure Logic Apps (Standard) extension for Visual Studio Code. This release marks a major milestone in our journey to modernize and streamline data transformation workflows for integration developers.

Announcing: Setup CD in Azure Logic Apps Standard with Deployment Center
Looking to automate your Azure Logic Apps code deployments in a faster way? Deployment Center - a built-in feature in Azure Logic Apps Standard - is now available, with built-in support for your VS Code projects, making it easier to deploy Logic Apps from your source control repository. Deployment Center is designed to make deploying, updating, and managing your Logic Apps workflows simple and straightforward.

Hybrid Logic Apps deployment on Rancher K3s Kubernetes cluster
Explore how Hybrid Logic Apps run effortlessly on K3s—delivering the power of Hybrid Logic Apps without the complexity and heavy infrastructure demands of a full Kubernetes cluster!

News from our community

What gets returned to the LLM by my Logic App Agent Loop tool?
Video by Michael Stephenson
Michael has been experimenting with Logic Apps Agent Loop and, following a discussion with Kent Weare about the interaction between the workflow and the LLM, aimed to understand what data is returned to the model, since the number of tokens influences cost. He summarizes his findings from that conversation in this brief video.
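Since the takeaway above is that the size of a tool's output drives token usage (and therefore cost), the small Python sketch below shows one way to estimate how many tokens a JSON tool response would add to the conversation. It uses the tiktoken library with the cl100k_base encoding as a rough approximation—actual counts depend on the model behind the agent, and this snippet is not part of Michael's video.

```python
# Rough token estimate for a tool's JSON response (illustrative only, not from the video).
# Requires: pip install tiktoken
import json
import tiktoken

def estimate_tokens(tool_response: dict, encoding_name: str = "cl100k_base") -> int:
    """Approximate how many tokens a serialized tool response adds to the LLM context."""
    encoding = tiktoken.get_encoding(encoding_name)
    serialized = json.dumps(tool_response, separators=(",", ":"))
    return len(encoding.encode(serialized))

if __name__ == "__main__":
    # Hypothetical tool output: a status plus 50 order records.
    sample = {"status": "Succeeded", "orders": [{"id": i, "total": 19.99} for i in range(50)]}
    print(f"Approximate tokens returned to the model: {estimate_tokens(sample)}")
```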
SOAP 1.2 Calls from Logic Apps – Fixing Unsupported Media & WS-Addressing Errors
Post by Prashant Singh
Struggling with SOAP 1.2 in Azure Logic Apps? Learn how to fix Unsupported Media errors, decode MTOM responses, and handle WS-Addressing headers for seamless integration in this post by Prashant.

Demystifying AI Agent Loops in Logic Apps: The Future of Integration (But Not Everywhere)
Post by Al Ghoniem
Explore how AI Agent Loops enhance Azure Logic Apps for non-deterministic tasks like anomaly detection and IT Ops triage—while knowing when traditional workflows are the better fit.

Can I use AI to create and deploy an Azure Logic Apps with Business Central connector?
Post by Stefano Demiliani
Stefano is testing the boundaries of what AI can do, so you don't have to – he ran a blind test showing that AI can deploy Azure resources well, but struggles with external connectors like Business Central. Learn what worked, what didn't, and why better prompts matter.

How to use ChatGPT Agent Mode with Azure!
Video by Stephen W. Thomas
And it looks like August was the month to experiment with Azure resources. Stephen did some research too and shows you how easily ChatGPT-5 Agent Mode can auto-provision resources in Azure. This video demonstrates how to use a single prompt to build a logic app and create a resource group. Follow along with his video to see how to get the ChatGPT-5 agent working for you!

Integration Love Story - Andrew Wilson
Video by Ahmed Bayoumy and Robin Wilde
In this special fast-paced episode recorded at INTEGRATE, Ahmed and Robin sit down with the brilliant Andrew, newly awarded Microsoft MVP and Logic Apps Ace Aviator, to talk about his journey, passions, and why integration is the powerhouse behind every digital experience.

Tips for Migrating SAP IDoc Reception Workloads from BizTalk to Azure Logic Apps
Post by Francois Malgreve
Learn how to reuse BizTalk XSLTs in Azure Logic Apps! In this post by Francois, you will learn how to configure the SAP trigger with the right IDoc format and namespace settings—minimizing code changes and easing migration.

Query Azure DevOps work items with Logic App and Managed Identity
Post by Michael Stephenson
Learn how to use a reusable Logic App and a user-assigned managed identity to securely query Azure DevOps work items using WIQL—ideal for building scalable, secure workflows.

Understand Agent Loops in Azure Logic Apps
Video by Srikanth Gunnala
In this video, Srikanth explores Azure Logic Apps AI Agents — also known as Agent Workflows or Agent Loops — and how they're redefining workflow automation with Azure OpenAI. You'll learn what an AI agent is in Azure Logic Apps, how it works, and see a live demo of building an AI-powered, adaptive workflow.

Automate Microsoft Fabric Cost Savings with Logic Apps
Post by Sherry L. Robinson
Learn how to pause and resume Microsoft Fabric capacity using Azure Logic Apps—cutting costs during off-hours with minimal code and seamless integration via REST API or Resource Manager—in this insightful post by Sherry.

Use Graph API to send Emails in Logic Apps
Post by Şahin Özdemir
In this post, Şahin shows you how to use the Microsoft Graph API with service principals to securely send emails from Logic Apps using app registrations and access policies. This will be quite useful in cases where you can't associate the calls with a user account, which is a requirement for the Office 365 connector.
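For context on the Graph call behind that approach, here is a minimal Python sketch of the same sendMail request made with client credentials (app-only). It is not from Şahin's post—which drives the call from a Logic Apps HTTP action—and the tenant, app, and mailbox values are placeholders; the app registration also needs the Mail.Send application permission, ideally scoped down with an Exchange application access policy as the post describes.

```python
# Minimal sketch (placeholder IDs, not from the original post): send mail app-only
# via Microsoft Graph. Requires: pip install msal requests
import msal
import requests

TENANT_ID = "<tenant-id>"            # placeholder
CLIENT_ID = "<app-registration-id>"  # placeholder
CLIENT_SECRET = "<client-secret>"    # placeholder; prefer Key Vault or a certificate
SENDER = "noreply@contoso.com"       # placeholder mailbox the app is allowed to send as

def send_mail(to_address: str, subject: str, body: str) -> None:
    # Acquire an app-only token for Microsoft Graph using the client credentials flow.
    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
    if "access_token" not in token:
        raise RuntimeError(f"Token request failed: {token.get('error_description')}")

    message = {
        "message": {
            "subject": subject,
            "body": {"contentType": "Text", "content": body},
            "toRecipients": [{"emailAddress": {"address": to_address}}],
        },
        "saveToSentItems": "false",
    }
    response = requests.post(
        f"https://graph.microsoft.com/v1.0/users/{SENDER}/sendMail",
        headers={"Authorization": f"Bearer {token['access_token']}"},
        json=message,
        timeout=30,
    )
    response.raise_for_status()  # Graph returns 202 Accepted on success

if __name__ == "__main__":
    send_mail("someone@contoso.com", "Test", "Sent app-only via Microsoft Graph.")
```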