enterprise integration
Logic Apps Agentic Workflows with SAP - Part 2: AI Agents
Part 2 focuses on the AI-shaped portion of the destination workflows: how the Logic Apps Agent is configured, how it pulls business rules from SharePoint, and how its outputs are converted into concrete workflow artifacts. In Destination workflow #1, the agent produces three structured outputs—an HTML validation summary, a CSV list of InvalidOrderIds, and an Invalid CSV payload—which drive (1) a verification email, (2) an optional RFC call to persist failed rows as IDocs, and (3) a filtered dataset used for the separate analysis step that returns only analysis (or errors) back to SAP. In Destination workflow #2, the same approach is applied to inbound IDocs: the workflow reconstructs CSV from the custom segment, runs AI validation against the same SharePoint rules, and safely appends results to an append blob using a lease-based write pattern for concurrency.

1. Introduction

In Part 1, the goal was to make the integration deterministic: stable payload shapes, stable response shapes, and predictable error propagation across SAP and Logic Apps. Concretely, Part 1 established:

- how SAP reaches Logic Apps (Gateway/Program ID plumbing)
- the RFC contracts (IT_CSV, response envelope, RETURN / MESSAGE, EXCEPTIONMSG)
- how the source workflow interprets RFC responses (success vs error)
- how invalid rows can be persisted into SAP as custom IDocs (Z_CREATE_ONLINEORDER_IDOC)
- and how the second destination workflow receives those IDocs asynchronously

With that foundation in place, Part 2 narrows in on the part that is not just plumbing: the agent loop, the tool boundaries, and the output schemas that make AI results usable inside a workflow rather than "generated text you still need to interpret."

The diagram below highlights the portion of the destination workflow where AI is doing real work. The red-circled section is the validation agent loop (rules in, structured validation outputs out), which then fans out into operational actions like email notification, optional IDoc persistence, and filtering for the analysis step.

What matters here is the shape of the agent outputs and how they are consumed by the rest of the workflow. The agent is not treated as a black box; it is forced to emit typed, workflow-friendly artifacts (summary + invalid IDs + filtered CSV). Those artifacts are then used deterministically: invalid rows are reported (and optionally persisted as IDocs), while valid rows flow into the analysis stage and ultimately back to SAP.

What this post covers

In this post, I focus on five practical topics:

- Agent loop design in Logic Apps: tools, message design, and output schemas that make the agent's results deterministic enough to automate.
- External rule retrieval: pulling validation rules from SharePoint and applying them consistently to incoming payloads.
- Structured validation outputs → workflow actions: producing InvalidOrderIds and a filtered CSV payload that directly drive notifications and SAP remediation.
- Two-model pattern: a specialized model for validation (agent) and a separate model call for analysis, with a clean handoff between the two.
- Output shaping for consumption: converting AI output into HTML for email and into the SAP response envelope (analysis/errors only).

(Everything else—SAP plumbing, RFC wiring, and response/exception patterns—was covered in Part 1 and is assumed here.)

Next, I'll break down the agent loop itself—the tool sequence, the required output fields, and the exact points where the workflow turns AI output into variables, emails, and SAP actions.
Huge thanks to KentWeareMSFT for helping me understand agent loops and design the validation agent structure. And thanks to everyone in 🤖 Agent Loop Demos 🤖 | Microsoft Community Hub for making such great material available.

Note: For the full set of assets used here, see the companion GitHub repository (workflows, schemas, SAP ABAP code, and sample files).

2. Validation Agent Loop

In this solution, the Data Validation Agent runs inside the destination workflow after the inbound SAP payload has been normalized into a single CSV string. The agent is invoked as a single Logic Apps Agent action, configured with an Azure OpenAI deployment and a short set of instructions. Its inputs are deliberately simple at this stage: the CSV payload (the dataset to validate), and the ValidationRules reference (where the rule document lives), shown in the instructions as a parameter token.

The figure below shows the validation agent configuration used in the destination workflow. The top half is the Agent action configuration (model + instructions), and the bottom half shows the toolset that the agent is allowed to use. The key design choice is that the agent is not "free-form chat": it's constrained by a small number of tools and a workflow-friendly output contract.

What matters most in this configuration is the separation between instructions and tools. The instructions tell the agent what to do ("follow business process steps 1–3"), while the tools define how the agent can interact with external systems and workflow state. This keeps the agent modular: you can change rules in SharePoint or refine summarization expectations without rewriting the overall SAP integration mechanics.

Purpose

This agent's job is narrowly scoped: validate the CSV payload from SAP against externally stored business rules and produce outputs that the workflow can use deterministically. In other words, it turns "validation as reasoning" into workflow artifacts (summary + invalid IDs + invalid payload), instead of leaving validation as unstructured prose. In Azure Logic Apps terms, this is an agent loop: an iterative process where an LLM follows instructions and selects from available tools to complete a multi-step task. Logic Apps agent workflows explicitly support this "agent chooses tools to complete tasks" model (see Agent Workflows Concepts).

Tools

In Logic Apps agent workflows, a tool is a named sequence that contains one or more actions the agent can invoke to accomplish part of its task (see Agent Workflows Concepts). In the screenshot, the agent is configured with three tools, explicitly labeled Get validation rules, Get CSV payload, and Summarize CSV payload review. These tool names match the business process in the "Instructions for agent" box (steps 1–3). The next sections of the post go deeper into what each tool does internally; at this level, the important point is simply that the agent is constrained to a small, explicit toolset.

Agent execution

The screenshot shows the agent configured with:

- AI model: gpt-5-3 (gpt-5)
- A connection line: "Connected to … (Azure OpenAI)"
- Instructions for agent that define the agent's role and a 3-step business process:
  1. Get validation rules (via the ValidationRules reference)
  2. Get CSV payload
  3. Summarize the CSV payload review, using the validation document

This pattern is intentional: the instructions provide the agent's "operating procedure" in plain language, while the tools give the agent controlled ways to fetch the rule document, access the CSV input, and return structured results.
Because the workflow consumes the agent's outputs downstream, the instruction text is effectively part of your workflow contract (it must remain stable enough that later actions can trust the output shape).

Note: If a reader wants to recreate this pattern, the fastest path is:

- Start with the official overview of agent workflows (Workflows with AI Agents and Models - Azure Logic Apps).
- Follow a hands-on walkthrough for building an agent workflow and connecting it to an Azure OpenAI deployment (Logic Apps Labs, on azure.github.io, is a good step-by-step reference).
- Use the Azure OpenAI connector reference to understand authentication options and operations available in Logic Apps Standard (see Built-in OpenAI Connector).
- If you're using Foundry for resource management, review how Foundry connections are created and used, especially when multiple resources/tools are involved (see How to connect to AI Foundry).

2.1 Tool 1: Get validation rules

The first tool in the validation agent loop is Get validation rules. Its job is to load the business validation rules that will be applied to the incoming CSV payload from SAP. I keep these rules outside the workflow (in a document) so they can be updated without redeploying the Logic App. In this example, the rules are stored in SharePoint, and the tool simply retrieves the document content at runtime.

Get validation rules is implemented as a single action called Get validation document. In the designer, you can see it:

- uses a SharePoint Online connection (SharePoint icon and connector action)
- calls GetFileContentByPath (shown by the "File Path" input)
- reads the rule file from the configured Site Address
- uses the workflow parameter token ValidationRules for the File Path (so the exact rule file location is configurable per environment)

The output of this tool is the raw rule document content, which the Data Validation Agent uses in the next steps to validate the CSV payload.

The bottom half of the figure shows an excerpt of the rules document. The format is simple and intentionally human-editable: each rule is expressed as FieldName: condition. For example, the visible rules include:

- PaymentMethod: value must exist
- PaymentMethod: value cannot be "Cash"
- OrderStatus: value must be different from "Cancelled"
- CouponCode: value must have at least 1 character
- OrderID: value must be unique in the CSV array
- A scope note: "Do not validate the Date field."

These rules are the "source of truth" for validation. The workflow does not hardcode them into expressions; instead, it retrieves them from SharePoint and passes them into the agent loop so the validation logic remains configurable and auditable (you can always point to the exact rule document used for a given run).

A small but intentional rule in the document is "Do not validate the Date field." That line is there for a practical reason: in an early version of the source workflow, the date column was being corrupted during CSV generation. The validation agent still tried to validate dates (even though date validation wasn't part of the original intent), and the result was predictable: every row failed validation, leaving nothing to analyze. The upstream issue is fixed now, but I kept this rule in the demo to illustrate an important point: validation is only useful when it's aligned with the data contract you can actually guarantee at that point in the pipeline.

Note: The rules shown here assume the CSV includes a header row (field names in the first line) so the agent can interpret each column by name.
If you want the agent to be schema-agnostic, you can extend the rules with an explicit column mapping, for example:

Column 1: Order ID
Column 2: Date
Column 3: Customer ID
…

This makes the contract explicit even when headers are missing or unreliable.

With the rules loaded, the next tool provides the second input the agent needs: the CSV payload that will be validated against this document.

2.2 Tool 2: Get CSV payload

The second tool in the validation agent loop is Get CSV payload. Its purpose is to make the dataset-to-validate explicit: it defines exactly what the agent should treat as "the CSV payload," rather than relying on implicit workflow context. In this workflow, the CSV is already constructed earlier (as Create_CSV_payload), and this tool acts as the narrow bridge between that prepared string and the agent's validation step.

Figure: Tool #2 ("Get CSV payload") defines a single agent parameter and binds it to the workflow's generated CSV.

The figure shows two important pieces:

- The tool parameter contract ("Agent Parameters"). On the right, the tool defines an agent parameter named CSV Payload with type String, and the description (highlighted in yellow) makes the intent explicit: "The CSV payload received from SAP and that we validate based on the validation rules." This parameter is the tool's interface: it documents what the agent is supposed to provide/consume when using this tool, and it anchors the rest of the validation process to a single, well-defined input. Tools in Logic Apps agent workflows exist specifically to constrain and structure what an agent can do and what data it operates on (see Agent Workflows Concepts).

- Why there is an explicit Compose action ("CSV payload"). In the lower-right "Code view," the tool's internal action is shown as a standard Compose:

{
  "type": "Compose",
  "inputs": "@outputs('Create_CSV_payload')"
}

This is intentional. Even though the CSV already exists in the workflow, the tool still needs a concrete action that produces the value it returns to the agent. The Compose step pins the tool output to a single source of truth (Create_CSV_payload) and creates a stable boundary: "this is the exact CSV string the agent validates," independent of other workflow state. Put simply: the Compose action isn't there because Logic Apps can't access the CSV—it's there to make the agent/tool interface explicit, repeatable, and easy to troubleshoot.

What "tool parameters" are (in practical terms)

In Logic Apps agent workflows, a tool is a named sequence of one or more actions that the agent can invoke while executing its instructions. A tool parameter is the tool's input/output contract exposed to the agent. In this screenshot, that contract is defined under Agent Parameters, where you specify:

- Name: CSV Payload
- Type: String
- Description: "The CSV payload received from SAP…"

This matters because it clarifies (for both the model and the human reader) what the tool represents and what data it is responsible for supplying.

With Tool #1 providing the rules document and Tool #2 providing the CSV dataset, Tool #3 is where the agent produces workflow-ready outputs (summary + invalid IDs + filtered payload) that the downstream steps can act on.

2.3 Tool 3: Summarize CSV payload review

The third tool, Summarize CSV payload review, is where the agent stops being "an evaluator" and becomes a producer of workflow-ready outputs. It does most of the heavy lifting, so let's go into the details.
Instead of returning one blob of prose, the tool defines three explicit agent parameters—each with a specific format and purpose—so the workflow can reliably consume the results in downstream actions. In Logic Apps agent workflows, tools are explicitly defined tasks the agent can invoke, and each tool can be structured around actions and schemas that keep the loop predictable (see Agent Workflows Concepts).

Figure: Tool #3 ("Summarize CSV payload review") defines three structured agent outputs.

Description is not just documentation—it's the contract the model is expected to satisfy, and it strongly shapes what the agent considers "relevant" when generating outputs. The parameters are:

Validation summary (String)
Goal: a human-readable summary that can be dropped straight into email. In the screenshot, the description is very explicit about shape and content:
- "expected format is an HTML table"
- "create a list of all orderids that have failed"
- "create a CSV document… only for the orderid values that failed… each row on a separate line"
- "include title row only in the email body"
This parameter is designed for presentation: it's the thing you want humans to read first.

InvalidOrderIds (String, CSV format)
Goal: a machine-friendly list of identifiers the workflow can use deterministically. The key part of the description (highlighted in the image) is: "The format is CSV." That single sentence is doing a lot of work: it tells the model to emit a comma-separated list, which you then convert into an array in the workflow using split(...).

Invalid CSV payload (String, one row per line)
Goal: the failed rows extracted from the original dataset, in a form that downstream steps can reuse. The description constrains the output tightly:
- "original CSV rows… for the orderid values that failed validation"
- "each row must be on a separate line"
- "keep the title row only for the email body and remove it otherwise"
This parameter is designed for automation: it becomes input to remediation steps (like transforming rows to XML and creating IDocs), not just a report.

What "agent parameters" do here (and why they matter)

A useful way to think about agent parameters is: they are the "typed return values" of a tool. Tools in agent workflows exist to structure work into bounded tasks the agent can perform, and a schema/parameter contract makes the results consumable by the rest of the workflow (see Agent Workflows Concepts). In this tool, the parameters serve two purposes at once:

- They guide the agent toward salient outputs. The descriptions explicitly name what matters: "failed orderids," "HTML table," "CSV format," "one row per line," "header row rules." That phrasing makes it much harder for the model to "wander" into irrelevant commentary.
- They align with how the workflow will parse and use the results. By stating "InvalidOrderIds is CSV," you make it trivially parseable (split), and by stating "Invalid CSV payload is one row per line," you make it easy to feed into later transformations.

Why the wording works (and what wording tends to work best)

What's interesting about the parameter descriptions is that they combine three kinds of constraints:

Output format constraints (make parsing deterministic)
- "expected format is an HTML table"
- "The format is CSV."
- "each row must be on a separate line"
These format cues help the agent decide what to emit and help you avoid brittle parsing later.
Output selection constraints (force relevance)
- "only for the orderid values that failed validation"
- "Create a list of all orderids that have failed"
This tells the agent what to keep and what to ignore.

Output operational constraints (tie outputs to downstream actions)
- "Include title row only in the email body"
- "remove it otherwise"
This explicitly anticipates downstream usage (email vs remediation), which is exactly the kind of detail models often miss unless you state it.

Rule of thumb: wording works best when it describes what to produce, in what format, with what filtering rules, and why the workflow needs it.

How these parameters tie directly to the downstream actions

The next picture makes the design intent very clear: each parameter is immediately "bound" to a normal workflow value via Compose actions and then used by later steps. This is the pattern we want: agent output → Compose → (optional) normalization → reused by deterministic workflow actions. It's the opposite of "read the model output and hope."

This is the reusable pattern:

- Decide the minimal set of outputs the workflow needs.
- Specify formats that are easy to parse.
- Write parameter descriptions that encode both selection and formatting constraints.
- Immediately bind outputs to workflow variables via Compose/SetVariable actions.

The main takeaway from this tool is that the agent is being forced into a structured contract: three outputs with explicit formats and clear intent. That contract is what makes the rest of the workflow deterministic—Compose actions can safely read @agentParameters(...), the workflow can safely split(...) the invalid IDs, and downstream actions can treat the "invalid payload" as real data rather than narrative. I'll show later how this same "parameter-first" design scales to other agent tools.

2.4 Turning agent outputs into a verification email

Once the agent has produced structured outputs (Validation summary, InvalidOrderIds, and Invalid CSV payload), the next goal is to make those outputs operational: humans need a quick summary of what failed, and the workflow needs machine-friendly values it can reuse downstream. The design here is intentionally straightforward: the workflow converts each agent parameter into a first-class workflow output (via Compose actions and one variable assignment), then binds those values directly into the Office 365 email body. The result is an email that is both readable and actionable—without anyone needing to open run history.

The figure below shows how the outputs of Summarize CSV payload review are mapped into the verification email. On the left, the tool produces three values via subsequent actions (Summary, Invalid order ids, and Invalid CSV payload), and the workflow also normalizes the invalid IDs into an array (Save invalid order ids). On the right, the Send verification summary action composes the email body using those same values as dynamic content tokens.

Figure: Mapping agent outputs to the verification email.

The important point is that the email is not constructed by "re-prompting" or "re-summarizing." It is assembled from already-structured outputs. This mapping is intentionally direct: each piece of the email corresponds to one explicit output from the agent tool. The workflow doesn't interpret or transform the summary beyond basic formatting—its job is to preserve the agent's structured outputs and present them consistently. The only normalization step happens for InvalidOrderIds, where the workflow also converts the CSV string into an array (ArrayOfInvalidOrderIDs) for later filtering and analysis steps; a minimal sketch of that binding is shown below.
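To make the binding concrete, here is a minimal sketch of the two actions involved, assuming the agent output is read with the agentParameters() expression mentioned above and that ArrayOfInvalidOrderIDs was initialized earlier as an Array variable; the exact action names and runAfter wiring in the real workflow may differ:

{
  "Invalid_order_ids": {
    "type": "Compose",
    "inputs": "@agentParameters('InvalidOrderIds')"
  },
  "Save_invalid_order_ids": {
    "type": "SetVariable",
    "inputs": {
      "name": "ArrayOfInvalidOrderIDs",
      "value": "@split(outputs('Invalid_order_ids'), ',')"
    }
  }
}

Because the parameter description pins the format to CSV, the split(...) call is all the normalization the workflow needs.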
The next figure shows a sample verification email produced by this pipeline. It illustrates the three-part structure: an HTML validation summary table, the raw invalid order ID list, and the extracted invalid CSV rows.

Figure: Sample verification email — validation summary table + invalid order IDs + invalid CSV rows.

The extracted artifacts InvalidOrderIds and Invalid CSV payload are used in the downstream actions that persist failed rows as IDocs for later processing, which were presented in Part 1. I will get back to this later to talk about reusing the validation agent. Next, however, I will go over the data analysis part of the AI integration.

3. Analysis Phase: from validated dataset to HTML output

After the validation agent loop finishes, the workflow enters a second AI phase: analysis. The validation phase is deliberately about correctness (what to exclude and why). The analysis phase is about insight, and it runs on the remaining dataset after invalid rows are filtered out. At a high level, this phase has three steps:

1. Call Azure OpenAI to analyze the CSV dataset while explicitly excluding invalid OrderIDs.
2. Extract the model's text output from the OpenAI response object.
3. Convert the model's markdown output into HTML so it renders cleanly in email (and in the SAP response envelope).

3.1 OpenAI component: the "Analyze data" call

The figure below shows the Analyze data action that drives the analysis phase. This action is executed after the Data Validation Agent completes, and it uses three messages: a system instruction that defines the task, the CSV dataset as input, and a second user message that enumerates the OrderIDs to exclude (the invalid IDs produced by validation).

Figure: Azure OpenAI analysis call.

The analysis call is structured as:

system: define the task and constraints
user: provide the dataset
user: provide exclusions derived from validation

which in this workflow looks like:

system: Analyze dataset; provide trends/predictions; exclude specified orderids.
user: <CSV payload>
user: Excluded orderids: <comma-separated invalid IDs>

Two design choices are doing most of the work here:

- The model is given the dataset and the exclusions separately. This avoids ambiguity: the dataset is one message, and the "do not include these OrderIDs" constraint is another.
- The exclusion list is derived from validation output, not re-discovered during analysis. The analysis step doesn't re-validate; it consumes the validation phase's results and focuses purely on trends/predictions.

3.2 Processing the response

The next figure shows how the workflow turns the Azure OpenAI response into a single string that can be reused for email and for the SAP response. The workflow does three things in sequence: it parses the response JSON, extracts the model's text content, and then passes that text into an HTML formatter.

Figure: Processing the OpenAI response.

This is the only part of the OpenAI response you need to understand for this workflow:

Analyze_data response
└─ choices[] (array)
   └─ [0] (object)
      └─ message (object)
         └─ content (string)  <-- analysis text

Everything else in the OpenAI response (filters, indexes, metadata) is useful for auditing but not required to build the final user-facing output.
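For reference, the extraction step can be as small as a single expression. A minimal sketch, assuming the raw response of the Analyze data action is read directly (if the workflow parses the JSON first, the expression would reference that Parse JSON action instead) and using a hypothetical action name:

{
  "Extract_analysis_text": {
    "type": "Compose",
    "inputs": "@first(body('Analyze_data')?['choices'])?['message']?['content']"
  }
}

The output of this Compose is the plain analysis text that the HTML formatter in the next step consumes.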
3.3 Crafting the output to HTML

The model's output is plain text and often includes lightweight markdown structures (headings, lists, separators). To make the analysis readable in email (and safe to embed in the SAP response envelope), the workflow converts the markdown into HTML. The script was generated with Copilot; the source code snippet can be found in Part 1.

The next figure shows what the formatted analysis looks like when rendered. Note the explicit reference to the excluded OrderIDs and the summary of the remaining dataset before the trend observations are listed.

Figure: Example analysis output after formatting.

4. Closing the loop: persisting invalid rows as IDocs

In Part 1, I introduced an optional remediation branch: when validation finds bad rows, the workflow can persist them into SAP as custom IDocs for later handling. In Part 2, after unpacking the agent loop, I want to reconnect those pieces and show the "end of the story": the destination workflow creates IDocs for invalid data, and a second destination workflow receives those IDocs and produces a consolidated audit trail in Blob Storage. This final section is intentionally pragmatic. It shows:

- where the IDoc creation call happens,
- how the created IDocs arrive downstream, and
- how to safely handle many concurrent workflow instances writing to the same storage artifact (one instance per IDoc).

4.1 From "verification summary" to "Create all IDocs"

The figure below shows the tail end of the verification summary flow. Once the agent produces the structured validation outputs, the workflow first emails the human-readable summary, then converts the invalid CSV rows into an SAP-friendly XML shape, and finally calls the RFC that creates IDocs from those rows.

Figure: End of the validation/remediation branch.

This is deliberately a "handoff point." After this step, the invalid rows are no longer just text in an email—they become durable SAP artifacts (IDocs) that can be routed, retried, and processed independently of the original workflow run.

4.2 Z_CREATE_ONLINEORDER_IDOC and the downstream receiver

The next figure is the same overview from Part 1. I'm reusing it here because it captures the full loop: the workflow calls Z_CREATE_ONLINEORDER_IDOC, SAP converts the invalid rows into custom IDocs, and Destination workflow #2 receives those IDocs asynchronously (one workflow run per IDoc).

Figure 2: Invalid rows persisted as custom IDocs.

This pattern is intentionally modular:

- Destination workflow #1 decides which rows are invalid and optionally persists them.
- SAP encapsulates the IDoc creation mechanics behind a stable RFC (Z_CREATE_ONLINEORDER_IDOC).
- Destination workflow #2 processes each incoming IDoc independently, which matches how IDoc-driven integrations typically behave in production.

4.3 Two phases in Destination workflow #2: AI agent + Blob Storage logging

In the receiver workflow, there are two distinct phases:

- AI agent phase (per-IDoc): reconstruct a CSV view from the incoming IDoc payload and (optionally) run the same validation logic.
- Blob Storage phase (shared output): append a normalized "verification line" into a shared blob in a concurrency-safe way.

It's worth calling out: in this demo, the IDocs being received were created from already-validated outputs upstream, so you could argue the second validation is redundant.
I keep it anyway for two reasons: it demonstrates that the agent tooling is reusable with minimal changes, and in a general integration, Destination workflow #2 may receive IDocs from multiple sources, not only from this pipeline—so "validate on receipt" can still be valuable.

4.3.1 AI agent phase

The figure below shows the validation agent used in Destination workflow #2. The key difference from the earlier agent loop is the output format: instead of producing an HTML summary + invalid lists, this agent writes a single "audit line" that includes the IDoc correlation key (DOCNUM) along with the order ID and the failed rules.

Figure: Destination workflow #2 agent configuration.

The reusable part here is the tooling structure: rules still come from the same validation document, the dataset is still supplied as CSV, and the summarization tool outputs a structured value the workflow can consume deterministically. The only meaningful change is "what shape do I want the output to take," which is exactly what the agent parameter descriptions control.

The next figure zooms in on the summarization tool parameter in Destination workflow #2. Instead of three outputs, this tool uses a single parameter (VerificationInfo) whose description forces a consistent line format anchored on DOCNUM.

Figure 4: VerificationInfo parameter.

This is the same design principle as Tool #3 in the first destination workflow: describe the output as a contract, not as a vague request. The parameter description tells the agent exactly what must be present (DOCNUM + OrderId + failed rules) and therefore makes it straightforward to append the output to a shared log without additional parsing.

Interesting snippets

Extracting DOCNUM from the IDoc control record and carrying it through the run:

xpath(xml(triggerBody()?['content']),
  'string(/*[local-name()="Receive"]
          /*[local-name()="idocData"]
          /*[local-name()="EDI_DC40"]
          /*[local-name()="DOCNUM"])')

4.3.2 Blob Storage phase

Destination workflow #2 runs one instance per inbound IDoc. That means multiple runs can execute at the same time, all trying to write to the same "ValidationErrorsYYYYMMDD.txt" artifact. The figure below shows the resulting appended output: one line per IDoc, each line beginning with DOCNUM, which becomes the stable correlation key.

The next figure shows the concurrency control pattern I used to make those concurrent writes safe: a short lease acquisition loop that retries until it owns the blob lease, then appends the verification line(s), and finally releases the lease.

Figure: Concurrency-safe append pattern.

Reading the diagram top-to-bottom, the workflow uses a simple lease → append → release pattern to make concurrent writes safe. Each instance waits briefly (Delay), attempts to acquire a blob lease (Acquire validation errors blob lease), and loops until it succeeds (Set status code → Until lease is acquired). Once a lease is obtained, the workflow stores the lease ID (Save lease id), appends its verification output under that lease (Append verification results), and then releases the lease (Release the lease) so the next workflow instance can write.

Implementation note: the complete configuration for this concurrency pattern (including the HTTP actions, headers, retries, and loop conditions) is included in the attached artifacts, in the workflow JSON for Destination workflow #2; a minimal sketch of the underlying storage calls follows below.
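For orientation, here is a minimal sketch of the three Blob Storage REST calls behind that lease → append → release sequence, written as HTTP actions. The storage account and container names are placeholders, the LeaseId and VerificationLine variables and the daily file-name expression are assumptions about how this workflow stores those values, and runAfter/retry wiring is omitted; the authoritative configuration is the workflow JSON in the attached artifacts.

{
  "Acquire_validation_errors_blob_lease": {
    "type": "Http",
    "inputs": {
      "method": "PUT",
      "uri": "https://<account>.blob.core.windows.net/<container>/ValidationErrors@{formatDateTime(utcNow(), 'yyyyMMdd')}.txt?comp=lease",
      "headers": { "x-ms-lease-action": "acquire", "x-ms-lease-duration": "15", "x-ms-version": "2021-08-06" },
      "authentication": { "type": "ManagedServiceIdentity", "audience": "https://storage.azure.com" }
    }
  },
  "Append_verification_results": {
    "type": "Http",
    "inputs": {
      "method": "PUT",
      "uri": "https://<account>.blob.core.windows.net/<container>/ValidationErrors@{formatDateTime(utcNow(), 'yyyyMMdd')}.txt?comp=appendblock",
      "headers": { "x-ms-lease-id": "@{variables('LeaseId')}", "x-ms-version": "2021-08-06" },
      "body": "@{variables('VerificationLine')}",
      "authentication": { "type": "ManagedServiceIdentity", "audience": "https://storage.azure.com" }
    }
  },
  "Release_the_lease": {
    "type": "Http",
    "inputs": {
      "method": "PUT",
      "uri": "https://<account>.blob.core.windows.net/<container>/ValidationErrors@{formatDateTime(utcNow(), 'yyyyMMdd')}.txt?comp=lease",
      "headers": { "x-ms-lease-action": "release", "x-ms-lease-id": "@{variables('LeaseId')}", "x-ms-version": "2021-08-06" },
      "authentication": { "type": "ManagedServiceIdentity", "audience": "https://storage.azure.com" }
    }
  }
}

The acquire call returns the lease ID in the x-ms-lease-id response header (which the "Save lease id" step stores), the append must present that lease ID while the lease is active, and releasing the lease immediately frees the blob for the next waiting instance.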
5. Concluding remarks

Part 2 zoomed in on the AI boundary inside the destination workflows and made it concrete: what the agent sees, what it is allowed to do, what it must return, and how those outputs drive deterministic workflow actions. The practical outcomes of Part 2 are:

- A tool-driven validation agent that produces workflow artifacts, not prose. The validation loop is constrained by tools and parameter schemas so its outputs are immediately consumable: an email-friendly validation summary, a machine-friendly InvalidOrderIds list, and an invalid-row payload that can be remediated.
- A clean separation between validation and analysis. Validation decides what not to trust (invalid IDs / rows) and analysis focuses on what is interesting in the remaining dataset. The analysis prompt makes the exclusion rule explicit by passing the dataset and excluded IDs as separate messages.
- A repeatable response-processing pipeline. You extract the model's text from a stable response path (choices[0].message.content), then shape it into HTML once (markdown → HTML) so the same formatted output can be reused for email and the SAP response envelope.
- A "reuse with minimal changes" pattern across workflows. Destination workflow #2 shows the same agent principles applied to IDoc reception, but with a different output contract optimized for logging: DOCNUM + OrderId + FailedRules. This demonstrates that the real reusable asset is the tool + parameter contract design.

Putting It All Together

We have a full integration story where SAP, Logic Apps, AI, and IDocs are connected with explicit contracts and predictable behavior.

Part 1 established the deterministic integration foundation:

- SAP ↔ Logic Apps connectivity (gateway/program wiring)
- RFC payload/response contracts (IT_CSV, response envelope, error semantics)
- predictable exception propagation back into SAP
- an optional remediation branch that persists invalid rows as IDocs via a custom RFC (Z_CREATE_ONLINEORDER_IDOC)
- and the end-to-end response handling pattern in the caller workflow.

Part 2 layered AI on top without destabilizing the contracts:

- Agent loop + tools for rule retrieval and validation
- output schemas that convert "reasoning" into workflow artifacts
- a separate analysis step that consumes validated data and produces formatted results
- and an asynchronous IDoc receiver that logs outcomes safely under concurrency.

The reason it works as a two-part series is that the two layers evolve at different speeds:

- The integration layer (Part 1) should change slowly. It defines interoperability: payload shapes, RFC names, error contracts, and IDoc interfaces.
- The AI layer (Part 2) is expected to iterate. Prompts, rule documents, output formatting, and agent tool design will evolve as you tune behavior and edge cases.
References

- Logic Apps Agentic Workflows with SAP - Part 1: Infrastructure
- 🤖 Agent Loop Demos 🤖 | Microsoft Community Hub
- Agent Workflows Concepts
- Workflows with AI Agents and Models - Azure Logic Apps
- Built-in OpenAI Connector
- How to connect to AI Foundry
- Create Autonomous AI Agent Workflows - Azure Logic Apps
- Handling Errors in SAP BAPI Transactions
- Access SAP from workflows
- Create common SAP workflows
- Generate Schemas for SAP Artifacts via Workflows
- Exception Handling | ABAP Keyword Documentation
- Handling and Propagating Exceptions - ABAP Keyword Documentation
- SAP .NET Connector 3.1 Overview
- SAP .NET Connector 3.1 Programming Guide
- Connect to Azure AI services from Workflows

All supporting content for this post may be found in the companion GitHub repository.

Microsoft BizTalk Server Product Lifecycle Update
For more than 25 years, Microsoft BizTalk Server has supported mission-critical integration workloads for organizations around the world. From business process automation and B2B messaging to connectivity across industries such as financial services, healthcare, manufacturing, and government, BizTalk Server has played a foundational role in enterprise integration strategies. To help customers plan confidently for the future, Microsoft is sharing an update to the BizTalk Server product lifecycle and long-term support timelines. BizTalk Server 2020 will be the final version of BizTalk Server.

Guidance to support long-term planning for mission-critical workloads

This announcement does not change existing support commitments. Customers can continue to rely on BizTalk Server for many years ahead, with a clear and predictable runway to plan modernization at a pace that aligns with their business and regulatory needs.

Lifecycle Phase | End Date | What's Included
Mainstream Support | April 11, 2028 | Security + non-security updates and Customer Service & Support (CSS) support
Extended Support | April 9, 2030 | CSS support, security updates, and paid support for fixes (*)
End of Support | April 10, 2030 | No further updates or support

(*) Paid Extended Support will be available for BizTalk Server 2020 between April 2028 and April 2030 for customers requiring hotfixes for non-security updates. CSS will continue providing their typical support.

BizTalk Server 2016 is already out of mainstream support, and we recommend those customers evaluate a direct modernization path to Azure Logic Apps.

Continued Commitment to Enterprise Integration

Microsoft remains fully committed to supporting mission-critical integration, including hybrid connectivity, future-ready orchestration, and B2B/EDI modernization. Azure Logic Apps, part of Azure Integration Services — which includes API Management, Service Bus, and Event Grid — delivers the comprehensive integration platform for the next decade of enterprise connectivity.

Host Integration Server: Continued Support for Mainframe Workloads

Host Integration Server (HIS) has long provided essential connectivity for organizations with mainframe and midrange systems. To ensure continued support for those workloads, Host Integration Server 2028 will ship as a standalone product with its own lifecycle, decoupled from BizTalk Server. This provides customers with more flexibility and a longer planning horizon. Recognizing that mainframe modernization customers might be looking to integrate with their mainframes from Azure, Microsoft provides Logic Apps connectors for mainframe and midrange systems, and we are keen on adding more connectors in this space. Let us know about your HIS plans, and whether you require specific features for mainframe and midrange integration from Logic Apps, at: https://aka.ms/lamainframe

Azure Logic Apps: The Successor to BizTalk Server

Azure Logic Apps, part of Azure Integration Services, is the modern integration platform that carries forward what customers value in BizTalk while unlocking new innovation, scale, and intelligence. With 1,400+ out-of-box connectors supporting enterprise, SaaS, legacy, and mainframe systems, organizations can reuse existing BizTalk maps, schemas, rules, and custom code to accelerate modernization while preserving prior investments, including B2B/EDI and healthcare transactions. Logic Apps delivers elastic scalability, enterprise-grade security and compliance, and built-in cost efficiency without the overhead of managing infrastructure.
Modern DevOps tooling, Visual Studio Code support, and infrastructure-as-code (ARM/Bicep) ensure consistent, governed deployments with end-to-end observability using Azure Monitor and OpenTelemetry. Modernizing to Logic Apps also unlocks agentic business processes, enabling AI-driven routing, predictive insights, and context-aware automation without redesigning existing integrations. Logic Apps adapts to business and regulatory needs, running fully managed in Azure, hybrid via Arc-enabled Kubernetes, or evaluated for air-gapped environments. Throughout this lifecycle transition, customers can continue to rely on the BizTalk investments they have made while moving toward a platform ready for the next decade of integration and AI-driven business.

Charting Your Modernization Path

Microsoft remains fully committed to supporting customers through this transition. We recognize that BizTalk systems support highly customized and mission-critical business operations. Modernization requires time, planning, and precision. We hope to provide:

- Proven guidance and recommended design patterns
- A growing ecosystem of tooling supporting artifact reuse
- Unified Support engagements for deep migration assistance
- A strong partner ecosystem specializing in BizTalk modernization
- Potential incentive programs to help facilitate migration for eligible customers (details forthcoming)

Customers can take a phased approach — starting with new workloads while incrementally modernizing existing BizTalk deployments.

We're Here to Help

Migration resources are available today:

- Overview: https://aka.ms/btmig
- Best practices: https://aka.ms/BizTalkServerMigrationResources
- Video series: https://aka.ms/btmigvideo
- Feature request survey: https://aka.ms/logicappsneeds
- Reactor session: Modernizing BizTalk: Accelerate Migration with Logic Apps - YouTube
- Migration tool: A BizTalk Migration Tool: From Orchestrations to Logic Apps Workflows | Microsoft Community Hub

We encourage customers to engage their Microsoft account team early to assess readiness, identify modernization opportunities, and explore assistance programs.

Your Modernization Journey Starts Now

BizTalk Server has played a foundational role in enterprise integration success for more than two decades. As you plan ahead, Microsoft is here to partner with you every step of the way, ensuring operational continuity today while unlocking innovation tomorrow. To begin your transition, please contact your Microsoft account team or visit our migration hub. Thank you for your continued trust in Microsoft and BizTalk Server. We look forward to partnering closely with you as you plan the future of your integration platforms.

Frequently Asked Questions

Do I need to migrate now?
No. BizTalk Server 2020 is fully supported through April 11, 2028, with paid Extended Support available through April 9, 2030, for non-security hotfixes. CSS will continue providing their typical support. You have a long and predictable runway to plan your transition.

Will there be a new BizTalk Server version?
No. BizTalk Server 2020 is the final version of the product.

What happens after April 9, 2030?
BizTalk Server will reach End of Support, and security updates or technical assistance will no longer be provided. Workloads will continue running but without Microsoft servicing.

Is paid support available past 2028?
Yes. Paid Extended Support will be available through April 2030 for BizTalk Server 2020 customers looking for non-security hotfixes. CSS will continue to provide the typical support.
What about BizTalk Server 2016 or earlier versions?
Those versions are already out of mainstream support. We strongly encourage moving directly to Logic Apps rather than upgrading to BizTalk Server 2020.

Will Host Integration Server continue?
Yes. Host Integration Server (HIS) 2028 will be released as a standalone product with its own lifecycle and support commitments.

Can I reuse BizTalk Server artifacts in Logic Apps?
Yes. Most BizTalk maps, schemas, rules, assemblies, and custom code can be reused with minimal effort using Microsoft and partner migration tooling. We welcome feature requests here: https://aka.ms/logicappsneeds

Does modernization require moving fully to the cloud?
No. Logic Apps supports hybrid deployments for scenarios requiring local processing or regulatory compliance, and fully disconnected environments are under evaluation. More information on the hybrid deployment model is available here: https://aka.ms/lahybrid.

Does modernization unlock AI capabilities?
Yes. Logic Apps enables AI-driven automations through Agent Loop, improving routing, decisioning, and operational intelligence.

Where do I get planning support?
Your Microsoft account team can assist with assessment and planning. Migration resources are also linked in this announcement to help you get started.

Microsoft Corporation

Azure Arc Jumpstart Template for Hybrid Logic Apps Deployment
Today I'm sharing a new Azure Arc Jumpstart template (Jumpstart Drop) that implements a hybrid deployment for Azure Logic Apps (Standard) on an Azure Arc-enabled AKS cluster, enabling deployment with a single command that sets up a complete, working environment for testing your scenarios.

Jumpstart Drop link: Azure Arc Jumpstart

The hybrid deployment model allows you to run Logic Apps workloads on your own infrastructure, providing you with the option to host your integration solutions on premises, in a private cloud, or even in a third-party public cloud. You can find more information in this article: Set up your own infrastructure for Standard logic app workflows - Azure Logic Apps | Microsoft Learn

Demo video:

Given that Hybrid Logic Apps is designed to support mission-critical, enterprise-level integration scenarios, with provision to configure on custom infrastructure across both Azure and Kubernetes environments, the setup can naturally be complex. This Jumpstart template addresses these intricacies by simplifying onboarding and providing an all-in-one script for comprehensive testing and validation.

This Jumpstart template deploys the following components and verifies each deployment stage. You can customize each step as per your requirements by modifying the script.

- An AKS cluster as the Kubernetes substrate for the scenario.
- Azure Arc enablement for Kubernetes to onboard the cluster into Azure for centralized governance and operations.
- ACA extension, Custom location, and Connected Environment.
- Azure SQL Server for runtime storage.
- Azure storage account for SMB artifacts storage.
- A hybrid Logic Apps resource.

Getting started:

1. Visit the Jumpstart URL (Azure Arc Jumpstart) and follow the steps below.
2. Check prerequisites and permissions (see Prerequisites tab).
3. Download the GitHub repo and run the deployment command (see Quick Start tab).
4. After deployment, run test commands to confirm everything works (see Testing tab).

Feedback and contributions

Jumpstart templates are built for the community and improved with community input. If you try this drop and have suggestions (documentation clarity, additional validation, new scenarios, or fixes), please share feedback through the https://github.com/ShreeDivyaMV/LogicApps_Jumpstart_Templates repository associated with the drop.

References:

- Announcement: General Availability of Logic Apps Hybrid Deployment Model | Microsoft Community Hub
- Create Standard logic app workflows for hybrid deployment - Azure Logic Apps | Microsoft Learn
- https://github.com/ShreeDivyaMV/LogicApps_Jumpstart_Templates
- Azure Arc Jumpstart

Automated Test Framework - Missing Tests in Test Explorer
Have your tests created using the Logic Apps Standard Automated Test Framework disappeared from the Test Explorer in VS Code all of a sudden? The answer is a mismatch between the MSTest versions used. A recent update in the C# DevKit extension changed the minimum requirements for the MSTest library - it now requires a minimum of 3.7.*. The project scaffolding created by the Logic Apps Standard extension uses 3.0.2. The good news is that you can fix this by just changing package versions on your project. Follow the instructions below to have this fixed:

Open the .csproj that the extension created (e.g. LogicApps.csproj). Find the ItemGroup containing your package references. It should look like this:

<ItemGroup>
  <PackageReference Include="MSTest" Version="3.2.0"/>
  <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.8.0"/>
  <PackageReference Include="MSTest.TestAdapter" Version="3.2.0"/>
  <PackageReference Include="MSTest.TestFramework" Version="3.2.0"/>
  <PackageReference Include="Microsoft.Azure.Workflows.WebJobs.Tests.Extension" Version="1.0.0"/>
  <PackageReference Include="coverlet.collector" Version="3.1.2"/>
</ItemGroup>

Replace the package references with this new version:

<ItemGroup>
  <PackageReference Include="Microsoft.Azure.Workflows.WebJobs.Tests.Extension" Version="1.*" />
  <PackageReference Include="MSTest" Version="4.0.2" />
  <PackageReference Include="coverlet.collector" Version="3.2.0" />
</ItemGroup>

Once you make this change, restart the VS Code window and rebuild your project - Test Explorer will start showing your tests again. Notice that this package reference is simplified, as some of the previous references are already in the dependency chain, so they don't need to be explicitly added.

ℹ️ We are updating our extension to make sure that new projects are being generated with the new values, but you should make those changes manually on existing projects.

Logic Apps Aviators Newsletter - February 2026
In this issue: Ace Aviator of the Month News from our product group News from our community Ace Aviator of the Month February 2026’s Ace Aviator: Camilla Bielk What's your role and title? What are your responsibilities? I’m a developer and solution architect, working as a consultant (at XLENT) in teams involved across the integration lifecycle—from understanding business needs and building new solutions to operations. Previously I worked a lot with BizTalk and with customers using both Azure and BizTalk. In my current role, I work exclusively with Azure. Can you give us some insights into your day-to-day activities and what a typical day in your role looks like? A typical day starts by syncing operational status with the team to ensure everything is running as expected (Logic Apps, Service Bus, Functions, API Management, etc.) and discussing any findings from the previous day. Then I continue with ongoing work, ranging from stakeholder meetings and designing new integration flows to C# development, operations, and maintenance of the existing landscape. I enjoy being involved across the entire chain. Working closely with operational issues provides valuable input into design decisions, and vice versa, which strengthens me as a designer and architect. What motivates and inspires you to be an active member of the Aviators/Microsoft community? The openness to feedback and knowledge sharing from both community developers and the product team motivates me to stay active, continuously learn, and become the best consultant I can be. Passing knowledge to new team members is rewarding and completes the circle. Integration conferences are inspiring - networking with amazing people and discussing the technology we use every day. That’s also why we started the Nordic Integration Summit: to give more people the chance to take part in such an inspiring event. Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology? Your social skills are more valuable than you might think. Teamwork is everything, and understanding business needs—even from non-technical colleagues—is just as important as technology. Listen to your heart and find environments where you can thrive. Programming languages change over time; the real takeaway is mastering common concepts and patterns and developing the ability to learn continuously. Stay curious and embrace new things as they come! What has helped you grow professionally? 100% of my professional growth comes from working with kind, humble, and collaborative people—teams where knowledge is shared freely in all directions, mistakes are allowed, and learning from them is encouraged. If you had a magic wand that could create a feature in Logic Apps, what would it be and why? Better cost transparency—clearly showing which actions drive costs (executions, retries, data volume, logging) with visibility before production deployment. Design-time guidance that warns about expensive operations, anti-patterns, and inefficient configurations. News from our product group Introducing Unit Test Agent Profiles for Logic Apps & Data Maps Focused unit test agent profiles help Logic Apps Standard teams discover workflows and data maps, write reusable specifications, generate typed mocks/test data, and implement MSTest suites against the Automated Test SDK. The approach promotes spec-first testing, consistent artifacts, and reliable validations across scenarios (happy path, error handling, boundaries). 
The project demonstrates how GitHub Copilot custom agents and prompts can accelerate unit test authoring while enforcing constraints for maintainability in enterprise integrations. Automated Test Framework - Missing Tests in Test Explorer If Logic Apps Standard tests disappear from VS Code’s Test Explorer, the cause is typically a MSTest version mismatch introduced by a recent C# DevKit update. The fix is to update package references in the project (MSTest, Test SDK, and related dependencies) to supported versions. After adjusting packages and restarting VS Code, tests reappear and run as expected. The extension is being updated, but existing projects should apply the manual changes to restore a stable testing experience. Upcoming Agentic Azure Logic Apps Workshops Free workshops introduce Agentic Business Processes in Azure Logic Apps, including MCP Server integrations and agent loop patterns. Sessions cover connecting the enterprise with MCP Servers from Copilot Studio and building agentic workflows in a day using Logic Apps (Standard). Registration links are provided, with dates set in January. These events help practitioners explore agent tools, orchestration patterns, and practical scenarios for intelligent automation in modern integration landscapes. News from our community Enterprise AI ≠ Copilot Post by Al Ghoniem, MBA Deploying Copilot is not the same as building an enterprise AI strategy. This article distinguishes between adopting a product versus developing a core organizational capability. It explores why many AI rollouts stall when treated as point solutions and outlines what it takes to embed AI as a strategic, scalable function across the enterprise. For leaders evaluating their AI roadmaps, this piece offers a framework for thinking beyond tools toward sustainable transformation. BizTalk Server and WinSCP Error Post by Sandro Pereira A common SFTP adapter issue resurfaces when BizTalk Server cumulative updates are applied. This troubleshooting guide explains why the WinSCPnet assembly fails to load after upgrading from CU5 to CU6 and provides a version compatibility matrix for BizTalk Server 2020. It includes step-by-step instructions to resolve the error by copying the correct WinSCP binaries to the BizTalk installation folder, along with an automated PowerShell script for streamlined remediation. More on finding application registrations used by Logic Apps Post by Mikael Sand Building on earlier work around API connection discovery, this post extends KQL queries to find Logic Apps that authenticate via Client ID and secret in HTTP actions. Using Azure Resource Graph Explorer, the technique scans workflow parameters to identify application registration references. While the approach assumes parameters are used for credentials, it provides a practical method for auditing which Logic Apps depend on a given app registration across subscriptions. MCP Servers in Azure Logic Apps Agent Loops (Step-by-Step) Video by Stephen W. Thomas This walkthrough demonstrates how to move tool logic out of Logic App Agent loops and into reusable Model Context Protocol servers. The video covers setting up an MCP server, registering tools, and invoking them from within an agent workflow. By decoupling tool definitions from orchestration logic, developers can build modular, maintainable agentic systems that scale across multiple workflows and scenarios. 
From Rigid Choreography to Intelligent Collaboration: Agentic Orchestration as the Evolution of SOA Post by Steef-Jan Wiggers Integration has evolved from rigid, deterministic workflows to adaptive, goal-driven orchestrations powered by intelligent agents. This article traces the journey from BizTalk-era SOA composites to modern agentic patterns, explaining how AI-enabled orchestrators dynamically plan, self-correct, and communicate intent rather than follow fixed sequences. With examples from Azure Logic Apps Agent Loop, it shows how integration professionals can leverage existing connectors and APIs as tools within reasoning-based architectures. Azure Integrations That Actually Work in Production Post by Devarajan Gurusamy Architecture diagrams often look perfect in presentations but fail under real-world conditions. This article examines what it takes to build Azure integrations that survive production workloads, covering resilience patterns, error handling strategies, and operational considerations that separate proof-of-concepts from enterprise-grade solutions. It offers practical guidance for teams moving beyond demos toward reliable, maintainable integration implementations. Configuring BizTalk Server Backup Jobs for Azure Blob Storage Post by Sandro Pereira Modernizing BizTalk infrastructure can start with small, targeted improvements. This guide shows how to redirect native BizTalk backup jobs to Azure Blob Storage using SQL Server’s BACKUP TO URL feature. It covers generating SAS tokens, creating SQL credentials, updating job steps, and validating backups. The approach improves disaster recovery posture with geo-redundant cloud storage while keeping the BizTalk environment unchanged—no new components or custom code required. Building an AI Agent with Logic Apps Consumption and Agent Loop Post by Stephen W. Thomas Logic Apps Consumption with AgentLoop offers a fast path to practical AI agents, particularly for experimentation and low-volume workloads. This post walks through building a stateful poker-playing agent, demonstrating tool orchestration, agent instructions, and execution tracing. It compares Consumption and Standard tiers, explores cost efficiency, and shows how to offload reasoning to external models. At roughly a dollar for 25 runs, it’s an accessible entry point for agentic development. Microsoft Agent Framework Post by Al Ghoniem, MBA Microsoft’s open-source Agent Framework provides a comprehensive toolkit for building, orchestrating, and deploying AI agents in Python and .NET. It features graph-based workflows with streaming and checkpointing, multi-agent patterns, built-in observability via OpenTelemetry, and support for multiple LLM providers. The repository includes quickstart samples, migration guides from Semantic Kernel and AutoGen, and experimental labs for benchmarking and reinforcement learning. Storage Account: You do not have permissions to list the data using your user account with Entra ID Post by Sandro Pereira Being an Owner of an Azure Storage Account does not automatically grant access to its data. This common confusion trips up many developers working with Logic Apps and blob storage. The article explains the distinction between management roles and data-plane permissions, then walks through assigning Storage Blob Data roles via Access Control (IAM) to resolve the “You do not have permissions” error when authenticating with Microsoft Entra ID. 
Using Office 365 Outlook Connector in Logic Apps Standard
Post by Srikanth Gunnala
When configuring Office 365 Outlook (V2) with Managed Identity in Logic Apps Standard, the workflow designer may fail to auto-bind existing API connections, even when runtime works correctly. This post details the issue, explaining how a null referenceName in the workflow JSON causes the trigger to appear unbound. The fix involves manually setting the connection reference name in the workflow definition, after which the designer immediately recognizes the connection.

Inside Logic Apps Standard: Understanding Compute Units (CU) for Storage Scaling
Post by Daniel Jonathan
High-volume Logic Apps can hit Azure Storage throttling limits. This deep-dive explains how Compute Units enable horizontal storage scaling by distributing workflow execution data across up to 32 storage accounts. It covers CU routing via Run ID suffixes, table distribution patterns, and configuration via host.json. Understanding this architecture is essential for building monitoring tools, querying run histories programmatically, or troubleshooting performance at scale.

E59 - Roles & Personas
Video by Sebastian Meyer
This episode recaps highlights from SAP TechEd and Microsoft Ignite, with a focus on the SAP Innovation Guide and Logic Apps announcements. It explores roles and personas in the integration space, discussing how different team members contribute to enterprise integration projects. The conversation provides context for practitioners following both SAP and Microsoft ecosystems and offers insights into recent platform developments relevant to hybrid integration scenarios.

Architecting Trust: Leveraging Microsoft Foundry to Solve AI Governance Challenges
Post by Kent Weare
As organizations scale AI deployments, governance becomes critical. This article explores how Microsoft Foundry addresses common AI governance challenges including model management, access controls, auditability, and compliance. It outlines architectural patterns for building trust into AI systems from the ground up and provides guidance for teams navigating the balance between innovation velocity and responsible AI practices in enterprise environments.

Data Mapper Test Executor: A New Addition to Logic Apps Standard Test Framework
Why Does It Matter?
Testing complex data transformations has always been a challenge for Logic Apps developers. With workflows relying on XSLT, Liquid templates, and custom maps, validating these transformations often required manual steps or external tools. The Data Mapper Test Executor changes that by bringing first-class support for map testing directly into the Logic Apps Standard Automated Test Framework. This means faster feedback loops, improved reliability, and a more streamlined developer experience.

Key Highlights
- Native Support for Data Mapper Testing: Execute unit tests for XSLT 1.0/2.0/3.0 and Logic Apps Data Mapper (.lml) files without leaving your development environment.
- Integrated with Automated Test Framework SDK: Leverages the latest SDK capabilities for consistent test execution and reporting.
- Enhanced Validation: Supports schema validation and transformation checks, ensuring your maps behave as expected across scenarios.
- Performance Optimizations: Built-in caching and resource management for efficient test runs.

How to Get Started

1. Install and reference the latest Automated Test Framework SDK. Make sure you are referencing version 1.0.1 or higher of the framework to access the new executor class. You can find the latest version in the NuGet gallery.

```xml
<PackageReference Include="Microsoft.Azure.Workflows.WebJobs.Tests.Extension" Version="1.0.1" />
```

2. Add the Data Mapper Test Executor to your test project. Include the new class in your unit test suite for Logic Apps workflows.

3. Write your first test. The sample test class below highlights the different options that can be used with the DataMapTestExecutor.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Workflows.UnitTesting.Definitions;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using la1.Tests.Mocks.Stateful1;
using Microsoft.Azure.Workflows.UnitTesting;
using System.IO;
using System.Text;
using Microsoft.Azure.Workflows.Data.Entities;
using Newtonsoft.Json.Linq;

namespace la1.Tests
{
    /// <summary>
    /// The unit test class.
    /// </summary>
    [TestClass]
    public class DataMapTest
    {
        /// <summary>
        /// The map to test.
        /// </summary>
        public const string MapName = "source_order_to_canonical_order";

        public string PathToMapDefinition { get; private set; }

        public string PathToCompiledXslt { get; private set; }

        public string PathToXsltTestInputs { get; private set; }

        /// <summary>
        /// The data map test executor.
        /// </summary>
        public TestExecutor TestExecutor;

        [TestInitialize]
        public void Setup()
        {
            this.TestExecutor = new TestExecutor("Stateful1/testSettings.config");

            this.PathToMapDefinition = Path.Combine(
                this.TestExecutor.rootDirectory,
                this.TestExecutor.logicAppName,
                "Artifacts",
                "MapDefinitions",
                $"{DataMapTest.MapName}.lml");

            this.PathToCompiledXslt = Path.Combine(
                this.TestExecutor.rootDirectory,
                this.TestExecutor.logicAppName,
                "Artifacts",
                "Maps",
                $"{DataMapTest.MapName}.xslt");

            this.PathToXsltTestInputs = Path.Combine(
                this.TestExecutor.rootDirectory,
                "Tests",
                "la1",
                "Stateful1",
                "DataMapTest",
                "test-input.json");
        }

        /// <summary>
        /// Sample comparing compiled XSLT to generated XSLT using map name.
        /// </summary>
        [TestMethod]
        public async Task DataMapTest_GenerateXslt()
        {
            var dataMapTestExecutor = this.TestExecutor.CreateMapExecutor();

            var xsltBytes = await dataMapTestExecutor
                .GenerateXslt(mapName: DataMapTest.MapName)
                .ConfigureAwait(continueOnCapturedContext: false);

            Assert.IsNotNull(xsltBytes);
            Assert.AreEqual(
                expected: File.ReadAllText(this.PathToCompiledXslt),
                actual: Encoding.UTF8.GetString(xsltBytes));
        }

        /// <summary>
        /// Sample comparing compiled XSLT to generated XSLT using map content.
        /// </summary>
        [TestMethod]
        public async Task DataMapTest_GenerateXsltWithMapContent()
        {
            var mapContent = File.ReadAllText(this.PathToMapDefinition);
            var generateXsltInput = new GenerateXsltInput { MapContent = mapContent };

            var dataMapTestExecutor = this.TestExecutor.CreateMapExecutor();

            var xsltBytes = await dataMapTestExecutor
                .GenerateXslt(generateXsltInput: generateXsltInput)
                .ConfigureAwait(continueOnCapturedContext: false);

            Assert.IsNotNull(xsltBytes);
            Assert.AreEqual(
                expected: File.ReadAllText(this.PathToCompiledXslt),
                actual: Encoding.UTF8.GetString(xsltBytes));
        }

        /// <summary>
        /// Sample running data map using map name and test inputs.
        /// </summary>
        [TestMethod]
        public async Task DataMapTest_RunMap()
        {
            var dataMapTestExecutor = this.TestExecutor.CreateMapExecutor();

            var mapOutput = await dataMapTestExecutor
                .RunMapAsync(
                    mapName: DataMapTest.MapName,
                    inputContent: File.ReadAllBytes(this.PathToXsltTestInputs))
                .ConfigureAwait(continueOnCapturedContext: false);

            Assert.IsNotNull(mapOutput);
            Assert.IsTrue(mapOutput.Type == JTokenType.Object);
            Assert.AreEqual(expected: "FC-20250603-001", actual: mapOutput["orderId"]);
            Assert.AreEqual(expected: "VIP-789456", actual: mapOutput["customerId"]);
            Assert.AreEqual(expected: "NEW", actual: mapOutput["status"]);
        }

        /// <summary>
        /// Sample running data map using generated XSLT content and test inputs.
        /// </summary>
        [TestMethod]
        public async Task DataMapTest_RunMapWithXsltContentBytes()
        {
            var dataMapTestExecutor = this.TestExecutor.CreateMapExecutor();

            var mapContent = File.ReadAllText(this.PathToMapDefinition);
            var generateXsltInput = new GenerateXsltInput { MapContent = mapContent };

            var xsltContent = await dataMapTestExecutor
                .GenerateXslt(generateXsltInput: generateXsltInput)
                .ConfigureAwait(continueOnCapturedContext: false);

            var mapOutput = await dataMapTestExecutor
                .RunMapAsync(
                    xsltContent: xsltContent,
                    inputContent: File.ReadAllBytes(this.PathToXsltTestInputs))
                .ConfigureAwait(continueOnCapturedContext: false);

            Assert.IsNotNull(mapOutput);
            Assert.IsTrue(mapOutput.Type == JTokenType.Object);
            Assert.AreEqual(expected: "FC-20250603-001", actual: mapOutput["orderId"]);
            Assert.AreEqual(expected: "VIP-789456", actual: mapOutput["customerId"]);
            Assert.AreEqual(expected: "NEW", actual: mapOutput["status"]);
        }
    }
}
```

The default TestExecutor should be updated with a new method, so you can use the same class to create UnitTestExecutor and DataMapTestExecutor instances:

```csharp
public DataMapTestExecutor CreateMapExecutor()
{
    // Set the path for workflow-related input files in the workspace and build the full paths to the required JSON files.
    var appDirectoryPath = Path.Combine(this.rootDirectory, this.logicAppName);
    return new DataMapTestExecutor(appDirectoryPath: appDirectoryPath);
}
```

Note: This feature is available starting with the latest SDK release. Update your dependencies before making changes to your code to get full IntelliSense support.
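If you want to grow the class above into a broader scenario catalog, the same executor calls can be reused against additional input files. The sketch below is illustrative only: the scenario input file name (test-input-boundary.json) and the asserted value are hypothetical placeholders, and it assumes the CreateMapExecutor extension shown above has been added to your TestExecutor class; the RunMapAsync call itself mirrors the samples in this post.

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Workflows.UnitTesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Newtonsoft.Json.Linq;

namespace la1.Tests
{
    /// <summary>
    /// Additional scenario sketch for the same map. Input file and expected
    /// values are placeholders; adapt them to your own schemas and mapping rules.
    /// </summary>
    [TestClass]
    public class DataMapScenarioTest
    {
        public TestExecutor TestExecutor;

        [TestInitialize]
        public void Setup()
        {
            // Same test settings file as the sample class above.
            this.TestExecutor = new TestExecutor("Stateful1/testSettings.config");
        }

        [TestMethod]
        public async Task DataMapTest_RunMap_AdditionalScenario()
        {
            // Hypothetical scenario input stored next to test-input.json.
            var scenarioInputPath = Path.Combine(
                this.TestExecutor.rootDirectory, "Tests", "la1", "Stateful1",
                "DataMapTest", "test-input-boundary.json");

            var dataMapTestExecutor = this.TestExecutor.CreateMapExecutor();

            var mapOutput = await dataMapTestExecutor
                .RunMapAsync(
                    mapName: "source_order_to_canonical_order",
                    inputContent: File.ReadAllBytes(scenarioInputPath))
                .ConfigureAwait(continueOnCapturedContext: false);

            // Assert only on the fields this scenario is expected to populate;
            // the expected value below is a placeholder for your own rules.
            Assert.IsNotNull(mapOutput);
            Assert.IsTrue(mapOutput.Type == JTokenType.Object);
            Assert.AreEqual(expected: "NEW", actual: mapOutput["status"]);
        }
    }
}
```

Keeping each scenario paired with its own input file makes it easier to trace a failing assertion back to the data that produced it.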
Limitations
- Loop Structures: Dynamic execution within repeating structures is not yet supported.
- Non-Mocked Connectors: All actions in the execution path should be mocked.
- Unsupported Actions: Integration Account maps, custom code actions, and EDI encode/decode remain out of scope for this release.
- Preview Caveats: If you're using private preview components, expect limited testing compared to GA releases.

Learn More
Logic Apps Standard Automated Test Framework SDK

Introducing Unit Test Agent Profiles for Logic Apps & Data Maps
I did some experimentation this week with GitHub Copilot custom agents and custom prompts. The result was a couple of agents to help with the creation of unit tests for Logic Apps Standard workflows and data maps. These agent profiles are designed to keep test artifacts consistent and maintainable by separating discovery, specification, data generation, and implementation. Specifications serve as the single source of truth; generated code and data follow from the specs.

- Discover: Enumerate workflows/maps, triggers/actions, dependencies, and control flow to build a testability inventory.
- Spec (Speckit): Write reusable test specifications (intent, inputs/mocks, expected outcomes) stored in a predictable location.
- Cases: Design scenario catalogs with categories (Happy Path, Error Handling, Boundary Conditions, etc.).
- Test Data: Generate typed mocks (Logic Apps) or input/expected data files (Data Maps) per scenario.
- Implement: Produce MSTest code against the Automated Test SDK on net8.0, enforcing strict constraints for reliability.
- Project Batch: Run operations across all workflows/maps with pre-checks, scaffolding verification, and summary reporting.

ℹ️ You can find this project on GitHub: https://github.com/wsilveiranz/logicapps-unittest-custom-agent. This is an example of how those agents can be created using what is available in GitHub Copilot, so feel free to fork it and adjust it to your own requirements.

Agent Profiles Available

Logic Apps Workflow Unit Test Author
Specialist agent for authoring Logic Apps Standard workflow unit tests using MSTest (net8.0) and the Automated Test SDK. Produces spec-first test cases, typed mocks, and runnable MSTest implementations.

How It Works
1. Discover workflows: Locate workflow.json, summarize triggers/actions, and identify external dependencies to mock.
2. Spec-first: Write reusable specs capturing intent, mock plan, inputs, and expected outcomes.
3. Create cases: Build scenario catalogs (Happy Path, Error Handling, Boundary Conditions, etc.).
4. Generate test data: Produce typed MockOutput and ActionMock class instances mapped to action names. This step is optional, as implementation will create test data if not available.
5. Implement tests: Generate MSTest code enforcing SDK constraints and required annotations.
6. Validate: Build and run tests; refine specs and data as needed.
7. Batch (optional): Apply the flow across all workflows once scaffolding is verified.

Skills
- Discover Workflows: Discover workflow definitions, summarize triggers/actions, and produce a mock inventory. Prompt file: .github/prompts/la-unit-tests-discover.prompt.md
- Create Test Cases: Create unit test cases from a workflow definition; define intent, inputs, mock plan, and expected outcomes. Prompt file: .github/prompts/la-unit-tests-create-cases.prompt.md
- Write Speckit Specs: Write and maintain Speckit-style specification documents for reusable scenarios. Prompt file: .github/prompts/la-unit-tests-speckit-specs.prompt.md
- Implement Tests: Implement MSTest unit tests using the Automated Test SDK with enforced constraints. Prompt file: .github/prompts/la-unit-tests-implement.prompt.md
- Generate Test Data: Generate trigger/action mock payloads and typed outputs for success and failure variants. Prompt file: .github/prompts/la-unit-tests-generate-test-data.prompt.md
- Project Batch: Execute project-wide operations with scaffolding verification and roll-up reporting. Prompt file: .github/prompts/la-unit-tests-project-batch.prompt.md

Logic Apps Data Map Unit Test Author
Specialist agent for authoring Data Map unit tests from LML or XSLT using MSTest (net8.0).
Produces spec-first test cases, sample input/expected data, and MSTest implementations.

How It Works
1. Discover data maps: Locate Artifacts/MapDefinitions/*.lml and Artifacts/Maps/*.xslt; summarize transformations and schema requirements.
2. Spec-first: Write reusable specs capturing intent, input data, expected output, and validation criteria.
3. Create cases: Build scenario catalogs (Happy Path, Empty/Missing, Boundary Values, Special Characters, Large Data, etc.).
4. Generate test data: Produce representative input files and expected output files aligned to schemas and mapping rules. This step is optional, as implementation will create test data if not available.
5. Implement tests: Generate MSTest classes using DataMapTestExecutor.
6. Validate: Build and run tests; refine specs and data as needed.
7. Batch (optional): Apply the flow across all maps; verify project scaffolding first.

Skills
- Discover Data Maps: Discover LML/XSLT maps and schemas; summarize transformation structure and complexity. Prompt file: .github/prompts/dm-unit-tests-discover.prompt.md
- Create Test Cases: Design scenarios covering happy path, edge cases, and error handling. Prompt file: .github/prompts/dm-unit-tests-create-cases.prompt.md
- Write Speckit Specs: Write and maintain Speckit-style specs with mapping rules and validations. Prompt file: .github/prompts/dm-unit-tests-speckit-specs.prompt.md
- Implement Tests: Implement MSTest suites using DataMapTestExecutor with required defaults. Prompt file: .github/prompts/dm-unit-tests-implement.prompt.md
- Generate Test Data: Generate input and expected output files matched to map rules and schemas. Prompt file: .github/prompts/dm-unit-tests-generate-test-data.prompt.md
- Project Batch: Execute project-wide operations with scaffolding and summary reporting. Prompt file: .github/prompts/dm-unit-tests-project-batch.prompt.md

FAQ
Why Speckit-style specs? Specs are human-readable contracts that drive consistent test implementation across teams and time. They decouple scenario design from code generation, improving reuse.

Can I run batch operations? Yes. Use the project-batch profiles to orchestrate discovery and implementation across all workflows/maps once scaffolding is verified.

What if scaffolding is missing? The Implement and Project Batch agents will stop and report the exact folders/files you need to create (via the Logic Apps Designer's Create Unit Test command or manual steps).

Introducing native Service Bus message publishing from Azure API Management (Preview)
We're excited to announce a preview capability in Azure API Management (APIM): you can now send messages directly to Azure Service Bus from your APIs using a built-in policy. This enhancement, currently in public preview, simplifies how you connect your API layer with event-driven and asynchronous systems, helping you build more scalable, resilient, and loosely coupled architectures across your enterprise.

Why this matters
Modern applications increasingly rely on asynchronous communication and event-driven designs. With this new integration:
- Any API hosted in API Management can publish to Service Bus, with no SDKs, custom code, or middleware required.
- Partners, clients, and IoT devices can send data through standard HTTP calls, even if they don't support AMQP natively.
- You stay in full control, with authentication, throttling, and logging managed centrally in API Management.
- Your systems scale more smoothly by decoupling front-end requests from backend processing.

How it works
The new send-service-bus-message policy allows API Management to forward payloads from API calls directly into Service Bus queues or topics.

High-level flow:
1. A client sends a standard HTTP request to your API endpoint in API Management.
2. The policy executes and sends the payload as a message to Service Bus.
3. Downstream consumers such as Logic Apps, Azure Functions, or microservices process those messages asynchronously.

All configuration happens in API Management; no code changes or new infrastructure are required.

Getting started
You can try it out in minutes:
1. Set up a Service Bus namespace and create a queue or topic.
2. Enable a managed identity (system-assigned or user-assigned) on your API Management instance.
3. Grant the identity the Azure Service Bus Data Sender role in Azure RBAC, scoped to your queue or topic.
4. Add the policy to your API operation:

```xml
<send-service-bus-message queue-name="orders">
    <payload>@(context.Request.Body.As<string>())</payload>
</send-service-bus-message>
```

Once saved, each API call publishes its payload to the Service Bus queue or topic. 📖 Learn more.

Common use cases
This capability makes it easy to integrate your APIs into event-driven workflows:
- Order processing: Queue incoming orders for fulfillment or billing.
- Event notifications: Trigger internal workflows across multiple applications.
- Telemetry ingestion: Forward IoT or mobile app data to Service Bus for analytics.
- Partner integrations: Offer REST-based endpoints for external systems while maintaining policy-based control.

Each of these scenarios benefits from simplified integration, centralized governance, and improved reliability.

Secure and governed by design
The integration uses managed identities for secure communication between API Management and Service Bus, so no secrets are required. You can further apply enterprise-grade controls:
- Enforce rate limits, quotas, and authorization through APIM policies.
- Gain API-level logging and tracing for each message sent.
- Use Service Bus metrics to monitor downstream processing.

Together, these tools help you maintain a consistent security posture across your APIs and messaging layer.

Build modern, event-driven architectures
With this feature, API Management can serve as a bridge to your event-driven backbone. Start small by queuing a single API's workload, or extend to enterprise-wide event distribution using topics and subscriptions. You'll reduce architectural complexity while enabling more flexible, scalable, and decoupled application patterns.
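To make the end-to-end flow concrete, here is a minimal client-side sketch of calling an API operation fronted by the policy above. The gateway URL and API path are hypothetical placeholders for your own instance; the Ocp-Apim-Subscription-Key header is the standard API Management subscription key header and can be omitted if the API does not require a subscription. Only standard .NET HttpClient APIs are used.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

internal static class OrderClient
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task Main()
    {
        // Hypothetical API Management operation that applies the
        // send-service-bus-message policy shown above.
        var request = new HttpRequestMessage(
            HttpMethod.Post,
            "https://contoso-apim.azure-api.net/orders")
        {
            Content = new StringContent(
                "{ \"orderId\": \"12345\", \"quantity\": 2 }",
                Encoding.UTF8,
                "application/json")
        };

        // Standard API Management subscription key header; omit it if the
        // API is configured without a subscription requirement.
        request.Headers.Add("Ocp-Apim-Subscription-Key", "<your-subscription-key>");

        // A success status means the gateway accepted the request; the policy
        // forwards the request body to the configured Service Bus queue.
        var response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        Console.WriteLine($"Gateway returned {(int)response.StatusCode}");
    }
}
```

From the caller's point of view this is just an ordinary HTTP POST, which is the point: the Service Bus publishing is handled entirely by the gateway policy.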
Learn more: Get the full walkthrough and examples in the documentation 👉 here.

Announcing the General Availability (GA) of the Premium v2 tier of Azure API Management
Superior capacity, highest entity limits, unlimited included calls, and the most comprehensive set of features set the Premium v2 tier apart from other API Management tiers. Customers rely on the Premium v2 tier for running enterprise-wide API programs at scale, with high availability and performance.

The Premium v2 tier has a new architecture that eliminates management traffic from the customer VNet, making private networking much more secure and easier to set up. During the creation of a Premium v2 instance, you can choose between the VNet injection and VNet integration (introduced in the Standard v2 tier) options. In addition, today we are also adding three new features to Premium v2:

- Inbound Private Link: You can now enable private endpoint connectivity to restrict inbound access to your Premium v2 instance. It can be enabled along with VNet injection or VNet integration, or without a VNet.
- Availability zone support: Premium v2 now supports availability zones (zone redundancy) to enhance the reliability and resilience of your API gateway.
- Custom CA certificates: The Azure API Management v2 gateway can now validate TLS connections with the backend service using custom CA certificates.

New and improved VNet injection
Using VNet injection in Premium v2 no longer requires configuring routes or service endpoints. Customers can secure their API workloads without impacting API Management dependencies, while Microsoft can secure the infrastructure without interfering with customer API workloads. In short, the new VNet injection implementation enables both parties to manage network security and configuration settings independently and without affecting each other.

You can now configure your APIs with complete networking flexibility: force tunnel all outbound traffic to on-premises, send all outbound traffic through an NVA, or add a WAF device to monitor all inbound traffic to your API Management Premium v2 instance, all without constraints.

Inbound Private Link
Customers can now configure an inbound private endpoint for their API Management Premium v2 instance to allow API consumers to securely access the API Management gateway over Azure Private Link. The private endpoint uses an IP address from the Azure virtual network in which it's hosted. Network traffic between a client on your private network and API Management traverses the virtual network and a Private Link on the Microsoft backbone network, eliminating exposure from the public internet. Further, you can configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address.

With a private endpoint and Private Link, you can:
- Create multiple Private Link connections to an API Management instance.
- Use the private endpoint to send inbound traffic on a secure connection.
- Apply different API Management policies based on whether traffic comes from the private endpoint.
- Limit incoming traffic only to private endpoints, preventing data exfiltration.
- Combine with inbound virtual network injection or outbound virtual network integration to provide end-to-end network isolation of your API Management clients and backend services.

More details can be found here. Today, only the API Management instance's Gateway endpoint supports inbound Private Link connections. Each API Management instance can support at most 100 Private Link connections.

Availability zones
Azure API Management Premium v2 now supports availability zone (AZ) redundancy to enhance the reliability and resilience of your API gateway.
When deploying an API Management instance in an AZ-enabled region, users can choose to enable zone redundancy. This distributes the service's units, including the gateway, management plane, and developer portal, across multiple, physically separate AZs within that region. Learn how to enable AZs here.

CA certificates
If the API Management gateway needs to connect to backends secured with TLS certificates issued by private certificate authorities (CAs), you need to configure custom CA certificates in the API Management instance. Custom CA certificates can be added and managed as Authorization Credentials in the Backend entities. The Backend entity has been extended with new properties allowing customers to specify a list of certificate thumbprints or subject name + issuer thumbprint pairs that the gateway should trust when establishing a TLS connection with the associated backend endpoint. More details can be found here.

Region availability
The Premium v2 tier is now generally available in six public regions (Australia East, East US 2, Germany West Central, Korea Central, Norway East, and UK South), with additional regions coming soon. For pricing information and regional availability, please visit the API Management pricing page.

Learn more
- API Management v2 tiers FAQ
- API Management v2 tiers documentation
- API Management overview documentation

Logic Apps Aviators Newsletter - January 2026
In this issue:
- Ace Aviator of the Month
- News from our product group
- Community Playbook
- News from our community

Ace Aviator of the Month

January's Ace Aviator: Sagar Sharma

What's your role and title? What are your responsibilities?
I'm a cross-domain Business Solution Architect specializing in delivering new business capabilities to customers. I design end-to-end architectures, especially on Azure platforms and in the integration domain using Azure Integration Services. My role involves making architectural decisions, defining standards, ensuring platform reliability, guiding teams, and helping organizations transition from legacy integration systems to modern cloud-native patterns.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?
My day usually blends architecture work with hands-on collaboration. I review integration designs, refine patterns, help teams troubleshoot integration flows, and ensure deployments run smoothly through DevOps pipelines. A good part of my time is spent translating business needs into integration patterns and making sure the overall platform stays secure, scalable, and maintainable.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?
The community has shaped a big part of my career. Many of my early breakthroughs came from blogs, samples, and talks shared by others. Contributing back feels like closing the loop. I enjoy sharing real-world lessons, learning from peers, and helping others adopt integration patterns with confidence. The energy of the community and the conversations it creates keep me inspired.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?
Focus on core concepts (messaging, APIs, security, and distributed systems) because tools evolve, but fundamentals stay relevant. Share your learning early, even if it feels small. Be curious about the "why" behind patterns. Build side projects, not just follow tutorials. And don't fear a nonlinear career path; diverse experience is an asset in technology.

What has helped you grow professionally?
Hands-on customer work, strong mentors, and consistent learning habits have been key. Community involvement, through writing, speaking, and collaborating, has pushed me to structure my knowledge and stay current. And working in environments that encourage experimentation has helped me develop faster and with more confidence.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
I'd love to see a unified, out-of-the-box business transaction tracing experience across Logic Apps, Service Bus, APIM, Functions, and downstream services: something that automatically correlates events, visualizes the full journey of a transaction, and simplifies root-cause analysis. This would make operational troubleshooting dramatically easier in enterprise environments.

News from our product group

Microsoft BizTalk Server Product Lifecycle Update
BizTalk Server 2020 will be the final release, with support extending through 2030. Microsoft encourages a gradual transition to Azure Logic Apps, offering migration tooling, hybrid deployment options, and reuse of existing BizTalk artifacts. Customers can modernize at their own pace while maintaining operational continuity.
Data Mapper Test Executor: A New Addition to Logic Apps Standard Test Framework
The Data Mapper Test Executor adds native support for testing XSLT and Data Mapper transformations directly within the Logic Apps Standard test framework. It streamlines validation, improves feedback cycles, and integrates with the latest SDK to enable reliable, automated testing of map generation and execution.

Announcing General Availability of AI & RAG Connectors in Logic Apps (Standard)
Logic Apps Standard AI and RAG connectors are now generally available, enabling native document processing, semantic search, embeddings, and agentic workflows. These capabilities let teams build intelligent, context-aware automations using their own data, reducing complexity and enhancing decisioning across enterprise integrations.

Logic Apps Labs
Logic Apps Labs introduces an Azure Logic Apps agentic workflows learning path, offering guided modules on building conversational and autonomous agents, extending capabilities with MCP tools, and orchestrating multi-agent workflows. It serves as a starting point for hands-on labs covering design, deployment, and advanced patterns for intelligent automation.

News from our community

Handling Empty SQL Query Results in Azure Logic Apps
Post by Anitha Eswaran
If a SQL stored procedure returns no rows, you can detect the empty result set in Logic Apps by checking whether the output's ResultSets object is {}. When empty, the workflow can be cleanly terminated or used to trigger alerts, ensuring predictable behavior and more resilient integrations.

Azure Logic Apps MCP Server
Post by Laveesh Bansal
Our own Laveesh Bansal spent some time creating an Azure Logic Apps MCP Server that enables natural-language debugging, workflow inspection, and updates without using the portal. It supports both Standard and Consumption apps, integrates with AI clients like Copilot and Claude, and offers tools for local or cloud-hosted setups, testing, and workflow lifecycle operations.

Azure Logic Apps: Change Detection in JSON Objects and Arrays
Post by Suraj Somani
Logic Apps offers native functions to detect changes in JSON objects and arrays without worrying about field or item order. Using equals() for objects and intersection() for arrays, you can determine when data has truly changed and trigger workflows only when updates occur, improving efficiency and reducing unnecessary processing.

Logic Apps Standard: A clever hack to use JSON schemas in your Artifacts folder for JSON message validation (Part 1)
Post by Şahin Özdemir
Şahin outlines a workaround for using JSON schemas stored in the Artifacts folder to validate messages in Logic Apps Standard. It revisits integration needs from BizTalk migrations and shows how to bring structured validation into modern workflows without relying on Integration Accounts. This is a two-part series, and you can find part two here.

Let's integrate SAP with Microsoft
Video by Sebastian Meyer
Sebastian has a new video out, and in this episode he and Martin Pankraz break down SAP GROW and RISE for Microsoft integration developers, covering key differences and integration options across IDoc, RFC, BAPI, SOAP, HTTPS, and OData, giving a concise overview of today's SAP landscape and what it means for building integrations on Azure.

Logic Apps Initialize variables action has a max limit of 20 variables
Post by Sandro Pereira
Logic Apps allows only 20 variables per Initialize variables action, and exceeding it triggers a validation error.
This limit applies per action, not per workflow. Using objects, parameters, or Compose actions often reduces the need for many scalars and leads to cleaner, more maintainable workflows. Did you know that? It is a Friday Fact!