Azure Integration Services Blog

Agentic Logic Apps Integration with SAP - Part 2: AI Agents

Emmanuel_Abram_Profeta
Feb 17, 2026

Part 2 covers the AI portion of the destination workflows: the Logic Apps validation agent (rules from SharePoint → structured outputs), the downstream actions those outputs drive (email, optional IDoc persistence via Z_CREATE_ONLINEORDER_IDOC, and filtering for analysis), and the separate analysis completion that returns formatted results back to SAP and to users. Part 1 established the SAP/Logic Apps contracts and error propagation; Part 2 is separate because it deals with the agent outputs, tool schemas, and constraints that determine whether AI results are usable (structured, traceable, and workflow‑driven) rather than just “generated text.”

1. Introduction

Part 2 focuses on the AI-shaped portion of the destination workflows: how the Logic Apps Agent is configured, how it pulls business rules from SharePoint, and how its outputs are converted into concrete workflow artifacts. In Destination workflow #1, the agent produces three structured outputs—an HTML validation summary, a CSV list of InvalidOrderIds, and an Invalid CSV payload—which drive (1) a verification email, (2) an optional RFC call to persist failed rows as IDocs, and (3) a filtered dataset used for the separate analysis step that returns only analysis (or errors) back to SAP. In Destination workflow #2, the same approach is applied to inbound IDocs: the workflow reconstructs CSV from the custom segment, runs AI validation against the same SharePoint rules, and safely appends results to an append blob using a lease-based write pattern for concurrency.

In Part 1, the goal was to make the integration deterministic: stable payload shapes, stable response shapes, and predictable error propagation across SAP and Logic Apps. Concretely, Part 1 established:

  • how SAP reaches Logic Apps (Gateway/Program ID plumbing)
  • the RFC contracts (IT_CSV, response envelope, RETURN/MESSAGE, EXCEPTIONMSG)
  • how the source workflow interprets RFC responses (success vs error)
  • how invalid rows can be persisted into SAP as custom IDocs (Z_CREATE_ONLINEORDER_IDOC)
  • and how the second destination workflow receives those IDocs asynchronously

With that foundation in place, Part 2 narrows in on the part that is not just plumbing: the agent loop, the tool boundaries, and the output schemas that make AI results usable inside a workflow rather than “generated text you still need to interpret.”

The diagram below highlights the portion of the destination workflow where AI is doing real work. The red-circled section is the validation agent loop (rules in, structured validation outputs out), which then fans out into operational actions like email notification, optional IDoc persistence, and filtering for the analysis step.

 

Figure: AI boundary in the destination workflow.

What matters here is the shape of the agent outputs and how they are consumed by the rest of the workflow. The agent is not treated as a black box; it is forced to emit typed, workflow-friendly artifacts (summary + invalid IDs + filtered CSV). Those artifacts are then used deterministically: invalid rows are reported (and optionally persisted as IDocs), while valid rows flow into the analysis stage and ultimately back to SAP.

What this post covers

In this post, I focus on five practical topics:

  • Agent loop design in Logic Apps: tools, message design, and output schemas that make the agent’s results deterministic enough to automate.
  • External rule retrieval: pulling validation rules from SharePoint and applying them consistently to incoming payloads.
  • Structured validation outputs → workflow actions: producing InvalidOrderIds and a filtered CSV payload that directly drive notifications and SAP remediation.
  • Two-model pattern: a specialized model for validation (agent) and a separate model call for analysis, with a clean handoff between the two.
  • Output shaping for consumption: converting AI output into HTML for email and into the SAP response envelope (analysis/errors only).

(Everything else—SAP plumbing, RFC wiring, and response/exception patterns—was covered in Part 1 and is assumed here.)

Next, I’ll break down the agent loop itself—the tool sequence, the required output fields, and the exact points where the workflow turns AI output into variables, emails, and SAP actions.

Huge thanks to KentWeareMSFT for helping me understand agent loops and design the validation agent structure. And thanks to everyone in 🤖 Agent Loop Demos 🤖 | Microsoft Community Hub for making such great material available.

 

Note: For the full set of assets used here, see the companion GitHub repository (workflows, schemas, SAP ABAP code, and sample files).

2. Validation Agent Loop

In this solution, the Data Validation Agent runs inside the destination workflow after the inbound SAP payload has been normalized into a single CSV string. The agent is invoked as a single Logic Apps Agent action, configured with an Azure OpenAI deployment and a short set of instructions. Its inputs are deliberately simple at this stage:

  • the CSV payload (the dataset to validate), and
  • the ValidationRules reference (where the rule document lives), shown in the instructions as a parameter token.

The figure below shows the validation agent configuration used in the destination workflow. The top half is the Agent action configuration (model + instructions), and the bottom half shows the toolset that the agent is allowed to use. The key design choice is that the agent is not “free-form chat”: it’s constrained by a small number of tools and a workflow-friendly output contract.

 

Figure: Data Validation Agent configuration in Logic Apps.

What matters most in this configuration is the separation between instructions and tools. The instructions tell the agent what to do (“follow business process steps 1–3”), while the tools define how the agent can interact with external systems and workflow state. This keeps the agent modular: you can change rules in SharePoint or refine summarization expectations without rewriting the overall SAP integration mechanics.

Purpose

This agent’s job is narrowly scoped: validate the CSV payload from SAP against externally stored business rules and produce outputs that the workflow can use deterministically. In other words, it turns “validation as reasoning” into workflow artifacts (summary + invalid IDs + invalid payload), instead of leaving validation as unstructured prose.

In Azure Logic Apps terms, this is an agent loop: an iterative process where an LLM follows instructions and selects from available tools to complete a multi-step task. Logic Apps agent workflows explicitly support this “agent chooses tools to complete tasks” model (see Agent Workflows Concepts).

Tools

In Logic Apps agent workflows, a tool is a named sequence that contains one or more actions the agent can invoke to accomplish part of its task (see Agent Workflows Concepts).

In the screenshot, the agent is configured with three tools, explicitly labeled Get validation rules, Get CSV payload, and Summarize CSV payload review. These tool names match the business process in the “Instructions for agent” box (steps 1–3). The next sections of the post go deeper into what each tool does internally; at this level, the important point is simply that the agent is constrained to a small, explicit toolset.

Agent execution

The screenshot shows the agent configured with:

  • AI model: gpt-5-3 (gpt-5)
  • A connection line: “Connected to … (Azure OpenAI)”
  • Instructions for agent that define the agent’s role and a 3-step business process:
    1. Get validation rules (via the ValidationRules reference)
    2. Get CSV payload
    3. Summarize the CSV payload review, using the validation document

This pattern is intentional:

  • The instructions provide the agent’s “operating procedure” in plain language.
  • The tools give the agent controlled ways to fetch the rule document, access the CSV input, and return structured results.
  • Because the workflow consumes the agent’s outputs downstream, the instruction text is effectively part of your workflow contract (it must remain stable enough that later actions can trust the output shape).

Note: If a reader wants to recreate this pattern, the fastest path is:

  • Start with the official overview of agent workflows (Workflows with AI Agents and Models - Azure Logic Apps).
  • Follow a hands-on walkthrough for building an agent workflow and connecting it to an Azure OpenAI deployment (Logic Apps Labs, at azure.github.io, is a good step-by-step reference).
  • Use the Azure OpenAI connector reference to understand authentication options and operations available in Logic Apps Standard (see Built-in OpenAI Connector).
  • If you’re using Foundry for resource management, review how Foundry connections are created and used, especially when multiple resources/tools are involved (see How to connect to AI foundry).

2.1 Tool 1: Get validation rules

The first tool in the validation agent loop is Get validation rules. Its job is to load the business validation rules that will be applied to the incoming CSV payload from SAP. I keep these rules outside the workflow (in a document) so they can be updated without redeploying the Logic App. In this example, the rules are stored in SharePoint, and the tool simply retrieves the document content at runtime.

Get validation rules is implemented as a single action called Get validation document. In the designer, you can see it:

  • uses a SharePoint Online connection (SharePoint icon and connector action)
  • calls GetFileContentByPath (shown by the “File Path” input)
  • reads the rule file from the configured Site Address
  • uses the workflow parameter token ValidationRules for the File Path (so the exact rule file location is configurable per environment)

The output of this tool is the raw rule document content, which the Data Validation Agent uses in the next steps to validate the CSV payload.
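For reference, the code view of this tool's action looks roughly like the following. This is a minimal sketch assuming a Logic Apps Standard SharePoint Online connection; the connection reference and the SharePointSiteAddress parameter name are placeholders for your environment, while ValidationRules matches the workflow parameter described above:

{
  "Get_validation_document": {
    "type": "ApiConnection",
    "inputs": {
      "host": {
        "connection": { "referenceName": "sharepointonline" }
      },
      "method": "get",
      "path": "/datasets/@{encodeURIComponent(parameters('SharePointSiteAddress'))}/GetFileContentByPath",
      "queries": {
        "path": "@parameters('ValidationRules')",
        "inferContentType": true
      }
    }
  }
}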

Figure: Tool #1 — Get validation rules retrieves the validation document from SharePoint.

The bottom half of the figure shows an excerpt of the rules document. The format is simple and intentionally human-editable: each rule is expressed as FieldName: condition. For example, the visible rules include:

  • PaymentMethod: value must exist
  • PaymentMethod: value cannot be “Cash”
  • OrderStatus: value must be different from “Cancelled”
  • CouponCode: value must have at least 1 character
  • OrderID: value must be unique in the CSV array
  • A scope note: “Do not validate the Date field.”

These rules are the “source of truth” for validation. The workflow does not hardcode them into expressions; instead, it retrieves them from SharePoint and passes them into the agent loop so the validation logic remains configurable and auditable (you can always point to the exact rule document used for a given run).

A small but intentional rule in the document is “Do not validate the Date field.” That line is there for a practical reason: in an early version of the source workflow, the date column was being corrupted during CSV generation. The validation agent still tried to validate dates (even though date validation wasn’t part of the original intent), and the result was predictable: every row failed validation, leaving nothing to analyze. The upstream issue is fixed now, but I kept this rule in the demo to illustrate an important point: validation is only useful when it’s aligned with the data contract you can actually guarantee at that point in the pipeline.

Note: The rules shown here assume the CSV includes a header row (field names in the first line) so the agent can interpret each column by name. If you want the agent to be schema‑agnostic, you can extend the rules with an explicit column mapping, for example:

  • Column 1: Order ID
  • Column 2: Date
  • Column 3: Customer ID

This makes the contract explicit even when headers are missing or unreliable.

With the rules loaded, the next tool provides the second input the agent needs: the CSV payload that will be validated against this document.

2.2 Tool 2: Get CSV payload

The second tool in the validation agent loop is Get CSV payload. Its purpose is to make the dataset-to-validate explicit: it defines exactly what the agent should treat as “the CSV payload,” rather than relying on implicit workflow context. In this workflow, the CSV is already constructed earlier (as Create_CSV_payload), and this tool acts as the narrow bridge between that prepared string and the agent’s validation step.

 

Figure: Tool #2 (“Get CSV payload”) defines a single agent parameter and binds it to the workflow’s generated CSV.

The figure shows two important pieces:

- The tool parameter contract (“Agent Parameters”)
On the right, the tool defines an agent parameter named CSV Payload with type String, and the description (highlighted in yellow) makes the intent explicit: “The CSV payload received from SAP and that we validate based on the validation rules.”

This parameter is the tool’s interface: it documents what the agent is supposed to provide/consume when using this tool, and it anchors the rest of the validation process to a single, well-defined input. Tools in Logic Apps agent workflows exist specifically to constrain and structure what an agent can do and what data it operates on (see Agent Workflows Concepts).

- Why there is an explicit Compose action (“CSV payload”)
In the lower-right “Code view,” the tool’s internal action is shown as a standard Compose:

{
  "type": "Compose",
  "inputs": "@outputs('Create_CSV_payload')"
}

This is intentional. Even though the CSV already exists in the workflow, the tool still needs a concrete action that produces the value it returns to the agent. The Compose step:

    • pins the tool output to a single source of truth (Create_CSV_payload), and
    • creates a stable boundary: “this is the exact CSV string the agent validates,” independent of other workflow state.

Put simply: the Compose action isn’t there because Logic Apps can’t access the CSV—it’s there to make the agent/tool interface explicit, repeatable, and easy to troubleshoot.

 

What “tool parameters” are (in practical terms)

In Logic Apps agent workflows, a tool is a named sequence of one or more actions that the agent can invoke while executing its instructions.
A tool parameter is the tool’s input/output contract exposed to the agent. In this screenshot, that contract is defined under Agent Parameters, where you specify:

  • Name: CSV Payload
  • Type: String
  • Description: “The CSV payload received from SAP…”

This matters because it clarifies (for both the model and the human reader) what the tool represents and what data it is responsible for supplying.
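In code view, that contract surfaces as a small JSON schema attached to the tool. The sketch below is based on the screenshot; the exact property layout may differ as agent workflows evolve:

{
  "agentParameterSchema": {
    "type": "object",
    "properties": {
      "CSV Payload": {
        "type": "string",
        "description": "The CSV payload received from SAP and that we validate based on the validation rules."
      }
    },
    "required": [ "CSV Payload" ]
  }
}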

With Tool #1 providing the rules document and Tool #2 providing the CSV dataset, Tool #3 is where the agent produces workflow-ready outputs (summary + invalid IDs + filtered payload) that the downstream steps can act on.

2.3 Tool 3: Summarize CSV payload review

The third tool, Summarize CSV payload review, is where the agent stops being “an evaluator” and becomes a producer of workflow-ready outputs. It does most of the heavy lifting, so let's go into the details.

Instead of returning one blob of prose, the tool defines three explicit agent parameters—each with a specific format and purpose—so the workflow can reliably consume the results in downstream actions. In Logic Apps agent workflows, tools are explicitly defined tasks the agent can invoke, and each tool can be structured around actions and schemas that keep the loop predictable (see Agent Workflows Concepts).

 

Figure: Tool #3 (“Summarize CSV payload review”) defines three structured agent outputs.

Description is not just documentation—it’s the contract the model is expected to satisfy, and it strongly shapes what the agent considers “relevant” when generating outputs. The parameters are:

Validation summary (String)

Goal: a human-readable summary that can be dropped straight into email.

In the screenshot, the description is very explicit about shape and content:

  • “expected format is an HTML table”
  • “create a list of all orderids that have failed”
  • “create a CSV document… only for the orderid values that failed… each row on a separate line”
  • “include title row only in the email body”

This parameter is designed for presentation: it’s the thing you want humans to read first.

InvalidOrderIds (String, CSV format)

Goal: a machine-friendly list of identifiers the workflow can use deterministically.

The key part of the description (highlighted in the image) is:

“The format is CSV.”

That single sentence is doing a lot of work: it tells the model to emit a comma-separated list, which you then convert into an array in the workflow using split(...).
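In expression terms, that conversion is a single call. The sketch below assumes the invalid IDs were first bound to a Compose action named Invalid_order_ids (the action name is illustrative):

split(outputs('Invalid_order_ids'), ',')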

Invalid CSV payload (String, one row per line)

Goal: the failed rows extracted from the original dataset, in a form that downstream steps can reuse.

The description constrains the output tightly:

  • “original CSV rows… for the orderid values that failed validation”
  • “each row must be on a separate line”
  • “keep the title row only for the email body and remove it otherwise”

This parameter is designed for automation: it becomes input to remediation steps (like transforming rows to XML and creating IDocs), not just a report.

What “agent parameters” do here (and why they matter)

A useful way to think about agent parameters is: they are the “typed return values” of a tool. Tools in agent workflows exist to structure work into bounded tasks the agent can perform, and a schema/parameter contract makes the results consumable by the rest of the workflow (see Agent Workflows Concepts).

In this tool, the parameters serve two purposes at once:

  1. They guide the agent toward salient outputs.
    The descriptions explicitly name what matters: “failed orderids,” “HTML table,” “CSV format,” “one row per line,” “header row rules.” That phrasing makes it much harder for the model to “wander” into irrelevant commentary.
  2. They align with how the workflow will parse and use the results.
    By stating “InvalidOrderIds is CSV,” you make it trivially parseable (split), and by stating “Invalid CSV payload is one row per line,” you make it easy to feed into later transformations.

Why the wording works (and what wording tends to work best)

What’s interesting about the parameter descriptions is that they combine three kinds of constraints:

  • Output format constraints (make parsing deterministic)
    • “expected format is an HTML table”
    • “The format is CSV.”
    • “each row must be on a separate line”

These format cues help the agent decide what to emit and help you avoid brittle parsing later.

  • Output selection constraints (force relevance)
    • “only for the orderid values that failed validation”
    • “Create a list of all orderids that have failed”

This tells the agent what to keep and what to ignore.

  • Output operational constraints (tie outputs to downstream actions)
    • “Include title row only in the email body”
    • “remove it otherwise”

This explicitly anticipates downstream usage (email vs remediation), which is exactly the kind of detail models often miss unless you state it.

 

Rule of thumb: wording works best when it describes

  1. what to produce, 
  2. in what format, 
  3. with what filtering rules, and 
  4. why the workflow needs it.

How these parameters tie directly to the downstream actions

The next picture makes the design intent very clear: each parameter is immediately “bound” to a normal workflow value via Compose actions and then used by later steps. This is the pattern we want: agent output → Compose → (optional) normalization → reused by deterministic workflow actions. It’s the opposite of “read the model output and hope.”

Figure: Tying up parameters to downstream actions.

This is the reusable pattern:

  • Decide the minimal set of outputs the workflow needs.
  • Specify formats that are easy to parse.
  • Write parameter descriptions that encode both selection and formatting constraints.
  • Immediately bind outputs to workflow variables via Compose/SetVariable actions.

The main takeaway from this tool is that the agent is being forced into a structured contract: three outputs with explicit formats and clear intent. That contract is what makes the rest of the workflow deterministic—Compose actions can safely read @agentParameters(...), the workflow can safely split(...) the invalid IDs, and downstream actions can treat the “invalid payload” as real data rather than narrative.
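In code view, those bindings are ordinary Compose actions reading the agent parameters. A minimal sketch, with action names matching the figures (yours may differ):

{
  "Summary": {
    "type": "Compose",
    "inputs": "@agentParameters('Validation summary')"
  },
  "Invalid_order_ids": {
    "type": "Compose",
    "inputs": "@agentParameters('InvalidOrderIds')"
  },
  "Invalid_CSV_payload": {
    "type": "Compose",
    "inputs": "@agentParameters('Invalid CSV payload')"
  }
}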

I'll show later how this same “parameter-first” design scales to other agent tools.

2.4 Turning agent outputs into a verification email

Once the agent has produced structured outputs (Validation summary, InvalidOrderIds, and Invalid CSV payload), the next goal is to make those outputs operational: humans need a quick summary of what failed, and the workflow needs machine‑friendly values it can reuse downstream.

The design here is intentionally straightforward: the workflow converts each agent parameter into a first‑class workflow output (via Compose actions and one variable assignment), then binds those values directly into the Office 365 email body. The result is an email that is both readable and actionable—without anyone needing to open run history.

The figure below shows how the outputs of Summarize CSV payload review are mapped into the verification email. On the left, the tool produces three values via subsequent actions (Summary, Invalid order ids, and Invalid CSV payload), and the workflow also normalizes the invalid IDs into an array (Save invalid order ids). On the right, the Send verification summary action composes the email body using those same values as dynamic content tokens.

 

Figure: Mapping agent outputs to the verification email.

The important point is that the email is not constructed by “re-prompting” or “re-summarizing.” It is assembled from already-structured outputs. This mapping is intentionally direct: each piece of the email corresponds to one explicit output from the agent tool. The workflow doesn’t interpret or transform the summary beyond basic formatting—its job is to preserve the agent’s structured outputs and present them consistently. The only normalization step happens for InvalidOrderIds, where the workflow also converts the CSV string into an array (ArrayOfInvalidOrderIDs) for later filtering and analysis steps.
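That normalization step is small enough to show in full. Here is a sketch of the Save invalid order ids assignment, assuming the Compose and variable names used above:

{
  "Save_invalid_order_ids": {
    "type": "SetVariable",
    "inputs": {
      "name": "ArrayOfInvalidOrderIDs",
      "value": "@split(outputs('Invalid_order_ids'), ',')"
    }
  }
}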

The next figure shows a sample verification email produced by this pipeline. It illustrates the three-part structure: an HTML validation summary table, the raw invalid order ID list, and the extracted invalid CSV rows:

 

Figure: Sample verification email — validation summary table + invalid order IDs + invalid CSV rows.

The extracted artifacts InvalidOrderIds and Invalid CSV payload feed the downstream actions that persist failed rows as IDocs for later processing, as presented in Part 1. I will come back to this when discussing how the validation agent is reused; next, however, I will go over the data analysis part of the AI integration.

3. Analysis Phase: from validated dataset to HTML output

After the validation agent loop finishes, the workflow enters a second AI phase: analysis. The validation phase is deliberately about correctness (what to exclude and why). The analysis phase is about insight, and it runs on the remaining dataset after invalid rows are filtered out.

At a high level, this phase has three steps:

  1. Call Azure OpenAI to analyze the CSV dataset while explicitly excluding invalid OrderIDs.
  2. Extract the model’s text output from the OpenAI response object.
  3. Convert the model’s markdown output into HTML so it renders cleanly in email (and in the SAP response envelope).

3.1 OpenAI component: the “Analyze data” call

The figure below shows the Analyze data action that drives the analysis phase. This action is executed after the Data Validation Agent completes, and it uses three messages: a system instruction that defines the task, the CSV dataset as input, and a second user message that enumerates the OrderIDs to exclude (the invalid IDs produced by validation).

 

Figure: Azure OpenAI analysis call.

The analysis call is structured as:

  • system: define the task and constraints
  • user: provide the dataset
  • user: provide exclusions derived from validation

system: Analyze dataset; provide trends/predictions; exclude specified orderids.
user:   <CSV payload>
user:   Excluded orderids: <comma-separated invalid ids>

Two design choices are doing most of the work here:

  • The model is given the dataset and the exclusions separately. This avoids ambiguity: the dataset is one message, and the “do not include these OrderIDs” constraint is another (see the request sketch after this list).
  • The exclusion list is derived from validation output, not re-discovered during analysis. The analysis step doesn’t re-validate; it consumes the validation phase’s results and focuses purely on trends/predictions.
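Concretely, the request body carries three chat messages. A simplified sketch (the system wording is illustrative, and Invalid_order_ids stands in for whichever output holds the comma-separated invalid IDs):

{
  "messages": [
    {
      "role": "system",
      "content": "Analyze the dataset. Provide trends and predictions. Exclude the specified orderids from the analysis."
    },
    {
      "role": "user",
      "content": "@{outputs('Create_CSV_payload')}"
    },
    {
      "role": "user",
      "content": "Excluded orderids: @{outputs('Invalid_order_ids')}"
    }
  ]
}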

3.2 Processing the response

The next figure shows how the workflow turns the Azure OpenAI response into a single string that can be reused for email and for the SAP response. The workflow does three things in sequence: it parses the response JSON, extracts the model’s text content, and then passes that text into an HTML formatter.

 

Figure: Processing the OpenAI response.

This is the only part of the OpenAI response you need to understand for this workflow:

Analyze_data response
└─ choices[]                    (array)
   └─ [0]                        (object)
      └─ message                 (object)
         └─ content              (string)   <-- analysis text

Everything else in the OpenAI response (filters, indexes, metadata) is useful for auditing but not required to build the final user-facing output.
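In expression terms, the extraction is one line. A sketch assuming the analysis action is named Analyze_data as in the figure (the wrapping Compose name is hypothetical):

{
  "Extract_analysis_text": {
    "type": "Compose",
    "inputs": "@body('Analyze_data')?['choices']?[0]?['message']?['content']"
  }
}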

3.3 Crafting the output to HTML

The model’s output is plain text and often includes lightweight markdown structures (headings, lists, separators). To make the analysis readable in email (and safe to embed in the SAP response envelope), the workflow converts the markdown into HTML. The conversion script was generated with Copilot; the source code snippet may be found in Part 1. The next figure shows what the formatted analysis looks like when rendered. Note the explicit reference to the excluded OrderIDs and the summary of the remaining dataset before the trend observations are listed.

Figure: Example analysis output after formatting.

4. Closing the loop: persisting invalid rows as IDocs

In Part 1, I introduced an optional remediation branch: when validation finds bad rows, the workflow can persist them into SAP as custom IDocs for later handling. In Part 2, after unpacking the agent loop, I want to reconnect those pieces and show the “end of the story”: the destination workflow creates IDocs for invalid data, and a second destination workflow receives those IDocs and produces a consolidated audit trail in Blob Storage.

This final section is intentionally pragmatic. It shows:

  • where the IDoc creation call happens,
  • how the created IDocs arrive downstream,
  • and how to safely handle many concurrent workflow instances writing to the same storage artifact (one instance per IDoc).

4.1 From “verification summary” to “Create all IDocs”

The figure below shows the tail end of the verification summary flow. Once the agent produces the structured validation outputs, the workflow first emails the human-readable summary, then converts the invalid CSV rows into an SAP-friendly XML shape, and finally calls the RFC that creates IDocs from those rows.

Figure: End of the validation/remediation branch.

This is deliberately a “handoff point.” After this step, the invalid rows are no longer just text in an email—they become durable SAP artifacts (IDocs) that can be routed, retried, and processed independently of the original workflow run.

4.2 Z_CREATE_ONLINEORDER_IDOC and the downstream receiver

The next figure is the same overview from Part 1. I’m reusing it here because it captures the full loop: the workflow calls Z_CREATE_ONLINEORDER_IDOC, SAP converts the invalid rows into custom IDocs, and Destination workflow #2 receives those IDocs asynchronously (one workflow run per IDoc).

 

Figure: Invalid rows persisted as custom IDocs.

This pattern is intentionally modular:

  • Destination workflow #1 decides which rows are invalid and optionally persists them.
  • SAP encapsulates the IDoc creation mechanics behind a stable RFC (Z_CREATE_ONLINEORDER_IDOC).
  • Destination workflow #2 processes each incoming IDoc independently, which matches how IDoc-driven integrations typically behave in production.

4.3 Two phases in Destination workflow #2: AI agent + Blob Storage logging

In the receiver workflow, there are two distinct phases:

  1. AI agent phase (per-IDoc): reconstruct a CSV view from the incoming IDoc payload and (optionally) run the same validation logic.
  2. Blob storage phase (shared output): append a normalized “verification line” into a shared blob in a concurrency-safe way.

It’s worth calling out: in this demo, the IDocs being received were created from already-validated outputs upstream, so you could argue the second validation is redundant. I keep it anyway for two reasons:

  • it demonstrates that the agent tooling is reusable with minimal changes, and
  • in a general integration, Destination workflow #2 may receive IDocs from multiple sources, not only from this pipeline—so “validate on receipt” can still be valuable.

4.3.1 AI agent phase

The figure below shows the validation agent used in Destination workflow #2. The key difference from the earlier agent loop is the output format: instead of producing an HTML summary + invalid lists, this agent writes a single “audit line” that includes the IDoc correlation key (DOCNUM) along with the order ID and the failed rules.

Figure: Destination workflow #2 agent configuration.

The reusable part here is the tooling structure: rules still come from the same validation document, the dataset is still supplied as CSV, and the summarization tool outputs a structured value the workflow can consume deterministically. The only meaningful change is “what shape do I want the output to take,” which is exactly what the agent parameter descriptions control.

The next figure zooms in on the summarization tool parameter in Destination workflow #2. Instead of three outputs, this tool uses a single parameter (VerificationInfo) whose description forces a consistent line format anchored on DOCNUM.

Figure: VerificationInfo parameter.

This is the same design principle as Tool #3 in the first destination workflow: describe the output as a contract, not as a vague request. The parameter description tells the agent exactly what must be present (DOCNUM + OrderId + failed rules) and therefore makes it straightforward to append the output to a shared log without additional parsing.
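Under the same agentParameterSchema pattern shown earlier, the contract shrinks to a single property. A sketch with illustrative description wording (the actual text ships with the attached artifacts):

{
  "agentParameterSchema": {
    "type": "object",
    "properties": {
      "VerificationInfo": {
        "type": "string",
        "description": "One line per failed order. Start the line with the DOCNUM value, followed by the OrderId and the list of failed validation rules."
      }
    },
    "required": [ "VerificationInfo" ]
  }
}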

Interesting snippets

Extracting DOCNUM from the IDoc control record and carrying it through the run:

xpath(xml(triggerBody()?['content']),
  'string(/*[local-name()="Receive"]
           /*[local-name()="idocData"]
           /*[local-name()="EDI_DC40"]
           /*[local-name()="DOCNUM"])')

4.3.2 Blob Storage phase

Destination workflow #2 runs one instance per inbound IDoc, so multiple runs can execute at the same time, all trying to write to the same daily “ValidationErrorsYYYYMMDD.txt” artifact. The appended output is one line per IDoc, each line beginning with DOCNUM, which becomes the stable correlation key. The figure below shows the concurrency control pattern I used to make those writes safe: a short lease acquisition loop that retries until it owns the blob lease, then appends the verification line(s), and finally releases the lease.

 

Figure: Concurrency-safe append pattern.

Reading the diagram top‑to‑bottom, the workflow uses a simple lease → append → release pattern to make concurrent writes safe. Each instance waits briefly (Delay), attempts to acquire a blob lease (Acquire validation errors blob lease), and loops until it succeeds (Set status code → Until lease is acquired). Once a lease is obtained, the workflow stores the lease ID (Save lease id), appends its verification output under that lease (Append verification results), and then releases the lease (Release the lease) so the next workflow instance can write.
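For orientation, here is a condensed sketch of the three HTTP actions against the Blob service REST API. Storage account and container names are placeholders, managed identity is assumed for authentication, the verification text is assumed to live in a VerificationInfo variable, and the surrounding Until loop and retry settings are omitted:

{
  "Acquire_validation_errors_blob_lease": {
    "type": "Http",
    "inputs": {
      "method": "PUT",
      "uri": "https://<account>.blob.core.windows.net/<container>/ValidationErrors@{formatDateTime(utcNow(), 'yyyyMMdd')}.txt?comp=lease",
      "headers": {
        "x-ms-lease-action": "acquire",
        "x-ms-lease-duration": "15",
        "x-ms-version": "2021-08-06"
      },
      "authentication": { "type": "ManagedServiceIdentity", "audience": "https://storage.azure.com" }
    }
  },
  "Append_verification_results": {
    "type": "Http",
    "inputs": {
      "method": "PUT",
      "uri": "https://<account>.blob.core.windows.net/<container>/ValidationErrors@{formatDateTime(utcNow(), 'yyyyMMdd')}.txt?comp=appendblock",
      "headers": {
        "x-ms-lease-id": "@{variables('LeaseId')}",
        "x-ms-version": "2021-08-06"
      },
      "body": "@{variables('VerificationInfo')}",
      "authentication": { "type": "ManagedServiceIdentity", "audience": "https://storage.azure.com" }
    }
  },
  "Release_the_lease": {
    "type": "Http",
    "inputs": {
      "method": "PUT",
      "uri": "https://<account>.blob.core.windows.net/<container>/ValidationErrors@{formatDateTime(utcNow(), 'yyyyMMdd')}.txt?comp=lease",
      "headers": {
        "x-ms-lease-action": "release",
        "x-ms-lease-id": "@{variables('LeaseId')}",
        "x-ms-version": "2021-08-06"
      },
      "authentication": { "type": "ManagedServiceIdentity", "audience": "https://storage.azure.com" }
    }
  }
}

The lease ID itself comes back in the acquire response as the x-ms-lease-id header, which is what the Save lease id step stores before the append runs.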

Implementation note: the complete configuration for this concurrency pattern (including the HTTP actions, headers, retries, and loop conditions) is included in the attached artifacts, in the workflow JSON for Destination workflow #2.

5. Concluding remarks

Part 2 zoomed in on the AI boundary inside the destination workflows and made it concrete: what the agent sees, what it is allowed to do, what it must return, and how those outputs drive deterministic workflow actions.

The practical outcomes of Part 2 are:

  • A tool-driven validation agent that produces workflow artifacts, not prose.
    The validation loop is constrained by tools and parameter schemas so its outputs are immediately consumable: an email-friendly validation summary, a machine-friendly InvalidOrderIds list, and an invalid-row payload that can be remediated.
  • A clean separation between validation and analysis.
    Validation decides what not to trust (invalid IDs / rows) and analysis focuses on what is interesting in the remaining dataset. The analysis prompt makes the exclusion rule explicit by passing the dataset and excluded IDs as separate messages.
  • A repeatable response-processing pipeline.
    You extract the model’s text from a stable response path (choices[0].message.content), then shape it into HTML once (markdown → HTML) so the same formatted output can be reused for email and the SAP response envelope.
  • A “reuse with minimal changes” pattern across workflows.
    Destination workflow #2 shows the same agent principles applied to IDoc reception, but with a different output contract optimized for logging: DOCNUM + OrderId + FailedRules. This demonstrates that the real reusable asset is the tool + parameter contract design.

Putting It All Together

We have a full integration story where SAP, Logic Apps, AI, and IDocs are connected with explicit contracts and predictable behavior:

  1. Part 1 established the deterministic integration foundation.
    • SAP ↔ Logic Apps connectivity (gateway/program wiring)
    • RFC payload/response contracts (IT_CSV, response envelope, error semantics)
    • predictable exception propagation back into SAP
    • an optional remediation branch that persists invalid rows as IDocs via a custom RFC (Z_CREATE_ONLINEORDER_IDOC)
    • and the end-to-end response handling pattern in the caller workflow.
  2. Part 2 layered AI on top without destabilizing the contracts.
    • Agent loop + tools for rule retrieval and validation
    • output schemas that convert “reasoning” into workflow artifacts
    • a separate analysis step that consumes validated data and produces formatted results
    • and an asynchronous IDoc receiver that logs outcomes safely under concurrency.

The reason it works as a two-part series is that the two layers evolve at different speeds:

  • The integration layer (Part 1) should change slowly. It defines interoperability: payload shapes, RFC names, error contracts, and IDoc interfaces.
  • The AI layer (Part 2) is expected to iterate. Prompts, rule documents, output formatting, and agent tool design will evolve as you tune behavior and edge cases.

References

Agentic Logic Apps Integration with SAP - Part 1: Infrastructure

🤖 Agent Loop Demos 🤖 | Microsoft Community Hub

Agent Workflows Concepts

Workflows with AI Agents and Models - Azure Logic Apps

Built-in OpenAI Connector

How to connect to AI foundry

Create Autonomous AI Agent Workflows - Azure Logic Apps

Handling Errors in SAP BAPI Transactions

Access SAP from workflows

Create common SAP workflows

Generate Schemas for SAP Artifacts via Workflows

Exception Handling | ABAP Keyword Documentation

Handling and Propagating Exceptions - ABAP Keyword Documentation

SAP .NET Connector 3.1 Overview

SAP .NET Connector 3.1 Programming Guide

Connect to Azure AI services from Workflows

 

All supporting content for this post may be found in the companion GitHub repository.

Updated Feb 17, 2026
Version 1.0