The Sentinel migration mental model question: what's actually retiring vs what isn't?
Something I keep seeing come up in conversations with other Sentinel operators lately, and I think it's worth surfacing here as a proper discussion. There's a consistent gap in how the migration to the Defender portal is being understood, and I think it's causing some teams to either over-scope their effort or under-prepare.

The gap is this: the Microsoft comms have consistently told us *what* is happening (the Azure portal experience retires March 31, 2027), but the question that actually drives migration planning, namely what is architecturally changing versus what is just moving to a different screen, doesn't have a clean answer anywhere in the community right now.

The framing I've been working with, which I'd genuinely like other practitioners to poke holes in:

What's retiring: the Azure portal UI experience for Sentinel operations. Incident management, analytics rule configuration, hunting, automation management: all of that moves to the Defender portal.

What isn't changing: the Log Analytics workspace, all ingested data, your KQL rules, connectors, retention config, billing. None of that moves. The Defender XDR data lake is a separate Microsoft-managed layer, not a replacement for your workspace.

Where it gets genuinely complex: MSSP/multi-tenant setups, teams with meaningful SOAR investments, and anyone who's built tooling against the SecurityInsights API for incident management (which now needs to shift to Microsoft Graph for unified incidents).

The deadline extension from July 2026 to March 2027 tells its own story: Microsoft acknowledged that scale operators needed more time and capabilities. If you're in that camp, that extra runway is for proper planning, not deferral.

A few questions I'd genuinely love to hear about from people who've started the migration or are actively scoping it:

- For those who've done the onboarding already: what was the thing that caught you most off guard that isn't well-documented?
- For anyone running Sentinel across multiple tenants: how are you approaching the GDAP gap while Microsoft completes that capability? Are you using B2B authentication as the interim path, or Azure Lighthouse for cross-workspace querying?

I've been writing up a more detailed breakdown of this, covering the RBAC transition, automation review, and the MSSP-specific path, and the community discussion here is genuinely useful for making sure the practitioner perspective covers the right edge cases. Happy to share more context on anything above if useful.
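One concrete illustration of the API shift mentioned above: unified incidents are exposed through the Microsoft Graph security API. A minimal sketch of a Logic Apps HTTP action reading them, assuming an identity granted SecurityIncident.Read.All (this is a generic sketch, not anyone's production tooling):

{
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://graph.microsoft.com/v1.0/security/incidents?$top=50",
    "authentication": {
      "type": "ManagedServiceIdentity",
      "audience": "https://graph.microsoft.com"
    }
  }
}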
How to Access a Shared OneDrive Folder in Azure Logic Apps

What is the problem?

A common enterprise automation scenario involves copying files from a OneDrive folder shared by a colleague into another storage service such as SharePoint or Azure Blob Storage using Azure Logic Apps. However, when you configure the OneDrive for Business "List files in folder" action in a Logic App, you quickly run into a limitation. The folder picker only shows:

- The root directory
- Subfolders of the authenticated user's OneDrive

Shared folders do not appear at all, even though you can access them in the OneDrive UI. This makes it seem like Logic Apps cannot work with shared OneDrive folders, but that's not entirely true.

Why this happens

The OneDrive for Business connector is user-context scoped. It only enumerates folders that belong to the signed-in user's drive and does not automatically surface folders that are shared with the user. Even though shared folders are visible under "Shared with me" in the OneDrive UI, they:

- Live in a different drive
- Have a different driveId
- Require explicit identification before Logic Apps can access them

How to access a shared OneDrive folder

There are two supported ways to access a shared OneDrive directory from Logic Apps.

Option 1: Use Microsoft Graph APIs (delegated permissions)

You can invoke Microsoft Graph APIs directly using HTTP with Microsoft Entra ID (preauthorized) and delegated permissions on behalf of the signed-in user. This requires admin consent or delegated consent workflows, plus additional Entra ID configuration.

📘 Reference: HTTP with Microsoft Entra ID (preauthorized) - Connectors | Microsoft Learn

While powerful, this approach adds setup complexity.

Option 2: Use Graph Explorer to configure the OneDrive connector

Instead of calling Graph from Logic Apps directly, you can:

- Use Graph Explorer to discover the shared folder metadata
- Manually configure the OneDrive action using that metadata

Step-by-step: Using Graph Explorer to access a shared folder

Scenario: A colleague has shared a OneDrive folder named "Test" with me, and I need to process files inside it using a Logic App.

Step 1: List shared folders using Microsoft Graph

In Graph Explorer, run the following request:

GET https://graph.microsoft.com/v1.0/users/{OneDrive shared folder owner username}/drive/root/children

📘 Reference: List the contents of a folder - Microsoft Graph v1.0 | Microsoft Learn

✅ This returns all root-level folders visible to the signed-in user, including folders shared with you. From the response, locate the shared folder. You only need two values:

- parentReference.driveId
- id (the folder ID)

Figure: Graph Explorer snippet showing the request sent to the API to list the files and folders shared by a specific user on the root drive.

Step 2: Configure the Logic App "List files in folder" action

In your Logic App:

- Add OneDrive for Business → List files in folder
- Do not use the folder picker
- Manually enter the folder value using this format: {driveId}.{folderId}

Once saved, the action successfully lists files from the shared OneDrive folder.

Step 3: Build the rest of your workflow

After the folder is resolved correctly, you can loop through files, copy them to SharePoint, upload them to Azure Blob Storage, and apply filters, conditions, or transformations. All standard OneDrive actions now work as expected.

Troubleshooting: When Graph Explorer doesn't help

If you're unable to find the driveId or folderId via Graph Explorer, there's a reliable fallback.
Use browser network tracing:

- Open the shared folder in OneDrive (web)
- Open Browser Developer Tools → Network
- Look for requests that carry the folderId
- In the response payload, extract:
  - CurrentFolderUniqueId → folder ID
  - drives/{driveId} from the CurrentFolderSpItemUrl

This method is very effective when Graph results are incomplete or filtered.
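As a quick sanity check once you have both identifiers, you can list the shared folder's contents directly in Graph Explorer before touching the Logic App. This uses the standard Graph drive-items endpoint; the placeholders are the two values extracted above:

GET https://graph.microsoft.com/v1.0/drives/{driveId}/items/{folderId}/children

If this request returns the expected files, the same {driveId}.{folderId} pair should work in the "List files in folder" action.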
How Azure NetApp Files Object REST API powers Azure and ISV Data and AI services – on YOUR data

This article introduces the Azure NetApp Files Object REST API, a transformative solution for enterprises seeking seamless, real-time integration between their data and Azure's advanced analytics and AI services. By enabling direct, secure access to enterprise data, without costly transfers or duplication, the Object REST API accelerates innovation, streamlines workflows, and enhances operational efficiency. With S3-compatible object storage support, it empowers organizations to make faster, data-driven decisions while maintaining compliance and data security. Discover how this new capability unlocks business potential and drives a new era of productivity in the cloud.
New Azure API management service limits

Azure API Management operates on finite physical infrastructure. To ensure reliable performance for all customers, the service enforces limits calibrated based on:

- Azure platform capacity and performance characteristics
- Service tier capabilities
- Typical customer usage patterns

Resource limits are interrelated and tuned to prevent any single aspect from disrupting overall service performance.

Changes to service limits - 2026 update

Starting March 2026 and over the following several months, Azure API Management is introducing updated resource limits for instances across all tiers. The limits are shown in the following table.

| Entity/Resource | Consumption | Developer | Basic/Basic v2 | Standard/Standard v2 | Premium/Premium v2 |
|---|---|---|---|---|---|
| API operations | 3,000 | 3,000 | 10,000 | 50,000 | 75,000 |
| API tags | 1,500 | 1,500 | 1,500 | 2,500 | 15,000 |
| Named values | 5,000 | 5,000 | 5,000 | 10,000 | 18,000 |
| Loggers | 100 | 100 | 100 | 200 | 400 |
| Products | 100 | 100 | 200 | 500 | 2,000 |
| Subscriptions | N/A | 10,000 | 15,000 | 25,000 | 75,000 |
| Users | N/A | 20,000 | 20,000 | 50,000 | 75,000 |
| Workspaces per workspace gateway | N/A | N/A | N/A | N/A | 30 |
| Self-hosted gateways | N/A | 5 | N/A | N/A | 100¹ |

¹ Applies to the Premium tier only.

What's changing

- Limits in the classic tiers now align with those set in the v2 tiers.
- Limits are enforced for a smaller set of resource types that are directly related to service capacity and performance, such as API operations, tags, products, and subscriptions.

Rollout process

New limits roll out in a phased approach by tier as follows:

| Tier | Expected rollout date |
|---|---|
| Consumption, Developer, Basic, Basic v2 | March 15, 2026 |
| Standard, Standard v2 | April 15, 2026 |
| Premium, Premium v2 | May 15, 2026 |

Limits policy for existing classic tier customers

After the new limits take effect, you can continue using your preexisting API Management resources without interruption. Existing classic tier services whose current usage exceeds the new limits are "grandfathered" when the new limits are introduced. (Instances in the v2 tiers are already subject to the new limits.) Limits in grandfathered services will be set 10% higher than the customer's observed usage at the time the new limits take effect. Grandfathering applies per service and service tier. Other existing services and new services are subject to the new limits when they take effect.

Guidelines for limit increases

In some cases, you might want to increase a service limit. Before requesting a limit increase, note the following guidelines:

- Explore strategies to address the issue proactively before requesting a limit increase. See the article Manage resources within limits.
- Consider potential impacts of the limit increase on overall service performance and stability. Increasing a limit might affect your service's capacity or increase latency in some service operations.

Requesting a limit increase

The product team considers requests for limit increases only for customers using services in the following tiers, which are designed for medium to large production workloads:

- Standard and Standard v2
- Premium and Premium v2

Requests for limit increases are evaluated on a case-by-case basis and aren't guaranteed. The product team prioritizes Premium and Premium v2 tier customers for limit increases. To request a limit increase, create a support request from the Azure portal. For more information, see Azure support plans.

Documentation

For more information, please see the documentation here.
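A quick worked example of the grandfathering rule described above (illustrative numbers): a classic Premium instance observed at 80,000 API operations when the new limits take effect is above the new Premium limit of 75,000, so its limit would be set to 88,000 (80,000 × 1.10). A Premium instance at 60,000 operations is already under the new limit and simply gets the standard 75,000.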
Logic Apps Aviators Newsletter - March 2026

In this issue:

- Ace Aviator of the Month
- News from our product group
- News from our community

Ace Aviator of the Month

March 2026's Ace Aviator: Lilan Sameera

What's your role and title? What are your responsibilities?

I'm a Senior Consultant at Adaptiv, where I design, build, and support integration solutions across cloud and enterprise systems, translating business requirements into reliable, scalable, and maintainable solutions. I work with Azure Logic Apps, Azure Functions, Azure Service Bus, Azure API Management, Azure Storage, Azure Key Vault, and Azure SQL.

Can you give us some insights into your day-to-day activities?

Most of my work focuses on designing and delivering reliable, maintainable integration solutions. I spend my time shaping workflows in Logic Apps, deciding how systems should connect, handling errors, and making sure solutions are safe and effective. On a typical day, I might be:

- Designing or reviewing integration workflows and message flows
- Investigating tricky issues
- Working with teams to simplify complex processes
- Making decisions about patterns, performance, and long-term maintainability

A big part of what I do is thinking ahead, anticipating where things could go wrong, and building solutions that are easy to support and extend. The culture at Adaptiv encourages this approach and makes knowledge sharing across teams easy.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?

The Microsoft and Logic Apps communities are incredibly generous with knowledge. I've learned so much from blogs, GitHub repos, and forum posts. Being part of the Aviators community is my way of giving back, sharing real-world experiences, lessons learned, and practical solutions. Adaptiv encourages people to engage with the community, which makes it easier to contribute and stay involved.

Looking back, what advice do you wish you had been given earlier?

Don't wait until you feel like you "know everything" to start building or sharing. You learn the most by doing, breaking things, fixing them, and asking questions. Focus on understanding concepts, not simply tools. Technologies change; fundamentals don't. Communication matters as well. Being able to explain why something works is just as important as making it work.

What has helped you grow professionally?

Working on real-world, high-impact projects has been key. Being exposed to different systems, integration patterns, and production challenges has taught me more than any textbook. Supportive teammates, constructive feedback, and a culture that encourages learning and ownership have also been key in my growth.

If you had a magic wand that could create a feature in Logic Apps, what would it be?

I would love a first-class, visual way to version and diff Logic Apps workflows, like how code changes are tracked in Git. It would make reviews, troubleshooting, and collaboration much easier, notably in complex enterprise integrations, and help teams work more confidently.

News from our product group

New Azure API management service limits

Azure API Management announced updated service limits across classic and v2 tiers to ensure predictable performance on shared infrastructure. The post details new limits for key resources such as API operations, tags, products, subscriptions, and users, along with a rollout schedule: Consumption/Developer/Basic (including v2) from March 15, Standard/Standard v2 from April 15, and Premium/Premium v2 from May 15, 2026.
Existing classic services are grandfathered at 10% above observed usage at the time limits take effect. Guidance is provided on managing within limits, evaluating impact, and requesting increases (with priority for Standard/Standard v2 and Premium/Premium v2).

How to Access a Shared OneDrive Folder in Azure Logic Apps

Logic Apps can work with files in a OneDrive folder shared by a colleague, but the OneDrive for Business "List files in folder" action doesn't show shared folders because it enumerates only the signed-in user's drive. The article explains two supported approaches: (1) call Microsoft Graph using HTTP with Microsoft Entra ID (delegated permissions), or (2) use Graph Explorer to discover the shared folder's driveId and folderId, then manually configure the action with {driveId}.{folderId}. A troubleshooting section shows how to extract these identifiers from browser network traces when Graph Explorer results are incomplete.

Stop Writing Plumbing! Use the New Logic Apps MCP Server Wizard

A new configuration experience in Logic Apps Standard (Preview) turns an existing logic app into an MCP server with a guided, in-portal workflow. The wizard centralizes setup for authentication, API keys, server creation, and tool exposure, letting teams convert connectors and workflows into discoverable MCP tools that agents can call. You can generate tools from new connectors or register existing HTTP-based workflows, choose API key or OAuth (EasyAuth) authentication, and test from agent platforms such as VS Code, Copilot Studio, and Foundry. The post also notes prerequisites and a known OAuth issue mitigated by reapplying EasyAuth settings.

Logic Apps Agentic Workflows with SAP - Part 2: AI Agents

Part 2 focuses on the AI portion of an SAP–Logic Apps integration. A Logic Apps validation agent retrieves business rules from SharePoint and produces structured outputs (an HTML summary, a CSV of invalid order IDs, and an "invalid rows" CSV) that directly drive downstream actions: email notifications, optional persistence of failed rows as custom IDocs, and filtering before a separate analysis step returns results to SAP. The post explains the agent loop design, tool boundaries ("Get validation rules," "Get CSV payload," "Summarize review"), and a two-model pattern (validation vs. analysis) to keep AI outputs deterministic and workflow-friendly.

Logic Apps Agentic Workflows with SAP - Part 1: Infrastructure

Part 1 establishes the infrastructure and contracts for a Logic Apps + SAP pattern that keeps integrations deterministic. A source workflow sends CSV data to SAP, while destination workflows handle validation and downstream processing. The post covers SAP connectivity (RFC/IDoc), the SAP-side wrapper function, and the core contract elements: IT_CSV for input lines, ANALYSIS for results, EXCEPTIONMSG for human-readable status, and RETURN (BAPIRET2) for structured success/error. It also details data shaping, error propagation, and email notification paths, with code snippets and diagrams to clarify gateway settings, namespace-robust XPath extraction, and end-to-end flow control.

Azure API Management - Unified AI Gateway Design Pattern

This customer-implemented pattern from Uniper uses Azure API Management as a unified AI gateway to normalize requests, enforce authentication and governance, and dynamically route traffic across multiple AI providers and models.
Key elements include a single wildcard API, unified auth (API keys/JWT plus managed identity to backends), policy-based path construction and model-aware routing, circuit breakers with regional load balancing, token limits and metrics, and centralized logging. Reported outcomes include an 85% reduction in API definitions, faster feature availability, and 99.99% service availability. A GitHub sample shows how to implement the policy-driven pipeline with modular policy fragments.

A BizTalk Migration Tool: From Orchestrations to Logic Apps Workflows

The BizTalk Migration Starter is an open-source toolkit for modernizing BizTalk Server solutions to Azure Logic Apps. It includes tools to convert BizTalk maps (.btm) to Logic Apps Mapping Language (.lml), transform orchestrations (.odx) into Logic Apps workflow JSON, map pipelines to Logic Apps processing patterns, and expose migration tools via an MCP server for AI-assisted workflows. The post outlines capabilities, core components, and command-line usage, plus caveats (e.g., scripting functoids may require redesign). A demo video and GitHub repo links are provided for getting started, testing, and extending connector mappings and migration reports.

Azure Arc Jumpstart Template for Hybrid Logic Apps Deployment

A new Azure Arc Jumpstart "drop" provisions a complete hybrid environment for Logic Apps Standard on an Arc-enabled AKS cluster with a single command. The deployment script sets up AKS, Arc for Kubernetes, the ACA extension, a custom location and Connected Environment, Azure SQL for runtime storage, an Azure Storage account for SMB artifacts, and a hybrid Logic Apps resource. After deployment, test commands verify each stage. The post links to prerequisites, quick-start steps, a demo video, and references on hybrid deployment requirements. It invites community feedback and contributions via the associated GitHub repository.

News from our community

Pro-Code Enterprise AI-Agents using MCP for Low-Code Integration
Video by Sebastian Meyer

This video demonstrates how the Model Context Protocol (MCP) can bridge pro-code and low-code integration by combining Microsoft Agent Framework with Azure Logic Apps. It shows how an autonomous AI agent can be wired into enterprise workflows, using MCP as the glue to connect to systems and trigger actions through Logic Apps. Viewers see how this approach reduces friction between traditional development and low-code automation while enabling consistent orchestration across services. The result is a practical pattern for extending enterprise automation with agent capabilities, improving flexibility without sacrificing control.

Logic Apps: Autonomous agent loops - a practical solution for application registration secrets expiration (part 1)
Post by Şahin Özdemir

Şahin Özdemir describes how a single expired client secret disrupted an integration platform and how Logic Apps autonomous agent loops can prevent recurrence. The solution uses an AI-backed agent loop to call Microsoft Graph, list app registrations, detect secrets expiring within three weeks, and notify stakeholders via email using the Office 365 connector. Prerequisites include a Logic App with a managed identity and an AI model (e.g., via Microsoft Foundry). Clear agent instructions and tool context are emphasized to ensure consistent behavior. The result is a low-effort operational guardrail that replaces complex control-flow logic.
From Low-Code to Full Power: When Power Platform Needs Azure with Sofia Platas
Video by Ahmed Bayoumy & Robin Wilde

Robin Wilde hosts Sofia Platas to explore when Power Platform solutions should extend into Azure. The conversation focuses on adopting an engineering mindset beyond low-code constraints, recognizing when workloads need Azure capabilities for scale, integration, or specialized services. It highlights moving from CRM and Power Platform into Azure and AI, and how pushing boundaries accelerates growth. The episode emphasizes practical decision-making over rigid labels, encouraging builders to reach for Azure when required while retaining the speed of low-code. It's an insightful discussion about balancing agility with the robustness of cloud-native architecture.

Cut Logic Apps Standard Costs by 70% in Dev & POC Azure Environments
Post by Daniel Jonathan

This article explains a practical cost-saving pattern for Logic Apps Standard in non-production environments. Because Standard runs on an App Service Plan billed continuously, the author recommends deploying compute only during working hours and tearing it down afterward while retaining the Storage Account. Run history persists in storage, so redeployments reconnect seamlessly. Scripts automate deploy/teardown, with guidance on caveats: avoid removing compute during active runs, recurrence triggers won't "catch up," and production should stay always-on. The post compares Standard versus Consumption and shows how this approach typically yields around 70% savings.

Friday Fact: You can reference App Settings inside your Logic Apps Workflows
Post by Sandro Pereira

Sandro Pereira highlights a simple technique to externalize configuration in Logic Apps Standard by using the appsetting('Key') expression directly in workflow actions. The approach allows storing connection details, flags, and endpoints in App Settings or local.settings.json rather than hardcoding values, improving maintainability and environment portability. He notes the expression may not appear in the editor's suggestion list but still works when added manually. The post includes a concise "one-minute brief" and reminders to ensure the keys exist in the chosen configuration source, plus a short video for those who prefer a quick walkthrough.

LogicAppWorkbook: Azure Monitor Workbook for Logic Apps Standard (App Insights v1)
Post by Sujith Reddy Komma

This open-source Azure Monitor workbook provides a focused dashboard for Logic Apps Standard using Application Insights v1 telemetry. It organizes monitoring into Overview and Failures tabs, surfacing KPIs, status distribution, execution trends, and detailed failure grids. The repository includes KQL queries (Queries.md), screenshots, and clear import steps for Azure Workbooks. Notably, it targets the v1 telemetry schema (traces table, FlowRunLastJob) and isn't compatible with newer v2 telemetry without query adjustments. It's a useful starting point for teams wanting quick visibility into run health and trends without building dashboards from scratch.

Azure Logic Apps - Choosing Between Consumption and Standard Models
Post by Manish K.

This post shares a primer that compares Logic Apps Consumption and Standard models to help teams choose the right hosting approach. It outlines Standard's single-tenant isolation, VNET integration, and better fit for long-running or high-throughput workloads, versus Consumption's multi-tenant, pay-per-action model ideal for short, variable workloads.
It highlights migration considerations, limitations, and when each model is cost-effective. The takeaway: align architecture, networking, and workload patterns to the model's strengths to avoid surprises in performance, security, and pricing as solutions scale.

Logic Apps standard monitoring dashboard – Fix 'Runs' tab
Post by Integration.team

Integration.team describes a fix for Logic Apps Standard where the Application Insights "Runs" tab shows a misconfiguration error and no history. The solution has two parts: ensure host.json sets Application Insights telemetry to v2, and add a hidden tag on the Logic App that links it to the App Insights resource. They provide Bicep snippets for automated deployments and a portal-based alternative during initial creation. After applying both steps, run history populates correctly, restoring visibility in the monitoring dashboard and making troubleshooting more reliable.

Using MCP Servers with Azure Logic App Agent Loops
Post by Stephen W Thomas

Stephen W Thomas explains how exposing Logic Apps as MCP servers simplifies agent loop designs. By moving inline tool logic out of the agent and into MCP-exposed endpoints, tools become reusable, easier to debug, and scoped to only what an agent needs. He discusses limiting accessible tools to control cost and execution time, and outlines a structure for organizing Logic Apps as discrete capabilities. The approach reduces agent complexity while improving maintainability and governance for AI-enabled workflows on Azure.

Logic App Best Practices, Tips, and Tricks: #49 The Hidden 32-Character Naming Trap in Logic Apps Standard
Post by Sandro Pereira

Sandro Pereira explains a subtle but impactful pitfall in Logic Apps Standard tied to the Azure Functions runtime: the host ID is derived from only the first 32 characters of the Logic App name. When multiple Logic App Standard instances share a storage account and have identical leading characters, collisions can cause intermittent deployment and runtime failures. He recommends ensuring uniqueness within the first 32 characters or, in advanced cases, explicitly setting the host ID via AzureFunctionsWebHost__hostid. The article includes naming patterns and practical guidance to avoid hours of troubleshooting.
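Two of the tips above are easy to try immediately. A minimal sketch of Sandro's appsetting('Key') pattern (the setting name here is illustrative, and the key must exist in App Settings or in local.settings.json when running locally):

{
  "type": "Compose",
  "inputs": "@appsetting('OrdersApi_BaseUrl')"
}

And the 32-character host ID workaround from tip #49 is likewise just an app setting on the Logic App, for example AzureFunctionsWebHost__hostid = orders-la-prod-01 (a hypothetical value; it must be unique per shared storage account).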
Announcing the General Availability (GA) of the Premium v2 tier of Azure API Management

Superior capacity, the highest entity limits, unlimited included calls, and the most comprehensive set of features set the Premium v2 tier apart from other API Management tiers. Customers rely on the Premium v2 tier for running enterprise-wide API programs at scale, with high availability and performance.

The Premium v2 tier has a new architecture that eliminates management traffic from the customer VNet, making private networking much more secure and easier to set up. During the creation of a Premium v2 instance, you can choose between VNet injection or VNet integration (introduced in the Standard v2 tier).

In addition, today we are also adding three new features to Premium v2:

- Inbound Private Link: You can now enable private endpoint connectivity to restrict inbound access to your Premium v2 instance. It can be enabled along with VNet injection or VNet integration, or without a VNet.
- Availability zone support: Premium v2 now supports availability zones (zone redundancy) to enhance the reliability and resilience of your API gateway.
- Custom CA certificates: The Azure API Management v2 gateway can now validate TLS connections with the backend service using custom CA certificates.

New and improved VNet injection

Using VNet injection in Premium v2 no longer requires configuring routes or service endpoints. Customers can secure their API workloads without impacting API Management dependencies, while Microsoft can secure the infrastructure without interfering with customer API workloads. In short, the new VNet injection implementation enables both parties to manage network security and configuration settings independently and without affecting each other. You can now configure your APIs with complete networking flexibility: force tunnel all outbound traffic to on-premises, send all outbound traffic through an NVA, or add a WAF device to monitor all inbound traffic to your API Management Premium v2, all without constraints.

Inbound Private Link

Customers can now configure an inbound private endpoint for their API Management Premium v2 instance to let API consumers securely access the API Management gateway over Azure Private Link. The private endpoint uses an IP address from the Azure virtual network in which it's hosted. Network traffic between a client on your private network and API Management traverses the virtual network and a Private Link on the Microsoft backbone network, eliminating exposure from the public internet. Further, you can configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address.

With a private endpoint and Private Link, you can:

- Create multiple Private Link connections to an API Management instance.
- Use the private endpoint to send inbound traffic on a secure connection.
- Apply different API Management policies based on whether traffic comes from the private endpoint.
- Limit incoming traffic only to private endpoints, preventing data exfiltration.
- Combine with inbound virtual network injection or outbound virtual network integration to provide end-to-end network isolation of your API Management clients and backend services.

More details can be found here. Today, only the API Management instance's Gateway endpoint supports inbound Private Link connections. Each API Management instance can support at most 100 Private Link connections.

Availability zones

Azure API Management Premium v2 now supports availability zone (AZ) redundancy to enhance the reliability and resilience of your API gateway.
When deploying an API Management instance in an AZ-enabled region, users can choose to enable zone redundancy. This distributes the service's units, including the gateway, management plane, and developer portal, across multiple physically separate AZs within that region. Learn how to enable AZs here.

Custom CA certificates

If the API Management gateway needs to connect to backends secured with TLS certificates issued by private certificate authorities (CAs), you need to configure custom CA certificates in the API Management instance. Custom CA certificates can be added and managed as authorization credentials in the Backend entities. The Backend entity has been extended with new properties allowing customers to specify a list of certificate thumbprints or subject name + issuer thumbprint pairs that the gateway should trust when establishing a TLS connection with the associated backend endpoint. More details can be found here.

Region availability

The Premium v2 tier is now generally available in six public regions (Australia East, East US 2, Germany West Central, Korea Central, Norway East, and UK South), with additional regions coming soon. For pricing information and regional availability, please visit the API Management pricing page.

Learn more

- API Management v2 tiers FAQ
- API Management v2 tiers documentation
- API Management overview documentation
CrowdStrike API Data Connector (via Codeless Connector Framework) (Preview)

API scopes are created and added to the connector; however, the only streams observed are Alerts and Hosts. Detections is not logging. Anyone experiencing this issue? GitHub has a post about it that appears to have been escalated as a feature request. CrowdStrikeDetections is not being ingested. Anyone have this set up and working?
McasShadowItReporting / Cloud Discovery in Azure Sentinel

Hi! I'm trying to query the McasShadowItReporting table for Cloud App Discovery data. The table is empty at the moment, and the connector warns me that the workspace is onboarded to the Unified Security Operations Platform, so I can't activate it there. I can't manage it via https://security.microsoft.com/ either. The documentation ( https://learn.microsoft.com/en-us/defender-cloud-apps/siem-sentinel#integrating-with-microsoft-sentinel ) leads me to the SIEM integration, which has been configured for a while. I wonder if something is misconfigured here, why there is no log ingress, and how I can query the logs.
Logic Apps Agentic Workflows with SAP - Part 2: AI Agents

Part 2 focuses on the AI-shaped portion of the destination workflows: how the Logic Apps Agent is configured, how it pulls business rules from SharePoint, and how its outputs are converted into concrete workflow artifacts. In Destination workflow #1, the agent produces three structured outputs (an HTML validation summary, a CSV list of InvalidOrderIds, and an Invalid CSV payload), which drive (1) a verification email, (2) an optional RFC call to persist failed rows as IDocs, and (3) a filtered dataset used for the separate analysis step that returns only analysis (or errors) back to SAP. In Destination workflow #2, the same approach is applied to inbound IDocs: the workflow reconstructs CSV from the custom segment, runs AI validation against the same SharePoint rules, and safely appends results to an append blob using a lease-based write pattern for concurrency.

1. Introduction

In Part 1, the goal was to make the integration deterministic: stable payload shapes, stable response shapes, and predictable error propagation across SAP and Logic Apps. Concretely, Part 1 established:

- how SAP reaches Logic Apps (Gateway/Program ID plumbing)
- the RFC contracts (IT_CSV, response envelope, RETURN/MESSAGE, EXCEPTIONMSG)
- how the source workflow interprets RFC responses (success vs. error)
- how invalid rows can be persisted into SAP as custom IDocs (Z_CREATE_ONLINEORDER_IDOC)
- how the second destination workflow receives those IDocs asynchronously

With that foundation in place, Part 2 narrows in on the part that is not just plumbing: the agent loop, the tool boundaries, and the output schemas that make AI results usable inside a workflow rather than "generated text you still need to interpret."

The diagram below highlights the portion of the destination workflow where AI is doing real work. The red-circled section is the validation agent loop (rules in, structured validation outputs out), which then fans out into operational actions like email notification, optional IDoc persistence, and filtering for the analysis step.

What matters here is the shape of the agent outputs and how they are consumed by the rest of the workflow. The agent is not treated as a black box; it is forced to emit typed, workflow-friendly artifacts (summary + invalid IDs + filtered CSV). Those artifacts are then used deterministically: invalid rows are reported (and optionally persisted as IDocs), while valid rows flow into the analysis stage and ultimately back to SAP.

What this post covers

In this post, I focus on five practical topics:

1. Agent loop design in Logic Apps: tools, message design, and output schemas that make the agent's results deterministic enough to automate.
2. External rule retrieval: pulling validation rules from SharePoint and applying them consistently to incoming payloads.
3. Structured validation outputs → workflow actions: producing InvalidOrderIds and a filtered CSV payload that directly drive notifications and SAP remediation.
4. Two-model pattern: a specialized model for validation (agent) and a separate model call for analysis, with a clean handoff between the two.
5. Output shaping for consumption: converting AI output into HTML for email and into the SAP response envelope (analysis/errors only).

(Everything else, including SAP plumbing, RFC wiring, and response/exception patterns, was covered in Part 1 and is assumed here.)

Next, I'll break down the agent loop itself: the tool sequence, the required output fields, and the exact points where the workflow turns AI output into variables, emails, and SAP actions.
Huge thanks to KentWeareMSFT for helping me understand agent loops and design the validation agent structure. And thanks to everyone in 🤖 Agent Loop Demos 🤖 | Microsoft Community Hub for making such great material available.

Note: For the full set of assets used here, see the companion GitHub repository (workflows, schemas, SAP ABAP code, and sample files).

2. Validation Agent Loop

In this solution, the Data Validation Agent runs inside the destination workflow after the inbound SAP payload has been normalized into a single CSV string. The agent is invoked as a single Logic Apps Agent action, configured with an Azure OpenAI deployment and a short set of instructions. Its inputs are deliberately simple at this stage:

- the CSV payload (the dataset to validate), and
- the ValidationRules reference (where the rule document lives), shown in the instructions as a parameter token (ValidationRules is a Logic App parameter).

The figure below shows the validation agent configuration used in the destination workflow. The top half is the Agent action configuration (model + instructions), and the bottom half shows the toolset that the agent is allowed to use. The key design choice is that the agent is not "free-form chat": it's constrained by a small number of tools and a workflow-friendly output contract.

What matters most in this configuration is the separation between instructions and tools. The instructions tell the agent what to do ("follow business process steps 1–3"), while the tools define how the agent can interact with external systems and workflow state. This keeps the agent modular: you can change rules in SharePoint or refine summarization expectations without rewriting the overall SAP integration mechanics.

Purpose

This agent's job is narrowly scoped: validate the CSV payload from SAP against externally stored business rules and produce outputs that the workflow can use deterministically. In other words, it turns "validation as reasoning" into workflow artifacts (summary + invalid IDs + invalid payload), instead of leaving validation as unstructured prose. In Azure Logic Apps terms, this is an agent loop: an iterative process where an LLM follows instructions and selects from available tools to complete a multi-step task. Logic Apps agent workflows explicitly support this "agent chooses tools to complete tasks" model (see Agent Workflows Concepts).

Tools

In Logic Apps agent workflows, a tool is a named sequence that contains one or more actions the agent can invoke to accomplish part of its task (see Agent Workflows Concepts). In the screenshot, the agent is configured with three tools, explicitly labeled Get validation rules, Get CSV payload, and Summarize CSV payload review. These tool names match the business process in the "Instructions for agent" box (steps 1–3). The next sections of the post go deeper into what each tool does internally; at this level, the important point is simply that the agent is constrained to a small, explicit toolset.

Agent execution

The screenshot shows the agent configured with:

- AI model: gpt-5-3 (gpt-5)
- A connection line: "Connected to … (Azure OpenAI)"
- Instructions for agent that define the agent's role and a 3-step business process:
  1. Get validation rules (via the ValidationRules reference)
  2. Get CSV payload
  3. Summarize the CSV payload review, using the validation document

This pattern is intentional:

- The instructions provide the agent's "operating procedure" in plain language.
- The tools give the agent controlled ways to fetch the rule document, access the CSV input, and return structured results.

Because the workflow consumes the agent's outputs downstream, the instruction text is effectively part of your workflow contract (it must remain stable enough that later actions can trust the output shape).

Note: If a reader wants to recreate this pattern, the fastest path is:

1. Start with the official overview of agent workflows (Workflows with AI Agents and Models - Azure Logic Apps).
2. Follow a hands-on walkthrough for building an agent workflow and connecting it to an Azure OpenAI deployment (Logic Apps Labs is a good step-by-step reference). [azure.github.io]
3. Use the Azure OpenAI connector reference to understand authentication options and operations available in Logic Apps Standard (see Built-in OpenAI Connector).
4. If you're using Foundry for resource management, review how Foundry connections are created and used, especially when multiple resources/tools are involved (see How to connect to AI Foundry).

2.1 Tool 1: Get validation rules

The first tool in the validation agent loop is Get validation rules. Its job is to load the business validation rules that will be applied to the incoming CSV payload from SAP. I keep these rules outside the workflow (in a document) so they can be updated without redeploying the Logic App. In this example, the rules are stored in SharePoint, and the tool simply retrieves the document content at runtime.

Get validation rules is implemented as a single action called Get validation document. In the designer, you can see it:

- uses a SharePoint Online connection (SharePoint icon and connector action)
- calls GetFileContentByPath (shown by the "File Path" input)
- reads the rule file from the configured Site Address
- uses the workflow parameter token ValidationRules for the File Path (so the exact rule file location is configurable per environment)

The output of this tool is the raw rule document content, which the Data Validation Agent uses in the next steps to validate the CSV payload.

The bottom half of the figure shows an excerpt of the rules document. The format is simple and intentionally human-editable: each rule is expressed as FieldName : condition. For example, the visible rules include:

- PaymentMethod : value must exist
- PaymentMethod : value cannot be "Cash"
- OrderStatus : value must be different from "Cancelled"
- CouponCode : value must have at least 1 character
- OrderID : value must be unique in the CSV array
- A scope note: "Do not validate the Date field."

These rules are the "source of truth" for validation. The workflow does not hardcode them into expressions; instead, it retrieves them from SharePoint and passes them into the agent loop so the validation logic remains configurable and auditable (you can always point to the exact rule document used for a given run).

A small but intentional rule in the document is "Do not validate the Date field." That line is there for a practical reason: in an early version of the source workflow, the date column was being corrupted during CSV generation. The validation agent still tried to validate dates (even though date validation wasn't part of the original intent), and the result was predictable: every row failed validation, leaving nothing to analyze. The upstream issue is fixed now, but I kept this rule in the demo to illustrate an important point: validation is only useful when it's aligned with the data contract you can actually guarantee at that point in the pipeline.
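Before moving on from Tool 1, here is roughly what the Get validation document action tends to look like in code view. This is a sketch only: the site address and connection reference are placeholders, the exact path/query shape depends on the generated connector code, and ValidationRules is the workflow parameter referenced above:

{
  "type": "ApiConnection",
  "inputs": {
    "host": { "connection": { "referenceName": "sharepointonline" } },
    "method": "get",
    "path": "/datasets/@{encodeURIComponent(encodeURIComponent('https://contoso.sharepoint.com/sites/Integration'))}/GetFileContentByPath",
    "queries": {
      "path": "@parameters('ValidationRules')",
      "inferContentType": true,
      "queryParametersSingleEncoded": true
    }
  }
}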
Note: The rules shown here assume the CSV includes a header row (field names in the first line) so the agent can interpret each column by name. If you want the agent to be schema-agnostic, you can extend the rules with an explicit column mapping, for example:

Column 1: Order ID
Column 2: Date
Column 3: Customer ID
…

This makes the contract explicit even when headers are missing or unreliable.

With the rules loaded, the next tool provides the second input the agent needs: the CSV payload that will be validated against this document.

2.2 Tool 2: Get CSV payload

The second tool in the validation agent loop is Get CSV payload. Its purpose is to make the dataset-to-validate explicit: it defines exactly what the agent should treat as "the CSV payload," rather than relying on implicit workflow context. In this workflow, the CSV is already constructed earlier (as Create_CSV_payload), and this tool acts as the narrow bridge between that prepared string and the agent's validation step.

Figure: Tool #2 ("Get CSV payload") defines a single agent parameter and binds it to the workflow's generated CSV.

The figure shows two important pieces:

1. The tool parameter contract ("Agent Parameters")

On the right, the tool defines an agent parameter named CSV Payload with type String, and the description (highlighted in yellow) makes the intent explicit: "The CSV payload received from SAP and that we validate based on the validation rules." This parameter is the tool's interface: it documents what the agent is supposed to provide/consume when using this tool, and it anchors the rest of the validation process to a single, well-defined input. Tools in Logic Apps agent workflows exist specifically to constrain and structure what an agent can do and what data it operates on (see Agent Workflows Concepts).

2. Why there is an explicit Compose action ("CSV payload")

In the lower-right "Code view," the tool's internal action is shown as a standard Compose:

{
  "type": "Compose",
  "inputs": "@outputs('Create_CSV_payload')"
}

This is intentional. Even though the CSV already exists in the workflow, the tool still needs a concrete action that produces the value it returns to the agent. The Compose step:

- pins the tool output to a single source of truth (Create_CSV_payload), and
- creates a stable boundary: "this is the exact CSV string the agent validates," independent of other workflow state.

Put simply: the Compose action isn't there because Logic Apps can't access the CSV. It's there to make the agent/tool interface explicit, repeatable, and easy to troubleshoot.

What "tool parameters" are (in practical terms)

In Logic Apps agent workflows, a tool is a named sequence of one or more actions that the agent can invoke while executing its instructions. A tool parameter is the tool's input/output contract exposed to the agent. In this screenshot, that contract is defined under Agent Parameters, where you specify:

- Name: CSV Payload
- Type: String
- Description: "The CSV payload received from SAP…"

This matters because it clarifies (for both the model and the human reader) what the tool represents and what data it is responsible for supplying.

With Tool #1 providing the rules document and Tool #2 providing the CSV dataset, Tool #3 is where the agent produces workflow-ready outputs (summary + invalid IDs + filtered payload) that the downstream steps can act on.
2.3 Tool 3: Summarize CSV payload review

The third tool, Summarize CSV payload review, is where the agent stops being "an evaluator" and becomes a producer of workflow-ready outputs. It does most of the heavy lifting, so let's go into the details. Instead of returning one blob of prose, the tool defines three explicit agent parameters, each with a specific format and purpose, so the workflow can reliably consume the results in downstream actions. In Logic Apps agent workflows, tools are explicitly defined tasks the agent can invoke, and each tool can be structured around actions and schemas that keep the loop predictable (see Agent Workflows Concepts).

Figure: Tool #3 ("Summarize CSV payload review") defines three structured agent outputs. The Description is not just documentation; it's the contract the model is expected to satisfy, and it strongly shapes what the agent considers "relevant" when generating outputs.

The parameters are:

1. Validation summary (String)

Goal: a human-readable summary that can be dropped straight into email. In the screenshot, the description is very explicit about shape and content:

- "expected format is an HTML table"
- "create a list of all orderids that have failed"
- "create a CSV document… only for the orderid values that failed… each row on a separate line"
- "include title row only in the email body"

This parameter is designed for presentation: it's the thing you want humans to read first.

2. InvalidOrderIds (String, CSV format)

Goal: a machine-friendly list of identifiers the workflow can use deterministically. The key part of the description (highlighted in the image) is: "The format is CSV." That single sentence is doing a lot of work: it tells the model to emit a comma-separated list, which you then convert into an array in the workflow using split(...).

3. Invalid CSV payload (String, one row per line)

Goal: the failed rows extracted from the original dataset, in a form that downstream steps can reuse. The description constrains the output tightly:

- "original CSV rows… for the orderid values that failed validation"
- "each row must be on a separate line"
- "keep the title row only for the email body and remove it otherwise"

This parameter is designed for automation: it becomes input to remediation steps (like transforming rows to XML and creating IDocs), not just a report.

What "agent parameters" do here (and why they matter)

A useful way to think about agent parameters is: they are the "typed return values" of a tool. Tools in agent workflows exist to structure work into bounded tasks the agent can perform, and a schema/parameter contract makes the results consumable by the rest of the workflow (see Agent Workflows Concepts). In this tool, the parameters serve two purposes at once:

- They guide the agent toward salient outputs. The descriptions explicitly name what matters: "failed orderids," "HTML table," "CSV format," "one row per line," "header row rules." That phrasing makes it much harder for the model to "wander" into irrelevant commentary.
- They align with how the workflow will parse and use the results. By stating "InvalidOrderIds is CSV," you make it trivially parseable (split), and by stating "Invalid CSV payload is one row per line," you make it easy to feed into later transformations.
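As a concrete illustration of that last point, the normalization can be a single Set variable action. A minimal sketch, reusing the parameter and variable names from this workflow (agentParameters(...) and ArrayOfInvalidOrderIDs are both referenced in the post):

{
  "type": "SetVariable",
  "inputs": {
    "name": "ArrayOfInvalidOrderIDs",
    "value": "@split(agentParameters('InvalidOrderIds'), ',')"
  }
}

Because the description pinned the format to CSV, this one expression is the entire parsing layer.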
Why the wording works (and what wording tends to work best)

What's interesting about the parameter descriptions is that they combine three kinds of constraints:

1. Output format constraints (make parsing deterministic)

- "expected format is an HTML table"
- "The format is CSV."
- "each row must be on a separate line"

These format cues help the agent decide what to emit and help you avoid brittle parsing later.

2. Output selection constraints (force relevance)

- "only for the orderid values that failed validation"
- "Create a list of all orderids that have failed"

This tells the agent what to keep and what to ignore.

3. Output operational constraints (tie outputs to downstream actions)

- "Include title row only in the email body"
- "remove it otherwise"

This explicitly anticipates downstream usage (email vs. remediation), which is exactly the kind of detail models often miss unless you state it.

Rule of thumb: wording works best when it describes what to produce, in what format, with what filtering rules, and why the workflow needs it.

How these parameters tie directly to the downstream actions

The next picture makes the design intent very clear: each parameter is immediately "bound" to a normal workflow value via Compose actions and then used by later steps. This is the pattern we want: agent output → Compose → (optional) normalization → reused by deterministic workflow actions. It's the opposite of "read the model output and hope."

This is the reusable pattern:

1. Decide the minimal set of outputs the workflow needs.
2. Specify formats that are easy to parse.
3. Write parameter descriptions that encode both selection and formatting constraints.
4. Immediately bind outputs to workflow variables via Compose/SetVariable actions.

The main takeaway from this tool is that the agent is being forced into a structured contract: three outputs with explicit formats and clear intent. That contract is what makes the rest of the workflow deterministic: Compose actions can safely read @agentParameters(...), the workflow can safely split(...) the invalid IDs, and downstream actions can treat the "invalid payload" as real data rather than narrative. I'll show later how this same "parameter-first" design scales to other agent tools.

2.4 Turning agent outputs into a verification email

Once the agent has produced structured outputs (Validation summary, InvalidOrderIds, and Invalid CSV payload), the next goal is to make those outputs operational: humans need a quick summary of what failed, and the workflow needs machine-friendly values it can reuse downstream. The design here is intentionally straightforward: the workflow converts each agent parameter into a first-class workflow output (via Compose actions and one variable assignment), then binds those values directly into the Office 365 email body. The result is an email that is both readable and actionable, without anyone needing to open run history.

The figure below shows how the outputs of Summarize CSV payload review are mapped into the verification email. On the left, the tool produces three values via subsequent actions (Summary, Invalid order ids, and Invalid CSV payload), and the workflow also normalizes the invalid IDs into an array (Save invalid order ids). On the right, the Send verification summary action composes the email body using those same values as dynamic content tokens.

Figure: Mapping agent outputs to the verification email.

The important point is that the email is not constructed by "re-prompting" or "re-summarizing." It is assembled from already-structured outputs.
This mapping is intentionally direct: each piece of the email corresponds to one explicit output from the agent tool. The workflow doesn't interpret or transform the summary beyond basic formatting; its job is to preserve the agent's structured outputs and present them consistently. The only normalization step happens for InvalidOrderIds, where the workflow also converts the CSV string into an array (ArrayOfInvalidOrderIDs) for later filtering and analysis steps.

The next figure shows a sample verification email produced by this pipeline. It illustrates the three-part structure: an HTML validation summary table, the raw invalid order ID list, and the extracted invalid CSV rows.

Figure: Sample verification email, showing the validation summary table, the invalid order IDs, and the invalid CSV rows.

The extracted artifacts InvalidOrderIds and Invalid CSV payload are used in the downstream actions that persist failed rows as IDocs for later processing, which were presented in Part 1. I will get back to this later to talk about reusing the validation agent. Next, however, I will go over the data analysis part of the AI integration.

3. Analysis Phase: from validated dataset to HTML output

After the validation agent loop finishes, the workflow enters a second AI phase: analysis. The validation phase is deliberately about correctness (what to exclude and why). The analysis phase is about insight, and it runs on the remaining dataset after invalid rows are filtered out.

At a high level, this phase has three steps:

1. Call Azure OpenAI to analyze the CSV dataset while explicitly excluding invalid OrderIDs.
2. Extract the model's text output from the OpenAI response object.
3. Convert the model's markdown output into HTML so it renders cleanly in email (and in the SAP response envelope).

3.1 OpenAI component: the "Analyze data" call

The figure below shows the Analyze data action that drives the analysis phase. This action is executed after the Data Validation Agent completes, and it uses three messages: a system instruction that defines the task, the CSV dataset as input, and a second user message that enumerates the OrderIDs to exclude (the invalid IDs produced by validation).

Figure: Azure OpenAI analysis call.

The analysis call is structured as:

- system: define the task and constraints
- user: provide the dataset
- user: provide exclusions derived from validation

system: Analyze dataset; provide trends/predictions; exclude specified orderids.
user: <CSV payload>
user: Excluded orderids: <comma-separated invalid ids>

Two design choices are doing most of the work here:

- The model is given the dataset and the exclusions separately. This avoids ambiguity: the dataset is one message, and the "do not include these OrderIDs" constraint is another.
- The exclusion list is derived from validation output, not re-discovered during analysis. The analysis step doesn't re-validate; it consumes the validation phase's results and focuses purely on trends/predictions.

3.2 Processing the response

The next figure shows how the workflow turns the Azure OpenAI response into a single string that can be reused for email and for the SAP response. The workflow does three things in sequence: it parses the response JSON, extracts the model's text content, and then passes that text into an HTML formatter.

Figure: Processing the OpenAI response.
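For readers who want the wire-level view before looking at the response shape, the three-message layout above maps onto a standard Azure OpenAI chat-completions request. A sketch only: resource, deployment, and API version are illustrative, and the content strings are placeholders rather than the author's actual prompts:

POST https://<resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=2024-10-21

{
  "messages": [
    { "role": "system", "content": "Analyze the dataset; provide trends and predictions; exclude the specified orderids." },
    { "role": "user", "content": "<CSV payload>" },
    { "role": "user", "content": "Excluded orderids: <comma-separated invalid ids>" }
  ]
}

The extract step then reduces to a single expression against the response (action name per this workflow): @{body('Analyze_data')['choices'][0]['message']['content']}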
This is the only part of the OpenAI response you need to understand for this workflow:

```text
Analyze_data response
└─ choices[]        (array)
   └─ [0]           (object)
      └─ message    (object)
         └─ content (string)  <-- analysis text
```

Everything else in the OpenAI response (filters, indexes, metadata) is useful for auditing but not required to build the final user-facing output.

3.3 Crafting the output to HTML

The model’s output is plain text and often includes lightweight markdown structures (headings, lists, separators). To make the analysis readable in email (and safe to embed in the SAP response envelope), the workflow converts the markdown into HTML. The script was generated with Copilot; the source code snippet may be found in Part 1.

The next figure shows what the formatted analysis looks like when rendered. Note the explicit reference to the excluded OrderIDs and the summary of the remaining dataset before the trend observations are listed.

Figure: Example analysis output after formatting.

4. Closing the loop: persisting invalid rows as IDocs

In Part 1, I introduced an optional remediation branch: when validation finds bad rows, the workflow can persist them into SAP as custom IDocs for later handling. In Part 2, after unpacking the agent loop, I want to reconnect those pieces and show the “end of the story”: the destination workflow creates IDocs for invalid data, and a second destination workflow receives those IDocs and produces a consolidated audit trail in Blob Storage.

This final section is intentionally pragmatic. It shows:

- where the IDoc creation call happens,
- how the created IDocs arrive downstream, and
- how to safely handle many concurrent workflow instances writing to the same storage artifact (one instance per IDoc).

4.1 From “verification summary” to “Create all IDocs”

The figure below shows the tail end of the verification summary flow. Once the agent produces the structured validation outputs, the workflow first emails the human-readable summary, then converts the invalid CSV rows into an SAP-friendly XML shape, and finally calls the RFC that creates IDocs from those rows.

Figure: End of the validation/remediation branch.

This is deliberately a “handoff point.” After this step, the invalid rows are no longer just text in an email—they become durable SAP artifacts (IDocs) that can be routed, retried, and processed independently of the original workflow run.

4.2 Z_CREATE_ONLINEORDER_IDOC and the downstream receiver

The next figure is the same overview from Part 1. I’m reusing it here because it captures the full loop: the workflow calls Z_CREATE_ONLINEORDER_IDOC, SAP converts the invalid rows into custom IDocs, and Destination workflow #2 receives those IDocs asynchronously (one workflow run per IDoc).

Figure: Invalid rows persisted as custom IDocs.

This pattern is intentionally modular:

- Destination workflow #1 decides which rows are invalid and optionally persists them.
- SAP encapsulates the IDoc creation mechanics behind a stable RFC (Z_CREATE_ONLINEORDER_IDOC).
- Destination workflow #2 processes each incoming IDoc independently, which matches how IDoc-driven integrations typically behave in production.

4.3 Two phases in Destination workflow #2: AI agent + Blob Storage logging

In the receiver workflow, there are two distinct phases:

1. AI agent phase (per-IDoc): reconstruct a CSV view from the incoming IDoc payload and (optionally) run the same validation logic.
2. Blob Storage phase (shared output): append a normalized “verification line” into a shared blob in a concurrency-safe way (an illustrative line follows below).
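To make the “verification line” concrete before diving into the agent configuration, a single appended record could look like the line below. The layout is hypothetical (the real shape is whatever the VerificationInfo parameter description, shown in the next section, enforces), but it captures the three required pieces: DOCNUM, order ID, and failed rules.

```text
0000000000123456 | OrderId: 10045 | FailedRules: MissingCurrency, NegativeQuantity
```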
It’s worth calling out: in this demo, the IDocs being received were created from already-validated outputs upstream, so you could argue the second validation is redundant. I keep it anyway for two reasons: it demonstrates that the agent tooling is reusable with minimal changes, and in a general integration, Destination workflow #2 may receive IDocs from multiple sources, not only from this pipeline—so “validate on receipt” can still be valuable.

4.3.1 AI agent phase

The figure below shows the validation agent used in Destination workflow #2. The key difference from the earlier agent loop is the output format: instead of producing an HTML summary + invalid lists, this agent writes a single “audit line” that includes the IDoc correlation key (DOCNUM) along with the order ID and the failed rules.

Figure: Destination workflow #2 agent configuration.

The reusable part here is the tooling structure: rules still come from the same validation document, the dataset is still supplied as CSV, and the summarization tool outputs a structured value the workflow can consume deterministically. The only meaningful change is “what shape do I want the output to take,” which is exactly what the agent parameter descriptions control.

The next figure zooms in on the summarization tool parameter in Destination workflow #2. Instead of three outputs, this tool uses a single parameter (VerificationInfo) whose description forces a consistent line format anchored on DOCNUM.

Figure: VerificationInfo parameter.

This is the same design principle as Tool #3 in the first destination workflow: describe the output as a contract, not as a vague request. The parameter description tells the agent exactly what must be present (DOCNUM + OrderId + failed rules) and therefore makes it straightforward to append the output to a shared log without additional parsing.

Interesting snippets

Extracting DOCNUM from the IDoc control record and carrying it through the run:

```text
xpath(xml(triggerBody()?['content']),
  'string(/*[local-name()="Receive"]
          /*[local-name()="idocData"]
          /*[local-name()="EDI_DC40"]
          /*[local-name()="DOCNUM"])')
```

4.3.2 Blob Storage phase

Destination workflow #2 runs one instance per inbound IDoc, so multiple instances can attempt to write to the same daily “validation errors” append blob (ValidationErrorsYYYYMMDD.txt) at the same time. The figure below shows the resulting appended output: one line per IDoc, each line beginning with DOCNUM, which becomes the stable correlation key.

Figure: Appended verification output, one line per IDoc.

The next figure shows the concurrency control pattern used to make those concurrent writes safe: a short lease acquisition loop that retries until it owns the blob lease, then appends the verification line(s), and finally releases the lease.

Figure: Concurrency-safe append pattern.

Reading the diagram top-to-bottom, the workflow uses a simple lease → append → release pattern to make concurrent writes safe. Each instance waits briefly (Delay), attempts to acquire a blob lease (Acquire validation errors blob lease), and loops until it succeeds (Set status code → Until lease is acquired). Once a lease is obtained, the workflow stores the lease ID (Save lease id), appends its verification output under that lease (Append verification results), and then releases the lease (Release the lease) so the next workflow instance can write.
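For orientation, here is a minimal sketch of the three HTTP actions behind that pattern, expressed against the Azure Blob Storage REST API with managed identity authentication. The storage account and container names are placeholders, the LeaseId variable is assumed to be populated from the acquire response, and the retry loop and error handling are omitted; the complete configuration is in the attached workflow JSON (see the implementation note below):

```json
{
  "Acquire_validation_errors_blob_lease": {
    "type": "Http",
    "inputs": {
      "method": "PUT",
      "uri": "https://<account>.blob.core.windows.net/<container>/ValidationErrors@{formatDateTime(utcNow(), 'yyyyMMdd')}.txt?comp=lease",
      "headers": {
        "x-ms-version": "2021-08-06",
        "x-ms-lease-action": "acquire",
        "x-ms-lease-duration": "15"
      },
      "authentication": { "type": "ManagedServiceIdentity", "audience": "https://storage.azure.com" }
    }
  },
  "Append_verification_results": {
    "type": "Http",
    "inputs": {
      "method": "PUT",
      "uri": "https://<account>.blob.core.windows.net/<container>/ValidationErrors@{formatDateTime(utcNow(), 'yyyyMMdd')}.txt?comp=appendblock",
      "headers": {
        "x-ms-version": "2021-08-06",
        "x-ms-lease-id": "@{variables('LeaseId')}"
      },
      "body": "@{variables('VerificationInfo')}\n",
      "authentication": { "type": "ManagedServiceIdentity", "audience": "https://storage.azure.com" }
    }
  },
  "Release_the_lease": {
    "type": "Http",
    "inputs": {
      "method": "PUT",
      "uri": "https://<account>.blob.core.windows.net/<container>/ValidationErrors@{formatDateTime(utcNow(), 'yyyyMMdd')}.txt?comp=lease",
      "headers": {
        "x-ms-version": "2021-08-06",
        "x-ms-lease-action": "release",
        "x-ms-lease-id": "@{variables('LeaseId')}"
      },
      "authentication": { "type": "ManagedServiceIdentity", "audience": "https://storage.azure.com" }
    }
  }
}
```

In this sketch, a successful acquire returns the lease in the x-ms-lease-id response header (which is what Save lease id captures), while a 409 response (lease already held by a competing instance) is the condition that keeps the Until loop retrying.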
Implementation note: the complete configuration for this concurrency pattern (including the HTTP actions, headers, retries, and loop conditions) is included in the attached artifacts, in the workflow JSON for Destination workflow #2.

5. Concluding remarks

Part 2 zoomed in on the AI boundary inside the destination workflows and made it concrete: what the agent sees, what it is allowed to do, what it must return, and how those outputs drive deterministic workflow actions.

The practical outcomes of Part 2 are:

- A tool-driven validation agent that produces workflow artifacts, not prose. The validation loop is constrained by tools and parameter schemas so its outputs are immediately consumable: an email-friendly validation summary, a machine-friendly InvalidOrderIds list, and an invalid-row payload that can be remediated.
- A clean separation between validation and analysis. Validation decides what not to trust (invalid IDs/rows) and analysis focuses on what is interesting in the remaining dataset. The analysis prompt makes the exclusion rule explicit by passing the dataset and excluded IDs as separate messages.
- A repeatable response-processing pipeline. You extract the model’s text from a stable response path (choices[0].message.content), then shape it into HTML once (markdown → HTML) so the same formatted output can be reused for email and the SAP response envelope.
- A “reuse with minimal changes” pattern across workflows. Destination workflow #2 shows the same agent principles applied to IDoc reception, but with a different output contract optimized for logging: DOCNUM + OrderId + FailedRules. This demonstrates that the real reusable asset is the tool + parameter contract design.

Putting It All Together

We now have a full integration story where SAP, Logic Apps, AI, and IDocs are connected with explicit contracts and predictable behavior.

Part 1 established the deterministic integration foundation:

- SAP ↔ Logic Apps connectivity (gateway/program wiring),
- RFC payload/response contracts (IT_CSV, response envelope, error semantics),
- predictable exception propagation back into SAP,
- an optional remediation branch that persists invalid rows as IDocs via a custom RFC (Z_CREATE_ONLINEORDER_IDOC), and
- the end-to-end response handling pattern in the caller workflow.

Part 2 layered AI on top without destabilizing the contracts:

- agent loop + tools for rule retrieval and validation,
- output schemas that convert “reasoning” into workflow artifacts,
- a separate analysis step that consumes validated data and produces formatted results, and
- an asynchronous IDoc receiver that logs outcomes safely under concurrency.

The reason this works as a two-part series is that the two layers evolve at different speeds:

- The integration layer (Part 1) should change slowly. It defines interoperability: payload shapes, RFC names, error contracts, and IDoc interfaces.
- The AI layer (Part 2) is expected to iterate. Prompts, rule documents, output formatting, and agent tool design will evolve as you tune behavior and edge cases.
References

- Logic Apps Agentic Workflows with SAP - Part 1: Infrastructure
- 🤖 Agent Loop Demos 🤖 | Microsoft Community Hub
- Agent Workflows Concepts
- Workflows with AI Agents and Models - Azure Logic Apps
- Built-in OpenAI Connector
- How to connect to AI Foundry
- Create Autonomous AI Agent Workflows - Azure Logic Apps
- Handling Errors in SAP BAPI Transactions
- Access SAP from workflows
- Create common SAP workflows
- Generate Schemas for SAP Artifacts via Workflows
- Exception Handling | ABAP Keyword Documentation
- Handling and Propagating Exceptions - ABAP Keyword Documentation
- SAP .NET Connector 3.1 Overview
- SAP .NET Connector 3.1 Programming Guide
- Connect to Azure AI services from Workflows

All supporting content for this post may be found in the companion GitHub repository.