Logic Apps Aviators Newsletter - November 2025
In this issue:
- Ace Aviator of the Month
- News from our product group
- News from our community

Ace Aviator of the Month

November's Ace Aviator: Al Ghoniem

What's your role and title? What are your responsibilities?
As a Senior Integration Consultant, I design and deliver enterprise-grade integration on Microsoft Azure, primarily using Logic Apps Standard, API Management, Service Bus, Event Grid and Azure Functions. My remit covers reference architectures, "golden" templates, governance and FinOps guardrails, CI/CD automation (Bicep and YAML), and production-ready patterns for reliability, observability and cost efficiency. Alongside my technical work, I lead teams of consultants and engineers, helping them adopt standardised delivery models, mentoring through code reviews and architectural walkthroughs, and ensuring we deliver consistent, high-quality outcomes across projects. I also help teams apply decisioning patterns (embedded versus external rules) and integrate AI responsibly within enterprise workflows.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?
- Architecture and patterns: refining solution designs, sequence diagrams and rules models for new and existing integrations.
- Build and automation: evolving reusable Logic Apps Standard templates, Bicep modules and pipelines, embedding monitoring, alerts and identity-first security.
- Problem-solving: addressing performance tuning, transient fault handling, poison/DLQ flows and "design for reprocessing."
- Leadership and enablement: mentoring consultants, facilitating technical discussions, and ensuring knowledge is shared across teams.
- Community and writing: publishing articles and examples to demystify real-world integration trade-offs.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?
The community continuously turns hard-won lessons into reusable practices. Sharing patterns (and anti-patterns) saves others time and incidents, while learning from peers strengthens my own work. Microsoft's product teams also listen closely, and seeing customer feedback directly shape the platform is genuinely rewarding.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?
- Optimise for learning speed, not titles. Choose problems that stretch you and deliver in small, measurable increments.
- Master the fundamentals. Naming, idempotency, retries and observability are not glamorous but make systems dependable.
- Document everything. Diagrams, runbooks and ADRs multiply your impact.
- Understand trade-offs. Every decision buys something and costs something; acknowledge both sides clearly.
- Value collaboration over heroics. Ask questions, share knowledge and give credit freely.

What has helped you grow professionally?
- Reusable scaffolding: creating golden templates and reference repositories that capture best practice once and reuse it everywhere.
- Feedback loops: leveraging telemetry, post-incident reviews and peer critique to improve.
- Teaching and mentoring: explaining concepts to others brings clarity and strengthens leadership.
- Cross-disciplinary curiosity: combining architecture, DevOps, FinOps and AI to address problems holistically.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
"Stateful Sessions and Decisions" as a first-class capability:
- Built-in session state across multiple workflows, durable correlation and resumable orchestrations without external storage.
- A native decisioning activity with versioned decision tables and rule auditing ("why this rule fired").
- A local-first developer experience with fast testing and contract validation for confident iteration.
This would simplify complex, human-in-the-loop and event-driven scenarios, reduce custom plumbing, and make advanced orchestration patterns accessible to a wider audience.

News from our product group

Logic Apps Community Day 2025
Did you miss Logic Apps Community Day, or want to catch up again on your favorite videos? Jump back into the action with this four-hour learning session featuring 10 sessions from our community experts. And stay tuned for the individual sessions being shared throughout the week.

Announcing Parse & Chunk with Metadata in Logic Apps: Build Context-Aware RAG Agents
New Parse & Chunk actions add metadata like page numbers and sentence completeness—perfect for context-aware document Q&A using Azure AI Search and Agent Loop.

Introducing the RabbitMQ Connector (Public Preview)
The new connector (Public Preview) lets you send and receive messages with RabbitMQ in Logic Apps Standard and Hybrid—ideal for scalable, reliable messaging across industries.

News from our community

EventGrid And Entra Auth In Logic Apps Standard
Post by Riccardo Viglianisi
Learn how to use Entra Auth for webhook authentication, ditch SAS tokens, and configure private endpoints with public access rules—perfect for secure, scalable integrations.

Debugging XSLT Made Easy in VS Code: .NET-Based Debugging for Logic Apps
Post by Daniel Jonathan
A new .NET-based extension brings real debugging to XSLT for Logic Apps. Set breakpoints, step through transformations, and inspect variables—making XSLT development clear and productive. This is the third post in a five-part series, so it's worth checking out the other posts too.

Modifying the Logic App Azure Workbook: Custom Views for Multi Workflow Monitoring
Post by Jeff Wessling
Learn how to tailor dashboards with KQL, multi-workflow views, and context panes—boosting visibility, troubleshooting speed, and operational efficiency across your integrations.

Azure AI Agents in Logic Apps: A Guide to Automate Decisions
Post by Imashi Kinigama
Discover how GPT-powered agents, created using Logic Apps Agent Loop, automate decisions, extract data, and adapt in real time. Build intelligent workflows with minimal effort—no hardcoding, just instructions and tools.

How to Turn Logic App Connectors into MCP Servers (Step-by-Step Guide)
Post by Stephen W. Thomas
Learn how to expose connectors like Google Drive or Salesforce as MCP endpoints using Azure API Center—giving AI agents secure, real-time access to 1,400+ services directly from VS Code.

Custom SAP MCP Server with Logic Apps
Post by Sebastian Meyer
Learn how to turn Logic Apps into AI-accessible tools using MCP. From workflow descriptions to Easy Auth setup and VS Code integration—this guide unlocks SAP automation with Copilot.

How Azure Logic Apps as MCP Servers Accelerate AI Agent Development
Post by Monisha S
Turn 1,400+ connectors into AI tools with Logic Apps Standard. Build agents fast, integrate with legacy systems, and scale intelligent workflows across your organization.
Designing Business Rules in Azure Logic Apps: When to Go Embedded vs External
Post by Al Ghoniem
Learn when to use Logic Apps' native Rules Engine or offload to Azure Functions with NRules or JSON RulesEngine. Discover hybrid patterns for scalable, testable decision automation.

Syncing SharePoint with Azure Blob Storage using Logic Apps & Azure Functions for Azure AI Search
Post by Daniel Jonathan
Solve folder-delete issues by tagging blobs with SharePoint metadata. Use Logic Apps and a custom Azure Function to clean up orphaned files and keep Azure AI Search in sync.

Step-by-Step Guide: Building a Conversational Agent in Azure Logic Apps
Post by Stephen W. Thomas
Use Azure AI Foundry and Logic Apps Standard to create chatbots that shuffle cards, answer questions, and embed into websites—no code required, just smart workflows and EasyAuth.

You can hide sensitive data from the Logic App run history
Post by Francisco Leal
Learn how to protect sensitive data such as authentication tokens, credentials, and personal information in Logic Apps, so it doesn't appear in the run history, where it could pose security and privacy risks.

Agentic Integration with SAP, ServiceNow, and Salesforce
Copilot/Copilot Studio Integration with SAP (No Code)
By integrating SAP Cloud Identity Services with Microsoft Entra ID, organizations can establish secure, federated identity management across platforms. This configuration enables Microsoft Copilot and Teams to seamlessly connect with SAP's Joule digital assistant, supporting natural language interactions and automating business processes efficiently. Key resources, as given in the SAP docs (image courtesy SAP):
- Configuring SAP Cloud Identity Services and Microsoft Entra ID for Joule
- Enable Microsoft Copilot and Teams to Pass Requests to Joule

Copilot Studio Integration with ServiceNow and Salesforce (No Code)
Integration with ServiceNow and Salesforce has two main approaches:
- Copilot Agents using Copilot Studio: Custom agents can be built in Copilot Studio to interact directly with Salesforce CRM data or ServiceNow knowledge bases and helpdesk tickets. This enables organizations to automate sales and support processes using conversational AI.
  - Create a custom sales agent using your Salesforce CRM data (YouTube)
  - ServiceNow Connect Knowledge Base + Helpdesk Tickets (YouTube)
- 3rd Party Agents using Copilot for Service Agent: Microsoft Copilot can be embedded within Salesforce and ServiceNow interfaces, providing users with contextual assistance and workflow automation directly inside these platforms.
  - Set up the embedded experience in Salesforce
  - Set up the embedded experience in ServiceNow

MCP or Agent-to-Agent (A2A) Interoperability (Pro Code) (image courtesy SAP)
If you choose a pro-code approach, you can either implement the Model Context Protocol (MCP) in a client/server setup for SAP, ServiceNow, and Salesforce, or leverage existing agents for these third-party services using Agent-to-Agent (A2A) integration. Depending on your requirements, you may use either method individually or combine them. The recently released Azure Agent Framework offers practical examples for both MCP and A2A implementations. The detailed SAP reference architecture linked below illustrates how A2A solutions can be layered on top of SAP systems to enable modular, scalable automation and data exchange.
Agent2Agent Interoperability | SAP Architecture Center

Logic Apps as Integration Actions
Logic Apps is a key component of the Azure integration platform. Among its many connectors, it offers connectors for all three of these platforms (SAP, ServiceNow, Salesforce). A Logic App can be invoked from a custom agent (as a built-in action in Foundry) or from a Copilot agent, as the sketch below illustrates. The same can be said for Power Platform/Power Automate as well.
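To make the "Logic App as an agent action" idea concrete, here is a minimal Python sketch of a tool function an agent framework could call, which simply posts to a Logic App's HTTP request trigger. The endpoint URL, payload shape, and function name are illustrative assumptions, not part of any product documented above; a real Logic App behind this call would use the ServiceNow connector to do the actual work.

```python
import requests

# Hypothetical callback URL of a Logic App "When an HTTP request is received"
# trigger (the SAS signature is part of the URL shown in the designer).
LOGIC_APP_URL = "https://prod-00.westeurope.logic.azure.com/workflows/<id>/triggers/manual/paths/invoke?sig=<sas>"

def create_servicenow_ticket(short_description: str, urgency: str = "3") -> str:
    """Tool function an agent can invoke; the workflow behind it creates the incident."""
    response = requests.post(
        LOGIC_APP_URL,
        json={"short_description": short_description, "urgency": urgency},
        timeout=30,
    )
    response.raise_for_status()
    # Assume the workflow's Response action returns the created ticket number.
    return response.json().get("ticket_number", "unknown")

if __name__ == "__main__":
    print(create_servicenow_ticket("VPN access request for new starter"))
```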
Conclusion
This article provides a comprehensive overview of how Microsoft Copilot, Copilot Studio, Foundry (via A2A/MCP), and Azure Logic Apps can be combined to deliver robust, agentic integrations with SAP, ServiceNow, and Salesforce. The narrative highlights the importance of secure identity federation, modular agent orchestration, and low-code/pro-code automation in building next-generation enterprise solutions.

Introducing native Service Bus message publishing from Azure API Management (Preview)

We're excited to announce a preview capability in Azure API Management (APIM) — you can now send messages directly to Azure Service Bus from your APIs using a built-in policy. This enhancement, currently in public preview, simplifies how you connect your API layer with event-driven and asynchronous systems, helping you build more scalable, resilient, and loosely coupled architectures across your enterprise.

Why this matters
Modern applications increasingly rely on asynchronous communication and event-driven designs. With this new integration:
- Any API hosted in API Management can publish to Service Bus — no SDKs, custom code, or middleware required.
- Partners, clients, and IoT devices can send data through standard HTTP calls, even if they don't support AMQP natively.
- You stay in full control, with authentication, throttling, and logging managed centrally in API Management.
- Your systems scale more smoothly by decoupling front-end requests from backend processing.

How it works
The new send-service-bus-message policy allows API Management to forward payloads from API calls directly into Service Bus queues or topics.

High-level flow:
1. A client sends a standard HTTP request to your API endpoint in API Management.
2. The policy executes and sends the payload as a message to Service Bus.
3. Downstream consumers such as Logic Apps, Azure Functions, or microservices process those messages asynchronously.
All configuration happens in API Management — no code changes or new infrastructure are required.

Getting started
You can try it out in minutes:
1. Set up a Service Bus namespace and create a queue or topic.
2. Enable a managed identity (system-assigned or user-assigned) on your API Management instance.
3. Grant the identity the "Service Bus data sender" role in Azure RBAC, scoped to your queue/topic.
4. Add the policy to your API operation:

```xml
<send-service-bus-message queue-name="orders">
  <payload>@(context.Request.Body.As<string>())</payload>
</send-service-bus-message>
```

Once saved, each API call publishes its payload to the Service Bus queue or topic. 📖 Learn more.
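For a sense of the end-to-end flow, here is a minimal Python sketch of both sides: a client publishing through the APIM front end, and a downstream worker draining the queue with the azure-servicebus SDK. The APIM hostname, API path, and queue name are illustrative assumptions.

```python
import os
import requests
from azure.servicebus import ServiceBusClient

# --- Producer side: a plain HTTPS call; APIM runs the policy and enqueues. ---
resp = requests.post(
    "https://contoso-apim.azure-api.net/orders",  # hypothetical APIM endpoint
    json={"orderId": "12345", "sku": "widget", "qty": 2},
    headers={"Ocp-Apim-Subscription-Key": os.environ["APIM_KEY"]},
    timeout=30,
)
resp.raise_for_status()

# --- Consumer side: any worker can process the queued payloads later. ---
with ServiceBusClient.from_connection_string(os.environ["SB_CONN_STR"]) as client:
    with client.get_queue_receiver(queue_name="orders", max_wait_time=5) as receiver:
        for message in receiver:
            print("processing:", str(message))
            receiver.complete_message(message)  # remove from the queue once handled
```

In production the consumer would more likely be an Azure Function with a Service Bus trigger, but the SDK loop above shows the same decoupling with no extra infrastructure.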
Common use cases
This capability makes it easy to integrate your APIs into event-driven workflows:
- Order processing – queue incoming orders for fulfillment or billing.
- Event notifications – trigger internal workflows across multiple applications.
- Telemetry ingestion – forward IoT or mobile app data to Service Bus for analytics.
- Partner integrations – offer REST-based endpoints for external systems while maintaining policy-based control.
Each of these scenarios benefits from simplified integration, centralized governance, and improved reliability.

Secure and governed by design
The integration uses managed identities for secure communication between API Management and Service Bus — no secrets required. You can further apply enterprise-grade controls:
- Enforce rate limits, quotas, and authorization through APIM policies.
- Gain API-level logging and tracing for each message sent.
- Use Service Bus metrics to monitor downstream processing.
Together, these tools help you maintain a consistent security posture across your APIs and messaging layer.

Build modern, event-driven architectures
With this feature, API Management can serve as a bridge to your event-driven backbone. Start small by queuing a single API's workload, or extend to enterprise-wide event distribution using topics and subscriptions. You'll reduce architectural complexity while enabling more flexible, scalable, and decoupled application patterns.

Learn more: Get the full walkthrough and examples in the documentation 👉 here.

Selecting the Right Agentic Solution on Azure – Part 2 (Security)
Let's pick up from where we left off in the previous post — Selecting the Right Agentic Solution on Azure - Part 1. Earlier, we explored a decision tree to help identify the most suitable Azure service for building your agentic solution. Following that discussion, we received several requests to dive deeper into the security considerations for each of these services. In this post, we'll examine the security aspects of each option, one by one. Before taking the security perspective, though, I highly recommend reviewing the list of Azure AI services and technologies made available by Microsoft. It covers both the services that were part of the former Cognitive Services and the latest additions.

Workflows with AI agents and models in Azure Logic Apps (Preview)
This approach focuses on running your agents as an action, or as part of an "agent loop" with multiple actions, within Azure Logic Apps. It's important not to confuse this with the alternative setup, where Azure Logic Apps integrates with AI agents in the Foundry Agent Service — either as a tool or as a trigger (announcement: Power your Agents in Azure AI Foundry Agent Service with Azure Logic Apps | Microsoft Community Hub). In that scenario, your agents are hosted under the Azure AI Foundry Agent Service, which we'll discuss separately below. To create an agent workflow, you'll need to establish a connection — either to Azure OpenAI or to an Azure AI Foundry project for connecting to a model. When connected to a Foundry project, you can view agents and threads directly within that project's lists. Since agents here run as Logic Apps actions, their security is governed by the Logic Apps security framework. Let's look at the key aspects:

- Easy Auth or App Service Auth (Preview): Agent workflows often integrate with a broader range of systems — models, MCPs, APIs, agents, and even human interactions. You can secure these workflows using Easy Auth, which integrates with Microsoft Entra ID for authentication and authorization (see the token sketch after this list). Read more here: Protect Agent Workflows with Easy Auth - Azure Logic Apps | Microsoft Learn.
- Securing and encrypting data at rest: Azure Logic Apps stores data in Azure Storage, which uses Microsoft-managed keys for encryption by default. You can further enhance security by restricting access to Logic App operations via Azure RBAC, limiting access to run history data, securing inputs and outputs, controlling parameter access for webhook-triggered workflows, and managing outbound call access to external services. More info here: Secure access and data in workflows - Azure Logic Apps | Microsoft Learn.
- Securing data in transit: When exposing your Logic App as an HTTP(S) endpoint, consider using Azure API Management for access policies and documentation, and Azure Application Gateway or Azure Front Door for WAF (Web Application Firewall) protection.

I highly recommend the labs provided by the Logic Apps product group to learn more about agentic workflows: https://azure.github.io/logicapps-labs/docs/intro.
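As a hedged illustration of the Easy Auth pattern above, the sketch below acquires a Microsoft Entra ID token with the azure-identity library and presents it as a bearer token when calling a protected workflow endpoint. The app ID URI and workflow URL are placeholders you would replace with your own app registration's values.

```python
import requests
from azure.identity import DefaultAzureCredential

# Hypothetical app ID URI of the Entra app registration that Easy Auth
# trusts, plus the workflow's invoke URL (both are assumptions).
SCOPE = "api://00000000-0000-0000-0000-000000000000/.default"
WORKFLOW_URL = "https://my-agent-workflow.azurewebsites.net/api/agent/triggers/manual/invoke"

credential = DefaultAzureCredential()  # works locally (az login) and with managed identity
token = credential.get_token(SCOPE)

resp = requests.post(
    WORKFLOW_URL,
    json={"message": "Summarise today's failed orders"},
    headers={"Authorization": f"Bearer {token.token}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```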
Azure AI Foundry Agent Service
As of this writing, the Azure AI Foundry Agent Service abstracts the underlying infrastructure where your agents run. Microsoft manages this secure environment, so you don't need to handle compute, network, or storage resources — though bring-your-own-storage is an option.

- Securing and encrypting data at rest: Microsoft guarantees that your prompts and outputs remain private — never shared with other customers or AI providers (such as OpenAI or Meta). Data from messages, threads, runs, and uploads is encrypted using AES-256 and remains stored in the same region where the Agent Service is deployed. You can optionally use customer-managed keys (CMK) for encryption. Read more here: Data, privacy, and security for Azure AI Agent Service - Azure AI Services | Microsoft Learn.
- Network security: The service allows integration with your private virtual network using a private endpoint. Note that there are known limitations, such as subnet IP restrictions, the need for a dedicated agent subnet, same-region requirements, and limited regional availability. Read more here: How to use a virtual network with the Azure AI Foundry Agent Service - Azure AI Foundry | Microsoft Learn.
- Securing data in transit: Upcoming enhancements include API Management support (soon in public preview) for AI APIs, including model APIs, tool APIs/MCP servers, and agent APIs. There is also a great article about using APIM to safeguard the HTTP APIs exposed by Azure OpenAI that let your applications perform embeddings or completions using OpenAI's language models.

Agent Orchestrators
We've introduced the Agent Framework, which succeeds both AutoGen and Semantic Kernel; according to the product group, it combines the best capabilities of both predecessors. Support for Semantic Kernel, and the documentation for AutoGen, will remain available for some time to allow users to transition smoothly to the new framework. When discussing the security aspects of agent orchestrators, it's important to note that these considerations also extend to the underlying services hosting them, whether on AKS or Container Apps. However, this discussion will not focus on the security features of those hosting environments, as comprehensive resources already exist for them. Instead, we'll focus on common security concerns applicable across different orchestrators, including AutoGen, Semantic Kernel, and other frameworks such as LlamaIndex, LangGraph, or LangChain. Key areas to consider include (but are not limited to):

- Secure secrets / key management: Avoid hard-coding secrets (e.g., API keys for Foundry, OpenAI, Anthropic, Pinecone, etc.). Use secret management solutions such as Azure Key Vault or environment variables (see the sketch after this list). Encrypt secrets at rest and enforce strict limits on scope and lifetime.
- Access control and least privilege: Grant each agent or tool only the minimum required permissions. Implement role-based access control (RBAC) and enforce least-privilege principles. Use strong authentication (e.g., OAuth2, Azure AD) for administrative or tool-level access. Restrict the scope of external service credentials (e.g., read-only vs. write) and rotate them regularly.
- Isolation / sandboxing: Isolate plugin execution and use inter-process separation as needed. Prevent user inputs from executing arbitrary code on the host. Apply resource limits for model or function execution to mitigate abuse.
- Sensitive data protection: Encrypt data both at rest and in transit. Mask or remove PII before sending data to models. Avoid persisting sensitive context unnecessarily. Ensure logs and memory do not inadvertently expose secrets or user data.
- Prompt and query security: Sanitize or escape user input in custom query engines or chat interfaces. Protect against prompt injection by implementing guardrails to monitor and filter prompts. Set context length limits and use safe output filters (e.g., profanity filters, regex validators).
- Observability, logging and auditing: Maintain comprehensive logs, including tool invocations, agent decisions, and execution paths. Continuously monitor for anomalies or unexpected behaviour.
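A minimal sketch of the secret-management point above, assuming a Key Vault URL supplied via configuration and a DefaultAzureCredential identity with read access to secrets; the vault and secret names are illustrative.

```python
import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Resolve the vault from configuration rather than hard-coding it.
vault_url = os.environ.get("KEY_VAULT_URL", "https://my-agent-vault.vault.azure.net")

client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

# The orchestrator fetches the model key at startup instead of baking it
# into source control; rotating the key then requires no code change.
openai_api_key = client.get_secret("openai-api-key").value
```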
I hope this overview assists you in evaluating and implementing the appropriate security measures for your chosen agentic solution.

[DevOps] dps.sentinel.azure.com no longer responds
Hello, I've been using repository connections in Sentinel to a central DevOps for almost two years now. Today I got my first automated email reporting an error for a webhook related to my last commit from the central repo to my Sentinel instances. It's a webhook that is automatically created in connections made in the last year (the ones from 2 years ago don't have this webhook automatically created). The hook is found in DevOps -> Service hooks -> Webhooks, as a "run state change" hook for each connected Sentinel instance. However, after today's run (which was successful; all content deployed) this hook generates alerts. It says it can't reach (EU in my case): eu.prod.dps.sentinel.azure.com Full URL: https://eu.prod.dps.sentinel.azure.com/webhooks/ado/workspaces/[REDACTED]/sourceControls/[REDACTED] So, what happened to this domain? Why is it no longer responding, and when did it go offline? I THINK this is the hook that sets the status under Sentinel -> Repositories in the GUI. The success status in the screenshot is from 2025/02/06; no new success has been registered in the receiving Sentinel instance. For the Sentinel connection that is 2 years old and doesn't have a hook in my DevOps, the last deployment status says "Unknown", so I'm fairly sure that's what the webhook is doing. A second question would be: how can I set up a new webhook? It wants the ID and password of the "Azure Sentinel Content Deployment App" (I will never know that password...), so I can't add one manually either, if the URL ever comes back online or a new one exists. Please let me know.

Issue when ingesting Defender XDR table in Sentinel
Hello, We are migrating our on-premises SIEM solution to Microsoft Sentinel, since we have E5 licences for all our users. The integration between Defender XDR and Sentinel convinced us to make the move. We have a limited budget for Sentinel, and we found that the Auxiliary/Data Lake feature is sufficient for verbose log sources such as network logs. We would like to retain Defender XDR data for more than 30 days (the default retention period). We implemented the solution described in this blog post: https://jeffreyappel.nl/how-to-store-defender-xdr-data-for-years-in-sentinel-data-lake-without-expensive-ingestion-cost/ However, we are facing an issue with 2 tables: DeviceImageLoadEvents and DeviceFileCertificateInfo. The rows forwarded by Defender to Sentinel for these tables arrive empty. We created a support ticket, but so far we haven't received any solution. If anyone has experienced this issue, we would appreciate your feedback. Lucas

Sentinel Data Connector: Google Workspace (G Suite) (using Azure Functions)
I'm encountering a problem when attempting to run the GWorkspace_Report workbook in Azure Sentinel. The query throws this error related to the union operator: 'union' operator: Failed to resolve table expression named 'GWorkspace_ReportsAPI_gcp_CL' I've double-checked, and the GoogleWorkspaceReports connector is installed and updated to version 3.0.2. Has anyone seen this, or does anyone know what might be causing the table GWorkspace_ReportsAPI_gcp_CL to be unresolved? Thanks!

Trend Micro Vision One Connector Not working
Hi All, Before I get nuked in the comments telling me to raise an issue on the Sentinel repo, hear me out 😇 Around a month ago, the logs stopped ingesting. A quick snoop around revealed the reason, but I'm not sure if I should raise an issue or try to fix it myself, risking voiding any future support I can get, since the connector and the app that comes with it are marketplace solutions. The function app was not running due to a dependency issue. I spotted this in the diagnostic logs, under the "exceptions" table: "module named _cffi_backend not found" A Python package, Google tells me, that's used to interact with C code. So logically, I need to find the requirements.txt and make sure the dependency is there, and also make sure the Python version of the runtime and the one on Azure match. The logs were initially flowing as usual. I had completed integrating Trend Micro using the Azure Functions-based connector around 7 months ago. It worked like a Toyota Hilux until now. So once again, I would like to know the community's thoughts on it. Thxx
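For reference, a minimal sketch of the kind of pin that usually resolves a missing _cffi_backend, assuming the root cause really is an absent cffi dependency; the connector's actual requirements.txt will list other packages, and these version numbers are illustrative only.

```
# requirements.txt for the function app (illustrative; keep the packages the
# connector already lists and add/pin the missing dependency)
cffi>=1.15          # provides the _cffi_backend native module
cryptography>=41.0  # a common consumer of cffi in these connectors (assumption)
```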