What’s new in Microsoft Sentinel: RSAC 2026
Security is entering a new era, one defined by explosive data growth, increasingly sophisticated threats, and the rise of AI-enabled operations. To keep pace, security teams need an AI-powered approach to collect, reason over, and act on security data at scale. At RSA Conference 2026 (RSAC), we’re unveiling the next wave of Sentinel innovations designed to help organizations move faster, see deeper, and defend smarter with AI-ready tools. These updates include AI-driven playbooks that accelerate SOC automation, Granular Delegated Admin Privileges (GDAP) and granular role-based access controls (RBAC) that let you scale your SOC, accelerated data onboarding through new connectors, and data federation that enables analysis in place without duplication. Together, they give teams greater clarity, control, and speed. Come see us at RSAC to view these innovations in action. Hear from Sentinel leaders during our exclusive Microsoft Pre-Day, then visit Microsoft booth #5744 for demos, theater sessions, and conversations with Sentinel experts. Read on to explore what’s new. See you at RSAC! Sentinel feature innovations: Sentinel SIEM Sentinel data lake Sentinel graph Sentinel MCP Threat Intelligence Microsoft Security Store Sentinel promotions Sentinel SIEM Playbook generator [Now in public preview] The Sentinel playbook generator delivers a new era of automation capabilities. You can vibe-code complex automations, integrate with different tools to ensure timely and compliant workflows throughout your SOC, and feel confident in the results with built-in testing and documentation. Customers and partners are already seeing the benefits of this innovation. “The playbook generator gives security engineers the flexibility and speed of AI-assisted coding while delivering the deterministic outcomes that enterprise security operations require. 
It's the best of both worlds, and it lives natively in Defender where the engineers already work.” – Jaime Guimera Coll | Security and AI Architect | BlueVoyant Learn more about playbook generator. SIEM migration experience [General availability now] The Sentinel SIEM migration experience helps you plan and execute SIEM migrations through a guided, in-product workflow. You can upload Splunk or QRadar exports to generate recommendations for best‑fit Sentinel analytics rules and required data connectors, then assess migration scope, validate detection coverage, and migrate from Splunk or QRadar to Sentinel in phases while tracking progress. “The tool helps turn a Splunk to Sentinel migration into a practical decision process. It gives clear visibility into which detections are relevant, how they align to real security use cases, and where it makes sense to enable or prioritize coverage—especially with cost and data sources in mind.” – Deniz Mutlu | Director | Swiss Post Cybersecurity Ltd Learn more about SIEM migration experience. GDAP, unified RBAC, and row-level RBAC for Sentinel [Public preview, April 1] As Sentinel environments grow for enterprises, MSSPs, hyperscalers, and partners operating across shared or multiple environments, the challenge becomes managing access control efficiently and consistently at scale. Sentinel’s expanded permissions and access capabilities are designed to meet these needs. Granular Delegated Admin Privileges (GDAP) lets you streamline management across multiple governed tenants using your primary account, based on existing GDAP relationships. Unified RBAC allows you to opt in to managing permissions for Sentinel workspaces through a single pane of glass, configuring and enforcing access across Sentinel experiences in the analytics tier and data lake in the Defender portal. This simplifies administration and improves operational efficiency by reducing the number of permission models you need to manage. 
Row-level RBAC scoping within tables enables precise, scoped access to data in the Sentinel data lake. Multiple SOC teams can operate independently within a shared Sentinel environment, querying only the data they are authorized to see, without separating workspaces or introducing complex data flow changes. Consistent, reusable scope definitions ensure permissions are applied uniformly across tables and experiences, while maintaining strong security boundaries. To learn more, read our technical deep dives on RBAC and GDAP. Sentinel data lake Sentinel data federation [Public preview, April 1] Sentinel data federation lets you analyze security data in place without copying or duplicating your data. Powered by Microsoft Fabric, you can now federate data from Fabric, Azure Data Lake Storage (ADLS), and Azure Databricks into Sentinel data lake. Federated data appears alongside native Sentinel data, so you can use familiar tools like KQL hunting, notebooks, and custom graphs to correlate signals and investigate across your entire digital estate, all while preserving governance and compliance. You can start analyzing data in place and progressively ingest data into Sentinel for deeper security insights, advanced automation, and AI-powered defense at scale. You are billed only when you run analytics on federated data using existing Sentinel data lake query and advanced insights meters. Sentinel cost estimation tool [Public preview, April 9] The new Sentinel cost estimation tool offers all Microsoft customers and partners a guided, meter-level cost estimation experience that makes pricing transparent and predictable. A built-in three-year cost projection lets you model data growth and ramp-up over time, anticipate spend, and avoid surprises. Get transparent estimates of spend as you scale your security operations. All other customers can continue to use the Azure calculator for Sentinel pricing estimates. 
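The three-year projection idea can be sketched as a simple growth model. This is illustrative only: actual Sentinel pricing uses per-meter rates from the pricing page, and the volumes, growth rate, and per-GB price below are hypothetical.

```python
# Hypothetical three-year ramp model: daily ingest grows each year,
# and annual cost is ingest volume times an assumed per-GB rate.
def project_cost(daily_gb: float, price_per_gb: float,
                 yearly_growth: float, years: int = 3) -> list[float]:
    """Return the estimated annual spend for each year of the projection."""
    costs = []
    for year in range(years):
        ingest_gb = daily_gb * ((1 + yearly_growth) ** year) * 365
        costs.append(ingest_gb * price_per_gb)
    return costs

# Example: 50 GB/day today, $1.00/GB (hypothetical rate), 20% yearly growth.
annual = project_cost(daily_gb=50, price_per_gb=1.0, yearly_growth=0.2)
total_3yr = sum(annual)
```

A model like this makes ramp-up visible: in this hypothetical, year-three spend is 44% higher than year one purely from data growth, which is exactly the kind of surprise the in-product projection is meant to surface early.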
See the Sentinel pricing page for more information. Sentinel data connectors A365 Observability connector [Public preview, April 15] Bring AI agent telemetry into the Sentinel data lake to investigate agent behavior, tool usage, prompts, reasoning and execution using hunting, graph, and MCP workflows. GitHub audit log connector using API polling [General availability, March 6] Ingest GitHub enterprise audit logs into Sentinel to monitor user and administrator activity, detect risky changes, and investigate security events across your development environment. Google Kubernetes Engine (GKE) connector [General availability, March 6] Collect Google Kubernetes Engine (GKE) audit and workload logs in Sentinel to monitor cluster activity, analyze workload behavior, and detect security threats across Kubernetes environments. Microsoft Entra and Azure Resource Graph (ARG) connector enhancements [Public preview, April 15] Enable new Entra assets (EntraDevices, EntraOrgContacts) and ARG assets (ARGRoleDefinitions) in existing asset connectors, expanding inventory coverage and powering richer, built‑in graph experiences for greater visibility. With over 350 Sentinel data connectors, customers achieve broad visibility into complex digital environments and can expand their security operations effectively. “Microsoft Sentinel data lake forms the core of our agentic SOC. By unifying large volumes of Microsoft and third-party data, enabling graph-based analysis, and supporting MCP-driven workflows, it allows us to investigate faster, at lower cost, and with greater confidence.” – Øyvind Bergerud | Head of Security Operations | Storebrand Learn more about Sentinel data connectors. Sentinel connector builder agent using Sentinel Visual Studio Code extension [Public preview, March 31] Build Sentinel data connectors in minutes instead of weeks using the AI‑assisted Connector Builder agent in Visual Studio Code. 
This low‑code experience guides developers and ISVs end-to-end, automatically generating schemas, deployment assets, connector UI, secure secret handling, and polling logic. Built‑in validation surfaces issues early, so you can validate event logs before deployment and ingestion. Example prompt in GitHub Copilot Chat: @sentinel-connector-builder Create a new connector for OpenAI audit logs using https://api.openai.com/v1/organization/audit_logs Get started with custom connectors and learn more in our blog. Data filtering and splitting [Public preview, March 30] As security teams ingest more data, the challenge shifts from scale to relevance. With filtering and splitting now built into the Defender portal, teams can shape data before it lands in Sentinel, without switching tools or managing custom JSON files. Define simple KQL‑based transformations directly in the UI to filter low‑value events and intelligently route data, making ingestion optimization faster, more intuitive, and easier to manage at scale. Filtering at ingest time allows you to remove low-value or benign events to reduce noise, cut unnecessary processing, and ensure that high-signal data drives detections and investigations. Splitting enables intelligent routing of data between the analytics tier and the data lake tier based on relevance and usage. Together, these two capabilities help you balance cost and performance while scaling data ingestion sustainably as your digital estate grows. Create workbook reports directly from the data lake [Public preview, April 1] Sentinel workbooks can now directly run on the data lake using KQL, enabling you to visualize and monitor security data straight from the data lake. By selecting the data lake as the workbook data source, you can now create trend analysis and executive reporting. 
Sentinel graph Custom graphs [Public preview, April 1] Custom graphs let you build tailored security graphs tuned to your unique security scenarios using data from Sentinel data lake as well as non-Microsoft sources. With custom graphs, powered by Fabric, you can build, query, and visualize connected data, uncover hidden patterns and attack paths, and surface risks that are hard to detect when data is analyzed in isolation. These graphs provide the knowledge context that enables AI-powered agent experiences to work more effectively, speeding investigations, revealing blast radius, and helping you move from noisy, disconnected alerts to confident decisions at scale. In the words of our preview customers: “We ingested our Databricks management-plane telemetry into the Sentinel data lake and built a custom security graph. Without writing a single detection rule, the graph surfaced unusual patterns of activity and overprivileged access that we escalated for investigation. We didn't know what we were looking for, the graph surfaced the risk for us by revealing anomalous activity patterns and unusual access combinations driven by relationships, not alerts.” – SVP, Security Solutions | Financial Services organization Custom graph API usage for creating and querying graphs will be billed starting April 1, 2026, according to the Sentinel graph meter. Creating custom graphs Using the Sentinel VS Code extension, you can generate graphs to validate hunting hypotheses, such as understanding attack paths and blast radius of a phishing campaign, reconstructing multi‑step attack chains, and identifying structurally unusual or high‑risk behavior, making it accessible to your team and AI agents. Once persisted via a scheduled job, you can access these custom graphs from the ready-to-use section in the graph experience in the Defender portal. 
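A phishing-campaign graph like the one described can be pictured as adjacency data you expand one hop at a time to trace blast radius. The toy Python sketch below uses invented node names; in the product, this exploration runs as GQL queries over the Sentinel graph rather than an in-memory dictionary.

```python
# Toy security graph: each node maps to the entities it connects to.
graph = {
    "phish_email": ["user_alice"],
    "user_alice": ["device_laptop01", "mailbox_alice"],
    "device_laptop01": ["server_fin01"],
}

def next_hop(node: str) -> list[str]:
    """Expand one hop from the selected node, as a click would in the UI."""
    return graph.get(node, [])

# Tracing blast radius from the phishing email, hop by hop:
hop1 = next_hop("phish_email")   # the user who received the mail
hop2 = next_hop(hop1[0])         # assets that user touches
```

Hop-by-hop expansion is what makes relationship-driven risk visible: each click widens the neighborhood of a suspect entity instead of forcing you to join tables manually.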
Graphs experience in the Microsoft Defender portal After creating your custom graphs, you can access them in the graphs section of the Defender portal under Sentinel. From there, you’ll be able to perform interactive graph-based investigations, such as using a graph built for phishing analysis to help you quickly evaluate the impact of a recent incident, profile the attacker, and trace their paths across Microsoft telemetry and third-party data. The new graph experience lets you run Graph Query Language (GQL) queries, view the graph schema, visualize the graph, view graph results in tabular format, and interactively traverse the graph to the next hop with a simple click. Sentinel MCP Sentinel MCP entity analyzer [General availability, April 1] Entity analyzer provides reasoned, out-of-the-box risk assessments that help you quickly understand whether a URL or user identity represents potential malicious activity. The capability analyzes data across modalities including threat intelligence, prevalence, and organizational context to generate clear, explainable verdicts you can trust. Entity analyzer integrates easily with your agents through Sentinel MCP server connections to first-party and third-party AI runtime platforms, or with your SOAR workflows through Logic Apps. The entity analyzer is also a trusted foundation for the Defender Triage Agent and delivers more accurate alert classifications and deeper investigative reasoning. This removes the need to manually engineer evaluation logic and creates trust for analysts and AI agents to act with higher accuracy and confidence. Learn more about entity analyzer in our blog. Entity analyzer will be billed starting April 1, 2026, based on Security Compute Units (SCU) consumption. Learn more about MCP billing. 
Sentinel MCP graph tool collection [Public preview, April 20] Graph tool collection helps you visualize and explore relationships between identities, device assets, threats, and activity signals ingested by data connectors and surfaced by analytics rules. The tool provides a clear graph view that highlights dependencies and configuration gaps, which makes it easier to understand how content interacts across your environment. This helps security teams assess coverage, optimize content deployment, and identify areas that may need tuning or additional data sources, all from a single, interactive workspace. Executing graph queries via the MCP tools will trigger the graph meter. Claude MCP connector [Public preview, April 1] Anthropic Claude can connect to Sentinel through a custom MCP connector, giving you AI-assisted analysis across your Sentinel environment. Microsoft provides step-by-step guidance for configuring a custom connector in Claude that securely connects to a Sentinel MCP server. With this connection you can summarize incidents, investigate alerts, and reason over security signals while keeping data inside Microsoft's security boundary. Access to large language models (LLMs) is managed through Microsoft authentication and role-based controls, supporting faster triage and investigation workflows while maintaining compliance and visibility. Threat Intelligence CVEs of interest in the Threat Intelligence Briefing Agent [Public preview in April] The Threat Intelligence Briefing Agent delivers curated intelligence based on your organization’s configuration, preferences, and unique industry and geographic needs. CVEs of interest highlights vulnerabilities actively discussed across the security landscape and assesses their potential impact on your environment, delivering more timely threat intelligence insights. 
The agent automatically incorporates internet exposure data powered by the Sentinel platform to surface threats targeting technologies exposed in your organization. Together, these enhancements help you focus faster on the threats that matter most, without manual investigation. Microsoft Security Store Security Store embedded in Entra [General availability, March 23] As identity environments grow more complex, teams need to move faster and extend Entra with trusted third‑party capabilities that address operational, compliance, and risk challenges. The Security Store embedded directly into Entra lets you discover and adopt Entra‑ready agents and solutions in your workflow. You can extend Entra with identity‑focused agents that surface privileged access risk, identity posture gaps, network access insights, and overall identity health, turning identity data into clear recommendations and reports teams can use immediately. You can also enhance Entra with Verified ID and External ID integrations that strengthen identity verification, streamline account recovery, and reduce fraud across workforce, consumer, and external identities. Security Store embedded in Microsoft Purview [General availability, March 31] Extending data security across the digital estate requires visibility and enforcement into new data sources and risk surfaces, often requiring a partnered approach. The Security Store embedded directly into Purview lets you discover and evaluate integrated solutions inside your data security workflows. Relevant partner capabilities surface alongside context, making it easier to strengthen data protection, address regulatory requirements, and respond to risk without disrupting existing processes. You can quickly assess which solutions align to data security scenarios, especially with respect to securing AI use, and how they can leverage established classifiers, policies, and investigation workflows in Purview. 
Keeping integration discovery in‑flow and purchases centralized through the Security Store means you move faster from evaluation to deployment, reducing friction and maintaining a secure, consistent transaction experience. Security Store Advisor [General availability, March 23] Security teams today face growing complexity and choice. Teams often know the security outcome they need, whether that's strengthening identity protection, improving ransomware resilience, or reducing insider risk, but lack a clear, efficient way to determine which solutions will help them get there. Security Store Advisor provides a guided, natural-language discovery experience that shifts security evaluation from product‑centric browsing to outcome‑driven decision‑making. You can describe your goal in plain language, and the Advisor surfaces the most relevant Microsoft and partner agents, solutions, and services available in the Security Store, without requiring deep product knowledge. This approach simplifies discovery, reduces time spent navigating catalogs and documentation, and helps you understand how individual capabilities fit together to deliver meaningful security outcomes. Sentinel promotions Extending signups for promotional 50 GB commitment tier [Through June 2026] The Sentinel promotional 50 GB commitment tier offers small and mid-sized organizations a cost-effective entry point into Sentinel. Sign up for the 50 GB commitment tier until June 30, 2026, and maintain the promotional rate until March 31, 2027. This promotion is available globally with regional variations in pricing and accessible through EA, CSP, and Direct channels. Visit the Sentinel pricing page for details and to get started. 
Sentinel RSAC 2026 sessions
All week – Sentinel product demos, Microsoft Booth #5744
Mon Mar 23, 3:55 PM – RSAC 2026 main stage keynote with CVP Vasu Jakkal [KEY-M10W]: Ambient and autonomous security: Building trust in the agentic AI era
Tue Mar 24, 10:30 AM – Live Q&A session, Microsoft booth #5744 and online: Ask me anything with Microsoft Security SMEs and real practitioners
Tue Mar 24, 11 AM – Sentinel data lake theater session, Microsoft booth #5744: From signals to insights: How Microsoft Sentinel data lake powers modern security operations
Tue Mar 24, 2 PM – Sentinel SIEM theater session, Microsoft booth #5744: Vibe-coding SecOps automations with the Sentinel playbook generator
Wed Mar 25, 12 PM – Executive event at Palace Hotel with Threat Protection GM Scott Woodgate: The AI risk equation: Visibility, control, and threat acceleration
Wed Mar 25, 1:30 PM – Sentinel graph theater session, Microsoft booth #5744: Bringing knowledge-driven context to security with Microsoft Sentinel graph
Wed Mar 25, 5 PM – MISA theater session, Microsoft booth #5744: Cut SIEM costs without reducing protection: A Sentinel data lake case study
Thu Mar 26, 1 PM – Security Store theater session, Microsoft booth #5744: What's next for Security Store: Expanding in portal and smarter discovery
All week – 1:1 meetings with Microsoft security experts: Meet with Microsoft Defender and Sentinel SIEM and Defender Security Operations
Additional resources
Sentinel data lake video playlist – Explore the full capabilities of Sentinel data lake as a unified, AI-ready security platform that is deeply integrated into the Defender portal
Sentinel data lake FAQ blog – Get answers to many of the questions we’ve heard from our customers and partners on Sentinel data lake and billing
AI‑powered SIEM migration experience ninja training – Walk through the SIEM migration experience, see how it maps detections, surfaces connector requirements, and supports phased migration decisions
SIEM migration experience 
documentation – Learn how the SIEM migration experience analyzes your exports, maps detections and connectors, and recommends prioritized coverage
Accenture collaborates with Microsoft to bring agentic security and business resilience to the front lines of cyber defense
Stay connected
Check back each month for the latest innovations, updates, and events to ensure you’re getting the most out of Sentinel. We’ll see you in the next edition!
Accelerate Agent Development: Hacks for Building with Microsoft Sentinel data lake
As a Senior Product Manager | Developer Architect on the App Assure team working to bring Microsoft Sentinel and Security Copilot solutions to market, I interact with many ISVs building agents on Microsoft Sentinel data lake for the first time. I’ve written this article to walk you through one possible approach for agent development – the process I use when building sample agents internally at Microsoft. If you have questions about this, or other methods for building your agent, App Assure offers guidance through our Sentinel Advisory Service. Throughout this post, I include screenshots and examples from Gigamon’s Security Posture Insight Agent. This article assumes you have: An existing SaaS or security product with accessible telemetry. A small ISV team (2–3 engineers + 1 PM). A focus on a single high-value scenario for the first agent. The Composite Application Model (What You Are Building) When I begin designing an agent, I think end-to-end, from data ingestion requirements through agentic logic, following the Composite Application Model. The Composite Application Model consists of five layers: Data Sources – Your product’s raw security, audit, or operational data. Ingestion – Getting that data into Microsoft Sentinel. Sentinel data lake & Microsoft Graph – Normalization, storage, and correlation. Agent – Reasoning logic that queries data and produces outcomes. End User – Security Copilot or SaaS experiences that invoke the agent. This separation lets you evolve data ingestion and agent logic in parallel. It also helps avoid downstream surprises that require going back and rearchitecting the entire solution. Optional Prerequisite: Enroll in the ISV Success Program so you can earn Azure Credits to provision Security Compute Units (SCUs) for Security Copilot agents. 
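The five-layer flow above can be sketched as a chain of small functions, which is also a useful way to see why the layers can evolve independently. This is a toy illustration: the function names, event shapes, and table name are invented, not an SDK.

```python
# Illustrative composite-application pipeline: one function per layer,
# so ingestion changes never force a rewrite of the agent layer (or vice versa).
def data_source() -> list[dict]:
    return [{"event": "login_failure", "user": "alice"}]     # raw product telemetry

def ingest(raw: list[dict]) -> list[dict]:
    return [{**e, "ingested": True} for e in raw]            # data lands in Sentinel

def normalize(rows: list[dict]) -> list[dict]:
    return [{**r, "table": "Custom_ISV_CL"} for r in rows]   # data lake / Graph layer

def agent(rows: list[dict]) -> str:
    # Reasoning layer: query the normalized data and produce an outcome.
    suspicious = [r for r in rows if r["event"] == "login_failure"]
    return f"{len(suspicious)} suspicious sign-in(s) found"

# End-user layer: Security Copilot or a SaaS experience invokes the agent.
verdict = agent(normalize(ingest(data_source())))
```

Because each stage only depends on the shape of its input, you can swap the ingestion mechanism (CCF pull vs. push) or rework the agent's reasoning without touching the other layers, which is the rearchitecting risk the model is meant to avoid.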
Phase 1: Data Ingestion Design & Implementation Choose Your Ingestion Strategy The first choice I face when designing an agent is how the data is going to flow into my Sentinel workspace. Below I document two primary methods for ingestion. Option A: Codeless Connector Framework (CCF) This is the best option for ISVs with REST APIs. To build a CCF solution, reference our documentation for getting started. Option B: CCF Push (Public Preview) In this instance, an ISV pushes events directly to Sentinel via a CCF Push connector. Our MS Learn documentation is a great place to get started using this method. Additional Note: In the event you find that CCF does not support your needs, reach out to App Assure so we can capture your requirements for future consideration. Azure Functions remains an option if you’ve documented your CCF feature needs. Phase 2: Onboard to Microsoft Sentinel data lake Once my data is flowing into Sentinel, I onboard a single Sentinel workspace to data lake. This is a one-time action and cannot be repeated for additional workspaces. Onboarding Steps Go to the Defender portal. Follow the Sentinel Data lake onboarding instructions. Validate that tables are visible in the lake. See Running KQL Queries in data lake for additional information. Phase 3: Build and Test the Agent in Microsoft Foundry Once my data is successfully ingested into data lake, I begin the agent development process. There are multiple ways to build agents depending on your needs and tooling preferences. For this example, I chose Microsoft Foundry because it fit my needs for real-time logging, cost efficiency, and greater control. 1. Create a Microsoft Foundry Instance Foundry is used as a tool for your development environment. Reference our QuickStart guide for setting up your Foundry instance. Required Permissions: Security Reader (Entra or Subscription) Azure AI Developer at the resource group After setup, click Create Agent. 2. 
Design the Agent A strong first agent: Solves one narrow security problem. Has deterministic outputs. Uses explicit instructions, not vague prompts. Example agent responsibilities: To query Sentinel data lake (Sentinel data exploration tool). To summarize recent incidents. To correlate ISV-specific signals with Sentinel alerts and other ISV tables (Sentinel data exploration tool). 3. Implement Agent Instructions Well-designed agent instructions should include: Role definition ("You are a security investigation agent…"). Data sources it can access. Step-by-step reasoning rules. Output format expectations. Sample Instructions can be found here: Agent Instructions 4. Configure the Microsoft Model Context Protocol (MCP) tooling for your agent For your agent to query, summarize, and correlate all the data your connector has sent to data lake, take the following steps: Select Tools, and under Catalog, type Sentinel, and then select Microsoft Sentinel Data Exploration. For more information about the data exploration tool collection in MCP server, see our documentation. I always test repeatedly with real data until outputs are consistent. For more information on testing and validating the agent, please reference our documentation. Phase 4: Migrate the Agent to Security Copilot Once the agent works in Foundry, I migrate it to Security Copilot. To do this: Copy the full instruction set from Foundry. Provision an SCU for your Security Copilot workspace. For instructions, please reference this documentation. Make note of this process, as you will be charged per hour per SCU. Once you are done testing, deprovision the capacity to prevent additional charges. Open Security Copilot and use Create From Scratch Agent Builder as outlined here. Add Sentinel data exploration MCP tools (these are the same instructions from the Foundry agent in the previous step). For more information on linking the Sentinel MCP tools, please refer to this article. 
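Well-designed instructions with the four parts described above (role definition, data sources, reasoning rules, output format) can be assembled into a template along these lines. The wording below is a hypothetical sketch for illustration, not the linked sample instructions, and "contoso-isv" is a made-up product name.

```python
# Hypothetical agent-instruction template covering the four recommended parts.
AGENT_INSTRUCTIONS = """
Role: You are a security investigation agent for contoso-isv telemetry.

Data sources: You may query the Sentinel data lake via the
Microsoft Sentinel Data Exploration tool only.

Reasoning rules:
1. Retrieve the relevant rows before drawing any conclusion.
2. Correlate ISV signals with Sentinel alerts on shared entities.
3. If data is missing, say so; never invent findings.

Output format: A short summary, then a table of affected entities.
""".strip()

# Sanity-check that every recommended component is present before deploying.
components = ["Role:", "Data sources:", "Reasoning rules:", "Output format:"]
missing = [c for c in components if c not in AGENT_INSTRUCTIONS]
```

Keeping the instructions in one versioned string like this also makes the Phase 4 migration mechanical: the same text is copied from Foundry into the Security Copilot agent builder.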
Paste and adapt instructions. At this stage, I always validate the following: Agent Permissions – I have confirmed the agent has the necessary permissions to interact with the MCP tool and read data from the data lake instance. Agent Performance – I have confirmed a successful interaction with measured latency and benchmark results. This step intentionally avoids reimplementation. I am reusing proven logic. Phase 5: Execute, Validate, and Publish After setting up my agent, I navigate to the Agents tab to manually trigger the agent. For more information on testing an agent, you can refer to this article. Now that the agent has been executed successfully, I download the agent Manifest file from the environment so that it can be packaged. Click View code on the Agent under the Build tab as outlined in this documentation. Publishing to the Microsoft Security Store If I were publishing my agent to the Microsoft Security Store, these are the steps I would follow: Finalize ingestion reliability. Document required permissions. Define supported scenarios clearly. Package agent instructions and guidance (by following these instructions). Summary Based on my experience developing Security Copilot agents on Microsoft Sentinel data lake, this playbook provides a practical, repeatable framework for ISVs to accelerate their agent development and delivery while maintaining high standards of quality. This foundation enables rapid iteration: future agents can often be built in days, not weeks, by reusing the same ingestion and data lake setup. When starting on your own agent development journey, keep the following in mind: Limit initial scope. Reuse Microsoft-managed infrastructure. Separate ingestion from intelligence. What Success Looks Like At the end of this development process, you will have the following: A Microsoft Sentinel data connector live in Content Hub (or in process) that provides a data ingestion path. Data visible in data lake. 
A tested agent running in Security Copilot. Clear documentation for customers. A key success factor I look for is clarity over completeness. A focused agent is far more likely to be adopted. Need help? If you have any issues as you work to develop your agent, please reach out to the App Assure team for support via our Sentinel Advisory Service. Or if you have any other tips, please comment below; I’d love to hear your feedback.
RSAC 2026: New Microsoft Sentinel Connectors Announcement
Microsoft Sentinel helps organizations detect, investigate, and respond to security threats across increasingly complex environments. With the rollout of the Microsoft Sentinel data lake in the fall, and the App Assure-backed Sentinel promise that supports it, customers now have access to long-term, cost-effective storage for security telemetry, creating a solid foundation for emerging Agentic AI experiences. Since our last announcement at Ignite 2025, the Microsoft Sentinel connector ecosystem has expanded rapidly, reflecting continued investment from software development partners building for our shared customers. These connectors bring diverse security signals together, enabling correlation at scale and delivering richer investigation context across the Sentinel platform. Below is a snapshot of Microsoft Sentinel connectors newly available or recently enhanced since our last announcement, highlighting the breadth of partner solutions contributing data into, and extending the value of, the Microsoft Sentinel ecosystem. New and notable integrations Acronis Cyber Protect Cloud Acronis Cyber Protect Cloud integrates with Microsoft Sentinel to bring data protection and security telemetry into a centralized SOC view. The connector streams alerts, events, and activity data - spanning backup, endpoint protection, and workload security - into Microsoft Sentinel for correlation with other signals. This integration helps security teams investigate ransomware and data-centric threats more effectively, leverage built-in hunting queries and detection rules, and improve visibility across managed environments without adding operational complexity. Anvilogic Anvilogic integrates with Microsoft Sentinel to help security teams operationalize detection engineering at scale. The connector streams Anvilogic alerts into Microsoft Sentinel, giving SOC analysts centralized visibility into high-fidelity detections and faster context for investigation and triage. 
By unifying detection workflows, reducing alert noise, and improving prioritization, this integration supports more efficient threat detection and response while helping teams extend coverage across evolving attack techniques. CyberArk Audit CyberArk Audit integrates with Microsoft Sentinel to centralize visibility into privileged identity and access activity. By streaming detailed audit logs - covering system events, user actions, and administrative activity - into Microsoft Sentinel, security teams can correlate identity-driven risks with broader security telemetry. This integration supports faster investigations, improved monitoring of privileged access, and more effective incident response through automated workflows and enriched context for SOC analysts. Cyera Cyera integrates with Microsoft Sentinel to extend AI-native data security posture management into security operations. The connector brings Cyera’s data context and actionable intelligence across multi-cloud, on-premises, and SaaS environments into Microsoft Sentinel, helping teams understand where sensitive data resides and how it is accessed, exposed, and used. Built on Sentinel’s modern framework, the integration feeds context-rich data risk signals into the Sentinel data lake, enabling more informed threat hunting, automation, and decision-making around data, user, and AI-related risk. TacitRed CrowdStrike IOC Automation Data443 TacitRed CS IOC Automation integrates with Microsoft Sentinel to streamline the operationalization of compromised credential intelligence. The solution uses Sentinel playbooks to automatically push TacitRed indicators of compromise into CrowdStrike, helping security teams turn identity-based threat intelligence into action. By automating IOC handling and reducing manual effort, this integration supports faster response to credential exposure and strengthens protection against account-driven attacks across the environment. 
TacitRed SentinelOne IOC Automation Data443 TacitRed SentinelOne IOC Automation integrates with Microsoft Sentinel to help operationalize identity-focused threat intelligence at the endpoint layer. The solution uses Sentinel playbooks and API-based enforcement to automatically consume TacitRed indicators and push curated indicators into SentinelOne, enabling faster enforcement of high-risk IOCs without manual handling. By automating the flow of compromised credential intelligence from Sentinel into EDR, this integration supports quicker response to identity-driven attacks and improves coordination between threat intelligence and endpoint protection workflows. TacitRed Threat Intelligence Data443 TacitRed Threat Intelligence integrates with Microsoft Sentinel to provide enhanced visibility into identity-based risks, including compromised credentials and high-risk user exposure. The solution ingests curated TacitRed intelligence directly into Sentinel, enriching incidents with context that helps SOC teams identify credential-driven threats earlier in the attack lifecycle. With built-in analytics, workbooks, and hunting queries, this integration supports proactive identity threat detection, faster triage, and more informed response across the SOC. Cyren Threat Intelligence Cyren Threat Intelligence integrates with Microsoft Sentinel to enhance detection of network-based threats using curated IP reputation and malware URL intelligence. The connector ingests Cyren threat feeds into Sentinel using the Codeless Connector Framework (CCF), transforming raw indicators into actionable insights, dashboards, and enriched investigations. By adding context to suspicious traffic and phishing infrastructure, this integration helps SOC teams improve alert accuracy, accelerate triage, and make more confident response decisions across their environments. 
TacitRed Defender Threat Intelligence Data443 TacitRed Defender Threat Intelligence integrates with Microsoft Sentinel to surface early indicators of credential exposure and identity-driven risk. The solution automatically ingests compromised credential intelligence from TacitRed into Sentinel and can support synchronization of validated indicators with Microsoft Defender Threat Intelligence through Sentinel workflows, helping SOC teams detect account compromise before abuse occurs. By enriching Sentinel incidents with actionable identity context, this integration supports faster triage, proactive remediation, and stronger protection against credential-based attacks. Datawiza Access Proxy (DAP) Datawiza Access Proxy integrates with Microsoft Sentinel to provide centralized visibility into application access and authentication activity. By streaming access and MFA logs from Datawiza into Sentinel, security teams can correlate identity and session-level events with broader security telemetry. This integration supports detection of anomalous access patterns, faster investigation through session traceability, and more effective response using Sentinel automation, helping organizations strengthen Zero Trust controls and meet auditing and compliance requirements. Endace Endace integrates with Microsoft Sentinel to deliver deep network visibility through always-on, packet-level evidence. The connector enables one-click pivoting from Sentinel alerts directly to recorded packet data captured by EndaceProbes. This helps SOC and NetOps teams reconstruct events and validate threats with confidence. By combining Sentinel’s AI-driven analytics with Endace’s always-on, full-packet capture across on-premises, hybrid, and cloud environments, this integration supports faster investigations, improved forensic accuracy, and more decisive incident response. 
Feedly Feedly integrates with Microsoft Sentinel to ingest curated threat intelligence directly into security operations workflows. The connector automatically imports Indicators of Compromise (IoCs) from Feedly Team Boards and folders into Sentinel, enriching detections and investigations with context from the original intelligence articles. By bringing analyst‑curated threat intelligence into Sentinel in a structured, automated way, this integration helps security teams stay current on emerging threats and reduce the manual effort required to operationalize external intelligence. Gigamon Gigamon integrates with Microsoft Sentinel through a new connector that provides access to Gigamon Application Metadata Intelligence (AMI), delivering high-fidelity network-derived telemetry with rich application metadata from inspected traffic directly into Sentinel. This added context helps security teams detect suspicious activity, encrypted threats, and lateral movement faster and with greater precision. By enriching analytics without requiring full packet ingestion, organizations can reduce noise, manage SIEM costs, and extend visibility across hybrid cloud infrastructure. Halcyon Halcyon integrates with Microsoft Sentinel to provide purpose-built ransomware detection and automated containment across the Microsoft security ecosystem. The connector surfaces Halcyon ransomware alerts directly within Sentinel, enabling SOC teams to correlate ransomware behavior with Microsoft Defender and broader Microsoft telemetry. By supporting Sentinel analytics and automation workflows, this integration helps organizations detect ransomware earlier, investigate faster using native Sentinel tools, and isolate affected endpoints to prevent lateral spread and reinfection. Illumio The Illumio platform identifies and contains threats across hybrid multi-cloud environments. 
By integrating AI-driven insights with Microsoft Sentinel and Microsoft Graph, Illumio Insights enables SOC analysts to visualize attack paths, prioritize high-risk activity, and investigate threats with greater precision. Illumio Segmentation secures critical assets, workloads, and devices and then publishes segmentation policy back into Microsoft Sentinel to support compliance monitoring. Joe Sandbox Joe Sandbox integrates with Microsoft Sentinel to enrich incidents with dynamic malware and URL analysis. The connector ingests Joe Sandbox threat intelligence and automatically detonates suspicious files and URLs associated with Sentinel incidents, returning behavioral and contextual analysis results directly into investigation workflows. By adding sandbox-driven insights to indicators, alerts, and incident comments, this integration helps SOC teams validate threats faster, reduce false positives, and improve response decisions using deeper visibility into malicious behavior. Keeper Security The Keeper Security integration with Microsoft Sentinel brings advanced password and secrets management telemetry into your SIEM environment. By streaming audit logs and privileged access events from Keeper into Sentinel, security teams gain centralized visibility into credential usage and potential misuse. The connector supports custom queries and automated playbooks, helping organizations accelerate investigations, enforce Zero Trust principles, and strengthen identity security across hybrid environments. Lookout Mobile Threat Defense (MTD) Lookout Mobile Threat Defense integrates with Microsoft Sentinel to extend SOC visibility to mobile endpoints across Android, iOS, and Chrome OS. The connector streams device, threat, and audit telemetry from Lookout into Sentinel, enabling security teams to correlate mobile risk signals, such as phishing, malicious apps, and device compromise, with broader enterprise security data. 
By incorporating mobile threat intelligence into Sentinel analytics, dashboards, and alerts, this integration helps organizations detect mobile-driven attacks earlier and strengthen protection for an increasingly mobile workforce. Miro Miro integrates with Microsoft Sentinel to provide centralized visibility into collaboration activity across Miro workspaces. The connector ingests organization-wide audit logs and content activity logs into Sentinel, enabling security teams to monitor authentication events, administrative actions, and content changes alongside other enterprise signals. By bringing Miro collaboration telemetry into Sentinel analytics and dashboards, this integration helps organizations detect suspicious access patterns, support compliance and eDiscovery needs, and maintain stronger oversight of collaborative environments without disrupting productivity. Obsidian Activity Threat The Obsidian Threat and Activity Feed for Microsoft Sentinel delivers deep visibility into SaaS and AI applications, helping security teams detect account compromise and insider threats. By streaming user behavior and configuration data into Sentinel, organizations can correlate application risks with enterprise telemetry for faster investigations. Prebuilt analytics and dashboards enable proactive monitoring, while automated playbooks simplify response workflows, strengthening security posture across critical cloud apps. OneTrust for Purview DSPM OneTrust integrates with Microsoft Sentinel to bring privacy, compliance, and data governance signals into security operations workflows. The connector enriches Sentinel with privacy-relevant events and risk indicators from OneTrust, helping organizations detect sensitive data exposure, oversharing, and compliance risks across cloud and non-Microsoft data sources. 
By unifying privacy intelligence with Sentinel analytics and automation, this integration enables security and privacy teams to respond more quickly to data risk events and support responsible data use and AI-ready governance. Pathlock Pathlock integrates with Microsoft Sentinel to bring SAP-specific threat detection and response signals into centralized security operations. The connector forwards security-relevant SAP events into Sentinel, enabling SOC teams to correlate SAP activity with broader enterprise telemetry and investigate threats using familiar SIEM workflows. By enriching Sentinel with SAP security context and focused detection logic, this integration helps organizations improve visibility into SAP landscapes, reduce noise, and accelerate detection and response for risks affecting critical business systems. Quokka Q-scout Quokka Q-scout integrates with Microsoft Sentinel to centralize mobile application risk intelligence across Microsoft Intune-managed devices. The connector automatically ingests app inventories from Intune, analyzes them using Quokka’s mobile app vetting engines, and streams security, privacy, and compliance risk findings into Sentinel. By surfacing app-level risks through Sentinel analytics and alerts, this integration helps security teams identify malicious or high-risk mobile apps, prioritize remediation, and strengthen mobile security posture without deploying agents or disrupting users. Synqly Synqly integrates with Microsoft Sentinel to simplify and scale security integrations through a unified API approach. The connector enables organizations and security vendors to establish a bi‑directional connection with Sentinel without relying on brittle, point‑to‑point integrations. 
By abstracting common integration challenges such as authentication handling, retries, and schema changes, Synqly helps teams orchestrate security data flows into and out of Sentinel more reliably, supporting faster onboarding of new data sources and more maintainable integrations at scale. Versasec vSEC:CMS Versasec vSEC:CMS integrates with Microsoft Sentinel to provide centralized visibility into credential lifecycle and system health events. The connector securely streams vSEC:CMS and vSEC:CLOUD alerts and status data into Sentinel using the Codeless Connector Framework (CCF), transforming credential management activity into correlation-ready security signals. By bringing smart card, token, and passkey management telemetry into Sentinel, this integration helps security teams monitor authentication infrastructure health, investigate credential-related incidents, and unify identity security operations within their SIEM workflows. VirtualMetric DataStream VirtualMetric DataStream integrates with Microsoft Sentinel to optimize how security telemetry is collected, normalized, and routed across the Microsoft security ecosystem. Acting as a high-performance telemetry pipeline, DataStream intelligently filters and enriches logs, sending high-value security data to Sentinel while routing less-critical data to Sentinel data lake or Azure Blob Storage for cost-effective retention. By reducing noise upstream and standardizing logs to Sentinel-ready schemas, this integration helps organizations control ingestion costs, improve detection quality, and streamline threat hunting and compliance workflows. VMRay VMRay integrates with Microsoft Sentinel to enrich SIEM and SOAR workflows with automated sandbox analysis and high-fidelity, behavior-based threat intelligence. 
The connector enables suspicious files and phishing URLs to be submitted directly from Sentinel to VMRay for dynamic analysis, while validated, high-confidence indicators of compromise (IOCs) are streamed back into Sentinel’s Threat Intelligence repository for correlation and detection. By adding detailed attack-chain visibility and enriched incident context, this integration helps SOC teams reduce investigation time, improve detection accuracy, and strengthen automated response workflows across Sentinel environments. Zero Networks Segment Audit Zero Networks Segment integrates with Microsoft Sentinel to provide visibility into micro-segmentation and access-control activity across the network. The connector can collect audit logs or activities from Zero Networks Segment, enabling security teams to monitor policy changes, administrative actions, and access events related to MFA-based network segmentation. By bringing segmentation audit telemetry into Sentinel, this integration supports compliance monitoring, investigation of suspicious changes, and faster detection of attempts to bypass lateral-movement controls within enterprise environments. Zscaler Internet Access (ZIA) Zscaler Internet Access integrates with Microsoft Sentinel to centralize cloud security telemetry from web and firewall traffic. The connector enables ZIA logs to be ingested into Sentinel, allowing security teams to correlate Zscaler Internet Access signals with other enterprise data for improved threat detection, investigation, and response. By bringing ZIA web, firewall, and security events into Sentinel analytics and hunting workflows, this integration helps organizations gain broader visibility into internet-based threats and strengthen Zero Trust security operations. 
In addition to these solutions from our third-party partners, we are also excited to announce the following connector published by the Microsoft Sentinel team: GitHub Enterprise Audit Logs

Microsoft’s Sentinel Promise

For Customers
Every connector in the Microsoft Sentinel ecosystem is built to work out of the box. In the unlikely event a customer encounters any issue with a connector, the App Assure team stands ready to assist.

For Software Developers
Software partners that need assistance creating or updating a Sentinel solution can also leverage Microsoft’s Sentinel Promise to support our shared customers. For developers seeking to build agentic experiences utilizing Sentinel data lake, we are excited to announce the launch of our Sentinel Advisory Service to guide developers across their Sentinel journey. Customers and developers alike can reach out to us via our intake form.

Learn More
Microsoft Sentinel data lake
- Microsoft Sentinel data lake: Unify signals, cut costs, and power agentic AI
- Introducing Microsoft Sentinel data lake
- What is Microsoft Sentinel data lake
- Unlocking Developer Innovation with Microsoft Sentinel data lake
Microsoft Sentinel Codeless Connector Framework (CCF)
- Create a codeless connector for Microsoft Sentinel
- Public Preview Announcement: Microsoft Sentinel CCF Push
What’s New in Microsoft Sentinel Monthly Blog
Microsoft App Assure
- App Assure home page
- App Assure services
- App Assure blog
- App Assure Request Assistance Form
- App Assure Sentinel Advisory Services announcement
- App Assure’s promise: Migrate to Sentinel with confidence
- App Assure’s Sentinel promise now extends to Microsoft Sentinel data lake
Ignite 2025 new Microsoft Sentinel connectors announcement
Microsoft Security
- Microsoft’s Secure Future Initiative
- Microsoft Unified SecOps

Ingest Microsoft XDR Advanced Hunting Data into Microsoft Sentinel
I had difficulty finding a guide for querying Microsoft Defender Vulnerability Management Advanced Hunting tables from Microsoft Sentinel for alerting and automation. As a result, I put together this guide to demonstrate how to ingest Microsoft XDR Advanced Hunting query results into Microsoft Sentinel using Azure Logic Apps and a system-assigned managed identity. The solution allows you to:
- Run Advanced Hunting queries on a schedule
- Collect high-risk vulnerability data (or other hunting results)
- Send the results to a Sentinel workspace as custom logs
- Create alerts and automation rules based on this data
This approach avoids credential storage and follows least-privilege and managed-identity best practices.

Prerequisites
Before you begin, ensure you have:
- Microsoft Defender XDR access
- Microsoft Sentinel deployed
- Azure Logic Apps permission
- Application Administrator or higher in Microsoft Entra ID
- PowerShell with Az modules installed
- Contributor access to the Sentinel workspace

Architecture at a Glance
Logic App (Managed Identity)
  ↓
Microsoft XDR Advanced Hunting API
  ↓
Logic App
  ↓
Log Analytics Data Collector API
  ↓
Microsoft Sentinel (Custom Log)

Step 1: Create a Logic App
1. In the Azure portal, go to Logic Apps
2. Create a new Consumption Logic App
3. Choose the appropriate subscription, resource group, and region

Step 2: Enable System-Assigned Managed Identity
1. Open the Logic App
2. Navigate to Settings → Identity
3. Enable System-assigned managed identity
4. Click Save
5. Note the Object ID
This identity will later be granted permission to run Advanced Hunting queries.

Step 3: Locate the Logic App in Entra ID
1. Go to Microsoft Entra ID → Enterprise Applications
2. Change the filter to All Applications
3. Search for your Logic App name
4. Select the app to confirm it exists

Step 4: Grant Advanced Hunting Permissions (PowerShell)
Advanced Hunting permissions cannot be assigned via the portal and must be granted using PowerShell. 
Required Permission
AdvancedQuery.Read.All

PowerShell Script

# Your tenant ID (in the Azure portal, under Azure Active Directory > Overview).
$TenantID = "Your TenantID"
Connect-AzAccount -TenantId $TenantID

# The object ID of the Logic App's system-assigned managed identity (noted in Step 2).
$spID = "Your Managed Identity Object ID"

# Get the service principal for Microsoft Defender ATP by its well-known AppId.
# Keep the full object; selecting only Id would drop the AppRole property used below.
$GraphServicePrincipal = Get-AzADServicePrincipal -Filter "AppId eq 'fc780465-2017-40d4-a0c5-307022471b92'"

# Extract the AdvancedQuery.Read.All app role.
$AppRole = $GraphServicePrincipal.AppRole |
    Where-Object { $_.Value -eq "AdvancedQuery.Read.All" }
# If $AppRole comes up with a blank value, the role ID can be replaced with 93489bf5-0fbc-4f2d-b901-33f2fe08ff05

# Now add the permission to the app to read the advanced queries.
New-AzADServicePrincipalAppRoleAssignment -ServicePrincipalId $spID -ResourceId $GraphServicePrincipal.Id -AppRoleId $AppRole.Id
# Or
New-AzADServicePrincipalAppRoleAssignment -ServicePrincipalId $spID -ResourceId $GraphServicePrincipal.Id -AppRoleId 93489bf5-0fbc-4f2d-b901-33f2fe08ff05

After successful execution, verify the permission under Enterprise Applications → Permissions.

Step 5: Build the Logic App Workflow
Open Logic App Designer and create the following flow:

Trigger
- Recurrence (e.g., every 24 hours)

Run Advanced Hunting Query
- Connector: Microsoft Defender ATP
- Authentication: System-Assigned Managed Identity
- Action: Run Advanced Hunting Query

Sample KQL Query (High-Risk Vulnerabilities)

Send Data to Log Analytics (Sentinel)
On Send Data, create a new connection and provide the information for the workspace where the Sentinel log will live. Obtaining the workspace key is not straightforward; retrieve it with the following PowerShell command. 
Get-AzOperationalInsightsWorkspaceSharedKey `
    -ResourceGroupName "<ResourceGroupName>" `
    -Name "<WorkspaceName>"

Configuration Details
- Workspace ID
- Primary key
- Log Type (example): XDRVulnerability_CL
- Request body: Results array from Advanced Hunting

Step 6: Run the Logic App to Return Results
In the Logic App designer, select Run. If the run succeeds, data is sent to the Sentinel workspace.

Step 7: Validate Data in Microsoft Sentinel
In Sentinel, run the query:
XDRVulnerability_CL
| where TimeGenerated > ago(24h)
If data appears, ingestion is successful.

Step 8: Create Alerts & Automation Rules
Use Sentinel to:
- Create analytics rules for: CVSS > 9, exploit available, new vulnerabilities in the last 24 hours
- Trigger: email notifications, incident creation, SOAR playbooks

Conclusion
By combining Logic Apps, managed identities, Microsoft XDR, and Microsoft Sentinel, you can create a powerful, secure, and scalable pipeline for ingesting hunting intelligence and triggering proactive detections.

What’s New in Microsoft Sentinel and XDR: AI Automation, Data Lake Innovation, and Unified SecOps
The most consequential “new” Microsoft Sentinel / Defender XDR narrative for a deeply technical Microsoft Tech Community article is the operational and engineering shift to unified security operations in the Microsoft Defender portal, including an explicit Azure portal retirement/sunset timeline and concrete migration implications (data tiering, correlation engine changes, schema differences, and automation behavior changes). Official sources now align on March 31, 2027 as the sunset date for managing Microsoft Sentinel in the Azure portal, with customers being redirected to the Defender portal after that date.

The “headline” feature announcements to anchor your article around (because they create new engineering patterns, not just UI changes) are:
- AI playbook generator (preview): Natural-language-driven authoring of Python playbooks in an embedded VS Code environment (Cline), using Integration Profiles for dynamic API calls and an Enhanced Alert Trigger for broader automation triggering across Microsoft Sentinel, Defender, and XDR alert sources.
- CCF Push (public preview): A push-based connector model built on the Azure Monitor Logs Ingestion API, where deploying via Content Hub can automate provisioning of the typical plumbing (DCR/DCE/app registration/RBAC), enabling near-real-time ingestion plus ingestion-time transformations and (per announcement) direct delivery into certain system tables.
- Data lake tier ingestion for Advanced Hunting tables (GA): Direct ingestion of specific Microsoft XDR Advanced Hunting tables into the Microsoft Sentinel data lake without requiring analytics-tier ingestion—explicitly positioned for long-retention, cost-effective storage and retrospective investigations at scale.
- Microsoft 365 Copilot data connector (public preview): Ingests Copilot-related audit/activity events via the Purview Unified Audit Log feed into a dedicated table (CopilotActivity) with explicit admin-role requirements and cost notes. 
- Multi-tenant content distribution expansion: Adds support for distributing analytics rules, automation rules, workbooks, and built-in alert tuning rules across tenants via distribution profiles, with stated limitations (for example, automation rules that trigger a playbook cannot currently be distributed).
- Alert schema differences for “standalone vs XDR connector”: A must-cite engineering artifact documenting breaking/behavioral differences (CompromisedEntity semantics, field mapping changes, alert filtering differences) when moving to the consolidated Defender XDR connector path.

What’s new and when

Feature and release matrix
The table below consolidates officially documented Sentinel and Defender XDR features that are relevant to a “new announcements” technical article. If a source does not explicitly state GA/preview or a specific date, it is marked “unspecified.”

| Feature | Concise description | Status (official) | Announcement / release date |
|---|---|---|---|
| Azure portal Sentinel retirement / redirection | Sentinel management experience shifts to Defender portal; sunset date extended; post-sunset redirection expected | Date explicitly stated | Mar 31, 2027 sunset (date stated); extension published Jan 29, 2026 |
| Sentinel in Defender portal (core GA) | Sentinel is GA in Defender portal, including for customers without Defender XDR/E5; unified SecOps surface | GA | Doc updated Sep 30, 2025; retirement note reiterated 2026 |
| AI playbook generator | Natural language → Python playbook, documentation, and a visual flow diagram; VS Code + Cline experience | Preview | Feb 23, 2026 |
| Integration Profiles (playbook generator) | Centralized configuration objects (base URL, auth method, credentials) used by generated playbooks to call external APIs dynamically | Preview feature component | Feb 23, 2026 |
| Enhanced Alert Trigger (generated playbooks) | Tenant-level trigger designed to target alerts across Sentinel + Defender + XDR sources and apply granular conditions | Preview feature component | Feb 23, 2026 |
| CCF Push | Push-based ingestion model that reduces setup friction (DCR/DCE/app reg/RBAC), built on Logs Ingestion API; supports transformations and high-throughput ingestion | Public preview | Feb 12–13, 2026 |
| Legacy custom data collection API retirement | Retirement of legacy custom data collection API noted as part of connector modernization | Retirement date stated | Sep 2026 (retirement) |
| Data lake tier ingestion for Microsoft XDR Advanced Hunting tables | Ingest selected Advanced Hunting tables from MDE/MDO/MDA directly into Sentinel data lake; supports long retention and lake-first analytics | GA | Feb 10, 2026 |
| Microsoft 365 Copilot data connector | Ingests Copilot activities/audit logs; data lands in CopilotActivity; requires specific tenant roles to enable; costs apply | Public preview | Feb 3, 2026 |
| Multi-tenant content distribution: expanded content types | Adds support for analytics rules, automation rules, workbooks, and built-in alert tuning rules; includes limitations and prerequisites | Stated as “supported”; described as part of the public preview experience in the monthly update | Jan 29, 2026 |
| GKE dedicated connector | Dedicated connector built on CCF; ingests GKE cluster activity/workload/security events into GKEAudit; supports DCR transformations and lake-only ingestion | GA | Mar 4, 2026 |
| UEBA behaviors layer | “Who did what to whom” behavior abstraction from raw logs; newer sources state GA; other page sections still label Preview | GA and Preview labels appear in official sources (inconsistent) | Feb 2026 (GA statement) |
| UEBA widget in Defender portal home | Home-page widget to surface anomalous user behavior and accelerate workflows | Preview | Jan 2026 |
| Alert schema differences: standalone vs XDR connector | Documents field mapping differences, CompromisedEntity behavior changes, and alert filtering/scoping differences | Doc (behavioral/change reference) | Feb 4, 2026 (last updated) |
| Defender incident investigation: Blast radius analysis | Graph visualization built on Sentinel data lake + graph for propagation path analysis | Preview (per Defender XDR release notes) | Sep 2025 (release notes section) |
| Advanced hunting: Hunting graph | Graph rendering of predefined threat scenarios in advanced hunting | Preview (per Defender XDR release notes) | Sep 2025 (release notes section) |
| Sentinel repositories API version retirement | “Call to action” to update API versions: older versions retired June 1, 2026; enforcement June 15, 2026 for actions | Dates explicitly stated | March 2026 (noticed); Jun 1 / Jun 15, 2026 (deadline/enforcement) |

Technical architecture and integrations

Unified reference architecture
Microsoft’s official integration documentation describes two “centers of gravity” depending on how you operate:
- In Defender portal mode, Sentinel data is ingested alongside organizational data into the Defender portal, enabling SOC teams to analyze and respond from a unified surface.
- In Azure portal mode, Defender XDR incidents/alerts flow via Sentinel connectors and analysts work across both experiences.

Integration model: Defender suite and third-party security tools
The Defender XDR integration doc is explicit about:
- Supported Defender components whose alerts appear through the integration (Defender for Endpoint, Identity, Office 365, Cloud Apps), plus other services such as Purview DLP and Entra ID Protection.
- Behavior when onboarding Sentinel to the Defender portal with Defender XDR licensing: the Defender XDR connector is automatically set up and component alert-provider connectors are disconnected.
- Expected latency: Defender XDR incidents typically appear in Sentinel UI/API within ~5 minutes, with additional lag before securityIncident ingestion is complete.
- Cost model: Defender XDR alerts and incidents that populate SecurityAlert / SecurityIncident are synchronized at no charge, while other data types (for example, Advanced Hunting tables) are charged. 
For third-party tools, Microsoft’s monthly “What’s new” explicitly calls out new GA out-of-the-box connectors/solutions (examples include Mimecast audit logs, Vectra AI XDR, and Proofpoint POD email security) as part of an expanding connector ecosystem intended to unify visibility across cloud, SaaS, and on-premises environments.

Telemetry, schemas, analytics, automation, and APIs

Data flows and ingestion engineering

CCF Push and the “push connector” ingestion path
Microsoft’s CCF Push announcement frames the “old” model as predominantly polling-based (Sentinel periodically fetching from partner/customer APIs) and introduces push-based connectors where partners/customers send data directly to a Sentinel workspace, emphasizing that “Deploy” can auto-provision the typical prerequisites: DCE, DCR, Entra app registration + secrets, and RBAC assignments. Microsoft also states that CCF Push is built on the Logs Ingestion API, with benefits including throughput, ingestion-time transformation, and system-table targeting.

A precise engineering description of the underlying Logs Ingestion API components (useful for your article even if your readers never build a connector) is documented in Azure Monitor:
- Sender app authenticates via an app registration that has access to a DCR.
- Sender sends JSON matching the DCR’s expected structure to a DCR endpoint or a DCE (DCE required for Private Link scenarios).
- The DCR can apply a transformation to map/filter/enrich before writing to the target table.

DCR transformation (KQL)
Microsoft documents “transformations in Azure Monitor” and provides concrete sample KQL snippets for common needs such as cost reduction and enrichment. 
// Keep only Critical events
source
| where severity == "Critical"

// Drop a noisy/unneeded column
source
| project-away RawData

// Enrich with a simple internal/external IP classification (example)
source
| extend IpLocation = iff(split(ClientIp, ".")[0] in ("10", "192"), "Internal", "External")

These are direct examples from Microsoft’s sample transformations guidance; they are especially relevant because ingestion-time filtering is one of the primary levers for both performance and cost management in Sentinel pipelines.

A Sentinel-specific nuance: Microsoft states that Sentinel-enabled Log Analytics workspaces are not subject to Azure Monitor’s filtering ingestion charge, regardless of how much data a transformation filters (while other Azure Monitor transformation cost rules still exist in general).

Telemetry schemas and key tables you should call out
A “new announcements” article aimed at detection engineers should explicitly name the tables that are impacted by new features:
- Copilot connector → CopilotActivity table, with a published list of record types (for example, CopilotInteraction and related plugin/workspace/prompt-book operations) and explicit role requirements to enable (Global Administrator or Security Administrator).
- Defender XDR incident/alert sync → SecurityAlert and SecurityIncident populated at no charge; other Defender data types (Advanced Hunting event tables such as DeviceInfo/EmailEvents) are charged.
- Sentinel onboarding to Defender advanced hunting: Sentinel alerts tied to incidents are ingested into AlertInfo and accessible in Advanced hunting; SecurityAlert is queryable even if not shown in the schema list in Defender (notable for KQL portability).
- UEBA “core” tables (engineering relevance: query joins and tuning): IdentityInfo, BehaviorAnalytics, UserPeerAnalytics, Anomalies.
- UEBA behaviors layer tables (new behavior abstraction): SentinelBehaviorInfo and SentinelBehaviorEntities, created only if behaviors layer is enabled. 
- Microsoft XDR Advanced Hunting lake tier ingestion GA: explicit supported tables from MDE/MDO/MDA (for example DeviceProcessEvents, DeviceNetworkEvents, EmailEvents, UrlClickEvents, CloudAppEvents) and an explicit note that MDI support will follow.

Detection and analytics: UEBA and graph

UEBA operating model and scoring

Microsoft’s UEBA documentation gives you citeable technical detail:

- UEBA uses machine learning to build behavioral profiles and detect anomalies versus baselines, incorporating peer group analysis and “blast radius evaluation” concepts.
- Risk scoring is described with two different scoring models: BehaviorAnalytics.InvestigationPriority (0–10) vs Anomalies.AnomalyScore (0–1), with different processing characteristics (near-real-time/event-level vs batch/behavior-level).
- UEBA Essentials is positioned as a maintained pack of prebuilt queries (including multi-cloud anomaly detection), and Microsoft’s February 2026 update adds detail about expanded anomaly detection across Azure/AWS/GCP/Okta and the anomalies-table-powered queries.

Sentinel data lake and graph as the new “analytics substrate”

Microsoft’s data lake overview frames a two-tier model:

- Analytics tier: high-performance, real-time analytics supporting alerting/incident management.
- Data lake tier: centralized long-term storage for querying and Python-based analytics, designed for retention up to 12 years, with “single-copy” mirroring (data in the analytics tier mirrored to the lake tier).

Microsoft’s graph documentation states that if you already have Sentinel data lake, the required graph is auto-provisioned when you sign into the Defender portal, enabling experiences like hunting graph and blast radius. Microsoft also notes that while the experiences are included in existing licensing, enabling data sources can incur ingestion/processing/storage costs.
Automation: AI playbook generator details that matter technically

The playbook generator doc contains unusually concrete engineering constraints and required setup. Key technical points to carry into your article:

- Prerequisites: Security Copilot must be enabled with SCUs available (Microsoft states SCUs aren’t billed for playbook generation but are required), and the Sentinel workspace must be onboarded to Defender.
- Roles: Sentinel Contributor is required for authoring Automation Rules, and a Detection tuning role in Entra is required to use the generator; permissions may take up to two hours to take effect.
- Integration Profiles: explicitly defined as Base URL + auth method + required credentials; you cannot change the API URL/auth method after creation; supports multiple auth methods including OAuth2 client credentials, API key, AWS auth, Bearer/JWT, etc.
- Enhanced Alert Trigger: designed for broader coverage across Sentinel, Defender, and XDR alerts and tenant-level automation consistency.
- Limitations: Python only, alerts as the sole input type, no external libraries, max 100 playbooks/tenant, 10-minute runtime, line limits, and separation of enhanced trigger rules from standard alert trigger rules (no automatic migration).

APIs and code/CLI (official)

Create/update a DCR with Azure CLI (official)

Microsoft documents an az monitor data-collection rule create workflow to create/update a DCR from a JSON file, which is directly relevant if your readers build their own “push ingestion” paths outside of CCF Push or need transformations not supported via a guided connector UI.
az monitor data-collection rule create \
  --location 'eastus' \
  --resource-group 'my-resource-group' \
  --name 'my-dcr' \
  --rule-file 'C:\MyNewDCR.json' \
  --description 'This is my new DCR'

Send logs via Azure Monitor Ingestion client (Python) (official)

Microsoft’s Azure SDK documentation provides a straightforward LogsIngestionClient pattern (and the repo samples document the required environment variables such as DCE, rule immutable ID, and stream name).

import os
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

endpoint = os.environ["DATA_COLLECTION_ENDPOINT"]
rule_id = os.environ["LOGS_DCR_RULE_ID"]          # DCR immutable ID
stream_name = os.environ["LOGS_DCR_STREAM_NAME"]  # stream name in DCR

credential = DefaultAzureCredential()
client = LogsIngestionClient(endpoint=endpoint, credential=credential)

body = [
    {"Time": "2026-03-18T00:00:00Z", "Computer": "host1", "AdditionalContext": "example"}
]

# upload() is the client's ingestion method; confirm batching and error-handling
# details against the official ingestion samples and README for your SDK version.
client.upload(rule_id=rule_id, stream_name=stream_name, logs=body)

The repo sample and README explicitly define the environment variables and the use of LogsIngestionClient + DefaultAzureCredential.

Sentinel repositories API version retirement (engineering risk)

Microsoft’s Sentinel release notes contain an explicit “call to action” that older REST API versions used for Sentinel Repositories will be retired (June 1, 2026) and that Source Control actions using older versions will stop being supported (starting June 15, 2026), recommending migration to specific versions. This is critical for “content-as-code” SOC engineering pipelines.

Migration and implementation guidance

Prerequisites and planning gates

A technically rigorous migration section should treat this as a set of gating checks.
Microsoft’s transition guidance highlights several that can materially block or change behavior:

- Portal transition has no extra cost: Microsoft explicitly states transitioning to the Defender portal has no extra cost (billing remains Sentinel consumption).
- Data storage and privacy policies change: after onboarding, Defender XDR policies apply even when working with Sentinel data (data retention/sharing differences).
- Customer-managed keys constraint for data lake: CMK is not supported for data stored in Sentinel data lake; even broader, the Sentinel data lake onboarding doc warns that CMK-enabled workspaces aren’t accessible via data lake experiences and that data ingested into the lake is encrypted with Microsoft-managed keys.
- Region and data residency implications: data lake is provisioned in the primary workspace’s region, and onboarding may require consent to ingest Microsoft 365 data into that region if it differs.
- Data appearance lag when switching tiers: enabling ingestion for the first time or switching between tiers can take 90–120 minutes for data to appear in tables.

Step-by-step configuration tasks for the most “new” capabilities

Enable lake-tier ingestion for Advanced Hunting tables (GA)

Microsoft’s GA announcement provides direct UI steps in the Defender portal:

1. Defender portal → Microsoft Sentinel → Configuration → Tables
2. Select an Advanced Hunting table (from the supported list)
3. Data Retention Settings → choose “Data lake tier” + set retention + save

Microsoft states that this allows Defender data to remain accessible in the Advanced Hunting table for 30 days while a copy is sent to Sentinel data lake for long-term retention (up to 12 years) and graph/MCP-related scenarios.

Deploy the Microsoft 365 Copilot data connector (public preview)

Microsoft’s connector post provides the operational steps and requirements:

- Install via Content Hub in the Defender portal (search “Copilot”, install the solution, open the connector page).
- Enablement requires tenant-level Global Administrator or Security Administrator roles.
- Data lands in CopilotActivity.
- Ingestion costs apply based on Sentinel workspace settings or Sentinel data lake tier pricing.

Configure multi-tenant content distribution (expanded content types)

Microsoft documents:

1. Navigate to “Content Distribution” in the Defender multi-tenant management portal.
2. Create/select a distribution profile; choose content types; select content; choose up to 100 workspaces per tenant; save and monitor sync results.

- Limitations: automation rules that trigger a playbook cannot currently be distributed; alert tuning rules are limited to built-in rules (for now).
- Prerequisites: access to more than one tenant via delegated access; subscription to Microsoft 365 E5 or Office E5.

Prepare for Defender XDR connector–driven changes

Microsoft explicitly warns that incident creation rules are turned off for Defender XDR–integrated products to avoid duplicates and suggests compensating controls using Defender portal alert tuning or automation rules. It also warns that incident titles will be governed by Defender XDR correlation and recommends avoiding “incident name” conditions in automation rules (tags recommended).

Common pitfalls and “what breaks”

A strong engineering article should include a “what breaks” section, grounded in Microsoft’s own lists:

- Schema and field semantics drift: The “standalone vs XDR connector” schema differences doc calls out CompromisedEntity behavior differences, field mapping changes, and alert filtering differences (for example, Defender for Cloud informational alerts not ingested; Entra ID below High not ingested by default).
- Automation delays and unsupported actions post-onboarding: Transition guidance states automation rules might run up to 10 minutes after alert/incident changes due to forwarding, and that some playbook actions (like adding/removing alerts from incidents) are not supported after onboarding, breaking certain playbook patterns.
- Incident synchronization boundaries: incidents created in Sentinel via API/Logic App playbook/manual Azure portal aren’t synchronized to the Defender portal (per the transition doc).
- Advanced hunting differences after data lake enablement: auxiliary log tables are no longer available in Defender Advanced hunting once data lake is enabled; they must be accessed via data lake exploration KQL experiences.
- CI/CD failures from API retirement: repository connection create/manage tooling that calls older API versions must migrate by June 1, 2026 to avoid action failures.

Performance and cost considerations

Microsoft’s cost model is now best explained using tiering and retention:

- Sentinel data lake tier is designed for cost-effective long retention up to 12 years, with analytics-tier data mirrored to the lake tier as a single copy.
- For Defender XDR threat hunting data, Microsoft states it is available in the analytics tier for 30 days by default; retaining beyond that and moving beyond free windows drives ingestion and/or storage costs depending on whether you extend analytics retention or store longer in the lake tier.
- Ingesting data directly to the data lake tier incurs ingestion, storage, and processing costs; retaining in the lake beyond analytics retention incurs storage costs.
- Ingestion-time transformations are a first-class cost lever, and Microsoft explicitly frames filtering as a way to reduce ingestion costs in Log Analytics.
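Because filtering is framed as a direct cost lever, a back-of-envelope estimate can make the tradeoff concrete. This is a minimal sketch; the per-GB rate is a hypothetical placeholder, not a quoted Sentinel price, so substitute your own region's analytics-tier rate.

```python
# Rough estimate of ingestion avoided by an ingestion-time filter.
# price_per_gb is a hypothetical placeholder, not an actual Sentinel price.
def monthly_filter_savings(gb_per_day, drop_fraction, price_per_gb=4.30):
    """Return (GB avoided, cost avoided) over a 30-day month."""
    gb_avoided = gb_per_day * drop_fraction * 30
    return gb_avoided, round(gb_avoided * price_per_gb, 2)

# Example: 50 GB/day with a quarter of rows dropped at ingestion time
gb, cost = monthly_filter_savings(gb_per_day=50, drop_fraction=0.25)
# gb -> 375.0 GB avoided per month
```

Note that, per the guidance above, Sentinel-enabled workspaces aren't charged Azure Monitor's filtering ingestion fee, so this estimate concerns avoided ingestion billing only.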
Sample deployment checklist

| Phase | Task | Acceptance criteria (engineering) |
|---|---|---|
| Governance | Confirm target portal strategy and dates | Internal cutover plan aligns with March 31, 2027 retirement; CI/CD deadlines tracked |
| Identity/RBAC | Validate roles for onboarding + automation | Required Entra roles + Sentinel roles assigned; propagation delays accounted for |
| Data lake readiness | Decide whether to onboard to Sentinel data lake | CMK policy alignment confirmed; billing subscription owner identified; region implications reviewed |
| Defender XDR integration | Choose integration mode and test incident sync | Incidents visible within expected latency; bi-directional sync fields behave as expected |
| Schema regression | Validate queries/rules against XDR connector schema | KQL regression tests pass; CompromisedEntity and filtering changes handled |
| Connector modernization | Inventory connectors; plan CCF / CCF Push transitions | Function-based connectors migration plan; legacy custom data collection API retirement addressed |
| Automation | Pilot AI playbook generator + enhanced triggers | Integration Profiles created; generated playbooks reviewed; enhanced trigger scopes correct |
| Multi-tenant operations | Configure content distribution if needed | Distribution profiles sync reliably; limitations documented; rollback/override plan exists |
| Outage-proofing | Update Sentinel repos tooling for API retirement | All source-control actions use recommended API versions before June 1, 2026 |

Use cases and customer impact

Detection and response scenarios that map to the new announcements

Copilot governance and misuse detection

The Copilot connector’s published record types enable detections for scenarios such as unauthorized plugin/workspace/prompt-book operations and anomalous Copilot interactions. Data is explicitly positioned for analytic rules, workbooks, automation, and threat hunting within Sentinel and Sentinel data lake.
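As an illustration of the kind of detection those record types enable, here is a minimal triage sketch over CopilotActivity-style records. The SENSITIVE_OPS record-type names are hypothetical stand-ins; check the connector's published record-type list for the real values.

```python
from collections import Counter

# Hypothetical record-type names; the published list documents the real ones
# (CopilotInteraction plus plugin/workspace/prompt-book operations).
SENSITIVE_OPS = {"PluginInstalled", "PromptBookModified"}

def flag_users(records, threshold=2):
    """Return users with at least `threshold` sensitive Copilot operations."""
    counts = Counter(r["UserId"] for r in records if r["RecordType"] in SENSITIVE_OPS)
    return sorted(u for u, n in counts.items() if n >= threshold)

sample = [
    {"UserId": "alice", "RecordType": "PluginInstalled"},
    {"UserId": "alice", "RecordType": "PromptBookModified"},
    {"UserId": "bob",   "RecordType": "CopilotInteraction"},
]
flagged = flag_users(sample)  # alice crosses the threshold; bob does not
```

In production this threshold logic would live in a scheduled analytic rule over CopilotActivity rather than in client-side Python; the sketch only shows the shape of the detection.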
Long-retention hunting on high-volume Defender telemetry (lake-first approach)

Lake-tier ingestion for Advanced Hunting tables (GA) is explicitly framed around scale, cost containment, and retrospective investigations beyond “near-real-time” windows, while keeping 30-day availability in the Advanced Hunting tables themselves.

Faster automation authoring and customization (SOAR engineering productivity)

Microsoft positions the playbook generator as eliminating rigid templates and enabling dynamic API calls across Microsoft and third-party tools via Integration Profiles, with preview-customer feedback claiming faster automation development (vendor-stated).

Multi-tenant SOC standardization (MSSP / large enterprise)

Multi-tenant content distribution is explicitly designed to replicate detections, automation, and dashboards across tenants, reducing drift and accelerating onboarding, while keeping execution local to target tenants.

Measurable benefit dimensions (how to discuss rigorously)

Most Microsoft sources in this announcement set are descriptive (not benchmark studies). A rigorous article should therefore describe what you can measure, and label any numeric claims as vendor-stated. Recommended measurable dimensions grounded in the features as documented:

- Time-to-detect / time-to-ingest: CCF Push is positioned as real-time, event-driven delivery vs polling-based ingestion.
- Time-to-triage / time-to-investigate: UEBA layers (Anomalies + Behaviors) are designed to summarize and prioritize activity, with explicit scoring models and tables for query enrichment.
- Incident queue pressure: Defender XDR grouping/enrichment is explicitly described as reducing SOC queue size and time to resolve.
- Cost-per-retained-GB and query cost: tiering rules and retention windows define cost tradeoffs; ingestion-time transformations reduce cost by dropping unneeded rows/columns.
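One of these dimensions can be computed directly from incident timestamps. A minimal sketch follows; the record shape (created/firstResponse fields) is illustrative, not a Sentinel API schema.

```python
from datetime import datetime
from statistics import median

# Sketch of measuring time-to-triage from incident timestamps.
# The field names here are illustrative, not a Sentinel API schema.
def median_triage_minutes(incidents):
    deltas = [
        (datetime.fromisoformat(i["firstResponse"]) -
         datetime.fromisoformat(i["created"])).total_seconds() / 60
        for i in incidents
    ]
    return median(deltas)

sample = [
    {"created": "2026-03-01T10:00:00", "firstResponse": "2026-03-01T10:20:00"},
    {"created": "2026-03-01T11:00:00", "firstResponse": "2026-03-01T11:50:00"},
    {"created": "2026-03-01T12:00:00", "firstResponse": "2026-03-01T12:30:00"},
]
m = median_triage_minutes(sample)  # deltas of 20, 50, 30 minutes
```

Tracking this metric before and after enabling UEBA layers or XDR grouping is one way to turn vendor-stated claims into measurements from your own environment.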
Vendor-stated metrics: Microsoft’s March 2026 “What’s new” roundup references an external buyer’s guide and reports “44% reduction in total cost of ownership” and “93% faster deployment times” as outcomes for organizations using Sentinel (treat as vendor marketing unless corroborated by an independent study in your environment).

Comparison of old vs new Microsoft capabilities and competitor XDR positioning

Old vs new (Microsoft)

| Capability | “Older” operating model (common patterns implied by docs) | “New” model emphasized in announcements/release notes |
|---|---|---|
| Primary SOC console | Split experience (Azure portal Sentinel + Defender portal XDR) | Defender portal as the primary unified SecOps surface; Azure portal sunset |
| Incident correlation engine | Sentinel correlation features (e.g., Fusion in Azure portal) | Defender XDR correlation engine replaces Fusion for incident creation after onboarding; incident provider always “Microsoft XDR” in Defender portal mode |
| Automation authoring | Logic Apps playbooks + automation rules | Adds AI playbook generator (Python) + Enhanced Alert Trigger, with explicit constraints/limits |
| Custom ingestion | Data Collector API legacy patterns + manual DCR/DCE plumbing | CCF Push built on Logs Ingestion API; emphasizes automated provisioning and transformation support |
| Long retention | Primarily analytics-tier retention strategies | Data lake tier supports up to 12 years; lake-tier ingestion for AH tables GA; explicit tier/cost model |
| Graph-driven investigations | Basic incident graphs | Blast radius analysis + hunting graph experiences built on Sentinel data lake + graph |

Competitor XDR offerings (high-level, vendor pages)

The table below is intentionally “high-level” and marks details as unspecified unless explicitly stated on the cited vendor pages.

| Vendor | Positioning claims (from official vendor pages) | Notes / unspecified items |
|---|---|---|
| CrowdStrike | Falcon Insight XDR is positioned as “AI-native XDR” for “endpoint and beyond,” emphasizing detection/response and threat intelligence. | Data lake architecture, ingestion transformation model, and multi-tenant content distribution specifics are unspecified in cited sources. |
| Palo Alto Networks | Cortex XDR is positioned as integrated endpoint security with AI-driven operations and broader visibility; vendor site highlights outcome metrics in customer stories and “AI-driven endpoint security.” | Lake/graph primitives, connector framework model, and schema parity details are unspecified in cited sources. |
| SentinelOne | Singularity XDR is positioned as AI-powered response with automated workflows across the environment; emphasizes machine-speed incident response. | Specific SIEM-style retention tiering and documented ingestion-time transformations are unspecified in cited sources. |

What’s New in Microsoft Sentinel: March 2026
March brings a set of updates to Microsoft Sentinel focused on helping your SOC automate faster, onboard data with less friction, and detect threats across more of your environment. This month's updates include natural-language playbook generation for more flexible SOAR workflows, streamlined real-time data ingestion with CCF Push, and expanded Kubernetes visibility with a dedicated GKE connector. Together, these innovations help security teams simplify operations, move faster, and strengthen coverage without added complexity. And if you're heading to RSAC 2026, check out how to join us for Microsoft Pre-Day below.

What’s new

Microsoft Sentinel playbook generator brings natural-language automation to SOC workflows

The Microsoft Sentinel playbook generator lets you design and generate fully functional, code-based playbooks by describing what you need in natural language. Instead of relying on rigid templates and limited action libraries, you describe the workflow you want, and the generator produces a Python playbook with documentation and a visual flowchart. This has been a top ask from enterprise customers looking for more flexible automation in their SIEM workflows.

The playbook generator works across Microsoft and third-party tools. By defining an Integration Profile with a base URL, authentication method, and credentials, it can create dynamic API calls without predefined connectors. That means you can automate tasks like team notifications, ticket updates, data enrichment, or incident response across your environment, then validate playbooks against real alerts and refine through chat or manual edits. You keep full transparency into the generated code and full control to customize it.

Watch a demo and learn more.

CCF Push delivers seamless, real-time security data to Microsoft Sentinel (public preview)

The Codeless Connector Framework (CCF) Push feature allows you to send security data directly to a Sentinel workspace in real time.
Instead of configuring Data Collection Endpoints (DCE), Data Collection Rules (DCR), Entra app registrations, and RBAC assignments, you press "Deploy" and Sentinel sets up all the resources for you. Built on the Log Ingestion API, CCF Push supports high-throughput ingestion, data transformation before ingestion, and direct delivery to system tables to speed up SOC detection and response and to enable more flexible access to critical security telemetry. This opens pathways to advanced scenarios, including data lake integrations and agentic AI use cases.

Sentinel solution developers can begin leveraging CCF Push immediately. Partners like Keeper Security, Obsidian Security, and Varonis are already using CCF Push to stream security data into Sentinel.

Learn more and check out the getting started guide.

Detect threats across GKE clusters in Microsoft Sentinel with a dedicated CCF connector (general availability)

A dedicated data connector for Google Kubernetes Engine (GKE) is available in the Microsoft Sentinel content hub, built on the Codeless Connector Framework (CCF). The connector ingests GKE cluster activity, workload behavior, and security events into the GKEAudit Log Analytics table, bringing GKE monitoring in line with how Azure Kubernetes Service (AKS) clusters are monitored in Sentinel today. It includes Data Collection Rule (DCR) support, data lake-only ingestion, and workspace transformation support so you can filter or modify incoming data before it reaches its destination.

For security teams running workloads on GKE, this means you can apply Sentinel analytics, workbooks, and hunting queries across your GKE signals alongside the rest of your environment, giving you consistent visibility into Kubernetes threats whether your clusters run on Azure or Google Cloud.
Get the GKE data connector

Solve hybrid identity challenges with an RSA agent on Microsoft Sentinel data lake and Security Copilot

RSA has built an agentic solution that combines RSA ID Plus telemetry with Microsoft Sentinel's data lake and Security Copilot agents. The integration ingests administrative identity telemetry from RSA ID Plus into the Sentinel data lake for cost-effective, long-term retention, then uses Security Copilot agents to assess that data and surface anomalous or risky admin behavior automatically.

For security teams managing complex hybrid identity environments, this means identity risk signals from RSA are analyzed alongside your broader Sentinel telemetry without manual correlation. Admin accounts remain one of the highest-value targets for attackers, and having agentic AI continuously assessing identity patterns helps your SOC detect compromised credentials earlier and reduce investigation time.

Learn more

Join Microsoft Security at RSAC 2026 Pre-Day

If you are heading to RSAC™ 2026 in San Francisco, join Microsoft Security for Pre-Day on Sunday, March 22 at the Palace Hotel. Hear from Vasu Jakkal, CVP of Microsoft Security Business, and other Microsoft Security leaders on how AI and autonomous agents are reshaping defense strategy. Product leaders will share what they are focused on for security operations, threat intelligence experts will discuss emerging trends, and Microsoft researchers will highlight the newest areas of security R&D.

Register for Microsoft Pre-Day

Explore all Microsoft experiences at RSAC 2026

Evaluate your SIEM platform for the agentic era with our strategic buyer's guide

Our buyer’s guide from Microsoft Security helps security leaders evaluate what a modern SIEM platform should deliver. The Strategic SIEM Buyer's Guide walks through three essentials: building a unified foundation that is future-proof, accelerating detection and response with AI, and maximizing ROI with faster time to value.
Whether you are assessing migration from a legacy on-premises SIEM or benchmarking your current platform, the guide offers practical buyer's tips and capability checklists grounded in real outcomes, including how organizations using Sentinel have achieved a 44% reduction in total cost of ownership and 93% faster deployment times.

Learn more

Additional resources

Sign up for upcoming events:

- Mar 11: Microsoft Security Day (in-person, Mumbai)
- Mar 18: Tech brief: Next‑Generation Security Operations with Microsoft
- Mar 19: Microsoft Security Immersion Event: Shadow Hunter (in-person, Toronto)
- Mar 23-26: Microsoft Security at RSAC 2026 (in-person, San Francisco)
- Mar 25: Microsoft Tech Brief: Modernize security operations with a unified platform
- Apr 2: Master SecOps in the AI Era: Kickstart Your SC-200 Certification Challenge

Stay connected

Check back each month for the latest innovations, updates, and events to ensure you’re getting the most out of Microsoft Sentinel. We’ll see you in the next edition!

How to Include Custom Details from an Alert in Email Generated by a Playbook
I have created an analytics rule that queries Sentinel for security events pertaining to group membership additions, and triggers an alert for each event found. The rule does not create an incident. Within the rule logic, I have created three "custom details" for specific fields within the event (TargetAccount, MemberName, SubjectAccount). I have also created a corresponding playbook for the purpose of sending an email to me when an alert is triggered. The associated automation rule has been configured and is triggered in the analytics rule. All of this is working as expected - when a member is added to a security group, I receive an email.

The one remaining piece is to populate the email message with the custom details that I've identified in the rule. However, I'm not sure how to do this. Essentially, I would like the values of the three custom details shown in the first screenshot below to show up in the body of the email, shown in the second screenshot, next to their corresponding names.

So, for example, say Joe Smith is added to the group "Admin" by Tom Jones. These are the fields and values in the event that I want to pull out:

TargetAccount = Admin
MemberName = Joe Smith
SubjectAccount = Tom Jones

The custom details would then be populated as such:

Security_Group = Admin
Member_Added = Joe Smith
Added_By = Tom Jones

and then, the body of the email would contain:

Group: Admin
Member Added: Joe Smith
Added By: Tom Jones

Update: Changing the Account Name Entity Mapping in Microsoft Sentinel
The upcoming update introduces more consistent and predictable entity data across analytics, incidents, and automation by standardizing how the Account Name property is populated when using UPN‑based mappings in analytic rules. Going forward, the Account Name property will consistently contain only the UPN prefix, with new dedicated fields added for the full UPN and UPN suffix. While this improves consistency and enables more granular automation, customers who rely on specific Account Name values in automation rules or Logic App playbooks may need to take action.

Timeline

Effective date: July 1, 2026. The change will apply automatically - no opt-in is required.

Scope of impact

Analytics Rules which include mapping of User Principal Name (UPN) to the Account Name entity field, where the resulting alerts are processed by Automation Rules or Logic App Playbooks that reference the AccountName property.

What’s changing

Currently

When an Analytic Rule includes mapping of a full UPN (for example: 'user@domain.com') to the Account Name field, the resulting input value for the Automation Rule and/or Logic App Playbook is inconsistent. In some cases, it contains only the UPN prefix: 'user', and in other cases the full UPN ('user@domain.com').

After July 1, 2026

The Account Name property will consistently contain only the UPN prefix, and the following new fields will be added to the entity object:

- AccountName (UPN prefix)
- UPNSuffix
- UserPrincipalName (full UPN)

This change enables enhanced filtering and automation logic based on these new fields.
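The documented field split can be sketched as a small helper. This mirrors the described post-change mapping only; it is not Sentinel's actual implementation.

```python
# Sketch of the post-change entity mapping: Account Name carries only the
# UPN prefix, with UPNSuffix and UserPrincipalName as dedicated fields.
def map_account_entity(upn):
    prefix, _, suffix = upn.partition("@")
    return {
        "AccountName": prefix,            # UPN prefix only
        "UPNSuffix": suffix,              # domain part
        "UserPrincipalName": upn,         # full UPN
    }

entity = map_account_entity("user@domain.com")
# {"AccountName": "user", "UPNSuffix": "domain.com",
#  "UserPrincipalName": "user@domain.com"}
```

Automation conditions written against this shape (for example, matching on UPNSuffix rather than equality on Account Name) remain valid both before and after the effective date.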
Example

Before

Analytics Rule maps: 'user@domain.com'
Automation Rule receives:
- Account Name: 'user' or 'user@domain.com' (inconsistent)

After

Analytics Rule maps: 'user@domain.com'
Automation Rule receives:
- Account Name: 'user'
- UPNSuffix: 'domain.com'

Logic App Playbook and SecurityAlert table receive:
- AccountName: 'user'
- UPNSuffix: 'domain.com'
- UserPrincipalName: 'user@domain.com'

Why does it matter?

If your automation logic relies on exact string comparisons against the full UPN stored in Account Name, those conditions may no longer match after the update. This most commonly affects:

- Automation Rules using an "Equals" condition on Account Name
- Logic App Playbooks comparing the entity field 'accountName' to a full UPN value

Call to action

- Avoid strict equality checks against Account Name
- Use flexible operators such as: Contains, Starts with
- Leverage the new UPNSuffix field for clearer intent

Example update

Before - Account Name will show as 'user' or 'user@domain.com'
After - Account Name will show as 'user'

Recommended changes:
- Account Name Contains/Startswith 'user'
- UPNSuffix Equals/Startswith/Contains 'domain.com'

This approach ensures compatibility both before and after the change takes effect.

Where to update

Review any filters, conditions, or branching logic that depend on Account Name values.

- Automation Rules: Use the 'Account name' field
- Logic App Playbooks: Update conditions referencing the entity: 'accountName'

For example: Automation Rule before the change; Automation Rule after the change (screenshots).

Summary

- A consistency improvement to Account Name mapping is coming on July 1, 2026
- The change affects Automation Rules and Logic App Playbooks that rely on UPN to Account Name mappings
- New UPN-related fields provide better structure and control
- Customers should follow the recommendations above before the effective change date

SAP & Business-Critical App Security Connectors
I validated what it takes to make SAP and SAP-adjacent security signals operational in a SOC: reliable ingestion, stable schemas, and detections that survive latency and schema drift. I focus on four integrations into Microsoft Sentinel: SAP Enterprise Threat Detection (ETD) cloud edition (SAPETDAlerts_CL, SAPETDInvestigations_CL), SAP S/4HANA Cloud Public Edition agentless audit ingestion (ABAPAuditLog), Onapsis Defend (Onapsis_Defend_CL), and SecurityBridge (also ABAPAuditLog). Because vendor API specifics for the ETD Retrieval API / Audit Retrieval API aren't publicly detailed in the accessible primary sources I could retrieve, I explicitly label pagination/rate/time-window behaviors as unspecified where appropriate.

Connector architectures and deployment patterns

For SAP-centric telemetry I separate two planes:

- First is SAP application telemetry that lands in SAP-native tables, especially ABAPAuditLog, ABAPChangeDocsLog, ABAPUserDetails, and ABAPAuthorizationDetails. These tables are the foundation for ABAP-layer monitoring and are documented with typed columns in the Azure Monitor Logs reference.
- Second is external "security product" telemetry (ETD alerts, Onapsis findings). These land in custom tables (*_CL) and typically require a SOC-owned normalization layer to avoid brittle detections.

Within Microsoft's SAP solution itself, there are two deployment models: agentless and containerized connector agent. The agentless connector uses SAP Cloud Connector and SAP Integration Suite to pull logs, and Microsoft documents it as the recommended approach; the containerized agent is being deprecated and disabled on September 14, 2026.

On the "implementation technology" axis, Sentinel integrations generally show up as:

- Codeless Connector Framework (CCF) pollers/pushers (SaaS-managed ingestion definitions with DCR support).
- Function/Logic App custom pipelines using the Logs Ingestion API when you need custom polling, enrichment, or a vendor endpoint that isn't modeled in CCF.

In my view, ETD and S/4HANA Cloud connectors are "agentless" from the Sentinel side (API credentials only), while Onapsis Defend and SecurityBridge connectors behave like push pipelines because Microsoft requires an Entra app + DCR permissions (typical Logs Ingestion API pattern).

Authentication and secrets handling

Microsoft documents the required credentials per connector:

- The ETD cloud connector requires Client Id + Client Secret for the ETD Retrieval API (token mechanics unspecified).
- The S/4HANA Cloud Public Edition connector requires Client Id + Client Secret for the Audit Retrieval API (token mechanics unspecified), and Microsoft notes "alternative authentication mechanisms" exist (details in the linked repo are unspecified in accessible sources).
- The Onapsis Defend and SecurityBridge connectors require a Microsoft Entra ID app registration and Azure permission to assign Monitoring Metrics Publisher on DCRs. This maps directly to the Logs Ingestion API guidance, where a service principal is granted DCR access via that role (or the Microsoft.Insights/Telemetry/Write data action).

For production, I treat these as "SOC platform secrets":

- Store client secrets/certificates in Key Vault when you own the pipeline (Function/Logic App); rotate on an operational schedule; alert on auth failures and sudden ingestion drops.
- For vendor-managed ingestion (Onapsis/SecurityBridge), I still require: documented ownership of the Entra app, explicit RBAC scope for the DCR, and change control for credential rotation, because a rotated secret is effectively a data outage.

API behaviors and ingestion reliability

For the ETD Retrieval API and Audit Retrieval API, pagination/rate limits/time windows are unspecified in the accessible vendor documentation I could retrieve.
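Given those unspecified behaviors, a defensive polling loop with exponential backoff is a reasonable default. In this sketch, fetch_page, its cursor contract, and the throttle error type are all placeholder assumptions, not documented vendor API semantics.

```python
import time

# Defensive polling for an API with unspecified rate limits and pagination.
# fetch_page(cursor) -> (events, next_cursor); RuntimeError stands in for a
# throttle/transient failure. Both contracts are assumptions for illustration.
def poll_with_backoff(fetch_page, max_retries=5, base_delay=1.0, sleep=time.sleep):
    events, cursor = [], None
    while True:
        for attempt in range(max_retries):
            try:
                page, cursor = fetch_page(cursor)
                break
            except RuntimeError:                     # throttled or transient error
                sleep(base_delay * (2 ** attempt))   # exponential backoff
        else:
            raise TimeoutError("retries exhausted")
        events.extend(page)
        if cursor is None:                           # no next page: done
            return events

# Fake endpoint: first call throttled, then two pages of events.
calls = {"n": 0}
def fake_fetch(cursor):
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("throttled")
    if cursor is None:
        return ["e1", "e2"], "page2"
    return ["e3"], None

events = poll_with_backoff(fake_fetch, sleep=lambda s: None)
# events -> ["e1", "e2", "e3"]
```

If you can later obtain the real retry/paging semantics, CCF's declarative RestApiPoller settings are preferable to hand-rolled loops like this.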
I therefore design ingestion and detections assuming non-ideal API behavior: late-arriving events, cursor/page limitations, and throttling. CCF's RestApiPoller model supports explicit retry policy, windowing, and multiple paging strategies, so if and when you can obtain vendor API semantics, you can encode them declaratively rather than writing fragile code.

For the SAP solution's telemetry plane, Microsoft provides strong operational cues: agentless collection flows through Integration Suite, and troubleshooting typically happens in the Integration Suite message log; this is where I validate delivery failures before debugging Sentinel-side parsers.

For scheduled detections, I always account for ingestion delay explicitly. Microsoft's guidance is to widen the event lookback by the expected delay and then constrain on ingestion_time() to prevent duplicates from overlapping windows.

Schema, DCR transformations, and normalization layer

Connector attribute comparison

| Connector | Auth method | Sentinel tables | Default polling | Backfill | Pagination | Rate limits |
|---|---|---|---|---|---|---|
| SAP ETD (cloud) | Client ID + Secret (ETD Retrieval API) | SAPETDAlerts_CL, SAPETDInvestigations_CL | unspecified | unspecified | unspecified | unspecified |
| SAP S/4HANA Cloud (agentless) | Client ID + Secret (Audit Retrieval API); alt auth referenced | ABAPAuditLog | unspecified | unspecified | unspecified | unspecified |
| Onapsis Defend | Entra app + DCR permission (Monitoring Metrics Publisher) | Onapsis_Defend_CL | n/a (push pattern) | unspecified | n/a | unspecified |
| SecurityBridge | Entra app + DCR permission (Monitoring Metrics Publisher) | ABAPAuditLog | n/a (push pattern) | unspecified | n/a | unspecified |

Ingestion-time DCR transformations

Sentinel supports ingestion-time transformations through DCRs to filter, enrich, and mask data before it's stored.
Example: I remove low-signal audit noise and mask email identifiers in ABAPAuditLog:

```kusto
source
| where isnotempty(TransactionCode) and isnotempty(User)
| where TransactionCode !in ("SM21","ST22") // example noise; tune per tenant
| extend Email = iif(Email has "@",
    strcat(substring(Email,0,2), "***@", tostring(split(Email,"@")[1])),
    Email)
```

Normalization functions

Microsoft explicitly recommends querying through the SAP solution functions instead of raw tables, because the underlying infrastructure can then change without breaking detections. I follow the same pattern for the ETD/Onapsis custom tables: I publish SOC-owned functions as a schema contract.

```kusto
.create-or-alter function with (folder="SOC/SAP") Normalize_ABAPAudit() {
    ABAPAuditLog
    | project TimeGenerated, SystemId, ClientId, User, TransactionCode,
        TerminalIpV6, MessageId, MessageClass, MessageText,
        AlertSeverityText, UpdatedOn
}
```

```kusto
.create-or-alter function with (folder="SOC/SAP") Normalize_ETDAlerts() {
    SAPETDAlerts_CL
    | extend
        AlertId = tostring(coalesce(column_ifexists("AlertId",""), column_ifexists("id",""))),
        Severity = tostring(coalesce(column_ifexists("Severity",""), column_ifexists("severity",""))),
        SapUser = tostring(coalesce(column_ifexists("SAP_User",""), column_ifexists("User",""), column_ifexists("user","")))
    | project TimeGenerated, AlertId, Severity, SapUser, *
}
```

```kusto
.create-or-alter function with (folder="SOC/SAP") Normalize_Onapsis() {
    Onapsis_Defend_CL
    | extend
        FindingId = tostring(coalesce(column_ifexists("FindingId",""), column_ifexists("id",""))),
        Severity = tostring(coalesce(column_ifexists("Severity",""), column_ifexists("severity",""))),
        SapUser = tostring(coalesce(column_ifexists("SAP_User",""), column_ifexists("user","")))
    | project TimeGenerated, FindingId, Severity, SapUser, *
}
```

Health/lag monitoring and anti-gap

I monitor both connector health and ingestion delay. SentinelHealth is the native health table, and Microsoft provides a health workbook and a schema reference for its fields.
```kusto
let lookback = 24h;
union isfuzzy=true
    (ABAPAuditLog | extend T="ABAPAuditLog"),
    (SAPETDAlerts_CL | extend T="SAPETDAlerts_CL"),
    (Onapsis_Defend_CL | extend T="Onapsis_Defend_CL")
| where TimeGenerated > ago(lookback)
| summarize
    LastEvent = max(TimeGenerated),
    P95DelaySec = percentile(datetime_diff("second", ingestion_time(), TimeGenerated), 95),
    Events = count()
    by T
```

Anti-gap scheduled-rule frame (Microsoft pattern):

```kusto
let ingestion_delay = 10m;
let rule_lookback = 5m;
ABAPAuditLog
| where TimeGenerated >= ago(ingestion_delay + rule_lookback)
| where ingestion_time() > ago(rule_lookback)
```

SOC detections for ABAP privilege abuse, fraud/insider behavior, and audit readiness

Privileged ABAP transaction monitoring

ABAPAuditLog includes TransactionCode, User, SystemId, and terminal/IP fields, so I start with a curated high-risk tcode set and then add baselines.

```kusto
let PrivTCodes = dynamic(["SU01","PFCG","SM59","RZ10","SM49","SE37","SE16","SE16N"]);
Normalize_ABAPAudit()
| where TransactionCode in (PrivTCodes)
| summarize Actions = count(), Ips = make_set(TerminalIpV6, 5)
    by SystemId, User, TransactionCode, bin(TimeGenerated, 1h)
| where Actions >= 3
```

Fraud/insider scenario: sensitive object change near privileged audit activity

ABAPChangeDocsLog exposes ObjectClass, ObjectId, and change types; I correlate sensitive object changes to privileged transactions in a tight window.

```kusto
let w = 10m;
let Sensitive = dynamic(["BELEG","BPAR","PFCG","IDENTITY"]);
ABAPChangeDocsLog
| where ObjectClass in (Sensitive)
| project ChangeTime=TimeGenerated, SystemId,
    User=tostring(column_ifexists("User","")),
    ObjectClass, ObjectId,
    TypeOfChange=tostring(column_ifexists("ItemTypeOfChange",""))
| join kind=innerunique (
    Normalize_ABAPAudit()
    | project AuditTime=TimeGenerated, SystemId, User, TransactionCode
) on SystemId, User
| where AuditTime between (ChangeTime - w .. ChangeTime + w)
| project ChangeTime, AuditTime, SystemId, User, ObjectClass, ObjectId, TransactionCode, TypeOfChange
```

Audit-ready pipeline: monitoring continuity and configuration touchpoints

I treat audit logging itself as a monitored control. A simple SOC-safe control is a "volume drop" by system; it's vendor-agnostic and catches both pipeline breaks and deliberate suppression.

```kusto
Normalize_ABAPAudit()
| summarize PerHour = count() by SystemId, bin(TimeGenerated, 1h)
| summarize Avg = avg(PerHour), Latest = arg_max(TimeGenerated, PerHour) by SystemId
| where Latest_PerHour < (Avg * 0.2)
```

Where Onapsis/ETD are present, I increase fidelity by requiring "privileged ABAP activity" plus an external SAP-security-product finding (field mappings are tenant-specific; normalize first):

```kusto
let win = 1h;
Normalize_ABAPAudit()
| where TransactionCode in ("SU01","PFCG","SM59","SE16N")
| join kind=leftouter (Normalize_Onapsis()) on $left.User == $right.SapUser
| where isnotempty(FindingId) and TimeGenerated1 between (TimeGenerated .. TimeGenerated + win)
| project TimeGenerated, SystemId, User, TransactionCode, FindingId, OnapsisSeverity=Severity
```

Production validation, troubleshooting, and runbook

For acceptance, I validate in this order: table creation, freshness/lag percentiles, connector health state, and a cross-check of event counts against the upstream system for the same UTC window (where available). Connector health monitoring is built around SentinelHealth plus the Data collection health workbook. For SAP agentless ingestion, Microsoft states that most troubleshooting happens in the Integration Suite message logs—this is where I triage authentication/networking failures before tuning KQL. For Onapsis/SecurityBridge-style ingestion, I validate Entra app auth, the DCR permission assignment (Monitoring Metrics Publisher), and a minimal ingestion test payload using the Logs Ingestion API tutorial flow.
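The volume-drop continuity control reduces to a ratio test that is easy to unit-test outside the workspace. A pure-Python sketch of the same logic (the hourly bins and 0.2 threshold mirror the KQL rule; the `(system_id, hour_bin)` event shape is illustrative):

```python
from collections import defaultdict

def volume_drop_systems(events, threshold=0.2):
    """events: iterable of (system_id, hour_bin) tuples.
    Flags systems whose most recent hourly count falls below
    threshold * average hourly count — the same test as the KQL rule."""
    per_hour = defaultdict(int)
    for system_id, hour in events:
        per_hour[(system_id, hour)] += 1
    by_system = defaultdict(dict)
    for (system_id, hour), count in per_hour.items():
        by_system[system_id][hour] = count
    flagged = []
    for system_id, hours in by_system.items():
        avg = sum(hours.values()) / len(hours)
        latest = hours[max(hours)]          # count in the latest hour bin
        if latest < avg * threshold:
            flagged.append(system_id)
    return flagged
```

The same harness is useful for tuning the threshold per system before deploying the scheduled rule.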
Operational runbook items I treat as non-optional: health alerts on connector failure and freshness drift; scheduled-rule anti-gap logic; playbooks that capture evidence bundles (an ABAPAuditLog slice plus user context from ABAPUserDetails/ABAPAuthorizationDetails); DCR filters to reduce noise and cost; and change control for normalization functions and watchlists.

SOC "definition of done" checklist (short):

1) Tables present and steadily ingesting;
2) P95 ingestion delay measured and rules use the anti-gap pattern;
3) SentinelHealth enabled with alerts;
4) SOC-owned normalization functions deployed;
5) At least one privileged-tcode rule, one change-correlation rule, and one audit-continuity rule in production.

Unifying AWS and Azure Security Operations with Microsoft Sentinel
The Multi-Cloud Reality

Most modern enterprises operate in multi-cloud environments, using Azure for core workloads and AWS for development, storage, or DevOps automation. While this approach increases agility, it also expands the attack surface. Each platform generates its own telemetry:

- Azure: Activity Logs, Defender for Cloud, Entra ID sign-ins, Sentinel analytics
- AWS: CloudTrail, GuardDuty, Config, and CloudWatch

Without a unified view, security teams struggle to detect cross-cloud threats promptly. That's where Microsoft Sentinel comes in, bridging Azure and AWS into a single, intelligent Security Operations Center (SOC).

Architecture Overview

Connect AWS Logs to Sentinel

AWS CloudTrail via S3 Connector

1. Enable the AWS CloudTrail connector in Sentinel.
2. Provide your S3 bucket and an IAM role ARN with read access.
3. Sentinel automatically normalizes the logs into the AWSCloudTrail table.

AWS GuardDuty Connector

1. Use the AWS GuardDuty API integration for threat-detection telemetry.
2. Detected threats, such as privilege escalation or reconnaissance, appear in Sentinel in the AWSGuardDuty table.

Normalize and Enrich Data

Once logs are flowing, enrich them to align with Azure activity data.
Example KQL for mapping CloudTrail to Sentinel entities:

```kusto
AWSCloudTrail
| extend AccountId = tostring(parse_json(Resources)[0].accountId)
| extend User = tostring(parse_json(UserIdentity).userName)
| extend IPAddress = tostring(SourceIpAddress)
| project TimeGenerated, EventName, User, AccountId, IPAddress, AWSRegion
```

Then correlate AWS and Azure activities:

```kusto
let AWS = AWSCloudTrail
    | summarize AWSActivity = count() by User, bin(TimeGenerated, 1h);
let Azure = AzureActivity
    | summarize AzureActivity = count() by Caller, bin(TimeGenerated, 1h);
AWS
| join kind=inner (Azure) on $left.User == $right.Caller
| where AWSActivity > 0 and AzureActivity > 0
| project TimeGenerated, User, AWSActivity, AzureActivity
```

Automate Cross-Cloud Response

Once incidents are correlated, Microsoft Sentinel Playbooks (Logic Apps) can automate your response.

Example Playbook: "CrossCloud-Containment.json"

1. Disable the user in Entra ID
2. Send a command to the AWS API via Lambda to deactivate the IAM key
3. Notify the SOC in Teams
4. Create a ServiceNow ticket

```http
POST https://api.aws.amazon.com/iam/disable-access-key

PATCH https://graph.microsoft.com/v1.0/users/{user-id}
{
  "accountEnabled": false
}
```

Build a Multi-Cloud SOC Dashboard

Use Sentinel Workbooks to visualize unified operations:

Query 1 – CloudTrail Events by Region

```kusto
AWSCloudTrail
| summarize Count = count() by AWSRegion
| render barchart
```

Query 2 – Unified Security Alerts

```kusto
union SecurityAlert, AWSGuardDuty
| summarize TotalAlerts = count() by ProviderName, Severity
| render piechart
```

Scenario

Incident: A compromised developer account accesses EC2 instances on AWS and then logs into Azure via the same IP.
Detection Flow:

1. CloudTrail logs → Sentinel detects unusual API calls
2. Entra ID sign-ins → Sentinel correlates the IP and user
3. Sentinel incident triggers the playbook → disables the user in Entra ID, suspends the AWS IAM key, notifies the SOC

Strengthen Governance with Defender for Cloud

Enable Microsoft Defender for Cloud to:

- Monitor both Azure and AWS accounts from a single portal
- Apply CIS benchmarks for AWS resources
- Surface findings in Sentinel's SecurityRecommendations table
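The detection flow hinges on seeing the same identity active in both clouds within a tight window. A pure-Python sketch of the core correlation (event shapes are illustrative, not the actual table schemas; keying on user plus hour bin is slightly stricter than the sample query's user-only join):

```python
from collections import Counter
from datetime import datetime, timezone

def hour_bin(ts):
    """Truncate a timestamp to its hour, mirroring bin(TimeGenerated, 1h)."""
    return ts.replace(minute=0, second=0, microsecond=0)

def cross_cloud_hits(aws_events, azure_events):
    """aws_events: iterable of (user, timestamp); azure_events: (caller, timestamp).
    Returns {(user, hour): (aws_count, azure_count)} for identities active in
    BOTH clouds during the same hour — the inner-join condition of the KQL."""
    aws = Counter((user, hour_bin(ts)) for user, ts in aws_events)
    azure = Counter((caller, hour_bin(ts)) for caller, ts in azure_events)
    return {key: (aws[key], azure[key]) for key in aws.keys() & azure.keys()}
```

Each hit is a candidate incident for the containment playbook; in production the user/caller fields would first be normalized so the same identity matches across clouds.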