# Accelerate Your UEBA Journey: Introducing the Microsoft Sentinel Behaviors Workbook
In our recent announcement, we introduced the UEBA Behaviors layer - a breakthrough capability that transforms noisy, high-volume security telemetry into clear, human-readable behavioral insights. The Behaviors layer answers the critical question "Who did what to whom, and why does it matter?" by aggregating and sequencing raw events into normalized, MITRE ATT&CK-mapped behaviors.

But understanding what behaviors are is just the beginning. The next question for SOC teams is: "How do I actually use behaviors to get value from day one?"

Today, we are announcing the Microsoft Sentinel Behaviors Workbook (part of the UEBA Essentials solution in the content hub) - a purpose-built, interactive analytics workbook that helps you extract maximum value from the Behaviors layer across your investigation, hunting, and detection workflows. Whether you're a SOC manager looking for high-level situational awareness, an analyst triaging an incident, or a threat hunter searching for hidden threats, this workbook provides the insights you need, when you need them. And the best part? You can always make it your own.

## Why a Workbook?

While the behaviors data is incredibly rich, knowing where to start and what questions to ask can present a learning curve. The UEBA Behaviors Workbook solves this by providing pre-built, validated analytics across three core SOC workflows:

- **Overview:** High-level metrics and trends for leadership and SOC managers
- **Investigation:** Deep-dive timeline analysis for incident response
- **Hunting:** Proactive threat discovery with anomaly detection and attack chain analysis

Think of the workbook as your guided tour through the Behaviors layer - it surfaces the most actionable insights automatically, while still giving you the flexibility to drill down and customize as needed.

## Quick Recap: What Are UEBA Behaviors?

Before diving into the workbook, let's briefly recap what makes the Behaviors layer unique:

- Behaviors are neutral, descriptive observations - they explain what happened, not whether it's malicious.
- They aggregate and sequence raw events from sources like AWSCloudTrail, GCPAuditLog, and CommonSecurityLog into unified, human-readable summaries.
- Each behavior is enriched with MITRE ATT&CK mappings, entity roles, and natural-language explanations.
- They bridge the gap between raw logs and alerts, providing an abstraction layer that makes investigation and detection dramatically faster.

In essence: behaviors turn "what's in the logs" into "what actually happened in my environment" - without requiring deep expertise in every log schema.

## The Behaviors Workbook: Three Tabs, Three Workflows

The Behaviors Workbook is organized into three tabs, each designed for a specific SOC persona and use case. Let's walk through each one.

### Tab 1: Overview - Situational Awareness at a Glance

**Who it's for:** SOC managers, leadership, and anyone who needs a quick pulse check on what's happening in the environment.

**What it provides:** The Overview tab delivers high-level metrics and visualizations - key metric tiles, timeline trend charts, MITRE coverage heatmaps, and behavior type distribution - that help you quickly spot spikes, patterns, or anomalies requiring investigation.

**Use case example:** A SOC manager opens the Overview tab and immediately sees an unusual spike in behaviors concentrated in the Defense Evasion and Persistence tactics.
The Behavior Type Distribution reveals a surge in "Failed IAM Identity Provider Configuration Attempts" and "AWS EC2 Security Group Rule Modifications Observed", signaling potential attack preparation that needs immediate triage.

### Tab 2: Hunting - Proactive Threat Discovery

**Who it's for:** Threat hunters, purple teams, and proactive security analysts.

**What it does:** The Hunting tab empowers hunters to discover emerging threats before they become incidents by surfacing anomalous patterns, rare behaviors, and potential attack chains. Unlike the Investigation tab (which reacts to known incidents), Hunting is about proactive discovery.

**Key capabilities** are best shown through a few examples:

**Use case example: Rarest Behaviors.** A threat hunter reviews the "Rarest Behaviors" panel, filtered to the past 7 days. They notice a behavior titled "Inbound remote management session from external address" that has occurred only 5 times in the entire environment. Pivoting to the BehaviorEntities table, they discover all 5 instances involve Palo Alto firewall logs showing the same external IP targeting different internal management interfaces - a clear sign of targeted reconnaissance.

**Use case example: Attack Chain Detector.** The Attack Chain Detector highlights an AWS IAM role (arn:aws:iam::123456789012:role/CompromisedRole) appearing across 5 distinct MITRE tactics: Reconnaissance, Persistence, Defense Evasion, Credential Access, and Impact. Reviewing the associated behaviors reveals a multi-stage pattern - invisible when looking at individual CloudTrail events - that is now crystal clear. The hunter initiates an immediate investigation.

**Use case example: CyberArk Vault Anomaly.** The workbook shows that the "CyberArk Vault CPM Automatic Detection Operations" behavior averaged 120 instances per day over the past week, but today it has 1,847 instances - a 15x increase. Drilling into the behaviors reveals that a single service account is performing mass privileged account access operations across multiple safes - a potential insider threat or compromised privileged account. This insight would have been buried in verbose Vault audit logs, but velocity tracking surfaces it immediately.

### Tab 3: Investigation - Deep-Dive Analysis for Incident Response

**Who it's for:** SOC analysts, incident responders, and anyone investigating a specific incident or specific entities.

**What it does:** The Investigation tab transforms how analysts respond to incidents by providing comprehensive behavioral context for the entities involved. Instead of manually querying multiple log sources and stitching together timelines, analysts get an automated, pre-correlated view of everything those entities did before, during, and after the incident.

**How to use it:** When investigating an incident, you provide:

- The entities involved (users, machines, IPs, etc.)
- The time of incident generation
- The time range before the first alert (e.g., 24 hours before)
- The time range after the last alert (e.g., 12 hours after)

**Use case example:** An alert fires for "Suspicious AWS IAM Activity" involving IAM user AdminUser123. The analyst opens the Investigation tab, enters the user identity as the entity, sets the incident time, and configures a 24-hour lookback and 12-hour look-forward window. The analyst immediately sees:

- **Before the incident:** Normal behaviors like "AWS EC2 Security Group Information Retrieval" show routine reconnaissance.
- **During the incident:** Multiple instances of "Failed IAM Identity Provider Configuration Attempts", indicating the attacker is trying to establish persistence through SAML federation.
- **After the incident:** "AWS Resource Deletion Monitoring" behaviors showing a potential attempt to clean up evidence.

This comprehensive view - which would have taken 30+ minutes of manual querying across CloudTrail, VPC Flow Logs, and IAM logs - is now available in seconds, in an easily readable form with rich context.

## Real Impact on Your SOC

The Behaviors Workbook represents a fundamental shift in how SOCs can operate:

- **Investigation time** drops from hours to minutes through automated entity-centric behavioral analysis.
- **Threat hunting** becomes accessible to junior analysts through pre-built queries that surface rare behaviors and attack chains.
- **Leadership** gains visibility into MITRE ATT&CK coverage and behavior trends without needing to know KQL.
- **Detection engineering** is faster because rare behaviors and velocity anomalies are automatically surfaced as high-fidelity signals.

The workbook doesn't just give you data - it gives you insights you can act on immediately.

## Getting Started

Prerequisites:

- A Microsoft Sentinel workspace onboarded to the Microsoft Defender portal.
- The Behaviors layer enabled for your workspace (Settings → UEBA → Behaviors layer) and at least one supported data source configured (the list is kept up to date in the documentation).

The workbook uses the Log Analytics table names SentinelBehaviorInfo and SentinelBehaviorEntities. The "Sentinel" prefix isn't needed when querying behaviors in Advanced Hunting.

Installation:

1. Navigate to Microsoft Sentinel → Content hub.
2. Search for the "UEBA Essentials" solution in the gallery.
3. Click Save to add it to your workspace. One of the content items is the UEBA Behaviors Workbook (you will also find a UEBA workbook and hunting queries there to get you started with UEBA).
4. Open the workbook and select your time range and parameters.
5. Adjust the queries as needed for your use cases.

## We Want Your Feedback

As you start using the workbook, let us know:

- Which tab do you find most valuable?
- What additional visualizations or hunting queries would help your workflow?
- What should be integrated into the portal, and where?

Share your thoughts in the comments below or reach out to our team directly. For more details on the Behaviors layer, see our original announcement blog post and https://learn.microsoft.com/azure/sentinel/behaviors-overview. You will find these links in the "Resources" tab of the workbook for ease of use.
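And if you want to go beyond the built-in panels, the behaviors tables are easy to query directly. Below is a minimal sketch of a rarest-behaviors query, similar in spirit to the Hunting tab's "Rarest Behaviors" panel. The column names BehaviorName and Tactics are illustrative assumptions, not confirmed schema; check the SentinelBehaviorInfo columns in your workspace before relying on it.

```kusto
// Minimal sketch: surface the rarest behavior types seen in the last 7 days.
// BehaviorName and Tactics are assumed column names; verify the actual
// SentinelBehaviorInfo schema in your workspace first.
SentinelBehaviorInfo
| where TimeGenerated > ago(7d)
| summarize Occurrences = count(), LastSeen = max(TimeGenerated)
    by BehaviorName, Tactics = tostring(Tactics)
| where Occurrences <= 5            // "rare" in this environment; tune as needed
| order by Occurrences asc
| take 20
```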
# Three ways to run KQL on Microsoft Sentinel data lake: Interactive, Async, or Jobs

SOC analysts often face complex challenges during investigations. They frequently need to investigate incidents that span weeks or even months, not just hours or days. This requires correlating multiple high-volume data sources such as sign-ins, network traffic, endpoint activity, and logs. Analysts often need to revisit query results throughout an active investigation and must determine whether findings are temporary insights or should evolve into long-term detections. Hunting and anomaly detection, in particular, demand access to months of historical data to uncover patterns and validate threats.

With Microsoft Sentinel data lake, you can run KQL directly on data in the lake, unlocking flexibility for investigations and hunting. The key is choosing the right execution model for your scenario. Quick, interactive queries are great for initial exploration but may time out on complex queries or large datasets. Deep investigations require a more robust approach, while operationalized security outcomes call for scheduled jobs.

"How should I run my KQL queries on the data lake to balance efficiency and speed?" If you've ever run a long query, waited... waited some more... and then hit a timeout, you've already met the limits of interactive KQL queries. The good news is that Sentinel data lake gives you multiple ways to query large volumes of security data, each designed for a different purpose.

In this post, we'll walk through:

- When to use interactive KQL queries
- When to run async KQL queries
- When to consider using KQL jobs

We'll explain how these options work in real SOC workflows and guide you in selecting the right KQL execution mode, so you can avoid unnecessary frustration and re-runs.

## Understanding the three KQL options for Sentinel data lake

When working with Sentinel data lake, you'll typically choose between three query execution modes:

- Interactive KQL queries
- Async KQL queries
- KQL jobs

They all use Kusto Query Language (KQL), but they differ in how long they run, how results are stored, and who they're best for. Let's break them down.

## 1. Interactive KQL queries: Ad-hoc, fast access, and temporary results

Analysts often begin with interactive queries because they are simple, fast, and ideal for exploring data.

**When to use interactive queries.** Use interactive queries when:

- You're running queries for ad-hoc investigations
- You want to validate a query before putting it in a KQL job for scheduling
- The dataset is small to moderate
- You only need to view results once
- You want immediate feedback
- The query completes quickly (ideally within 2-3 minutes, a reasonable time for an analyst to wait interactively)

**Common use cases:**

- Checking recent network logs or sign-in failures (e.g., last 24 hours)
- Exploring a suspicious IP over a short time window (e.g., last 24 hours)
- Verifying a hypothesis during triage

**What to expect:**

- Designed for quick execution
- Best for short lookback periods
- Queries may time out after ~7-8 minutes
- Results do not persist beyond the session

Interactive queries are ideal for exploratory analysis, but they are not suited to heavy lifting across large datasets or long lookbacks of data in the lake.

## 2. Async queries: Long-running investigations with less risk of timeout

Async queries are a new data lake feature where things get interesting, especially for incident investigations involving larger datasets. Instead of waiting for results in real time, async queries run in the background, and you can check their progress in the Async queries tab.
Results are kept in the data lake hot cache for quicker retrieval on demand, for up to 24 hours after execution.

**When to use async queries.** Async queries are a great fit when:

- You're querying larger datasets or need a longer lookback window; async KQL queries can run for up to one hour, so use them when you suspect your query would time out interactively
- A small group of analysts needs fast access to the same results
- You don't want to hydrate data into the Analytics tier or create a custom table for it
- You don't need to use the results in your detection rules

**Common use cases:**

- Exploring a suspicious IP over a time window
- Pulling data from the lake for an incident investigation that multiple analysts need to access for a short period of time

**Key benefits:**

- Queries can run for up to one hour
- Results are stored in the data lake hot cache
- Results remain available for up to 24 hours
- Multiple users can fetch the results
- Fast re-access without re-running the query on cold storage
- No need to permanently move data into the Analytics tier

This makes async queries especially useful during active incidents, where one or two SOC analysts may need to revisit results multiple times while pivoting their investigation.

## 3. KQL jobs: For persistent results or custom detection rules

KQL jobs are designed for persistence and reuse. Instead of just querying data, a KQL job hydrates results from the data lake into the Analytics tier, either as a one-time job or on a schedule.

**When to use KQL jobs.** Use KQL jobs when:

- You need the results long term
- Data should be available for detections or dashboards
- You want to schedule recurring queries
- Multiple teams or detections depend on the output
- You're summarizing logs from the data lake into the Analytics tier

**Common use cases:**

- One-time jobs: incident investigations spanning larger datasets from the lake
- Scheduled jobs: high-volume log summarization, anomaly detection, IoC matching, and similar use cases
- Microsoft Sentinel workbooks and dashboards built on top of Analytics-tier data
- Producing enriched datasets for ongoing monitoring

**Important considerations:**

- KQL jobs can run for up to one hour in the background
- Results are stored permanently (unless the custom table is deleted)
- Best when query output becomes part of your proactive hunting process

Think of KQL jobs as turning raw lake data into a reusable security asset.

## Putting it all together: A sample investigation scenario

Let's walk through a realistic SOC scenario to see how these query types work together.

**Scenario: Suspicious IP activity over 90 days.** An analyst is investigating a potentially malicious IP address reported by threat intelligence. The IP may have been active over the past 90 days.

### Step 1: Start interactive

Let's say you decided to store SecurityEvent logs in lake-only mode. An analyst on your team begins with an interactive KQL query on the data lake to quickly check recent activity:

```kusto
SecurityEvent
| where TimeGenerated > ago(24h)
| where IpAddress == "203.0.113.45"
| summarize count() by EventLevelName
```

This runs quickly and confirms suspicious activity over a short lookback such as the last 24 hours.

### Step 2: Switch to async for scale

To understand the full scope, the analyst expands the lookback to 90 days and joins multiple data sources for the same IoC. The query may be too slow for interactive execution.
So they run it as an async query:

```kusto
union SigninLogs, SecurityEvent, CommonSecurityLog
| where TimeGenerated > ago(90d)
| where IPAddress == "203.0.113.45"
| summarize FirstSeen = min(TimeGenerated),
            LastSeen = max(TimeGenerated),
            EventCount = count()
    by IPAddress, Category
```

After the analyst provides a name for the query, execution begins. The query runs in the background and may take a few minutes; you can always check its progress in the Async queries tab. Once the query completes successfully, the results are cached in the data lake. Over the next few hours, the analyst revisits the results multiple times while pivoting to related entities, without having to wait for query execution against cold storage.

### Step 3: Operationalize with a KQL job

The investigation reveals a recurring attack pattern that leadership wants monitored continuously. A KQL job is created to:

- Run once or on a schedule (by minutes, daily, weekly, or monthly)
- Hydrate results into a custom Analytics table
- Power custom detection rules and dashboards with the results in the Analytics tier

Now the insights move from investigation to ongoing defense. Read our previous blog posts for more on how to run KQL jobs on the data lake.

## How to choose the right option

Here's a simple way to decide:

- Need quick answers now? → Use interactive queries
- Query is big, slow, or spans long timeframes? → Use async queries
- Results must be reused, scheduled, or used in custom detection rules? → Use KQL jobs

Each option exists to reduce friction at a specific stage of analysis, from curiosity, to investigation, to operationalization.

## Final thoughts

Microsoft Sentinel data lake gives security teams flexibility at scale when the right query mode is used at the right time. Interactive queries keep investigations fast and exploratory. Async queries unlock deep, long-running analysis with higher timeout limits. KQL jobs turn insights into durable security capabilities. Still need to run a query on massive datasets or longer lookbacks? Try the Notebooks capabilities on the data lake. Once you choose the right option for your scenarios, querying data in the lake becomes less about limits and more about possibilities. Happy hunting!

## Resources

Get started with Microsoft Sentinel data lake and KQL today:

- Microsoft Sentinel data lake overview - Microsoft Security | Microsoft Learn
- KQL and the Microsoft Sentinel data lake - Microsoft Security | Microsoft Learn
- Create jobs in the Microsoft Sentinel data lake - Microsoft Security | Microsoft Learn
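To make Step 3 a little more concrete, here is a minimal sketch of the kind of query body a scheduled KQL job might run, rolling the scenario's investigation up into a daily summary. The destination table is configured when you create the job, and the hard-coded IoC is an illustrative assumption; in practice you would typically match against a watchlist or threat intelligence table rather than a literal.

```kusto
// Minimal sketch of a scheduled KQL job body. The job's destination table
// (e.g., a custom table such as SuspiciousIpActivity_CL) is configured at
// job creation time. The literal IoC is illustrative; a real job would
// usually join against a watchlist or TI table instead.
union SigninLogs, SecurityEvent, CommonSecurityLog
| where TimeGenerated > ago(1d)                     // the job's recurring window
| where IPAddress == "203.0.113.45"
| summarize EventCount = count(),
            FirstSeen = min(TimeGenerated),
            LastSeen = max(TimeGenerated)
    by IPAddress, Type                               // Type = source table name
```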
# Unlocking the power of Notebooks with Microsoft Sentinel data lake

Co-authors: Vandana Mahtani, Ashwin Patil

Security operations are rapidly evolving, driven by AI and the need for scalable, cost-effective analytics. A key differentiator of the Sentinel data lake is its native integration with Jupyter Notebooks, which brings powerful data science and machine learning capabilities directly into security operations. Analysts can move beyond static queries to run interactive investigations, correlate large and diverse datasets, and prototype advanced analytics using familiar tools and languages. By combining notebooks with Sentinel's security context, teams can build custom detection logic, enrich investigations with ML models, and automate complex workflows. The result is faster insights, deeper analysis, and more efficient security operations, enabling SOC teams to innovate and respond at the speed required by today's threat landscape.

## Hunt with Sentinel notebooks

Notebooks in Sentinel data lake give security teams a powerful, interactive way to investigate and hunt across their security data at scale:

- **Query and analyze massive datasets:** Run Spark queries across months or years of security telemetry (with higher thresholds than KQL), uncovering slow-moving threats and persistent attack patterns.
- **Automate threat hunting:** Schedule recurring jobs to scan for matches against newly ingested indicators of compromise (IOCs), enabling continuous detection and investigation.
- **Build and operationalize ML models:** Use Python, Spark, and built-in libraries to create custom anomaly detection, alert enrichment, and predictive analytics workflows.
- **Enrich alerts and investigations:** Correlate alerts with firewall, NetFlow, and other logs - often stored only in the data lake - to reduce false positives and accelerate triage.
- **Collaborate and share insights:** Notebooks provide a transparent, reproducible environment for sharing queries, findings, and visualizations built with Python libraries like Plotly that are not natively available in Sentinel.

## Cost-efficient, scalable analytics

Sentinel data lake's tiered storage and flexible retention mean you can ingest and store all your raw telemetry - network logs, firewall data, and more - at a fraction of the cost of traditional solutions. Notebooks help you unlock the value of this data, transforming raw logs into actionable insights with minimal manual intervention.

## Notebooks and KQL jobs in Microsoft Sentinel data lake

Both notebooks and KQL jobs enable teams to query and analyze data within Microsoft Sentinel data lake, but they serve very different purposes.

| Dimension | Notebooks (Spark runtime) | KQL jobs (data lake KQL engine) |
|---|---|---|
| Execution model | Distributed compute using Apache Spark; ideal for heavy ETL, transformation, or ML workloads; supports programmatic querying | Query execution using the KQL engine, optimized for analytical queries over structured data lake tier data |
| Language & flexibility | Full Python ecosystem (Pandas, PySpark, MLlib, etc.) out of the box in the cluster; ideal for data wrangling, ML, and automation pipelines | Familiar KQL syntax, purpose-built for log analytics, filtering, and aggregation; best for converting your expensive queries in the pipelines |
| Data access | Direct access to raw and curated tables stored in data lake tiers; can join across multiple workspaces or schemas | Access to data lake tier tables, including mirrored tables from the Analytics tier as well as curated tables from other jobs |
| Performance & scale | Highly scalable distributed compute for transformation-heavy workloads | Optimized for low-latency query response and cost-efficient read operations; ideal for investigative queries |
| Use case fit | Advanced analytics, feature engineering, baselining, anomaly detection, enrichment pipelines | Operational queries, scheduled detections, and validation jobs |
| Automation | Can be orchestrated via scheduled Spark jobs within the VS Code extension; supports end-to-end ETL + ML automation via Python and Spark notebooks | Can be scheduled and parameterized for recurring jobs (e.g., daily data quality checks or detection lookups) |
| Collaboration & reproducibility | Shared notebooks with code, outputs, and markdown for team review and reproducibility | Shared job definitions and saved query templates can be maintained with version control; less narrative, more operational |
| Visualization | Leverage advanced libraries (Plotly, Seaborn, Matplotlib) for rich visuals, all available in the Spark compute cluster | Jobs output to tables, which can then be used via KQL rendering (timechart, barchart) for validation or quick insights |
| Extensibility | Currently limited libraries (Azure Synapse libraries 3.4), but support will be extended to bring-your-own libraries and Python dependencies post-Fabric integration | Limited to native KQL functions; extensibility via job scheduling and data connections |
| Skill profile | Data scientist / advanced security analyst / data engineer | Detection engineer / SOC analyst / operational analytics |
| Cost model | Advanced data insights meter based on vCore compute consumed (see Microsoft Sentinel Pricing) | Data lake query meter based on the GB processed (see Microsoft Sentinel Pricing) |

In practice, modern SOC teams increasingly treat notebooks and KQL jobs as complementary, not competing:

- **KQL for signal discovery → notebook for pattern analysis.** Use Defender out-of-the-box rules or custom detections to surface an interesting low-to-medium-fidelity signal (e.g., a spike in hourly failed logons across tenants compared to baseline), then pivot to a notebook for historical trend analysis across six months of data. (A sketch of such a signal-discovery query appears after the Getting started list below.)
- **Notebook for enrichment → KQL for operationalization.** A notebook creates a behavioral baseline table stored in the data lake; a KQL rule consumes that dataset daily to trigger alerts when deviations occur.
- **Notebook pipelines → data lake → Analytics tier → dashboard.** A scheduled notebook curates and filters raw logs into efficient partitioned data lake tables. These tables are then used via the lake explorer for ad-hoc hunting campaigns. Once the workflow consistently produces good true positives, elevate it to the Analytics tier and set up a custom detection to operationalize it for near-real-time detection.

Together, these workflows close the loop between research, detection, and response.

## Getting started: From query to automation

The enhanced notebook experience in Sentinel data lake makes it easy to get started:

- **Author queries with IntelliSense:** Benefit from syntax and table name suggestions for faster, error-free query writing.
- **Schedule notebooks as jobs:** Automate recurring analytics, such as hourly threat intelligence matching or daily alert summarization.
- **Monitor job health:** Use the Jobs dashboard to track job status, completions, failures, and historical trends.
- **Leverage GitHub Copilot:** Get intelligent assistance for code authoring and troubleshooting. Use GitHub Copilot plan mode to generate boilerplate code for complex workflow notebooks, and collaborate with Copilot to refine it further for your use case.
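As a concrete example of the "KQL for signal discovery" pattern described above, the sketch below compares each hour's failed sign-in count against a trailing baseline to surface a spike worth handing off to a notebook. It is a minimal illustration, assuming SigninLogs is available in your workspace; the 3x multiplier and 14-day baseline window are arbitrary starting points to tune.

```kusto
// Minimal signal-discovery sketch: flag hours where failed sign-ins exceed
// 3x the trailing 14-day hourly average. Thresholds are illustrative; tune
// them for your environment before treating this as a real detection.
let baseline = toscalar(
    SigninLogs
    | where TimeGenerated between (ago(14d) .. ago(1d))
    | where ResultType != "0"                     // failed sign-ins
    | summarize Failures = count() by bin(TimeGenerated, 1h)
    | summarize avg(Failures));
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType != "0"
| summarize Failures = count() by bin(TimeGenerated, 1h)
| where Failures > 3 * baseline
| order by TimeGenerated desc
```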
For a step-by-step guide, see the public documentation: Exploring and interacting with lake data using Jupyter Notebooks - Microsoft Security | Microsoft Learn. To see how quickly you can start using notebooks on Sentinel data lake, watch Getting Started with Apache Spark for Microsoft Sentinel data lake.

## Real-world scenarios

Here are some impactful ways customers are using notebooks in Sentinel data lake:

- **Extended threat investigations:** Query data older than 90 days to uncover slow-moving attacks like brute-force campaigns that span accounts and geographies.
- **Behavioral baselining:** Build time-series models to establish normal behavior and identify unusual patterns, such as credential abuse or lateral movement.
- **Retrospective threat hunting:** React to emerging IOCs by running historical queries across the data lake, enabling rapid and informed response.
- **ML-powered insights:** Operationalize custom machine learning models for anomaly detection and predictive analytics, directly within the notebook environment.

## Real-world example: Extending a detection with a notebook

**Challenge:** A security team wants to identify password spray activity occurring gradually over time - the same IPs attempting one or two logons per day across different users - without overwhelming the system with false positives or exceeding query limits.

Using a notebook (Spark runtime), analysts extend the investigation across months of raw SigninLogs stored in the data lake:

1. Load 6-12 months of sign-in data directly from lake storage.
2. Aggregate failed logons by IP, ASN, and user over time.
3. Apply logic to find IPs with repeated low-frequency failures spread across many accounts.
4. Build a statistics-based scoring model to find IP ranges behind potential slow password sprays.
5. Visualize long-term trends with Plotly or Matplotlib to expose consistent patterns.
6. Write the results back as a curated dataset for downstream detection via the Analytics tier, enriched with aggregated threat intel from your own organization.

For a complete walkthrough of the Sentinel data lake password spray solution, check out our GitHub repository: Password Spray Detection – End-to-End Pipeline.

The notebook uncovers subtle, recurring IP ranges performing slow password sprays - activity invisible to short-window rules. The enriched dataset, with statistical risk scoring and high/medium/low categorization, can then be used by a custom detection to generate proactive alerts when those IPs reappear (see the sketch at the end of this post).

## Call to action

Ready to transform your security operations? Get started with Microsoft Sentinel data lake today. Explore the possibilities with notebooks in Sentinel data lake:

- Running notebooks on the Microsoft Sentinel data lake - Microsoft Security | Microsoft Learn
- Create and manage Jupyter notebook jobs - Microsoft Security | Microsoft Learn
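As promised above, here is a minimal sketch of a detection consuming the notebook's curated output. The table name PasswordSprayIPs_CL and its columns IPAddress and RiskLevel are illustrative assumptions; align them with whatever schema your notebook pipeline actually writes.

```kusto
// Minimal sketch of a custom detection consuming the notebook's curated
// output. PasswordSprayIPs_CL, IPAddress, and RiskLevel are illustrative
// names; match them to the schema your pipeline actually produces.
let sprayIPs = PasswordSprayIPs_CL
    | where RiskLevel in ("High", "Medium")
    | distinct IPAddress;
SigninLogs
| where TimeGenerated > ago(1h)
| where IPAddress in (sprayIPs)
| project TimeGenerated, UserPrincipalName, IPAddress, ResultType, AppDisplayName
```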
# Update: Changing the Account Name Entity Mapping in Microsoft Sentinel

The upcoming update introduces more consistent and predictable entity data across analytics, incidents, and automation by standardizing how the Account Name property is populated when using UPN-based mappings in analytics rules. Going forward, the Account Name property will consistently contain only the UPN prefix, with new dedicated fields added for the full UPN and the UPN suffix. While this improves consistency and enables more granular automation, customers who rely on specific Account Name values in automation rules or Logic App playbooks may need to take action.

## Timeline

Effective date: July 1, 2026. The change will apply automatically - no opt-in is required.

## Scope of impact

Analytics rules that map a User Principal Name (UPN) to the Account Name entity field, where the resulting alerts are processed by automation rules or Logic App playbooks that reference the AccountName property.

## What's changing

**Currently:** When an analytics rule maps a full UPN (for example, 'user@domain.com') to the Account Name field, the resulting input value for the automation rule and/or Logic App playbook is inconsistent. In some cases it contains only the UPN prefix ('user'), and in other cases the full UPN ('user@domain.com').

**After July 1, 2026:**

- The Account Name property will consistently contain only the UPN prefix.
- The following new fields will be added to the entity object:
  - AccountName (UPN prefix)
  - UPNSuffix
  - UserPrincipalName (full UPN)

This change enables enhanced filtering and automation logic based on the new fields.

## Example

**Before:**

- Analytics rule maps: 'user@domain.com'
- Automation rule receives: Account Name: 'user' or 'user@domain.com' (inconsistent)

**After:**

- Analytics rule maps: 'user@domain.com'
- Automation rule receives: Account Name: 'user', UPNSuffix: 'domain.com'
- Logic App playbook and SecurityAlert table receive: AccountName: 'user', UPNSuffix: 'domain.com', UserPrincipalName: 'user@domain.com'

## Why does it matter?

If your automation logic relies on exact string comparisons against the full UPN stored in Account Name, those conditions may no longer match after the update. This most commonly affects:

- Automation rules using an "Equals" condition on Account Name
- Logic App playbooks comparing the entity field 'accountName' to a full UPN value

## Call to action

- Avoid strict equality checks against Account Name.
- Use flexible operators such as Contains or Starts with.
- Leverage the new UPNSuffix field for clearer intent.

Example update: before the change, Account Name may show as 'user' or 'user@domain.com'; after the change, it will show as 'user'. Recommended conditions:

- Account Name Contains / Starts with 'user'
- UPNSuffix Equals / Starts with / Contains 'domain.com'

This approach ensures compatibility both before and after the change takes effect.

## Where to update

Review any filters, conditions, or branching logic that depend on Account Name values:

- Automation rules: use the 'Account name' condition field.
- Logic App playbooks: update conditions referencing the entity field 'accountName'.

For example, an automation rule condition of Account Name Equals 'user@domain.com' before the change should become Account Name Contains 'user', optionally combined with UPNSuffix Equals 'domain.com', after the change.

## Summary

- A consistency improvement to Account Name mapping is coming on July 1, 2026.
- The change affects automation rules and Logic App playbooks that rely on UPN to Account Name mappings.
- New UPN-related fields provide better structure and control.
- Customers should follow the recommendations above before the effective change date.
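If you also consume alert entities from the SecurityAlert table in KQL, one future-proof pattern is to derive the prefix and suffix yourself rather than relying on the historical Account Name value. A minimal sketch, assuming your alerts carry account entities in the Entities JSON column:

```kusto
// Minimal sketch: derive the UPN prefix and suffix from account entities in
// SecurityAlert, so the logic works both before and after the July 2026 change.
SecurityAlert
| where TimeGenerated > ago(1d)
| mv-expand Entity = todynamic(Entities)
| where Entity.Type == "account"
| extend RawName = tostring(Entity.Name)
| extend AccountName = tostring(split(RawName, "@")[0]),   // UPN prefix
         UPNSuffix  = tostring(split(RawName, "@")[1])     // empty if no '@'
| project TimeGenerated, AlertName, AccountName, UPNSuffix
```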
# What's new in Microsoft Sentinel: February 2026

February brings a set of new innovations to Sentinel that help you work with security content across your SOC. This month's updates focus on how security teams ingest, manage, and operationalize content, with new connectors, multi-tenant content distribution capabilities, and an enhanced UEBA Essentials solution to surface high-risk behavior faster across cloud and identity environments. We're also introducing new partner-built agentic experiences available through Microsoft Security Store, enabling customers to extend Sentinel with specialized expertise directly inside their existing workflows. Together, these innovations help SOC teams move faster, scale smarter, and unlock deeper security insight without added complexity.

## Expand your visibility and capabilities with Sentinel content

### Seamlessly onboard security data with growing out-of-the-box connectors (general availability)

Sentinel continues to expand its connector ecosystem, making it easier for security teams to bring together data from across cloud, SaaS, and on-premises environments so nothing critical slips through the cracks. With broader coverage and faster onboarding, SOCs can unlock unified visibility, stronger analytics, and deeper context across their entire security stack. Customers can now use out-of-the-box connectors and solutions for:

- Mimecast Audit Logs
- CrowdStrike Falcon Endpoint Protection
- Vectra XDR
- Palo Alto Networks Cloud NGFW
- SocPrime
- Proofpoint on Demand (POD) Email Security
- Pathlock
- MongoDB
- Contrast ADR

For the full list of connectors, see our documentation. Share your input on what to prioritize next with our App Assure team.

### Microsoft 365 Copilot data connector (public preview)

The Microsoft 365 Copilot connector brings Microsoft 365 Copilot audit logs and activity data into Sentinel, giving security teams visibility into how Microsoft 365 Copilot is being used across their organization. Once ingested, this data can power analytics rules, custom detections, workbooks, automation, and investigations, helping SOC teams quickly spot anomalies, misuse, and policy violations. Customers can also send this data to the Sentinel data lake for advanced scenarios, such as custom graphs and MCP integrations, while benefiting from lower-cost ingestion and flexible retention. Learn more here.

### Transition your Sentinel connectors to the codeless connector framework (CCF)

Microsoft is modernizing data connectors by shifting from Azure Functions based connectors to the codeless connector framework (CCF). CCF enables partners, customers, and developers to build custom connectors that ingest data into Sentinel with a fully SaaS-managed experience, built-in health monitoring, centralized credential management, and enhanced performance. We recommend that customers review their deployed connectors and move to the latest CCF versions to ensure uninterrupted data collection and continued access to the latest Sentinel capabilities. As part of Azure's modernization of custom data collection, the legacy custom data collection API will be retired in September 2026.

### Centrally manage and distribute Sentinel content across multiple tenants (public preview)

For partners and SOCs managing multiple Sentinel tenants, you can now centrally manage and distribute Sentinel content across multiple tenants from the Microsoft Defender portal.
With multi-tenant content distribution, you can replicate analytics rules, automation rules, workbooks, and alert tuning rules across tenants instead of rebuilding the same detections, automation, and dashboards in one environment at a time. This helps you onboard new tenants faster, reduce configuration drift, and maintain a consistent security baseline while still keeping local execution in each target tenant under centralized control. Learn more: New content types supported in multi-tenant content distribution.

### Find high-risk anomalous behavior faster with an enhanced UEBA Essentials solution (public preview)

The UEBA Essentials solution now helps SOC teams uncover high-risk anomalous behavior faster across Azure, AWS, GCP, and Okta. With expanded multi-cloud anomaly detection and new queries powered by the Anomalies table, analysts can quickly surface the riskiest activity, establish reliable behavioral baselines, and understand anomalies in context without chasing noisy or disconnected signals. UEBA Essentials aligns activity to MITRE ATT&CK, highlights complex malicious IP patterns, and builds a comprehensive anomaly profile for users in seconds, reducing investigation time while improving signal quality across identity and cloud environments. UEBA Essentials is available directly from the Sentinel content hub, with 30+ prebuilt UEBA queries ready to deploy. Behavior analytics can be enabled automatically from the connectors page as new data sources are added, making it easy to turn deeper insight into immediate action. For more information, see: UEBA Solution Power Boost: Practical Tools for Anomaly Detection.

### Extend Sentinel with partner-built Security Copilot agents in Microsoft Security Store (general availability)

You can extend Sentinel with partner-built Security Copilot agents that are discoverable and deployable through Microsoft Security Store in the Defender experience. These AI-powered agents are created by trusted partners specifically to work with Sentinel, delivering packaged expertise for investigation, triage, and response without requiring you to build your own agentic workflows from scratch. These partner-built agents work with Sentinel analytics and incidents to help SOC teams triage faster, investigate deeper, and surface insights that would otherwise take hours of manual effort. For example, these agents can review Sentinel and Defender environments, map attacker activity, or automate forensic analysis and SOC reporting. BlueVoyant's Watchtower agent helps optimize Sentinel and Defender configurations, AdaQuest's Data Leak agent accelerates response by surfacing risky data exposure and identity misuse, and Glueckkanja's Attack Mapping agent automatically maps fragmented entities and attacker behavior into a coherent investigation story. Together, these agents show how the Security Store turns partner innovation into enterprise-ready, Security Copilot-powered capabilities that you can use in your existing SOC workflows. Browse these and more partner-built Security Copilot agents in the Security Store within the Defender portal. At Ignite, we announced the native integration of Security Store within the Defender portal.
Read more about the GA announcement here: Microsoft Security Store: Now Generally Available.

## Explore the Sentinel experience

### Enhanced reports in the Threat Intelligence Briefing Agent (general availability)

The Threat Intelligence Briefing Agent now applies a structured knowledge graph to Microsoft Defender Threat Intelligence, enabling it to surface fresher, more relevant threats tailored to a customer's specific industry and region. Building on this foundation, the agent also features embedded, high-fidelity Microsoft Threat Intelligence citations, providing authoritative context directly within each insight. With these advancements, security teams gain clearer, more actionable guidance and mitigation steps through context-rich insights aligned to their environment, helping them focus on what matters most and respond more confidently to emerging threats. Learn more: Microsoft Security Copilot Threat Intelligence Briefing Agent in Microsoft Defender.

### Microsoft Purview Data Security Investigations (DSI) integrated with Sentinel graph (general availability)

Sentinel now brings together data-centric and threat-centric insights to help teams understand risk faster and respond with more confidence. By combining AI-powered deep content analysis from Microsoft Purview with activity-centric graph analytics in Sentinel, security teams can identify sensitive or risky data, see how it was accessed, moved, or exposed, and take action from a single experience. This gives SOC and data security teams a full, contextual view of the potential blast radius, connecting what happened to the data with who accessed it and how, so investigations are faster, clearer, and more actionable. Start using the Microsoft Purview Data Security Investigations (DSI) integration with the Sentinel graph to give your analysts richer context and streamline end-to-end data risk investigations.

### Deadline to migrate the Sentinel experience from Azure to Defender extended to March 2027

To reduce friction and support customers of all sizes, we are extending the sunset date for managing Sentinel in the Azure portal to March 31, 2027. This additional time ensures customers can transition confidently while taking advantage of new capabilities that are becoming available in the Defender portal. Learn more about this decision, why you should start planning your move today, and find helpful resources here: UPDATE: New timeline for transitioning Sentinel experience to Defender portal.

## Events and webinars

Stay connected with the latest security innovations and best practices through global conferences and expert-led sessions that bring the community together to learn, connect, and explore how Microsoft is delivering AI-driven, end-to-end security for the modern enterprise.

### Join us at RSAC, March 23-26, 2026, at the Moscone Center in San Francisco

Register for RSAC and stop by the Microsoft booth to see our latest security innovations in action. Learn how Sentinel SIEM and platform help organizations stay ahead of threats, simplify operations, and protect what matters most. Register today!

### Microsoft Security webinars

Discover upcoming sessions on Sentinel SIEM & platform, Defender, and more. Sign up today and be part of the conversation that shapes security for everyone. Learn more about upcoming webinars.
## Additional resources

- Blogs: UPDATE: New timeline for transitioning Sentinel experience to Defender portal; Accelerate your move to Microsoft Sentinel with AI-powered SIEM migration tool; Automating Microsoft Sentinel: A blog series on enabling Smart Security; The Agentic SOC Era: How Sentinel MCP Enables Autonomous Security Reasoning
- Documentation: What Is a Security Graph?; SIEM migration tool; Onboarding to Microsoft Sentinel data lake from the Defender portal

## Stay connected

Check back each month for the latest innovations, updates, and events to ensure you're getting the most out of Sentinel. We'll see you in the next edition!
Today, we’re excited to announce the general availability (GA) of data lake tier ingestion for Microsoft XDR Advanced Hunting tables into Microsoft Sentinel data lake. Security teams continue to generate unprecedented volumes of high‑fidelity telemetry across endpoints, identities, cloud apps, and email. While this data is essential for detection, investigation, and threat hunting, it also creates new challenges around scale, cost, and long‑term retention. With this release, users can now ingest Advanced Hunting data from: Microsoft Defender for Endpoint (MDE) Microsoft Defender for Office 365 (MDO) Microsoft Defender for Cloud Apps (MDA) directly into Sentinel data lake, without requiring ingestion into the Microsoft Sentinel Analytics tier. Support for Microsoft Defender for Identity (MDI) Advanced Hunting tables will follow in the near future. Supported Tables This release enables data lake tier ingestion for Advanced Hunting data from: Defender for Endpoint (MDE) – DeviceInfo, DeviceNetworkInfo, DeviceProcessEvents, DeviceNetworkEvents, DeviceFileEvents, DeviceRegistryEvents, DeviceLogonEvents, DeviceImageLoadEvents, DeviceEvents, DeviceFileCertificateInfo Defender for Office 365 (MDO) – EmailAttachmentInfo, EmailEvents, EmailPostDeliveryEvents, EmailUrlInfo, UrlClickEvents Defender for Cloud Apps (MDA) – CloudAppEvents Each source is ingested natively into Sentinel data lake, aligning with Microsoft’s broader lake‑centric security data strategy. As mentioned above, Microsoft Defender for Identity will be available in the near future. What’s New with data lake Tier Ingestion Until now, Advanced Hunting data was primarily optimized for near‑real‑time security operations and analytics. As users extend their detection strategies to include longer retention, retrospective analysis, AI‑driven investigations, and cross‑domain correlation, the need for a lake‑first architecture becomes critical. With data lake tier ingestion, Sentinel data lake becomes a must-have destination for XDR insights, enabling users to: Store high‑volume Defender Advanced Hunting data efficiently at scale while reducing operation overhead Extend security analytics and data beyond traditional analytics lifespans for investigation, compliance, and threat research with up to 12 years of retention Query data using KQL‑based experiences across unified datasets with the KQL explorer, KQL Jobs, and Notebook Jobs Integrate data with AI-driven tooling via MCP Server for quick and interactive insights into the environment Visualize threat landscapes and relational mappings while threat hunting with custom Sentinel graphs Decouple storage and retention decisions from real‑time SIEM operations while building a more flexible and futureproof Sentinel architecture Enabling Sentinel data lake Tier Ingestion for Advanced Hunting Tables The ingestion pipeline for sending Defender Advanced Hunting data to Sentinel data lake leverages existing infrastructure and UI experiences. To enable Advanced Hunting tables for Sentinel data lake ingestion: Within the Defender Portal, expand the Microsoft Sentinel section in the left navigation. Go to Configuration > Tables. Find any of the listed tables from above and select one. Within the side menu that opens, select Data Retention Settings. Once the options open, select the button next to ‘Data lake tier’ to set the table to ingest directly into Sentinel data lake. Set the desired total retention for the data. Click save. 
This configuration keeps Defender data in each Advanced Hunting table for 30 days, where it remains accessible to custom detections and queries, while a copy of the logs is sent to Sentinel data lake for use with custom graphs and the MCP server, with optional retention of up to 12 years.

## Why data lake tier ingestion matters

**Built for scale and cost efficiency.** Advanced Hunting data is rich - and voluminous. Sentinel data lake enables users to store this data using a lake-optimized model designed for high-volume ingestion and long-term analytical workloads, while making it easy to manage table tiers and usage.

**A foundation for advanced analytics.** With Defender data co-located alongside other security and cloud signals, users can unlock:

- Cross-domain investigations across endpoint, identity, cloud, and email
- Retrospective hunting without re-ingestion
- AI-assisted analytics and large-scale pattern detection

**Flexible architecture for modern security teams.** Data lake tier ingestion supports a layered security architecture, where:

- Workspaces remain optimized for real-time detection and SOC workflows
- The data lake serves as the cost-effective and durable system for security telemetry

Users can choose the right level of ingestion depending on operational needs, without duplicating data paths or cost.

**Designed to work with existing Sentinel and XDR experiences.** This GA release builds on Microsoft Sentinel's ongoing investment in unified data configuration and management:

- Native integration with Microsoft Defender XDR Advanced Hunting schemas
- Alignment with existing Sentinel data lake query and exploration experiences
- Consistent management alongside other first-party and third-party data sources
- Consistent experiences within the Defender portal

No changes are required to existing Defender deployments to begin using data lake tier ingestion.

## Get started

To learn more about Microsoft Sentinel data lake and managing Defender XDR data within Sentinel, visit the Microsoft Sentinel documentation and explore how lake-based analytics can complement your existing security operations. We look forward to seeing how you use this capability to explore new detection strategies, perform deeper investigations, and build durable, long-term security practices.
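As a minimal illustration of the retrospective hunting this unlocks, the sketch below is the kind of query you might schedule as a KQL job (or run from the lake's KQL explorer) against the lake copy of DeviceLogonEvents, looking back well beyond the 30-day Advanced Hunting window. The IP literal and 180-day lookback are illustrative assumptions, and the column names follow the Log Analytics schema of the table; the available lookback depends on the retention you configured.

```kusto
// Minimal retrospective-hunt sketch against the lake copy of
// DeviceLogonEvents. The IP literal and 180d lookback are illustrative;
// actual lookback depends on the retention configured for the table.
DeviceLogonEvents
| where TimeGenerated > ago(180d)
| where RemoteIP == "203.0.113.45"
| summarize Attempts = count(),
            FirstSeen = min(TimeGenerated),
            LastSeen = max(TimeGenerated)
    by DeviceName, AccountName, ActionType
| order by Attempts desc
```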
# The Microsoft Copilot Data Connector for Microsoft Sentinel is Now in Public Preview

We are happy to announce a new data connector that is available to the public: the Microsoft Copilot data connector for Microsoft Sentinel. The new connector allows audit logs and activities generated by different Copilot offerings to be ingested into Microsoft Sentinel and Microsoft Sentinel data lake. This means Copilot activities can be leveraged within Microsoft Sentinel features such as analytics rules, custom detections, workbooks, automation, and more. It also allows Copilot data to be sent to Sentinel data lake, which opens up integrations with custom graphs, the MCP server, and more, while offering lower-cost ingestion and longer retention as needed.

## Eligibility for the connector

The connector is available to all customers within Microsoft Sentinel, but it will only ingest data for environments that have access to Copilot licenses and SCUs, since the activities rely on Copilot being used. These logs are available via the Purview Unified Audit Log (UAL) feed, which is available and enabled for all users by default. A big value of this new connector is that it eliminates the need to go to the Purview portal to see these activities: they are proactively brought into the workspace, enabling SOCs to build detections and proactively threat hunt on this information.

Note: This is a single-tenant connector, meaning it ingests data for the entire tenant it resides in. It is not designed to handle multi-tenant configurations.

## What's included in the connector

The following record types from the Office 365 Management API are supported as part of this connector:

- 261: CopilotInteraction
- 310: CreateCopilotPlugin
- 311: UpdateCopilotPlugin
- 312: DeleteCopilotPlugin
- 313: EnableCopilotPlugin
- 314: DisableCopilotPlugin
- 315: CreateCopilotWorkspace
- 316: UpdateCopilotWorkspace
- 317: DeleteCopilotWorkspace
- 318: EnableCopilotWorkspace
- 319: DisableCopilotWorkspace
- 320: CreateCopilotPromptBook
- 321: UpdateCopilotPromptBook
- 322: DeleteCopilotPromptBook
- 323: EnableCopilotPromptBook
- 324: DisableCopilotPromptBook
- 325: UpdateCopilotSettings
- 334: TeamCopilotInteraction
- 363: Microsoft365CopilotScheduledPrompt
- 371: OutlookCopilotAutomation
- 389: CopilotForSecurityTrigger
- 390: CopilotAgentManagement

These are great options for monitoring users who have permission to make changes to Copilot across the environment. This data can help identify anomalous interactions between users and Copilot, unauthorized access attempts, or malicious prompt usage.

## How to deploy the connector

The connector is available via the Microsoft Sentinel Content hub and can be installed today. To find the connector:

1. Within the Defender portal, expand the Microsoft Sentinel navigation in the left menu.
2. Expand Configuration and select Content hub.
3. In the search bar, search for "Copilot".
4. Click on the solution that appears and click Install.

Once the solution is installed, the connector can be configured by clicking on the connector within the solution and selecting Open connector page. To enable the connector, the user needs either the Global Administrator or Security Administrator role on the tenant. Once the connector is enabled, the data is sent to the table named CopilotActivity.

Note: Data ingestion costs apply when using this data connector. Pricing is based on the settings for the Microsoft Sentinel workspace or on the Microsoft Sentinel data lake tier pricing.
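Once data is flowing, a simple starting point is to baseline interaction volume per user and flag outliers. Below is a minimal sketch; the column names UserId and Operation are assumptions based on typical UAL-derived tables, so verify them against the CopilotActivity schema in your workspace before use.

```kusto
// Minimal sketch: flag users with an unusually high number of Copilot
// interactions today. UserId and Operation are assumed column names based
// on typical UAL-derived tables; verify against the CopilotActivity schema.
CopilotActivity
| where TimeGenerated > ago(1d)
| where Operation == "CopilotInteraction"
| summarize Interactions = count() by UserId
| where Interactions > 200          // illustrative threshold; tune per org
| order by Interactions desc
```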
As this data connector is in Public Preview, you can start deploying it right now! As always, let us know what you think in the comments so that we can continue to build what is most valuable to you. We hope this new data connector provides your SOC with high-value insights that empower your security operations.

## Resources

- Office 365 Management API record type list: https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-schema#auditlogrecordtype
- Purview Unified Audit Log library: Audit log activities | Microsoft Learn
- Copilot inclusion in the Microsoft E5 subscription: Learn about Security Copilot inclusion in Microsoft 365 E5 subscription | Microsoft Learn
- Microsoft Sentinel: What is Microsoft Sentinel SIEM? | Microsoft Learn
- Microsoft Sentinel platform: Microsoft Sentinel data lake overview - Microsoft Security | Microsoft Learn
# Microsoft Sentinel for SAP agentless connector GA

Dear Community,

Today is the day: our new agentless connector for the Microsoft Sentinel solution for SAP applications is generally available! It is fully onboarded to SAP's official Business Accelerator Hub and ready for prime time wherever your SAP systems are waiting to be protected - on-premises, hyperscalers, RISE, or GROW.

Let's hear from an agentless customer:

"With the Microsoft Sentinel Solution for SAP and its new agentless connector, we accelerated deployment across our SAP landscape without the complexity of containerized agents. This streamlined approach elevated our SOC's visibility into SAP security events, strengthened our compliance posture, and enabled faster, more informed incident response." - SOC Specialist, North American aviation company

Use the video below to kick off your own agentless deployment today. #Kudos to the amazing mvigilante for showing us around the new connector!

But we didn't stop there! Security is being reengineered for the AI era - moving from static, rule-based controls to platform-driven, machine-speed defence that anticipates threats before they strike. Attackers think in graphs - Microsoft does too. We're bringing relationship-aware context to Microsoft Security, so defenders and AI can see connections, understand the impact of a potential compromise (blast radius), and act faster across pre-breach and post-breach scenarios, including SAP systems - your crown jewels.

See it in action in the scenario below: a phishing compromise led to an SAP login bypassing MFA, followed by operating-system activities on the SAP host downloading trojan software. Enjoy this clickable experience for more details on the scenario.

[Figure: shows how a phishing compromise escalated to an SAP MFA bypass, highlighting cross-domain correlation.]

The Sentinel solution for SAP is built AI-first and integrates directly with our security platform on the Defender portal for enterprise-wide signal correlation, Security Copilot reasoning, and Sentinel data lake usage. Your real-time SAP detections operate on the Analytics tier for instant results and threat hunting, while the same SAP logs are mirrored to the lake for cost-efficient long-term storage (up to 12 years). Access that data for compliance reporting or historic analysis through KQL jobs on the lake. No more "yeah, I have the data stored somewhere" just to tick the audit-report check box - you can actually query and use your SAP telemetry in long-term storage at scale. Learn more here.

## Findings from the agentless connector preview

During our preview we learned that the majority of customers immediately benefit from the far smoother onboarding experience compared to the Docker-based approach. Deployment effort and time to first SAP log arrival in Sentinel went from days and weeks to hours.

## ⚠️ Deprecation notice for the containerized data connector agent ⚠️

The containerized SAP data connector will be deprecated on September 14th, 2026. This change aligns with the discontinuation of the SAP RFC SDK, SAP's strategic integration roadmap, and customer demand for simpler integration. Migrate to the new agentless connector for simplified onboarding and compliance with SAP's roadmap. All new deployments starting October 31, 2025, will only have the agentless connector option, and existing customers should plan their migration using the guidance on Microsoft Learn. The agentless connector is billed at the same price as the containerized agent, ensuring no cost impact for customers.
Note 📌: To support the transition for those of you on the Docker-based data connector, we have enhanced our built-in KQL functions for SAP to work across data sources for hybrid and parallel execution.

## Spotlight on new features

Inspired by the feedback of early adopters, we are shipping two of the most requested capabilities right away with GA:

- **Customizable polling frequency:** Balance threat detection value (1-minute intervals give the best value) against utilization of SAP Integration Suite resources based on your needs. ⚠️ Warning: increasing the intervals may result in message processing truncation to avoid SAP CPI saturation. See this blog for more insights, and refer to the max-rows parameter and the SAP documentation to make informed decisions.
- **Customizable API endpoint path suffix:** Flexible endpoints allow running all your SAP security integration flows from the agentless connector while adhering to your naming strategies. Furthermore, you can add community extensions like SAP S/4HANA Cloud public edition (GROW), the SAP Table Reader, and more.

[Figure: displays the simplified onboarding flow for the agentless SAP connector.]

You want more? Here is your chance to share additional feature requests to influence our backlog. We would like to hear from you!

## Getting started with agentless

The new agentless connector automatically appears in your environment - make sure to upgrade to the latest version, 3.4.05 or higher.

[Figure: Sentinel Content hub view, highlighting the agentless SAP connector tile in the Microsoft Defender portal, ready for one-click deployment and integration with your security platform.]

The deployment experience in Sentinel is fully automatic with a single button click: it creates the Azure Data Collection Endpoint (DCE), the Data Collection Rule (DCR), and a Microsoft Entra ID app registration assigned the RBAC role "Monitoring Metrics Publisher" on the DCR to allow SAP log ingest.

## Explore partner add-ons that build on top of agentless

The ISV partner ecosystem for the Microsoft Sentinel solution for SAP is growing to tailor the agentless offering even further. The current cohort includes flagship providers such as our co-engineering partner SAP SE with their security products SAP LogServ and SAP Enterprise Threat Detection (ETD), and our mutual partners Onapsis and SecurityBridge.

## Ready to go agentless?

- Get started from here.
- Explore partner add-ons here.
- Share feature requests here.

## Next steps

Once deployed, I recommend checking AryaG's insightful blog series for details on how to move to production with the built-in SAP content of agentless. Looking to expand protection to SAP Business Technology Platform? Here you go.

#Kudos to the amazing Sentinel for SAP team and our incredible community contributors! That's a wrap 🎬. Remember: bringing SAP under the protection of your central SIEM isn't just a checkbox - it's essential for comprehensive security and compliance across your entire IT estate.
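Before signing off, one small illustration of the compliance angle mentioned earlier. A scheduled KQL job on the lake could roll SAP audit events up into a monthly report along these lines. The sketch assumes the solution's built-in SAPAuditLog function is populated in your workspace, and the column names (SystemID, User, MessageID) are assumptions to verify against the function's actual schema.

```kusto
// Minimal compliance-report sketch for a scheduled KQL job over SAP audit
// data retained in the lake. SAPAuditLog is the Sentinel for SAP solution's
// built-in function; SystemID, User, and MessageID are assumed column names,
// so verify them against the function's schema in your workspace.
SAPAuditLog
| where TimeGenerated > ago(30d)
| summarize Events = count(),
            DistinctUsers = dcount(User)
    by SystemID, MessageID
| order by Events desc
```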
Cheers,
Martin

# New content types supported in multi-tenant content distribution

Onboard new tenants and maintain a consistent security baseline. We're excited to announce a set of new content types that are now supported by the multi-tenant content distribution capability in the Defender portal: you can now distribute analytics rules, automation rules, workbooks, and built-in alert tuning rules.

## What is content distribution?

Content distribution is a powerful multi-tenant feature that enables scalable management of security content across tenants. With this capability, you can create content distribution profiles in the multi-tenant portal that allow you to seamlessly replicate existing content - such as custom detection rules and endpoint security policies - from a source tenant to designated target tenants. Once distributed, the content runs on the target tenant, enabling centralized control with localized execution. This allows you to onboard new tenants quickly and maintain a consistent security baseline across tenants.

## New supported content types

With this release, we add support for several new content types:

- Analytics rules (Sentinel)
- Automation rules (Sentinel)
- Workbooks (Sentinel)
- Alert tuning rules (built-in rules)

Soon, we will introduce more content types, including URBAC roles.

## How it works

1. Navigate to Content distribution in Defender's multi-tenant management portal.
2. Create a new distribution profile or select an existing one.
3. In the Content selection step, select one of the new content types to distribute.
4. After choosing the content types, select the actual content you want to distribute - for example, the analytics rules you want to distribute to other tenants.
5. Use the filters to select which tenant (and workspace) to take the content from.
6. Choose at least one workspace to distribute the content to. You can select up to 100 workspaces per tenant.
7. Save the distribution profile, and the content will be synced to your target tenants.
8. Review the sync result in your distribution profile.

## Good to know

- Automation rules that trigger a playbook cannot currently be distributed.
- Alert tuning rules are currently limited to built-in rules; this will be expanded to custom rules later.

## Learn more

For more information, see Content distribution in multitenant management. To get started, navigate to Content distribution.

## FAQ

**What prerequisites are required?**

- Access to more than one tenant, with delegated access via Azure B2B, using multi-tenant management
- A subscription to Microsoft 365 E5 or Office E5

**What permissions are needed to distribute?**

Each content type requires you to have permission to create that content type on the target tenant. For example, creating analytics rules requires Sentinel Contributor permissions. To distribute content using multi-tenant management content distribution, the Security settings (manage) or Security data basics (read) permission is required. Both are assigned to the Security Administrator and Security Reader Microsoft Entra built-in roles by default.

**Can I update or expand distribution profiles later?**

Yes. You can add more content, include additional tenants, or modify scopes as needed.