Amazon Security Lake Integration with Microsoft Sentinel: Parquet at the Gates
In this blog post, we explore how centralized AWS Security Lake data can be transformed and streamed into Microsoft Sentinel using an AWS Lambda-based pipeline for near real-time ingestion.

A story of cost, control, and custom engineering at cloud scale

Every organization strives to design its security architecture in a way that addresses architectural complexity and supports cost-aware, scalable designs, depending on workload characteristics and implementation choices. Across multiple customer environments, we increasingly see a consistent pattern emerge: security telemetry from various AWS services is not ingested in isolation, but is deliberately centralized into Amazon Security Lake. This approach reflects a maturity in design, where organizations move beyond service-level integrations and instead adopt a unified data strategy.

Amazon Security Lake enables this by aggregating security data from services such as Route 53, WAF, Kubernetes, and others into a centralized, governed repository. The data is normalized and stored in an open, analytics-friendly format, often leveraging Apache Parquet, a columnar storage format optimized for large-scale processing and cost-efficient storage. This allows organizations to retain high volumes of security data while maintaining performance, and can improve storage efficiency depending on data volume, retention policies, and analytics patterns.

However, this architectural choice introduces a new consideration. Microsoft Sentinel, when integrated into such environments, typically expects ingestion through connector-driven pipelines and streaming event models. In contrast, Security Lake represents a batch-oriented, schema-driven data platform. Rather than treating this as a constraint, it becomes an opportunity to rethink how data should flow between these systems.

In this blog, we explore how a streaming bridge architecture can be implemented to align Amazon Security Lake with Microsoft Sentinel's ingestion model. The approach combines AWS Lambda with event-driven patterns to process data as it lands in Amazon S3, transform it into a Sentinel-compatible format, and stream it through Azure Event Hub into Microsoft Sentinel using Data Collection Rules (DCRs) and Data Collection Endpoints (DCEs). When configured appropriately, this approach supports lower-latency ingestion than batch-only processing models while preserving the lake-first architecture, allowing organizations to run analytics, visualization, and threat-hunting activities on the ingested data.

For demonstration, this implementation focuses on ingesting the following data sources from Amazon Security Lake:
Amazon EKS audit and runtime events
Route 53 DNS query logs
AWS WAF access logs
AWS Lambda execution activity
Amazon S3 access events

Note: The solution and code provided in this blog are not an officially supported Microsoft solution and do not guarantee performance, reliability, availability, or support. No service-level agreements (SLAs) are included. Readers are responsible for validating suitability for their environment.

Before we begin, let us briefly discuss Amazon Security Lake and the Parquet format.

Amazon Security Lake and the Parquet Constraint

Amazon Security Lake provides a centralized, S3-backed repository for security telemetry, addressing fragmentation across services, accounts, and regions.
Instead of service-level ingestion pipelines, logs from sources such as EKS, Route 53, WAF, Lambda, and S3 are aggregated into a single, governed data layer, enabling consistent visibility, separation of duties, and cost-efficient storage at scale. This data is stored in Apache Parquet, a columnar format optimized for analytics—delivering high compression, schema evolution, and efficient, selective reads across engines like Athena and Spark.

Microsoft Sentinel operates on a streaming ingestion model expecting JSON payloads, source-specific pipelines, and continuous event flows. In lake-first architectures, reintroducing service-level ingestion is neither practical nor efficient. The requirement, therefore, is to bridge the two models—preserving Parquet for storage while enabling event-driven ingestion at the point of consumption.

In the following sections, we will create an Event Hub and configure the Lambda function. Once streaming into Event Hub is working, we will configure the Data Collection Endpoint, Data Collection Rules, and Data Collection Rule associations that complete the ingestion pipeline from Event Hub to Sentinel.

Pre-requisites:
Log Analytics workspace where you have at least contributor rights.
Your Log Analytics workspace needs to be linked to a dedicated cluster or to have a commitment tier.
Event Hubs namespace that permits public network access. If public network access is disabled, ensure that "Allow trusted Microsoft services to bypass this firewall" is set to "Yes."
Event hubs with events flowing in. In this implementation, events are sent to Event Hubs by the AWS Lambda function configured in the steps below—no manual event sending is required.
Appropriate IAM roles in AWS accounts to configure the SQS queue, Lambda function, IAM policies, etc.

The Event Hub

To begin, we need to create an Event Hub. Azure Event Hubs is a fully managed, high-throughput event ingestion and streaming platform, designed to support high event volumes within documented service limits, subject to configuration and tier selection. Azure Event Hubs Documentation

SKUs (Standard vs Premium)
Standard tier is a throughput-unit (TU) based model, where capacity is explicitly controlled and shared across the namespace. Premium tier provides isolated compute and memory via processing units (PUs), which can deliver more consistent performance characteristics and higher throughput capacity than shared models, depending on workload. Event Hubs Scalability, Event Hub Tiers
Additionally, Azure Event Hubs offers a Dedicated tier—a fully isolated, single-tenant cluster for enterprise-scale workloads with higher throughput (at significantly higher cost). Event Hub Dedicated Tier

Throughput Characteristics (Standard Tier)
A single Throughput Unit (TU) provides:
Ingress: up to 1 MB/s
Egress: up to 2 MB/s
A Standard namespace can scale to a maximum of 40 TUs, giving:
Max ingress: 40 MB/s
Max egress: 80 MB/s
All event hubs, partitions, and consumers within the namespace share this TU capacity, making it a central ingestion buffer for streaming pipelines rather than a per-source scaling model. Event Hubs Scalability

Which SKU to choose: For ingress/egress up to 40/80 MB/s, a Standard SKU may be suitable. Higher volumes may warrant consideration of Premium, depending on workload requirements.
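To put these numbers in context, a rough sizing calculation can help you pick an Auto-Inflate ceiling before deployment. The sketch below uses Python purely for illustration; the event rate, average event size, and burst multiplier are hypothetical placeholders that should be replaced with figures observed from your own Security Lake sources.

```python
import math

# Hypothetical figures - replace with values observed in your environment.
events_per_second = 5_000        # average events/sec across all Security Lake sources
avg_event_size_bytes = 1_500     # average JSON event size after Parquet -> JSON conversion
peak_multiplier = 3              # headroom for bursts when Lambda flushes large files

ingress_mb_per_sec = events_per_second * avg_event_size_bytes / (1024 * 1024)
peak_ingress_mb_per_sec = ingress_mb_per_sec * peak_multiplier

# Standard tier: 1 TU covers roughly 1 MB/s or 1,000 events/s of ingress, whichever is reached first.
tus_for_bandwidth = math.ceil(peak_ingress_mb_per_sec)
tus_for_event_rate = math.ceil(events_per_second * peak_multiplier / 1000)
required_tus = max(tus_for_bandwidth, tus_for_event_rate)

print(f"Sustained ingress: {ingress_mb_per_sec:.1f} MB/s, peak: {peak_ingress_mb_per_sec:.1f} MB/s")
print(f"Estimated Standard-tier TUs (Auto-Inflate ceiling): {required_tus}")
if required_tus > 40:
    print("Estimate exceeds the 40 TU Standard limit - evaluate Premium or Dedicated tiers.")
```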
Azure Event Hubs Concepts:
Event Hub Namespace: A logical container that provides the endpoint, networking boundary, and shared throughput capacity (TUs/PUs) for all event hubs within it.
Event Hub: An individual event stream (topic) within the namespace where events are ingested, stored, and read in a partitioned, ordered manner for parallel processing.
Reference: Event Hubs features and terminology - Azure Event Hubs

Creating the Event Hub namespace, Event Hub entities, and considerations:
1. Create the Azure Event Hub namespace. In this example, we create a Standard namespace with a minimum of 1 TU and Auto-Inflate enabled, allowing scaling up to the maximum of 40 TUs. Set the maximum Throughput Units based on the volume of logs expected per second from Amazon Security Lake. Since the Event Hub will be used to ingest data into Azure Monitor (the Sentinel workspace), please check the Supported Regions. Ensure region alignment with Sentinel, and configure TUs based on the expected ingestion rate.
2. Once the namespace is created, enable Local Authentication, since the AWS Lambda code we are using (discussed in the next section) uses a Shared Access Signature to connect to Azure Event Hub. See Shared Access Signatures.
3. Create individual Event Hubs within the namespace created in step 1. One Event Hub is required per log type—for example, if Amazon Security Lake includes EKS, Route 53, and WAF logs, then three Event Hub entities should be created. Why this matters: easier DCR mapping, and it avoids schema conflicts.
4. Create a consumer group in each Event Hub created in the previous step. A consumer group is an independent view of an event stream that allows multiple applications to read the same events separately, each maintaining its own position (offset) in the stream. Event Hub Features. Avoid using the $Default consumer group.

With the Event Hub namespace, entities, and consumer groups in place, the receiving end of the pipeline is ready. The next step is to configure the AWS Lambda function that will translate Security Lake's Parquet files into the JSON events that Event Hub expects.

The AWS Lambda Function (SQS-driven Parquet → JSON)

In this architecture, AWS Lambda is the translation layer, not an ingestion source. Instead of being invoked directly by S3, Security Lake emits S3 object-creation notifications into Amazon SQS, and SQS becomes the Lambda trigger. This decoupling is intentional: SQS absorbs bursts of newly written Parquet objects, which helps the ingestion pipeline stay resilient under bursty or variable workloads.

Once triggered, the Lambda function processes each queued S3 notification end-to-end: it derives the log type from the S3 object key, downloads the Parquet file, converts each row into a discrete JSON event, and forwards the resulting events to Azure Event Hub in batches for downstream ingestion via DCR/DCE.

A practical consideration is event size management. Some sources—especially EKS audit telemetry—can carry large, nested fields that are not always useful for Sentinel analytics. For those log types, the Lambda function drops non-essential fields during transformation to keep each event within Azure ingestion constraints; oversized fields can exceed the 64 KB Azure Monitor field size limit and disrupt ingestion (fields larger than 64 KB are truncated in the Log Analytics workspace).
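The authoritative implementation lives in the GitHub repository referenced in the next section; the following is only a simplified sketch of the flow just described, assuming an SQS-triggered handler. The LOG_TYPE_MARKERS mapping, Event Hub entity names, and the environment-variable connection string are illustrative assumptions rather than the repository's actual code (the real function resolves connection strings from AWS Secrets Manager and adds field trimming, retry/backoff, and deduplication logic).

```python
import json
import os
import tempfile

import boto3
import pyarrow.parquet as pq
from azure.eventhub import EventData, EventHubProducerClient

s3 = boto3.client("s3")

# Hypothetical mapping from markers in the Security Lake object key to Event Hub entities.
LOG_TYPE_MARKERS = {
    "EKS_AUDIT": "eh-eks-audit",
    "ROUTE53": "eh-route53",
    "WAF": "eh-waf",
    "LAMBDA_EXECUTION": "eh-lambda-execution",
    "S3_DATA": "eh-s3-access",
}


def resolve_event_hub(object_key):
    """Derive the destination Event Hub entity from the Security Lake S3 key path."""
    for marker, hub_name in LOG_TYPE_MARKERS.items():
        if marker in object_key:
            return hub_name
    return None  # unknown log type: the caller logs a warning and skips the file


def lambda_handler(event, context):
    for sqs_record in event["Records"]:  # one SQS message per S3 notification
        s3_event = json.loads(sqs_record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]

            hub_name = resolve_event_hub(key)
            if hub_name is None:
                print(f"WARNING: unrecognised log type for {key}, skipping")
                continue

            # Stream the Parquet file to ephemeral storage instead of holding it in memory.
            local_path = os.path.join(tempfile.gettempdir(), os.path.basename(key))
            s3.download_file(bucket, key, local_path)

            producer = EventHubProducerClient.from_connection_string(
                os.environ["EVENTHUB_CONNECTION_STRING"],  # illustrative; the blog's function uses Secrets Manager
                eventhub_name=hub_name,
            )
            with producer:
                batch, pending = producer.create_batch(), 0
                for record_batch in pq.ParquetFile(local_path).iter_batches(batch_size=500):
                    for row in record_batch.to_pylist():
                        data = EventData(json.dumps(row, default=str))
                        try:
                            batch.add(data)
                        except ValueError:  # batch would exceed the size limit: flush and start a new one
                            producer.send_batch(batch)
                            batch, pending = producer.create_batch(), 0
                            batch.add(data)
                        pending += 1
                if pending:
                    producer.send_batch(batch)
            os.remove(local_path)
```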
Solution Architecture

The end-to-end flow operates as follows:
1. Amazon Security Lake writes Parquet files to a centralized S3 bucket as logs arrive from source services (CloudTrail, EKS, VPC Flow, WAF, Route 53, and others).
2. Amazon SQS receives S3 event notifications from Security Lake and queues them as Lambda triggers.
3. AWS Lambda picks up each SQS message, identifies the log source from the S3 object key, downloads the Parquet file, converts each row to JSON, and forwards the events to Azure Event Hub.
4. Azure Event Hub receives the JSON events and makes them available for ingestion into Microsoft Sentinel via a Data Collection Rule (DCR).
5. Microsoft Sentinel ingests the data into a custom log table, where it is available for detection rules, hunting queries, and dashboards.

The full Lambda function code is available in the GithubRepository-LambdaFunction. Refer to the readme for more details.

How the Lambda Function works

At a high level, the Lambda function performs four things in sequence for every file it processes:

Identify the log source: Security Lake organises files under a structured S3 key path that includes the log type (for example, CLOUD_TRAIL_MGMT, EKS_AUDIT, VPC_FLOW). The function reads this key path to determine which log source the file belongs to and routes it to the corresponding Azure Event Hub entity. Files that cannot be matched to a known log type are skipped and logged as warnings.

Download and decompress the Parquet file: The function streams the Parquet file from S3 directly to local Lambda storage rather than loading it entirely into memory. This keeps memory consumption bounded regardless of file size. Where Security Lake uses gzip compression, the function decompresses automatically before processing.

Convert Parquet rows to JSON: Each row in the Parquet file is read in batches using PyArrow and converted to a JSON object. Parquet columns can carry data types—such as NumPy scalars, nested arrays, and high-precision timestamps—that are not natively serialisable to JSON. The function handles these type conversions before serialisation, ensuring clean output that Sentinel's ingestion pipeline can accept without errors.

Forward events to Azure Event Hub: Converted JSON events are sent to the respective Azure Event Hub entity in batches, with each row becoming a discrete event. The function respects Event Hub's payload size ceiling, handles throttling responses gracefully using retry logic with exponential backoff, and marks each processed file in S3 metadata to prevent duplicate ingestion on retry.

Tuning the Lambda

Messages at cloud scale are unforgiving. Memory, timeout, batch size, and retry behavior—each decision determines whether the messenger keeps up or falls behind. Several configuration changes are required beyond the default Lambda settings to make this function production-ready.

Runtime and Dependencies — Lambda Layer
The function depends on three libraries not available in Lambda's default Python runtime: pyarrow (for Parquet reading), pandas (for type handling), and azure-eventhub (for Event Hub connectivity). These are packaged as an AWS Lambda Layer and attached to the function, keeping the deployment package clean and the layer reusable across function versions. Step-by-step instructions for packaging the dependencies, creating the S3 bucket, publishing the Layer, and deploying the function are available in the GithubRepository-LambdaLayer.
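To illustrate the type-conversion step described above, here is a minimal sketch (not the repository's exact implementation) of a JSON serialisation fallback that normalises the value types PyArrow and pandas commonly emit:

```python
import datetime
import json

import numpy as np
import pandas as pd


def json_safe(value):
    """Fallback serialiser for values that json.dumps cannot handle natively."""
    if isinstance(value, np.generic):      # NumPy scalars (int64, float64, bool_, ...)
        return value.item()
    if isinstance(value, np.ndarray):      # nested arrays become plain lists
        return value.tolist()
    if isinstance(value, (pd.Timestamp, datetime.datetime, datetime.date)):
        return value.isoformat()           # high-precision timestamps -> ISO 8601 strings
    if isinstance(value, bytes):
        return value.decode("utf-8", errors="replace")
    return str(value)                      # last resort: stringify anything else


# Example row as it might come back from a Parquet record batch.
row = {"event_time": pd.Timestamp("2025-01-01T00:00:00Z"), "count": np.int64(3)}
print(json.dumps(row, default=json_safe))
```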
Secrets Management — AWS Secrets Manager
The Azure Event Hub connection strings—one per log type—are sensitive credentials that must not be stored in environment variables in plaintext. The function retrieves them at cold start from AWS Secrets Manager, using a single secret ARN passed as a Lambda environment variable (SECRET_ARN). The secret is stored as a JSON object with each log type as a key. The full secret structure and configuration steps are available in the GithubRepository-SecretsManager.

IAM Permissions
The Lambda execution IAM role requires scoped permissions across S3, Secrets Manager, SQS, and CloudWatch Logs. Full IAM policy JSON files following the principle of least privilege are available in the GithubRepository-IAMPolicies. Deployment instructions for the IAM policies are available in the IAM Policies README.

Memory Configuration
A starting configuration of 1,792 MB is recommended—this is the threshold at which Lambda may allocate a full vCPU. For environments with high log volumes or large Parquet files, increasing to 2,048 MB provides headroom for concurrent batch processing. Tune further based on observed execution durations in CloudWatch Metrics.

Timeout Configuration
The default Lambda timeout of 3 seconds is insufficient for Parquet processing at scale. The function must download a file from S3, process it in batches, and flush all events to Event Hub—a sequence that can take tens of seconds for larger Security Lake files. A timeout of 5 minutes (300 seconds) is recommended as a starting point, with adjustment based on observed execution durations in CloudWatch Metrics.

SQS Trigger Configuration
The SQS queue connected to Security Lake S3 event notifications is configured as the trigger for the Lambda function. This enables automatic invocation of the Lambda function whenever Security Lake writes a new Parquet file to S3.

Validate Event Hub Ingestion
At this point, events should be streaming into each Event Hub. To validate, open a specific Event Hub from the Azure Portal and navigate to the Overview page. You should see active metrics across Requests, Messages, and Throughput, confirming that the Lambda function is successfully forwarding Security Lake events.

In the upcoming sections, we will follow the documentation to ingest events from Event Hub into Azure Monitor. Before we begin, please collect the required information as stated here so that resource IDs and other details are ready for configuration of the DCR and Data Collection Endpoint. Also create a user-assigned managed identity (UAMI), since the DCRs in this setup use a UAMI that must be granted the required permissions on the Event Hub Namespace to receive events. To grant the Azure Event Hubs Data Receiver role to the user-assigned managed identity, follow the instructions here.

Creating Tables in Log Analytics Workspace
As mentioned in the overview, this blog covers the following data sources:
Amazon EKS audit and runtime events
Route 53 DNS query logs
AWS WAF access logs
AWS Lambda execution activity
Amazon S3 access events

The PowerShell scripts to create these tables are provided here: GithubRepo CreateLAWTables. For assistance with executing these scripts, refer to: Readme.md.

Note: In the Microsoft documentation for ingesting logs from Event Hub into Azure Monitor, only three fields are created in the tables (TimeGenerated, RawData, Properties), and the entire JSON event from the Event Hub is sent to the RawData field. The scripts we are running, however, create additional fields, because we will be parsing the RawData field and extracting the information from the complete event into individual columns. This makes it easier to search and analyze logs later, and the explicit schema improves detection rules and KQL efficiency.
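As a hedged illustration of what those table-creation scripts accomplish, the sketch below provisions one custom table through the Azure Monitor Tables management API from Python instead of PowerShell. The table name and column set are illustrative assumptions only—use the schemas defined in the linked repository—and the api-version shown may need to be adjusted to a currently supported one.

```python
import json

import requests
from azure.identity import DefaultAzureCredential

# Illustrative identifiers - replace with your own values.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
WORKSPACE_NAME = "<log-analytics-workspace>"
TABLE_NAME = "AWSRoute53Resolver_CL"  # hypothetical custom table name

# Hypothetical column set; align it with the schema used by the linked PowerShell scripts.
columns = [
    {"name": "TimeGenerated", "type": "datetime"},
    {"name": "QueryName", "type": "string"},
    {"name": "QueryType", "type": "string"},
    {"name": "SourceAddress", "type": "string"},
    {"name": "RawData", "type": "string"},
]

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.OperationalInsights"
    f"/workspaces/{WORKSPACE_NAME}/tables/{TABLE_NAME}?api-version=2022-10-01"
)
body = {"properties": {"schema": {"name": TABLE_NAME, "columns": columns}}}

resp = requests.put(url, headers={"Authorization": f"Bearer {token}"}, json=body)
print(resp.status_code)
print(json.dumps(resp.json(), indent=2)[:500])
```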
Create a Data Collection Endpoint
To collect data with a data collection rule, you need a data collection endpoint: Create a data collection endpoint.
Note: Create the data collection endpoint in the same region as your Log Analytics workspace.
From the data collection endpoint's Overview screen, select JSON View and copy the Resource ID. You will use this information in the next step while creating the Data Collection Rules.

Create Data Collection Rules
Since we have five sources in scope for this example, we need to create five DCRs using the collected information as stated here, the user-assigned managed identity resource ID, and the Data Collection Endpoint resource ID created in the previous step.

DCR Deployment via ARM Templates
The Data Collection Rule ARM templates can be found in the GithubRepo-DataCollectionRules. The instructions to create them via ARM templates can be found in the readme.md (Microsoft article reference with manual steps here).

DCR Mapping (AWS Sources → DCR Templates)
ARM templates in the GitHub repo mapped to AWS sources:
Data Source | DCR Template Name
Amazon EKS Logs | DCR-awseks.json
AWS S3 Access Logs | DCR-awsS3access.json
AWS WAF Logs | DCR-awswaf.json
AWS Lambda Execution | DCR-lambdaexecution.json
Amazon Route 53 Logs | DCR-Route53.json

Final Step: Associating the Event Hub with the Data Collection Rule
At this stage, the core building blocks of the ingestion pipeline are already in place. We have successfully configured streaming to Event Hubs, created dedicated Event Hubs for each Amazon Security Lake source, provisioned destination tables in the Microsoft Sentinel workspace, and defined Data Collection Rules (DCRs)—leveraging the Azure Monitor pipeline and a user-assigned managed identity to securely read incoming events.

The final step is to stitch this architecture together by establishing associations between the Event Hubs and their corresponding Data Collection Rules. This association is the connection that links an Event Hub to Sentinel and enables Azure Monitor to actively pull data from the Event Hubs and route it into the defined destination tables. Without this linkage, the pipeline remains incomplete—data may continue to flow into Event Hubs, but it will not be picked up or ingested into Sentinel.

Each Event Hub must be explicitly mapped to its respective Data Collection Rule, ensuring:
The correct stream is processed by the intended transformation logic
Events are routed to the appropriate custom tables
The ingestion pipeline routes data to the intended destination in a deterministic, consistent, and predictable manner

Once these associations are configured, the end-to-end pipeline operates as designed (subject to configuration accuracy and ongoing operational monitoring), enabling automated ingestion of Amazon Security Lake data into Microsoft Sentinel with minimal manual intervention.

Steps to be followed: Associate the data collection rule with the event hub. To complete the setup, we now associate each Event Hub with its corresponding Data Collection Rule (DCR); a hedged programmatic sketch of this step follows below.
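The ARM template from the linked documentation is the supported path; as a hedged alternative for scripted environments, a Python sketch along these lines can create the same association through the ARM REST API. The resource IDs and association name are placeholders, and the api-version and request body should be verified against the linked template before use.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder resource IDs - use the values collected in the steps below.
EVENT_HUB_RESOURCE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub"
    "/namespaces/<namespace>/eventhubs/<eventhub-name>"
)
DCR_RESOURCE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Insights"
    "/dataCollectionRules/<dcr-name>"
)
ASSOCIATION_NAME = "route53-dcr-association"  # one unique association per DCR/Event Hub pair

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# The association is created as a child resource on the Event Hub scope.
url = (
    f"https://management.azure.com{EVENT_HUB_RESOURCE_ID}"
    f"/providers/Microsoft.Insights/dataCollectionRuleAssociations/{ASSOCIATION_NAME}"
    "?api-version=2022-06-01"
)
body = {"properties": {"dataCollectionRuleId": DCR_RESOURCE_ID}}

resp = requests.put(url, headers={"Authorization": f"Bearer {token}"}, json=body)
resp.raise_for_status()
print(f"Association created: {resp.json()['id']}")
```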
This association creates the link that allows Azure Monitor to read data from the Event Hub and send it to Microsoft Sentinel.

Important: You must create one association per Data Collection Rule. Example: if an Event Hub is receiving AWS Route 53 logs, it must be associated with the DCR created for AWS Route 53. Copy the template from the link above and create the Data Collection Rule associations.

What You Need

Event Hub Resource ID. Follow these steps to get the Event Hub Resource ID:
Open the Event Hub Namespace in the Azure Portal
Go to Entities → Event Hubs
Select the Event Hub that is receiving the logs (e.g., Route 53 logs)
In the Overview page, click on JSON View
Copy the Resource ID

Resource Group, Region, and Association Name
The Resource Group must be the same as the one where the Event Hub is deployed
Provide the Azure region where the resources are deployed
Define a unique name for the association

Data Collection Rule (DCR) Resource ID
Open the corresponding Data Collection Rule
Go to the Overview page
Click on JSON View
Copy the Resource ID

Key Note: Make sure that:
Each Event Hub is mapped to the correct DCR
The data source and DCR template are aligned (e.g., Route 53 → Route 53 DCR)

Validate End-to-End Ingestion
Once this configuration is complete, validate that logs are successfully flowing through the pipeline by checking the destination tables, field mappings, and parsing accuracy.

If you are building on a lake-first security architecture and running into ingestion challenges with Sentinel, feel free to share your experience in the comments or raise an issue in the GitHub repository.

Identity Attack Graph in Microsoft Sentinel
Identity is now one of the most important attack surfaces in cloud security. In many real-world incidents, attackers do not rely only on malware or network movement. Instead, they abuse identities, permissions, role assignments, group memberships, service principals, and misconfigured access paths to move from an initial compromise to high-value resources. This is why the new Identity Attack Graph in Microsoft Sentinel is an important capability. It helps security teams visualize how identities are connected to Azure resources and how an attacker could potentially move from one identity to another resource through permissions and relationships. What is the Identity Attack Graph? The Identity Attack Graph in Microsoft Sentinel provides a visual way to understand how identities, permissions, groups, and Azure resources are connected. Instead of manually checking multiple portals, logs, and role assignments, the graph helps analysts understand relationships such as: Which identities have access to specific Azure resources Which users or service principals are over-privileged Which groups provide indirect access to sensitive resources Which identities may have a path to critical assets What the potential blast radius of a compromised identity could be How attackers could move laterally through identity and permission relationships This is especially useful because identity risk is often not obvious when looking at a single user, group, or role assignment in isolation. The real risk usually appears when these relationships are connected together. A user may not directly have access to a sensitive resource, but the user may be a member of a group that has access to another resource, which then has permissions that create a path toward a high-value asset. The Identity Attack Graph helps expose these hidden relationships. Why this matters In many Azure environments, permissions grow over time. Users change roles, groups are reused, emergency access is granted, service principals are created, and temporary permissions are not always removed. As a result, organizations often end up with: Too many privileged identities Unused or stale permissions Service principals with excessive access Guest users with unnecessary permissions Groups that provide indirect access to sensitive resources Subscription-level roles that are broader than required Lack of visibility into who can reach critical assets Traditional investigation usually requires analysts to move between several places, including Microsoft Entra ID, Azure RBAC, Azure Activity logs, Sentinel queries, Defender XDR, and Azure Resource Graph. The Identity Attack Graph reduces this complexity by presenting identity relationships as a connected graph. This makes it easier to answer practical security questions such as: “What can this identity access?” “What happens if this user is compromised?” “Which identities have a path to critical resources?” “Which access path should we remediate first?” “Which permissions create the highest risk?” “Why does this identity have access to this asset?” Key use cases The feature can support several important identity security and cloud security scenarios. 1. Attack path discovery Security teams can use the graph to identify how an attacker could move from a compromised identity to a sensitive Azure resource. This is useful during both proactive assessments and active incident response. 
For example, if a user account is suspected to be compromised, the graph can help identify which resources may be reachable through that identity's direct or indirect permissions.

2. Blast-radius analysis
When an identity is compromised, one of the first questions is: what could the attacker access with this identity? The Identity Attack Graph can help analysts understand the potential impact of a compromised user, group, service principal, or managed identity. This can help with containment, prioritization, and communication with stakeholders.

3. Over-privileged identity detection
The graph can help identify identities that have more permissions than they need. Examples include:
Users with Owner or Contributor access at subscription level
Service principals with broad permissions
Guest users with privileged access
Groups that grant access to sensitive resources
Identities that have access to multiple critical assets
This is useful for enforcing least privilege and reducing the identity attack surface.

4. Privileged access review
IAM and cloud security teams can use the graph to support access reviews. Instead of only reviewing a list of role assignments, teams can understand the real impact of those permissions. This helps answer:
Is this role assignment still required?
Does this group create unnecessary risk?
Does this identity have access to critical resources?
Is this access direct or inherited?
Is this path expected or suspicious?

5. Incident response and threat hunting
For SOC teams, the graph can support investigations involving:
Suspicious sign-ins
Compromised users
Privilege escalation
Suspicious role assignments
Lateral movement
Service principal abuse
Unusual access to sensitive resources
The graph does not replace logs or hunting queries, but it gives analysts a faster way to understand relationships and prioritize what to investigate next.

Important prerequisites and setup notes
During my evaluation, there were a few important setup requirements that should be clearly highlighted.

Microsoft Sentinel permissions
The environment must already be onboarded to Microsoft Sentinel, and the user testing or configuring the feature must have the appropriate Microsoft Sentinel permissions. The documented role requirement includes Microsoft Sentinel Contributor. However, in my experience, this is not always enough for the full onboarding and validation experience.

Subscription-level Owner permission
One important prerequisite that should be clearly mentioned is that Owner permissions at the Azure subscription level may be required. This is especially important during onboarding and activation, because the graph depends on access to Azure resource and permission relationships. If the user does not have sufficient subscription-level permissions, some setup steps or visibility into resources and relationships may not work as expected.

Recommended permission note: In addition to Microsoft Sentinel permissions, ensure that the user configuring the preview has Owner permissions at the subscription level for the subscriptions that should be represented in the graph. This should be made very clear in the onboarding documentation to avoid confusion during deployment.

Required data connector: Azure Resource Graph
Another very important setup step is the Azure Resource Graph data connector. The Azure Resource Graph connector must be:
Installed manually
Activated manually
Connected to the relevant Sentinel workspace
This is a key point.
The connector is not automatically enabled just because the Identity Attack Graph feature is available. Without this connector, Sentinel may not have the required Azure resource relationship data needed to build a useful graph. Why Azure Resource Graph is important Azure Resource Graph provides visibility across Azure resources, subscriptions, and relationships. For an identity attack graph, this data is essential. The graph needs to understand not only identities, but also the resources those identities can reach. This may include: Subscriptions Resource groups Storage accounts Key Vaults Virtual machines Managed identities Role assignments Resource relationships Resource hierarchy Critical assets Without Azure Resource Graph data, the attack graph may not provide the full picture of how identities connect to Azure resources. For this reason, I believe the onboarding instructions should explicitly state: The Azure Resource Graph data connector must be manually installed and activated before using the Identity Attack Graph. Recommended onboarding checklist Before using the Identity Attack Graph, I would recommend validating the following: Requirement Recommendation Microsoft Sentinel workspace Ensure the workspace is active and accessible Sentinel role Microsoft Sentinel Contributor or equivalent access Subscription permissions Owner permissions at subscription level Azure Resource Graph connector Manually install and activate the connector Azure RBAC visibility Ensure access to relevant role assignments Microsoft Entra ID visibility Ensure identity and group data is available Resource visibility Validate that relevant subscriptions and resources are visible Data freshness Allow enough time for data collection and graph population This checklist can help avoid issues where the feature appears available but does not show the expected relationships. How the Identity Attack Graph improves investigation Before using a graph-based approach, an analyst often needs to manually collect and correlate data from multiple sources. A typical investigation may include: Checking the user in Microsoft Entra ID Reviewing group memberships Reviewing Azure RBAC assignments Checking subscription-level access Looking at resource-level permissions Reviewing PIM activations Searching Sentinel logs Running KQL queries Checking Azure Activity logs Validating access with cloud or IAM teams This process can be time-consuming. The Identity Attack Graph helps reduce this effort by showing relationships visually. This allows the analyst to understand the possible path faster and decide where to focus. For example, instead of manually asking: “Does this user have access to this resource through any group, role, or inherited permission?” The graph can help show the relationship directly. This is valuable because many risky permissions are indirect. The user may not have direct access, but may inherit access through a group, role assignment, nested relationship, or service principal path. Where validation is still needed Although the graph provides strong visibility, I would still validate findings before taking remediation action. This is especially important because removing access can affect business operations or production systems. 
I would still validate with:
Microsoft Sentinel KQL queries
Microsoft Entra sign-in logs
Microsoft Entra audit logs
Azure Activity logs
Azure RBAC role assignments
PIM activation history
Defender XDR signals
Defender for Cloud recommendations
Azure Resource Graph queries
IAM team input
Cloud platform team input
Application owner confirmation
The graph is very useful for discovery and prioritization, but final remediation decisions should still be validated.

GQL and graph-based investigation
One of the interesting aspects of this feature is the use of graph-based thinking. Security teams are already familiar with query languages such as KQL for log analytics. However, graph investigation is different. KQL is excellent for searching and analyzing events over time, such as sign-ins, alerts, audit logs, and activity logs. Graph Query Language, or GQL, is designed for querying connected data. Instead of only asking what happened at a specific time, graph queries help answer how entities are connected. In identity security, this is very powerful because the risk often exists in the relationship between objects.

Graph entities include:
Users
Groups
Service principals
Managed identities
Roles
Subscriptions
Resource groups
Azure resources
Permissions
Sessions
Attack paths

Graph relationships include:
User is member of group
Group has role assignment
Identity has access to resource
Service principal owns application
Managed identity can access Key Vault
User can escalate privilege
Identity can reach critical asset

This allows analysts to ask more relationship-focused questions, such as:
Which identities can reach this resource?
What is the shortest path from this user to a critical asset?
Which groups create privileged access?
Which service principals have paths to sensitive resources?
Which identities have indirect access through nested relationships?
Which attack paths include subscription Owner or Contributor permissions?

KQL vs GQL: why both are useful
KQL and GQL serve different but complementary purposes.
Area | KQL | GQL / Graph Querying
Main purpose | Analyze logs and events | Analyze relationships and paths
Best for | Time-based investigation | Connected identity/resource investigation
Example question | "Did this user sign in from a risky location?" | "What resources can this user reach?"
Data model | Tables | Nodes and edges
Common use | Detection, hunting, analytics | Attack path discovery, relationship mapping
Strength | Event correlation | Path discovery

In practice, security teams need both. KQL can identify a suspicious sign-in. The Identity Attack Graph can show what the compromised identity could access. KQL can then be used again to validate whether the attacker interacted with those resources. This creates a strong workflow between event-based detection and relationship-based investigation.

Graph investigation scenarios
The following are conceptual examples of the types of graph questions that would be useful in identity attack path analysis.

Find paths from a user to critical resources
A useful graph query would help answer: show me all paths from this user to critical Azure resources. This could help determine whether a compromised identity has a direct or indirect route to sensitive assets.

Find identities with paths to Key Vaults
Key Vaults often contain secrets, certificates, and keys. A graph query could help identify: which users, groups, service principals, or managed identities have a path to Key Vault resources? This would be useful for prioritizing access review and remediation.
Find subscription-level privileged identities
Subscription-level roles are high-impact because they can provide broad access. A graph query could help find: which identities have Owner or Contributor access at subscription level? This is especially important because subscription-level permissions can create wide attack paths.

Find indirect access through groups
Many access paths are created through group membership. A graph query could help answer: which users have access to this resource through group membership? This can help IAM teams clean up excessive or unnecessary group-based access.

Find service principals with broad access
Service principals are often used for automation and applications, but they can become high-risk if over-privileged. A useful query would identify: which service principals have broad access to subscriptions or critical resources? This is important because service principal compromise can lead to significant impact.

How GQL can improve analyst workflows
Adding strong GQL support to the graph explorer would make the feature more powerful for advanced users. You could use graph queries to:
Search for specific paths
Filter by identity type
Filter by role
Filter by resource type
Find shortest paths
Find high-risk paths
Exclude known approved paths
Focus on critical assets
Query only privileged relationships
Identify unexpected permission chains
This would help both SOC analysts and cloud security engineers move from visual exploration to repeatable analysis. A SOC analyst may want a quick visual graph during an incident, while a cloud security engineer may prefer repeatable graph queries for ongoing analysis.

Extending Sentinel Data Integration: Azure Blob Storage Support for CCF Connectors
As organizations scale their security operations, the ability to ingest, process, and analyze high volumes of data reliably becomes increasingly critical. Microsoft Sentinel continues to expand its ecosystem through the Codeless Connector Framework (CCF), enabling ISVs to build and deliver integrations with Sentinel faster while simplifying deployment for customers. Today, CCF extends even further with support for Azure Blob Storage, introducing a new pattern for how data can be delivered into Sentinel. Expanding Connector Patterns with Azure Blob Storage CCF has traditionally enabled connectors that integrate directly with partner APIs and data sources. With this latest enhancement, ISVs can now build connectors that read data from Azure Blob Storage—unlocking new flexibility in how security data is collected and delivered. In this model, an ISV writes data to an Azure Blob Storage account. The Sentinel connector then reads from that storage layer, using Azure-native components such as Event Grid and storage queues to process events and forward them through data collection rules (DCR) into Log Analytics workspace. This approach introduces a durable data layer between the data source and Sentinel, enabling more resilient and scalable ingestion scenarios. Why a durable data layer matters By leveraging Azure Blob Storage as part of the ingestion pipeline, CCF connectors gain important operational advantages. This architecture allows data to be buffered and processed asynchronously, helping manage fluctuations in data volume and ensuring consistent delivery. Key benefits include: Resilience: Buffers spikes and handles backpressure to maintain steady ingestion Improved Compatibility: Supports widely adopted Azure Blob-based log streaming, enabling seamless integration with partners that already use Azure for audit data delivery Data protection: Reduces risk of data loss during outages or throttling Scalability: Supports high-volume ingestion scenarios across tenants Flexibility: Enables architectures that can support multiple SIEMs or data consumers Together, these capabilities make CCF Azure Blob Storage based connectors a strong fit for partners managing large, variable, or distributed data pipelines. Partner adoption Early partners are already taking advantage of this capability to modernize their integrations and support evolving customer needs. Cloudflare Cloudflare integrates with Microsoft Sentinel using the Codeless Connector Framework (CCF) to bring Cloudflare log data into centralized security operations workflows. The connector ingests Cloudflare logs—delivered via Logpush to Azure Blob Storage—into Sentinel for analysis, enabling security teams to correlate web, network, and application activity with other security signals. By combining Cloudflare’s global threat visibility with Sentinel analytics and automation, this integration supports more effective threat detection, investigation, and incident response across Cloudflare‑protected environments. Netskope Web Transaction Events Netskope integrates with Microsoft Sentinel to provide detailed visibility into web and cloud activity across users, applications, and SaaS services. The connector ingests Netskope web transaction logs into Sentinel—leveraging Azure Blob Storage as a staging layer for log streaming and ingestion—to enable near real‑time analysis of user behavior, policy violations, and potential threats. 
By combining Netskope's inline web inspection with Sentinel's analytics and correlation capabilities, this integration helps security teams detect risky activity, investigate incidents, and strengthen monitoring across modern cloud environments.

These integrations demonstrate how Azure Blob Storage support can simplify ingestion architectures while improving reliability and scalability for customers.

Get started
Developers can begin building CCF Azure Blob Storage-enabled connectors today using the guidance on Microsoft Learn. This documentation provides step-by-step instructions for configuring storage, processing events, and connecting data to Sentinel.

In the unlikely event that you encounter any issues in building or updating your connector, App Assure is here to help. We are an engineering-backed team committed to supporting customers and software development companies throughout their journey with Sentinel to streamline integration and accelerate time to market. Reach out to us via our intake form for assistance.

What's new in Microsoft Sentinel: RSAC 2026
Security is entering a new era, one defined by explosive data growth, increasingly sophisticated threats, and the rise of AI-enabled operations. To keep pace, security teams need an AI-powered approach to collect, reason over, and act on security data at scale. At RSA Conference 2026 (RSAC), we’re unveiling the next wave of Sentinel innovations designed to help organizations move faster, see deeper, and defend smarter with AI-ready tools. These updates include AI-driven playbooks that accelerate SOC automation, Granular Delegated Admin Privileges (GDAP) and granular role-based access controls (RBAC) that let you scale your SOC, accelerated data onboarding through new connectors, and data federation that enables analysis in place without duplication. Together, they give teams greater clarity, control, and speed. Come see us at RSAC to view these innovations in action. Hear from Sentinel leaders during our exclusive Microsoft Pre-Day, then visit Microsoft booth #5744 for demos, theater sessions, and conversations with Sentinel experts. Read on to explore what’s new. See you at RSAC! Sentinel feature innovations: Sentinel SIEM Sentinel data lake Sentinel graph Sentinel MCP Threat Intelligence Microsoft Security Store Sentinel promotions Sentinel SIEM Playbook generator [Now in public preview] The Sentinel playbook generator delivers a new era of automation capabilities. You can vibe code complex automations, integrate with different tools to ensure timely and compliant workflows throughout your SOC and feel confident in the results with built in testing and documentation. Customers and partners are already seeing benefit from this innovation. “The playbook generator gives security engineers the flexibility and speed of AI-assisted coding while delivering the deterministic outcomes that enterprise security operations require. It's the best of both worlds, and it lives natively in Defender where the engineers already work.” – Jaime Guimera Coll | Security and AI Architect | BlueVoyant Learn more about playbook generator. SIEM migration experience [General availability now] The Sentinel SIEM migration experience helps you plan and execute SIEM migrations through a guided, in-product workflow. You can upload Splunk or QRadar exports to generate recommendations for best‑fit Sentinel analytics rules and required data connectors, then assess migration scope, validate detection coverage, and migrate from Splunk or QRadar to Sentinel in phases while tracking progress. “The tool helps turn a Splunk to Sentinel migration into a practical decision process. It gives clear visibility into which detections are relevant, how they align to real security use cases, and where it makes sense to enable or prioritize coverage—especially with cost and data sources in mind.” – Deniz Mutlu | Director | Swiss Post Cybersecurity Ltd Learn more about SIEM migration experience. GDAP, unified RBAC, and row-level RBAC for Sentinel [Public preview, April 1] As Sentinel environments grow for enterprises, MSSPs, hyperscalers, and partners operating across shared or multiple environments, the challenge becomes managing access control efficiently and consistently at scale. Sentinel’s expanded permissions and access capabilities are designed to meet these needs. Granular Delegated Admin Privileges (GDAP) lets you streamline management across multiple governed tenants using your primary account, based on existing GDAP relationships. 
Unified RBAC allows you to opt in to managing permissions for Sentinel workspaces through a single pane of glass, configuring and enforcing access across Sentinel experiences in the analytics tier and data lake in the Defender portal. This simplifies administration and improves operational efficiency by reducing the number of permission models you need to manage.

Row-level RBAC scoping within tables enables precise, scoped access to data in the Sentinel data lake. Multiple SOC teams can operate independently within a shared Sentinel environment, querying only the data they are authorized to see, without separating workspaces or introducing complex data flow changes. Consistent, reusable scope definitions ensure permissions are applied uniformly across tables and experiences, while maintaining strong security boundaries.

To learn more, read our technical deep dives on RBAC and GDAP.

Sentinel data lake

Sentinel data federation [Public preview, April 1]
Sentinel data federation lets you analyze security data in place without copying or duplicating your data. Powered by Microsoft Fabric, you can now federate data from Fabric, Azure Data Lake Storage (ADLS), and Azure Databricks into Sentinel data lake. Federated data appears alongside native Sentinel data, so you can use familiar tools like KQL hunting, notebooks, and custom graphs to correlate signals and investigate across your entire digital estate, all while preserving governance and compliance. You can start analyzing data in place and progressively ingest data into Sentinel for deeper security insights, advanced automation, and AI-powered defense at scale. You are billed only when you run analytics on federated data using existing Sentinel data lake query and advanced insights meters.

(Image: federated data shown alongside native tables for unified investigation and hunting)

Sentinel cost estimation tool [Public Preview, April 9]
The new Sentinel cost estimation tool offers all Microsoft customers and partners a guided, meter-level cost estimation experience that makes pricing transparent and predictable. A built-in three-year cost projection lets you model data growth and ramp-up over time, anticipate spend, and avoid surprises. Get transparent estimates of spend as you scale your security operations. All other customers can continue to use the Azure calculator for Sentinel pricing estimates. See the Sentinel pricing page for more information.

Sentinel data connectors

A365 Observability connector [Public preview, April 15]
Bring AI agent telemetry into the Sentinel data lake to investigate agent behavior, tool usage, prompts, reasoning, and execution using hunting, graph, and MCP workflows.

GitHub audit log connector using API polling [General availability, March 6]
Ingest GitHub enterprise audit logs into Sentinel to monitor user and administrator activity, detect risky changes, and investigate security events across your development environment.

Google Kubernetes Engine (GKE) connector [General availability, March 6]
Collect Google Kubernetes Engine (GKE) audit and workload logs in Sentinel to monitor cluster activity, analyze workload behavior, and detect security threats across Kubernetes environments.

Microsoft Entra and Azure Resource Graph (ARG) connector enhancements [Public preview, April 15]
Enable new Entra assets (EntraDevices, EntraOrgContacts) and ARG assets (ARGRoleDefinitions) in existing asset connectors, expanding inventory coverage and powering richer, built-in graph experiences for greater visibility.
With over 350 Sentinel data connectors, customers achieve broad visibility into complex digital environments and can expand their security operations effectively. “Microsoft Sentinel data lake forms the core of our agentic SOC. By unifying large volumes of Microsoft and third-party data, enabling graph-based analysis, and supporting MCP-driven workflows, it allows us to investigate faster, at lower cost, and with greater confidence.” – Øyvind Bergerud | Head of Security Operations | Storebrand Learn more about Sentinel data connectors. Sentinel connector builder agent using Sentinel Visual Studio Code extension [Public preview, March 31] Build Sentinel data connectors in minutes instead of weeks using the AI‑assisted Connector Builder agent in Visual Studio Code. This low‑code experience guides developers and ISVs end-to-end, automatically generating schemas, deployment assets, connector UI, secure secret handling, and polling logic. Built‑in validation surfaces issues early, so you can validate event logs before deployment and ingestion. Example prompt in GitHub Copilot Chat: @sentinel-connector-builder Create a new connector for OpenAI audit logs using https://api.openai.com/v1/organization/audit_logs Get started with custom connectors and learn more in our blog. Data filtering and splitting [Public preview, March 30] As security teams ingest more data, the challenge shifts from scale to relevance. With filtering and splitting now built into the Defender portal, teams can shape data before it lands in Sentinel, without switching tools or managing custom JSON files. Define simple KQL‑based transformations directly in the UI to filter low‑value events and intelligently route data, making ingestion optimization faster, more intuitive, and easier to manage at scale. Filtering at ingest time allows you to remove low-value or benign events to reduce noise, cut unnecessary processing, and ensure that high-signal data drives detections and investigations. Splitting enables intelligent routing of data between the analytics tier and the data lake tier based on relevance and usage. Together, these two capabilities help you balance cost and performance while scaling data ingestion sustainably as your digital estate grows. Create workbook reports directly from the data lake [Public preview, April 1] Sentinel workbooks can now directly run on the data lake using KQL, enabling you to visualize and monitor security data straight from the data lake. By selecting the data lake as the workbook data source, you can now create trend analysis and executive reporting. Sentinel graph Custom graphs [Public preview, April 1] Custom graphs let you build tailored security graphs tuned to your unique security scenarios using data from Sentinel data lake as well as non-Microsoft sources. With custom graph, powered by Fabric, you can build, query, and visualize connected data, uncover hidden patterns and attack paths, and help surface risks that are hard to detect when data is analyzed in isolation. These graphs provide the knowledge context that enables AI-powered agent experiences to work more effectively, speeding investigations, revealing blast radius, and helping you move from noisy, disconnected alerts to confident decisions at scale. In the words of our preview customers: “We ingested our Databricks management-plane telemetry into the Sentinel data lake and built a custom security graph. 
Without writing a single detection rule, the graph surfaced unusual patterns of activity and overprivileged access that we escalated for investigation. We didn't know what we were looking for; the graph surfaced the risk for us by revealing anomalous activity patterns and unusual access combinations driven by relationships, not alerts." – SVP, Security Solutions | Financial Services organization

Custom graph API usage for creating and querying graphs will be billed starting April 1, 2026, according to the Sentinel graph meter.

Creating custom graphs
Using the Sentinel VS Code extension, you can generate graphs to validate hunting hypotheses—such as understanding the attack paths and blast radius of a phishing campaign, reconstructing multi-step attack chains, and identifying structurally unusual or high-risk behavior—making this analysis accessible to your team and AI agents. Once persisted via a scheduled job, you can access these custom graphs from the ready-to-use section in the graph experience in the Defender portal.

Graphs experience in the Microsoft Defender portal
After creating your custom graphs, you can access them in the graphs section of the Defender portal under Sentinel. From there, you'll be able to perform interactive graph-based investigations, such as using a graph built for phishing analysis to help you quickly evaluate the impact of a recent incident, profile the attacker, and trace its paths across Microsoft telemetry and third-party data. The new graph experience lets you run Graph Query Language (GQL) queries, view the graph schema, visualize the graph, view graph results in tabular format, and interactively traverse the graph to the next hop with a simple click.

Sentinel MCP

Sentinel MCP entity analyzer [General availability, April 1]
Entity analyzer provides reasoned, out-of-the-box risk assessments that help you quickly understand whether a URL or user identity represents potential malicious activity. The capability analyzes data across modalities including threat intelligence, prevalence, and organizational context to generate clear, explainable verdicts you can trust. Entity analyzer integrates easily with your agents through Sentinel MCP server connections to first-party and third-party AI runtime platforms, or with your SOAR workflows through Logic Apps. The entity analyzer is also a trusted foundation for the Defender Triage Agent and delivers more accurate alert classifications and deeper investigative reasoning. This removes the need to manually engineer evaluation logic and creates trust for analysts and AI agents to act with higher accuracy and confidence. Learn more about entity analyzer, and read more in our blog here. Entity analyzer will be billed starting April 1, 2026, based on Security Compute Units (SCU) consumption. Learn more about MCP billing.

Sentinel MCP graph tool collection [Public preview, May 20]
The graph tool collection helps you visualize and explore relationships between identity and device assets, threats, and activity signals ingested by data connectors and alerted on by analytics rules. The tool provides a clear graph view that highlights dependencies and configuration gaps, which makes it easier to understand how content interacts across your environment. This helps security teams assess coverage, optimize content deployment, and identify areas that may need tuning or additional data sources, all from a single, interactive workspace. Executing graph queries via the MCP tools will trigger the graph meter.
Claude MCP connector [Public preview, April 1] Anthropic Claude can connect to Sentinel through a custom MCP connector, giving you AI-assisted analysis across your Sentinel environment. Microsoft provides step-by-step guidance for configuring a custom connector in Claude that securely connects to a Sentinel MCP server. With this connection you can summarize incidents, investigate alerts, and reason over security signals while keeping data inside Microsoft's security boundary. Access to large language models (LLMs) is managed through Microsoft authentication and role-based controls, supporting faster triage and investigation workflows while maintaining compliance and visibility. Threat Intelligence CVEs of interest in the Threat Intelligence Briefing Agent [Public preview in April] The Threat Intelligence Briefing Agent delivers curated intelligence based on your organization’s configuration, preferences, and unique industry and geographic needs. CVEs of interest which highlights vulnerabilities actively discussed across the security landscape and assesses their potential impact on your environment, delivering more timely threat intelligence insights. The agent automatically incorporates internet exposure data powered by the Sentinel platform to surface threats targeting technologies exposed in your organization. Together, these enhancements help you focus faster on the threats that matter most, without manual investigation. Microsoft Security Store Security Store embedded in Entra [General availability, March 23] As identity environments grow more complex, teams need to move faster and extend Entra with trusted third‑party capabilities that address operational, compliance, and risk challenges. The Security Store embedded directly into Entra lets you discover and adopt Entra‑ready agents and solutions in your workflow. You can extend Entra with identity‑focused agents that surface privileged access risk, identity posture gaps, network access insights, and overall identity health, turning identity data into clear recommendations and reports teams can use immediately. You can also enhance Entra with Verified ID and External ID integrations that strengthen identity verification, streamline account recovery, and reduce fraud across workforce, consumer, and external identities. Security Store embedded in Microsoft Purview [General availability, March 31] Extending data security across the digital estate requires visibility and enforcement into new data sources and risk surfaces, often requiring a partnered approach. The Security Store embedded directly into Purview lets you discover and evaluate integrated solutions inside your data security workflows. Relevant partner capabilities surface alongside context, making it easier to strengthen data protection, address regulatory requirements, and respond to risk without disrupting existing processes. You can quickly assess which solutions align to data security scenarios, especially with respect to securing AI use, and how they can leverage established classifiers, policies, and investigation workflows in Purview. Keeping integration discovery in‑flow and purchases centralized through the Security Store means you move faster from evaluation to deployment, reducing friction and maintaining a secure, consistent transaction experience. Security Store Advisor [General availability, March 23] Security teams today face growing complexity and choice. 
Teams often know the security outcome they need, whether that's strengthening identity protection, improving ransomware resilience, or reducing insider risk, but lack a clear, efficient way to determine which solutions will help them get there. Security Store Advisor provides a guided, natural-language discovery experience that shifts security evaluation from product-centric browsing to outcome-driven decision-making. You can describe your goal in plain language, and the Advisor surfaces the most relevant Microsoft and partner agents, solutions, and services available in the Security Store, without requiring deep product knowledge. This approach simplifies discovery, reduces time spent navigating catalogs and documentation, and helps you understand how individual capabilities fit together to deliver meaningful security outcomes.

Sentinel promotions

Extending signups for promotional 50 GB commitment tier [Through June 2026]

The Sentinel promotional 50 GB commitment tier offers small and mid-sized organizations a cost-effective entry point into Sentinel. Sign up for the 50 GB commitment tier until June 30, 2026, and maintain the promotional rate until March 31, 2027. This promotion is available globally with regional variations in pricing and is accessible through EA, CSP, and Direct channels. Visit the Sentinel pricing page for details and to get started.

Sentinel RSAC 2026 sessions

All week – Sentinel product demos, Microsoft Booth #5744
Mon Mar 23, 3:55 PM – RSAC 2026 main stage keynote with CVP Vasu Jakkal [KEY-M10W]: Ambient and autonomous security: Building trust in the agentic AI era
Tue Mar 24, 10:30 AM – Live Q&A session, Microsoft booth #5744 and online: Ask me anything with Microsoft Security SMEs and real practitioners
Tue Mar 24, 11 AM – Sentinel data lake theater session, Microsoft booth #5744: From signals to insights: How Microsoft Sentinel data lake powers modern security operations
Tue Mar 24, 2 PM – Sentinel SIEM theater session, Microsoft booth #5744: Vibe-coding SecOps automations with the Sentinel playbook generator
Wed Mar 25, 12 PM – Executive event at Palace Hotel with Threat Protection GM Scott Woodgate: The AI risk equation: Visibility, control, and threat acceleration
Wed Mar 25, 1:30 PM – Sentinel graph theater session, Microsoft booth #5744: Bringing knowledge-driven context to security with Microsoft Sentinel graph
Wed Mar 25, 5 PM – MISA theater session, Microsoft booth #5744: Cut SIEM costs without reducing protection: A Sentinel data lake case study
Thu Mar 26, 1 PM – Security Store theater session, Microsoft booth #5744: What's next for Security Store: Expanding in portal and smarter discovery
All week – 1:1 meetings with Microsoft security experts: meet with the Microsoft Defender, Sentinel SIEM, and Defender Security Operations teams

Additional resources

Sentinel data lake video playlist – Explore the full capabilities of Sentinel data lake as a unified, AI-ready security platform that is deeply integrated into the Defender portal
Sentinel data lake FAQ blog – Get answers to many of the questions we've heard from our customers and partners on Sentinel data lake and billing
AI-powered SIEM migration experience ninja training – Walk through the SIEM migration experience, see how it maps detections, surfaces connector requirements, and supports phased migration decisions
SIEM migration experience documentation – Learn how the SIEM migration experience analyzes your exports, maps detections and connectors, and recommends prioritized coverage
Accenture collaborates with Microsoft to bring agentic security and business resilience to the front lines of cyber defense

Stay connected

Check back each month for the latest innovations, updates, and events to ensure you're getting the most out of Sentinel. We'll see you in the next edition!

What's new in Microsoft Sentinel: April 2026
Welcome to the April 2026 edition of What's new in Microsoft Sentinel. April brings a broad set of updates, with RSAC 2026 announcements rolling out alongside new features. Highlights include cost limit enforcement to prevent runaway query costs, curated open-source intelligence in Threat Analytics, and new data connectors for CrowdStrike, Imperva, AWS, and Logstash. Together, these innovations help security teams control costs, stay ahead of emerging threats, and broaden visibility without added complexity. Read on to learn what's new with Sentinel.

What's new

OSINT reports in Threat Analytics [Preview]

Customers can now consume curated OSINT articles alongside Microsoft-authored Threat Analytics reports, all in one place. (OSINT, or open-source intelligence, is any information readily available to the public.) These OSINT articles come enriched, as detailed in the following list, to help security teams move quickly from awareness to action. What's included:

Curated OSINT articles derived from trusted open-source research
Clear summaries with links back to original sources
Extracted indicators of compromise (IOCs)
Mapped MITRE ATT&CK tactics and techniques
Microsoft enrichment, analysis, and recommended actions (when available)

By bringing OSINT directly into Threat Analytics, we're reducing context switching, improving analyst efficiency, and helping customers operationalize open-source intelligence faster within their Defender workflows. Learn more.

Cost limit enforcement for KQL queries and notebooks [Preview]

Sentinel data lake cost policies do more than just send an alert when usage gets too high. You can set hard limits for KQL queries, jobs, and notebook sessions that block new work once a threshold is exceeded, eliminating surprise bills from runaway queries or heavy workloads. For example, instead of finding out about cost spikes after you run large queries against the data lake tier, enforcement stops further queries before the damage is done. Anything already running still finishes normally, and you get clear messaging about what happened and what to do next. You can lift guardrails temporarily, adjust thresholds, or disable enforcement on the fly. Learn more.

Sentinel data connectors

With 380 Sentinel data connectors, customers achieve broad visibility into complex digital environments and can expand their security operations effectively. Below are the latest updates.

CrowdStrike API Connector [Generally Available]

The CrowdStrike API Connector ingests logs from CrowdStrike APIs into Sentinel, fetching details on hosts, detections, incidents, alerts, and vulnerabilities from your CrowdStrike environment.

Imperva Cloud WAF [Preview]

The Imperva Cloud WAF data connector ingests Imperva logs into Sentinel through AWS S3 buckets, giving you visibility into web application traffic and threats detected by your Imperva deployment for monitoring, investigation, and threat hunting in Sentinel.

AWS Elastic Load Balancer (ELB) [Preview]

This connector allows you to ingest AWS Elastic Load Balancer (ALB, NLB, and GLB) logs into Sentinel. These logs contain detailed records for requests handled by your load balancers, including client IPs, latencies, request paths, and status codes. They are useful for monitoring traffic patterns, investigating anomalies, and ensuring security compliance.

Logstash Output Plugin [Preview]

For organizations that rely on Logstash to collect from on-premises, legacy, or air-gapped environments, the Sentinel Logstash Output Plugin has been rebuilt in Java to align with Microsoft's Secure Future Initiative (SFI) and provide improved security and long-term maintainability. The plugin uses the Azure Monitor Logs Ingestion API with Data Collection Rules (DCRs), giving you full schema control and the ability to ingest directly into Sentinel data lake as well as standard Sentinel tables. Learn more.
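The plugin itself is configured inside Logstash, but the DCR-based Logs Ingestion API it builds on can be illustrated with a short Python sketch using the azure-monitor-ingestion package. Treat this as an assumption-laden illustration rather than the plugin's configuration: the endpoint, DCR immutable ID, stream name, and record fields below are placeholders you would replace with values from your own Data Collection Endpoint and Data Collection Rule.

from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

# Placeholder DCE endpoint; use your own Data Collection Endpoint URI.
endpoint = "https://my-dce.westeurope-1.ingest.monitor.azure.com"
client = LogsIngestionClient(endpoint=endpoint, credential=DefaultAzureCredential())

# Records must match the stream schema declared in the Data Collection Rule.
records = [
    {"TimeGenerated": "2026-04-01T12:00:00Z", "Computer": "host01", "RawData": "example event"},
]

# Placeholder DCR immutable ID and stream name.
client.upload(
    rule_id="dcr-00000000000000000000000000000000",
    stream_name="Custom-MyTable_CL",
    logs=records,
)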
Sentinel data federation [Preview]

Sentinel data federation enables unified visibility and security analytics across federated and ingested data, without compromising data governance. Security teams can quickly query data in Microsoft Fabric, Azure Data Lake Storage (ADLS) Gen2, and Azure Databricks directly from Sentinel, no data movement required. This approach allows teams to explore data broadly through federation, then selectively ingest what matters most into Sentinel to unlock advanced detections, automation, and AI-powered analytics. Learn more.

Sentinel cost estimation tool [Preview]

Customers and partners can confidently estimate Sentinel costs using the cost estimation tool. With meter-level guidance, you can model ingestion across analytics and data lake tiers, compare retention options, and estimate compute costs. Built-in projections of up to three years offer transparency into spend, making it easier to plan, optimize, and share estimates. Try the Sentinel Cost Estimator.

Microsoft Entra and Azure Resource Graph (ARG) connector enhancements [Preview]

You can now enable new Entra assets (EntraDevices, EntraOrgContacts) and ARG assets (ARGRoleDefinitions) in existing asset connectors, expanding inventory coverage and powering richer, built-in graph experiences for greater visibility.

Create workbook reports directly from the data lake [Preview]

Sentinel workbooks can run directly on the data lake using KQL, enabling you to visualize and monitor security data straight from the data lake. By selecting the data lake as the workbook data source, you can create trend analysis and executive reporting.

Custom graphs [Preview]

Custom graphs let you model relationships unique to your organization using data from Sentinel data lake, non-Microsoft sources, and federated data sources, all powered by Fabric. Instead of stitching together dozens of tables manually, you can build graphs that surface blast radius, trace attack paths, map privilege chains, and spot structural outliers like unusually broad access or anomalous email exfiltration. You can generate custom graphs using AI-assisted coding in the Microsoft Sentinel VS Code extension, persist them via a scheduled job, and access them in the graphs experience in the Defender portal. Run Graph Query Language (GQL) queries, visualize results, and interactively traverse the graph to the next hop with a single click. These graphs also provide the knowledge context that enables AI-powered agent experiences to work more effectively, speeding investigations and helping you move from disconnected alerts to confident decisions at scale. Custom graph API usage for creating and querying graphs is billed according to the Sentinel graph meter. Learn more.
MCP entity analyzer [General availability]

Entity analyzer provides reasoned, out-of-the-box risk assessments that help you quickly understand whether a URL or user identity represents potential malicious activity. It analyzes data across threat intelligence, prevalence, and organizational context to generate clear, explainable verdicts you can trust. Entity analyzer integrates with your agents through Sentinel MCP server connections to first-party and third-party AI runtime platforms, or with your SOAR workflows through Logic Apps. It also serves as a trusted foundation for the Defender Triage Agent, delivering more accurate alert classifications and deeper investigative reasoning. Entity analyzer is billed based on Security Compute Units (SCU) consumption. Learn more about entity analyzer and MCP billing.

Claude MCP connector [Preview]

Anthropic Claude can connect to Sentinel through a custom MCP connector, giving you AI-assisted analysis across your Sentinel environment. Microsoft provides step-by-step guidance for configuring a custom connector in Claude that securely connects to a Sentinel MCP server. With this connection you can summarize incidents, investigate alerts, and reason over security signals while keeping data inside Microsoft's security boundary. Access to large language models (LLMs) is managed through Microsoft authentication and role-based controls, supporting faster triage and investigation workflows while maintaining compliance and visibility.

CVEs of interest in the Threat Intelligence Briefing Agent [Preview]

The Threat Intelligence Briefing Agent delivers curated intelligence based on your organization's configuration, preferences, and unique industry and geographic needs. The agent surfaces Common Vulnerabilities and Exposures (CVEs) of interest, highlighting vulnerabilities actively discussed across the security landscape and assessing their potential impact on your environment for more timely threat intelligence insights. The agent automatically incorporates internet exposure data powered by the Sentinel platform to surface threats targeting technologies exposed in your organization. Together, these enhancements help you focus faster on the threats that matter most, without manual investigation.

Additional resources

Blogs and documentation:
Featured blog: App Assure launches its Sentinel Advisory Service
Agentic use cases for developers on Microsoft Sentinel
The Unified SecOps Transition: Why It Is a Security Architecture Decision, Not Just a Portal Change
What's new in Microsoft Defender – April 2026

Webinars and training:
Featured webinar: Powering the Agentic SOC with Scott Woodgate, General Manager, Microsoft Threat Protection
Featured training: Introducing the Microsoft Sentinel Training Lab: Hands-On Security Operations in Minutes
Beyond KQL – Unlocking SOC Insights with Sentinel data lake Jupyter Notebooks
Hyper scale your SOC: Manage delegated access and role-based scoping in Microsoft Defender

Stay connected

Check back each month for the latest innovations, updates, and events to ensure you're getting the most out of Microsoft Sentinel. We'll see you in the next edition!

Use Data Wrangler to Streamline Your Microsoft Sentinel data lake Notebook Development
One of the many exciting features of the Microsoft Sentinel data lake is a built-in advanced analytics engine, powered by Apache Spark. This Spark cluster has access to data within the Sentinel data lake and can work with this data through Jupyter notebooks in Visual Studio Code. As with any coding effort, creating the right data set can be an iterative process, and sometimes making those changes purely through code can be a little tricky. Wouldn't it be great if you could visualize the distribution of your data, apply some actions to shape and refine it, and then translate those actions to code? Well, you can do that with the Data Wrangler extension in VSCode in conjunction with the Sentinel data lake's MicrosoftSentinelProvider class. This blog will walk you through how to enable Data Wrangler in VSCode, how to use some of its functionality, and how to incorporate refinement actions back into your data lake notebook.

Scenario

The DataFrame that is being built will be sourced from SigninLogs but will be used in a later algorithm. I need to clean up some of the columns by replacing missing values with default values, removing rows meeting certain criteria, and creating some categorical columns for later machine learning tasks.

Initial DataFrame

An essential data structure that you use in Jupyter notebooks is a DataFrame. A DataFrame is an in-memory representation of your data, like a database table that has columns and rows. Let's start with a basic DataFrame that contains some sign-in events from the SigninLogs table in the data lake. The returned data is useful, but for our later investigations we will need to "clean" the data by removing some missing values, renaming columns, creating true/false columns for analysis, and performing some other operations. In our notebook cell, we'll perform the following actions.

Initial Includes

Before you can use the Sentinel data lake in your notebook, you need to include the proper class from the sentinel_lake.providers module. This module contains a class named MicrosoftSentinelProvider that provides functions that let you read from and write to the data lake. We will also be using a few other Python libraries in our example, and this would look like the following:

from sentinel_lake.providers import MicrosoftSentinelProvider
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
import pandas as pd
from datetime import datetime, timedelta

Variable Definitions

Our sample will pull the last 30 days of SigninLogs from the data lake in order to assist with the investigation. This will be a variable that is defined once in the notebook and can be used elsewhere if needed. The same will be done for the name of the workspace in the data lake that will be queried, since the read_table and save_as_table functions can take the workspace name as a parameter and I only want to define the name once and avoid typos with multiple calls. In addition, there is a very important step where we instantiate our connection to the Sentinel data lake. The "spark" variable we pass to the MicrosoftSentinelProvider class is a global variable representing your Spark session. The variable sentinel_provider exposes the read_table and save_as_table functions that enable reading from and writing to the data lake.
one_month_ago = datetime.now() - timedelta(days=30)
workspaceName = "YOUR_WORKSPACE_NAME"
sentinel_provider = MicrosoftSentinelProvider(spark)

Replace "YOUR_WORKSPACE_NAME" with the name of the Sentinel workspace that you will be working with in the data lake.

Complex Type Definitions

Part of our query of SigninLogs will return complex types that contain name/value pairs. The LocationDetails and Status columns have nested values like city and state for LocationDetails and errorCode and failureReason for Status. To be able to easily access those nested values, the use of a StructType allows us to define that structure, and we'll use this when retrieving the DataFrame.

location_schema = StructType(
    [
        StructField("city", StringType(), True),
        StructField("state", StringType(), True),
        StructField("countryOrRegion", StringType(), True),
    ]
)

status_schema = StructType(
    [
        StructField("errorCode", IntegerType(), True),
        StructField("failureReason", StringType(), True),
        StructField("additionalDetails", StringType(), True),
    ]
)

DataFrame Definition

We now have the parts needed to make a call to create a DataFrame for the last 30 days of data from the SigninLogs table in the lake. Our code to define the DataFrame uses our time definition as a filter for TimeGenerated, defines a handful of columns that we want returned, breaks down our complex types using the StructTypes defined earlier, and retrieves those nested column names as individual DataFrame columns.

signin_events_df = (
    sentinel_provider.read_table("SigninLogs", workspaceName)
    .filter(col("TimeGenerated") >= one_month_ago)
    .filter(col("UserPrincipalName") != "")
    .select(
        col("TimeGenerated"),
        col("AppDisplayName"),
        col("IPAddress"),
        col("IsRisky"),
        col("RiskState"),
        col("RiskLevelAggregated"),
        col("RiskLevelDuringSignIn"),
        col("ConditionalAccessStatus"),
        col("ClientAppUsed"),
        col("IsInteractive"),
        col("UserType"),
        col("MfaDetail"),
        col("LocationDetails"),
        col("Status"),
    )
    .withColumn("loc", from_json(col("LocationDetails"), location_schema))
    .withColumn("status", from_json(col("Status"), status_schema))
    .select(
        "*",
        col("loc.city").alias("City"),
        col("loc.state").alias("State"),
        col("loc.countryOrRegion").alias("Country"),
        col("status.errorCode").alias("ErrorCode"),
        col("status.failureReason").alias("FailureReason"),
        col("status.additionalDetails").alias("AdditionalDetails"),
    )
    .drop("loc", "status")
)

Final Code (for now)

Putting all of these steps together results in the code for our cell that retrieves the last 30 days of SigninLogs into a DataFrame. Running that cell and then calling show() on the resulting DataFrame produces a raw tabular dump of the sign-in events. It's great data, but not the most visually appealing. It would be nice to have a cleaner looking table. That's where Data Wrangler can help right away.

Install Data Wrangler

Data Wrangler is a VSCode extension that's published by Microsoft. You can find it in the VSCode Marketplace by searching for "Data Wrangler". Installing the extension is quick and only requires Python 3.8 or higher to be installed on your machine.

Data Wrangler View of a DataFrame

Data Wrangler, by default, works natively with Pandas DataFrames. Pandas is an open-source Python library that is very popular with data scientists for data analysis and manipulation. When working with the MicrosoftSentinelProvider class, the DataFrame returned is a PySpark DataFrame. We can easily convert our PySpark DataFrames to Pandas DataFrames by calling `.toPandas()` on that DataFrame.
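As a small sketch of that conversion step, using the signin_events_df built above (the same variable names reappear in the exported code later in this walkthrough):

# Convert the PySpark DataFrame to a Pandas DataFrame so the Data Wrangler
# extension can open it; Data Wrangler works natively with Pandas.
signin_events_pandas_df = signin_events_df.toPandas()
signin_events_pandas_df.head()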
Opened in Data Wrangler, that's a much cleaner looking table. Clicking the ellipsis in the bottom right of the table and selecting "Show column insights" changes the view to provide a quick glance at the distribution of the data. Now, just by glancing at the column headers, you can quickly assess the distribution of data in the DataFrame. You can see that 7% of conditional access attempts failed, that a number of sign-in events were for Security Copilot, and that 30% of the sign-in events came from just three IP addresses.

Wrangling Your Data

A cleaner table view with data distribution statistics is nice, but the real power of Data Wrangler allows you to shape and refine your data for use elsewhere in your notebook. In the simple DataFrame we have created, let's perform some data cleansing steps so that we can more easily filter and join this DataFrame with other DataFrames later in the analysis. Upon first glance at the DataFrame there are a few data cleansing tasks to perform, namely:

Remove rows that have non-usable UserType values of -1
Create a true/false column for whether the user is a Member or Guest, and drop the original UserType column
Fill in column values that have missing data with a default value
Filter out sign-ins to the My Profile page

Let's get started by opening Data Wrangler by clicking the Data Wrangler icon in the lower left corner of the DataFrame. Data Wrangler will open in a new tab in VS Code. There's a lot going on in this tab, with the left-hand pane having sections for an operations toolbox, a data summary panel that lists some stats about your DataFrame, and a cleaning steps section that keeps track of the changes you have made to your DataFrame. The rest of the page is split in two, with the DataFrame view taking up the majority of real estate and the operation preview pane at the bottom. We'll spend most of our time in the operations pane, but we'll also use the operation preview pane to do some additional tasks. Let's dive in.

Task 1: Remove Rows

Looking at the DataFrame grid, I can see the UserType column has some rows with a value of "-1". I don't want those in my DataFrame, so we can remove them using a filter. Selecting Filter in the Operations panel allows me to enter my criteria. I want to exclude rows that have a "-1" for UserType. I'll enter that, and if I wait a few seconds, my DataFrame will update, allowing me to preview the change. I unchecked the "Keep matching rows" checkbox, so my filter is excluding rows that match my criteria of UserType "Equal to" the value "-1". In the DataFrame, UserType is highlighted and I see that -1 is no longer part of the DataFrame. Below the DataFrame, in the operation preview, I can see the Python code that makes this change. And in the Cleaning Steps pane, I see my Filter step is present. I can accept this change by clicking the Apply button in the Operations pane. Once I do that, my DataFrame is updated with my Filter operation. Everything being done by Data Wrangler is done in a sandbox, so these steps do not affect my original DataFrame...at least not yet. (We'll get to that.) Let's make a few more changes.

Task 2: One-Hot Encoded Columns

I want to be able to filter on UserType later on in my notebook, but I don't want to do string comparisons. I'd rather filter on a simple binary column. That's where one-hot columns are useful. I'd like to have a column for IsMember and one for IsGuest. Each column will be a 0 or a 1 (false or true), which allows me to quickly filter instead of doing string comparisons. Let's create those columns.
In the Operations pane, expand Formulas and select One-hot encode. The panel will switch so you can enter the column you're targeting. Select UserType, and in a few seconds you'll see your DataFrame update with a preview of the new columns. Notice the new columns created (UserType_Guest and UserType_Member) are in green. The UserType column is in red and will be dropped. Clicking Apply accepts these changes, and you'll see the updated DataFrame. You can rename the new columns by selecting the Rename column operation under Schema. In this case, we'll rename the new columns to be IsMember and IsGuest, and accept the changes. Your Data Wrangler tab should look similar to the below image.

Task 3: Provide Default Values for Missing Data

Scanning through the DataFrame, we can see that the FailureReason and AdditionalDetails columns have a number of missing values. We would prefer to have a value in a cell rather than missing values. Filling in default values for missing values is another operation. Under Find and Replace in the Operations pane, select Fill missing values. You can set a default value for multiple columns in one swoop with this operation. I'm setting the same default value ("N/A") for both columns in one operation. The columns in red are the old values; the columns in green are the new values. Again, if this looks good, hit the Apply button and the DataFrame is updated.

Task 4: Use Copilot to Create Operations

One last update that we wanted to make was to filter out rows where the target application was "My Profile". We've already created a filter operation earlier, but this time we'll use Copilot to generate the operation. In the Operation Preview pane, below your DataFrame, there's a text box where you can type a prompt. Enter something like "For the column AppDisplayName, filter out the rows where the value is equal to My Profile". Hit Enter, and Copilot thinks for a few seconds and will display the code in the preview pane along with a modal dialog stating that the preview is paused. Since this change was generated by Copilot, you need to review the code before accepting the change. If the code looks good, click the Run code link in the modal and your DataFrame will go back to preview mode. You'll see the filtered-out rows highlighted, and if this all looks good, click Apply to accept the operation. Using Copilot to help create operations can be very helpful if you know what you want to do but aren't sure what the operation is called, such as a one-hot encoding. But you should always examine the code generated before accepting it.

Applying the Changes to Our Notebook

We've created a number of operations and our DataFrame looks great, but how can we translate these operations back to our original notebook? Data Wrangler makes that easy by allowing you to export your operations back into the source notebook. Once you're satisfied with your changes, click the Export to notebook button above your DataFrame. This action will take all of the operations you created and create a new cell in your Jupyter notebook, right below the one where you kicked off the Data Wrangler tab. Your operations will be contained within a local function, and a copy of your DataFrame will be sent to the function. The result of the function will be a new DataFrame that you can then work with throughout the rest of your notebook. Since this is all code, you can change variable names or even the structure of the generated code.
Personally, I like to change the DataFrame names from the generic "df" and "df_clean" to something more meaningful, and even the local function can be renamed to a more meaningful function name. This way, if others are working on the same notebook, they have a better understanding of what is happening in the code. It may look like this:

def clean_signin_info(df):
    # Filter rows based on column: 'UserType'
    df = df[~(df["UserType"] == "-1")]
    # One-hot encode column: 'UserType'
    insert_loc = df.columns.get_loc("UserType")
    df = pd.concat(
        [
            df.iloc[:, :insert_loc],
            pd.get_dummies(df.loc[:, ["UserType"]]),
            df.iloc[:, insert_loc + 1 :],
        ],
        axis=1,
    )
    # Rename column 'UserType_Guest' to 'IsGuest'
    df = df.rename(columns={"UserType_Guest": "IsGuest"})
    # Rename column 'UserType_Member' to 'IsMember'
    df = df.rename(columns={"UserType_Member": "IsMember"})
    # Replace missing values with "N/A" in columns: 'FailureReason', 'AdditionalDetails'
    df = df.fillna({"FailureReason": "N/A", "AdditionalDetails": "N/A"})
    return df

signin_events_pandas_df = signin_events_df.toPandas()
cleaned_signin_events_df = clean_signin_info(signin_events_pandas_df)
cleaned_signin_events_df.head()

And my resulting DataFrame will have all of my cleaning steps applied.

Start Using Data Wrangler Today

You can get started using Data Wrangler with your Sentinel data lake notebooks today and explore all of the data wrangling tasks you can do with it. The Data Wrangler extension is available in the VS Code Marketplace and is free to download and use. It works well with the Microsoft Sentinel extension that you use with your Sentinel data lake notebook tasks, so install it today and start wrangling the data lake. Happy wrangling!

Resources

Running notebooks on the Microsoft Sentinel data lake - Microsoft Security | Microsoft Learn
Microsoft Sentinel data lake
Microsoft Sentinel Provider class reference | Microsoft Learn
Getting Started with Data Wrangler in VS Code
Beyond KQL: Unlocking SOC Insights With Sentinel data lake Jupyter Notebooks | Microsoft Virtual Ninja Training

How to stop incidents merging under new incident (MultiStage) in defender.
Dear All,

We are experiencing a challenge with the integration between Microsoft Sentinel and the Defender portal where multiple custom rule alerts and analytic rule incidents are being automatically merged into a single incident named "Multistage." This automatic incident merging affects the granularity and context of our investigations, especially for important custom use cases such as specific admin activities and differentiated analytic logic. Key concerns include:

Custom rule alerts from Sentinel merging undesirably into a single "Multistage" incident in Defender, causing loss of incident-specific investigation value.
Analytic rules arising from different data sources and detection logic being merged, although they represent distinct security events needing separate attention.
Customers require and depend on distinct, non-merged incidents for custom use cases, and the current incident correlation and merging behavior undermines this requirement.

We understand that Defender's incident correlation engine merges incidents based on overlapping entities, timelines, and behaviors, but we would like guidance or configuration best practices to disable or minimize this automatic merging behavior for our custom and analytic rule incidents. Our goal is to maintain independent incidents corresponding exactly to our custom alerts so that hunting, triage, and response workflows remain precise and actionable. Any recommendations or advanced configuration options to achieve this separation would be greatly appreciated.

Thank you for your assistance.

Best regards

How Granular Delegated Admin Privileges (GDAP) allows Sentinel customers to delegate access
Simplifying Defender SIEM and XDR delegated access

As Microsoft Sentinel and Defender converge into a unified experience, organizations face a fundamental challenge: the lack of a scalable, comprehensive delegated access model that works seamlessly across Entra ID and Sentinel's Azure Resource Manager resources, creating a significant barrier for Managed Security Service Providers (MSSPs) and large enterprises with complex multi-tenant structures.

Extending GDAP beyond CSPs: a strategic solution

In response to these challenges, we have developed an extension to GDAP that makes it available to all Sentinel and Defender customers, including non-CSP organizations. This expansion enables both MSSPs and customers with multi-tenant organizational structures to establish secure, granular delegated access relationships directly through the Microsoft Defender portal. This is now available in public preview. The GDAP extension aligns with zero-trust security principles through a three-way handshake model requiring explicit mutual consent between governing and governed tenants before any relationship is established. This consent-based approach enhances transparency and accountability, reducing risks associated with broad, uncontrolled permissions. By integrating with Microsoft Defender, GDAP enables advanced threat detection and response capabilities across tenant boundaries while maintaining granular permission management through Entra ID roles and Unified RBAC custom permissions.

Delivering unified management of delegated access across SIEM and XDR

With GDAP, customers gain a truly unified way to manage access across both Microsoft Sentinel and Defender, using a single, consistent delegated access model for SIEM and XDR. For Sentinel customers, this brings parity with the Azure portal experience: where delegated access was previously managed through Azure Lighthouse, it can now be handled directly in the Defender portal using GDAP. More importantly, for organizations running SIEM and XDR together, GDAP eliminates the need to switch between portals, allowing teams to view, manage, and govern security access from one centralized experience. The result is simpler administration, reduced operational friction, and a more cohesive way to secure multi-tenant environments at scale.

How GDAP for non-CSPs works: the three-step handshake

The GDAP handshake model implements a security-first approach through three distinct steps, each requiring explicit approval to prevent unauthorized access.

Step 1 begins with the governed tenant initiating the relationship, allowing the governing tenant to request GDAP access.
Step 2 shifts control to the governing tenant, which creates and sends a delegated access request with specific requested permissions through the multi-tenant organization (MTO) portal.
Step 3 returns to the governed tenant for final approval.

This approach provides customers with complete visibility and control over who can access their security data and with what permissions, while giving MSSPs a streamlined, Microsoft-supported mechanism for managing delegated relationships at scale. A final step follows the handshake: Step 4 assigns Sentinel permissions. In Azure resource management within the governed tenant, assign Sentinel workspace permissions to the governing tenant's security groups that were used in the created relationship.
Learn more here: Configure delegated access with governance relationships for multitenant organizations - Unified se…

Security Copilot Agents in Defender XDR: where things actually stand
With RSAC 2026 behind us and the E5 inclusion now rolling out between April 20 and June 30, anyone planning SOC workflows or sitting on a capacity budget needs a clear picture of what is GA, what is preview, and what was just announced. The marketing pages tend to blur those lines. This is my sober look at the current state, with the operational details that matter for adoption decisions.

What is actually shipping right now

The Phishing Triage Agent is GA. It only handles user-reported phish through Defender for Office 365 P2, but for most SOCs that is a meaningful chunk of the L1 queue. Verdicts come with a natural-language rationale rather than just a label, which is the part that determines whether analysts will trust it. The agent learns from analyst confirmations and overrides, so the feedback loop matters more than the initial setup.

There is a setup detail that is easy to miss: the agent will not classify alerts that have already been suppressed by alert tuning. The built-in rule "Auto-Resolve - Email reported by user as malware or phish" needs to be off, and any custom tuning rules that touch this alert type need review. If you skip this, the agent runs on an empty queue and you wonder why nothing is happening.

The Threat Intelligence Briefing Agent is also GA. It produces tenant-tailored intel briefings on a regular cadence. Useful, but lower operational impact than the triage agents.

Copilot Chat in Defender went GA with the April 2026 update. Conversational Q&A inside the portal, grounded in your incident and entity data. This is the lowest-risk way to get value out of Security Copilot and probably where most teams should start.

Public preview, worth watching

The Dynamic Threat Detection Agent is the most technically interesting one. It runs continuously in the Defender backend, correlates across Defender and Sentinel telemetry, generates its own hypotheses, and emits a dynamic alert when the evidence converges. The detection source on the alert is Security Copilot. Each alert includes the structured fields (severity, MITRE techniques, remediation) plus a narrative explaining the reasoning. For EU tenants, the residency point is worth confirming with whoever owns data protection in your org: the service runs region-local, so customer data and required telemetry stay inside the designated geographic boundary. During public preview it is enabled by default for eligible customers and is free. At GA, currently targeted for late 2026, it transitions to the SCU consumption model and can be disabled.

The Threat Hunting Agent is also in public preview. Natural language to KQL with guided hunting. Lower stakes, but useful for teams without deep KQL expertise on hand.

Announced at RSAC, still preview

Two agents got the headlines in March:

The Security Alert Triage Agent extends the agentic triage approach beyond phishing into identity and cloud alerts. The longer-term direction is consolidating phishing, identity, and cloud triage under a single agent. Rollout is from April 2026, in preview.

The Security Analyst Agent is the multi-step investigation agent. Deeper context across Defender and Sentinel, prioritised findings, transparent reasoning trace. In preview since March 26.

Both look promising on paper, but Microsoft's history of preview features that take a long time to mature is well-documented. I would not plan production workflows around either of them yet.

What you actually get with the E5 inclusion

This is the licensing change most people are dealing with right now. Security Copilot has been part of the E5 product terms since January 1, 2026. Tenant rollout is phased between April 20 and June 30, 2026, with a 7-day notification before activation.

The numbers:
400 SCUs per month for every 1,000 paid user licenses
Capped at 10,000 SCUs per month, which you hit at around 25,000 seats
Linear scaling below that, so a 3,000-seat tenant gets 1,200 SCUs per month
No rollover, the pool resets monthly
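A quick sketch of that scaling, if you want to sanity-check your own seat count. This is back-of-the-envelope arithmetic on the published figures, assuming strictly linear proration below the cap, not an official calculator:

def included_scus(paid_seats: int) -> int:
    # 400 SCUs per 1,000 paid user licenses, capped at 10,000 SCUs per month.
    return int(min(paid_seats / 1000 * 400, 10000))

for seats in (1000, 3000, 10000, 25000, 40000):
    print(seats, included_scus(seats))
# 3,000 seats -> 1,200 SCUs/month, matching the example above;
# beyond roughly 25,000 seats the pool stays at the 10,000 SCU cap.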
What is included: chat, promptbooks, and agentic scenarios across Defender, Entra, Intune, Purview, and the standalone portal. Agent Builder and the Graph APIs are in. If you also run Sentinel, the included SCUs apply to Security Copilot scenarios there.

What is not included: Sentinel data lake compute and storage. Those still run through Azure on the regular meters.

Beyond the included pool you pay 6 USD per SCU pay-as-you-go, with 30 days notice before that mode kicks in.

Practical things worth knowing before activation

A few details that are easy to miss in the docs:

Under System > Settings > Copilot in Defender > Preferences, switch from Auto-generate to Generate on demand. Auto-generate will burn SCUs on incidents nobody is going to look at. Generate on demand gives you direct control.

In the Security Copilot portal workspace settings, check the data storage location and the data sharing toggle. Data sharing is on by default, which means Microsoft uses interaction data for product improvement. If your compliance position does not allow that, change it before agents start running. Changing it requires the Capacity Contributor role.

Agent runs are not equivalent to the same number of analyst chat prompts. A triage agent processing fifty alerts in one run consumes meaningfully more SCUs than fifty manual prompts on the same data. If you have a high-volume phishing pipeline, model that out before you flip the switch broadly. The usage dashboard in the Security Copilot portal breaks down consumption by day, user, and scenario.

Output quality depends on telemetry quality. Flaky connectors, gaps in log sources, or a high baseline of misconfigured alerts will produce verdicts that match. Connector health monitoring (the SentinelHealth table in Advanced Hunting is a sensible starting point; see the sketch after this section) is a precondition.

The agents only improve if analysts feed the override loop. If your team treats the verdicts as background noise rather than confirming or correcting them, the feedback signal is lost and calibration stays where it shipped. That is a process problem, not a product problem, but it determines whether any of this is worth the SCUs.
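On the connector-health point above: the original suggestion is the SentinelHealth table in Advanced Hunting, but if you already live in the data lake notebooks described earlier on this page, a rough equivalent looks like the sketch below. This is an assumption-heavy illustration: it presumes health monitoring is enabled, that SentinelHealth is reachable in your workspace through the data lake, and that the column names match the published schema; verify against your own tenant.

from sentinel_lake.providers import MicrosoftSentinelProvider
from pyspark.sql.functions import col

provider = MicrosoftSentinelProvider(spark)

# Placeholder workspace name; assumes SentinelHealth is populated and reachable here.
health = provider.read_table("SentinelHealth", "YOUR_WORKSPACE_NAME")

(
    health
    .filter(col("Status") != "Success")
    .select("TimeGenerated", "SentinelResourceName", "OperationName", "Status", "Description")
    .orderBy(col("TimeGenerated").desc())
    .show(50, truncate=False)
)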
A reasonable adoption order

A rough sequence that minimises capacity surprises:

1. Copilot Chat in Defender first. Lowest risk, immediate value through natural language Q&A in the investigation context.
2. Phishing Triage Agent on a controlled subset, with a review cadence in place. Check the built-in tuning rules first.
3. Watch the SCU dashboard for the first month before adding anything else.
4. Let the Dynamic Threat Detection Agent run while it is in public preview, since it is default-on and free anyway. Compare its alerts against existing Sentinel detections.
5. Security Alert Triage Agent for identity and cloud once the phishing baseline is stable.
6. Establish a monthly review covering agent decisions, false-positive rate, SCU cost, and MTTD/MTTR trends.

Technically, agentic triage is moving past phishing into identity and cloud, and the Dynamic Threat Detection Agent represents a genuine attempt at the false-negative problem rather than just another rule engine. On the licensing side, the E5 inclusion removes the biggest barrier to adoption that previously existed. The risk is enabling everything at once. Agents that nobody reviews are agents that consume capacity without delivering value, and the SCU dashboard is the only thing that will tell you that is happening. One agent, one use case, a 30-day baseline, then the next one. The order matters more than the speed.