What's new in Microsoft Sentinel: April 2026
Welcome to the April 2026 edition of What's new in Microsoft Sentinel. April brings a broad set of updates, with RSAC 2026 announcements rolling out alongside new features. Highlights include cost limit enforcement to prevent runaway query costs, curated open-source intelligence in Threat Analytics, and new data connectors for CrowdStrike, Imperva, AWS, and Logstash. Together, these innovations help security teams control costs, stay ahead of emerging threats, and broaden visibility without added complexity. Read on to learn what's new with Sentinel.

What's new

OSINT reports in Threat Analytics [Preview]

Customers can now consume curated OSINT articles alongside Microsoft-authored Threat Analytics reports, all in one place. (OSINT, or open-source intelligence, is any information readily available to the public.) These OSINT articles come enriched, as detailed in the following list, to help security teams move quickly from awareness to action. What's included:

- Curated OSINT articles derived from trusted open-source research
- Clear summaries with links back to original sources
- Extracted indicators of compromise (IOCs)
- Mapped MITRE ATT&CK tactics and techniques
- Microsoft enrichment, analysis, and recommended actions (when available)

By bringing OSINT directly into Threat Analytics, we're reducing context switching, improving analyst efficiency, and helping customers operationalize open-source intelligence faster within their Defender workflows. Learn more.

Cost limit enforcement for KQL queries and notebooks [Preview]

Sentinel data lake cost policies do more than just send an alert when usage gets too high. You can set hard limits for KQL queries, jobs, and notebook sessions that block new work once a threshold is exceeded, eliminating surprise bills from runaway queries or heavy workloads. For example, instead of finding out about cost spikes after you run large queries against the data lake tier, enforcement stops further queries before the damage is done.
Anything already running still finishes normally, and you get clear messaging about what happened and what to do next. You can lift guardrails temporarily, adjust thresholds, or disable enforcement on the fly. Learn more.

Sentinel data connectors

With 380 Sentinel data connectors, customers achieve broad visibility into complex digital environments and can expand their security operations effectively. Below are the latest updates.

CrowdStrike API Connector [Generally Available]

The CrowdStrike API Connector ingests logs from CrowdStrike APIs into Sentinel, fetching details on hosts, detections, incidents, alerts, and vulnerabilities from your CrowdStrike environment.

Imperva Cloud WAF [Preview]

The Imperva Cloud WAF data connector ingests Imperva logs into Sentinel through AWS S3 buckets, giving you visibility into web application traffic and threats detected by your Imperva deployment for monitoring, investigation, and threat hunting in Sentinel.

AWS Elastic Load Balancer (ELB) [Preview]

This connector allows you to ingest AWS Elastic Load Balancer (ALB, NLB, and GLB) logs into Sentinel. These logs contain detailed records for requests handled by your load balancers, including client IPs, latencies, request paths, and status codes. They are useful for monitoring traffic patterns, investigating anomalies, and ensuring security compliance.

Logstash Output Plugin [Preview]

For organizations that rely on Logstash to collect from on-premises, legacy, or air-gapped environments, the Sentinel Logstash Output Plugin has been rebuilt in Java to align with Microsoft's Secure Future Initiative (SFI) and provide improved security and long-term maintainability. The plugin uses the Azure Monitor Logs Ingestion API with Data Collection Rules (DCRs), giving you full schema control and the ability to ingest directly into Sentinel data lake as well as standard Sentinel tables. Learn more.
Sentinel data federation [Preview]

Sentinel data federation enables unified visibility and security analytics across federated and ingested data, without compromising data governance. Security teams can quickly query data in Microsoft Fabric, Azure Data Lake Storage (ADLS) Gen2, and Azure Databricks directly from Sentinel, no data movement required. This approach allows teams to explore data broadly through federation, then selectively ingest what matters most into Sentinel to unlock advanced detections, automation, and AI-powered analytics. Learn more.

Sentinel cost estimation tool [Preview]

Customers and partners can confidently estimate Sentinel costs using the cost estimation tool. With meter-level guidance, you can model ingestion across analytics and data lake tiers, compare retention options, and estimate compute costs. Built-in projections of up to three years offer transparency into spend, making it easier to plan, optimize, and share estimates. Try the Sentinel Cost Estimator.

Microsoft Entra and Azure Resource Graph (ARG) connector enhancements [Preview]

Enable new Entra assets (EntraDevices, EntraOrgContacts) and ARG assets (ARGRoleDefinitions) in existing asset connectors, expanding inventory coverage and powering richer, built-in graph experiences for greater visibility.

Create workbook reports directly from the data lake [Preview]

Sentinel workbooks can directly run on the data lake using KQL, enabling you to visualize and monitor security data straight from the data lake. By selecting the data lake as the workbook data source, you can create trend analysis and executive reporting.

Custom graphs [Preview]

Custom graphs let you model relationships unique to your organization using data from Sentinel data lake, non-Microsoft sources, and federated data sources, all powered by Fabric.
Instead of stitching together dozens of tables manually, you can build graphs that surface blast radius, trace attack paths, map privilege chains, and spot structural outliers like unusually broad access or anomalous email exfiltration. You can generate custom graphs using AI-assisted coding in the Microsoft Sentinel VS Code extension, persist them via a scheduled job, and access them in the graphs experience in the Defender portal. Run Graph Query Language (GQL) queries, visualize results, and interactively traverse the graph to the next hop with a single click.

These graphs also provide the knowledge context that enables AI-powered agent experiences to work more effectively, speeding investigations and helping you move from disconnected alerts to confident decisions at scale. Custom graph API usage for creating and querying graphs is billed according to the Sentinel graph meter. Learn more.

MCP entity analyzer [Generally Available]

Entity analyzer provides reasoned, out-of-the-box risk assessments that help you quickly understand whether a URL or user identity represents potential malicious activity. It analyzes data across threat intelligence, prevalence, and organizational context to generate clear, explainable verdicts you can trust. Entity analyzer integrates with your agents through Sentinel MCP server connections to first-party and third-party AI runtime platforms, or with your SOAR workflows through Logic Apps. It also serves as a trusted foundation for the Defender Triage Agent, delivering more accurate alert classifications and deeper investigative reasoning. Entity analyzer is billed based on Security Compute Units (SCU) consumption. Learn more about entity analyzer and MCP billing.

Sentinel MCP graph tool collection [Preview]

Graph tool collection helps security teams visualize and explore relationships between identities and device assets, threats, and activity signals ingested by data connectors and alerted by analytic rules.
The tool provides a clear graph view that highlights dependencies and configuration gaps, which makes it easier to understand interactions across environments. This tool helps security teams assess coverage, optimize content deployment, and identify areas that may need tuning or additional data sources, all from a single, interactive workspace. Executing graph queries via the MCP tools triggers the graph meter.

Claude MCP connector [Preview]

Anthropic Claude can connect to Sentinel through a custom MCP connector, giving you AI-assisted analysis across your Sentinel environment. Microsoft provides step-by-step guidance for configuring a custom connector in Claude that securely connects to a Sentinel MCP server. With this connection you can summarize incidents, investigate alerts, and reason over security signals while keeping data inside Microsoft's security boundary. Access to large language models (LLMs) is managed through Microsoft authentication and role-based controls, supporting faster triage and investigation workflows while maintaining compliance and visibility.

CVEs of interest in the Threat Intelligence Briefing Agent [Preview]

The Threat Intelligence Briefing Agent delivers curated intelligence based on your organization's configuration, preferences, and unique industry and geographic needs. The agent surfaces Common Vulnerabilities and Exposures (CVEs) of interest, highlighting vulnerabilities actively discussed across the security landscape and assessing their potential impact on your environment for more timely threat intelligence insights. The agent automatically incorporates internet exposure data powered by the Sentinel platform to surface threats targeting technologies exposed in your organization. Together, these enhancements help you focus faster on the threats that matter most, without manual investigation.
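As a rough mental model of the CVE-of-interest prioritization described above, combining "actively discussed" signal with internet exposure data: the sketch below is conceptual only, not the agent's actual scoring logic, and every CVE ID, field name, and weight is invented for illustration.

```python
def cves_of_interest(cves, exposed_tech):
    """Rank CVEs, boosting those affecting internet-exposed technologies.
    Conceptual sketch only; fields and weights are illustrative."""
    def score(cve):
        s = cve["chatter"]                 # how actively discussed (0-10, assumed scale)
        if cve["product"] in exposed_tech:
            s += 5                         # boost: targets internet-exposed tech
        return s
    return sorted(cves, key=score, reverse=True)

# Hypothetical inputs: two made-up CVEs and one exposed technology.
cves = [
    {"id": "CVE-2026-0001", "product": "internal-crm", "chatter": 9},
    {"id": "CVE-2026-0002", "product": "edge-vpn", "chatter": 6},
]
ranked = cves_of_interest(cves, exposed_tech={"edge-vpn"})
```

In this toy example, the edge-vpn CVE ranks first despite lower discussion volume, because exposure data outweighs chatter alone, which mirrors the "threats targeting technologies exposed in your organization" behavior described above.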
Additional resources

Blogs and documentation:
- Featured blog: App Assure launches its Sentinel Advisory Service
- Agentic use cases for developers on Microsoft Sentinel
- The Unified SecOps Transition: Why It Is a Security Architecture Decision, Not Just a Portal Change
- What's new in Microsoft Defender – April 2026

Webinars and training:
- Featured webinar: Powering the Agentic SOC with Scott Woodgate, General Manager, Microsoft Threat Protection
- Featured training: Introducing the Microsoft Sentinel Training Lab: Hands-On Security Operations in Minutes
- Beyond KQL – Unlocking SOC Insights with Sentinel data lake Jupyter Notebooks
- Hyper scale your SOC: Manage delegated access and role-based scoping in Microsoft Defender

Stay connected

Check back each month for the latest innovations, updates, and events to ensure you're getting the most out of Microsoft Sentinel. We'll see you in the next edition!

How to stop incidents merging under a new incident (MultiStage) in Defender
Dear All,

We are experiencing a challenge with the integration between Microsoft Sentinel and the Defender portal where multiple custom rule alerts and analytic rule incidents are being automatically merged into a single incident named "Multistage." This automatic incident merging affects the granularity and context of our investigations, especially for important custom use cases such as specific admin activities and differentiated analytic logic.

Key concerns include:
- Custom rule alerts from Sentinel merging undesirably into a single "Multistage" incident in Defender, causing loss of incident-specific investigation value.
- Analytic rules arising from different data sources and detection logic are merged, although they represent distinct security events needing separate attention.
- Customers require and depend on distinct, non-merged incidents for custom use cases, and the current incident correlation and merging behavior undermines this requirement.

We understand that Defender's incident correlation engine merges incidents based on overlapping entities, timelines, and behaviors, but we would like guidance or configuration best practices to disable or minimize this automatic merging behavior for our custom and analytic rule incidents. Our goal is to maintain independent incidents corresponding exactly to our custom alerts so that hunting, triage, and response workflows remain precise and actionable. Any recommendations or advanced configuration options to achieve this separation would be greatly appreciated.

Thank you for your assistance. Best regards

Your Sentinel AMA Logs & Queries Are Public by Default — AMPLS Architectures to Fix That
When you deploy Microsoft Sentinel, security log ingestion travels over public Azure Data Collection Endpoints by default. The connection is encrypted, and the data arrives correctly, but the endpoint is publicly reachable, and so is the workspace itself, queryable from any browser on any network. For many organisations, that trade-off is fine. For others (regulated industries, healthcare, financial services, critical infrastructure) it is the exact problem they need to solve. Azure Monitor Private Link Scope (AMPLS) is how you solve it.

What AMPLS Actually Does

AMPLS is a single Azure resource that wraps your monitoring pipeline and controls two settings:
- Where logs are allowed to go (ingestion mode: Open or PrivateOnly)
- Where analysts are allowed to query from (query mode: Open or PrivateOnly)

Change those two settings and you fundamentally change the security posture, not as a policy recommendation, but as a hard platform enforcement. Set ingestion to PrivateOnly and the public endpoint stops working. It does not fall back gracefully. It returns an error. That is the point. It is not a firewall rule someone can bypass or a policy someone can override. Control is baked in at the infrastructure level.

Three Patterns — One Spectrum

There is no universally correct answer. The right architecture depends on your organisation's risk appetite, existing network infrastructure, and how much operational complexity your team can realistically manage. These three patterns cover the full range.

Architecture 1 — Open / Public (Basic)

No AMPLS. Logs travel to public Data Collection Endpoints over the internet. The workspace is open to queries from anywhere. This is the default, operational in minutes with zero network setup. Cloud service connectors (Microsoft 365, Defender, third-party) work immediately because they are server-side API/Graph pulls and are unaffected by AMPLS.
Azure Monitor Agents and Azure Arc agents handle ingestion from cloud or on-premises machines via the public network.

Simplicity: 9/10 | Security: 6/10
Good for: Dev environments, teams getting started, low-sensitivity workloads

Architecture 2 — Hybrid: Private Ingestion, Open Queries (Recommended for most)

AMPLS is in place. Ingestion is locked to PrivateOnly: logs from virtual machines travel through a Private Endpoint inside your own network, never touching a public route. On-premises or hybrid machines connect through Azure Arc over VPN or a dedicated circuit and feed into the same private pipeline. Query access stays open, so analysts can work from anywhere without needing a VPN or jumpbox to reach the Sentinel portal; the investigation workflow stays flexible, but the log ingestion path is fully ring-fenced. You can also split ingestion mode per DCE if you need some sources public and some private. This is the architecture most organisations land on as their steady state.

Simplicity: 6/10 | Security: 8/10
Good for: Organisations with mixed cloud and on-premises estates that need private ingestion without restricting analyst access

Architecture 3 — Fully Private (Maximum Control)

Infrastructure is essentially identical to Architecture 2: AMPLS, Private Endpoints, Private DNS zones, VPN or dedicated circuit, Azure Arc for on-premises machines. The single difference is that query mode is also set to PrivateOnly. Analysts can only reach Sentinel from inside the private network; a VPN or jumpbox is required to access the portal. Both the pipe that carries logs in and the channel analysts use to read them are fully contained within the defined boundary. This is the right choice when your organisation needs to demonstrate, not just claim, that security data never moves outside a defined network perimeter.
Simplicity: 2/10 | Security: 10/10
Good for: Organisations with strict data boundary requirements (regulated industries, audit, compliance mandates)

Quick Reference — Which Pattern Fits?
- Getting started / low-sensitivity workloads → Architecture 1 (no network setup; public endpoints accepted)
- Private log ingestion, analysts work anywhere → Architecture 2 (AMPLS PrivateOnly ingestion, query mode open)
- Both ingestion and queries must be fully private → Architecture 3 (same as Architecture 2, plus query mode set to PrivateOnly)

One thing all three share: Microsoft 365, Entra ID, and Defender connectors work in every pattern; they are server-side pulls by Sentinel and are not affected by your network posture.

Please feel free to reach out if you have any questions regarding the information provided.

Sentinel RBAC in the Unified portal: who has activated Unified RBAC, and how did it go?
Following the RSAC 2026 announcements last month, I have been working through the full permission picture for the Unified portal and wanted to open a discussion here given how much has shifted in a short period.

A quick framing of where things stand. The baseline is still that Azure RBAC carries across for Sentinel SIEM access when you onboard, no changes required. But there are now two significant additions in public preview: Unified RBAC for Sentinel SIEM itself (extending the Defender Unified RBAC model to cover Sentinel directly), and a new Defender-native GDAP model for non-CSP organisations managing delegated access across tenants.

The GDAP piece in particular is worth discussing carefully, because I want to be precise about what has and has not changed. The existing limitation from Microsoft's onboarding documentation, that GDAP with Azure Lighthouse is not supported for Sentinel data in the Defender portal, has not changed. What is new is a separate, Defender-portal-native GDAP mechanism announced at RSAC, which is a different thing. These are not the same capability. If you were using Entra B2B as the interim path based on earlier guidance, that guidance was correct and that path remains the generally available option today.

A few things I would genuinely like to hear from practitioners:
- For those who have activated Unified RBAC for a Sentinel workspace in the Defender portal: what did the migration from Azure RBAC roles look like in practice? Did the import function bring roles across cleanly, or did you find gaps, particularly around custom roles?
- For environments using Playbook Operator, Automation Contributor, or Workbook Contributor role assignments: how are you handling the fact that those three roles are not yet in Unified RBAC and still require Azure portal management? Is the dual-management posture creating operational friction?
- For MSSPs evaluating the new Defender-native GDAP model against their existing Entra B2B setup: what factors are driving the decision either way at your scale?

I am writing this up as Part 3 of the migration series, and the community experience here is directly useful for making sure the practitioner angle is grounded.

Understand New Sentinel Pricing Model with Sentinel Data Lake Tier
Introduction to Sentinel and Its New Pricing Model

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (Log Analytics workspace), which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a new dual-tier pricing model consisting of the Analytics tier and the Data Lake tier.

The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands: analytics for high-value detections and investigations, and data lake for large or archival types, allowing organizations to significantly lower cost while still retaining all their security data for analytics, compliance, and hunting.

The following flow diagram depicts the new Sentinel pricing model.

Now let's understand this new pricing model through the scenarios below:
- Scenario 1A (Pay-As-You-Go)
- Scenario 1B (Usage Commitment)
- Scenario 2 (Data Lake Tier Only)

Scenario 1A (Pay-As-You-Go)

Requirement: Suppose you need to ingest 10 GB of data per day, and you must retain that data for 2 years. However, you will only frequently use, query, and analyze the data for the first 6 months.

Solution: To optimize cost, you can ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. After that period, the remaining 18 months of retention can be shifted to the Data Lake tier, which provides low-cost storage for compliance and auditing needs.
However, you will be charged separately for Data Lake tier querying and analytics, which is depicted as Compute (D) in the pricing flow diagram.

Pricing Flow / Notes
- The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan.
- All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost.
- For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity.
- For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.

Azure Pricing Calculator Equivalent

Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period: although the Analytics tier retention is set to 6 months, the first 3 months of retention fall under the free retention limit, so retention charges apply only for the remaining 3 months of the analytics retention window. The Azure pricing calculator will adjust accordingly.

Scenario 1B (Usage Commitment)

Now, suppose you are ingesting 100 GB per day. If you follow the same pay-as-you-go pricing model described above, your estimated cost would be approximately $15,204 per month. However, you can reduce this cost by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. Note that the discount applies only to Analytics tier ingestion; it does not apply to Analytics tier retention costs or to any Data Lake tier-related charges. Please refer to the pricing flow and the equivalent pricing calculator results shown below.

Monthly cost savings: $15,204 – $11,184 = $4,020 per month

Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the pay-as-you-go rate? No. The entire 150 GB/day will still be billed at the discounted rate associated with the 100 GB/day commitment tier.
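The commitment-tier behavior above (every ingested GB billed at the committed tier's discounted rate, including usage beyond the commitment) can be sketched as a small function. The per-GB rates below are illustrative assumptions chosen only to roughly echo the article's example totals; real Sentinel prices vary by region and change over time, so use the Azure pricing calculator for actual figures.

```python
# Illustrative per-GB rates only -- NOT real Sentinel prices.
PAYG_RATE = 5.00          # assumed pay-as-you-go $/GB
COMMITMENT_TIERS = {      # commitment GB/day -> assumed discounted $/GB
    100: 3.68,
    200: 3.50,
}

def monthly_cost(gb_per_day, commitment=None, days=30.42):
    """Commitment pricing as described above: all usage is billed at the
    committed tier's discounted rate, even ingestion beyond the commitment."""
    rate = COMMITMENT_TIERS[commitment] if commitment else PAYG_RATE
    return round(gb_per_day * days * rate, 2)
```

With these assumed rates, ingesting 150 GB/day on a 100 GB/day commitment bills all 150 GB at the discounted rate, which is why the overage never reverts to pay-as-you-go pricing.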
Azure Pricing Calculator Equivalent (100 GB/day)

Azure Pricing Calculator Equivalent (150 GB/day)

Scenario 2 (Data Lake Tier Only)

Requirement: Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years as per your organization's compliance or forensic policies.

Solution: Since these logs are not actively analyzed, you should avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.

Pricing Flow
- Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period.
- If, at any point in the future, you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage.
- Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.

Realized Savings
- Scenario 1 (10 GB/day in the Analytics tier): $1,520.40 per month
- Scenario 2 (10 GB/day directly into the Data Lake tier): $202.20 per month without compute; $257.20 per month with a sample compute price

Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month

Azure Pricing Calculator Equivalent (without compute)

Azure Pricing Calculator Equivalent (with sample compute)

Conclusion

The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used.
High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs, such as audit, compliance, or long-term retention data, can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs.

Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data, and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios (active investigation, archival storage, compliance retention, or large-scale telemetry ingestion) to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.

Introducing the New Microsoft Sentinel Logstash Output Plugin (Public Preview)
Many organizations rely on Logstash as a flexible, trusted data pipeline for collecting, transforming, and forwarding logs from on-premises and hybrid environments. Microsoft Sentinel has long supported a Logstash output plugin, enabling customers to send data directly into Sentinel as part of their existing pipelines. The original plugin was implemented in Ruby, and while it has served its purpose, it no longer meets Microsoft's Secure Future Initiative (SFI) standards and has limited engineering support. To address both security and sustainability, we have rebuilt the plugin from the ground up in Java, a language that is more secure, better supported across Microsoft, and aligned with long-term platform investments. To ensure a seamless transition, the new implementation is still packaged and distributed as a standard Logstash Ruby gem. This means the installation and usage experience remains unchanged for customers, while benefiting from a more secure and maintainable foundation.

What's New in This Version

Java-based and SFI-compliant: Same Logstash plugin experience, now rebuilt on a stronger foundation. The new implementation is fully Java-based, aligning with Microsoft's Secure Future Initiative (SFI) and providing improved security, supportability, and long-term maintainability.

Modern, DCR-based ingestion: The plugin now uses the Azure Monitor Logs Ingestion API with Data Collection Rules (DCRs), replacing the legacy HTTP Data Collection API (for more info, see Migrate from the HTTP Data Collector API to the Log Ingestion API - Azure Monitor | Microsoft Learn). This gives customers full schema control, enables custom log tables, and supports ingestion into standard Microsoft Sentinel tables as well as Microsoft Sentinel data lake.
Flexible authentication options: Authentication is automatically determined based on your configuration, with support for:
- Client secret (app registration / service principal)
- Managed identity, eliminating the need to store credentials in configuration files

Sovereign cloud support: The plugin supports Azure sovereign clouds, including Azure US Government, Azure China, and Azure Germany.

Standard Logstash distribution model: The plugin is published on RubyGems.org, the standard distribution channel for Logstash plugins, and can be installed directly using the Logstash plugin manager, with no change to your existing installation workflow.

What the Plugin Does

The Logstash plugin operates as a three-stage data pipeline: Input → Filter → Output.
- Input: You control how data enters the pipeline, using sources such as syslog, Filebeat, Kafka, Event Hubs, databases (via JDBC), files, and more.
- Filter: You enrich and transform events using Logstash's powerful filtering ecosystem, including plugins like grok, mutate, and json, shaping data to match your security and operational needs.
- Output: This is where Microsoft comes in. The Microsoft Sentinel Logstash Output Plugin securely sends your processed events to an Azure Monitor Data Collection Endpoint, where they are ingested into Sentinel via a Data Collection Rule (DCR).

With this model, you retain full control over your Logstash pipeline and data processing logic, while the Sentinel plugin provides a secure, reliable path to ingest data into Microsoft Sentinel.

Getting Started

Prerequisites:
- Logstash installed and running
- An Azure Monitor Data Collection Endpoint (DCE) and Data Collection Rule (DCR) in your subscription
- Contributor role on your Log Analytics workspace

Who Is This For?

Organizations that already have Logstash pipelines, need to collect from on-premises or legacy systems, and operate in distributed or hybrid environments, including air-gapped networks.
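Under the hood, the plugin's output stage pushes batched events to a Data Collection Endpoint through the Azure Monitor Logs Ingestion API. The sketch below illustrates that same ingestion path using the Python SDK (azure-monitor-ingestion) rather than the Java plugin itself; the endpoint URL, DCR immutable ID, stream name, and column names are all placeholders you would replace with values from your own DCE and DCR.

```python
def build_events(raw_lines):
    """Shape raw (timestamp, message) pairs into records matching the stream
    schema declared in the DCR. Column names here are illustrative."""
    return [{"TimeGenerated": ts, "RawData": line} for ts, line in raw_lines]

def send(events):
    # Lazy imports keep the pure helper above usable without the Azure SDKs
    # (pip install azure-identity azure-monitor-ingestion).
    from azure.identity import DefaultAzureCredential
    from azure.monitor.ingestion import LogsIngestionClient

    client = LogsIngestionClient(
        endpoint="https://<my-dce>.ingest.monitor.azure.com",  # placeholder DCE
        credential=DefaultAzureCredential(),  # service principal or managed identity
    )
    client.upload(
        rule_id="<dcr-immutable-id>",      # from the Data Collection Rule
        stream_name="Custom-MyTable_CL",   # stream declared in the DCR
        logs=events,
    )

events = build_events([("2026-04-01T00:00:00Z", "login failed for admin")])
```

The credential object mirrors the plugin's authentication options: DefaultAzureCredential resolves to a service principal's client secret or a managed identity depending on the environment, so no credentials need to live in the pipeline configuration.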
To learn more, see microsoft-sentinel-log-analytics-logstash-output-plugin on RubyGems.org.

Enforce Cost Limits on KQL Queries and Notebooks in the Microsoft Sentinel Data Lake
Security teams face a constant tension: run the advanced analytics you need to stay ahead of threats, or hold back to keep costs predictable. Until now, Microsoft Sentinel let you set alerts to get notified when data lake usage approached a threshold; useful for awareness, but not enough to prevent budget overruns. Today, we're excited to announce threshold enforcement for KQL queries and notebooks in the Microsoft Sentinel data lake. With this release, you can go beyond notifications and automatically block new queries and jobs when your configured usage limits are exceeded. Your analysts keep working confidently, and your budgets stay protected.

What's new

Previously, the Configure Policies experience in Microsoft Sentinel let you set threshold-based alerts for data lake usage. You'd receive an email notification when consumption approached a limit, but nothing stopped usage from continuing past that point. Now, you can enable enforcement on those same policies. When enforcement is turned on and a threshold is exceeded, Microsoft Sentinel blocks new queries, jobs, and notebook sessions with a clear "Limit exceeded" error. No more surprise cost spikes from runaway queries or analysts who mistakenly run heavy workloads against data lake data.

Enforcement is supported for two data lake capability categories:
- Data Lake Query: interactive KQL queries and KQL jobs (scheduled and ad hoc)
- Advanced Data Insights: notebook runs and notebook jobs

How it works

Consistent controls across KQL queries and notebooks

Cost controls are enforced consistently across Sentinel data lake workloads, regardless of how analysts access the data. The same policy applies whether someone is running a quick investigation or executing a long-running job.
Controls apply to:
- Interactive KQL queries in the data lake explorer in the Defender portal
- KQL jobs, including scheduled and ad-hoc jobs
- Notebook queries run through the Microsoft Sentinel VS Code extension
- Notebook jobs running as background or scheduled workloads

This ensures advanced analytics remain powerful, but predictable and governed.

Clear enforcement without disruption

Enforcement is applied at execution and validation boundaries, not retroactively. This means:
- Queries or jobs already running are not interrupted. In-flight work completes normally.
- New queries, jobs, or notebook sessions are blocked once limits are exceeded.
- Failures occur early (for example, during validation), avoiding wasted compute.

From an analyst's perspective, enforcement is explicit and consistent. Clear messaging appears in query editors, job validation responses, and notebooks when limits are reached, so your team always understands what happened and what to do next.

How to set it up

Prerequisites: To configure enforcement policies, ensure you have the necessary permissions outlined here: Manage and monitor costs for Microsoft Sentinel | Microsoft Learn.

Where to access: Navigate to Microsoft Sentinel > Cost management > Configure Policies in the Microsoft Defender portal (https://security.microsoft.com).

Step-by-step configuration:
1. In Microsoft Sentinel > Cost management, select Configure Policies.
2. Select the policy you want to edit (Data Lake Query or Advanced Data Insights).
3. Enter the total threshold value for the policy.
4. Enter an alert percentage to receive email notifications before the threshold is reached.
5. Enable the Enforcement toggle to block usage after the threshold is exceeded.
6. Review your settings and select Submit.

Once enforcement is active, administrators receive advance notifications as usage approaches the threshold.
If circumstances change — for example, during an active breach — you can adjust the threshold, disable enforcement temporarily, or modify the policy to give your SOC the room it needs to respond without being blocked.

Real-world scenario: Preventing unexpected cost spikes

Consider a large SOC that ingests roughly 6 TB of data per day, with 1 TB going to the Sentinel Analytics tier and the remaining 5 TB going to the Sentinel data lake. Analysts are proactively hunting for threats, performing investigations, and running automation. Tier 3 analysts are also running Jupyter Notebooks against the Sentinel data lake to build graphs, execute queries, and automate incident investigation and remediation with code.

Last month, the SOC experienced a cost spike after a newly hired analyst ran large, frequent queries against data lake data — mistakenly thinking it was the Analytics tier. The SOC manager needs to prevent this from happening again.

With enforcement now available, the SOC manager can navigate to Microsoft Sentinel > Cost management > Configure Policies in the Defender portal and set up two policies:

- A Data Lake Query policy to cap data processing for KQL queries
- An Advanced Data Insights policy to cap notebook compute consumption

With these policies in place, the SOC manager is notified in advance when consumption approaches the threshold, with confidence that the thresholds will be enforced to prevent unexpected consumption and cost. Analysts can continue their day-to-day work without worrying about accidental overages.

Should a breach scenario demand more capacity, the SOC manager can quickly adjust or temporarily disable the policies — keeping the team unblocked while maintaining overall budget governance. Outside of a breach scenario, if the same analyst again generates a large volume of scanned data, enforcement takes effect and blocks further queries before they run.
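As a back-of-the-envelope sizing exercise for the scenario above, a manager might derive a monthly threshold from expected analyst activity. The ingestion figures come from the scenario; the scan-volume and headroom multipliers are assumptions for illustration only, not documented billing rates.

```python
# Scenario numbers from the article.
DAILY_LAKE_TB = 5.0          # data landing in the data lake tier each day
DAYS_PER_MONTH = 30

# Assumptions for illustration only.
TYPICAL_DAILY_SCAN_TB = 2.0  # data analysts typically scan per day while hunting
HEADROOM = 1.5               # allowance above typical usage before blocking

monthly_lake_growth_tb = DAILY_LAKE_TB * DAYS_PER_MONTH
typical_monthly_scan_tb = TYPICAL_DAILY_SCAN_TB * DAYS_PER_MONTH
threshold_tb = typical_monthly_scan_tb * HEADROOM
alert_at_tb = threshold_tb * 0.8   # alert at 80% so the manager hears first

print(f"Lake growth this month: {monthly_lake_growth_tb:.0f} TB")
print(f"Typical scan volume:    {typical_monthly_scan_tb:.0f} TB")
print(f"Enforcement threshold:  {threshold_tb:.0f} TB")
print(f"Alert notification at:  {alert_at_tb:.0f} TB")
```

The point of the exercise is simply that the threshold should track expected query volume, not ingestion volume — analysts scan far less than the lake stores.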
Learn more

With enforceable KQL and notebook guardrails, Microsoft Sentinel data lake helps security teams scale advanced analytics with confidence. You can control usage in production and keep investigations moving — without tradeoffs between visibility, analytics, and budget.

To get started, visit the documentation: Manage and monitor costs for Microsoft Sentinel | Microsoft Learn

We'd love to hear your feedback. Share your thoughts in the comments below or reach out through your usual Microsoft support channels.

Microsoft Sentinel MCP Entity Analyzer: Explainable risk analysis for URLs and identities
What makes this release important is not just that it adds another AI feature to Sentinel. It changes the implementation model for enrichment and triage. Instead of building and maintaining a chain of custom playbooks, KQL lookups, threat intel checks, and entity correlation logic, SOC teams can call a single analyzer that returns a reasoned verdict and supporting evidence. Microsoft positions the analyzer as available through Sentinel MCP server connections for agent platforms and through Logic Apps for SOAR workflows, which makes it useful both for interactive investigations and for automated response pipelines.

Why this matters

First, it formalizes Entity Analyzer as a production feature rather than a preview experiment. Second, it introduces a real cost model, which means organizations now need to govern usage instead of treating it as a free enrichment helper. Third, Microsoft's documentation is now detailed enough to support repeatable implementation patterns, including prerequisites, limits, required tables, Logic Apps deployment, and cost behavior.

From a SOC engineering perspective, Entity Analyzer is interesting because it focuses on explainability. Microsoft describes the feature as generating clear, explainable verdicts for URLs and user identities by analyzing multiple modalities, including threat intelligence, prevalence, and organizational context. That is a much stronger operational model than simple point enrichment because it aims to return an assessment that analysts can act on, not just more raw evidence.

What Entity Analyzer actually does

The Entity Analyzer tools are described as AI-powered tools that analyze data in the Microsoft Sentinel data lake and provide a verdict plus detailed insights on URLs, domains, and user entities. Microsoft explicitly says these tools help eliminate the need for manual data collection and complex integrations usually required for investigation and enrichment. That positioning is important.
In practice, many SOC teams have built enrichment playbooks that fetch sign-in history, query TI feeds, inspect click data, read watchlists, and collect relevant alerts. Those workflows work, but they create maintenance overhead and produce inconsistent analyst experiences. Entity Analyzer centralizes that reasoning layer.

For user entities, Microsoft's preview architecture explains that the analyzer retrieves sign-in logs, security alerts, behavior analytics, cloud app events, identity information, and Microsoft Threat Intelligence, then correlates those signals and applies AI-based reasoning to produce a verdict. Microsoft lists verdict examples such as Compromised, Suspicious activity found, and No evidence of compromise, and also warns that AI-generated content may be incorrect and should be checked for accuracy.

That warning matters. The right way to think about Entity Analyzer is not "automatic truth," but "high-value, explainable triage acceleration." It should reduce analyst effort and improve consistency, while still fitting into human review and response policy.

Under the hood: the implementation model

Technically, Entity Analyzer is delivered through the Microsoft Sentinel MCP data exploration tool collection. Microsoft documents that entity analysis is asynchronous: you start analysis, receive an identifier, and then poll for results. The docs note that analysis may take a few minutes and that the retrieval step may need to be run more than once if the internal timeout is not enough for long operations.

That design has two immediate implications for implementers. First, this is not a lightweight synchronous enrichment call you should drop carelessly into every automation branch. Second, any production workflow should include retry logic, timeouts, and concurrency controls. If you ignore that, you will create fragile playbooks and unnecessary SCU burn.
The supported access path for the data exploration collection requires Microsoft Sentinel data lake and one of the supported MCP-capable platforms. Microsoft also states that access to the tools is supported for identities with at least Security Administrator, Security Operator, or Security Reader. The data exploration collection is hosted at the Sentinel MCP endpoint, and the same documentation notes additional Entity Analyzer roles related to Security Copilot usage.

The prerequisite many teams will miss

The most important prerequisite is easy to overlook: Microsoft Sentinel data lake is required. This is more than a licensing footnote. It directly affects data quality, analyzer usefulness, and rollout success. If your organization has not onboarded the right tables into the data lake, Entity Analyzer will either fail or return reduced-confidence output.

For user analysis, the following tables are required to ensure accuracy: AlertEvidence, SigninLogs, CloudAppEvents, and IdentityInfo. Microsoft also notes that IdentityInfo depends on Defender for Identity, Defender for Cloud Apps, or Defender for Endpoint P2 licensing. The analyzer works best with AADNonInteractiveUserSignInLogs and BehaviorAnalytics as well.

For URL analysis, the analyzer works best with EmailUrlInfo, UrlClickEvents, ThreatIntelIndicators, Watchlist, and DeviceNetworkEvents. If those tables are missing, the analyzer returns a disclaimer identifying the missing sources.

A practical architecture view

1. An incident, hunting workflow, or analyst identifies a high-interest URL or user.
2. A Sentinel MCP client or Logic App calls Entity Analyzer.
3. Entity Analyzer queries relevant Sentinel data lake sources and correlates the findings.
4. AI reasoning produces a verdict, evidence narrative, and recommendations.
5. The result is returned to the analyst, incident record, or automation workflow for next-step action.
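Before enabling the analyzer broadly, it is worth a preflight check that the source tables listed above are actually onboarded. A simple local checklist might look like this — you would feed it the table inventory from your own workspace, however you obtain that list:

```python
# Table sets taken from the documentation summarized in this article.
REQUIRED_USER_TABLES = {"AlertEvidence", "SigninLogs", "CloudAppEvents", "IdentityInfo"}
RECOMMENDED_USER_TABLES = {"AADNonInteractiveUserSignInLogs", "BehaviorAnalytics"}
RECOMMENDED_URL_TABLES = {"EmailUrlInfo", "UrlClickEvents", "ThreatIntelIndicators",
                          "Watchlist", "DeviceNetworkEvents"}

def preflight(onboarded_tables):
    """Report which Entity Analyzer source tables are missing from the lake."""
    onboarded = set(onboarded_tables)
    return {
        "missing_required_user": sorted(REQUIRED_USER_TABLES - onboarded),
        "missing_recommended_user": sorted(RECOMMENDED_USER_TABLES - onboarded),
        "missing_recommended_url": sorted(RECOMMENDED_URL_TABLES - onboarded),
    }

report = preflight(["SigninLogs", "IdentityInfo", "UrlClickEvents",
                    "ThreatIntelIndicators", "AlertEvidence"])
print(report["missing_required_user"])
```

If `missing_required_user` is non-empty, fix onboarding first — per the documentation, user analysis accuracy depends on those tables being present.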
This model is especially valuable because it collapses a multi-query, multi-tool investigation pattern into a single explainable decisioning step.

Where it fits in real Sentinel operations

Entity Analyzer is not a replacement for analytics rules, UEBA, or threat intelligence. It is a force multiplier for them. For identity triage, it fits naturally after incidents triggered by sign-in anomaly detections, UEBA signals, or Defender alerts because it already consumes sign-in logs, cloud app events, and behavior analytics as core evidence sources. For URL triage, it complements phishing and click-investigation workflows because it uses TI, URL activity, watchlists, and device/network context.

Implementation path 1: MCP clients and security agents

Microsoft states that Entity Analyzer integrates with agents through Sentinel MCP server connections to first-party and third-party AI runtime platforms. In practice, this makes it attractive for analyst copilots, engineering-side investigation agents, and guided triage experiences.

The benefit of this model is speed. A security engineer or analyst can invoke the analyzer directly from an MCP-capable client without building a custom orchestration layer. The tradeoff is governance: once you make the tool widely accessible, you need a clear policy for who can run it, when it should be used, and how results are validated before action is taken.

Implementation path 2: Logic Apps and SOAR playbooks

For SOC teams, Logic Apps is likely the most immediately useful deployment model. Microsoft documents an entity analyzer action inside the Microsoft Sentinel MCP tools connector and provides the required parameters for adding it to an existing logic app.
These include:

- Workspace ID
- Look Back Days
- Properties payload for either URL or User

The documented payloads are straightforward:

{ "entityType": "Url", "url": "[URL]" }

and

{ "entityType": "User", "userId": "[Microsoft Entra object ID or User Principal Name]" }

The documentation also states that the connector supports Microsoft Entra ID, service principals, and managed identities, and that the Logic App identity requires Security Reader to operate. This makes playbook integration a strong pattern for incident enrichment. A high-severity incident can trigger a playbook, extract entities, invoke Entity Analyzer, and post the verdict back to the incident as a comment or decision artifact.

The concurrency lesson most people will learn the hard way

Microsoft gives unusually direct guidance on concurrency: to avoid timeouts and threshold issues, turn on Concurrency control in Logic Apps loops and start with a degree of parallelism of 5. The data exploration doc repeats the same guidance, stating that running multiple instances at once can increase latency and recommending starting with a maximum of five concurrent analyses.

This is a strong indicator that the correct implementation pattern is selective analysis, not blanket analysis. Do not analyze every entity in every incident. Analyze the entities that matter most:

- external URLs in phishing or delivery chains
- accounts tied to high-confidence alerts
- entities associated with high-severity or high-impact incidents
- suspicious users with multiple correlated signals

That keeps latency, quota pressure, and SCU consumption under control.

KQL still matters

Entity Analyzer does not eliminate KQL. It changes where KQL adds value. Before running the analyzer, KQL is still useful for scoping and selecting the right entities. After the analyzer returns, KQL is useful for validation, deeper hunting, and building custom evidence views around the analyzer's verdict.
For example, a simple sign-in baseline for a target user (the UPN below is a placeholder):

let TargetUpn = "user@contoso.com";
SigninLogs
| where TimeGenerated between (ago(7d) .. now())
| where UserPrincipalName == TargetUpn
| summarize
    Total = count(),
    Failures = countif(ResultType != "0"),
    Successes = countif(ResultType == "0"),
    DistinctIPs = dcount(IPAddress),
    Apps = make_set(AppDisplayName, 20)
    by bin(TimeGenerated, 1d)
| order by TimeGenerated desc

And a lightweight URL prevalence check:

let TargetUrl = "omicron-obl.com";
UrlClickEvents
| where TimeGenerated between (ago(7d) .. now())
| where Url has TargetUrl
| take 50

Cost, billing, and governance

GA is where technical excitement meets budget reality. Microsoft's Sentinel billing documentation says there is no extra cost for the MCP server interface itself. However, for Entity Analyzer, customers are charged for the SCUs used for AI reasoning and also for the KQL queries executed against the Microsoft Sentinel data lake. Microsoft further states that existing Security Copilot entitlements apply.

The April 2026 "What's new" entry also explicitly says that starting April 1, 2026, customers are charged for the SCUs required when using Entity Analyzer. That means every rollout should include a governance plan:

- define who can invoke the analyzer
- decide when playbooks are allowed to call it
- monitor SCU consumption
- limit unnecessary repeat runs
- preserve results in incident records so you do not rerun the same analysis within a short period

Microsoft's MCP billing documentation also defines service limits: 200 total runs per hour, 500 total runs per day, and around 15 concurrent runs every five minutes, with analysis results available for one hour. Those are not just product limits. They are design requirements.

Limitations you should state clearly

The analyze_user_entity tool supports a maximum time window of seven days and only works for users with a Microsoft Entra object ID.
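Since results are retained for only an hour and runs are capped per hour and per day, a thin caching layer in front of the analyzer avoids burning quota and SCUs on repeat lookups of the same entity. A sketch built around those documented limits — the analyzer call itself is again a placeholder:

```python
import time

RESULT_TTL_S = 3600      # documented: analysis results available for one hour
HOURLY_RUN_LIMIT = 200   # documented: 200 total runs per hour

class AnalyzerCache:
    """Reuse verdicts for recently analyzed entities instead of rerunning."""

    def __init__(self, run_analysis, clock=time.monotonic):
        self._run = run_analysis
        self._clock = clock
        self._cache = {}      # entity key -> (timestamp, result)
        self._run_times = []  # timestamps of real runs, for budgeting

    def analyze(self, key, payload):
        now = self._clock()
        hit = self._cache.get(key)
        if hit and now - hit[0] < RESULT_TTL_S:
            return hit[1]                       # fresh enough: no new run
        self._run_times = [t for t in self._run_times if now - t < 3600]
        if len(self._run_times) >= HOURLY_RUN_LIMIT:
            raise RuntimeError("Hourly analyzer run budget exhausted")
        result = self._run(payload)
        self._cache[key] = (now, result)
        self._run_times.append(now)
        return result

calls = {"n": 0}
def stub(payload):
    calls["n"] += 1
    return {"verdict": "Suspicious activity found"}

cache = AnalyzerCache(stub)
cache.analyze("url:omicron-obl.com", {"entityType": "Url", "url": "omicron-obl.com"})
cache.analyze("url:omicron-obl.com", {"entityType": "Url", "url": "omicron-obl.com"})
print(calls["n"])   # second lookup served from cache
```

Persisting the cached verdict into the incident record (as the governance list suggests) gives analysts the result even after the one-hour service-side retention expires.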
On-premises Active Directory-only users are not supported for user analysis. Microsoft also says Entity Analyzer results expire after one hour and that the tool collection currently supports English prompts only.

Recommended rollout pattern

If I were implementing this in a production SOC, I would phase it like this:

1. Start with a narrow set of high-value use cases, such as suspicious user identities and phishing-related URLs.
2. Confirm that the required tables are present in the data lake.
3. Deploy a Logic App enrichment pattern for incident-triggered analysis.
4. Add concurrency control and retry logic.
5. Persist returned verdicts into incident comments or case notes.
6. Then review SCU usage and analyst value before expanding coverage.

Observed Automation Discrepancies
Hi Team ... I want to know the logic behind the Defender XDR Automation Engine. How does it work?

I have observed Defender XDR Automation Engine behavior contrary to expectations: although both environments were expected to handle incidents and automation identically, discrepancies were observed. Specifically, incidents with high-severity alerts were automatically closed by Defender XDR's automation engine before reaching our SOC for review, raising concerns among clients and colleagues.

Actions taken by our own automation rules are clearly logged in the activity log, whereas actions performed by Microsoft Defender XDR itself are less transparent. In one case, a high-severity alert related to a phishing incident was closed by Defender XDR's automation, resulting in the associated incident being closed and removed from SOC review. The automation was not triggered by our own rules, but by Microsoft's Defender XDR, and we are seeking clarification on the underlying logic.

How Should a Fresher Learn Microsoft Sentinel Properly?
Hello everyone, I am a fresher interested in learning Microsoft Sentinel and preparing for SOC roles. Since Sentinel is a cloud-native enterprise tool usually used inside organizations, I am unsure how individuals without company access are expected to gain real hands-on experience.

I would like to hear from professionals who actively use Sentinel:

- How do freshers typically learn and practice Sentinel?
- What learning resources or environments are commonly used by beginners?
- What level of hands-on experience is realistically expected at entry level?

I am looking for guidance based on real industry practice. Thank you for your time.