Microsoft Sentinel

How Granular Delegated Admin Privileges (GDAP) allows Sentinel customers to delegate access
Simplifying Defender SIEM and XDR delegated access

As Microsoft Sentinel and Defender converge into a unified experience, organizations face a fundamental challenge: the lack of a scalable, comprehensive, delegated access model that works seamlessly across Entra ID and Sentinel's Azure Resource Manager layer, creating a significant barrier for Managed Security Service Providers (MSSPs) and large enterprises with complex multi-tenant structures.

Extending GDAP beyond CSPs: a strategic solution

In response to these challenges, we have developed an extension to GDAP that makes it available to all Sentinel and Defender customers, including non-CSP organizations. This expansion enables both MSSPs and customers with multi-tenant organizational structures to establish secure, granular delegated access relationships directly through the Microsoft Defender portal. This is now available in public preview.

The GDAP extension aligns with zero-trust security principles through a three-way handshake model requiring explicit mutual consent between governing and governed tenants before any relationship is established. This consent-based approach enhances transparency and accountability, reducing the risks associated with broad, uncontrolled permissions. By integrating with Microsoft Defender, GDAP enables advanced threat detection and response capabilities across tenant boundaries while maintaining granular permission management through Entra ID roles and Unified RBAC custom permissions.

Delivering unified management of delegated access across SIEM and XDR

With GDAP, customers gain a truly unified way to manage access across both Microsoft Sentinel and Defender, using a single, consistent delegated access model for SIEM and XDR. For Sentinel customers, this brings parity with the Azure portal experience: where delegated access was previously managed through Azure Lighthouse, it can now be handled directly in the Defender portal using GDAP.
More importantly, for organizations running SIEM and XDR together, GDAP eliminates the need to switch between portals, allowing teams to view, manage, and govern security access from one centralized experience. The result is simpler administration, reduced operational friction, and a more cohesive way to secure multi-tenant environments at scale.

How GDAP for non-CSPs works: the three-step handshake

The GDAP handshake model implements a security-first approach through three distinct steps, each requiring explicit approval to prevent unauthorized access. Step 1 begins with the governed tenant initiating the relationship, allowing the governing tenant to request GDAP access. Step 2 shifts control to the governing tenant, which creates and sends a delegated access request with specific requested permissions through the multi-tenant organization (MTO) portal. Step 3 returns to the governed tenant for final approval. This approach provides customers with complete visibility and control over who can access their security data and with what permissions, while giving MSSPs a streamlined, Microsoft-supported mechanism for managing delegated relationships at scale.

After the handshake, Step 4 assigns Sentinel permissions: in Azure resource management, grant the governing tenant's groups permissions on Sentinel workspaces (in the governed tenant), selecting the governing tenant's security groups used in the created relationship.

Learn more here: Configure delegated access with governance relationships for multitenant organizations - Unified se…

Clarification on AADSignInEventsBeta vs. IdentityLogonEvents Logs
Hey everyone, I've been reading up on the AADSignInEventsBeta table and got a bit confused. From what I understand, the AADSignInEventsBeta table is in beta and is only available for those with a Microsoft Entra ID P2 license. The idea is that the sign-in schema will eventually move over to the IdentityLogonEvents table. What I'm unsure about is whether the data from the AADSignInEventsBeta table has already been migrated to the IdentityLogonEvents table, or if they're still separate for now. Can anyone clarify this for me? Thanks in advance for your help!

Enforce Cost Limits on KQL Queries and Notebooks in the Microsoft Sentinel Data Lake
Security teams face a constant tension: run the advanced analytics you need to stay ahead of threats, or hold back to keep costs predictable. Until now, Microsoft Sentinel let you set alerts to get notified when data lake usage approached a threshold, which was useful for awareness but not enough to prevent budget overruns. Today, we're excited to announce threshold enforcement for KQL queries and notebooks in the Microsoft Sentinel data lake. With this release, you can go beyond notifications and automatically block new queries and jobs when your configured usage limits are exceeded. Your analysts keep working confidently, and your budgets stay protected.

What's new

Previously, the Configure Policies experience in Microsoft Sentinel let you set threshold-based alerts for data lake usage. You'd receive an email notification when consumption approached a limit, but nothing stopped usage from continuing past that point. Now, you can enable enforcement on those same policies. When enforcement is turned on and a threshold is exceeded, Microsoft Sentinel blocks new queries, jobs, and notebook sessions with a clear "Limit exceeded" error. No more surprise cost spikes from runaway queries or analysts who mistakenly run heavy workloads against data lake data.

Enforcement is supported for two data lake capability categories:

- Data Lake Query: interactive KQL queries and KQL jobs (scheduled and ad hoc)
- Advanced Data Insights: notebook runs and notebook jobs

How it works

Consistent controls across KQL queries and notebooks

Cost controls are enforced consistently across Sentinel data lake workloads, regardless of how analysts access the data. The same policy applies whether someone is running a quick investigation or executing a long-running job.
Controls apply to:

- Interactive KQL queries in the data lake explorer in the Defender portal
- KQL jobs, including scheduled and ad-hoc jobs
- Notebook queries run through the Microsoft Sentinel VS Code extension
- Notebook jobs running as background or scheduled workloads

This ensures advanced analytics remain powerful, but predictable and governed.

Clear enforcement without disruption

Enforcement is applied at execution and validation boundaries, not retroactively. This means:

- Queries or jobs already running are not interrupted. In-flight work completes normally.
- New queries, jobs, or notebook sessions are blocked once limits are exceeded.
- Failures occur early (for example, during validation), avoiding wasted compute.

From an analyst's perspective, enforcement is explicit and consistent. Clear messaging appears in query editors, job validation responses, and notebooks when limits are reached, so your team always understands what happened and what to do next.

How to set it up

Prerequisites

To configure enforcement policies, ensure you have the necessary permissions outlined here: Manage and monitor costs for Microsoft Sentinel | Microsoft Learn.

Where to access

Navigate to Microsoft Sentinel > Cost management > Configure Policies in the Microsoft Defender portal (https://security.microsoft.com).

Step-by-step configuration

1. In Microsoft Sentinel > Cost management, select Configure Policies.
2. Select the policy you want to edit (Data Lake Query or Advanced Data Insights).
3. Enter the total threshold value for the policy.
4. Enter an alert percentage to receive email notifications before the threshold is reached.
5. Enable the Enforcement toggle to block usage after the threshold is exceeded.
6. Review your settings and select Submit.

Once enforcement is active, administrators receive advance notifications as usage approaches the threshold.
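The alert-versus-enforcement behavior described above reduces to a simple policy check: notify at the alert percentage, and either block new work or merely notify at the threshold, depending on whether enforcement is enabled. The sketch below is an illustrative model only; the `UsagePolicy` shape and `evaluate_usage` function are assumptions, not Sentinel's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class UsagePolicy:
    """Hypothetical model of a data lake usage policy (illustrative only)."""
    threshold: float          # total usage cap for the policy
    alert_percentage: float   # e.g. 0.8 -> email notification at 80% of the cap
    enforcement: bool         # when True, block new work past the threshold

def evaluate_usage(policy: UsagePolicy, current_usage: float) -> str:
    """Return the action a policy engine would take for the current usage."""
    if current_usage >= policy.threshold:
        # In-flight work completes; only *new* queries/jobs/sessions are blocked.
        return "block_new_work" if policy.enforcement else "notify_only"
    if current_usage >= policy.threshold * policy.alert_percentage:
        return "send_alert_email"
    return "allow"

policy = UsagePolicy(threshold=100.0, alert_percentage=0.8, enforcement=True)
print(evaluate_usage(policy, 50.0))   # allow
print(evaluate_usage(policy, 85.0))   # send_alert_email
print(evaluate_usage(policy, 120.0))  # block_new_work
```

Note how the same policy with `enforcement=False` reproduces the old alert-only behavior, which matches the upgrade path: existing alert policies gain a toggle rather than being replaced.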
If circumstances change (for example, during an active breach), you can adjust the threshold, disable enforcement temporarily, or modify the policy to give your SOC the room it needs to respond without being blocked.

Real-world scenario: Preventing unexpected cost spikes

Consider a large SOC that ingests roughly 6 TB of data per day, with 1 TB going to the Sentinel Analytics tier and the remaining 5 TB going to the Sentinel data lake. Analysts are proactively hunting for threats, performing investigations, and running automation. Tier 3 analysts are also running Jupyter Notebooks against the Sentinel data lake to build graphs, execute queries, and automate incident investigation and remediation with code.

Last month, the SOC experienced a cost spike after a newly hired analyst ran large, frequent queries against data lake data, mistakenly thinking it was the Analytics tier. The SOC manager needs to prevent this from happening again. With enforcement now available, the SOC manager can navigate to Microsoft Sentinel > Cost management > Configure Policies in the Defender portal and set up two policies:

- A Data Lake Query policy to cap data processing for KQL queries
- An Advanced Data Insights policy to cap notebook compute consumption

With these policies in place, the SOC manager is notified in advance when consumption approaches the threshold, confident that the thresholds will be enforced to prevent unexpected consumption and cost. Analysts can continue their day-to-day work without worrying about accidental overages. Should a breach scenario demand more capacity, the SOC manager can quickly adjust or temporarily disable the policies, keeping the team unblocked while maintaining overall budget governance. Outside of a breach scenario, if the same analyst again scans large amounts of data, enforcement takes effect at the threshold and blocks further queries.
Learn more

With enforceable KQL and notebook guardrails, Microsoft Sentinel data lake helps security teams scale advanced analytics with confidence. You can control usage in production and keep investigations moving, without tradeoffs between visibility, analytics, and budget. To get started, visit the documentation: Manage and monitor costs for Microsoft Sentinel | Microsoft Learn

We'd love to hear your feedback. Share your thoughts in the comments below or reach out through your usual Microsoft support channels.

Enterprise Cybersecurity in the Age of AI: Why Legacy Security Is Failing as Attackers Move Faster
Cybersecurity has always been an asymmetric game. But with the rise of AI-enabled attacks, that imbalance has widened dramatically. Microsoft Threat Intelligence and Microsoft Defender Security Research have publicly reported a clear shift in how attackers operate: AI is now being embedded across the entire attack lifecycle. Threat actors are using it to accelerate reconnaissance, generate highly targeted phishing at scale, automate infrastructure, and adapt their techniques in real time, reducing the time and effort required to move from initial access to impact.

In recent months, Microsoft has documented AI-enabled phishing campaigns abusing legitimate authentication mechanisms, including OAuth and device-code flows, to compromise enterprise accounts at scale. These campaigns rely on automation, dynamic code generation, and highly personalised lures, rather than on stealing passwords or exploiting traditional vulnerabilities. Meanwhile, many large enterprises are still defending themselves with security controls designed for a very different threat model, one rooted in predictability, static signatures, and trusted perimeters. These approaches were built to stop repeatable attacks, not adversaries that continuously adapt and blend into normal business activity. The result is a dangerous gap: highly adaptive attackers versus static, legacy defences.

Below are some of the most common outdated security practices still widely used by enterprises today, and why they are no longer sufficient against modern, AI-driven threats.

1. Signature-Based Antivirus

Traditional antivirus solutions rely on known signatures and hashes, assuming malware looks the same each time it is deployed. AI has completely broken that assumption. Modern malware families now automatically mutate their code, generate new variants on execution, and adapt behaviour based on the environment they encounter.
Microsoft Threat Intelligence has observed multiple actors using AI-assisted tooling to rapidly rewrite payload components during development and testing, making each deployment look subtly different. In this model, there is no stable signature to detect. By the time a pattern exists, the attacker has already iterated past it. Signature-based detection is not just slow; it is structurally mismatched to how modern threats operate.

What to adopt instead

Shift from artifact-based detection to behavior-based endpoint protection:

- EDR/XDR platforms that analyse process behaviour, memory activity, and execution chains
- Machine-learning models trained on what attackers do, not what binaries look like
- Continuous monitoring with automated response, not one-time blocking

2. Firewalls

Many enterprises still rely on firewalls that enforce static allow/deny rules based on ports and IP addresses. That approach worked when applications were predictable and networks were clearly segmented. Today, traffic is encrypted, cloud-based, API-driven, and deeply intertwined with legitimate SaaS and identity services. Recent AI-assisted phishing campaigns abusing legitimate OAuth and device-code authentication flows illustrate this perfectly. From a network perspective, everything looks allowed: HTTPS traffic to trusted identity providers. There is no suspicious port, no malicious domain, no obvious anomaly, yet the attacker successfully hijacks the authentication process itself.

What to adopt instead

Move from perimeter controls to identity- and context-aware network security:

- Application-aware firewalls with behavioural and risk-based inspection
- Integration with identity signals (user, device, location, risk score)
- Continuous evaluation of sessions, not one-time allow/deny decisions

In modern environments, identity is the new control plane.

3. Single-Factor Authentication

Despite years of guidance, single-factor passwords remain common, especially for legacy applications, VPN access, and service accounts. AI-powered credential abuse changes the economics of these attacks entirely. Threat actors now operate credential-stuffing and phishing campaigns that adapt lures in real time, testing millions of combinations with minimal cost. In multiple Microsoft-observed campaigns, attackers didn't brute-force access broadly. Instead, they used AI to identify which compromised identities were financially or operationally valuable (executives, payroll, procurement) and focused only on those accounts.

What to adopt instead

Replace static authentication with phishing-resistant, risk-based identity controls:

- Phishing-resistant MFA (hardware-backed or passkeys)
- Conditional access based on user behaviour, device health, and risk
- Continuous authentication instead of a single login event

4. VPN-Centric Security

VPNs were designed to extend the corporate network to remote users, based on the assumption that "inside" meant trustworthy. That assumption no longer holds. AI-assisted attacks increasingly exploit VPN access post-compromise. Once credentials are obtained, automation is used to map internal resources, identify privilege escalation paths, and move laterally, often without triggering traditional alerts. In parallel, Microsoft has observed nation-state actors using AI to create highly convincing fake employee personas, complete with AI-generated resumes, consistent communication styles, and synthetic media, allowing them to pass hiring and onboarding processes and gain long-term, trusted access. In these scenarios, VPN access is not breached; it is granted.
What to adopt instead

Transition from network trust to Zero Trust access models:

- Identity-based access to applications, not networks
- Least-privilege, per-app/user/service access instead of broad internal connectivity
- Continuous verification using behavioural signals

In modern enterprises, access should be explicit, scoped, and continuously re-evaluated.

5. Treating Unencrypted Data as "Low-Risk"

It is still common to find sensitive data stored unencrypted in older databases, file shares, and backups. In an AI-driven threat landscape, data discovery is no longer manual or slow. After compromise, attackers increasingly use AI as an on-demand analyst: summarizing directory structures, classifying stolen datasets, and prioritizing what matters most for impact or monetization. Unencrypted data dramatically lowers the cost and consequence of breach activity, turning what could have been a limited incident into a full-scale exposure.

What to adopt instead

Shift from passive data storage to data-centric security:

- Encryption by default, both at rest and in transit
- Data classification and sensitivity labeling built into platforms
- Access controls tied to data sensitivity, not just system location
- Begin preparing for post-quantum cryptography (PQC) as part of a long-term data protection and crypto-agility strategy

6. Intrusion Detection Systems (IDS) Built on Known Patterns

Traditional IDS platforms look for known indicators of compromise, assuming attackers reuse the same tools and techniques. AI-driven attacks deliberately avoid that assumption. Microsoft Threat Intelligence reports actors using large language models to quickly analyse publicly disclosed vulnerabilities, understand exploitation paths, and compress the time between disclosure and weaponization. This isn't about zero-days; it's about speed. What once took days or weeks now takes hours. Legacy IDS platforms often fail silently in these scenarios, detecting only what they already know how to recognize.
What to adopt instead

Move from static detection to adaptive, correlation-based threat detection:

- Graph-based XDR platforms correlating signals across identity, endpoint, email, cloud, and network
- Anomaly detection that focuses on deviation from normal behaviour
- Automated investigation and response to match attacker speed

Closing Thought: Security Is a Journey, Not a Destination

AI is not a future cybersecurity problem. It is a current force multiplier for attackers, and it is exposing the limits of legacy security architectures faster than many organisations are willing to admit. A realistic security strategy starts with an uncomfortable but necessary acknowledgement: no organisation can be 100% secure. Intrusions will happen. Credentials will be compromised. Controls will be tested. The difference between a resilient enterprise and a vulnerable one is not the absence of incidents, but how effectively risk is managed when they occur.

In mature organisations, this means assuming breach and designing for containment. Strong access controls limit blast radius. Least privilege and conditional access reduce what an attacker can reach. Data Loss Prevention (DLP) ensures that even when access is misused, sensitive data cannot be freely exfiltrated. Just as importantly, leaders understand the business consequences of compromise: which data matters most, which systems are critical, and which risks are acceptable versus existential.

As a cybersecurity architect, I see this moment as a unique opportunity. AI adoption does not have to repeat the mistakes of earlier technology waves, where innovation moved fast and security followed years later. AI gives organisations the chance to introduce a new class of service while embedding security from day one, designing access, data boundaries, monitoring, and governance into the platform before it becomes business-critical.
When security is built in upfront, enterprises don't just reduce risk; they gain the confidence to move faster and truly leverage AI's value. Security, especially in the age of AI, is not about preventing every intrusion. It is about controlling impact, preserving trust, and maintaining operational continuity in a world where attackers move faster than ever. In the age of AI, standing still is the same as falling behind.

References:

- Inside an AI-enabled device code phishing campaign | Microsoft Security Blog
- AI as tradecraft: How threat actors operationalize AI | Microsoft Security Blog
- Detecting and analyzing prompt abuse in AI tools | Microsoft Security Blog
- Post-Quantum Cryptography | CSRC
- Microsoft Digital Defense Report 2025 | Microsoft

Microsoft Sentinel MCP Entity Analyzer: Explainable risk analysis for URLs and identities
What makes this release important is not just that it adds another AI feature to Sentinel. It changes the implementation model for enrichment and triage. Instead of building and maintaining a chain of custom playbooks, KQL lookups, threat intel checks, and entity correlation logic, SOC teams can call a single analyzer that returns a reasoned verdict and supporting evidence. Microsoft positions the analyzer as available through Sentinel MCP server connections for agent platforms and through Logic Apps for SOAR workflows, which makes it useful both for interactive investigations and for automated response pipelines.

Why this matters

First, it formalizes Entity Analyzer as a production feature rather than a preview experiment. Second, it introduces a real cost model, which means organizations now need to govern usage instead of treating it as a free enrichment helper. Third, Microsoft's documentation is now detailed enough to support repeatable implementation patterns, including prerequisites, limits, required tables, Logic Apps deployment, and cost behavior.

From a SOC engineering perspective, Entity Analyzer is interesting because it focuses on explainability. Microsoft describes the feature as generating clear, explainable verdicts for URLs and user identities by analyzing multiple modalities, including threat intelligence, prevalence, and organizational context. That is a much stronger operational model than simple point-enrichment because it aims to return an assessment that analysts can act on, not just more raw evidence.

What Entity Analyzer actually does

The Entity Analyzer tools are described as AI-powered tools that analyze data in the Microsoft Sentinel data lake and provide a verdict plus detailed insights on URLs, domains, and user entities. Microsoft explicitly says these tools help eliminate the need for manual data collection and complex integrations usually required for investigation and enrichment. That positioning is important.
In practice, many SOC teams have built enrichment playbooks that fetch sign-in history, query TI feeds, inspect click data, read watchlists, and collect relevant alerts. Those workflows work, but they create maintenance overhead and produce inconsistent analyst experiences. Entity Analyzer centralizes that reasoning layer.

For user entities, Microsoft's preview architecture explains that the analyzer retrieves sign-in logs, security alerts, behavior analytics, cloud app events, identity information, and Microsoft Threat Intelligence, then correlates those signals and applies AI-based reasoning to produce a verdict. Microsoft lists verdict examples such as Compromised, Suspicious activity found, and No evidence of compromise, and also warns that AI-generated content may be incorrect and should be checked for accuracy. That warning matters. The right way to think about Entity Analyzer is not "automatic truth," but "high-value, explainable triage acceleration." It should reduce analyst effort and improve consistency, while still fitting into human review and response policy.

Under the hood: the implementation model

Technically, Entity Analyzer is delivered through the Microsoft Sentinel MCP data exploration tool collection. Microsoft documents that entity analysis is asynchronous: you start analysis, receive an identifier, and then poll for results. The docs note that analysis may take a few minutes and that the retrieval step may need to be run more than once if the internal timeout is not enough for long operations.

That design has two immediate implications for implementers. First, this is not a lightweight synchronous enrichment call you should drop carelessly into every automation branch. Second, any production workflow should include retry logic, timeouts, and concurrency controls. If you ignore that, you will create fragile playbooks and unnecessary SCU burn.
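The start-then-poll pattern with retries, a timeout, and bounded parallelism can be sketched as below. This is a generic client-side harness, not the real MCP API: `start_analysis` and `fetch_result` are hypothetical stand-ins for the actual tool invocations, and the parameter values are illustrative.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def poll_for_result(start_analysis, fetch_result,
                    poll_interval_s=5.0, timeout_s=600.0, sleep=time.sleep):
    """Start an asynchronous analysis, then poll until a result or timeout.

    start_analysis() -> analysis id; fetch_result(analysis_id) -> result dict,
    or None while the analysis is still running. Both callables are
    hypothetical stand-ins for the real MCP tool calls.
    """
    analysis_id = start_analysis()
    deadline = time.monotonic() + timeout_s
    while True:
        result = fetch_result(analysis_id)  # retrieval may need several attempts
        if result is not None:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"analysis {analysis_id} timed out after {timeout_s}s")
        sleep(poll_interval_s)

def analyze_many(entities, analyze_one, max_parallel=5):
    """Analyze a selected batch with bounded parallelism (start small)."""
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return dict(zip(entities, pool.map(analyze_one, entities)))
```

The `max_parallel=5` default mirrors the conservative concurrency most async enrichment services recommend; tune it only after observing real latency, and keep the timeout generous since analysis can take minutes.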
The supported access path for the data exploration collection requires Microsoft Sentinel data lake and one of the supported MCP-capable platforms. Microsoft also states that access to the tools is supported for identities with at least Security Administrator, Security Operator, or Security Reader. The data exploration collection is hosted at the Sentinel MCP endpoint, and the same documentation notes additional Entity Analyzer roles related to Security Copilot usage.

The prerequisite many teams will miss

The most important prerequisite is easy to overlook: Microsoft Sentinel data lake is required. This is more than a licensing footnote. It directly affects data quality, analyzer usefulness, and rollout success. If your organization has not onboarded the right tables into the data lake, Entity Analyzer will either fail or return reduced-confidence output.

For user analysis, the following tables are required to ensure accuracy: AlertEvidence, SigninLogs, CloudAppEvents, and IdentityInfo. The documentation also notes that IdentityInfo depends on Defender for Identity, Defender for Cloud Apps, or Defender for Endpoint P2 licensing. The analyzer works best with AADNonInteractiveUserSignInLogs and BehaviorAnalytics as well. For URL analysis, the analyzer works best with EmailUrlInfo, UrlClickEvents, ThreatIntelIndicators, Watchlist, and DeviceNetworkEvents. If those tables are missing, the analyzer returns a disclaimer identifying the missing sources.

A practical architecture view

1. An incident, hunting workflow, or analyst identifies a high-interest URL or user.
2. A Sentinel MCP client or Logic App calls Entity Analyzer.
3. Entity Analyzer queries relevant Sentinel data lake sources and correlates the findings.
4. AI reasoning produces a verdict, evidence narrative, and recommendations.
5. The result is returned to the analyst, incident record, or automation workflow for next-step action.
This model is especially valuable because it collapses a multi-query, multi-tool investigation pattern into a single explainable decisioning step.

Where it fits in real Sentinel operations

Entity Analyzer is not a replacement for analytics rules, UEBA, or threat intelligence. It is a force multiplier for them. For identity triage, it fits naturally after incidents triggered by sign-in anomaly detections, UEBA signals, or Defender alerts because it already consumes sign-in logs, cloud app events, and behavior analytics as core evidence sources. For URL triage, it complements phishing and click-investigation workflows because it uses TI, URL activity, watchlists, and device/network context.

Implementation path 1: MCP clients and security agents

Microsoft states that Entity Analyzer integrates with agents through Sentinel MCP server connections to first-party and third-party AI runtime platforms. In practice, this makes it attractive for analyst copilots, engineering-side investigation agents, and guided triage experiences. The benefit of this model is speed. A security engineer or analyst can invoke the analyzer directly from an MCP-capable client without building a custom orchestration layer. The tradeoff is governance: once you make the tool widely accessible, you need a clear policy for who can run it, when it should be used, and how results are validated before action is taken.

Implementation path 2: Logic Apps and SOAR playbooks

For SOC teams, Logic Apps is likely the most immediately useful deployment model. Microsoft documents an entity analyzer action inside the Microsoft Sentinel MCP tools connector and provides the required parameters for adding it to an existing logic app.
These include:

- Workspace ID
- Look Back Days
- A Properties payload for either URL or User

The documented payloads are straightforward:

```json
{ "entityType": "Url", "url": "[URL]" }
```

And:

```json
{ "entityType": "User", "userId": "[Microsoft Entra object ID or User Principal Name]" }
```

The documentation also states that the connector supports Microsoft Entra ID, service principals, and managed identities, and that the Logic App identity requires Security Reader to operate. This makes playbook integration a strong pattern for incident enrichment. A high-severity incident can trigger a playbook, extract entities, invoke Entity Analyzer, and post the verdict back to the incident as a comment or decision artifact.

The concurrency lesson most people will learn the hard way

Microsoft gives unusually direct guidance on concurrency: to avoid timeouts and threshold issues, turn on Concurrency control in Logic Apps loops and start with a degree of parallelism of five. The data exploration doc repeats the same guidance, stating that running multiple instances at once can increase latency and recommending starting with a maximum of five concurrent analyses. This is a strong indicator that the correct implementation pattern is selective analysis, not blanket analysis. Do not analyze every entity in every incident. Analyze the entities that matter most:

- external URLs in phishing or delivery chains
- accounts tied to high-confidence alerts
- entities associated with high-severity or high-impact incidents
- suspicious users with multiple correlated signals

That keeps latency, quota pressure, and SCU consumption under control.

KQL still matters

Entity Analyzer does not eliminate KQL. It changes where KQL adds value. Before running the analyzer, KQL is still useful for scoping and selecting the right entities. After the analyzer returns, KQL is useful for validation, deeper hunting, and building custom evidence views around the analyzer's verdict.
For example, a simple sign-in baseline for a target user:

```kusto
// Note: the UPN below was redacted in the original post; substitute a real user.
let TargetUpn = "email address removed for privacy reasons";
SigninLogs
| where TimeGenerated between (ago(7d) .. now())
| where UserPrincipalName == TargetUpn
| summarize
    Total = count(),
    Failures = countif(ResultType != "0"),
    Successes = countif(ResultType == "0"),
    DistinctIPs = dcount(IPAddress),
    Apps = make_set(AppDisplayName, 20)
    by bin(TimeGenerated, 1d)
| order by TimeGenerated desc
```

And a lightweight URL prevalence check:

```kusto
let TargetUrl = "omicron-obl.com";
UrlClickEvents
| where TimeGenerated between (ago(7d) .. now())
| search TargetUrl
| take 50
```

Cost, billing, and governance

GA is where technical excitement meets budget reality. Microsoft's Sentinel billing documentation says there is no extra cost for the MCP server interface itself. However, for Entity Analyzer, customers are charged for the SCUs used for AI reasoning and also for the KQL queries executed against the Microsoft Sentinel data lake. Microsoft further states that existing Security Copilot entitlements apply. The April 2026 "What's new" entry also explicitly says that starting April 1, 2026, customers are charged for the SCUs required when using Entity Analyzer.

That means every rollout should include a governance plan:

- define who can invoke the analyzer
- decide when playbooks are allowed to call it
- monitor SCU consumption
- limit unnecessary repeat runs
- preserve results in incident records so you do not rerun the same analysis within a short period

Microsoft's MCP billing documentation also defines service limits: 200 total runs per hour, 500 total runs per day, and around 15 concurrent runs every five minutes, with analysis results available for one hour. Those are not just product limits. They are design requirements.

Limitations you should state clearly

The analyze_user_entity tool supports a maximum time window of seven days and only works for users with a Microsoft Entra object ID.
On-premises Active Directory-only users are not supported for user analysis. Microsoft also says Entity Analyzer results expire after one hour and that the tool collection currently supports English prompts only. Recommended rollout pattern If I were implementing this in a production SOC, I would phase it like this: Start with a narrow set of high-value use cases, such as suspicious user identities and phishing-related URLs. Confirm that the required tables are present in the data lake. Deploy a Logic App enrichment pattern for incident-triggered analysis. Add concurrency control and retry logic. Persist returned verdicts into incident comments or case notes. Then review SCU usage and analyst value before expanding coverage.
Running KQL queries on Microsoft Sentinel data lake using API
Co-Authors: Zeinab Mokhtarian Koorabbasloo and Matthew Lowe As security data lakes become the backbone of modern analytics platforms, organizations need new ways to operationalize their data. While interactive tools and portals support data exploration, many real-world workflows increasingly require flexible programmatic access that enables automation, scale, and seamless integration. By running KQL (Kusto Query Language) queries on Microsoft Sentinel data lake through APIs, you can embed analytics directly into automation workflows, background services, and intelligent agents, without relying on manual query execution. In this post, we explore API based KQL query execution, review some of the scenarios where it delivers the most value, and what you need to get started. Why run KQL queries on Sentinel data lake via API? Traditional query experiences, such as dashboards and query editors, are optimized for human interaction. APIs, on the other hand, are optimized for systems. Running KQL through an API enables: Automation-first analytics Repeatable and scheduled insights Integration with external systems and agents Consistent query execution at scale Instead of asking “How do I run this query?”, our customers are asking “How do I embed analytics into my workflow?” Scenarios where API-based KQL queries add value Automated monitoring and alerting SOC teams often want to continuously analyze data in their lake to detect anomalies, trends, or policy violations. With API-based KQL execution, they can: Run queries as part of automated workflows and playbooks Evaluate query results programmatically Trigger downstream actions such as alerts, tickets, or notifications This turns KQL into a signal engine, not just an exploration tool. Powering intelligent agents AI agents require programmatic access to data lakes to retrieve timely, relevant context for decision making. 
Using KQL over an API allows agents to: Dynamically query the data lake based on user intent or system context Retrieve aggregated or filtered results on demand Combine analytical results with reasoning and decision logic In this model, KQL acts as the analytical retrieval layer, while the agent focuses on orchestration, reasoning, and action. Embedding analytics into business workflows Many organizations want analytics embedded directly into CI/CD and operational pipelines. Instead of exporting data or duplicating logic, they can: Run KQL queries inline via API Use results as inputs to other systems Keep analytics logic centralized and consistent This reduces drift between “analytics code” and “application code.” High-level flow: What happens when you run KQL via API At a conceptual level, the flow looks like this: A client authenticates to the Microsoft Sentinel data lake platform. The client submits a KQL query via an API. The query executes against data stored in the data lake. Results are returned in a structured, machine-readable format. The client processes or acts on the results. Prerequisites To run KQL queries against the Sentinel data lake using APIs, you will need: A user token or a service principal Appropriate permissions to execute queries on the Sentinel data lake. Azure RBAC roles such as Log Analytics Reader or Log Analytics Contributor on the workspace are needed. Familiarity with KQL and API-based query execution patterns Scenario 1: Execute a KQL query via API within a Playbook The following Sentinel SOAR playbook example demonstrates how data within Sentinel data lake can be used within automation. This example leverages a service principal that will be used to query the DeviceNetworkEvents logs that are within Sentinel data lake to enrich an incident involving a device before taking action on it.
Within this playbook, the entities involved in the incident are retrieved, and queries are then executed against the Sentinel data lake to gain insights on each host involved. For this example, the API call retrieves events from the DeviceNetworkEvents table to find network connections on the host where the remote IP originated outside of the United States. As this action does not have a gallery artifact within Azure Logic Apps, it must be built using the HTTP action offered within Logic Apps. This action requires the details of the API call as well as the authentication details that will be used to run it. The step that executes the query leverages the Sentinel data lake API by performing the following call: POST https://api.securityplatform.microsoft.com/lake/kql/v2/rest/query. The service principal being used has read permissions on the Sentinel data lake that contains the relevant details and authenticates via Entra ID OAuth when running the API call. NOTE: When using API calls to query Sentinel data lake, use 4500ebfb-89b6-4b14-a480-7f749797bfcd/.default as the scope/audience when retrieving a token for the service principal. This GUID is associated with the query service for Sentinel data lake.
The body of the query is the following: { "csl": "DeviceNetworkEvents | where TimeGenerated >= ago(30d) | where DeviceName has '' | where ActionType in (\"ConnectionSuccess\", \"ConnectionAttempted\", \"InboundConnectionAccepted\") | extend GeoInfo = geo_info_from_ip_address(RemoteIP) | extend Country = tostring(GeoInfo.country), State = tostring(GeoInfo.state), City = tostring(GeoInfo.city) | where Country != 'United States' and RemoteIP !has '127.0.0.1' | project TimeGenerated, DeviceName, ActionType, RemoteIP, RemotePort, RemoteUrl, City, State, Country, InitiatingProcessFileName | order by TimeGenerated desc | top 2 by DeviceName", "db": "WORKSPACENAMEHERE-WORKSPACEIDHERE" } Within this body, the query and workspace are defined. "csl" represents the query to run against the Sentinel data lake and "db" represents the Sentinel workspace/lake. This value is a combination of the workspace name and the workspace ID. Both of these values can be found on the workspace overview blade within Azure. NOTE: The query must be one line in the JSON. Multi-line items will not be seen as valid JSON. With this, initial investigative querying via Sentinel data lake has been done the moment that the incident is triggered, allowing the responding SOC analyst to expedite their investigation and validate that the automated action of disabling the account was justified. For this Playbook, the results gathered from Sentinel data lake were placed into a comment and added to the incident within Defender, allowing SOC analysts to quickly review relevant details when beginning their work. Scenario 2: Execute a KQL query via API in code The following Python example demonstrates how to use a service principal to execute a KQL query on the Sentinel data lake via API. This example is provided for illustration purposes, but you can also call the API directly via common API tools. Within the code, the query and workspace are defined.
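Since the query must be a single line inside the JSON body, it can be safer to build the body programmatically than to hand-edit the string. The helper below is an illustrative sketch, not part of any SDK; it collapses a multi-line KQL query to one line and composes the "db" value from the workspace name and ID, using the hyphenated form shown in the code sample in this article. Adjust the separator if your workspace expects a different form.

```python
import json

# Illustrative helper (not part of any SDK) that enforces the single-line
# JSON requirement noted above and composes the "db" workspace identifier.

def build_lake_query_body(kql: str, workspace_name: str, workspace_id: str) -> str:
    """Collapse a multi-line KQL query to one line and return the JSON body."""
    one_line = " ".join(line.strip() for line in kql.splitlines() if line.strip())
    body = {
        "csl": one_line,
        "db": f"{workspace_name}-{workspace_id}",  # workspace name + workspace ID
    }
    return json.dumps(body)

kql = """
DeviceNetworkEvents
| where TimeGenerated >= ago(30d)
| top 2 by DeviceName
"""
payload = build_lake_query_body(
    kql, "workspace1", "12345678-abcd-abcd-1234-1234567890ab"
)
```

The resulting string can be sent as the request body of the POST call above; because `json.dumps` handles escaping, embedded double quotes in the KQL never break the JSON.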
“csl” represents the query to run against the Sentinel data lake and “db” represents the Sentinel workspace/lake. This value is a combination of the workspace name and the workspace ID. Both of these values can be found on the workspace overview blade within Azure. You also need to use a token or a service principal.

import requests
import msal

# ====== SPN / Entra app settings ======
TENANT_ID = ""
CLIENT_ID = ""
CLIENT_SECRET = ""

# Token authority
AUTHORITY = f"https://login.microsoftonline.com/{TENANT_ID}"

# ---- IMPORTANT ----
# Most APIs use the resource + "/.default" pattern for client-credentials.
# Try this first:
SCOPE = ["4500ebfb-89b6-4b14-a480-7f749797bfcd/.default"]

# ====== KQL query payload ======
KQL_QUERY = {
    "csl": "SigninLogs | take 10",
    "db": "workspace1-12345678-abcd-abcd-1234-1234567890ab",
    "properties": {
        "Options": {
            "servertimeout": "00:04:00",
            "queryconsistency": "strongconsistency",
            "query_language": "kql",
            "request_readonly": False,
            "request_readonly_hardline": False
        }
    }
}

# ====== Acquire token using client credentials ======
app = msal.ConfidentialClientApplication(
    client_id=CLIENT_ID,
    authority=AUTHORITY,
    client_credential=CLIENT_SECRET
)

result = app.acquire_token_for_client(scopes=SCOPE)

if "access_token" not in result:
    raise RuntimeError(
        f"Token acquisition failed: {result.get('error')} - {result.get('error_description')}"
    )

access_token = result["access_token"]

# ====== Call the KQL API ======
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json"
}

url = "https://api.securityplatform.microsoft.com/lake/kql/v2/rest/query"  # same endpoint

response = requests.post(url, headers=headers, json=KQL_QUERY)

if response.status_code == 200:
    print("Query Results:")
    print(response.json())
else:
    print(f"Error {response.status_code}: {response.text}")

In summary, you need the following parameters in your API call: Request URI: https://api.securityplatform.microsoft.com/lake/kql/v2/rest/query Method: POST Sample
payload: { "csl": "SigninLogs | take 10", "db": "workspace1-12345678-abcd-abcd-1234-1234567890ab" } Limitations and considerations Keep the following in mind when planning to execute KQL queries on a data lake: Service principal permissions When using a service principal, Azure RBAC roles can be assigned at the Sentinel workspace level. Entra ID roles or XDR unified RBAC roles are not supported for this scenario. Alternatively, user tokens with Entra ID roles can be used. Result size limits Queries are subject to limits on execution time and response size. Review Microsoft Sentinel data lake query service limits when designing your workflows. Summary Running KQL queries on Sentinel data lake via APIs unlocks a new class of scenarios, from intelligent agents to fully automated analytics pipelines. By decoupling query execution from user interfaces, customers gain flexibility, scalability, and control over how insights are generated and consumed. If you’re already using KQL for interactive analysis, API access is the natural next step toward production-grade analytics. Happy hunting! Resources Run KQL queries on Sentinel data lake: Run KQL queries against the Microsoft Sentinel data lake - Microsoft Security | Microsoft Learn Service parameters and limits: Microsoft Sentinel data lake service limits - Microsoft Security | Microsoft Learn
Estimate Microsoft Sentinel Costs with Confidence Using the New Sentinel Cost Estimator
One of the first questions teams ask when evaluating Microsoft Sentinel is simple: what will this actually cost? Today, many customers and partners estimate Sentinel costs using the Azure Pricing Calculator, but it doesn’t provide the Sentinel-specific usage guidance needed to understand how each Sentinel meter contributes to overall spend. As a result, it can be hard to produce accurate, trustworthy estimates, especially early on, when you may not know every input upfront. To make these conversations easier and budgets more predictable, Microsoft is introducing the new Sentinel Cost Estimator (public preview) for Microsoft customers and partners. The Sentinel Cost Estimator gives organizations better visibility into spend and more confidence in budgeting as they operate at scale. You can access the Microsoft Sentinel Cost Estimator here: https://microsoft.com/en-us/security/pricing/microsoft-sentinel/cost-estimator What the Sentinel Cost Estimator does The new Sentinel Cost Estimator makes pricing transparent and predictable for Microsoft customers and partners. The Sentinel Cost Estimator helps you understand what drives costs at a meter level and ensures your estimates are accurate with step-by-step guidance. You can model multi-year estimates with built-in projections for up to three years, making it easy to anticipate data growth, plan for future spend, and avoid budget surprises as your security operations mature. Estimates can be easily shared with finance and security teams to support better budgeting and planning. 
When to Use the Sentinel Cost Estimator Use the Sentinel Cost Estimator to: Model ingestion growth over time as new data sources are onboarded Explore tradeoffs between Analytics and Data Lake storage tiers Understand the impact of retention requirements on total spend Estimate compute usage for notebooks and advanced queries Project costs across a multi‑year deployment timeline For broader Azure infrastructure cost planning, the Azure Pricing Calculator can still be used alongside the Sentinel Cost Estimator. Cost Estimator Example Let’s walk through a practical example using the Cost Estimator. A medium-sized company that is new to Microsoft Sentinel wants a high-level estimate of expected costs. In their previous SIEM, they performed proactive threat hunting across identity, endpoint, and network logs; ran detections on high-security-value data sources from multiple vendors; built a small set of dashboards; and required three years of retention for compliance and audit purposes. Based on their prior SIEM, they estimate they currently ingest about 2 TB per day. In the Cost Estimator, they select their region and enter their daily ingestion volume. As they are not currently using Sentinel data lake, they can explore different ways of splitting ingestion between tiers to understand the potential cost benefit of using the data lake. Their retention requirement is three years. If they choose to use Sentinel data lake, they can plan to retain 90 days in the Analytics tier (included with Microsoft Sentinel) and keep the remaining data in Sentinel data lake for the full three years. As notebooks are new to them, they plan to evaluate notebooks for SOC workflows and graph building. They expect to start in the light usage tier and may move to medium as they mature. Since they occasionally query data older than 90 days to build trends—and anticipate using the Sentinel MCP server for SOC workflows on Sentinel lake data—they expect to start in the medium query volume tier. 
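The retention split in this example can be sanity-checked with back-of-envelope arithmetic before opening the estimator. The sketch below computes only retained data volumes under the stated assumptions (2 TB/day, a 90-day analytics window, three years of total retention); it deliberately applies no prices, since pricing comes from the estimator itself.

```python
# Back-of-envelope volume model for the example above: 2 TB/day ingested,
# 90 days kept in the analytics tier, three years of total retention with
# the remainder in the data lake. Numbers are illustrative only.

DAILY_INGEST_TB = 2
ANALYTICS_DAYS = 90
TOTAL_DAYS = 3 * 365  # three-year retention requirement

# Steady-state volume held in each tier once retention is fully aged in.
analytics_tb = DAILY_INGEST_TB * ANALYTICS_DAYS
lake_tb = DAILY_INGEST_TB * (TOTAL_DAYS - ANALYTICS_DAYS)

print(f"Analytics tier (rolling {ANALYTICS_DAYS} days): {analytics_tb} TB")
print(f"Data lake (remaining {TOTAL_DAYS - ANALYTICS_DAYS} days): {lake_tb} TB")
```

Running this shows roughly 180 TB in the rolling analytics window versus about 2,010 TB in the data lake at steady state, which illustrates why the tier split dominates the long-term retention cost discussion.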
Note: These tiers are for estimation purposes only; they do not lock in pricing when using the Microsoft Sentinel platform. Because this customer is upgrading from Microsoft 365 E3 to E5, they may be eligible for free ingestion based on their user count. Combined with their eligible server data from Defender for Servers, this can reduce their billable ingestion. In the review step, the Cost Estimator projects costs across a three-year window and breaks down drivers such as data tiers, commitment tiers, and comparisons with alternative storage options. From there, the customer can go back to earlier steps to adjust inputs and explore different scenarios. Once done, the estimate report can be exported for reference with Microsoft representatives and internal leadership when discussing the deployment of Microsoft Sentinel and Sentinel Platform. Finalize Your Estimate with Microsoft The Microsoft Sentinel Cost Estimator is designed to provide directional guidance and help organizations understand how architectural decisions may influence cost. Final pricing may vary based on factors such as deployment architecture, commitment tiers, and applicable discounts. We recommend working with your Microsoft account team or a Security sales specialist to develop a formal proposal tailored to your organization’s requirements. Try the Microsoft Sentinel Cost Estimator Start building your Microsoft Sentinel cost estimate today: https://microsoft.com/en-us/security/pricing/microsoft-sentinel/cost-estimator.
Custom data collection in MDE - what is default?
So you just announced the preview of "Custom data collection in Microsoft Defender for Endpoint (Preview)" which lets me ingest custom data to Sentinel. Is there also an overview of what is default and what I can add? e.g. we want to examine repeating disconnects from AzureVPN clients (yes, it's most likely just Microsoft's fault, as the app ratings show 'everyone' is having them) How do I know which data I can add to DeviceCustomNetworkEvents which isn't already in DeviceNetworkEvents?
How to Ingest Microsoft Intune Logs into Microsoft Sentinel
For many organizations using Microsoft Intune to manage devices, integrating Intune logs into Microsoft Sentinel is essential for security operations (incorporating the device into the SIEM). By routing Intune’s device management and compliance data into your central SIEM, you gain a unified view of endpoint events and can set up alerts on critical Intune activities, e.g. devices falling out of compliance or policy changes. This unified monitoring helps security and IT teams detect issues faster, correlate Intune events with other security logs for threat hunting, and improve compliance reporting. We’re publishing these best practices to help unblock common customer challenges in configuring Intune log ingestion. In this step-by-step guide, you’ll learn how to successfully send Intune logs to Microsoft Sentinel, so you can fully leverage Intune data for enhanced security and compliance visibility. Prerequisites and Overview Before configuring log ingestion, ensure the following prerequisites are in place: Microsoft Sentinel Enabled Workspace: A Log Analytics Workspace with Microsoft Sentinel enabled; For information regarding setting up a workspace and onboarding Microsoft Sentinel, see: Onboard Microsoft Sentinel - Log Analytics workspace overview. Microsoft Sentinel is now available in the Defender Portal, connect your Microsoft Sentinel Workspace to the Defender Portal: Connect Microsoft Sentinel to the Microsoft Defender portal - Unified security operations. Intune Administrator permissions: You need appropriate rights to configure Intune Diagnostic Settings. For information, see: Microsoft Entra built-in roles - Intune Administrator. Log Analytics Contributor role: The account configuring diagnostics should have permission to write to the Log Analytics workspace. For more information on the different roles, and what they can do, go to Manage access to log data and workspaces in Azure Monitor.
Intune diagnostic logging enabled: Ensure that Intune diagnostic settings are configured to send logs to Azure Monitor / Log Analytics, and that devices and users are enrolled in Intune so that relevant management and compliance events are generated. For more information, see: Send Intune log data to Azure Storage, Event Hubs, or Log Analytics. Configure Intune to Send Logs to Microsoft Sentinel Sign in to the Microsoft Intune admin center. Select Reports > Diagnostics settings. If it’s the first time here, you may be prompted to “Turn on” diagnostic settings for Intune; enable it if so. Then click “+ Add diagnostic setting” to create a new setting: Select Intune Log Categories. In the “Diagnostic setting” configuration page, give the setting a name (e.g. “Microsoft Sentinel Intune Logs Demo”). Under Logs to send, you’ll see checkboxes for each Intune log category. Select the categories you want to forward. For comprehensive monitoring, check AuditLogs, OperationalLogs, DeviceComplianceOrg, and Devices. The selected log categories will be sent to a table in the Microsoft Sentinel Workspace. Configure Destination Details – Microsoft Sentinel Workspace. Under Destination details on the same page, select your Azure Subscription then select the Microsoft Sentinel workspace. Save the Diagnostic Setting. After you click save, the Microsoft Intune logs will be streamed to four tables, which are in the Analytics tier. For pricing on the Analytics tier, check here: Plan costs and understand pricing and billing. Verify Data in Microsoft Sentinel. After configuring Intune to send diagnostic data to a Microsoft Sentinel Workspace, it’s crucial to verify that the Intune logs are successfully flowing into Microsoft Sentinel. You can do this by checking specific Intune log tables both in the Microsoft 365 Defender portal and in the Azure Portal.
The key tables to verify are: IntuneAuditLogs IntuneOperationalLogs IntuneDeviceComplianceOrg IntuneDevices Microsoft 365 Defender Portal (Unified) Azure Portal (Microsoft Sentinel) 1. Open Advanced Hunting: Sign in to the https://security.microsoft.com (the unified portal). Navigate to Advanced Hunting. – This opens the unified query editor where you can search across Microsoft Defender data and any connected Sentinel data. 2. Find Intune Tables: In the Advanced hunting Schema pane (on the left side of the query editor), scroll down past the Microsoft Sentinel Tables. Under the LogManagement Section Look for IntuneAuditLogs, IntuneOperationalLogs, IntuneDeviceComplianceOrg, and IntuneDevices in the list. Microsoft Sentinel in Defender Portal – Tables 1. Navigate to Logs: Sign in to the https://portal.azure.com and open Microsoft Sentinel. Select your Sentinel workspace, then click Logs (under General). 2. Find Intune Tables: In the Logs query editor that opens, you’ll see a Schema or tables list on the left. If it’s collapsed, click >> to expand it. Scroll down to find LogManagement and expand it; look for these Intune-related tables: IntuneAuditLogs, IntuneOperationalLogs, IntuneDeviceComplianceOrg, and IntuneDevices Microsoft Sentinel in Azure Portal – Tables Querying Intune Log Tables in Sentinel – Once the tables are present, use Kusto Query Language (KQL) in either portal to view and analyze Intune data: Microsoft 365 Defender Portal (Unified) Azure Portal (Microsoft Sentinel) In the Advanced Hunting page, ensure the query editor is visible (select New query if needed). Run a simple KQL query such as: IntuneDevices | take 5 Click Run query to display sample Intune device records. If results are returned, it confirms that Intune data is being ingested successfully. Note that querying across Microsoft Sentinel data in the unified Advanced Hunting view requires at least the Microsoft Sentinel Reader role.
In the Azure Logs blade, use the query editor to run a simple KQL query such as: IntuneDevices | take 5 Select Run to view the results in a table showing sample Intune device data. If results appear, it confirms that your Intune logs are being collected successfully. You can select any record to view full event details and use KQL to further explore or filter the data - for example, by querying IntuneDeviceComplianceOrg to identify devices that are not compliant and adjust the query as needed. Once Microsoft Intune logs are flowing into Microsoft Sentinel, the real value comes from transforming that raw device and audit data into actionable security signals. To achieve this, you should set up detection rules that continuously analyze the Intune logs and automatically flag any risky or suspicious behavior. In practice, this means creating custom detection rules in the Microsoft Defender portal (part of the unified XDR experience), see [https://learn.microsoft.com/en-us/defender-xdr/custom-detection-rules], and scheduled analytics rules in Microsoft Sentinel (in either the Azure Portal or the unified Defender portal interface), see: [Create scheduled analytics rules in Microsoft Sentinel | Microsoft Learn]. These detection rules will continuously monitor your Intune telemetry – tracking device compliance status, enrollment activity, and administrative actions – and will raise alerts whenever they detect suspicious or out-of-policy events. For example, you can be alerted if a large number of devices fall out of compliance, if an unusual spike in enrollment failures occurs, or if an Intune policy is modified by an unexpected account. Each alert generated by these rules becomes an incident in Microsoft Sentinel (and in the XDR Defender portal’s unified incident queue), enabling your security team to investigate and respond through the standard SOC workflow.
In turn, this converts raw Intune log data into high-value security insights: you’ll achieve proactive detection of potential issues, faster investigation by pivoting on the enriched Intune data in each incident, and even automated response across your endpoints (for instance, by triggering playbooks or other automated remediation actions when an alert fires). Use this detection logic to create a detection rule IntuneDeviceComplianceOrg | where TimeGenerated > ago(24h) | where ComplianceState != "Compliant" | summarize NonCompliantCount = count() by DeviceName | where NonCompliantCount > 3 Additional Tips: After confirming data ingestion and setting up alerts, you can leverage other Microsoft Sentinel features to get more value from your Intune logs. For example: Workbooks for Visualization: Create custom workbooks to build dashboards for Intune data (or check if community-contributed Intune workbooks are available). This can help you monitor device compliance trends and Intune activities visually. Hunting and Queries: Use advanced hunting (KQL queries) to proactively search through Intune logs for suspicious activities or trends. The unified Defender portal’s Advanced Hunting page can query both Sentinel (Intune logs) and Defender data together, enabling correlation across Intune and other security data. For instance, you might join IntuneDevices data with Azure AD sign-in logs to investigate a device associated with risky sign-ins. Incident Management: Leverage Sentinel’s Incidents view (in Azure portal) or the unified Incidents queue in Defender to investigate alerts triggered by your new rules. Incidents in Sentinel (whether created in Azure or Defender portal) will appear in the connected portal, allowing your security operations team to manage Intune-related alerts just like any other security incident. Built-in Rules & Content: Remember that Microsoft Sentinel provides many built-in Analytics Rule templates and Content Hub solutions.
While there isn’t a native pre-built Intune content pack as of now, you can use general Sentinel features to monitor Intune data. Frequently Asked Questions If you’ve set everything up but don’t see logs in Sentinel, run through these checks: Check Diagnostic Settings Go to the Microsoft Intune admin center → Reports → Diagnostic settings. Make sure the setting is turned ON and sending the right log categories to the correct Microsoft Sentinel workspace. Confirm the Right Workspace Double-check that the Azure subscription and Microsoft Sentinel workspace are selected. If you have multiple tenants/directories, make sure you’re in the right one. Verify Permissions Make Sure Logs Are Being Generated If no devices are enrolled or no actions have been taken, there may be nothing to log yet. Try enrolling a device or changing a policy to trigger logs. Check Your Queries Make sure you’re querying the correct workspace and time range in Microsoft Sentinel. Try a direct query like: IntuneAuditLogs | take 5 Still Nothing? Try deleting and re-adding the diagnostic setting. Most issues come down to permissions or selecting the wrong workspace. How long are Intune logs retained, and how can I keep them longer? The analytics tier keeps data in the interactive retention state for 90 days by default, extensible for up to two years. This interactive state, while expensive, allows you to query your data in unlimited fashion, with high performance, at no charge per query: Log retention tiers in Microsoft Sentinel. We hope this helps you to successfully connect your resources and end-to-end ingest Intune logs into Microsoft Sentinel. If you have any questions, leave a comment below or reach out to us on X @MSFTSecSuppTeam!