The Unified SecOps Transition — Why It Is a Security Architecture Decision, Not Just a Portal Change
Microsoft will retire the standalone Azure Sentinel portal on March 31, 2027. Most of the conversation around this transition focuses on cost optimization and portal consolidation. That framing undersells what is actually happening. The unified Defender portal is not a new interface for the same capabilities. It is the platform foundation for a fundamentally different SOC operating model — one built on a 2-tier data architecture, graph-based investigation, and AI agents that can hunt, enrich, and respond at machine speed. Partners who understand this will help customers build security programs that match how attackers actually operate. Partners who treat it as a portal migration will be offering the same services they offered five years ago.

This document covers four things:
- What the unified platform delivers — the security capabilities that do not exist in standalone Sentinel and why they matter against today's threats.
- What the transition really involves — not a data migration, but a data architecture project that changes how telemetry flows, where it lives, and who queries it.
- Where the partner opportunity lives — a structured progression from professional services (transactional, transition execution, and advisory) to ongoing managed security services.
- Why the unified platform wins competitively — factual capability advantages that give partners a defensible position against third-party SIEM alternatives.

The Bigger Picture: Preparing for the Agentic SOC

Before getting into transition mechanics, partners need to understand where the industry is headed — because the platform decisions made during this transition will determine whether a customer's SOC is ready for what comes next.
The security industry is moving from human-driven, alert-centric workflows to an operating model built on three pillars:
- Intellectual Property — the detection logic, hunting hypotheses, response playbooks, and domain expertise that differentiate one security team from another.
- Human Orchestration — the judgment, context, and decision-making that humans bring to complex incidents. Humans set strategy, validate findings, and make containment decisions. They do not manually triage every alert.
- AI Agents — purpose-built agents that execute repeatable work: enriching incidents, hunting across months of telemetry, validating security posture, drafting response actions, and flagging anomalies for human review.

The SOC of 2027 will not be scaled by hiring more analysts. It will be scaled by deploying agents that encode institutional knowledge into automated workflows — orchestrated by humans who focus on the decisions that require judgment.

This transformation requires a platform that provides three things:
- Deep telemetry — agents need months of queryable data to analyze behavioral patterns, build baselines, and detect slow-moving threats. The Sentinel Data Lake provides this at a cost point that makes long retention feasible.
- Relationship context — agents need to understand how entities connect. Which accounts share credentials? What is the blast radius of a compromised service principal? What is the attack path from a phished user to domain admin? Sentinel Graph provides this.
- Extensibility — partners and customers need to build and deploy their own agents without waiting for Microsoft to ship them. The MCP framework and Copilot agent architecture provide this.

None of these exist in standalone Azure Sentinel. All three ship with the unified platform.

The urgency goes beyond the March 2027 deadline. Organizations are deploying AI agents, copilots, and autonomous workflows across their businesses — and every one of those creates a new attack surface.
Prompt injection, data poisoning, agent hijacking, cross-plugin exploitation — these are not theoretical risks. They are in the wild today. Defending against AI-powered attacks requires a security platform that is itself AI Agent-ready. The unified Defender portal is that platform.

What the Unified Platform Actually Delivers

The original framing — "single pane of glass for SIEM and XDR" — is accurate but insufficient. Here is what the unified platform delivers that standalone Sentinel does not.

Cross-Domain Incident Correlation

The Defender correlation engine does not just group alerts by time proximity. It builds multi-stage incident graphs that link identity compromise to lateral movement to data exfiltration across SIEM and XDR telemetry — automatically.

Consider a token theft chain: an infostealer harvests browser session cookies (endpoint telemetry), the attacker replays the token from a foreign IP (Entra ID sign-in logs), creates a mailbox forwarding rule (Exchange audit logs), and begins exfiltrating data (DLP alerts). In standalone Sentinel, these are four separate alerts in four different tables. In the unified platform, they are one correlated incident with a visual attack timeline.
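The difference between time-proximity grouping and entity-based correlation can be sketched in a few lines of Python. This is a simplified illustration, not the Defender engine's actual logic: alerts that share an entity (here, the same user account) merge into one incident regardless of which table they came from.

```python
from collections import defaultdict

# Four alerts from four different tables, as in the token theft chain above.
alerts = [
    {"source": "EndpointEvents", "title": "Infostealer harvested browser cookies",
     "entities": {"user": "maria@contoso.com", "device": "LT-0042"}},
    {"source": "SigninLogs", "title": "Token replayed from foreign IP",
     "entities": {"user": "maria@contoso.com", "ip": "203.0.113.7"}},
    {"source": "ExchangeAudit", "title": "Mailbox forwarding rule created",
     "entities": {"user": "maria@contoso.com"}},
    {"source": "DlpAlerts", "title": "Unusual outbound data volume",
     "entities": {"user": "maria@contoso.com"}},
]

def correlate_by_entity(alerts, key="user"):
    """Merge alerts that share the same entity into a single incident."""
    incidents = defaultdict(list)
    for alert in alerts:
        incidents[alert["entities"].get(key)].append(alert)
    return incidents

incidents = correlate_by_entity(alerts)
# All four alerts collapse into one incident keyed on the compromised account.
print(len(incidents))                       # 1
print(len(incidents["maria@contoso.com"]))  # 4
```

The same alerts grouped only by time window could just as easily land in four unrelated buckets; keying on shared entities is what turns them into one attack story.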
2-Tier Data Architecture

The Sentinel Data Lake introduces a second storage tier that changes the economics and capabilities of security telemetry:

- Purpose — Analytics tier: real-time detection rules, SOAR, alerting. Data Lake: hunting, forensics, behavioral analysis, AI agent queries.
- Latency — Analytics tier: sub-5-minute query and alerting. Data Lake: minutes to hours acceptable.
- Cost — Analytics tier: ~$4.30/GB PAYG ingestion (~$2.96 at 100 GB/day commitment). Data Lake: ~$0.05/GB ingestion + $0.10/GB data processing (at least 20x cheaper).
- Retention — Analytics tier: 90 days default (expensive to extend). Data Lake: up to 12 years at low cost.
- Best for — Analytics tier: high-signal, low-volume sources. Data Lake: high-volume, investigation-critical sources.

The architecture decision is not "which tier is cheaper." It is "which tier gives me the right detection capability for each data source."

- Analytics tier candidates: Entra ID sign-in logs, Azure activity, audit logs, EDR alerts, PAM events, Defender for Identity alerts, email threat detections. These need sub-5-minute alerting.
- Data Lake candidates: raw firewall session logs, full DNS query streams, proxy request logs, Sysmon process events, NSG flow logs. These drive hunting and forensic analysis over weeks or months.
- Dual-ingest sources: some sources need both tiers. Entra ID sign-in logs are the canonical example — analytics tier for real-time password spray detection, Data Lake for graph-based blast radius analysis across months of authentication history.

Implementation is straightforward: a single Data Collection Rule (DCR) transformation handles the split. One collection point, two routing destinations.

The right framing: "Right data in the right tier = better detections AND lower cost." Cost savings are a side effect of good security architecture, not the goal.

Sentinel Graph

Sentinel Graph enables SOC teams and AI agents to answer questions that flat log queries cannot:
- What is the blast radius of this compromised account?
- Which service principals share credentials with the breached identity?
- What is the attack path from this phished user to domain admin?
- Which entities are connected to this suspicious IP across all telemetry sources?

Graph-based investigation turns isolated alerts into context-rich intelligence. It is the difference between knowing "this account was compromised" and understanding "this account has access to 47 service principals, 3 of which have write access to production Key Vault."

Security Copilot Integration

Security Copilot embedded in the unified portal helps analysts summarize incidents, generate hunting queries, explain attacker behavior, and draft response actions. For complex multi-stage incidents, it reduces the time from "I see an alert" to "I understand the full scope" from hours to minutes. With free SCUs available with Microsoft 365 E5, teams can apply AI to the highest-effort investigation work without adding incremental cost.

MCP and the Agent Framework

The Model Context Protocol (MCP) and Copilot agent architecture let partners and customers build purpose-built security agents. A concrete example: an MCP-enabled agent can automatically enrich a phishing incident by querying email metadata, checking the sender against threat intelligence, pulling the user's recent sign-in patterns, correlating with Sentinel Graph for lateral risk, and drafting a containment recommendation — in under 60 seconds.

This is where partner intellectual property becomes competitive advantage. The agent framework is the mechanism for encoding proprietary detection logic, response playbooks, and domain expertise into automated workflows that run at machine speed.

Security Store

Security Store allows partners to evolve from one-time transition projects into repeatable, scalable offerings — supporting professional services, managed services, and agent-based IP that align with the customer's unified SecOps operating model.
As part of the transition, the Microsoft Security Store becomes the extension layer for the unified SecOps platform — allowing partners to deliver differentiated agents, SaaS, and security services natively within Defender and Sentinel, instead of building and integrating in isolation.

The 4 Investigation Surfaces: A Customer Maturity Ladder

The Sentinel Data Lake exposes four distinct investigation surfaces, each representing a step toward the Agentic SOC — and a partner service opportunity:

- KQL Query — Capability: ad-hoc hunting, forensic investigation. Maturity level: basic — "we can query." Partner opportunity: hunting query libraries; KQL training.
- Graph Analytics — Capability: blast radius, attack paths, entity relationships. Maturity level: intermediate — "we understand relationships." Partner opportunity: graph investigation training; attack path workshops.
- Notebooks (PySpark) — Capability: statistical analysis, behavioral baselines, ML models. Maturity level: advanced — "we predict behaviors." Partner opportunity: custom notebook development; anomaly scoring.
- Agent/MCP Access — Capability: autonomous hunting, triage, response at machine speed. Maturity level: Agentic SOC — "we automate." Partner opportunity: custom agent development; MCP integration.

The customer who starts with "help us hunt better" ends up at "build us agents that hunt autonomously." That is the progression from professional services to managed services.

What the Transition Actually Involves

It is not a data migration — customers' underlying log data and analytics remain in their existing Log Analytics workspaces. That is important for partners to communicate clearly. But partners should not set the expectation that nothing changes except the URL. Microsoft's official transition guide documents significant operational changes — including behavioral changes to automation rules and playbooks, RBAC restructuring to the new unified model (URBAC), API schema changes that break ServiceNow and Jira integrations, analytics rule transitions where the Fusion engine is replaced by the Defender XDR correlation engine, and data policy shifts for regulated industries.
Most customers cannot navigate this complexity without professional help.

Important: transitioning to the Defender portal has no extra cost — estimate the billing impact with the new Sentinel Cost Estimator.

Optimizing the unified platform means making deliberate changes:
- Adding dual-ingest for critical sources that need both real-time detection and long-horizon hunting.
- Moving high-volume telemetry to the Data Lake — enabling hunting at scale that was previously cost-prohibitive.
- Retiring redundant data copies where Defender XDR already provides the investigation capability.
- Updating RBAC, automation, and integrations for the unified portal's consolidated schema and permission structure.
- Training analysts on new investigation workflows, Sentinel Graph navigation, and Copilot-assisted triage.

Threat Coverage: The Detection Gap Most Organizations Do Not Know They Have

This transition is an opportunity to quantify detection maturity — and most organizations will not like what they find. Based on real-world breach analysis — infostealers, business email compromise, human-operated ransomware, cloud identity abuse, vulnerability exploitation, nation-state espionage, and other prevalent threat categories — organizations running standalone Sentinel with default configurations typically have significant detection gaps. Those gaps cluster in three areas:

- Cross-domain correlation gaps — attacks that span identity, endpoint, email, and cloud workloads. These require the Defender correlation engine because no single log source tells the complete story.
- Long-retention hunting gaps — threats like command-and-control beaconing and slow data exfiltration that unfold over weeks or months. Analytics-tier retention at 90 days is too expensive to extend and too short for historical pattern analysis.
- Graph-based analysis gaps — lateral movement, blast radius assessment, and attack path analysis that require understanding entity relationships rather than flat log queries.
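To make the long-retention gap concrete, here is a minimal beaconing check in Python (an illustrative sketch, not a production detection): command-and-control beacons tend to call out at near-constant intervals, a pattern that only becomes visible when you can analyze a long window of connection timestamps.

```python
from statistics import mean, stdev

def looks_like_beaconing(timestamps, max_jitter=0.1):
    """Flag a series of outbound connection times (in seconds) whose
    intervals are suspiciously regular, e.g. a C2 beacon."""
    if len(timestamps) < 4:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Coefficient of variation: low spread relative to the mean
    # interval suggests machine-scheduled, not human, traffic.
    return stdev(intervals) / mean(intervals) < max_jitter

# A host calling home every ~3600 seconds with slight jitter...
beacon = [0, 3601, 7199, 10802, 14400]
# ...versus irregular, human-driven browsing.
browsing = [0, 120, 4000, 4100, 9000]

print(looks_like_beaconing(beacon))    # True
print(looks_like_beaconing(browsing))  # False
```

A beacon firing hourly produces only a handful of events per day; with 90-day analytics-tier retention, slow beacons and low-and-slow exfiltration can age out before the pattern is ever queryable, which is exactly what long-horizon Data Lake hunting addresses.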
The unified platform with proper log source coverage across Microsoft-native sources can materially close these gaps — but only if the transition includes a detection coverage assessment, not just a portal cutover.

Partners should use MITRE ATT&CK as the common framework for measuring detection maturity. Map existing detections to ATT&CK tactics and techniques before and after transition — a measurable, defensible improvement that justifies advisory fees and ongoing managed services.

Partner Opportunity: Professional Services to Managed Services

The USX transition creates a structured progression for all partner types — from professional services that build trust and surface findings, to managed security services that deliver ongoing value. The key insight most partners miss: do not jump from "transition assessment" to "managed services pitch." Customers are not ready for that conversation until they have experienced the value of professional services. The bridge engagement — whether transactional, transition execution, or advisory — builds trust, demonstrates expertise, and surfaces the findings that make the managed services conversation a logical next step.

Professional Services (transactional + transition execution + advisory) → Managed Security Services (MSSP)

The USX transition is the ideal professional services entry point because it combines a mandatory deadline (March 2027) with genuine technical complexity (analytics rule changes, automation behavioral changes, RBAC restructuring, API schema shifts) that most customers cannot navigate alone. Every engagement produces findings — detection gaps, automation fragility, staffing shortfalls — that are the most credible possible evidence for managed services.
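The before/after ATT&CK coverage mapping described earlier reduces to a simple set computation. This sketch uses hypothetical rule names and technique IDs, purely for illustration: map each analytics rule to the techniques it covers, then diff coverage across the transition.

```python
# Hypothetical detection inventory: rule name -> ATT&CK technique IDs covered.
detections_before = {
    "Password spray detection": ["T1110"],
    "Suspicious PowerShell":    ["T1059"],
}
detections_after = {
    "Password spray detection":      ["T1110"],
    "Suspicious PowerShell":         ["T1059"],
    "Token theft correlation":       ["T1539", "T1550"],
    "Mailbox forwarding rule abuse": ["T1114"],
}

def covered_techniques(detections):
    """Set of ATT&CK techniques with at least one mapped detection."""
    return {t for techniques in detections.values() for t in techniques}

before = covered_techniques(detections_before)
after = covered_techniques(detections_after)
print(sorted(after - before))  # Techniques gained: ['T1114', 'T1539', 'T1550']
```

The same diff, run per tactic, is the "measurable, defensible improvement" a readiness assessment should deliver as its baseline and post-transition artifacts.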
Professional Services

Transactional offers:
- Transition Readiness Assessment — Customer value: risk-mitigated transition with clear scope. Key deliverables: Sentinel deployment inventory; Defender portal compatibility check; transition roadmap with timeline; MITRE ATT&CK detection coverage baseline.
- Transition Execution and Enablement — Customer value: accelerated time-to-value, minimal disruption. Key deliverables: workspace onboarding; RBAC and automation updates; dual-portal testing and validation; SOC team training on unified workflows.
- Security Posture and Detection Optimization — Customer value: better detections and lower cost. Key deliverables: data ingestion and tiering strategy; dual-ingest implementation for critical sources; detection coverage gap analysis; automation and Copilot/MCP recommendations.

Advisory offers:
- Executive and Strategy Advisory — Customer value: leadership alignment on why this transition matters. Key deliverables: unified SecOps vision and business case; Zero Trust and SOC modernization alignment; stakeholder alignment across security, IT, and leadership.
- Architecture and Design Advisory — Customer value: future-ready architecture optimized for the Agentic SOC. Key deliverables: target-state 2-tier data architecture; dual-ingest routing decisions mapped to MITRE tactics; RBAC, retention, and access model design.
- Detection Coverage and Gap Analysis — Customer value: measurable detection maturity improvement. Key deliverables: current-state MITRE ATT&CK coverage mapping; gap analysis against 24 threat patterns; detection improvement roadmap with priority recommendations.
- SOC Operating Model Advisory — Customer value: smooth analyst adoption with clear ownership. Key deliverables: redesigned SOC workflows for the unified portal; incident triage and investigation playbooks; RACI for detection engineering, hunting, and platform ops.
- Agentic SOC Readiness — Customer value: preparation for AI-driven security operations. Key deliverables: MCP and agent architecture assessment; custom agent development roadmap; IP + Human Orchestration + Agent operating model design.
- Cost, Licensing and Value Advisory — Customer value: transparent cost impact with strong business case. Key deliverables: current vs. future cost analysis; data tiering optimization recommendations; TCO and ROI modeling for leadership.

The conversion to managed services is evidence-based. Every professional services engagement produces findings — detection gaps, automation fragility, staffing shortfalls. Those findings are the most credible possible case for ongoing managed services.

Managed Security Services

The unified platform changes the managed security conversation. Partners are no longer selling "we watch your alerts 24/7." They are selling an operating model where proprietary AI agents handle the repeatable work — enrichment, hunting, posture validation, response drafting — and human experts focus on the decisions that require judgment. This is where the competitive moat forms. The formula: IP + Human Orchestration + AI Agents = differentiated managed security.

The unified platform enables this through:
- Multi-tenancy — the built-in multitenant portal eliminates the need for third-party management layers.
- Sentinel Data Lake — agents can query months of customer telemetry for behavioral analysis without cost constraints.
- Sentinel Graph — agents can traverse entity relationships to assess blast radius and map attack paths.
- MCP extensibility — partners can build agents that integrate with proprietary tools and customer-specific systems.

Partners who build proprietary agents encoding their detection logic into the MCP framework will differentiate from partners who rely on out-of-box capabilities.

The Securing AI Opportunity

Organizations are deploying AI agents, copilots, and autonomous workflows across their businesses at an accelerating pace. Every AI deployment creates a new attack surface — prompt injection, data poisoning, agent hijacking, cross-plugin exploitation, unauthorized data access through agentic workflows. These are not theoretical risks. They are in the wild today.
Partners who can help customers secure their AI deployments while also using AI to strengthen their SOC will command premium positioning. This requires a security platform that is itself AI Agent-ready — one that can deploy defensive agents at the same pace organizations deploy business AI. The unified Defender portal is that platform. Partners who position USX as "preparing your SOC for AI-driven security operations" will differentiate from partners who position it as "moving to a new portal."

Cost and Operational Benefits

Better security architecture also costs less. This is not a contradiction — it is the natural result of putting the right data in the right tier.

- Eliminate low-value ingestion — identify and remove log sources that are never used for detections, investigations, or hunting. Immediately lowers analytics-tier costs without impacting security outcomes.
- Right-size analytics rules — disable unused rules, consolidate overlapping detections, and remove automation that does not reduce SOC effort. Pay only for processing that delivers measurable security value.
- Avoid SIEM/XDR duplication — many threats can be investigated directly in Defender XDR without duplicating telemetry into Sentinel. Stop re-ingesting data that Defender already provides.
- Tier data by detection need — store high-volume, hunt-oriented telemetry in the Data Lake at at least 20x lower cost. Promote only high-signal sources to the analytics tier. Full data fidelity preserved in both tiers.
- Reduce operational overhead — unified SIEM+XDR workflows in a single portal reduce tool switching, accelerate investigations, simplify analyst onboarding, and enable SOC teams to scale without proportional headcount increases.
- Improve detection quality — the Defender correlation engine produces higher-fidelity incidents with fewer false positives. SOC teams spend less time triaging noise and more time on real threats.
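The tiering economics can be sanity-checked with a back-of-the-envelope model using the list prices quoted earlier. This is illustrative only: actual pricing varies by region, commitment tier, and query volume, and the 10% promotion ratio below is an assumed example, not a recommendation.

```python
ANALYTICS_PER_GB = 4.30      # PAYG analytics-tier ingestion, $/GB
LAKE_INGEST_PER_GB = 0.05    # Data Lake ingestion, $/GB
LAKE_PROCESS_PER_GB = 0.10   # Data Lake data processing, $/GB

def monthly_cost(gb_per_day, analytics_fraction):
    """Blended monthly ingestion cost when only a fraction of daily
    volume is promoted to the analytics tier; the rest goes to the lake."""
    analytics_gb = gb_per_day * analytics_fraction * 30
    lake_gb = gb_per_day * (1 - analytics_fraction) * 30
    return (analytics_gb * ANALYTICS_PER_GB
            + lake_gb * (LAKE_INGEST_PER_GB + LAKE_PROCESS_PER_GB))

# 500 GB/day, everything in the analytics tier...
all_analytics = monthly_cost(500, 1.0)
# ...versus promoting only the 10% of sources that need real-time alerting.
tiered = monthly_cost(500, 0.1)
print(f"${all_analytics:,.0f} vs ${tiered:,.0f} per month")
```

The point of the model is not the exact dollar figures but the shape of the curve: the savings come from the tiering decision, which is why the document frames cost as a side effect of putting the right data in the right tier.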
Competitive Positioning

Partners need defensible talking points when customers evaluate third-party SIEM alternatives. The following advantages are factual, sourced from Microsoft's transition documentation and platform capabilities — not marketing claims.

- No extra cost for transitioning — even for non-E5 customers. Third-party SIEM migrations involve licensing, data migration, detection rewrite, and integration rebuild costs.
- Native cross-domain correlation across Sentinel + Defender products into multi-stage incident graphs. Third-party SIEMs receive Microsoft logs as flat events — they lack the internal signal context, entity resolution, and product-specific intelligence that powers cross-domain correlation.
- Custom detections across SIEM + XDR — query both Sentinel and Defender XDR tables without ingesting Defender data into Sentinel. Eliminates redundant ingestion cost.
- Alert tuning extends to Sentinel — previously a Defender-only capability, now applicable to Sentinel analytics rules. Net-new noise reduction.
- Unified entity pages — consolidated user, device, and IP address pages with data from both Sentinel and Defender XDR, plus global search across SIEM and XDR. Third-party SIEMs provide entity views from ingested data only.
- Built-in multi-tenancy for MSSPs — the multitenant portal manages incidents, alerts, and hunting across tenants without third-party management layers. Try out the new GDAP capabilities in the Defender portal.

Industry validation: Microsoft's SIEM+XDR platform has been recognized as a Leader by both Forrester (Security Analytics Platforms, 2025) and Gartner (SIEM Magic Quadrant, 2025).

Summary: What Partners Should Take Away

- Framing — USX is a security architecture transformation, not a portal transition. Lead with detection capability, not cost savings.
- Platform foundation — Sentinel Data Lake + Sentinel Graph + MCP/Agent Framework = the platform for the Agentic SOC.
- 4 investigation surfaces — KQL → Graph → Notebooks → Agent/MCP. A maturity ladder from "we can query" to "we automate at machine speed."
- Architecture — 2-tier data model (analytics + Data Lake) with dual-ingest for critical sources. Cost savings are a side effect of good architecture.
- Transition complexity — analytics rules and automation rules. API schema changes. RBAC restructuring. Most customers need professional help.
- Partner engagement model — Professional Services (transactional + transition execution + advisory) → Managed Services (MSSP).
- Competitive positioning — no extra cost. Native correlation. Cross-domain detections. Built-in multi-tenancy. Capabilities third-party SIEMs cannot replicate.
- Partner differentiation — IP + Human Orchestration + AI Agents. Partners who build proprietary agents on MCP have competitive advantage.
- Timeline — March 31, 2027. Start now — phased transition with one telemetry domain first, then scale.

Introducing the New Microsoft Sentinel Logstash Output Plugin (Public Preview!)
Many organizations rely on Logstash as a flexible, trusted data pipeline for collecting, transforming, and forwarding logs from on-premises and hybrid environments. Microsoft Sentinel has long supported a Logstash output plugin, enabling customers to send data directly into Sentinel as part of their existing pipelines. The original plugin was implemented in Ruby, and while it has served its purpose, it no longer meets Microsoft's Secure Future Initiative (SFI) standards and has limited engineering support. To address both security and sustainability, we have rebuilt the plugin from the ground up in Java, a language that is more secure, better supported across Microsoft, and aligned with long-term platform investments. To ensure a seamless transition, the new implementation is still packaged and distributed as a standard Logstash Ruby gem. This means the installation and usage experience remains unchanged for customers, while benefiting from a more secure and maintainable foundation.

What's New in This Version

Java-based and SFI-compliant: same Logstash plugin experience, now rebuilt on a stronger foundation. The new implementation is fully Java-based, aligning with Microsoft's Secure Future Initiative (SFI) and providing improved security, supportability, and long-term maintainability.

Modern, DCR-based ingestion: the plugin now uses the Azure Monitor Logs Ingestion API with Data Collection Rules (DCRs), replacing the legacy HTTP Data Collection API (for more info, see Migrate from the HTTP Data Collector API to the Log Ingestion API - Azure Monitor | Microsoft Learn). This gives customers full schema control, enables custom log tables, and supports ingestion into standard Microsoft Sentinel tables as well as the Microsoft Sentinel data lake.
Flexible authentication options: authentication is automatically determined based on your configuration, with support for:
- Client secret (app registration / service principal)
- Managed identity, eliminating the need to store credentials in configuration files

Sovereign cloud support: the plugin supports Azure sovereign clouds, including Azure US Government, Azure China, and Azure Germany.

Standard Logstash distribution model: the plugin is published on RubyGems.org, the standard distribution channel for Logstash plugins, and can be installed directly using the Logstash plugin manager — no change to your existing installation workflow.

What the Plugin Does

The Logstash plugin operates as a three-stage data pipeline: Input → Filter → Output.

- Input: you control how data enters the pipeline, using sources such as syslog, Filebeat, Kafka, Event Hubs, databases (via JDBC), files, and more.
- Filter: you enrich and transform events using Logstash's powerful filtering ecosystem, including plugins like grok, mutate, and json, shaping data to match your security and operational needs.
- Output: this is where Microsoft comes in. The Microsoft Sentinel Logstash Output Plugin securely sends your processed events to an Azure Monitor Data Collection Endpoint, where they are ingested into Sentinel via a Data Collection Rule (DCR).

With this model, you retain full control over your Logstash pipeline and data processing logic, while the Sentinel plugin provides a secure, reliable path to ingest data into Microsoft Sentinel.

Getting Started

Prerequisites:
- Logstash installed and running
- An Azure Monitor Data Collection Endpoint (DCE) and Data Collection Rule (DCR) in your subscription
- Contributor role on your Log Analytics workspace

Who Is This For?

Organizations that already have Logstash pipelines, need to collect from on-premises or legacy systems, and operate in distributed/hybrid environments including air-gapped networks.
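A minimal pipeline illustrating the three stages described above might look like the following. This is a sketch only: the output option names (DCE/DCR settings and credentials) are assumptions based on the plugin's DCR-based model, so check the plugin documentation on RubyGems.org for the exact configuration keys.

```conf
input {
  # Stage 1: receive syslog events from network devices.
  syslog { port => 514 }
}

filter {
  # Stage 2: parse and shape events to match the DCR stream schema.
  grok { match => { "message" => "%{SYSLOGLINE}" } }
  mutate { remove_field => ["host"] }
}

output {
  # Stage 3: ship to Sentinel via a Data Collection Endpoint + DCR.
  # Option names below are illustrative, not authoritative.
  microsoft-sentinel-log-analytics-logstash-output-plugin {
    client_app_Id => "<app-id>"
    client_app_secret => "<secret>"
    tenant_id => "<tenant-id>"
    data_collection_endpoint => "https://<dce-name>.ingest.monitor.azure.com"
    dcr_immutable_id => "dcr-<immutable-id>"
    dcr_stream_name => "Custom-SyslogStream"
  }
}
```

Note that with managed identity authentication, the client secret settings would be omitted entirely, which is one of the main operational benefits called out above.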
To learn more, see: microsoft-sentinel-log-analytics-logstash-output-plugin | RubyGems.org

Introducing the next generation of SOC automation: Sentinel playbook generator
Security teams today operate under constant pressure. They are expected to respond faster, automate more, and do so without sacrificing precision. Traditional security orchestration, automation and response (SOAR) approaches have helped, but they still rely heavily on rigid templates, limited action libraries, and workflows stretched across multiple portals. Building and maintaining automation is often slow and constrained at exactly the time organizations need more flexibility. Something needs to change — and with the introduction of AI and coding models, the future of automation is going to look very different than it does today.

Today, we're introducing the Microsoft Sentinel playbook generator, a new way to design code-based playbooks using natural language. With the introduction of generative AI and coding models, coding itself is becoming democratized, and we are excited to bring these new capabilities into our experience. This release represents the first milestone in our next-generation security automation journey.

The playbook generator allows users to design and generate fully functional playbooks simply by describing what they need. The tool generates a Python playbook with documentation and a visual flowchart, streamlining workflows from creation to execution for greater efficiency. This approach is highly flexible, allowing users to automate tasks like team notifications, ticket updates, data enrichment, or incident response across Microsoft and third-party tools. By defining an Integration Profile (base URL, authentication, credentials), the playbook generator can create API calls dynamically without needing predefined connectors. The system also identifies missing integrations and guides users to add them from the Automation tab or within the authoring page. Users especially value this capability, as it allows for more advanced automations.

Playbook creation starts by outlining the workflow.
The playbook generator asks questions, proposes a plan, then generates code and documentation once approved. Users can validate playbooks with real alerts and refine code anytime through chat instructions or manual edits. This approach combines the speed of natural language with transparent code, enabling engineers to automate efficiently without sacrificing control or flexibility. Preview customers report that the playbook generator speeds up automation development, simplifies automations for teams, and enables flexible workflow customization without reliance on templates.

The playbook generator focuses on fast, intuitive, natural-language-driven automation creation, supported by a powerful coding foundation. It aligns with how security teams want to work: flexible, integrated, and deeply customizable. We're excited to see how customers will use this capability to simplify operations, eliminate repetitive work, and automate tasks that previously demanded deep engineering effort. This marks the start of a new chapter, as AI continues to evolve and reshape what's possible in security automation.

How to get started

With just a few prerequisites in place, you can begin creating code-based automations through natural-language conversations, directly inside the Microsoft Defender portal. Here's a quick guide to help you move from first steps to your first generated playbook:

1. Make sure the prerequisites are in place

Before you open your first chat with the playbook generator, confirm that your environment is ready:
- Security Copilot enabled: your tenant must have a Security Copilot workspace, configured to use a Europe or US-based capacity.
- Sentinel workspace in the Defender portal: ensure your Microsoft Sentinel workspace is onboarded to the Microsoft Defender portal.

2.
Ensure you have the right permissions

To build and deploy generated playbooks, make sure you have the same permissions required to author automation rules — the Microsoft Sentinel Contributor role on the relevant workspaces or resource groups.

3. Configure your integration profiles

Integration profiles allow the playbook generator to create and execute dynamic API calls — one of the most powerful capabilities of this new system. Before you create your first playbook:
- Go to Automation → Integration Profiles in the Defender portal.
- Create a Microsoft Graph integration, plus integrations to the other services you want available in the playbook (ticketing tools, communication systems, third-party providers, or others).
- Provide the base URL, authentication method, and required credentials.

4. Create your first generated playbook

From the Automation tab:
- Select Create → Generated Playbook.
- Give your playbook a name.
- The embedded Visual Studio Code window opens — start in plan mode by simply describing what you want your automation to do.

Be explicit about:
- What data to extract
- What actions to perform
- Any conditions or branches

Example prompt you can use: "Based on the alert, extract the user principal name, check if the account exists in Entra ID, and if it does, disable the account, create a ticket in ServiceNow, and post a message to the security team channel."

The playbook generator will guide the process, ask clarifying questions, propose a plan, and then — once approved — switch to Act mode to generate the full Python playbook, documentation with a visual flow diagram, and tests. Completing your first playbook marks the beginning of a more intuitive, responsive, and intelligent automation experience — one where your expertise and AI work side by side to transform how your SOC operates. This is more than a new tool; it's a foundation that will continue to evolve, adapt, and empower defenders as security automation enters its next era.
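For the example prompt above, a generated Python playbook would follow roughly this shape. This is a hand-written sketch with stubbed service calls; the function names and client objects are illustrative assumptions, not the generator's actual output.

```python
def handle_alert(alert, entra, servicenow, teams):
    """Disable a compromised account, open a ticket, and notify the team.
    `entra`, `servicenow`, and `teams` stand in for real API clients built
    from integration profiles."""
    upn = alert.get("user_principal_name")
    if not upn:
        return "no user principal name in alert"
    if not entra.user_exists(upn):
        return f"{upn} not found in Entra ID"
    entra.disable_user(upn)
    ticket = servicenow.create_ticket(f"Account disabled: {upn}")
    teams.post("security", f"Disabled {upn}, ticket {ticket}")
    return f"disabled {upn}"

# Minimal fakes so the sketch is runnable end-to-end.
class FakeEntra:
    def user_exists(self, upn): return upn.endswith("@contoso.com")
    def disable_user(self, upn): pass

class FakeServiceNow:
    def create_ticket(self, summary): return "INC0012345"

class FakeTeams:
    def post(self, channel, message): self.last = message

alert = {"user_principal_name": "maria@contoso.com"}
print(handle_alert(alert, FakeEntra(), FakeServiceNow(), FakeTeams()))
# disabled maria@contoso.com
```

The value of the plan-then-act flow is that this branching logic (what to extract, which condition gates the disable action, which systems get notified) is agreed in plan mode before any code is generated, so the resulting playbook stays reviewable and editable.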
Watch a demo here: https://aka.ms/NLSOARDEMO

For deeper guidance, advanced scenarios, and end‑to‑end instructions, you can explore the full playbook generator documentation: Generate playbooks using AI in Microsoft Sentinel | Microsoft Learn

Why UK Enterprise Cybersecurity Is Failing in 2026 (And What Leaders Must Change)
Enterprise cybersecurity in large organisations has always been an asymmetric game. But with the rise of AI‑enabled cyber attacks, that imbalance has widened dramatically - particularly for UK and EMEA enterprises operating complex cloud, SaaS, and identity‑driven environments. Microsoft Threat Intelligence and Microsoft Defender Security Research have publicly reported a clear shift in how attackers operate: AI is now embedded across the entire attack lifecycle. Threat actors use AI to accelerate reconnaissance, generate highly targeted phishing at scale, automate infrastructure, and adapt tactics in real time - dramatically reducing the time required to move from initial access to business impact.

In recent months, Microsoft has documented AI‑enabled phishing campaigns abusing legitimate authentication mechanisms, including OAuth and device‑code flows, to compromise enterprise accounts at scale. These attacks rely on automation, dynamic code generation, and highly personalised lures - not on exploiting traditional vulnerabilities or stealing passwords.

The Reality Gap: Adaptive Attackers vs. Static Enterprise Defences

Meanwhile, many UK enterprises still rely on legacy cybersecurity controls designed for a very different threat model - one rooted in a far more predictable world. This creates a dangerous "Resilience Gap." Here is why your current stack is failing, and the C-Suite strategy required to fix it.

1. The Failure of Traditional Antivirus in the AI Era

Traditional antivirus (AV) relies on static signatures and hashes. It assumes malicious code remains identical across different targets. AI has rendered this assumption obsolete. Modern malware now uses automated mutation to generate unique code variants at execution time, and adapts behaviour based on its environment. Microsoft Threat Intelligence has observed threat actors using AI‑assisted tooling to rapidly rewrite payload components, ensuring that every deployment looks subtly different.
In this model, there is no reliable signature to detect. By the time a pattern exists, the attacker has already moved on. Signature‑based detection is not just slow - it is structurally misaligned with AI‑driven attacks.

The Risk: If your security relies on "recognising" a threat, you are already breached. By the time a signature exists, the attacker has evolved.

The C-Suite Pivot: Shift investment from artifact detection to EDR/XDR (Endpoint and Extended Detection and Response). We must prioritise behavioural analytics and machine learning models that identify intent rather than file names.

2. Why Perimeter Firewalls Fail in a Cloud-First World

Many UK enterprises still rely on firewalls enforcing static allow/deny rules based on IP addresses and ports. This model worked when applications were predictable and networks clearly segmented. Today, enterprise traffic is encrypted, cloud‑hosted, API‑driven, and deeply integrated with SaaS and identity services. AI‑assisted phishing campaigns abusing OAuth and device‑code flows demonstrate this clearly. From a network perspective, everything looks legitimate: HTTPS traffic to trusted identity providers. No suspicious port. No malicious domain. Yet the attacker successfully compromises identity.

The Risk: Traditional firewalls are "blind" to identity-based breaches in cloud environments.

The C-Suite Pivot: Move to Identity-First Security. Treat identity as the new control plane, integrating signals like user risk, device health, and geolocation into every access decision.

3. The Critical Weakness of Single-Factor Authentication

Despite clear NCSC guidance, single-factor passwords remain a common vulnerability in legacy applications and VPNs. AI-driven credential abuse has changed the economics of these attacks. Threat actors now deploy adaptive phishing campaigns that evolve in real time. Microsoft has observed attackers using AI to hyper-target high-value UK identities - specifically CEOs, Finance Directors, and Procurement leads.
The Risk: Static passwords are now the primary weak link in UK supply chain security.

The C-Suite Pivot: Mandate phishing‑resistant MFA (passkeys or hardware security keys). Implement Conditional Access policies that evaluate risk dynamically at the moment of access, not just at login.

Legacy Security vs. AI‑Era Reality

4. The Inherent Risk of VPN-Centric Security

VPNs were built on a flawed assumption: that anyone "inside" the network is trustworthy. In 2026, this logic is a liability. AI-assisted attackers now use automation to map internal networks and identify escalation paths the moment they gain VPN access. Furthermore, Microsoft has tracked nation-state actors using AI to create synthetic employee identities - complete with fake resumes and deepfake communication. In these scenarios, VPN access isn't "hacked"; it is legitimately granted to a fraudster.

The Risk: A compromised VPN gives an attacker the "keys to the kingdom."

The C-Suite Pivot: Transition to Zero Trust Architecture (ZTA). Access must be explicit, scoped to the specific application, and continuously re‑evaluated using behavioural signals.

5. Data: The High-Velocity Target

Sensitive data sitting unencrypted in legacy databases or backups is a ticking time bomb. In the AI era, data discovery is no longer a slow, manual process for a hacker. Attackers now use AI to instantly analyse your directory structures, classify your files, and prioritise high-value data for theft. Unencrypted data significantly increases your "blast radius," turning a containable incident into a catastrophic board-level crisis.

The Risk: Beyond the technical breach, unencrypted data leads to massive UK GDPR fines and irreparable brand damage.

The C-Suite Pivot: Adopt Data-Centric Security. Implement encryption by default, classify data while adding sensitivity labels, and start board-level discussions regarding post‑quantum cryptography (PQC) to future-proof your most sensitive assets.

6.
The Failure of Static IDS

Traditional Intrusion Detection Systems (IDS) rely on known indicators of compromise - assuming attackers reuse the same tools and techniques. AI‑driven attacks deliberately avoid that assumption. Threat actors are now using Large Language Models (LLMs) to weaponise newly disclosed vulnerabilities within hours. While your team waits for a "known pattern" to be updated in your system, the attacker is already using a custom, AI-generated exploit.

The Risk: Your team is defending against yesterday's news while the attacker is moving at machine speed.

The C-Suite Pivot: Invest in Adaptive Threat Detection. Move toward graph‑based XDR platforms that correlate signals across email, endpoint, and cloud to automate investigation and response before the damage spreads.

From Static Security to Continuous Security

Closing Thought: Security Is a Journey, Not a Destination

For UK enterprises, the shift toward adaptive cybersecurity is no longer optional - it is increasingly driven by regulatory expectation, board oversight, and accountability for operational resilience. Recent UK cyber resilience reforms and evolving regulatory frameworks signal a clear direction of travel: cybersecurity is now a board‑level responsibility, not a back‑office technical concern. Directors and executive leaders are expected to demonstrate effective governance, risk ownership, and preparedness for cyber disruption - particularly as AI reshapes the threat landscape.

AI is not a future cybersecurity problem. It is a current force multiplier for attackers, exposing the limits of legacy enterprise security architectures faster than many organisations are willing to admit. The uncomfortable truth for boards in 2026 is that no enterprise is 100% secure. Intrusions are inevitable. Credentials will be compromised. Controls will be tested. The difference between a resilient enterprise and a vulnerable one is not the absence of incidents, but how risk is managed when they occur.
In mature organisations, this means assuming breach and designing for containment:

Access controls that limit blast radius
Least privilege and conditional access restricting attackers to the smallest possible scope if an identity is compromised
Data‑centric security using automated classification and encryption, ensuring that even when access is misused, sensitive data cannot be freely exfiltrated

As a Senior Enterprise Cybersecurity Architect, I see this moment as a unique opportunity. AI adoption does not have to repeat the mistakes of earlier technology waves, where innovation moved fast and security followed years later. We now have a rare chance to embed security from day one - designing identity controls, data boundaries, automated monitoring, and governance before AI systems become business‑critical. When security is built in upfront, enterprises don’t just reduce risk - they gain the confidence to move faster and unlock AI’s value safely.

Security is no longer a “department”. In the age of AI, it is a continuous business function - essential to preserving trust and maintaining operational continuity as attackers move at machine speed.

References:
Inside an AI‑enabled device code phishing campaign | Microsoft Security Blog
AI as tradecraft: How threat actors operationalize AI | Microsoft Security Blog
Detecting and analyzing prompt abuse in AI tools | Microsoft Security Blog
Post-Quantum Cryptography | CSRC
Microsoft Digital Defense Report 2025 | Microsoft
https://www.ncsc.gov.uk/news/government-adopt-passkey-technology-digital-services

How Granular Delegated Admin Privileges (GDAP) allows Sentinel customers to delegate access
Simplifying Defender SIEM and XDR delegated access

As Microsoft Sentinel and Defender converge into a unified experience, organizations face a fundamental challenge: the lack of a scalable, comprehensive, delegated access model that works seamlessly across Entra ID and Sentinel’s Azure Resource Manager, creating a significant barrier for Managed Security Service Providers (MSSPs) and large enterprises with complex multi-tenant structures.

Extending GDAP beyond CSPs: a strategic solution

In response to these challenges, we have developed an extension to GDAP that makes it available to all Sentinel and Defender customers, including non-CSP organizations. This expansion enables both MSSPs and customers with multi-tenant organizational structures to establish secure, granular delegated access relationships directly through the Microsoft Defender portal. This is now available in public preview.

The GDAP extension aligns with zero-trust security principles through a three-way handshake model requiring explicit mutual consent between governing and governed tenants before any relationship is established. This consent-based approach enhances transparency and accountability, reducing risks associated with broad, uncontrolled permissions. By integrating with Microsoft Defender, GDAP enables advanced threat detection and response capabilities across tenant boundaries while maintaining granular permission management through Entra ID roles and Unified RBAC custom permissions.

Delivering unified management of delegated access across SIEM and XDR

With GDAP, customers gain a truly unified way to manage access across both Microsoft Sentinel and Defender—using a single, consistent delegated access model for SIEM and XDR. For Sentinel customers, this brings parity with the Azure portal experience: where delegated access was previously managed through Azure Lighthouse, it can now be handled directly in the Defender portal using GDAP.
More importantly, for organizations running SIEM and XDR together, GDAP eliminates the need to switch between portals—allowing teams to view, manage, and govern security access from one centralized experience. The result is simpler administration, reduced operational friction, and a more cohesive way to secure multi-tenant environments at scale.

How GDAP for non-CSPs works: the three-step handshake

The GDAP handshake model implements a security-first approach through three distinct steps, each requiring explicit approval to prevent unauthorized access.

Step 1 begins with the governed tenant initiating the relationship, allowing the governing tenant to request GDAP access.
Step 2 shifts control to the governing tenant, which creates and sends a delegated access request with specific requested permissions through the multi-tenant organization (MTO) portal.
Step 3 returns to the governed tenant for final approval.

The approach provides customers with complete visibility and control over who can access their security data and with what permissions, while giving MSSPs a streamlined, Microsoft-supported mechanism for managing delegated relationships at scale.

A final step, Step 4, assigns Sentinel permissions: in Azure resource management, grant Sentinel workspace permissions (in the governed tenant) to the governing tenant’s security groups used in the created relationship.

Learn more here: Configure delegated access with governance relationships for multitenant organizations - Unified se…

What's new: IdentityInfo table is now in public preview!
Gain more visibility into the user accounts in your tenant with the new IdentityInfo table, which surfaces a snapshot of your Azure AD users inside Log Analytics that can be used for hunting, alert correlation, and more!

Clarification on AADSignInEventsBeta vs. IdentityLogonEvents Logs
Hey everyone, I’ve been reading up on the AADSignInEventsBeta table and got a bit confused. From what I understand, the AADSignInEventsBeta table is in beta and is only available for those with a Microsoft Entra ID P2 license. The idea is that the sign-in schema will eventually move over to the IdentityLogonEvents table. What I’m unsure about is whether the data from the AADSignInEventsBeta table has already been migrated to the IdentityLogonEvents table, or if they’re still separate for now. Can anyone clarify this for me? Thanks in advance for your help!

Enforce Cost Limits on KQL Queries and Notebooks in the Microsoft Sentinel Data Lake
Security teams face a constant tension: run the advanced analytics you need to stay ahead of threats, or hold back to keep costs predictable. Until now, Microsoft Sentinel let you set alerts to get notified when data lake usage approached a threshold — useful for awareness, but not enough to prevent budget overruns. Today, we're excited to announce threshold enforcement for KQL queries and notebooks in the Microsoft Sentinel data lake. With this release, you can go beyond notifications and automatically block new queries and jobs when your configured usage limits are exceeded. Your analysts keep working confidently, and your budgets stay protected.

What's new

Previously, the Configure Policies experience in Microsoft Sentinel let you set threshold-based alerts for data lake usage. You'd receive an email notification when consumption approached a limit — but nothing stopped usage from continuing past that point. Now, you can enable enforcement on those same policies. When enforcement is turned on and a threshold is exceeded, Microsoft Sentinel blocks new queries, jobs, and notebook sessions with a clear "Limit exceeded" error. No more surprise cost spikes from runaway queries or analysts who mistakenly run heavy workloads against data lake data.

Enforcement is supported for two data lake capability categories:

Data Lake Query — interactive KQL queries and KQL jobs (scheduled and ad hoc)
Advanced Data Insights — notebook runs and notebook jobs

How it works

Consistent controls across KQL queries and notebooks

Cost controls are enforced consistently across Sentinel data lake workloads, regardless of how analysts access the data. The same policy applies whether someone is running a quick investigation or executing a long-running job.
Controls apply to:

Interactive KQL queries in the data lake explorer in the Defender portal
KQL jobs, including scheduled and ad hoc jobs
Notebook queries run through the Microsoft Sentinel VS Code extension
Notebook jobs running as background or scheduled workloads

This ensures advanced analytics remain powerful — but predictable and governed.

Clear enforcement without disruption

Enforcement is applied at execution and validation boundaries — not retroactively. This means:

Queries or jobs already running are not interrupted. In-flight work completes normally.
New queries, jobs, or notebook sessions are blocked once limits are exceeded.
Failures occur early (for example, during validation), avoiding wasted compute.

From an analyst's perspective, enforcement is explicit and consistent. Clear messaging appears in query editors, job validation responses, and notebooks when limits are reached — so your team always understands what happened and what to do next.

How to set it up

Prerequisites

To configure enforcement policies, ensure you have the necessary permissions outlined here: Manage and monitor costs for Microsoft Sentinel | Microsoft Learn.

Where to access

Navigate to Microsoft Sentinel > Cost management > Configure Policies in the Microsoft Defender portal (https://security.microsoft.com).

Step-by-step configuration

1. In Microsoft Sentinel > Cost management, select Configure Policies.
2. Select the policy you want to edit (Data Lake Query or Advanced Data Insights).
3. Enter the total threshold value for the policy.
4. Enter an alert percentage to receive email notifications before the threshold is reached.
5. Enable the Enforcement toggle to block usage after the threshold is exceeded.
6. Review your settings and select Submit.

Once enforcement is active, administrators receive advance notifications as usage approaches the threshold.
If circumstances change — for example, during an active breach — you can adjust the threshold, disable enforcement temporarily, or modify the policy to give your SOC the room it needs to respond without being blocked.

Real-world scenario: Preventing unexpected cost spikes

Consider a large SOC that ingests roughly 6 TB of data per day, with 1 TB going to the Sentinel Analytics tier and the remaining 5 TB going to the Sentinel data lake. Analysts are proactively hunting for threats, performing investigations, and running automation. Tier 3 analysts are also running Jupyter Notebooks against the Sentinel data lake to build graphs, execute queries, and automate incident investigation and remediation with code. Last month, the SOC experienced a cost spike after a newly hired analyst ran large, frequent queries against data lake data — mistakenly thinking it was the Analytics tier. The SOC manager needs to prevent this from happening again.

With enforcement now available, the SOC manager can navigate to Microsoft Sentinel > Cost management > Configure Policies in the Defender portal and set up two policies:

A Data Lake Query policy to cap data processing for KQL queries
An Advanced Data Insights policy to cap notebook compute consumption

With these policies in place, the SOC manager is notified in advance when consumption approaches the threshold, with confidence that the thresholds will be enforced to prevent unexpected consumption and cost. Analysts can continue their day-to-day work without worrying about accidental overages. Should a breach scenario demand more capacity, the SOC manager can quickly adjust or temporarily disable the policies — keeping the team unblocked while maintaining overall budget governance. Outside of a breach scenario, should the same SOC analyst again generate a large amount of scanned data, the threshold will take effect and block the queries.
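The policy semantics in this scenario can be modelled in a few lines. This is a toy simulation, with illustrative thresholds and units, of the behaviour described above (alert at a percentage, block new work once the threshold is exceeded, never interrupt work that was already accepted); the real enforcement happens inside Microsoft Sentinel, not in client code.

```python
# Toy model of a Data Lake Query enforcement policy. Class name, units (GB),
# and the usage meter are illustrative assumptions for this sketch.

class LimitExceededError(Exception):
    pass

class DataLakeQueryPolicy:
    def __init__(self, threshold_gb: float, alert_pct: int, enforce: bool):
        self.threshold_gb = threshold_gb
        self.alert_pct = alert_pct
        self.enforce = enforce
        self.used_gb = 0.0

    def submit_query(self, estimated_gb: float) -> str:
        # New work is blocked only once the threshold is already exceeded;
        # a query accepted here is never interrupted retroactively.
        if self.enforce and self.used_gb >= self.threshold_gb:
            raise LimitExceededError("Limit exceeded: data lake query threshold reached")
        self.used_gb += estimated_gb
        if self.used_gb >= self.threshold_gb * self.alert_pct / 100:
            print(f"alert: {self.used_gb:.0f} GB of {self.threshold_gb:.0f} GB used")
        return "accepted"
```

For example, with a 100 GB threshold and an 80% alert percentage, a 60 GB query is accepted silently, a following 50 GB query is accepted but triggers the alert, and any query after that is rejected with the "Limit exceeded" error.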
Learn more

With enforceable KQL and notebook guardrails, Microsoft Sentinel data lake helps security teams scale advanced analytics with confidence. You can control usage in production and keep investigations moving — without tradeoffs between visibility, analytics, and budget. To get started, visit the documentation: Manage and monitor costs for Microsoft Sentinel | Microsoft Learn

We'd love to hear your feedback. Share your thoughts in the comments below or reach out through your usual Microsoft support channels.

Microsoft Sentinel MCP Entity Analyzer: Explainable risk analysis for URLs and identities
What makes this release important is not just that it adds another AI feature to Sentinel. It changes the implementation model for enrichment and triage. Instead of building and maintaining a chain of custom playbooks, KQL lookups, threat intel checks, and entity correlation logic, SOC teams can call a single analyzer that returns a reasoned verdict and supporting evidence. Microsoft positions the analyzer as available through Sentinel MCP server connections for agent platforms and through Logic Apps for SOAR workflows, which makes it useful both for interactive investigations and for automated response pipelines.

Why this matters

First, it formalizes Entity Analyzer as a production feature rather than a preview experiment. Second, it introduces a real cost model, which means organizations now need to govern usage instead of treating it as a free enrichment helper. Third, Microsoft’s documentation is now detailed enough to support repeatable implementation patterns, including prerequisites, limits, required tables, Logic Apps deployment, and cost behavior.

From a SOC engineering perspective, Entity Analyzer is interesting because it focuses on explainability. Microsoft describes the feature as generating clear, explainable verdicts for URLs and user identities by analyzing multiple modalities, including threat intelligence, prevalence, and organizational context. That is a much stronger operational model than simple point-enrichment because it aims to return an assessment that analysts can act on, not just more raw evidence.

What Entity Analyzer actually does

The Entity Analyzer tools are described as AI-powered tools that analyze data in the Microsoft Sentinel data lake and provide a verdict plus detailed insights on URLs, domains, and user entities. Microsoft explicitly says these tools help eliminate the need for manual data collection and complex integrations usually required for investigation and enrichment. That positioning is important.
In practice, many SOC teams have built enrichment playbooks that fetch sign-in history, query TI feeds, inspect click data, read watchlists, and collect relevant alerts. Those workflows work, but they create maintenance overhead and produce inconsistent analyst experiences. Entity Analyzer centralizes that reasoning layer.

For user entities, Microsoft’s preview architecture explains that the analyzer retrieves sign-in logs, security alerts, behavior analytics, cloud app events, identity information, and Microsoft Threat Intelligence, then correlates those signals and applies AI-based reasoning to produce a verdict. Microsoft lists verdict examples such as Compromised, Suspicious activity found, and No evidence of compromise, and also warns that AI-generated content may be incorrect and should be checked for accuracy. That warning matters. The right way to think about Entity Analyzer is not “automatic truth,” but “high-value, explainable triage acceleration.” It should reduce analyst effort and improve consistency, while still fitting into human review and response policy.

Under the hood: the implementation model

Technically, Entity Analyzer is delivered through the Microsoft Sentinel MCP data exploration tool collection. Microsoft documents that entity analysis is asynchronous: you start analysis, receive an identifier, and then poll for results. The docs note that analysis may take a few minutes and that the retrieval step may need to be run more than once if the internal timeout is not enough for long operations. That design has two immediate implications for implementers. First, this is not a lightweight synchronous enrichment call you should drop carelessly into every automation branch. Second, any production workflow should include retry logic, timeouts, and concurrency controls. If you ignore that, you will create fragile playbooks and unnecessary SCU burn.
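The start-then-poll pattern described above can be sketched as follows. This is a hedged illustration rather than the real API: start_analysis and get_analysis_result stand in for the actual MCP tool calls, whose names and signatures this sketch does not reproduce, and the client object is assumed to wrap whatever MCP transport you use.

```python
# Sketch of the asynchronous start-then-poll pattern with an overall timeout.
# The client interface (start_analysis / get_analysis_result) is an assumption.
import time

def analyze_entity(client, entity: dict, timeout_s: int = 300, poll_every_s: int = 15) -> dict:
    analysis_id = client.start_analysis(entity)           # returns an identifier
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = client.get_analysis_result(analysis_id)  # may need repeated calls
        if result.get("status") == "completed":
            return result
        time.sleep(poll_every_s)                          # analysis can take minutes
    raise TimeoutError(f"analysis {analysis_id} did not complete in {timeout_s}s")
```

A production version would add bounded retries on transient errors and a concurrency gate in front of this function, since multiple simultaneous analyses increase latency.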
The supported access path for the data exploration collection requires Microsoft Sentinel data lake and one of the supported MCP-capable platforms. Microsoft also states that access to the tools is supported for identities with at least Security Administrator, Security Operator, or Security Reader. The data exploration collection is hosted at the Sentinel MCP endpoint, and the same documentation notes additional Entity Analyzer roles related to Security Copilot usage.

The prerequisite many teams will miss

The most important prerequisite is easy to overlook: Microsoft Sentinel data lake is required. This is more than a licensing footnote. It directly affects data quality, analyzer usefulness, and rollout success. If your organization has not onboarded the right tables into the data lake, Entity Analyzer will either fail or return reduced-confidence output.

For user analysis, the following tables are required to ensure accuracy: AlertEvidence, SigninLogs, CloudAppEvents, and IdentityInfo. The documentation also notes that IdentityInfo depends on Defender for Identity, Defender for Cloud Apps, or Defender for Endpoint P2 licensing. The analyzer works best with AADNonInteractiveUserSignInLogs and BehaviorAnalytics as well. For URL analysis, the analyzer works best with EmailUrlInfo, UrlClickEvents, ThreatIntelIndicators, Watchlist, and DeviceNetworkEvents. If those tables are missing, the analyzer returns a disclaimer identifying the missing sources.

A practical architecture view

1. An incident, hunting workflow, or analyst identifies a high-interest URL or user.
2. A Sentinel MCP client or Logic App calls Entity Analyzer.
3. Entity Analyzer queries relevant Sentinel data lake sources and correlates the findings.
4. AI reasoning produces a verdict, evidence narrative, and recommendations.
5. The result is returned to the analyst, incident record, or automation workflow for next-step action.
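Given the table prerequisites above, a pre-flight check is worth encoding before invoking the analyzer. The required and recommended table lists below follow the documentation as summarized in this article; how you enumerate the tables onboarded to your data lake (the available_tables argument) is left as an assumption.

```python
# Pre-flight check for Entity Analyzer table prerequisites.
# Table lists per the documentation; enumeration of onboarded tables is assumed.

REQUIRED = {
    "User": {"AlertEvidence", "SigninLogs", "CloudAppEvents", "IdentityInfo"},
    # For URLs the docs describe degraded output with a disclaimer rather than
    # hard requirements, so nothing is treated as blocking here.
    "Url": set(),
}
RECOMMENDED = {
    "User": {"AADNonInteractiveUserSignInLogs", "BehaviorAnalytics"},
    "Url": {"EmailUrlInfo", "UrlClickEvents", "ThreatIntelIndicators",
            "Watchlist", "DeviceNetworkEvents"},
}

def preflight(entity_type: str, available_tables: set) -> dict:
    """Report missing required/recommended tables before calling the analyzer."""
    missing_required = sorted(REQUIRED[entity_type] - available_tables)
    missing_recommended = sorted(RECOMMENDED[entity_type] - available_tables)
    return {
        "ok": not missing_required,
        "missing_required": missing_required,
        "missing_recommended": missing_recommended,
    }
```

Running this once per workspace during rollout, and again whenever connectors change, catches the "analyzer returns a disclaimer about missing sources" case before analysts hit it.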
This model is especially valuable because it collapses a multi-query, multi-tool investigation pattern into a single explainable decisioning step.

Where it fits in real Sentinel operations

Entity Analyzer is not a replacement for analytics rules, UEBA, or threat intelligence. It is a force multiplier for them. For identity triage, it fits naturally after incidents triggered by sign-in anomaly detections, UEBA signals, or Defender alerts because it already consumes sign-in logs, cloud app events, and behavior analytics as core evidence sources. For URL triage, it complements phishing and click-investigation workflows because it uses TI, URL activity, watchlists, and device/network context.

Implementation path 1: MCP clients and security agents

Microsoft states that Entity Analyzer integrates with agents through Sentinel MCP server connections to first-party and third-party AI runtime platforms. In practice, this makes it attractive for analyst copilots, engineering-side investigation agents, and guided triage experiences.

The benefit of this model is speed. A security engineer or analyst can invoke the analyzer directly from an MCP-capable client without building a custom orchestration layer. The tradeoff is governance: once you make the tool widely accessible, you need a clear policy for who can run it, when it should be used, and how results are validated before action is taken.

Implementation path 2: Logic Apps and SOAR playbooks

For SOC teams, Logic Apps is likely the most immediately useful deployment model. Microsoft documents an entity analyzer action inside the Microsoft Sentinel MCP tools connector and provides the required parameters for adding it to an existing logic app.
These include:

Workspace ID
Look Back Days
Properties payload for either URL or User

The documented payloads are straightforward:

{ "entityType": "Url", "url": "[URL]" }

and

{ "entityType": "User", "userId": "[Microsoft Entra object ID or User Principal Name]" }

The documentation also states that the connector supports Microsoft Entra ID, service principals, and managed identities, and that the Logic App identity requires Security Reader to operate. This makes playbook integration a strong pattern for incident enrichment. A high-severity incident can trigger a playbook, extract entities, invoke Entity Analyzer, and post the verdict back to the incident as a comment or decision artifact.

The concurrency lesson most people will learn the hard way

Microsoft gives unusually direct guidance on concurrency: to avoid timeouts and threshold issues, turn on Concurrency control in Logic Apps loops and start with a degree of parallelism of 5. The data exploration doc repeats the same guidance, stating that running multiple instances at once can increase latency and recommending starting with a maximum of five concurrent analyses. This is a strong indicator that the correct implementation pattern is selective analysis, not blanket analysis. Do not analyze every entity in every incident. Analyze the entities that matter most:

external URLs in phishing or delivery chains
accounts tied to high-confidence alerts
entities associated with high-severity or high-impact incidents
suspicious users with multiple correlated signals

That keeps latency, quota pressure, and SCU consumption under control.

KQL still matters

Entity Analyzer does not eliminate KQL. It changes where KQL adds value. Before running the analyzer, KQL is still useful for scoping and selecting the right entities. After the analyzer returns, KQL is useful for validation, deeper hunting, and building custom evidence views around the analyzer’s verdict.
For example, a simple sign-in baseline for a target user:

let TargetUpn = "email address removed for privacy reasons";
SigninLogs
| where TimeGenerated between (ago(7d) .. now())
| where UserPrincipalName == TargetUpn
| summarize Total=count(), Failures=countif(ResultType != "0"), Successes=countif(ResultType == "0"), DistinctIPs=dcount(IPAddress), Apps=make_set(AppDisplayName, 20) by bin(TimeGenerated, 1d)
| order by TimeGenerated desc

And a lightweight URL prevalence check:

let TargetUrl = "omicron-obl.com";
UrlClickEvents
| where TimeGenerated between (ago(7d) .. now())
| where Url has TargetUrl
| take 50

Cost, billing, and governance

GA is where technical excitement meets budget reality. Microsoft’s Sentinel billing documentation says there is no extra cost for the MCP server interface itself. However, for Entity Analyzer, customers are charged for the SCUs used for AI reasoning and also for the KQL queries executed against the Microsoft Sentinel data lake. Microsoft further states that existing Security Copilot entitlements apply. The April 2026 “What’s new” entry also explicitly says that starting April 1, 2026, customers are charged for the SCUs required when using Entity Analyzer.

That means every rollout should include a governance plan:

define who can invoke the analyzer
decide when playbooks are allowed to call it
monitor SCU consumption
limit unnecessary repeat runs
preserve results in incident records so you do not rerun the same analysis within a short period

Microsoft’s MCP billing documentation also defines service limits: 200 total runs per hour, 500 total runs per day, and around 15 concurrent runs every five minutes, with analysis results available for one hour. Those are not just product limits. They are design requirements.

Limitations you should state clearly

The analyze_user_entity tool supports a maximum time window of seven days and only works for users with a Microsoft Entra object ID.
On-premises Active Directory-only users are not supported for user analysis. Microsoft also says Entity Analyzer results expire after one hour and that the tool collection currently supports English prompts only.

Recommended rollout pattern

If I were implementing this in a production SOC, I would phase it like this:

1. Start with a narrow set of high-value use cases, such as suspicious user identities and phishing-related URLs.
2. Confirm that the required tables are present in the data lake.
3. Deploy a Logic App enrichment pattern for incident-triggered analysis.
4. Add concurrency control and retry logic.
5. Persist returned verdicts into incident comments or case notes.
6. Review SCU usage and analyst value before expanding coverage.
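The documented service limits (200 runs per hour, 500 runs per day) can also be guarded client-side so playbooks never hit the service's rejection path. The sliding-window counter below is my own implementation assumption, not part of the product; the service enforces its own limits regardless of what the client does.

```python
# Client-side guard for the documented run limits (200/hour, 500/day).
# Sliding-window counting is an assumption of this sketch.
import time
from collections import deque

class RunBudget:
    def __init__(self, per_hour: int = 200, per_day: int = 500):
        self.per_hour = per_hour
        self.per_day = per_day
        self.runs = deque()  # timestamps of accepted runs, oldest first

    def try_acquire(self, now=None) -> bool:
        """Return True and record the run if both windows have headroom."""
        now = time.time() if now is None else now
        while self.runs and now - self.runs[0] > 86_400:   # drop runs older than a day
            self.runs.popleft()
        last_hour = sum(1 for t in self.runs if now - t <= 3_600)
        if last_hour >= self.per_hour or len(self.runs) >= self.per_day:
            return False
        self.runs.append(now)
        return True
```

Wiring this in front of the Entity Analyzer call in a playbook turns a hard service-side failure into a graceful "defer and retry later" branch, which pairs well with the selective-analysis criteria listed earlier.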