Running KQL queries on Microsoft Sentinel data lake using API
Co-Authors: Zeinab Mokhtarian Koorabbasloo and Matthew Lowe

As security data lakes become the backbone of modern analytics platforms, organizations need new ways to operationalize their data. While interactive tools and portals support data exploration, many real-world workflows increasingly require flexible programmatic access that enables automation, scale, and seamless integration. By running KQL (Kusto Query Language) queries on the Microsoft Sentinel data lake through APIs, you can embed analytics directly into automation workflows, background services, and intelligent agents, without relying on manual query execution. In this post, we explore API-based KQL query execution, review the scenarios where it delivers the most value, and cover what you need to get started.

Why run KQL queries on Sentinel data lake via API?

Traditional query experiences, such as dashboards and query editors, are optimized for human interaction. APIs, on the other hand, are optimized for systems. Running KQL through an API enables:

- Automation-first analytics
- Repeatable and scheduled insights
- Integration with external systems and agents
- Consistent query execution at scale

Instead of asking "How do I run this query?", our customers are asking "How do I embed analytics into my workflow?"

Scenarios where API-based KQL queries add value

Automated monitoring and alerting

SOC teams often want to continuously analyze data in their lake to detect anomalies, trends, or policy violations. With API-based KQL execution, they can:

- Run queries as part of automated workflows and playbooks
- Evaluate query results programmatically
- Trigger downstream actions such as alerts, tickets, or notifications

This turns KQL into a signal engine, not just an exploration tool.

Powering intelligent agents

AI agents require programmatic access to data lakes to retrieve timely, relevant context for decision making.
Using KQL over an API allows agents to:

- Dynamically query the data lake based on user intent or system context
- Retrieve aggregated or filtered results on demand
- Combine analytical results with reasoning and decision logic

In this model, KQL acts as the analytical retrieval layer, while the agent focuses on orchestration, reasoning, and action.

Embedding analytics into business workflows

Many organizations want analytics embedded directly into CI/CD and operational pipelines. Instead of exporting data or duplicating logic, they can:

- Run KQL queries inline via API
- Use results as inputs to other systems
- Keep analytics logic centralized and consistent

This reduces drift between "analytics code" and "application code."

High-level flow: what happens when you run KQL via API

At a conceptual level, the flow looks like this:

1. A client authenticates to the Microsoft Sentinel data lake platform.
2. The client submits a KQL query via an API.
3. The query executes against data stored in the data lake.
4. Results are returned in a structured, machine-readable format.
5. The client processes or acts on the results.

Prerequisites

To run KQL queries against the Sentinel data lake using APIs, you will need:

- A user token or a service principal
- Appropriate permissions to execute queries on the Sentinel data lake. Azure RBAC roles such as Log Analytics Reader or Log Analytics Contributor on the workspace are needed.
- Familiarity with KQL and API-based query execution patterns

Scenario 1: Execute a KQL query via API within a Playbook

The following Sentinel SOAR playbook example demonstrates how data within the Sentinel data lake can be used within automation. This example leverages a service principal to query the DeviceNetworkEvents logs within the Sentinel data lake, enriching an incident involving a device before taking action on it.
Within this playbook, the entities involved in the incident are retrieved, then queries are executed against the Sentinel data lake to gain insights on each host involved. In this example, the API call to the Sentinel data lake retrieves events from the DeviceNetworkEvents table, looking for network connections on the host where the remote IP originated from outside of the United States. As this action does not have a gallery artifact within Azure Logic Apps, it must be built using the HTTP action offered within Logic Apps. This action requires the details of the API call as well as the authentication details that will be used to run it. The step that executes the query leverages the Sentinel data lake API by performing the following call:

POST https://api.securityplatform.microsoft.com/lake/kql/v2/rest/query

The service principal being used has read permissions on the Sentinel data lake that contains the relevant details and authenticates via Entra ID OAuth when running the API call.

NOTE: When using API calls to query the Sentinel data lake, use 4500ebfb-89b6-4b14-a480-7f749797bfcd/.default as the scope/audience when retrieving a token for the service principal. This GUID is associated with the query service for the Sentinel data lake.
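As a minimal sketch of the note above, the snippet below builds the authority and scope values a service principal would pass to MSAL when requesting a token for the query service. The helper name `lake_token_request` is hypothetical, and the tenant ID is a placeholder:

```python
# App ID of the Sentinel data lake query service (the GUID from the note above)
SENTINEL_LAKE_APP_ID = "4500ebfb-89b6-4b14-a480-7f749797bfcd"

def lake_token_request(tenant_id: str) -> dict:
    """Return the authority and scope to pass to an MSAL
    ConfidentialClientApplication when authenticating a service
    principal for data lake queries (resource GUID + "/.default")."""
    return {
        "authority": f"https://login.microsoftonline.com/{tenant_id}",
        "scopes": [f"{SENTINEL_LAKE_APP_ID}/.default"],
    }

# Placeholder tenant -- substitute your own
params = lake_token_request("<tenant-id>")
print(params["scopes"][0])  # -> 4500ebfb-89b6-4b14-a480-7f749797bfcd/.default
```

The resulting values plug directly into `msal.ConfidentialClientApplication(...)` and `acquire_token_for_client(...)`, as the full Scenario 2 code sample demonstrates.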
The body of the query is the following:

```json
{
  "csl": "DeviceNetworkEvents | where TimeGenerated >= ago(30d) | where DeviceName has '' | where ActionType in (\"ConnectionSuccess\", \"ConnectionAttempted\", \"InboundConnectionAccepted\") | extend GeoInfo = geo_info_from_ip_address(RemoteIP) | extend Country = tostring(GeoInfo.country), State = tostring(GeoInfo.state), City = tostring(GeoInfo.city) | where Country != 'United States' and RemoteIP !has '127.0.0.1' | project TimeGenerated, DeviceName, ActionType, RemoteIP, RemotePort, RemoteUrl, City, State, Country, InitiatingProcessFileName | order by TimeGenerated desc | top 2 by DeviceName",
  "db": "WORKSPACENAMEHERE – WORKSPACEIDHERE"
}
```

Within this body, the query and workspace are defined. "csl" represents the query to run against the Sentinel data lake and "db" represents the Sentinel workspace/lake. This value is a combination of the workspace name and the workspace ID. Both of these values can be found on the workspace overview blade within Azure.

NOTE: The query must be on a single line in the JSON. Multi-line values will not be seen as valid JSON.

With this, initial investigative querying via the Sentinel data lake has been done the moment the incident is triggered, allowing the responding SOC analyst to expedite their investigation and validate that the automated action of disabling the account was justified. For this playbook, the results gathered from the Sentinel data lake were placed into a comment and added to the incident within Defender, allowing SOC analysts to quickly review relevant details when beginning their work.

Scenario 2: Execute a KQL query via API in code

The following Python example demonstrates how to use a service principal to execute a KQL query on the Sentinel data lake via API. This example is provided for illustration purposes, but you can also call the API directly via common API tools. Within the code, the query and workspace are defined.
"csl" represents the query to run against the Sentinel data lake and "db" represents the Sentinel workspace/lake. This value is a combination of the workspace name and the workspace ID. Both of these values can be found on the workspace overview blade within Azure. You also need a user token or a service principal.

```python
import requests
import msal

# ====== SPN / Entra app settings ======
TENANT_ID = ""
CLIENT_ID = ""
CLIENT_SECRET = ""

# Token authority
AUTHORITY = f"https://login.microsoftonline.com/{TENANT_ID}"

# ---- IMPORTANT ----
# Most APIs use the resource + "/.default" pattern for client credentials.
SCOPE = ["4500ebfb-89b6-4b14-a480-7f749797bfcd/.default"]

# ====== KQL query payload ======
KQL_QUERY = {
    "csl": "SigninLogs | take 10",
    "db": "workspace1-12345678-abcd-abcd-1234-1234567890ab",
    "properties": {
        "Options": {
            "servertimeout": "00:04:00",
            "queryconsistency": "strongconsistency",
            "query_language": "kql",
            "request_readonly": False,
            "request_readonly_hardline": False
        }
    }
}

# ====== Acquire token using client credentials ======
app = msal.ConfidentialClientApplication(
    client_id=CLIENT_ID,
    authority=AUTHORITY,
    client_credential=CLIENT_SECRET
)

result = app.acquire_token_for_client(scopes=SCOPE)
if "access_token" not in result:
    raise RuntimeError(
        f"Token acquisition failed: {result.get('error')} - {result.get('error_description')}"
    )

access_token = result["access_token"]

# ====== Call the KQL API ======
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json"
}
url = "https://api.securityplatform.microsoft.com/lake/kql/v2/rest/query"  # same endpoint as Scenario 1

response = requests.post(url, headers=headers, json=KQL_QUERY)
if response.status_code == 200:
    print("Query Results:")
    print(response.json())
else:
    print(f"Error {response.status_code}: {response.text}")
```

In summary, you need the following parameters in your API call:

Request URI: https://api.securityplatform.microsoft.com/lake/kql/v2/rest/query
Method: POST
Sample payload:
```json
{
  "csl": "SigninLogs | take 10",
  "db": "workspace1-12345678-abcd-abcd-1234-1234567890ab"
}
```

Limitations and considerations

Keep the following considerations in mind when planning to execute KQL queries on a data lake:

Service principal permissions: When using a service principal, Azure RBAC roles can be assigned at the Sentinel workspace level. Entra ID roles and XDR unified RBAC roles are not supported for this scenario. Alternatively, user tokens with Entra ID roles can be used.

Result size limits: Queries are subject to limits on execution time and response size. Review the Microsoft Sentinel data lake query service limits when designing your workflows.

Summary

Running KQL queries on the Sentinel data lake via APIs unlocks a new class of scenarios, from intelligent agents to fully automated analytics pipelines. By decoupling query execution from user interfaces, customers gain flexibility, scalability, and control over how insights are generated and consumed. If you're already using KQL for interactive analysis, API access is the natural next step toward production-grade analytics. Happy hunting!

Resources

Run KQL queries on Sentinel data lake: Run KQL queries against the Microsoft Sentinel data lake - Microsoft Security | Microsoft Learn
Service parameters and limits: Microsoft Sentinel data lake service limits - Microsoft Security | Microsoft Learn

Estimate Microsoft Sentinel Costs with Confidence Using the New Sentinel Cost Estimator
One of the first questions teams ask when evaluating Microsoft Sentinel is simple: what will this actually cost? Today, many customers and partners estimate Sentinel costs using the Azure Pricing Calculator, but it doesn't provide the Sentinel-specific usage guidance needed to understand how each Sentinel meter contributes to overall spend. As a result, it can be hard to produce accurate, trustworthy estimates, especially early on, when you may not know every input upfront. To make these conversations easier and budgets more predictable, Microsoft is introducing the new Sentinel Cost Estimator (public preview) for Microsoft customers and partners. The Sentinel Cost Estimator gives organizations better visibility into spend and more confidence in budgeting as they operate at scale. You can access the Microsoft Sentinel Cost Estimator here: https://microsoft.com/en-us/security/pricing/microsoft-sentinel/cost-estimator

What the Sentinel Cost Estimator does

The new Sentinel Cost Estimator makes pricing transparent and predictable for Microsoft customers and partners. It helps you understand what drives costs at a meter level and ensures your estimates are accurate with step-by-step guidance. You can model multi-year estimates with built-in projections for up to three years, making it easy to anticipate data growth, plan for future spend, and avoid budget surprises as your security operations mature. Estimates can be easily shared with finance and security teams to support better budgeting and planning.
When to Use the Sentinel Cost Estimator

Use the Sentinel Cost Estimator to:

- Model ingestion growth over time as new data sources are onboarded
- Explore tradeoffs between Analytics and Data Lake storage tiers
- Understand the impact of retention requirements on total spend
- Estimate compute usage for notebooks and advanced queries
- Project costs across a multi-year deployment timeline

For broader Azure infrastructure cost planning, the Azure Pricing Calculator can still be used alongside the Sentinel Cost Estimator.

Cost Estimator Example

Let's walk through a practical example using the Cost Estimator. A medium-sized company that is new to Microsoft Sentinel wants a high-level estimate of expected costs. In their previous SIEM, they performed proactive threat hunting across identity, endpoint, and network logs; ran detections on high-security-value data sources from multiple vendors; built a small set of dashboards; and required three years of retention for compliance and audit purposes. Based on their prior SIEM, they estimate they currently ingest about 2 TB per day.

In the Cost Estimator, they select their region and enter their daily ingestion volume. As they are not currently using Sentinel data lake, they can explore different ways of splitting ingestion between tiers to understand the potential cost benefit of using the data lake. Their retention requirement is three years. If they choose to use Sentinel data lake, they can plan to retain 90 days in the Analytics tier (included with Microsoft Sentinel) and keep the remaining data in Sentinel data lake for the full three years. As notebooks are new to them, they plan to evaluate notebooks for SOC workflows and graph building. They expect to start in the light usage tier and may move to medium as they mature. Since they occasionally query data older than 90 days to build trends, and anticipate using the Sentinel MCP server for SOC workflows on Sentinel lake data, they expect to start in the medium query volume tier.
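To make the retention split in this walkthrough concrete, here is a small illustrative calculation. It uses the assumed figures from the example (2 TB/day, 90 days in the Analytics tier, three years total) and computes data volumes only; it says nothing about actual Sentinel prices or meters:

```python
# Illustrative steady-state retention volumes for the example above.
# Assumed inputs, not pricing guidance.
DAILY_INGEST_TB = 2               # estimated daily ingestion
ANALYTICS_RETENTION_DAYS = 90     # kept in the Analytics tier
TOTAL_RETENTION_DAYS = 3 * 365    # three-year compliance requirement

# Volume resident in each tier once retention reaches steady state
analytics_tb = DAILY_INGEST_TB * ANALYTICS_RETENTION_DAYS
lake_only_tb = DAILY_INGEST_TB * (TOTAL_RETENTION_DAYS - ANALYTICS_RETENTION_DAYS)

print(f"Analytics tier: ~{analytics_tb} TB")            # ~180 TB
print(f"Data lake beyond 90 days: ~{lake_only_tb} TB")  # ~2010 TB
```

Back-of-envelope numbers like these help sanity-check the estimator's inputs before sharing an estimate with finance teams.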
Note: These tiers are for estimation purposes only; they do not lock in pricing when using the Microsoft Sentinel platform. Because this customer is upgrading from Microsoft 365 E3 to E5, they may be eligible for free ingestion based on their user count. Combined with their eligible server data from Defender for Servers, this can reduce their billable ingestion.

In the review step, the Cost Estimator projects costs across a three-year window and breaks down drivers such as data tiers, commitment tiers, and comparisons with alternative storage options. From there, the customer can go back to earlier steps to adjust inputs and explore different scenarios. Once done, the estimate report can be exported for reference with Microsoft representatives and internal leadership when discussing the deployment of Microsoft Sentinel and the Sentinel platform.

Finalize Your Estimate with Microsoft

The Microsoft Sentinel Cost Estimator is designed to provide directional guidance and help organizations understand how architectural decisions may influence cost. Final pricing may vary based on factors such as deployment architecture, commitment tiers, and applicable discounts. We recommend working with your Microsoft account team or a Security sales specialist to develop a formal proposal tailored to your organization's requirements.

Try the Microsoft Sentinel Cost Estimator

Start building your Microsoft Sentinel cost estimate today: https://microsoft.com/en-us/security/pricing/microsoft-sentinel/cost-estimator

How to Ingest Microsoft Intune Logs into Microsoft Sentinel
For many organizations using Microsoft Intune to manage devices, integrating Intune logs into Microsoft Sentinel is essential for security operations (incorporating the device into the SIEM). By routing Intune's device management and compliance data into your central SIEM, you gain a unified view of endpoint events and can set up alerts on critical Intune activities, e.g., devices falling out of compliance or policy changes. This unified monitoring helps security and IT teams detect issues faster, correlate Intune events with other security logs for threat hunting, and improve compliance reporting. We're publishing these best practices to help unblock common customer challenges in configuring Intune log ingestion. In this step-by-step guide, you'll learn how to successfully send Intune logs to Microsoft Sentinel, so you can fully leverage Intune data for enhanced security and compliance visibility.

Prerequisites and Overview

Before configuring log ingestion, ensure the following prerequisites are in place:

- Microsoft Sentinel-enabled workspace: A Log Analytics workspace with Microsoft Sentinel enabled. For information on setting up a workspace and onboarding Microsoft Sentinel, see: Onboard Microsoft Sentinel - Log Analytics workspace overview. Microsoft Sentinel is now available in the Defender portal; connect your Microsoft Sentinel workspace to the Defender portal: Connect Microsoft Sentinel to the Microsoft Defender portal - Unified security operations.
- Intune Administrator permissions: You need appropriate rights to configure Intune diagnostic settings. For information, see: Microsoft Entra built-in roles - Intune Administrator.
- Log Analytics Contributor role: The account configuring diagnostics should have permission to write to the Log Analytics workspace. For more information on the different roles and what they can do, see Manage access to log data and workspaces in Azure Monitor.
- Intune diagnostic logging enabled: Ensure that Intune diagnostic settings are configured to send logs to Azure Monitor / Log Analytics, and that devices and users are enrolled in Intune so that relevant management and compliance events are generated. For more information, see: Send Intune log data to Azure Storage, Event Hubs, or Log Analytics.

Configure Intune to Send Logs to Microsoft Sentinel

1. Sign in to the Microsoft Intune admin center and select Reports > Diagnostic settings. If it's your first time here, you may be prompted to "Turn on" diagnostic settings for Intune; enable it if so. Then click "+ Add diagnostic setting" to create a new setting.
2. Select Intune log categories. In the "Diagnostic setting" configuration page, give the setting a name (e.g., "Microsoft Sentinel Intune Logs Demo"). Under Logs to send, you'll see checkboxes for each Intune log category. Select the categories you want to forward; for comprehensive monitoring, check AuditLogs, OperationalLogs, DeviceComplianceOrg, and Devices. Each selected log category will be sent to a table in the Microsoft Sentinel workspace.
3. Configure destination details. Under Destination details on the same page, select your Azure subscription, then select the Microsoft Sentinel workspace.
4. Save the diagnostic setting. After you click save, the Microsoft Intune logs will be streamed to four tables in the Analytics tier. For pricing on the Analytics tier, see: Plan costs and understand pricing and billing.

Verify Data in Microsoft Sentinel

After configuring Intune to send diagnostic data to a Microsoft Sentinel workspace, it's crucial to verify that the Intune logs are successfully flowing into Microsoft Sentinel. You can do this by checking specific Intune log tables both in the Microsoft 365 Defender portal and in the Azure portal.
The key tables to verify are:

- IntuneAuditLogs
- IntuneOperationalLogs
- IntuneDeviceComplianceOrg
- IntuneDevices

Microsoft 365 Defender Portal (Unified)

1. Open Advanced Hunting: Sign in to https://security.microsoft.com (the unified portal) and navigate to Advanced Hunting. This opens the unified query editor where you can search across Microsoft Defender data and any connected Sentinel data.
2. Find Intune tables: In the Advanced hunting Schema pane (on the left side of the query editor), scroll down past the Microsoft Sentinel tables. Under the LogManagement section, look for IntuneAuditLogs, IntuneOperationalLogs, IntuneDeviceComplianceOrg, and IntuneDevices in the list.

Microsoft Sentinel in Defender Portal – Tables

Azure Portal (Microsoft Sentinel)

1. Navigate to Logs: Sign in to https://portal.azure.com and open Microsoft Sentinel. Select your Sentinel workspace, then click Logs (under General).
2. Find Intune tables: In the Logs query editor that opens, you'll see a Schema or tables list on the left. If it's collapsed, click >> to expand it. Scroll down to find LogManagement and expand it; look for these Intune-related tables: IntuneAuditLogs, IntuneOperationalLogs, IntuneDeviceComplianceOrg, and IntuneDevices.

Microsoft Sentinel in Azure Portal – Tables

Querying Intune Log Tables in Sentinel

Once the tables are present, use Kusto Query Language (KQL) in either portal to view and analyze Intune data. In the Defender portal's Advanced Hunting page, ensure the query editor is visible (select New query if needed) and run a simple KQL query such as:

IntuneDevices | take 5

Click Run query to display sample Intune device records. If results are returned, it confirms that Intune data is being ingested successfully. Note that querying across Microsoft Sentinel data in the unified Advanced Hunting view requires at least the Microsoft Sentinel Reader role.
In the Azure Logs blade, use the query editor to run a simple KQL query such as:

IntuneDevices | take 5

Select Run to view the results in a table showing sample Intune device data. If results appear, it confirms that your Intune logs are being collected successfully. You can select any record to view full event details and use KQL to further explore or filter the data, for example by querying IntuneDeviceComplianceOrg to identify devices that are not compliant, adjusting the query as needed.

Once Microsoft Intune logs are flowing into Microsoft Sentinel, the real value comes from transforming that raw device and audit data into actionable security signals. To achieve this, set up detection rules that continuously analyze the Intune logs and automatically flag risky or suspicious behavior. In practice, this means creating custom detection rules in the Microsoft Defender portal (part of the unified XDR experience; see https://learn.microsoft.com/en-us/defender-xdr/custom-detection-rules) and scheduled analytics rules in Microsoft Sentinel (in either the Azure portal or the unified Defender portal interface; see Create scheduled analytics rules in Microsoft Sentinel | Microsoft Learn). These detection rules will continuously monitor your Intune telemetry, tracking device compliance status, enrollment activity, and administrative actions, and will raise alerts whenever they detect suspicious or out-of-policy events. For example, you can be alerted if a large number of devices fall out of compliance, if an unusual spike in enrollment failures occurs, or if an Intune policy is modified by an unexpected account. Each alert generated by these rules becomes an incident in Microsoft Sentinel (and in the XDR Defender portal's unified incident queue), enabling your security team to investigate and respond through the standard SOC workflow.
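For teams that prefer code over the portal, the same kind of compliance check can also be issued programmatically. The sketch below assumes the Azure Monitor Log Analytics Query API at api.loganalytics.io (a separate endpoint from the data lake API discussed elsewhere on this blog); the helper name `build_query_request`, the workspace ID, and the bearer token are placeholders, and this is an illustrative sketch rather than an official integration pattern:

```python
import json

# KQL for non-compliant Intune devices in the last 24 hours
KQL = (
    "IntuneDeviceComplianceOrg"
    " | where TimeGenerated > ago(24h)"
    " | where ComplianceState != 'Compliant'"
    " | summarize NonCompliantCount = count() by DeviceName"
)

def build_query_request(workspace_id: str) -> tuple[str, str]:
    """Return the URL and JSON body for a Log Analytics API query."""
    url = f"https://api.loganalytics.io/v1/workspaces/{workspace_id}/query"
    return url, json.dumps({"query": KQL})

url, body = build_query_request("<workspace-id>")
print(url)

# To execute (requires an Entra ID token with Log Analytics read access):
#   import requests
#   requests.post(url,
#                 headers={"Authorization": f"Bearer {token}",
#                          "Content-Type": "application/json"},
#                 data=body)
```

Running checks like this on a schedule is one lightweight way to feed Intune compliance state into external reporting or ticketing systems.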
In turn, this converts raw Intune log data into high-value security insights: you'll achieve proactive detection of potential issues, faster investigation by pivoting on the enriched Intune data in each incident, and even automated response across your endpoints (for instance, by triggering playbooks or other automated remediation actions when an alert fires).

Use this detection logic to create a detection rule (the count is bucketed into hourly windows so that repeated non-compliance events for a device aggregate):

IntuneDeviceComplianceOrg
| where TimeGenerated > ago(24h)
| where ComplianceState != "Compliant"
| summarize NonCompliantCount = count() by DeviceName, bin(TimeGenerated, 1h)
| where NonCompliantCount > 3

Additional Tips: After confirming data ingestion and setting up alerts, you can leverage other Microsoft Sentinel features to get more value from your Intune logs. For example:

- Workbooks for Visualization: Create custom workbooks to build dashboards for Intune data (or check if community-contributed Intune workbooks are available). This can help you monitor device compliance trends and Intune activities visually.
- Hunting and Queries: Use advanced hunting (KQL queries) to proactively search through Intune logs for suspicious activities or trends. The unified Defender portal's Advanced Hunting page can query both Sentinel (Intune logs) and Defender data together, enabling correlation across Intune and other security data. For instance, you might join IntuneDevices data with Azure AD sign-in logs to investigate a device associated with risky sign-ins.
- Incident Management: Leverage Sentinel's Incidents view (in the Azure portal) or the unified Incidents queue in Defender to investigate alerts triggered by your new rules. Incidents in Sentinel (whether created in the Azure or Defender portal) will appear in the connected portal, allowing your security operations team to manage Intune-related alerts just like any other security incident.
- Built-in Rules & Content: Remember that Microsoft Sentinel provides many built-in analytics rule templates and Content Hub solutions.
While there isn't a native pre-built Intune content pack as of now, you can use general Sentinel features to monitor Intune data.

Frequently Asked Questions

If you've set everything up but don't see logs in Sentinel, run through these checks:

- Check Diagnostic Settings: Go to the Microsoft Intune admin center → Reports → Diagnostic settings. Make sure the setting is turned on and sending the right log categories to the correct Microsoft Sentinel workspace.
- Confirm the Right Workspace: Double-check that the correct Azure subscription and Microsoft Sentinel workspace are selected. If you have multiple tenants/directories, make sure you're in the right one.
- Verify Permissions.
- Make Sure Logs Are Being Generated: If no devices are enrolled or no actions have been taken, there may be nothing to log yet. Try enrolling a device or changing a policy to trigger logs.
- Check Your Queries: Make sure you're querying the correct workspace and time range in Microsoft Sentinel. Try a direct query like: IntuneAuditLogs | take 5
- Still Nothing? Try deleting and re-adding the diagnostic setting. Most issues come down to permissions or selecting the wrong workspace.

How long are Intune logs retained, and how can I keep them longer?

The analytics tier keeps data in the interactive retention state for 90 days by default, extensible for up to two years. This interactive state, while more expensive, allows you to query your data without limits and with high performance, at no charge per query: Log retention tiers in Microsoft Sentinel.

We hope this helps you successfully connect your resources and ingest Intune logs end-to-end into Microsoft Sentinel. If you have any questions, leave a comment below or reach out to us on X @MSFTSecSuppTeam!

How Granular Delegated Admin Privileges (GDAP) allows Sentinel customers to delegate access
Simplifying Defender SIEM and XDR delegated access

As Microsoft Sentinel and Defender converge into a unified experience, organizations face a fundamental challenge: the lack of a scalable, comprehensive delegated access model that works seamlessly across Entra ID and Sentinel's Azure Resource Manager resources, creating a significant barrier for Managed Security Service Providers (MSSPs) and large enterprises with complex multi-tenant structures.

Extending GDAP beyond CSPs: a strategic solution

In response to these challenges, we have developed an extension to GDAP that makes it available to all Sentinel and Defender customers, including non-CSP organizations. This expansion enables both MSSPs and customers with multi-tenant organizational structures to establish secure, granular delegated access relationships directly through the Microsoft Defender portal. This will be available in public preview in April 2026.

The GDAP extension aligns with zero-trust security principles through a three-way handshake model requiring explicit mutual consent between governing and governed tenants before any relationship is established. This consent-based approach enhances transparency and accountability, reducing risks associated with broad, uncontrolled permissions. By integrating with Microsoft Defender, GDAP enables advanced threat detection and response capabilities across tenant boundaries while maintaining granular permission management through Entra ID roles and Unified RBAC custom permissions.

Delivering unified management of delegated access across SIEM and XDR

With GDAP, customers gain a truly unified way to manage access across both Microsoft Sentinel and Defender, using a single, consistent delegated access model for SIEM and XDR. For Sentinel customers, this brings parity with the Azure portal experience: where delegated access was previously managed through Azure Lighthouse, it can now be handled directly in the Defender portal using GDAP.
More importantly, for organizations running SIEM and XDR together, GDAP eliminates the need to switch between portals, allowing teams to view, manage, and govern security access from one centralized experience. The result is simpler administration, reduced operational friction, and a more cohesive way to secure multi-tenant environments at scale.

How GDAP for non-CSPs works: the three-step handshake

The GDAP handshake model implements a security-first approach through three distinct steps, each requiring explicit approval to prevent unauthorized access.

Step 1 begins with the governed tenant initiating the relationship, allowing the governing tenant to request GDAP access.

Step 2 shifts control to the governing tenant, which creates and sends a delegated access request with specific requested permissions through the multi-tenant organization (MTO) portal.

Step 3 returns to the governed tenant for final approval. This approach provides customers with complete visibility and control over who can access their security data and with what permissions, while giving MSSPs a streamlined, Microsoft-supported mechanism for managing delegated relationships at scale.

After the handshake completes, Step 4 assigns Sentinel permissions: in Azure resource management, grant the governing tenant's groups permissions on Sentinel workspaces in the governed tenant, selecting the governing tenant's security groups used in the created relationship.

RSAC 2026: New Microsoft Sentinel Connectors Announcement
Microsoft Sentinel helps organizations detect, investigate, and respond to security threats across increasingly complex environments. With the rollout of the Microsoft Sentinel data lake in the fall, and the App Assure-backed Sentinel promise that supports it, customers now have access to long-term, cost-effective storage for security telemetry, creating a solid foundation for emerging agentic AI experiences. Since our last announcement at Ignite 2025, the Microsoft Sentinel connector ecosystem has expanded rapidly, reflecting continued investment from software development partners building for our shared customers. These connectors bring diverse security signals together, enabling correlation at scale and delivering richer investigation context across the Sentinel platform. Below is a snapshot of Microsoft Sentinel connectors newly available or recently enhanced since our last announcement, highlighting the breadth of partner solutions contributing data into, and extending the value of, the Microsoft Sentinel ecosystem.

New and notable integrations

Acronis Cyber Protect Cloud

Acronis Cyber Protect Cloud integrates with Microsoft Sentinel to bring data protection and security telemetry into a centralized SOC view. The connector streams alerts, events, and activity data, spanning backup, endpoint protection, and workload security, into Microsoft Sentinel for correlation with other signals. This integration helps security teams investigate ransomware and data-centric threats more effectively, leverage built-in hunting queries and detection rules, and improve visibility across managed environments without adding operational complexity.

Anvilogic

Anvilogic integrates with Microsoft Sentinel to help security teams operationalize detection engineering at scale. The connector streams Anvilogic alerts into Microsoft Sentinel, giving SOC analysts centralized visibility into high-fidelity detections and faster context for investigation and triage.
By unifying detection workflows, reducing alert noise, and improving prioritization, this integration supports more efficient threat detection and response while helping teams extend coverage across evolving attack techniques. BigID BigID integrates with Microsoft Sentinel to extend data security posture management (DSPM) insights into security operations workflows. The solution brings visibility into sensitive, regulated, and critical data across cloud, SaaS, and on‑premises environments, helping security teams understand where high‑risk data resides and how it may be exposed. By incorporating data‑centric risk context into Sentinel, this integration supports more informed investigation and prioritization, enabling organizations to reduce data‑related risk and align security operations with data protection and compliance objectives. Commvault Cloud Commvault Cloud integrates with Microsoft Sentinel to bring data protection and cyber‑resilience telemetry into security operations workflows. The connector ingests security‑relevant signals from Commvault Cloud—such as backup anomalies, malware and ransomware indicators, and other threat‑related events—into Sentinel, enabling centralized detection, investigation, and automated response. By correlating backup intelligence with broader Sentinel telemetry, this integration helps security teams reduce blind spots, validate the scope of incidents, and improve coordination between security and recovery operations. CyberArk Audit CyberArk Audit integrates with Microsoft Sentinel to centralize visibility into privileged identity and access activity. By streaming detailed audit logs - covering system events, user actions, and administrative activity - into Microsoft Sentinel, security teams can correlate identity-driven risks with broader security telemetry. 
This integration supports faster investigations, improved monitoring of privileged access, and more effective incident response through automated workflows and enriched context for SOC analysts. Cyera Cyera integrates with Microsoft Sentinel to extend AI-native data security posture management into security operations. The connector brings Cyera’s data context and actionable intelligence across multi-cloud, on-premises, and SaaS environments into Microsoft Sentinel, helping teams understand where sensitive data resides and how it is accessed, exposed, and used. Built on Sentinel’s modern framework, the integration feeds context-rich data risk signals into the Sentinel data lake, enabling more informed threat hunting, automation, and decision-making around data, user, and AI-related risk. TacitRed CrowdStrike IOC Automation Data443 TacitRed CS IOC Automation integrates with Microsoft Sentinel to streamline the operationalization of compromised credential intelligence. The solution uses Sentinel playbooks to automatically push TacitRed indicators of compromise into CrowdStrike, helping security teams turn identity-based threat intelligence into action. By automating IOC handling and reducing manual effort, this integration supports faster response to credential exposure and strengthens protection against account-driven attacks across the environment. TacitRed SentinelOne IOC Automation Data443 TacitRed SentinelOne IOC Automation integrates with Microsoft Sentinel to help operationalize identity-focused threat intelligence at the endpoint layer. The solution uses Sentinel playbooks to automatically consume TacitRed indicators and push curated indicators into SentinelOne via API-based enforcement, enabling faster enforcement of high-risk IOCs without manual handling.
By automating the flow of compromised credential intelligence from Sentinel into EDR, this integration supports quicker response to identity-driven attacks and improves coordination between threat intelligence and endpoint protection workflows. TacitRed Threat Intelligence Data443 TacitRed Threat Intelligence integrates with Microsoft Sentinel to provide enhanced visibility into identity-based risks, including compromised credentials and high-risk user exposure. The solution ingests curated TacitRed intelligence directly into Sentinel, enriching incidents with context that helps SOC teams identify credential-driven threats earlier in the attack lifecycle. With built-in analytics, workbooks, and hunting queries, this integration supports proactive identity threat detection, faster triage, and more informed response across the SOC. Cyren Threat Intelligence Cyren Threat Intelligence integrates with Microsoft Sentinel to enhance detection of network-based threats using curated IP reputation and malware URL intelligence. The connector ingests Cyren threat feeds into Sentinel using the Codeless Connector Framework (CCF), transforming raw indicators into actionable insights, dashboards, and enriched investigations. By adding context to suspicious traffic and phishing infrastructure, this integration helps SOC teams improve alert accuracy, accelerate triage, and make more confident response decisions across their environments. TacitRed Defender Threat Intelligence Data443 TacitRed Defender Threat Intelligence integrates with Microsoft Sentinel to surface early indicators of credential exposure and identity-driven risk. The solution automatically ingests compromised credential intelligence from TacitRed into Sentinel and can support synchronization of validated indicators with Microsoft Defender Threat Intelligence through Sentinel workflows, helping SOC teams detect account compromise before abuse occurs. 
By enriching Sentinel incidents with actionable identity context, this integration supports faster triage, proactive remediation, and stronger protection against credential-based attacks. Datawiza Access Proxy (DAP) Datawiza Access Proxy integrates with Microsoft Sentinel to provide centralized visibility into application access and authentication activity. By streaming access and MFA logs from Datawiza into Sentinel, security teams can correlate identity and session-level events with broader security telemetry. This integration supports detection of anomalous access patterns, faster investigation through session traceability, and more effective response using Sentinel automation, helping organizations strengthen Zero Trust controls and meet auditing and compliance requirements. Endace Endace integrates with Microsoft Sentinel to provide deep network visibility through always-on, packet-level evidence. The connector enables one-click pivoting from Sentinel alerts directly to recorded packet data captured by EndaceProbes. This helps SOC and NetOps teams reconstruct events and validate threats with confidence. By combining Sentinel’s AI-driven analytics with Endace’s always-on, full-packet capture across on-premises, hybrid, and cloud environments, this integration supports faster investigations, improved forensic accuracy, and more decisive incident response. Feedly Feedly integrates with Microsoft Sentinel to ingest curated threat intelligence directly into security operations workflows. The connector automatically imports Indicators of Compromise (IoCs) from Feedly Team Boards and folders into Sentinel, enriching detections and investigations with context from the original intelligence articles. By bringing analyst‑curated threat intelligence into Sentinel in a structured, automated way, this integration helps security teams stay current on emerging threats and reduce the manual effort required to operationalize external intelligence.
Gigamon Gigamon integrates with Microsoft Sentinel through a new connector that provides access to Gigamon Application Metadata Intelligence (AMI), delivering high-fidelity network-derived telemetry with rich application metadata from inspected traffic directly into Sentinel. This added context helps security teams detect suspicious activity, encrypted threats, and lateral movement faster and with greater precision. By enriching analytics without requiring full packet ingestion, organizations can reduce noise, manage SIEM costs, and extend visibility across hybrid cloud infrastructure. Halcyon Halcyon integrates with Microsoft Sentinel to provide purpose-built ransomware detection and automated containment across the Microsoft security ecosystem. The connector surfaces Halcyon ransomware alerts directly within Sentinel, enabling SOC teams to correlate ransomware behavior with Microsoft Defender and broader Microsoft telemetry. By supporting Sentinel analytics and automation workflows, this integration helps organizations detect ransomware earlier, investigate faster using native Sentinel tools, and isolate affected endpoints to prevent lateral spread and reinfection. Illumio The Illumio platform identifies and contains threats across hybrid multi-cloud environments. By integrating AI-driven insights with Microsoft Sentinel and Microsoft Graph, Illumio Insights enables SOC analysts to visualize attack paths, prioritize high-risk activity, and investigate threats with greater precision. Illumio Segmentation secures critical assets, workloads, and devices and then publishes segmentation policy back into Microsoft Sentinel to ensure compliance monitoring. Joe Sandbox Joe Sandbox integrates with Microsoft Sentinel to enrich incidents with dynamic malware and URL analysis. 
The connector ingests Joe Sandbox threat intelligence and automatically detonates suspicious files and URLs associated with Sentinel incidents, returning behavioral and contextual analysis results directly into investigation workflows. By adding sandbox-driven insights to indicators, alerts, and incident comments, this integration helps SOC teams validate threats faster, reduce false positives, and improve response decisions using deeper visibility into malicious behavior. Keeper Security The Keeper Security integration with Microsoft Sentinel brings advanced password and secrets management telemetry into your SIEM environment. By streaming audit logs and privileged access events from Keeper into Sentinel, security teams gain centralized visibility into credential usage and potential misuse. The connector supports custom queries and automated playbooks, helping organizations accelerate investigations, enforce Zero Trust principles, and strengthen identity security across hybrid environments. Lookout Mobile Threat Defense (MTD) Lookout Mobile Threat Defense integrates with Microsoft Sentinel to extend SOC visibility to mobile endpoints across Android, iOS, and Chrome OS. The connector streams device, threat, and audit telemetry from Lookout into Sentinel, enabling security teams to correlate mobile risk signals, such as phishing, malicious apps, and device compromise, with broader enterprise security data. By incorporating mobile threat intelligence into Sentinel analytics, dashboards, and alerts, this integration helps organizations detect mobile-driven attacks earlier and strengthen protection for an increasingly mobile workforce. Miro Miro integrates with Microsoft Sentinel to provide centralized visibility into collaboration activity across Miro workspaces.
The connector ingests organization-wide audit logs and content activity logs into Sentinel, enabling security teams to monitor authentication events, administrative actions, and content changes alongside other enterprise signals. By bringing Miro collaboration telemetry into Sentinel analytics and dashboards, this integration helps organizations detect suspicious access patterns, support compliance and eDiscovery needs, and maintain stronger oversight of collaborative environments without disrupting productivity. Obsidian Activity Threat The Obsidian Threat and Activity Feed for Microsoft Sentinel delivers deep visibility into SaaS and AI applications, helping security teams detect account compromise and insider threats. By streaming user behavior and configuration data into Sentinel, organizations can correlate application risks with enterprise telemetry for faster investigations. Prebuilt analytics and dashboards enable proactive monitoring, while automated playbooks simplify response workflows, strengthening security posture across critical cloud apps. OneTrust for Purview DSPM OneTrust integrates with Microsoft Sentinel to bring privacy, compliance, and data governance signals into security operations workflows. The connector enriches Sentinel with privacy relevant events and risk indicators from OneTrust, helping organizations detect sensitive data exposure, oversharing, and compliance risks across cloud and non-Microsoft data sources. By unifying privacy intelligence with Sentinel analytics and automation, this integration enables security and privacy teams to respond more quickly to data risk events and support responsible data use and AI-ready governance. Pathlock Pathlock integrates with Microsoft Sentinel to bring SAP-specific threat detection and response signals into centralized security operations. 
The connector forwards security-relevant SAP events into Sentinel, enabling SOC teams to correlate SAP activity with broader enterprise telemetry and investigate threats using familiar SIEM workflows. By enriching Sentinel with SAP security context and focused detection logic, this integration helps organizations improve visibility into SAP landscapes, reduce noise, and accelerate detection and response for risks affecting critical business systems. Quokka Q-scout Quokka Q-scout integrates with Microsoft Sentinel to centralize mobile application risk intelligence across Microsoft Intune-managed devices. The connector automatically ingests app inventories from Intune, analyzes them using Quokka’s mobile app vetting engines, and streams security, privacy, and compliance risk findings into Sentinel. By surfacing app-level risks through Sentinel analytics and alerts, this integration helps security teams identify malicious or high-risk mobile apps, prioritize remediation, and strengthen mobile security posture without deploying agents or disrupting users. Semperis Lightning Semperis Lightning integrates with Microsoft Sentinel to deliver deep visibility into identity‑centric risk across Active Directory and Microsoft Entra environments. The connector ingests identity security telemetry such as indicators of exposure, Tier 0 assets, and attack path insights into Sentinel, enabling security teams to correlate identity risks with broader security signals. By bringing rich identity context into Sentinel analytics, hunting, and investigations, this integration helps organizations detect, prioritize, and respond to identity‑driven attacks more effectively across hybrid identity infrastructures. Synqly Synqly integrates with Microsoft Sentinel to simplify and scale security integrations through a unified API approach. 
The connector enables organizations and security vendors to establish a bi‑directional connection with Sentinel without relying on brittle, point‑to‑point integrations. By abstracting common integration challenges such as authentication handling, retries, and schema changes, Synqly helps teams orchestrate security data flows into and out of Sentinel more reliably, supporting faster onboarding of new data sources and more maintainable integrations at scale. Versasec vSEC:CMS Versasec vSEC:CMS integrates with Microsoft Sentinel to provide centralized visibility into credential lifecycle and system health events. The connector securely streams vSEC:CMS and vSEC:CLOUD alerts and status data into Sentinel using the Codeless Connector Framework (CCF), transforming credential management activity into correlation-ready security signals. By bringing smart card, token, and passkey management telemetry into Sentinel, this integration helps security teams monitor authentication infrastructure health, investigate credential-related incidents, and unify identity security operations within their SIEM workflows. VirtualMetric DataStream VirtualMetric DataStream integrates with Microsoft Sentinel to optimize how security telemetry is collected, normalized, and routed across the Microsoft security ecosystem. Acting as a high-performance telemetry pipeline, DataStream intelligently filters and enriches logs, sending high-value security data to Sentinel while routing less-critical data to Sentinel data lake or Azure Blob Storage for cost-effective retention. By reducing noise upstream and standardizing logs to Sentinel ready schemas, this integration helps organizations control ingestion costs, improve detection quality, and streamline threat hunting and compliance workflows. VMRay VMRay integrates with Microsoft Sentinel to enrich SIEM and SOAR workflows with automated sandbox analysis and high-fidelity, behavior-based threat intelligence. 
The connector enables suspicious files and phishing URLs to be submitted directly from Sentinel to VMRay for dynamic analysis, while validated, high-confidence indicators of compromise (IOCs) are streamed back into Sentinel’s Threat Intelligence repository for correlation and detection. By adding detailed attack-chain visibility and enriched incident context, this integration helps SOC teams reduce investigation time, improve detection accuracy, and strengthen automated response workflows across Sentinel environments. XBOW XBOW integrates with Microsoft Sentinel to bring autonomous penetration testing insights directly into security operations workflows. The connector ingests automated penetration test findings from the XBOW platform into Sentinel, enabling security teams to analyze validated exploit activity alongside alerts, incidents, and other security telemetry. By correlating offensive testing results with Sentinel detections, this integration helps organizations identify monitoring gaps, validate detection coverage, and strengthen defensive controls using real‑world, continuously generated attack evidence. Zero Networks Segment Audit Zero Networks Segment integrates with Microsoft Sentinel to provide visibility into micro-segmentation and access-control activity across the network. The connector can collect audit logs or activities from Zero Networks Segment, enabling security teams to monitor policy changes, administrative actions, and access events related to MFA-based network segmentation. By bringing segmentation audit telemetry into Sentinel, this integration supports compliance monitoring, investigation of suspicious changes, and faster detection of attempts to bypass lateral-movement controls within enterprise environments. Zscaler Internet Access (ZIA) Zscaler Internet Access integrates with Microsoft Sentinel to centralize cloud security telemetry from web and firewall traffic. 
The connector enables ZIA logs to be ingested into Sentinel, allowing security teams to correlate Zscaler Internet Access signals with other enterprise data for improved threat detection, investigation, and response. By bringing ZIA web, firewall, and security events into Sentinel analytics and hunting workflows, this integration helps organizations gain broader visibility into internet-based threats and strengthen Zero Trust security operations. In addition to these solutions from our third-party partners, we are also excited to announce the following connector published by the Microsoft Sentinel team: GitHub Enterprise Audit Logs Microsoft’s Sentinel Promise For Customers Every connector in the Microsoft Sentinel ecosystem is built to work out of the box. In the unlikely event a customer encounters any issue with a connector, the App Assure team stands ready to assist. For Software Developers Software partners in need of assistance in creating or updating a Sentinel solution can also leverage Microsoft’s Sentinel Promise to support our shared customers. For developers seeking to build agentic experiences utilizing Sentinel data lake, we are excited to announce the launch of our Sentinel Advisory Service to guide developers across their Sentinel journey. Customers and developers alike can reach out to us via our intake form. 
Learn More Microsoft Sentinel data lake Microsoft Sentinel data lake: Unify signals, cut costs, and power agentic AI Introducing Microsoft Sentinel data lake What is Microsoft Sentinel data lake Unlocking Developer Innovation with Microsoft Sentinel data lake Microsoft Sentinel Codeless Connector Framework (CCF) Create a codeless connector for Microsoft Sentinel Public Preview Announcement: Microsoft Sentinel CCF Push What’s New in Microsoft Sentinel Monthly Blog Microsoft App Assure App Assure home page App Assure services App Assure blog App Assure Request Assistance Form App Assure Sentinel Advisory Services announcement App Assure’s promise: Migrate to Sentinel with confidence App Assure’s Sentinel promise now extends to Microsoft Sentinel data lake Ignite 2025 new Microsoft Sentinel connectors announcement Microsoft Security Microsoft’s Secure Future Initiative Microsoft Unified SecOps Editor's Note - April 7th, 2026: This blog was updated to include connector descriptions for BigID, Commvault, Semperis, and XBOW.
Introducing the New Microsoft Sentinel Logstash Output Plugin (Public Preview!)
Many organizations rely on Logstash as a flexible, trusted data pipeline for collecting, transforming, and forwarding logs from on-premises and hybrid environments. Microsoft Sentinel has long supported a Logstash output plugin, enabling customers to send data directly into Sentinel as part of their existing pipelines. The original plugin was implemented in Ruby, and while it has served its purpose, it no longer meets Microsoft’s Secure Future Initiative (SFI) standards and has limited engineering support. To address both security and sustainability, we have rebuilt the plugin from the ground up in Java, a language that is more secure, better supported across Microsoft, and aligned with long-term platform investments. To ensure a seamless transition, the new implementation is still packaged and distributed as a standard Logstash Ruby gem. This means the installation and usage experience remains unchanged for customers, while benefiting from a more secure and maintainable foundation. What's New in This Version Java‑based and SFI‑compliant Same Logstash plugin experience, now rebuilt on a stronger foundation. The new implementation is fully Java‑based, aligning with Microsoft’s Secure Future Initiative (SFI) and providing improved security, supportability, and long-term maintainability. Modern, DCR‑based ingestion The plugin now uses the Azure Monitor Logs Ingestion API with Data Collection Rules (DCRs), replacing the legacy HTTP Data Collection API (For more info, see Migrate from the HTTP Data Collector API to the Log Ingestion API - Azure Monitor | Microsoft Learn). This gives customers full schema control, enables custom log tables, and supports ingestion into standard Microsoft Sentinel tables as well as Microsoft Sentinel data lake. 
Flexible authentication options Authentication is automatically determined based on your configuration, with support for: Client secret (App registration / service principal) Managed identity, eliminating the need to store credentials in configuration files Sovereign cloud support: The plugin supports Azure sovereign clouds, including Azure US Government, Azure China, and Azure Germany. Standard Logstash distribution model The plugin is published on RubyGems.org, the standard distribution channel for Logstash plugins, and can be installed directly using the Logstash plugin manager, with no change to your existing installation workflow. What the Plugin Does The Logstash plugin operates as a three-stage data pipeline: Input → Filter → Output. Input: You control how data enters the pipeline, using sources such as syslog, filebeat, Kafka, Event Hubs, databases (via JDBC), files, and more. Filter: You enrich and transform events using Logstash’s powerful filtering ecosystem, including plugins like grok, mutate, and json, shaping data to match your security and operational needs. Output: This is where Microsoft comes in. The Microsoft Sentinel Logstash Output Plugin securely sends your processed events to an Azure Monitor Data Collection Endpoint, where they are ingested into Sentinel via a Data Collection Rule (DCR). With this model, you retain full control over your Logstash pipeline and data processing logic, while the Sentinel plugin provides a secure, reliable path to ingest data into Microsoft Sentinel. Getting Started Prerequisites Logstash installed and running An Azure Monitor Data Collection Endpoint (DCE) and Data Collection Rule (DCR) in your subscription Contributor role on your Log Analytics workspace Who Is This For? Organizations that already have Logstash pipelines, need to collect from on-premises or legacy systems, and operate in distributed/hybrid environments including air-gapped networks.
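To make the Output stage concrete, the sketch below shows, in Python rather than the plugin's Java internals, roughly how processed events are shaped into the JSON batch that a DCR custom stream accepts. The column names (TimeGenerated, RawData, Host) are illustrative placeholders; the real columns are whatever your Data Collection Rule's stream schema declares.

```python
import json
from datetime import datetime, timezone

def to_dcr_batch(events):
    """Shape Logstash-style events into a JSON array for a DCR custom stream.

    Column names here (TimeGenerated, RawData, Host) are illustrative;
    the actual schema is defined by your Data Collection Rule.
    """
    rows = []
    for event in events:
        rows.append({
            # Prefer the event's own timestamp; fall back to "now".
            "TimeGenerated": event.get("@timestamp")
                or datetime.now(timezone.utc).isoformat(),
            "RawData": event.get("message", ""),
            "Host": event.get("host", "unknown"),
        })
    # The plugin POSTs an array like this to the Data Collection Endpoint,
    # addressed by the DCR immutable ID and stream name.
    return json.dumps(rows)

batch = to_dcr_batch([
    {"@timestamp": "2026-03-30T12:00:00Z", "message": "auth failure", "host": "web01"}
])
```

Because the DCR owns the schema, renaming or adding a column is a rule change rather than a pipeline code change, which is one of the practical benefits of the DCR-based model over the legacy HTTP Data Collector API.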
To learn more, see: microsoft-sentinel-log-analytics-logstash-output-plugin on RubyGems.org
What’s new in Microsoft Sentinel: RSAC 2026
Security is entering a new era, one defined by explosive data growth, increasingly sophisticated threats, and the rise of AI-enabled operations. To keep pace, security teams need an AI-powered approach to collect, reason over, and act on security data at scale. At RSA Conference 2026 (RSAC), we’re unveiling the next wave of Sentinel innovations designed to help organizations move faster, see deeper, and defend smarter with AI-ready tools. These updates include AI-driven playbooks that accelerate SOC automation, Granular Delegated Admin Privileges (GDAP) and granular role-based access controls (RBAC) that let you scale your SOC, accelerated data onboarding through new connectors, and data federation that enables analysis in place without duplication. Together, they give teams greater clarity, control, and speed. Come see us at RSAC to view these innovations in action. Hear from Sentinel leaders during our exclusive Microsoft Pre-Day, then visit Microsoft booth #5744 for demos, theater sessions, and conversations with Sentinel experts. Read on to explore what’s new. See you at RSAC! Sentinel feature innovations: Sentinel SIEM, Sentinel data lake, Sentinel graph, Sentinel MCP, Threat Intelligence, Microsoft Security Store, and Sentinel promotions. Sentinel SIEM Playbook generator [Now in public preview] The Sentinel playbook generator delivers a new era of automation capabilities. You can vibe code complex automations, integrate with different tools to ensure timely and compliant workflows throughout your SOC, and feel confident in the results with built-in testing and documentation. Customers and partners are already seeing benefit from this innovation. “The playbook generator gives security engineers the flexibility and speed of AI-assisted coding while delivering the deterministic outcomes that enterprise security operations require.
It's the best of both worlds, and it lives natively in Defender where the engineers already work.” – Jaime Guimera Coll | Security and AI Architect | BlueVoyant Learn more about playbook generator. SIEM migration experience [General availability now] The Sentinel SIEM migration experience helps you plan and execute SIEM migrations through a guided, in-product workflow. You can upload Splunk or QRadar exports to generate recommendations for best‑fit Sentinel analytics rules and required data connectors, then assess migration scope, validate detection coverage, and migrate from Splunk or QRadar to Sentinel in phases while tracking progress. “The tool helps turn a Splunk to Sentinel migration into a practical decision process. It gives clear visibility into which detections are relevant, how they align to real security use cases, and where it makes sense to enable or prioritize coverage—especially with cost and data sources in mind.” – Deniz Mutlu | Director | Swiss Post Cybersecurity Ltd Learn more about SIEM migration experience. GDAP, unified RBAC, and row-level RBAC for Sentinel [Public preview, April 1] As Sentinel environments grow for enterprises, MSSPs, hyperscalers, and partners operating across shared or multiple environments, the challenge becomes managing access control efficiently and consistently at scale. Sentinel’s expanded permissions and access capabilities are designed to meet these needs. Granular Delegated Admin Privileges (GDAP) lets you streamline management across multiple governed tenants using your primary account, based on existing GDAP relationships. Unified RBAC allows you to opt in to managing permissions for Sentinel workspaces through a single pane of glass, configuring and enforcing access across Sentinel experiences in the analytics tier and data lake in the Defender portal. This simplifies administration and improves operational efficiency by reducing the number of permission models you need to manage. 
Row-level RBAC scoping within tables enables precise, scoped access to data in the Sentinel data lake. Multiple SOC teams can operate independently within a shared Sentinel environment, querying only the data they are authorized to see, without separating workspaces or introducing complex data flow changes. Consistent, reusable scope definitions ensure permissions are applied uniformly across tables and experiences, while maintaining strong security boundaries. To learn more, read our technical deep dives on RBAC and GDAP. Sentinel data lake Sentinel data federation [Public preview, April 1] Sentinel data federation lets you analyze security data in place without copying or duplicating your data. Powered by Microsoft Fabric, you can now federate data from Fabric, Azure Data Lake Storage (ADLS), and Azure Databricks into Sentinel data lake. Federated data appears alongside native Sentinel data, so you can use familiar tools like KQL hunting, notebooks, and custom graphs to correlate signals and investigate across your entire digital estate, all while preserving governance and compliance. You can start analyzing data in place and progressively ingest data into Sentinel for deeper security insights, advanced automation, and AI-powered defense at scale. You are billed only when you run analytics on federated data using existing Sentinel data lake query and advanced insights meters. Sentinel cost estimation tool [Public Preview, April 9] The new Sentinel cost estimation tool offers all Microsoft customers and partners a guided, meter-level cost estimation experience that makes pricing transparent and predictable. A built-in three-year cost projection lets you model data growth and ramp-up over time, anticipate spend, and avoid surprises. Get transparent estimates of spend as you scale your security operations. All other customers can continue to use the Azure calculator for Sentinel pricing estimates.
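Because federated tables appear alongside native tables, a hunting query can join the two directly in KQL. The sketch below composes such a statement in Python; DeviceNetworkEvents is a standard advanced hunting table, but the federated table and join-key names are hypothetical stand-ins for your own schema, and the resulting string would be submitted through your usual data lake query tooling or API.

```python
def build_federated_hunt(native_table: str, federated_table: str,
                         key: str, lookback_hours: int = 24) -> str:
    """Compose a KQL join between a native Sentinel table and a federated
    table. Table and column names are caller-supplied placeholders."""
    return (
        f"{native_table}\n"
        f"| where TimeGenerated > ago({lookback_hours}h)\n"
        f"| join kind=inner ({federated_table}) on {key}\n"
        f"| summarize Events = count() by {key}\n"
        f"| order by Events desc"
    )

# Example: correlate device network telemetry with a federated netflow
# table exposed from Fabric (the federated table name is illustrative).
query = build_federated_hunt("DeviceNetworkEvents", "external_netflow_CL", "RemoteIP")
```

The point of the sketch is that nothing about the query distinguishes federated from native data: correlation logic stays plain KQL, while federation handles where the bytes actually live.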
See the Sentinel pricing page for more information. Sentinel data connectors A365 Observability connector [Public preview, April 15] Bring AI agent telemetry into the Sentinel data lake to investigate agent behavior, tool usage, prompts, reasoning and execution using hunting, graph, and MCP workflows. GitHub audit log connector using API polling [General availability, March 6] Ingest GitHub enterprise audit logs into Sentinel to monitor user and administrator activity, detect risky changes, and investigate security events across your development environment. Google Kubernetes Engine (GKE) connector [General availability, March 6] Collect Google Kubernetes Engine (GKE) audit and workload logs in Sentinel to monitor cluster activity, analyze workload behavior, and detect security threats across Kubernetes environments. Microsoft Entra and Azure Resource Graph (ARG) connector enhancements [Public preview, April 15] Enable new Entra assets (EntraDevices, EntraOrgContacts) and ARG assets (ARGRoleDefinitions) in existing asset connectors, expanding inventory coverage and powering richer, built‑in graph experiences for greater visibility. With over 350 Sentinel data connectors, customers achieve broad visibility into complex digital environments and can expand their security operations effectively. “Microsoft Sentinel data lake forms the core of our agentic SOC. By unifying large volumes of Microsoft and third-party data, enabling graph-based analysis, and supporting MCP-driven workflows, it allows us to investigate faster, at lower cost, and with greater confidence.” – Øyvind Bergerud | Head of Security Operations | Storebrand Learn more about Sentinel data connectors. Sentinel connector builder agent using Sentinel Visual Studio Code extension [Public preview, March 31] Build Sentinel data connectors in minutes instead of weeks using the AI‑assisted Connector Builder agent in Visual Studio Code. 
This low‑code experience guides developers and ISVs end-to-end, automatically generating schemas, deployment assets, connector UI, secure secret handling, and polling logic. Built‑in validation surfaces issues early, so you can validate event logs before deployment and ingestion.

Example prompt in GitHub Copilot Chat:
@sentinel-connector-builder Create a new connector for OpenAI audit logs using https://api.openai.com/v1/organization/audit_logs

Get started with custom connectors and learn more in our blog.

Data filtering and splitting [Public preview, March 30]
As security teams ingest more data, the challenge shifts from scale to relevance. With filtering and splitting now built into the Defender portal, teams can shape data before it lands in Sentinel, without switching tools or managing custom JSON files. Define simple KQL‑based transformations directly in the UI to filter low‑value events and intelligently route data, making ingestion optimization faster, more intuitive, and easier to manage at scale. Filtering at ingest time allows you to remove low-value or benign events to reduce noise, cut unnecessary processing, and ensure that high-signal data drives detections and investigations. Splitting enables intelligent routing of data between the analytics tier and the data lake tier based on relevance and usage. Together, these two capabilities help you balance cost and performance while scaling data ingestion sustainably as your digital estate grows.

Create workbook reports directly from the data lake [Public preview, April 1]
Sentinel workbooks can now directly run on the data lake using KQL, enabling you to visualize and monitor security data straight from the data lake. By selecting the data lake as the workbook data source, you can now create trend analysis and executive reporting.
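The ingest-time filtering described earlier boils down to a short KQL transformation. A minimal sketch follows; it assumes the DCR-style transformation convention in which the incoming stream is exposed as a virtual table named `source`, and the column names and thresholds are illustrative rather than taken from any specific connector:

```kusto
// Drop benign, high-volume network events at ingest time so that only
// higher-signal rows reach the analytics tier. Column names here
// (ActionType, RemotePort, DeviceName) are illustrative.
source
| where not (ActionType == "ConnectionSuccess" and RemotePort == 443)
| where DeviceName !startswith "test-"
```

A splitting rule follows the same pattern: one transformation keeps the high-signal subset in the analytics tier while a complementary filter routes the remainder to the data lake tier.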
Sentinel graph

Custom graphs [Public preview, April 1]
Custom graphs let you build tailored security graphs tuned to your unique security scenarios using data from Sentinel data lake as well as non-Microsoft sources. With custom graphs, powered by Fabric, you can build, query, and visualize connected data, uncover hidden patterns and attack paths, and help surface risks that are hard to detect when data is analyzed in isolation. These graphs provide the knowledge context that enables AI-powered agent experiences to work more effectively, speeding investigations, revealing blast radius, and helping you move from noisy, disconnected alerts to confident decisions at scale. In the words of our preview customers: “We ingested our Databricks management-plane telemetry into the Sentinel data lake and built a custom security graph. Without writing a single detection rule, the graph surfaced unusual patterns of activity and overprivileged access that we escalated for investigation. We didn't know what we were looking for, the graph surfaced the risk for us by revealing anomalous activity patterns and unusual access combinations driven by relationships, not alerts.” – SVP, Security Solutions | Financial Services organization

Custom graph API usage for creating and querying graphs will be billed starting April 1, 2026, according to the Sentinel graph meter.

Creating custom graphs
Using the Sentinel VS Code extension, you can generate graphs to validate hunting hypotheses, such as understanding attack paths and blast radius of a phishing campaign, reconstructing multi‑step attack chains, and identifying structurally unusual or high‑risk behavior, making it accessible to your team and AI agents. Once persisted via a scheduled job, you can access these custom graphs from the ready-to-use section in the graph experience in the Defender portal.
Graphs experience in the Microsoft Defender portal
After creating your custom graphs, you can access them in the graphs section of the Defender portal under Sentinel. From there, you’ll be able to perform interactive graph-based investigations, such as using a graph built for phishing analysis to help you quickly evaluate the impact of a recent incident, profile the attacker, and trace its paths across Microsoft telemetry and third-party data. The new graph experience lets you run Graph Query Language (GQL) queries, view the graph schema, visualize the graph, view graph results in tabular format, and interactively traverse the graph to the next hop with a simple click.

Sentinel MCP

Sentinel MCP entity analyzer [General availability, April 1]
Entity analyzer provides reasoned, out-of-the-box risk assessments that help you quickly understand whether a URL or user identity represents potential malicious activity. The capability analyzes data across modalities including threat intelligence, prevalence, and organizational context to generate clear, explainable verdicts you can trust. Entity analyzer integrates easily with your agents through Sentinel MCP server connections to first-party and third-party AI runtime platforms, or with your SOAR workflows through Logic Apps. The entity analyzer is also a trusted foundation for the Defender Triage Agent and delivers more accurate alert classifications and deeper investigative reasoning. This removes the need to manually engineer evaluation logic and creates trust for analysts and AI agents to act with higher accuracy and confidence. Learn more about entity analyzer in our blog here. Entity analyzer will be billed starting April 1, 2026, based on Security Compute Units (SCU) consumption. Learn more about MCP billing.
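To make the GQL capability mentioned above concrete, here is a sketch of what a phishing blast-radius traversal could look like. This is a hypothetical example: the node labels, relationship types, and property names below are placeholders that depend entirely on the schema of your custom graph, not names defined by the product.

```gql
// Hypothetical phishing blast-radius query; labels and properties
// are placeholders for your custom graph schema.
MATCH (u:User)-[:CLICKED]->(m:EmailMessage)-[:LINKS_TO]->(d:Domain)
WHERE d.reputation = 'malicious'
RETURN u.displayName, m.subject, d.name
LIMIT 25
```

From a result like this, the next-hop traversal in the portal lets you expand, for example, from an affected user to the devices and sessions connected to them.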
Sentinel MCP graph tool collection [Public preview, April 20]
Graph tool collection helps you visualize and explore relationships between identity and device assets, and the threat and activity signals ingested by data connectors and surfaced by analytics rules. The tool provides a clear graph view that highlights dependencies and configuration gaps, which makes it easier to understand how content interacts across your environment. This helps security teams assess coverage, optimize content deployment, and identify areas that may need tuning or additional data sources, all from a single, interactive workspace. Executing graph queries via the MCP tools will trigger the graph meter.

Claude MCP connector [Public preview, April 1]
Anthropic Claude can connect to Sentinel through a custom MCP connector, giving you AI-assisted analysis across your Sentinel environment. Microsoft provides step-by-step guidance for configuring a custom connector in Claude that securely connects to a Sentinel MCP server. With this connection you can summarize incidents, investigate alerts, and reason over security signals while keeping data inside Microsoft's security boundary. Access to large language models (LLMs) is managed through Microsoft authentication and role-based controls, supporting faster triage and investigation workflows while maintaining compliance and visibility.

Threat Intelligence

CVEs of interest in the Threat Intelligence Briefing Agent [Public preview in April]
The Threat Intelligence Briefing Agent delivers curated intelligence based on your organization’s configuration, preferences, and unique industry and geographic needs. The agent now includes CVEs of interest, which highlights vulnerabilities actively discussed across the security landscape and assesses their potential impact on your environment, delivering more timely threat intelligence insights.
The agent automatically incorporates internet exposure data powered by the Sentinel platform to surface threats targeting technologies exposed in your organization. Together, these enhancements help you focus faster on the threats that matter most, without manual investigation.

Microsoft Security Store

Security Store embedded in Entra [General availability, March 23]
As identity environments grow more complex, teams need to move faster and extend Entra with trusted third‑party capabilities that address operational, compliance, and risk challenges. The Security Store embedded directly into Entra lets you discover and adopt Entra‑ready agents and solutions in your workflow. You can extend Entra with identity‑focused agents that surface privileged access risk, identity posture gaps, network access insights, and overall identity health, turning identity data into clear recommendations and reports teams can use immediately. You can also enhance Entra with Verified ID and External ID integrations that strengthen identity verification, streamline account recovery, and reduce fraud across workforce, consumer, and external identities.

Security Store embedded in Microsoft Purview [General availability, March 31]
Extending data security across the digital estate requires visibility and enforcement into new data sources and risk surfaces, often requiring a partnered approach. The Security Store embedded directly into Purview lets you discover and evaluate integrated solutions inside your data security workflows. Relevant partner capabilities surface alongside context, making it easier to strengthen data protection, address regulatory requirements, and respond to risk without disrupting existing processes. You can quickly assess which solutions align to data security scenarios, especially with respect to securing AI use, and how they can leverage established classifiers, policies, and investigation workflows in Purview.
Keeping integration discovery in‑flow and purchases centralized through the Security Store means you move faster from evaluation to deployment, reducing friction and maintaining a secure, consistent transaction experience.

Security Store Advisor [General availability, March 23]
Security teams today face growing complexity and choice. Teams often know the security outcome they need, whether that's strengthening identity protection, improving ransomware resilience, or reducing insider risk, but lack a clear, efficient way to determine which solutions will help them get there. Security Store Advisor provides a guided, natural-language discovery experience that shifts security evaluation from product‑centric browsing to outcome‑driven decision‑making. You can describe your goal in plain language, and the Advisor surfaces the most relevant Microsoft and partner agents, solutions, and services available in the Security Store, without requiring deep product knowledge. This approach simplifies discovery, reduces time spent navigating catalogs and documentation, and helps you understand how individual capabilities fit together to deliver meaningful security outcomes.

Sentinel promotions

Extending signups for promotional 50 GB commitment tier [Through June 2026]
The Sentinel promotional 50 GB commitment tier offers small and mid-sized organizations a cost-effective entry point into Sentinel. Sign up for the 50 GB commitment tier until June 30, 2026, and maintain the promotional rate until March 31, 2027. This promotion is available globally with regional variations in pricing and accessible through EA, CSP, and Direct channels. Visit the Sentinel pricing page for details and to get started.
Sentinel RSAC 2026 sessions
- All week – Sentinel product demos, Microsoft Booth #5744
- Mon Mar 23, 3:55 PM – RSAC 2026 main stage Keynote with CVP Vasu Jakkal [KEY-M10W]: Ambient and autonomous security: Building trust in the agentic AI era
- Tue Mar 24, 10:30 AM – Live Q&A session, Microsoft booth #5744 and online: Ask me anything with Microsoft Security SMEs and real practitioners
- Tue Mar 24, 11 AM – Sentinel data lake theater session, Microsoft booth #5744: From signals to insights: How Microsoft Sentinel data lake powers modern security operations
- Tue Mar 24, 2 PM – Sentinel SIEM theater session, Microsoft booth #5744: Vibe-coding SecOps automations with the Sentinel playbook generator
- Wed Mar 25, 12 PM – Executive event at Palace Hotel with Threat Protection GM Scott Woodgate: The AI risk equation: Visibility, control, and threat acceleration
- Wed Mar 25, 1:30 PM – Sentinel graph theater session, Microsoft booth #5744: Bringing knowledge-driven context to security with Microsoft Sentinel graph
- Wed Mar 25, 5 PM – MISA theater session, Microsoft booth #5744: Cut SIEM costs without reducing protection: A Sentinel data lake case study
- Thu Mar 26, 1 PM – Security Store theater session, Microsoft booth #5744: What's next for Security Store: Expanding in portal and smarter discovery
- All week – 1:1 meetings with Microsoft security experts: Meet with Microsoft Defender and Sentinel SIEM and Defender Security Operations

Additional resources
- Sentinel data lake video playlist: Explore the full capabilities of Sentinel data lake as a unified, AI-ready security platform that is deeply integrated into the Defender portal
- Sentinel data lake FAQ blog: Get answers to many of the questions we’ve heard from our customers and partners on Sentinel data lake and billing
- AI‑powered SIEM migration experience ninja training: Walk through the SIEM migration experience, see how it maps detections, surfaces connector requirements, and supports phased migration decisions
- SIEM migration experience documentation: Learn how the SIEM migration experience analyzes your exports, maps detections and connectors, and recommends prioritized coverage
- Accenture collaborates with Microsoft to bring agentic security and business resilience to the front lines of cyber defense

Stay connected
Check back each month for the latest innovations, updates, and events to ensure you’re getting the most out of Sentinel. We’ll see you in the next edition!

Accelerate Agent Development: Hacks for Building with Microsoft Sentinel data lake
As a Senior Product Manager | Developer Architect on the App Assure team working to bring Microsoft Sentinel and Security Copilot solutions to market, I interact with many ISVs building agents on Microsoft Sentinel data lake for the first time. I’ve written this article to walk you through one possible approach for agent development – the process I use when building sample agents internally at Microsoft. If you have questions about this, or other methods for building your agent, App Assure offers guidance through our Sentinel Advisory Service. Throughout this post, I include screenshots and examples from Gigamon’s Security Posture Insight Agent.

This article assumes you have:
- An existing SaaS or security product with accessible telemetry.
- A small ISV team (2–3 engineers + 1 PM).
- Focus on a single high value scenario for the first agent.

The Composite Application Model (What You Are Building)
When I begin designing an agent, I think end-to-end, from data ingestion requirements through agentic logic, following the Composite application model. The Composite Application Model consists of five layers:
- Data Sources – Your product’s raw security, audit, or operational data.
- Ingestion – Getting that data into Microsoft Sentinel.
- Sentinel data lake & Microsoft Graph – Normalization, storage, and correlation.
- Agent – Reasoning logic that queries data and produces outcomes.
- End User – Security Copilot or SaaS experiences that invoke the agent.

This separation allows for evolving data ingestion and agent logic simultaneously. It also helps avoid downstream surprises that require going back and rearchitecting the entire solution.

Optional Prerequisite
You are enrolled in the ISV Success Program, so you can earn Azure Credits to provision Security Compute Units (SCUs) for Security Copilot Agents.
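To make the agent layer of this model concrete, here is a minimal sketch of how a client component might submit a KQL query programmatically. Treat everything here as an assumption-laden illustration: it uses the classic Log Analytics workspace query endpoint as a stand-in (your environment may expose a different endpoint or an SDK such as azure-monitor-query), `MyVendor_CL` is a placeholder table name created by a hypothetical connector, and the token is a placeholder for a real Microsoft Entra ID bearer token.

```python
import json
import urllib.request


def build_query_request(workspace_id: str, kql: str,
                        timespan: str, token: str) -> urllib.request.Request:
    """Build a POST request that submits a KQL query to the classic
    Log Analytics workspace query endpoint (used here as a stand-in)."""
    url = f"https://api.loganalytics.io/v1/workspaces/{workspace_id}/query"
    body = json.dumps({"query": kql, "timespan": timespan}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )


# Hypothetical validation query against a placeholder connector table.
validation_kql = "MyVendor_CL | where TimeGenerated > ago(1h) | summarize Rows = count()"
req = build_query_request("11111111-2222-3333-4444-555555555555",
                          validation_kql, "PT1H", "<token>")
print(req.full_url)
```

In a real deployment you would acquire the bearer token from a service principal via Microsoft Entra ID rather than hard-coding it, and send the request with `urllib.request.urlopen(req)` or an HTTP client of your choice.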
Phase 1: Data Ingestion Design & Implementation

Choose Your Ingestion Strategy
The first choice I face when designing an agent is how the data is going to flow into my Sentinel workspace. Below I document two primary methods for ingestion.

Option A: Codeless Connector Framework (CCF)
This is the best option for ISVs with REST APIs. To build a CCF solution, reference our documentation for getting started.

Option B: CCF Push (Public Preview)
In this instance, an ISV pushes events directly to Sentinel via a CCF Push connector. Our MS Learn documentation is a great place to get started using this method.

Additional Note: In the event you find that CCF does not support your needs, reach out to App Assure so we can capture your requirements for future consideration. Azure Functions remains an option if you’ve documented your CCF feature needs.

Phase 2: Onboard to Microsoft Sentinel data lake
Once my data is flowing into Sentinel, I onboard a single Sentinel workspace to data lake. This is a one-time action and cannot be repeated for additional workspaces.

Onboarding Steps
- Go to the Defender portal.
- Follow the Sentinel data lake onboarding instructions.
- Validate that tables are visible in the lake. See Running KQL Queries in data lake for additional information.

Phase 3: Build and Test the Agent in Microsoft Foundry
Once my data is successfully ingested into data lake, I begin the agent development process. There are multiple ways to build agents depending on your needs and tooling preferences. For this example, I chose Microsoft Foundry because it fit my needs for real-time logging, cost efficiency, and greater control.

1. Create a Microsoft Foundry Instance
Foundry is used as a tool for your development environment. Reference our QuickStart guide for setting up your Foundry instance. Required Permissions:
- Security Reader (Entra or Subscription)
- Azure AI Developer at the resource group

After setup, click Create Agent.

2. Design the Agent
A strong first agent:
- Solves one narrow security problem.
- Has deterministic outputs.
- Uses explicit instructions, not vague prompts.

Example agent responsibilities:
- To query Sentinel data lake (Sentinel data exploration tool).
- To summarize recent incidents.
- To correlate ISV-specific signals with Sentinel alerts and other ISV tables (Sentinel data exploration tool).

3. Implement Agent Instructions
Well-designed agent instructions should include:
- Role definition ("You are a security investigation agent…").
- Data sources it can access.
- Step by step reasoning rules.
- Output format expectations.

Sample Instructions can be found here: Agent Instructions

4. Configure the Microsoft Model Context Protocol (MCP) tooling for your agent
For your agent to query, summarize and correlate all the data your connector has sent to data lake, take the following steps: Select Tools, and under Catalog, type Sentinel, and then select Microsoft Sentinel Data Exploration. For more information about the data exploration tool collection in MCP server, see our documentation. I always test repeatedly with real data until outputs are consistent. For more information on testing and validating the agent, please reference our documentation.

Phase 4: Migrate the Agent to Security Copilot
Once the agent works in Foundry, I migrate it to Security Copilot. To do this:
- Copy the full instruction set from Foundry.
- Provision a SCU for your Security Copilot workspace. For instructions, please reference this documentation. Make note of this process as you will be charged per hour per SCU; once you are done testing you will need to deprovision the capacity to prevent additional charges.
- Open Security Copilot and use Create From Scratch Agent Builder as outlined here.
- Add Sentinel data exploration MCP tools (these are the same instructions from the Foundry agent in the previous step). For more information on linking the Sentinel MCP tools, please refer to this article.
Paste and adapt instructions.

At this stage, I always validate the following:
- Agent Permissions – I have confirmed the agent has the necessary permissions to interact with the MCP tool and read data from your data lake instance.
- Agent Performance – I have confirmed a successful interaction with measured latency and benchmark results.

This step intentionally avoids reimplementation. I am reusing proven logic.

Phase 5: Execute, Validate, and Publish
After setting up my agent, I navigate to the Agents tab to manually trigger the agent. For more information on testing an agent you can refer to this article. Now that the agent has been executed successfully, I download the agent Manifest file from the environment so that it can be packaged. Click View code on the Agent under the Build tab as outlined in this documentation.

Publishing to the Microsoft Security Store
If I were publishing my agent to the Microsoft Security Store, these are the steps I would follow:
- Finalize ingestion reliability.
- Document required permissions.
- Define supported scenarios clearly.
- Package agent instructions and guidance (by following these instructions).

Summary
Based on my experience developing Security Copilot agents on Microsoft Sentinel data lake, this playbook provides a practical, repeatable framework for ISVs to accelerate their agent development and delivery while maintaining high standards of quality. This foundation enables rapid iteration: future agents can often be built in days, not weeks, by reusing the same ingestion and data lake setup. When starting on your own agent development journey, keep the following in mind:
- Limit initial scope.
- Reuse Microsoft managed infrastructure.
- Separate ingestion from intelligence.

What Success Looks Like
At the end of this development process, you will have the following:
- A Microsoft Sentinel data connector live in Content Hub (or in process) that provides a data ingestion path.
- Data visible in data lake.
- A tested agent running in Security Copilot.
- Clear documentation for customers.

A key success factor I look for is clarity over completeness. A focused agent is far more likely to be adopted.

Need help?
If you have any issues as you work to develop your agent, please reach out to the App Assure team for support via our Sentinel Advisory Service. Or if you have any other tips, please comment below, I’d love to hear your feedback.

Microsoft Sentinel is now supported in Unified RBAC with row-level access
Enabling streamlined, granular, and scalable permissions

We’re excited to announce the Public Preview of Unified Role Based Access Control (URBAC) for Microsoft Sentinel, together with row-level access. This new capability, available in April, extends the Microsoft Defender Unified RBAC model to Sentinel, enabling streamlined, granular, and scalable permissions management across your security workloads. With the addition of row-level scoping, multiple teams can operate securely within a shared Sentinel environment while using consistent and reusable scope definitions across tables and experiences.

What’s new?

Unified RBAC for Sentinel
- Manage in Defender portal: Sentinel permissions can now be managed directly in the Microsoft Defender portal. Assignments can automatically include future data sources and workspaces as they’re added.
- Unified permissions model: Manage user privileges for Sentinel and other Defender workloads in a single, consistent system.
- Easy role migration: Import existing roles and assignments from Azure Sentinel for easy migration.

Sentinel Scoping
- Create and assign scope: Scope can now be created from the permissions page and assigned to users or user groups across workspaces.
- Tag data: Scope tags can be applied to rows in tables, using ‘Table Management’, allowing you to create rules that tag newly ingested data automatically with the scope.
- Access data: Scoped users can manage alerts, incidents, and hunt over scoped data (including the lake), allowing them to see only the data within their own scope.

Common use-cases include accommodating multiple SOC teams within a shared environment (for example segregated by business unit, geography or discipline), providing access to teams outside of the SOC, or restricting sensitive data.

How does it work?
Sentinel in Unified RBAC

1) Create a custom role
Go to the Permissions page in Defender, and select Defender XDR -> Roles. You can also click Import roles to re-create existing roles in URBAC automatically. In the Roles page, click Create a custom role. Enter a role name and description, and select the required permissions using this mapping:

Microsoft Sentinel Reader:
· Security operations \ Security data basic (read)

Microsoft Sentinel Responder:
· Security operations \ Security data basic (read)
· Security operations \ Alerts (manage)
· Security operations \ Response (manage)

Microsoft Sentinel Contributor:
· Security operations \ Security data basic (read)
· Security operations \ Alerts (manage)
· Security operations \ Response (manage)
· Authorization and settings \ Detection tuning (manage)

Click Add assignment and name the assignment. Select users and/or groups. Choose the Sentinel workspaces for the assignment. (Optional) Enable Include future data sources automatically. Click Submit.

2) Activate Unified RBAC for Sentinel
Go back to the Roles page. Click on Activate workloads button in the top. Click on Manage workspaces. Select the desired workspaces to enable URBAC on. Click Activate workspaces.

3) Edit, Delete, or Export Roles
- To edit: Select the role, click Edit, and update as needed.
- To delete: Select the role and click Delete.
- To export: Click Export to download a CSV of roles, permissions, and assignments.

Prerequisites
- Access to the Microsoft Defender portal: Ensure you can sign in at https://security.microsoft.com.
- Global administrator, combined with being a subscription owner OR combined with having user access administrator + Sentinel contributor role on the workspace.
- Sentinel workspaces onboarded to Defender portal: Sentinel workspaces must be available in the Defender portal before roles and permissions can be assigned.
Learn more
For more information, see Microsoft Defender XDR Unified role-based access control (RBAC) - Microsoft Defender XDR | Microsoft Learn

Sentinel Scoping

1) Create & assign scope
First, we are going to create a scope tag that we will use to assign to users:
- Navigate to the Permissions page in the Defender portal.
- Click Microsoft Defender XDR, and then the Scopes tab.
- Click on Add Sentinel scope.
- On this screen, fill out a name for your scope, and optionally a description.

That’s it! Scope is created. You can create more scope tags if you like. Next, we are going to assign this scope tag to users:
- Go back to the Permissions page, this time to the Roles tab.
- Click on Create custom role.
- Fill out the basics (role name, description). Next, assign permissions as appropriate.
- In the assignments screen, select the right users or user groups, the data sources, and data collections (Sentinel workspaces) as appropriate.
- Now, select the Sentinel scope Edit button. Select the scopes (one or more) that you want to assign to this role.
- Once you are happy with all the settings, go ahead and submit to save the role.

2) Tag tables with scope
Now that we have created scope and assigned it to users, we are going to tag the tables from which these users should be allowed to see data.
- Navigate to the Tables page under Microsoft Sentinel.
- Select the table you would like to assign scope to.
- Click the Scope tag rule button.
- Enable scoping by clicking the toggle on Allow use of scope tags for RBAC. Then, click the toggle under Scope tag rule to enable your first rule.
- Add a KQL rule/expression which will specify which rows in this table should be tagged to the scope that you will attach. This will create a Data Collection Rule, and supports transformKQL supported operators and limits. Be aware of how to write expressions here: for example, to scope by location you can write: Location == 'Spain'.
- Then, select the scope tag that should be assigned to those rows.
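Building on the Location example above, scope rule expressions can combine any transformKQL-supported operators. A couple of illustrative expressions follow; the column names (Location, BusinessUnit) are assumptions about your table schema, not required fields:

```kusto
// Tag rows for a regional SOC team:
Location == 'Spain'
// Combine conditions, e.g. one business unit across a set of regions:
Location in ('Spain', 'Portugal') and BusinessUnit == 'Retail'
```

Each row that evaluates to true against the expression receives the selected scope tag at ingestion time.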
Once you are done, go ahead and click Save. From now on, all data/each row that gets ingested into this table that meets the KQL rule/expression will be tagged with the scope tag that you’ve selected.

3) Access data
a) Now that we’ve created scope, assigned it to users, and tagged the right data, we are ready to start using the scope.
b) From now on, newly ingested data automatically gets tagged with scope. Historic (previously ingested) data is not included.
c) You can add this scope to your detection rules by referencing the ‘SentinelScope_CF’ field.
d) Alerts generated based on scoped data are now automatically tagged with the associated scope.
e) Your scoped users can now access the different experiences where scoped data is visible, for example:
f) View alerts that have resulted from data tagged with scope.
g) View incidents that contain alerts with data tagged with scope.
h) Run advanced hunting queries over the rows in the tables that the user is allowed to see.
i) Run KQL queries over the Sentinel lake.

Prerequisites
- Access to the Microsoft Defender portal: https://security.microsoft.com.
- Sentinel workspaces onboarded to Defender portal: Sentinel workspaces must be available in the Defender portal before roles and permissions can be assigned.
- Sentinel in URBAC: You must have enabled Sentinel in URBAC before using this feature.
- Permissions (for the person creating/assigning scope and tagging tables):
  - Security Authorization (Manage) permission (URBAC): Allowing you to create scope and assignments.
  - Data Operations (Manage) permission (URBAC): for Table Management.
  - Subscription owner or assigned with the “Microsoft.Insights/DataCollectionRules/Write” permission.

FAQ

What happens to legacy Sentinel roles after activating Unified RBAC?
URBAC becomes the primary source of your permissions for Sentinel instead of Azure RBAC, so ensure the right permissions are set up on URBAC.
Once URBAC is activated for a Sentinel workspace, continue to manage your permissions in URBAC.

What about roles that are not yet supported? For example, the Automation Contributor role.
You can assign these on Azure RBAC and they will continue to be respected.

Can I revert to managing Sentinel roles in Azure after enabling Unified RBAC?
Yes, you can deactivate Unified RBAC for Sentinel in the Defender portal’s workload settings. This will revert to legacy Sentinel roles and their associated access controls.

Does Unified RBAC support both Sentinel Analytics and Lake workspaces?
Yes, Unified RBAC supports both Sentinel Analytics and lake workspaces for consistent access management.

What happens if an alert contains data outside of the user’s scope?
The scoped user can only see the data associated with their scope. If the alert contains entities/evidence that the user has no access to, they will not be able to see those. If the user has access to at least one of the associated entities, they can see the alert itself.

What happens to incidents that contain multiple alerts?
The scoped user can see an incident if they have access to at least one of the underlying alerts. A scoped user can manage the incident if they have access to all of the underlying alerts and if they have the required permission. Unscoped users can see all alerts and incidents.

What about other experiences; detection rules, playbooks, etc.?
A scoped user can only have read access to other resources/experiences for the moment, unless you create a separate assignment where you grant them elevated permissions. In the next few months, we will be introducing scoping for resources such as detection rules, automation rules, playbooks, etc.

Is Sentinel Scoping also applicable to the Sentinel lake?
Yes, to any tables that support transformations (data collection rules).

Can I apply scope to a full table? What about previously ingested data?
You can create a KQL query that captures all fields in the table, which essentially creates ‘table level’ scope. Currently, it is not possible to grant access at full-table level (meaning, scoping previously ingested data).

Can I scope XDR tables?
Not currently. This includes XDR tables which received extended retention in the lake.

If I ingest Defender data into Sentinel, is their scope (e.g. Device Groups, Cloud Scopes) maintained in Sentinel tables?
No, if you ingest Defender data into Sentinel, that scoping is not propagated. Please keep this in mind when deciding how to apply scope to Sentinel tables.

Interested in learning more? Stay tuned for a webinar coming in April.

Agentic Use Cases for Developers on the Microsoft Sentinel Platform
Interested in building an agent with Sentinel platform solutions but not sure where to start? This blog will help you understand some common use cases for agent development that we've seen across our partner ecosystem.

SOC teams don't need more alerts; they need fast, repeatable investigation and response workflows. Security Copilot agents can help orchestrate the steps analysts perform by correlating across the Sentinel data lake, executing targeted KQL queries, fetching related entities, enriching with context, and producing an evidence-backed decision without forcing analysts to switch tools.

The Microsoft Sentinel platform is a strong foundation for agentic experiences because it exposes a normalized security data layer, an investigation surface based on incidents and entities, and extensive automation capabilities. An agent can use these primitives to correlate identity, endpoint, cloud, and network telemetry; traverse entity relationships; and recommend remediation actions.

In this blog, I will break down common agentic use cases that developers can implement on the Sentinel platform, framed as buildable and repeatable patterns:
Identify the investigation scenario
Understand the required Sentinel data connectors and KQL queries
Build enrichment and correlation logic
Summarize findings with supporting evidence and recommended remediation steps

Use Case 1: Identity & Access Intelligence
Investigation Scenario: Is this risky sign-in part of an attack path?
Signals Correlated:
Identity access telemetry: source user, IPs, target resources, MFA logs
Authentication outcomes and diversity: success vs. failure, geographic spread
Identity risk posture: user risk level and state
Post-auth endpoint execution: suspicious LOLBins
Correlation Logic: An analyst receives a risky sign-in signal for a user and needs to determine whether the activity reflects expected behavior, such as travel, remote access, or MFA friction, or whether it signals the early stage of an identity compromise that could escalate into privileged access and downstream workload impact.
Practical Example: Silverfort Identity Threat Triage Agent, which is built on a similar framework, takes the user's UPN as input and builds a bounded, last-24-hour investigation across authentication activity, MFA logs, user risk posture, and post-authentication endpoint behavior.
Outcome: By correlating identity risk signals, MFA logs, sign-in success and failure patterns, and suspicious execution activity following authentication, the agent connects the initial risky sign-in to endpoint behavior, enabling the analyst to quickly assess compromise likelihood, identify escalation indicators, and determine appropriate remediation actions.

“Our collaboration with Microsoft Sentinel and Security Copilot underscores the central role identity plays across every stage of attack path triage. By integrating Silverfort's identity risk signals with Microsoft Entra ID and Defender for Endpoint, and sharing rich telemetry across platforms, we enable the Security Copilot agent to distinguish isolated anomalies from true identity-driven intrusions, while dramatically reducing the manual effort traditionally required for incident response and threat hunting. AI-driven agents accelerate analysis, enrich investigative context, reduce dwell time, and speed detection.
Instead of relying on complex queries or deep familiarity with underlying data structures, security teams can now perform seamless, identity-centric reasoning within a single interaction.” - Frank Gasparovic, Director of Solution Architecture, Technology Alliances, Silverfort

Use Case 2: Cyber Resilience, Backup & Recovery
Investigation Scenario: Are the threats detected on a backup indicative of production impact and recovery risk?
Signals Correlated:
Backup threat telemetry: backup threat scan alerts, risk analysis events, affected host/workload, detection timestamps
Cross-vendor security alerts: endpoint, network, and cloud security alerts for the same host/workload in the same time window
Correlation Logic: The agent correlates threat signals originating from the backup environment with security telemetry associated with the same host or workload to validate whether there is corroborating evidence in the production environment and whether the activity aligns in time.
Practical Example: Commvault Security Investigation Agent, which is built on a similar framework, takes a hostname as input and builds an investigation across Commvault Threat Scan / Risk Analysis events and third-party security telemetry. By correlating backup-originating detections with production security activity for the same host, the agent determines whether the backup threat signal aligns with observable production impact.
Outcome: By correlating backup threat detections with endpoint, network, and cloud security telemetry while validating timing alignment, event spikes, and data coverage, the agent connects a backup-originating threat signal to production evidence, enabling the analyst to quickly assess impact likelihood and determine appropriate actions such as containment or recovery-point validation.
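The time-window correlation at the heart of this use case can be sketched in a few lines of Python. This is an illustrative sketch only, not Commvault's implementation: the event records, host names, and field names below are hypothetical stand-ins for results that would come back from KQL queries against backup-detection and production-alert tables in the Sentinel data lake.

```python
from datetime import datetime, timedelta

# Hypothetical sample events; in practice these would be rows returned by
# KQL queries against backup and production alert tables.
backup_detections = [
    {"host": "srv-db-01", "time": datetime(2025, 1, 10, 3, 15),
     "signal": "ThreatScan: ransomware signature"},
]
production_alerts = [
    {"host": "srv-db-01", "time": datetime(2025, 1, 10, 2, 50),
     "source": "endpoint", "title": "Suspicious encryption activity"},
    {"host": "srv-web-02", "time": datetime(2025, 1, 10, 2, 55),
     "source": "network", "title": "Beaconing detected"},
]

def correlate(backup_events, prod_alerts, window=timedelta(hours=4)):
    """For each backup-originating detection, collect production alerts on the
    same host whose timestamps fall within +/- `window` of the detection."""
    findings = []
    for evt in backup_events:
        matches = [a for a in prod_alerts
                   if a["host"] == evt["host"]
                   and abs(a["time"] - evt["time"]) <= window]
        findings.append({"detection": evt,
                         "corroborating": matches,
                         "likely_production_impact": bool(matches)})
    return findings

results = correlate(backup_detections, production_alerts)
for r in results:
    print(r["detection"]["host"],
          r["likely_production_impact"],
          len(r["corroborating"]))  # -> srv-db-01 True 1
```

A real agent would extend this pattern with the event-spike and data-coverage checks described above, and hand the evidence pack to the analyst rather than printing it.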
Use Case 3: Network, Exposure & Connectivity
Investigation Scenario: Is this activity indicative of legitimate remote access, or does it demonstrate suspicious connectivity and access attempts that increase risk to private applications and internal resources?
Signals Correlated:
User access telemetry: source user, source IPs/geo, device/context, destinations
Auth and enforcement outcomes: success vs. failure, MFA allow/block
Behavior drift: new or rare IPs/locations, unusual destination/app diversity
Suspicious activity indicators: risky URLs/categories, known-bad indicators, automated or bot-like patterns, repeated denied private app access attempts
Correlation Logic: An analyst receives an alert for a specific user and needs to determine whether the activity reflects expected behavior such as travel, remote work, or VPN usage, or whether it signals the early stages of a compromise that could later extend into private application access.
Practical Example: Zscaler ZIA ZPA Correlation Agent starts with a username and builds a bounded, last-24-hour investigation across Zscaler Internet Access and Zscaler Private Access activity. By correlating user internet behavior, access context, and private application interactions, the agent connects the initial Zscaler alert to any downstream access attempts or authentication anomalies, enabling the analyst to quickly assess risk, identify suspicious patterns, and determine whether Zscaler policy adjustments are required.
Outcome: Provides a last-24-hour verdict on whether the activity reflects expected access patterns or escalation toward private application access, and recommends next actions, such as closing as benign drift, escalating for containment, or tuning access policy, based on correlated evidence.

Use Case 4: Endpoint & Runtime Intelligence
Investigation Scenario: Is this process malicious or a legitimate admin action?
Signals Correlated:
Execution context: process chain, full command line, signer, unusual path
Account & logon: initiating user, logon type (RDP/service), recent risky sign-ins
Tooling & TTPs: LOLBins, credential access hints, lateral movement tooling
Network behavior: suspicious connections, repeated callbacks/beaconing
Correlation Logic: A PowerShell alert triggers on a production server. The agent ties the process to its parent (e.g., spawned by a web worker vs. an admin shell), validates the command-line indicators, correlates outbound connections from the same PID to a first-seen destination, and checks for immediate follow-on persistence and any adjacent runtime alerts in the same time window.
Outcome: Classifies the activity as malicious vs. admin and produces an evidence pack (process tree, key command indicators, destinations, persistence/tamper artifacts) as well as the recommended containment step (isolate the host and revoke/reset the initiating credentials).

Use Case 5: Exposure & Exploitability
Investigation Scenario: What is the likelihood of exploitation and blast radius?
Signals Correlated:
Asset exposure: internet-facing status, exposed services/ports, and identity or network paths required to reach the workload
Exploit activity: Defender alerts on the resource, IDS/WAF hits, IOC matches, and first-seen exploit or probing attempts
Risk amplification signals: internet communication, high-privilege access paths, and indicators that the workload processes PII or sensitive data
Blast radius: downstream reachability to crown-jewel systems (e.g., databases, key vaults) and trust relationships that could enable escalation
Correlation Logic: An analyst receives a Medium/High Microsoft Defender for Cloud alert on a workload and needs to determine whether it's a standalone detection or an exploitable exposure that can quickly progress into privilege abuse and data impact.
The agent correlates exposure evidence such as internet reachability, high-privilege paths, and indicators that the workload handles sensitive data, and analyzes suspicious network connections in the same bounded time window.
Outcome: Produces a resource-specific risk analysis that explains why the Defender for Cloud alert is likely to be exploited, based on the asset's attack surface and effective privileges, plus any supporting activity in the same 24-hour window.

Use Case 6: Threat Intelligence & Adversary Context
Investigation Scenario: Is this activity aligned with known attacker behavior?
Signals Correlated:
Behavior sequence: ordered events (identity → execution → network)
Technique mapping: MITRE ATT&CK technique IDs, typical progression, and required prerequisites
Threat intel match: campaign/adversary, TTPs, IOCs
Correlation Logic: A chain of identity compromise, PowerShell obfuscation, and periodic outbound HTTPS is observed. The agent maps the sequence to ATT&CK techniques and correlates it with threat intelligence that matches a known adversary campaign.
Outcome: Surfaces adversary-aligned behavioral insights and TTP context to help analysts assess intrusion likelihood and guide the next investigation steps.

Summary
This blog is intended to help developers better understand the key use cases for building agents with the Microsoft Sentinel platform, along with practical patterns to apply when designing and implementing agent scenarios.

Need help? If you have any issues as you work to develop your agent, the App Assure team is available to assist via our Sentinel Advisory Service. Reach out via our intake form.

Resources
Learn more: For a practical overview of how ISVs can move from Sentinel data lake onboarding to building agents, see the Accelerate Agent Development blog - https://aka.ms/AppAssure_AccelerateAgentDev.
Get hands-on: Explore the end-to-end journey from Sentinel data lake onboarding to a working Security Copilot agent through the accompanying lab modules available on GitHub Repo: https://github.com/suchandanreddy/Microsoft-Sentinel-Labs
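Every use case above ultimately rests on programmatic KQL execution against the workspace. As a minimal, hedged sketch of that first step, the snippet below constructs (but does not send) a request in the shape of the public Log Analytics Query API; the workspace ID, token placeholder, and query are hypothetical, and the endpoint and body shape should be verified against current Microsoft documentation before use.

```python
import json

# Hypothetical inputs; replace with a real workspace ID and a token obtained
# for a service principal from Microsoft Entra ID.
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"
KQL = "DeviceNetworkEvents | where RemoteUrl has 'example.com' | take 10"

def build_query_request(workspace_id: str, kql: str, timespan: str = "PT24H"):
    """Construct the URL, headers, and JSON body for a KQL query request.
    Sending it (e.g., with urllib or an HTTP client) is left to the caller."""
    url = f"https://api.loganalytics.io/v1/workspaces/{workspace_id}/query"
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer <token-from-entra-id>",  # placeholder
    }
    body = json.dumps({"query": kql, "timespan": timespan})
    return url, headers, body

url, headers, body = build_query_request(WORKSPACE_ID, KQL)
print(url)
```

In an agent or playbook, the structured JSON response from this call would feed the enrichment and correlation logic described in the use cases above.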