<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>rss.livelink.threads-in-node</title>
    <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/ct-p/microsoft-sentinel</link>
    <description>rss.livelink.threads-in-node</description>
    <pubDate>Sat, 02 May 2026 07:42:17 GMT</pubDate>
    <dc:creator>microsoft-sentinel</dc:creator>
    <dc:date>2026-05-02T07:42:17Z</dc:date>
    <item>
      <title>What’s new in Microsoft Sentinel: April 2026</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-in-microsoft-sentinel-april-2026/ba-p/4516354</link>
      <description>&lt;P&gt;Welcome to the April 2026 edition of What's new in Microsoft Sentinel. April brings a broad set of updates, with &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/microsoftsentinelblog/what%E2%80%99s-new-in-microsoft-sentinel-rsac-2026/4503971" target="_blank" rel="noopener" data-lia-auto-title="RSAC 2026 announcements" data-lia-auto-title-active="0"&gt;RSAC 2026 announcements&lt;/A&gt; rolling out alongside new features. Highlights include cost limit enforcement to prevent runaway query costs, curated open-source intelligence in Threat Analytics, and new data connectors for CrowdStrike, Imperva, AWS, and Logstash. Together, these innovations help security teams control costs, stay ahead of emerging threats, and broaden visibility without added complexity.&lt;/P&gt;
&lt;P&gt;Read on to learn what's new with Sentinel.&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;What's new&lt;/STRONG&gt;&lt;/H2&gt;
&lt;H3&gt;&lt;STRONG&gt;OSINT reports in Threat Analytics [Preview]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Customers can now consume curated OSINT articles alongside Microsoft-authored Threat Analytics reports, all in one place. (OSINT, or open-source intelligence, is any information readily available to the public.) These OSINT articles come enriched, as detailed in the following list, to help security teams move quickly from awareness to action.&lt;/P&gt;
&lt;P&gt;What’s included:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Curated OSINT articles derived from trusted open-source research&lt;/LI&gt;
&lt;LI&gt;Clear summaries with links back to original sources&lt;/LI&gt;
&lt;LI&gt;Extracted indicators of compromise (IOCs)&lt;/LI&gt;
&lt;LI&gt;Mapped MITRE ATT&amp;amp;CK tactics and techniques&lt;/LI&gt;
&lt;LI&gt;Microsoft enrichment, analysis, and recommended actions (when available)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;By bringing OSINT directly into Threat Analytics, we’re reducing context switching, improving analyst efficiency, and helping customers operationalize open-source intelligence faster within their Defender workflows. &lt;A href="https://learn.microsoft.com/en-us/defender-xdr/threat-analytics" target="_blank" rel="noopener"&gt;Learn more&lt;/A&gt;.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Cost limit enforcement for KQL queries and notebooks [Preview]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Sentinel data lake cost policies do more than just send an alert when usage gets too high. You can set hard limits for KQL queries, jobs, and notebook sessions that block new work once a threshold is exceeded, eliminating surprise bills from runaway queries or heavy workloads. For example, instead of finding out about cost spikes after you run large queries against the data lake tier, enforcement stops further queries before the damage is done. Anything already running still finishes normally, and you get clear messaging about what happened and what to do next. You can lift guardrails temporarily, adjust thresholds, or disable enforcement on the fly. &lt;A href="https://techcommunity.microsoft.com/blog/microsoftsentinelblog/enforce-cost-limits-on-kql-queries-and-notebooks-in-the-microsoft-sentinel-data-/4511329" target="_blank" rel="noopener"&gt;Learn more.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Figure 1: Create cost management policies in the Microsoft Defender portal to automatically block new queries and jobs when usage limits are exceeded&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Sentinel data connectors&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;With 380 Sentinel data connectors, customers achieve broad visibility into complex digital environments and can expand their security operations effectively. Below are the latest updates.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;CrowdStrike API Connector [Generally Available]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;The CrowdStrike API Connector ingests logs from CrowdStrike APIs into Sentinel, fetching details on hosts, detections, incidents, alerts, and vulnerabilities from your CrowdStrike environment.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Imperva Cloud WAF [Preview]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;The Imperva Cloud WAF data connector ingests Imperva logs into Sentinel through AWS S3 buckets, giving you visibility into web application traffic and threats detected by your Imperva deployment for monitoring, investigation, and threat hunting in Sentinel.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;AWS Elastic Load Balancer (ELB) [Preview]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;This connector allows you to ingest AWS Elastic Load Balancer (ALB, NLB, and GLB) logs into Sentinel. These logs contain detailed records for requests handled by your load balancers, including client IPs, latencies, request paths, and status codes. They are useful for monitoring traffic patterns, investigating anomalies, and ensuring security compliance.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Logstash Output Plugin [Preview]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;For organizations that rely on Logstash to collect from on-premises, legacy, or air-gapped environments, the Sentinel Logstash Output Plugin has been rebuilt in Java to align with Microsoft's Secure Future Initiative (SFI) and provide improved security and long-term maintainability. The plugin uses the Azure Monitor Logs Ingestion API with Data Collection Rules (DCRs), giving you full schema control and the ability to ingest directly into Sentinel data lake as well as standard Sentinel tables. &lt;A href="https://techcommunity.microsoft.com/blog/microsoftsentinelblog/introducing-the-new-microsoft-sentinel-logstash-output-plugin-public-preview/4508904" target="_blank" rel="noopener"&gt;Learn more.&lt;/A&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Sentinel data federation [Preview]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Sentinel data federation enables unified visibility and security analytics across federated and ingested data, without compromising data governance. Security teams can quickly query data in Microsoft Fabric, Azure Data Lake Storage (ADLS) Gen2, and Azure Databricks directly from Sentinel, no data movement required. This approach allows teams to explore data broadly through federation, then selectively ingest what matters most into Sentinel to unlock advanced detections, automation, and AI‑powered analytics. &lt;A href="https://techcommunity.microsoft.com/blog/microsoftsentinelblog/microsoft-sentinel-data-federation-expand-visibility-while-preserving-governance/4511258" target="_blank" rel="noopener"&gt;Learn more.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Figure 2: Federated data appears alongside native Sentinel tables for unified investigation and hunting&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Sentinel cost estimation tool [Preview]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Customers and partners can confidently estimate Sentinel costs using the cost estimation tool. With meter-level guidance, you can model ingestion across analytics and data lake tiers, compare retention options, and estimate compute costs. Built‑in projections of up to three years offer transparency into spend, making it easier to plan, optimize, and share estimates. &lt;A href="https://azure.microsoft.com/en-us/pricing/calculator/" target="_blank" rel="noopener"&gt;Try the Sentinel Cost Estimator.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Figure 3: A guided, meter-level Sentinel cost estimator with three-year projections helps organizations model data growth, predict spend, and plan Sentinel adoption with confidence.&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Microsoft Entra and Azure Resource Graph (ARG) connector enhancements [Preview]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;You can now enable new Entra assets (EntraDevices, EntraOrgContacts) and ARG assets (ARGRoleDefinitions) in existing asset connectors, expanding inventory coverage and powering richer, built-in graph experiences for greater visibility.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Create workbook reports directly from the data lake [Preview]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Sentinel workbooks can now run directly on the data lake using KQL, enabling you to visualize and monitor security data straight from the data lake. By selecting the data lake as the workbook data source, you can create trend analyses and executive reports.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Custom graphs&lt;/STRONG&gt;&lt;STRONG&gt; [Preview]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Custom graphs let you model relationships unique to your organization using data from Sentinel data lake, non-Microsoft sources, and federated data sources, all powered by Fabric. Instead of stitching together dozens of tables manually, you can build graphs that surface blast radius, trace attack paths, map privilege chains, and spot structural outliers like unusually broad access or anomalous email exfiltration. You can generate custom graphs using AI-assisted coding in the Microsoft Sentinel VS Code extension, persist them via a scheduled job, and access them in the graphs experience in the Defender portal. Run Graph Query Language (GQL) queries, visualize results, and interactively traverse the graph to the next hop with a single click. These graphs also provide the knowledge context that enables AI-powered agent experiences to work more effectively, speeding investigations and helping you move from disconnected alerts to confident decisions at scale. Custom graph API usage for creating and querying graphs is billed according to the Sentinel graph meter. &lt;A href="https://techcommunity.microsoft.com/blog/microsoft-security-blog/announcing-public-preview-of-custom-graphs-in-microsoft-sentinel/4507410" target="_blank" rel="noopener"&gt;Learn more.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Figure 4: Query, visualize, and traverse custom graphs with the graph experience in Sentinel&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;MCP entity analyzer [Generally Available]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Entity analyzer provides reasoned, out-of-the-box risk assessments that help you quickly understand whether a URL or user identity represents potential malicious activity. It analyzes data across threat intelligence, prevalence, and organizational context to generate clear, explainable verdicts you can trust. Entity analyzer integrates with your agents through Sentinel MCP server connections to first-party and third-party AI runtime platforms, or with your SOAR workflows through Logic Apps. It also serves as a trusted foundation for the Defender Triage Agent, delivering more accurate alert classifications and deeper investigative reasoning. Entity analyzer is billed based on Security Compute Units (SCU) consumption. &lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/sentinel-mcp-data-exploration-tool#entity-analyzer" target="_blank" rel="noopener"&gt;Learn more about entity analyzer&lt;/A&gt; and &lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/sentinel-mcp-billing" target="_blank" rel="noopener"&gt;MCP billing&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Figure 5: Entity analyzer delivers explainable, multi-signal risk assessments for URLs and user identities directly within your investigation workflow&lt;/EM&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Sentinel MCP graph tool collection [Preview]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;The graph tool collection helps security teams visualize and explore relationships among identities, device assets, threats, and activity signals that are ingested by data connectors and surfaced as alerts by analytics rules. The tool provides a clear graph view that highlights dependencies and configuration gaps, which makes it easier to understand interactions across environments. It helps security teams assess coverage, optimize content deployment, and identify areas that may need tuning or additional data sources, all from a single, interactive workspace. Executing graph queries via the MCP tools triggers the graph meter.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Claude MCP connector [Preview]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Anthropic Claude can connect to Sentinel through a custom MCP connector, giving you AI-assisted analysis across your Sentinel environment. Microsoft provides step-by-step guidance for configuring a custom connector in Claude that securely connects to a Sentinel MCP server. With this connection you can summarize incidents, investigate alerts, and reason over security signals while keeping data inside Microsoft's security boundary. Access to large language models (LLMs) is managed through Microsoft authentication and role-based controls, supporting faster triage and investigation workflows while maintaining compliance and visibility.&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;CVEs of interest in the Threat Intelligence Briefing Agent [Preview]&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;The Threat Intelligence Briefing Agent delivers curated intelligence based on your organization’s configuration, preferences, and unique industry and geographic needs. The agent surfaces Common Vulnerabilities and Exposures (CVEs) of interest, highlighting vulnerabilities actively discussed across the security landscape and assessing their potential impact on your environment for more timely threat intelligence insights. The agent automatically incorporates internet exposure data powered by the Sentinel platform to surface threats targeting technologies exposed in your organization. Together, these enhancements help you focus faster on the threats that matter most, without manual investigation.&lt;/P&gt;
&lt;H2&gt;&lt;STRONG&gt;Additional resources&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Blogs and documentation:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Featured blog:&lt;/STRONG&gt; &lt;A href="https://aka.ms/SentinelAdvisoryService" target="_blank" rel="noopener"&gt;App Assure launches its Sentinel Advisory Service&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/AppAssure_AgenticUseCases" target="_blank" rel="noopener"&gt;Agentic use cases for developers on Microsoft Sentinel&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/blog/microsoft-security-blog/the-unified-secops-transition-%E2%80%94-why-it-is-a-security-architecture-decision-not-j/4513815" target="_blank" rel="noopener"&gt;The Unified SecOps Transition: Why It Is a Security Architecture Decision, Not Just a Portal Change&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://aka.ms/DefenderNews" target="_blank" rel="noopener"&gt;What's new in Microsoft Defender – April 2026&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Webinars and training:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Featured webinar:&lt;/STRONG&gt; &lt;A href="https://info.microsoft.com/ww-landing-powering-the-agentic-soc.html?lcid=en-us" target="_blank" rel="noopener"&gt;Powering the Agentic SOC with Scott Woodgate, General Manager, Microsoft Threat Protection&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Featured training:&lt;/STRONG&gt; &lt;A href="https://techcommunity.microsoft.com/blog/microsoftsentinelblog/introducing-the-microsoft-sentinel-training-lab-hands-on-security-operations-in-/4513274" target="_blank" rel="noopener"&gt;Introducing the Microsoft Sentinel Training Lab. Hands-On Security Operations in Minutes&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.youtube.com/watch?v=xJTw_Q2WVD8" target="_blank" rel="noopener"&gt;Beyond KQL – Unlocking SOC Insights with Sentinel data lake Jupyter Notebooks&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://forms.office.com/r/ha21YfCgaR" target="_blank" rel="noopener"&gt;Hyper scale your SOC: Manage delegated access and role-based scoping in Microsoft Defender&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;STRONG&gt;Stay connected&lt;/STRONG&gt;&lt;/H2&gt;
&lt;P&gt;Check back each month for the latest innovations, updates, and events to ensure you’re getting the most out of &lt;A href="https://aka.ms/microsoftsentinel" target="_blank" rel="noopener"&gt;Microsoft Sentinel&lt;/A&gt;. We’ll see you in the next edition!&lt;/P&gt;</description>
      <pubDate>Thu, 30 Apr 2026 20:59:31 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-in-microsoft-sentinel-april-2026/ba-p/4516354</guid>
      <dc:creator>vkokkengada</dc:creator>
      <dc:date>2026-04-30T20:59:31Z</dc:date>
    </item>
    <item>
      <title>Use Data Wrangler to Streamline Your Microsoft Sentinel data lake Notebook Development</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/use-data-wrangler-to-streamline-your-microsoft-sentinel-data/ba-p/4490214</link>
      <description>&lt;P&gt;One of the many exciting features of the &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/sentinel-lake-overview" target="_blank" rel="noopener"&gt;Microsoft Sentinel data lake&lt;/A&gt; is a built-in &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/notebooks" target="_blank" rel="noopener"&gt;advanced analytics engine&lt;/A&gt;, powered by &lt;A class="lia-external-url" href="https://spark.apache.org/" target="_blank" rel="noopener"&gt;Apache Spark&lt;/A&gt;. This Spark cluster has access to data that is within Sentinel data lake, and can work with this data through Jupyter notebooks in Visual Studio Code. As with any coding effort, creating the right data set can be an iterative process, and sometimes making those changes purely through code can be a little tricky. Wouldn't it be great if you could visualize the distribution of your data, apply some actions to shape and refine it, and then translate those actions to code? Well, you can do that with the &lt;A class="lia-external-url" href="https://code.visualstudio.com/docs/datascience/data-wrangler" target="_blank" rel="noopener"&gt;Data Wrangler&lt;/A&gt; extension in VSCode in conjunction with the Sentinel data lake's &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/sentinel-provider-class-reference" target="_blank" rel="noopener"&gt;MicrosoftSentinelProvider&lt;/A&gt; class. This blog will walk you through how to enable Data Wrangler in VSCode, how to use some of its functionality, and incorporating refinement actions back into your data lake notebook.&lt;/P&gt;
&lt;H1&gt;Scenario&lt;/H1&gt;
&lt;P&gt;The DataFrame we build will be sourced from the SigninLogs table and used later as input to an algorithm. We need to clean up some of the columns by replacing missing values with default values, removing rows that meet certain criteria, and creating some categorical columns for later machine learning tasks.&lt;/P&gt;
&lt;H1&gt;Initial DataFrame&lt;/H1&gt;
&lt;P&gt;An essential data structure that you use in Jupyter notebooks is a DataFrame. A DataFrame is an in-memory representation of your data, like a database table that has columns and rows.&lt;/P&gt;
&lt;P&gt;Let's start with a basic DataFrame that contains some sign-in events from the SigninLogs table in the data lake. The returned data is useful, but for our later investigations we will need to "clean" the data by removing some missing values, renaming columns, creating true/false columns for analysis, and some other operations. In our notebook cell, we'll perform the following actions.&lt;/P&gt;
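&lt;P&gt;To make the idea concrete before we query the data lake, here's a minimal sketch using the open-source Pandas library. The sample column values below are made up for illustration; they are not real SigninLogs data.&lt;/P&gt;

```python
import pandas as pd

# A DataFrame is in-memory tabular data: named columns, one row per record.
events = pd.DataFrame(
    {
        "UserPrincipalName": ["alice@contoso.com", "bob@contoso.com"],
        "AppDisplayName": ["Azure Portal", "My Profile"],
        "ErrorCode": [0, 50126],
    }
)

print(events.shape)          # (2, 3): two rows, three columns
print(list(events.columns))  # the column names, like a database table's schema
```

&lt;P&gt;The PySpark DataFrames returned by the data lake provider behave similarly, although they are evaluated lazily and distributed across the Spark cluster.&lt;/P&gt;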
&lt;H2&gt;Initial Includes&lt;/H2&gt;
&lt;P&gt;Before you can use the Sentinel data lake in your notebook, you need to include the proper class from the sentinel_lake.providers module. This module contains a class named MicrosoftSentinelProvider that provides functions that let you read from and write to the data lake. We also will be using a few other Python libraries in our example, and this would look like the following:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;from sentinel_lake.providers import MicrosoftSentinelProvider
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
import pandas as pd
from datetime import datetime, timedelta&lt;/LI-CODE&gt;
&lt;H2&gt;Variable Definitions&lt;/H2&gt;
&lt;P&gt;Our sample will pull the last 30 days of SigninLogs from the data lake to assist with the investigation. This time window is defined as a variable once in the notebook and can be reused elsewhere if needed. The same is done for the name of the data lake workspace that will be queried: the read_table and save_as_table functions take the workspace name as a parameter, and defining it once avoids typos across multiple calls.&lt;/P&gt;
&lt;P&gt;There is also a very important step here: instantiating our connection to the Sentinel data lake. The "spark" variable we pass to the MicrosoftSentinelProvider class is a global variable representing your Spark session. The resulting sentinel_provider variable exposes the read_table and save_as_table functions that enable reading from and writing to the data lake.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;one_month_ago = datetime.now() - timedelta(days=30) 
workspaceName = "YOUR_WORKSPACE_NAME" 
sentinel_provider = MicrosoftSentinelProvider(spark)&lt;/LI-CODE&gt;
&lt;P&gt;Replace "YOUR_WORKSPACE_NAME" with the name of the Sentinel workspace that you will be working with in the data lake.&lt;/P&gt;
&lt;H2&gt;Complex Type Definitions&lt;/H2&gt;
&lt;P&gt;Part of our query of SigninLogs will return complex types that contain name/value pairs. The LocationDetails and Status columns have nested values like city and state for LocationDetails and errorCode and failureReason for Status. To be able to easily access those nested values, the use of a &lt;A class="lia-external-url" href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.types.StructType.html" target="_blank" rel="noopener"&gt;StructType&lt;/A&gt; allows us to define that structure and we'll use this when retrieving the DataFrame.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;location_schema = StructType(
    [
        StructField("city", StringType(), True),
        StructField("state", StringType(), True),
        StructField("countryOrRegion", StringType(), True),
    ]
)

status_schema = StructType(
    [
        StructField("errorCode", IntegerType(), True),
        StructField("failureReason", StringType(), True),
        StructField("additionalDetails", StringType(), True),
    ]
)&lt;/LI-CODE&gt;
&lt;H2&gt;DataFrame Definition&lt;/H2&gt;
&lt;P&gt;We now have the parts needed to create a DataFrame covering the last 30 days of data from the SigninLogs table in the lake. The code uses our time variable as a filter on TimeGenerated, selects the handful of columns we want returned, breaks down the complex types using the StructTypes defined earlier, and retrieves those nested values as individual DataFrame columns.&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;signin_events_df = (
    sentinel_provider.read_table("SigninLogs", workspaceName)
    .filter(col("TimeGenerated") &amp;gt;= one_month_ago) 
    .filter(col("UserPrincipalName") != "")  
    .select(
        col("TimeGenerated"),
        col("AppDisplayName"),
        col("IPAddress"),
        col("IsRisky"),
        col("RiskState"),
        col("RiskLevelAggregated"),
        col("RiskLevelDuringSignIn"),
        col("ConditionalAccessStatus"),
        col("ClientAppUsed"),
        col("IsInteractive"),
        col("UserType"),
        col("MfaDetail"),
        col("LocationDetails"),
        col("Status"),
    )  
    .withColumn("loc", from_json(col("LocationDetails"), location_schema))
    .withColumn("status", from_json(col("Status"), status_schema))
    .select(
        "*",
        col("loc.city").alias("City"),
        col("loc.state").alias("State"),
        col("loc.countryOrRegion").alias("Country"),
        col("status.errorCode").alias("ErrorCode"),
        col("status.failureReason").alias("FailureReason"),
        col("status.additionalDetails").alias("AdditionalDetails"),
    )
    .drop("loc", "status")
)&lt;/LI-CODE&gt;
&lt;H2&gt;Final Code (for now)&lt;/H2&gt;
&lt;P&gt;Putting all of these steps together, the imports, variable definitions, schema definitions, and the DataFrame definition, gives us a single cell that retrieves the last 30 days of SigninLogs into a DataFrame.&lt;/P&gt;
&lt;P&gt;Running that cell and then calling show() on the resulting DataFrame prints the rows as a plain text table.&lt;/P&gt;
&lt;P&gt;It's great data, but not the most visually appealing. It would be nice to have a cleaner looking table. That's where Data Wrangler can help right away.&lt;/P&gt;
&lt;H1&gt;Install Data Wrangler&lt;/H1&gt;
&lt;P&gt;Data Wrangler is a VSCode extension that's published by Microsoft. You can find it from the VSCode Marketplace by searching for "Data Wrangler". Installing the extension is quick and only requires Python 3.8 or higher to be installed on your machine.&lt;/P&gt;
&lt;img /&gt;
&lt;H1&gt;Data Wrangler View of a DataFrame&lt;/H1&gt;
&lt;P&gt;Data Wrangler, by default, works natively with &lt;A class="lia-external-url" href="https://pandas.pydata.org/" target="_blank" rel="noopener"&gt;Pandas&lt;/A&gt; DataFrames. Pandas is an open-source Python library that is very popular with data scientists for data analysis and manipulation. When working with the MicrosoftSentinelProvider class, the DataFrame returned is a PySpark DataFrame. We can easily convert a PySpark DataFrame to a Pandas DataFrame by calling `.toPandas()` on it.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;That's a much cleaner looking table. Clicking the ellipsis in the bottom right of the table and selecting "Show column insights" changes the view to provide a quick glance of the distribution of the data:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Now, just by glancing at the column headers, you can quickly assess the distribution of data in the DataFrame. You can see that 7% of conditional access attempts failed, that a number of sign-in events were for Security Copilot, and that 30% of the sign-in events came from just three IP addresses.&lt;/P&gt;
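&lt;P&gt;If you want the same kind of distribution summary in code, Pandas can compute it directly. Here's a small sketch with made-up sign-in values (this is not the column insights feature itself, just the equivalent calculation):&lt;/P&gt;

```python
import pandas as pd

# Hypothetical sample rows standing in for the converted SigninLogs DataFrame.
df = pd.DataFrame(
    {
        "ConditionalAccessStatus": ["success", "success", "failure", "success"],
        "IPAddress": ["10.0.0.1", "10.0.0.1", "10.0.0.2", "10.0.0.3"],
    }
)

# Share of each value in a column, similar to Data Wrangler's column insights.
print(df["ConditionalAccessStatus"].value_counts(normalize=True))

# Top IP addresses by share of sign-in events.
print(df["IPAddress"].value_counts(normalize=True).head(3))
```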
&lt;H1&gt;Wrangling Your Data&lt;/H1&gt;
&lt;P&gt;A cleaner table view with data distribution statistics is nice, but the real power of Data Wrangler is that it lets you shape and refine your data for use elsewhere in your notebook. In the simple DataFrame we have created, let's perform some data cleansing steps so that you can more easily filter and join this DataFrame with other DataFrames later in the analysis. On first glance at the DataFrame, there are a few data cleansing tasks to perform, namely:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Remove rows that have non-usable UserType values of -1&lt;/LI&gt;
&lt;LI&gt;Create a true/false column for whether the user is a Member or Guest, and drop the original UserType column&lt;/LI&gt;
&lt;LI&gt;Fill in column values that have missing data with a default value&lt;/LI&gt;
&lt;LI&gt;Filter out sign ins to the My Profile page&lt;/LI&gt;
&lt;/UL&gt;
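&lt;P&gt;For reference, the Pandas code that Data Wrangler generates for steps like these looks roughly as follows. This is a hedged sketch over made-up sample values, not the exact output Data Wrangler will produce:&lt;/P&gt;

```python
import pandas as pd

# Illustrative sample standing in for the converted SigninLogs DataFrame.
df = pd.DataFrame(
    {
        "UserType": ["Member", "Guest", "-1", "Member"],
        "AppDisplayName": ["Azure Portal", "My Profile", "Azure Portal", "Office 365"],
        "FailureReason": ["Other.", None, None, "Invalid password."],
    }
)

# Task 1: remove rows with a non-usable UserType of -1.
df = df[df["UserType"] != "-1"]

# Task 2: one-hot encode UserType into 0/1 columns, then rename them;
# get_dummies drops the original UserType column for us.
df = pd.get_dummies(df, columns=["UserType"], dtype=int)
df = df.rename(columns={"UserType_Member": "IsMember", "UserType_Guest": "IsGuest"})

# Task 3: fill missing values with a default.
df["FailureReason"] = df["FailureReason"].fillna("N/A")

# Task 4: filter out sign-ins to the My Profile page.
df = df[df["AppDisplayName"] != "My Profile"]

print(df)
```

&lt;P&gt;The walkthrough below performs each of these steps interactively, and Data Wrangler previews the generated code for you as you go.&lt;/P&gt;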
&lt;P&gt;Let's get started by opening Data Wrangler by clicking the Data Wrangler icon in the lower left corner of the DataFrame.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Data Wrangler will open in a new tab in VS Code. There's a lot going on in this tab: the left-hand pane has sections for an operations toolbox, a data summary panel that lists some stats about your DataFrame, and a cleaning steps list that keeps track of the changes you have made. The rest of the page is split in two, with the DataFrame view taking up the majority of the real estate and the operation preview pane at the bottom.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;We'll spend most of our time in the operations pane, but we'll also use the operation preview pane to do some additional tasks. Let's dive in.&lt;/P&gt;
&lt;H3&gt;Task 1: Remove Rows&lt;/H3&gt;
&lt;P&gt;Looking at the DataFrame grid, I can see the UserType column has some rows with a value of "-1". I don't want those in my DataFrame, so we can remove them using a filter.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Selecting &lt;STRONG&gt;Filter &lt;/STRONG&gt;in the Operations panel allows me to enter my criteria. I want to exclude rows that have a "-1" for UserType. I'll enter that, and after a few seconds the DataFrame updates, allowing me to preview the change.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;I unchecked the "Keep matching rows" checkbox, so my filter is excluding rows that match my criteria of UserType "Equal to" the value "-1". In the DataFrame, UserType is highlighted and I see that -1 is now not part of the DataFrame. Below the DataFrame, in the operation preview, I can see the Python code that makes this change. And in the&lt;STRONG&gt; Cleaning Steps&lt;/STRONG&gt; pane, I see my Filter step is present. I can accept this change by clicking the &lt;STRONG&gt;Apply &lt;/STRONG&gt;button in the Operations pane.&lt;/P&gt;
&lt;P&gt;Once I do that, my DataFrame is updated with my Filter operation. Everything being done by Data Wrangler is done in a sandbox, so these steps do not affect my original DataFrame...at least not yet. (We'll get to that.) Let's make a few more changes.&lt;/P&gt;
&lt;H3&gt;Task 2: One-Hot Encoded Columns&lt;/H3&gt;
&lt;P&gt;I want to be able to filter on UserType later on in my notebook, but I don't want to do string comparisons. I'd rather filter on a simple binary column. That's where one-hot columns are useful. I'd like to have a column for IsMember and one for IsGuest. Each column will be a 0 or a 1 (false or true), which lets me filter quickly instead of doing string comparisons. Let's create those columns.&lt;/P&gt;
&lt;P&gt;In the Operations pane, expand Formulas and select &lt;STRONG&gt;One-hot encode&lt;/STRONG&gt;. The panel will switch so you can enter the column you're targeting. Select UserType, and in a few seconds, you'll see your DataFrame update with a preview of the new columns.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Notice the new columns created (UserType_Guest and UserType_Member) are in green. The UserType column is in red and will be dropped. Clicking &lt;STRONG&gt;Apply &lt;/STRONG&gt;accepts these changes, and you'll see the updated DataFrame.&lt;/P&gt;
&lt;P&gt;You can rename the new columns by selecting the &lt;STRONG&gt;Rename column&lt;/STRONG&gt; operation under Schema. In this case, we'll rename the new columns to be IsMember and IsGuest, and accept the changes. Your Data Wrangler tab should look similar to the below image.&lt;/P&gt;
&lt;img /&gt;
&lt;H3&gt;Task 3: Provide Default Values for Missing Data&lt;/H3&gt;
&lt;P&gt;Scanning through the DataFrame, we can see that the FailureReason and AdditionalDetails columns have a number of missing values. We would rather have an explicit value in those cells than leave them empty. Filling in defaults for missing values is another built-in operation. Under Find and Replace in the Operations pane, select &lt;STRONG&gt;Fill missing values&lt;/STRONG&gt;. You can set a default value for multiple columns at once with this operation.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;I'm setting the same default value ("N/A") for both columns in one operation. The columns in red are the old values; the columns in green are the new values. Again, if this looks good, hit the &lt;STRONG&gt;Apply &lt;/STRONG&gt;button and the DataFrame is updated.&lt;/P&gt;
&lt;H3&gt;Task 4: Use Copilot to Create Operations&lt;/H3&gt;
&lt;P&gt;One last update we wanted to make was to filter out rows where the target application was "My Profile". We created a filter operation earlier, but this time we'll use Copilot to generate the operation. In the Operation Preview pane, below your DataFrame, there's a text box where you can type a prompt. Enter something like "For the column AppDisplayName, filter out the rows where the value is equal to My Profile". Hit Enter, and Copilot will think for a few seconds, then display the code in the preview pane along with a modal dialog stating that the preview is paused. Since this change was generated by Copilot, you need to review the code before accepting it.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;If the code looks good, click the &lt;STRONG&gt;Run code&lt;/STRONG&gt; link in the modal and your DataFrame will go back to preview mode. You'll see the filtered-out rows highlighted, and if everything looks good, click &lt;STRONG&gt;Apply &lt;/STRONG&gt;to accept the operation.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Using Copilot to create operations is helpful when you know what you want to do but aren't sure what the operation is called, such as one-hot encoding. Just be sure to examine the generated code before accepting it.&lt;/P&gt;
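&lt;P&gt;For reference, the code Copilot produces for a prompt like this is typically a one-line pandas filter in the same style as the earlier Filter step. A hypothetical example (with made-up sample data) of what you'd be reviewing:&lt;/P&gt;

```python
import pandas as pd

# Made-up sample rows; the real DataFrame comes from the sign-in logs.
df = pd.DataFrame({"AppDisplayName": ["My Profile", "Azure Portal", "My Profile"],
                   "UserName": ["alice", "bob", "carol"]})

# Filter out rows where AppDisplayName equals "My Profile" (illustrative of the
# kind of code Copilot generates; always review the actual output).
df = df[~(df["AppDisplayName"] == "My Profile")]
```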
&lt;H1&gt;Applying the Changes to Our Notebook&lt;/H1&gt;
&lt;P&gt;We've created a number of operations and our DataFrame looks great, but how can we translate these operations back to our original notebook? Data Wrangler makes that easy by allowing you to export your operations back into the source notebook.&lt;/P&gt;
&lt;P&gt;Once you're satisfied with your changes, click the Export to notebook button above your DataFrame.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;This action takes all of the operations you created and creates a new cell in your Jupyter notebook, right below the one where you launched the Data Wrangler tab. Your operations are wrapped in a local function, and a copy of your DataFrame is passed to that function. The result is a new DataFrame that you can work with throughout the rest of your notebook.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Since this is all code, you can change variable names or even the structure of the generated code. Personally, I like to rename the DataFrames from the generic "df" and "df_clean" to something more descriptive, and give the local function a clearer name as well. That way, if others work on the same notebook, they can more easily understand what the code is doing. It may look like this:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;def clean_signin_info(df):
    # Filter rows based on column: 'UserType'
    df = df[~(df["UserType"] == "-1")]
    # One-hot encode column: 'UserType'
    insert_loc = df.columns.get_loc("UserType")
    df = pd.concat(
        [
            df.iloc[:, :insert_loc],
            pd.get_dummies(df.loc[:, ["UserType"]]),
            df.iloc[:, insert_loc + 1 :],
        ],
        axis=1,
    )
    # Rename column 'UserType_Guest' to 'IsGuest'
    df = df.rename(columns={"UserType_Guest": "IsGuest"})
    # Rename column 'UserType_Member' to 'IsMember'
    df = df.rename(columns={"UserType_Member": "IsMember"})
    # Replace missing values with "N/A" in columns: 'FailureReason', 'AdditionalDetails'
    df = df.fillna({"FailureReason": "N/A", "AdditionalDetails": "N/A"})
    return df


signin_events_pandas_df = signin_events_df.toPandas()
cleaned_signin_events_df = clean_signin_info(signin_events_pandas_df)
cleaned_signin_events_df.head()&lt;/LI-CODE&gt;
&lt;P&gt;And my resulting DataFrame will have all of my cleaning steps applied.&lt;/P&gt;
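&lt;P&gt;If you want to confirm the steps landed, a few quick assertions in the next notebook cell can act as a sanity check. The frame below is a hypothetical stand-in; in your notebook you'd run the same checks against cleaned_signin_events_df:&lt;/P&gt;

```python
import pandas as pd

# Hypothetical stand-in for the cleaned DataFrame produced by the exported cell.
cleaned = pd.DataFrame({
    "IsMember": [1, 0],
    "IsGuest": [0, 1],
    "FailureReason": ["N/A", "MFA denied"],
    "AdditionalDetails": ["N/A", "N/A"],
})

# One check per cleaning step:
assert "UserType" not in cleaned.columns                   # one-hot encoding dropped the source column
assert {"IsMember", "IsGuest"}.issubset(cleaned.columns)   # renamed binary columns exist
assert cleaned[["FailureReason", "AdditionalDetails"]].notna().all().all()  # defaults filled
```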
&lt;H1&gt;Start Using Data Wrangler Today&lt;/H1&gt;
&lt;P&gt;You can start using Data Wrangler with your Microsoft Sentinel data lake notebooks today and explore everything it can do with your data. The Data Wrangler extension is free in the VS Code Marketplace, and it works well alongside the Microsoft Sentinel extension you already use for your data lake notebook tasks. Install it and start wrangling the data lake. Happy wrangling!&lt;/P&gt;
&lt;H1&gt;Resources&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/notebooks" target="_blank" rel="noopener"&gt;Running notebooks on the Microsoft Sentinel data lake - Microsoft Security | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/sentinel-provider-class-reference" target="_blank" rel="noopener"&gt;Microsoft Sentinel data lake Microsoft Sentinel Provider class reference | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://code.visualstudio.com/docs/datascience/data-wrangler" target="_blank" rel="noopener"&gt;Getting Started with Data Wrangler in VS Code&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://youtu.be/xJTw_Q2WVD8" target="_blank" rel="noopener"&gt;Beyond KQL: Unlocking SOC Insights With Sentinel data lake Jupyter Notebooks | Microsoft Virtual Ninja Training&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 29 Apr 2026 19:18:17 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/use-data-wrangler-to-streamline-your-microsoft-sentinel-data/ba-p/4490214</guid>
      <dc:creator>David Hoerster</dc:creator>
      <dc:date>2026-04-29T19:18:17Z</dc:date>
    </item>
    <item>
      <title>Introducing the Microsoft Sentinel Training Lab. Hands-On Security Operations in Minutes</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/introducing-the-microsoft-sentinel-training-lab-hands-on/ba-p/4513274</link>
      <description>&lt;P&gt;&lt;STRONG&gt;A huge thanks to&amp;nbsp;Paul Kew - this lab wouldn't have been possible without his contributions.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Security operations is one of those things that’s hard to learn from slides alone. You need to&amp;nbsp;&lt;EM&gt;feel&lt;/EM&gt; what it’s like to triage a multi-stage incident, tune a noisy detection rule, or trace an attacker pivoting from an endpoint to the cloud. That’s exactly why we built the &lt;STRONG&gt;Microsoft Sentinel Training Lab&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H2&gt;What Is It?&lt;/H2&gt;
&lt;P&gt;The &lt;A class="lia-external-url" href="https://github.com/Azure/Azure-Sentinel/blob/master/Tools/Microsoft-Sentinel-Training-Lab/README.md" target="_blank" rel="noopener"&gt;Sentinel Training Lab&lt;/A&gt; is an open-source, deploy-in-minutes training environment that gives you a fully functional Microsoft Sentinel workspace loaded with realistic attack telemetry. One click deploys everything - pre-recorded data from six different security products, custom detection rules that fire real incidents, workbooks, watchlists, and playbooks.&lt;/P&gt;
&lt;P&gt;No need to set up agents, configure connectors, or simulate attacks yourself. The lab does all of that for you so you can focus on what matters: &lt;STRONG&gt;learning how to detect, investigate, and respond&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H2&gt;What’s Inside?&lt;/H2&gt;
&lt;P&gt;The lab simulates a &lt;STRONG&gt;multi-stage attack&lt;/STRONG&gt; that spans six data sources — just like what a real SOC analyst would encounter:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;CrowdStrike&lt;/STRONG&gt; — endpoint detections (malware execution, credential dumping)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Palo Alto Networks&lt;/STRONG&gt; — firewall logs (port scans, data exfiltration, C2 traffic)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Okta&lt;/STRONG&gt; — identity events (account takeover, MFA manipulation)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;AWS CloudTrail&lt;/STRONG&gt; — cloud activity (IAM escalation, backdoor accounts)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;GCP Audit Logs&lt;/STRONG&gt; — cloud infrastructure abuse (service account creation, firewall changes)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;MailGuard365&lt;/STRONG&gt; — email security (phishing campaigns bypassing filters)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;All of this data feeds into &lt;STRONG&gt;22 custom detection rules&lt;/STRONG&gt; that automatically generate a unified, multi-stage incident in Microsoft Defender XDR - with correlated alerts, entity graphs, and a full kill chain mapped to MITRE ATT&amp;amp;CK.&lt;/P&gt;
&lt;img /&gt;
&lt;H2&gt;The Exercises&lt;/H2&gt;
&lt;P&gt;The lab comes with &lt;STRONG&gt;16 guided exercises&lt;/STRONG&gt; covering the full spectrum of security operations:&lt;/P&gt;
&lt;H3&gt;Getting Started&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Onboarding&lt;/STRONG&gt; — Set up your workspace and deploy the lab in under 30 minutes&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 1&lt;/STRONG&gt; — Explore your data with Advanced Hunting and create your first detection rule&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 2&lt;/STRONG&gt; — Enable Microsoft Defender Threat Intelligence and query IOCs&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 3&lt;/STRONG&gt; — Visualise your detection coverage on the MITRE ATT&amp;amp;CK matrix&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 4&lt;/STRONG&gt; — Automate incident enrichment with automation rules&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Detection Engineering&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 5&lt;/STRONG&gt; — Cross-platform device isolation (CrowdStrike alert → MDE response)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 6&lt;/STRONG&gt; — Tune port scan detection thresholds&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 7&lt;/STRONG&gt; — Detect Okta MFA factor manipulation&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 8&lt;/STRONG&gt; — Enrich detections with watchlists&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Operations &amp;amp; Cost Management&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 9&lt;/STRONG&gt; — Monitor ingestion costs and configure threshold policies&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 10&lt;/STRONG&gt; — Manage table tiers and retention settings&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Data Lake&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 11&lt;/STRONG&gt; — Create KQL jobs to aggregate data lake telemetry&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 12&lt;/STRONG&gt; — Compare real-time vs data lake detection approaches&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 13&lt;/STRONG&gt; — Interactive Jupyter notebook investigations&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Advanced&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 14&lt;/STRONG&gt; — 10 AI-powered prompts demonstrating the Sentinel MCP Server&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 15&lt;/STRONG&gt; — Federate external data from ADLS Gen2&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Exercise 16&lt;/STRONG&gt; — Split transformation to route data between Analytics and data lake tiers&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;What Makes This Different?&lt;/H2&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 98.1481%; height: 357.334px; border-width: 1px;"&gt;&lt;thead&gt;&lt;tr style="height: 38.6667px;"&gt;&lt;td style="height: 38.6667px;"&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Feature&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.6667px;"&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;What it means for you&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr style="height: 38.6667px;"&gt;&lt;td style="height: 38.6667px;"&gt;
&lt;P&gt;&lt;STRONG&gt;One-click deployment&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.6667px;"&gt;
&lt;P&gt;Deploy to Azure button — no manual configuration&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.6667px;"&gt;&lt;td style="height: 38.6667px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Realistic multi-source data&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.6667px;"&gt;
&lt;P&gt;Six security products generating correlated incidents&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.6667px;"&gt;&lt;td style="height: 38.6667px;"&gt;
&lt;P&gt;&lt;STRONG&gt;22 detection rules&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.6667px;"&gt;
&lt;P&gt;Pre-built rules that fire real XDR incidents with entity mapping&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.6667px;"&gt;&lt;td style="height: 38.6667px;"&gt;
&lt;P&gt;&lt;STRONG&gt;MITRE ATT&amp;amp;CK coverage&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.6667px;"&gt;
&lt;P&gt;10 tactics covered across the attack chain&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.6667px;"&gt;&lt;td style="height: 38.6667px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Data lake exercises&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.6667px;"&gt;
&lt;P&gt;KQL jobs, notebooks, federation, and split transformations&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.6667px;"&gt;&lt;td style="height: 38.6667px;"&gt;
&lt;P&gt;&lt;STRONG&gt;MCP Server prompts&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.6667px;"&gt;
&lt;P&gt;AI-powered investigation with GitHub Copilot&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 86.667px;"&gt;&lt;td style="height: 86.667px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Cost management&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 86.667px;"&gt;
&lt;P&gt;Threshold policies and tier optimisation guidance&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.0287%" /&gt;&lt;col style="width: 50.0287%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H2&gt;Getting Started&lt;/H2&gt;
&lt;P&gt;Ready to try it? Here’s all you need:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;An Azure subscription (free trial works)&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Owner or Contributor role on the subscription&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;A Microsoft Sentinel workspace onboarded to Defender XDR&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Then head to the repo, follow the Onboarding guide, and click &lt;STRONG&gt;Deploy to Azure&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;The deployment takes about 30 minutes. After that, your workspace will have ingested data, detection rules firing incidents, and all 16 exercises ready to go.&lt;/P&gt;
&lt;img /&gt;
&lt;H2&gt;Open Source &amp;amp; Community&lt;/H2&gt;
&lt;P&gt;The entire lab is open source under the &lt;A href="https://github.com/Azure/Azure-Sentinel" target="_blank" rel="noopener"&gt;Azure/Azure-Sentinel&lt;/A&gt; repository. Contributions, feedback, and ideas are welcome. If you find something that could be better, open an issue or submit a PR.&lt;/P&gt;
&lt;P&gt;We built this lab because we believe the best way to learn security operations is by doing. We hope it helps you — whether you’re defending your first tenant or your hundredth.&lt;/P&gt;
&lt;P&gt;Get started now -&amp;nbsp;&lt;STRONG&gt;&lt;A href="https://github.com/Azure/Azure-Sentinel/blob/master/Tools/Microsoft-Sentinel-Training-Lab/README.md" target="_blank"&gt;Azure-Sentinel/Tools/Microsoft-Sentinel-Training-Lab/README.md at master · Azure/Azure-Sentinel&lt;/A&gt;&lt;/STRONG&gt;&lt;/P&gt;
      <pubDate>Thu, 23 Apr 2026 08:50:24 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/introducing-the-microsoft-sentinel-training-lab-hands-on/ba-p/4513274</guid>
      <dc:creator>AndreasKapetaniou</dc:creator>
      <dc:date>2026-04-23T08:50:24Z</dc:date>
    </item>
    <item>
      <title>Sentinel RBAC in the Unified portal: who has activated Unified RBAC, and how did it go?</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/sentinel-rbac-in-the-unified-portal-who-has-activated-unified/m-p/4513181#M12923</link>
      <description>&lt;P&gt;Following the RSAC 2026 announcements last month, I have been working through the full permission picture for the Unified portal and wanted to open a discussion here given how much has shifted in a short period.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;A quick framing of where things stand. The baseline is still that Azure RBAC carries across for Sentinel SIEM access when you onboard, no changes required. But there are now two significant additions in public preview: Unified RBAC for Sentinel SIEM itself (extending the Defender Unified RBAC model to cover Sentinel directly), and a new Defender-native GDAP model for non-CSP organisations managing delegated access across tenants.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The GDAP piece in particular is worth discussing carefully, because I want to be precise about what has and has not changed. The existing limitation from Microsoft's onboarding documentation, that GDAP with Azure Lighthouse is not supported for Sentinel data in the Defender portal, has not changed. What is new is a separate, Defender-portal-native GDAP mechanism announced at RSAC, which is a different thing. These are not the same capability. If you were using Entra B2B as the interim path based on earlier guidance, that guidance was correct and that path remains the generally available option today.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;A few things I would genuinely like to hear from practitioners:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;For those who have activated Unified RBAC for a Sentinel workspace in the Defender portal: what did the migration from Azure RBAC roles look like in practice? 
Did the import function bring roles across cleanly, or did you find gaps particularly around custom roles?&lt;/LI&gt;&lt;LI&gt;For environments using Playbook Operator, Automation Contributor, or Workbook Contributor role assignments: how are you handling the fact those three roles are not yet in Unified RBAC and still require Azure portal management? Is the dual-management posture creating operational friction?&lt;/LI&gt;&lt;LI&gt;For MSSPs evaluating the new Defender-native GDAP model against their existing Entra B2B setup: what factors are driving the decision either way at your scale?&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Writing this up as Part 3 of the migration series and the community experience here is directly useful for making sure the practitioner angle is grounded.&lt;/P&gt;</description>
      <pubDate>Tue, 21 Apr 2026 06:04:28 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/sentinel-rbac-in-the-unified-portal-who-has-activated-unified/m-p/4513181#M12923</guid>
      <dc:creator>AnthonyPorter</dc:creator>
      <dc:date>2026-04-21T06:04:28Z</dc:date>
    </item>
    <item>
      <title>Enforce Cost Limits on KQL Queries and Notebooks in the Microsoft Sentinel Data Lake</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/enforce-cost-limits-on-kql-queries-and-notebooks-in-the/ba-p/4511329</link>
      <description>&lt;P&gt;Security teams face a constant tension: run the advanced analytics you need to stay ahead of threats, or hold back to keep costs predictable. Until now, Microsoft Sentinel let you set alerts to get notified when data lake usage approached a threshold — useful for awareness, but not enough to prevent budget overruns.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;Today, we're excited to announce&amp;nbsp;&lt;STRONG&gt;threshold enforcement for KQL queries and notebooks in the Microsoft Sentinel data lake&lt;/STRONG&gt;. With this release, you can go beyond notifications and automatically block new queries and jobs when your configured usage limits are exceeded. Your analysts keep working confidently, and your budgets stay protected.&lt;/P&gt;
&lt;H2&gt;&lt;U&gt;What's new&lt;/U&gt;&lt;/H2&gt;
&lt;P&gt;Previously, the Configure Policies experience in Microsoft Sentinel let you set threshold-based alerts for data lake usage. You'd receive an email notification when consumption approached a limit — but nothing stopped usage from continuing past that point.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now, you can enable&amp;nbsp;&lt;STRONG&gt;enforcement&lt;/STRONG&gt;&amp;nbsp;on those same policies. When enforcement is turned on and a threshold is exceeded, Microsoft Sentinel blocks new queries, jobs, and notebook sessions with a clear "Limit exceeded" error. No more surprise cost spikes from runaway queries or analysts who mistakenly run heavy workloads against data lake data.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Enforcement is supported for two data lake capability categories:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Data Lake Query&lt;/STRONG&gt; — interactive KQL queries and KQL jobs (scheduled and ad hoc)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Advanced Data Insights&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&amp;nbsp;— notebook runs and notebook jobs&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;U&gt;How it works&lt;/U&gt;&lt;/H2&gt;
&lt;H4&gt;Consistent controls across KQL queries and notebooks&lt;/H4&gt;
&lt;P&gt;Cost controls are enforced consistently across Sentinel data lake workloads, regardless of how analysts access the data. The same policy applies whether someone is running a quick investigation or executing a long-running job.&lt;/P&gt;
&lt;P&gt;Controls apply to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Interactive KQL queries in the data lake explorer in the Defender portal&lt;/LI&gt;
&lt;LI&gt;KQL jobs, including scheduled and ad-hoc jobs&lt;/LI&gt;
&lt;LI&gt;Notebook queries run through the Microsoft Sentinel VS Code extension&lt;/LI&gt;
&lt;LI&gt;Notebook jobs running as background or scheduled workloads&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This ensures advanced analytics remain powerful — but predictable and governed.&lt;/P&gt;
&lt;H4&gt;Clear enforcement without disruption&lt;/H4&gt;
&lt;P&gt;Enforcement is applied at execution and validation boundaries — not retroactively. This means:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Queries or jobs already running are not interrupted.&lt;/STRONG&gt; In-flight work completes normally.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;New queries, jobs, or notebook sessions are blocked&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; once limits are exceeded.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Failures occur early &lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;(for example, during validation), avoiding wasted compute.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;From an analyst's perspective, enforcement is explicit and consistent. Clear messaging appears in query editors, job validation responses, and notebooks when limits are reached — so your team always understands what happened and what to do next.&lt;/P&gt;
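&lt;P&gt;The enforcement behavior above can be summarized as a simple gate checked at submission time. The following Python sketch is a conceptual model only; the class, method names, and numbers are hypothetical illustrations, not Sentinel's implementation:&lt;/P&gt;

```python
# Conceptual model of threshold enforcement (illustrative only).
class UsagePolicy:
    def __init__(self, threshold_gb, enforce=True):
        self.threshold_gb = threshold_gb
        self.enforce = enforce
        self.consumed_gb = 0.0

    def submit(self, estimated_gb):
        """Validate a NEW query or job before any compute is spent."""
        if self.enforce and self.consumed_gb >= self.threshold_gb:
            raise RuntimeError("Limit exceeded")  # fail early, at validation
        self.consumed_gb += estimated_gb          # accepted work runs to completion
        return "accepted"

policy = UsagePolicy(threshold_gb=100)
policy.submit(80)     # accepted; consumption is now 80 GB
policy.submit(30)     # still accepted: the gate is checked before running, not after
try:
    policy.submit(1)  # blocked: consumption (110 GB) already exceeds the threshold
except RuntimeError as err:
    print(err)        # prints "Limit exceeded"
```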
&lt;H2&gt;&lt;U&gt;How to set it up&lt;/U&gt;&lt;/H2&gt;
&lt;H4&gt;Prerequisites&lt;/H4&gt;
&lt;P&gt;To configure enforcement policies, ensure you have the necessary permissions that are outlined here: &lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/billing-monitor-costs?source=recommendations#microsoft-sentinel-cost-management-in-the-microsoft-defender-portal" target="_blank" rel="noopener"&gt;Manage and monitor costs for Microsoft Sentinel | Microsoft Learn&lt;/A&gt;.&lt;/P&gt;
&lt;H4&gt;Where to access&lt;/H4&gt;
&lt;P&gt;Navigate to&amp;nbsp;&lt;STRONG&gt;Microsoft Sentinel &amp;gt; Cost management &amp;gt; Configure Policies&lt;/STRONG&gt; in the Microsoft Defender portal (&lt;U&gt;https://security.microsoft.com&lt;/U&gt;).&lt;/P&gt;
&lt;H4&gt;Step-by-step configuration&lt;/H4&gt;
&lt;OL&gt;
&lt;LI&gt;In&amp;nbsp;&lt;STRONG&gt;Microsoft Sentinel &amp;gt; Cost management&lt;/STRONG&gt;, select&amp;nbsp;&lt;STRONG&gt;Configure Policies&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Select the policy you want to edit (Data Lake Query or Advanced Data Insights).&lt;/LI&gt;
&lt;LI&gt;Enter the&amp;nbsp;&lt;STRONG&gt;total threshold value&lt;/STRONG&gt;&amp;nbsp;for the policy.&lt;/LI&gt;
&lt;LI&gt;Enter an&amp;nbsp;&lt;STRONG&gt;alert percentage&lt;/STRONG&gt;&amp;nbsp;to receive email notifications before the threshold is reached.&lt;/LI&gt;
&lt;LI&gt;Enable the&amp;nbsp;&lt;STRONG&gt;Enforcement&lt;/STRONG&gt;&amp;nbsp;toggle to block usage after the threshold is exceeded.&lt;/LI&gt;
&lt;LI&gt;Review your settings and select&amp;nbsp;&lt;STRONG&gt;Submit&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Once enforcement is active, administrators receive advance notifications as usage approaches the threshold. If circumstances change — for example, during an active breach — you can adjust the threshold, disable enforcement temporarily, or modify the policy to give your SOC the room it needs to respond without being blocked.&lt;/P&gt;
&lt;H2&gt;&lt;U&gt;Real-world scenario: Preventing unexpected cost spikes&lt;/U&gt;&lt;/H2&gt;
&lt;P&gt;Consider a large SOC that ingests roughly 6 TB of data per day, with 1 TB going to the Sentinel Analytics tier and the remaining 5 TB going to the Sentinel data lake. Analysts are proactively hunting for threats, performing investigations, and running automation. Tier 3 analysts are also running Jupyter Notebooks against the Sentinel data lake to build graphs, execute queries, and automate incident investigation and remediation with code.&lt;/P&gt;
&lt;P&gt;Last month, the SOC experienced a cost spike after a newly hired analyst ran large, frequent queries against data lake data — mistakenly thinking it was Analytics tier. The SOC manager needs to prevent this from happening again.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With enforcement now available, the SOC manager can navigate to&amp;nbsp;&lt;STRONG&gt;Microsoft Sentinel &amp;gt; Cost management &amp;gt; Configure Policies&lt;/STRONG&gt;&amp;nbsp;in the Defender portal and set up two policies:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A &lt;STRONG&gt;Data Lake Query&lt;/STRONG&gt;&amp;nbsp;policy to cap data processing for KQL queries&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;An &lt;STRONG&gt;Advanced Data Insights&lt;/STRONG&gt;&amp;nbsp;policy to cap notebook compute consumption&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With these policies in place, the SOC manager is notified in advance when consumption approaches a threshold and can be confident the thresholds will be enforced, preventing unexpected consumption and cost. Analysts can continue their day-to-day work without worrying about accidental overages. Should a breach scenario demand more capacity, the SOC manager can quickly adjust or temporarily disable the policies — keeping the team unblocked while maintaining overall budget governance. Outside of a breach scenario, if the same analyst again scans unexpectedly large volumes of data, the policy takes effect and blocks further queries once the threshold is exceeded.&lt;/P&gt;
&lt;H2&gt;Learn more&lt;/H2&gt;
&lt;P&gt;With enforceable KQL and notebook guardrails, Microsoft Sentinel data lake helps security teams scale advanced analytics with confidence. You can control usage in production and keep investigations moving — without tradeoffs between visibility, analytics, and budget.&lt;/P&gt;
&lt;P&gt;To get started, visit the documentation:&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/billing-monitor-costs?source=recommendations#notification" target="_blank" rel="noopener"&gt;Manage and monitor costs for Microsoft Sentinel | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We'd love to hear your feedback. Share your thoughts in the comments below or reach out through your usual Microsoft support channels.&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2026 20:26:56 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/enforce-cost-limits-on-kql-queries-and-notebooks-in-the/ba-p/4511329</guid>
      <dc:creator>shubh_khandhadia</dc:creator>
      <dc:date>2026-04-15T20:26:56Z</dc:date>
    </item>
    <item>
      <title>Running KQL queries on Microsoft Sentinel data lake using API</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/running-kql-queries-on-microsoft-sentinel-data-lake-using-api/ba-p/4503128</link>
      <description>&lt;H5&gt;&lt;STRONG&gt;Co-Authors: &lt;A class="lia-internal-link lia-internal-url lia-internal-url-user" href="https://techcommunity.microsoft.com/users/zeinab%20mokhtarian%20koorabbasloo/218831" target="_blank" rel="noopener" data-lia-auto-title="Zeinab Mokhtarian Koorabbasloo" data-lia-auto-title-active="0"&gt;Zeinab Mokhtarian Koorabbasloo&lt;/A&gt; and &lt;A class="lia-internal-link lia-internal-url lia-internal-url-user" href="https://techcommunity.microsoft.com/users/matt_lowe/572591" target="_blank" rel="noopener" data-lia-auto-title="Matthew Lowe" data-lia-auto-title-active="0"&gt;Matthew Lowe&lt;/A&gt;&lt;/STRONG&gt;&lt;/H5&gt;
&lt;P&gt;As security data lakes become the backbone of modern analytics platforms, organizations need new ways to operationalize their data. While interactive tools and portals support data exploration, many real-world workflows increasingly require flexible programmatic access that enables automation, scale, and seamless integration.&lt;/P&gt;
&lt;P&gt;By running KQL (Kusto Query Language) queries on Microsoft Sentinel data lake through APIs, you can embed analytics directly into automation workflows, background services, and intelligent agents, without relying on manual query execution.&lt;/P&gt;
&lt;P&gt;In this post, we explore API-based KQL query execution, review some of the scenarios where it delivers the most value, and cover what you need to get started.&lt;/P&gt;
&lt;H4&gt;Why run KQL queries on Sentinel data lake via API?&lt;/H4&gt;
&lt;P&gt;Traditional query experiences, such as dashboards and query editors, are optimized for human interaction. APIs, on the other hand, are optimized for systems.&lt;/P&gt;
&lt;P&gt;Running KQL through an API enables:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Automation-first analytics&lt;/LI&gt;
&lt;LI&gt;Repeatable and scheduled insights&lt;/LI&gt;
&lt;LI&gt;Integration with external systems and agents&lt;/LI&gt;
&lt;LI&gt;Consistent query execution at scale&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Instead of asking &lt;EM&gt;“How do I run this query?”&lt;/EM&gt;, our customers are asking &lt;EM&gt;“How do I embed analytics into my workflow?”&lt;/EM&gt;&lt;/P&gt;
&lt;H4&gt;Scenarios where API-based KQL queries add value&lt;/H4&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Automated monitoring and alerting&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;SOC teams often want to continuously analyze data in their lake to detect anomalies, trends, or policy violations.&lt;/P&gt;
&lt;P&gt;With API-based KQL execution, they can:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Run queries as part of automated workflows and playbooks&lt;/LI&gt;
&lt;LI&gt;Evaluate query results programmatically&lt;/LI&gt;
&lt;LI&gt;Trigger downstream actions such as alerts, tickets, or notifications&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This turns KQL into a signal engine, not just an exploration tool.&lt;/P&gt;
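&lt;P&gt;As a minimal sketch of that pattern, the snippet below (illustrative only; the threshold and the shape of the result rows are assumptions, not part of the Sentinel API) shows how query results returned via the API could be evaluated in code to drive an alerting decision:&lt;/P&gt;

```python
# Illustrative sketch only: evaluate query results programmatically and decide
# whether to trigger a downstream action (alert, ticket, notification).
# The threshold and row shape are hypothetical placeholders.
def evaluate_results(rows, threshold=10):
    """Return an alert decision based on how many rows the query returned."""
    count = len(rows)
    return {"alert": count >= threshold, "count": count}

# Pretend output from a KQL query run via the API:
sample_rows = [{"DeviceName": "host-01"}] * 12
decision = evaluate_results(sample_rows)  # {"alert": True, "count": 12}
```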
&lt;OL start="2"&gt;
&lt;LI&gt;&lt;STRONG&gt;Powering intelligent agents&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;AI agents require programmatic access to data lakes to retrieve timely, relevant context for decision making. Using KQL over an API allows agents to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Dynamically query the data lake based on user intent or system context&lt;/LI&gt;
&lt;LI&gt;Retrieve aggregated or filtered results on demand&lt;/LI&gt;
&lt;LI&gt;Combine analytical results with reasoning and decision logic&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In this model, KQL acts as the analytical retrieval layer, while the agent focuses on orchestration, reasoning, and action.&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;&lt;STRONG&gt;Embedding analytics into business workflows&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Many organizations want analytics embedded directly into CI/CD and operational pipelines. Instead of exporting data or duplicating logic, they can:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Run KQL queries inline via API&lt;/LI&gt;
&lt;LI&gt;Use results as inputs to other systems&lt;/LI&gt;
&lt;LI&gt;Keep analytics logic centralized and consistent&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This reduces drift between “analytics code” and “application code.”&lt;/P&gt;
&lt;H4&gt;High-level flow: What happens when you run KQL via API&lt;/H4&gt;
&lt;P&gt;At a conceptual level, the flow looks like this:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;A client authenticates to Microsoft Sentinel data lake platform.&lt;/LI&gt;
&lt;LI&gt;The client submits a KQL query via an API.&lt;/LI&gt;
&lt;LI&gt;The query executes against data stored in the data lake.&lt;/LI&gt;
&lt;LI&gt;Results are returned in a structured, machine-readable format.&lt;/LI&gt;
&lt;LI&gt;The client processes or acts on the results.&lt;/LI&gt;
&lt;/OL&gt;
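&lt;P&gt;Steps 2 and 4 of this flow can be sketched in Python. The endpoint URL and the “csl”/“db” body fields are the ones used in the scenarios in this post; token acquisition (step 1) is assumed to have already produced a bearer token:&lt;/P&gt;

```python
# Sketch: compose the HTTP request used to submit a KQL query to the Sentinel
# data lake query endpoint. The token is assumed to come from step 1 (auth).
import json

QUERY_URL = "https://api.securityplatform.microsoft.com/lake/kql/v2/rest/query"

def build_query_request(access_token, kql, workspace_db):
    """Return the URL, headers, and JSON body for a data lake KQL query."""
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"csl": kql, "db": workspace_db})
    return QUERY_URL, headers, body

url, headers, body = build_query_request(
    "TOKEN", "SigninLogs | take 10",
    "workspace1-12345678-abcd-abcd-1234-1234567890ab",
)
```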
&lt;H4&gt;Prerequisites&lt;/H4&gt;
&lt;P&gt;To run KQL queries against the Sentinel data lake using APIs, you will need:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A user token or a service principal&lt;/LI&gt;
&lt;LI&gt;Appropriate permissions to execute queries on the Sentinel data lake. Azure RBAC roles such as Log Analytics Reader or Log Analytics Contributor on the workspace are required.&lt;/LI&gt;
&lt;LI&gt;Familiarity with KQL and API based query execution patterns&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;Scenario 1: Execute a KQL query via API within a Playbook&lt;/H4&gt;
&lt;P&gt;The following Sentinel SOAR playbook example demonstrates how data within the Sentinel data lake can be used in automation. This example uses a service principal to query the DeviceNetworkEvents logs within the Sentinel data lake to enrich an incident involving a device before taking action on it.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Within this playbook, the entities involved in the incident are retrieved, then queries are executed against the Sentinel data lake to gain insights on each host involved. In this example, the API call retrieves events from the DeviceNetworkEvents table that show network connections on the host where the remote IP originated from outside of the United States.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As this action does not have a gallery artifact within Azure Logic Apps, it must be built using the HTTP action offered within Logic Apps. This action requires the API details for the call as well as the authentication details that will be used to run it. The step that executes the query leverages the Sentinel data lake API by performing the following call: POST &lt;A class="lia-external-url" href="https://api.securityplatform.microsoft.com/lake/kql/v2/rest/query" target="_blank" rel="noopener"&gt;https://api.securityplatform.microsoft.com/lake/kql/v2/rest/query&lt;/A&gt;. The service principal being used has read permissions on the Sentinel data lake that contains the relevant details and authenticates via Entra ID OAuth when running the API call.&lt;/P&gt;
&lt;img /&gt;&lt;img /&gt;
&lt;P&gt;NOTE: When using API calls to query Sentinel data lake, use 4500ebfb-89b6-4b14-a480-7f749797bfcd/.default as the scope/audience when retrieving a token for the service principal. This GUID is associated with the query service for Sentinel data lake.&lt;/P&gt;
&lt;P&gt;The body of the query is the following:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
"csl": "DeviceNetworkEvents | where TimeGenerated &amp;gt;= ago(30d) | where DeviceName has '' | where ActionType in (\"ConnectionSuccess\", \"ConnectionAttempted\", \"InboundConnectionAccepted\") | extend GeoInfo = geo_info_from_ip_address(RemoteIP) | extend Country = tostring(GeoInfo.country), State = tostring(GeoInfo.state), City = tostring(GeoInfo.city) | where Country != 'United States' and RemoteIP !has '127.0.0.1' | project TimeGenerated, DeviceName, ActionType, RemoteIP, RemotePort, RemoteUrl, City, State, Country, InitiatingProcessFileName | order by TimeGenerated desc | top 2 by DeviceName", "db": "WORKSPACENAMEHERE-WORKSPACEIDHERE"
}
&lt;/LI-CODE&gt;
&lt;P&gt;Within this body, the query and workspace are defined. “csl” represents the query to run against the Sentinel data lake and “db” represents the Sentinel workspace/lake; its value is the workspace name and workspace ID joined by a hyphen. Both of these values can be found on the workspace overview blade within Azure.&lt;/P&gt;
&lt;P&gt;NOTE: The query string must be a single line in the JSON body; literal line breaks inside a JSON string are not valid JSON.&lt;/P&gt;
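&lt;P&gt;One way to satisfy this single-line requirement (a sketch, not the only approach) is to author the query multi-line for readability and collapse the whitespace before serializing the body:&lt;/P&gt;

```python
# Collapse a multi-line KQL query to a single line before building the JSON
# body, since literal line breaks are not valid inside a JSON string.
import json

kql = """
DeviceNetworkEvents
| where TimeGenerated >= ago(30d)
| take 2
"""
one_line = " ".join(kql.split())  # newlines and indentation become single spaces
body = json.dumps({"csl": one_line, "db": "WORKSPACENAMEHERE-WORKSPACEIDHERE"})
```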
&lt;P&gt;With this, initial investigative querying via the Sentinel data lake has been done the moment the incident is triggered, allowing the responding SOC analyst to expedite their investigation and validate that the automated action of disabling the account was justified. For this playbook, the results gathered from the Sentinel data lake were placed into a comment and added to the incident within Defender, allowing SOC analysts to quickly review relevant details when beginning their work:&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;Scenario 2: Execute a KQL query via API in code&lt;/H4&gt;
&lt;P&gt;The following Python example demonstrates how to use a service principal to execute a KQL query on the Sentinel data lake via API. This example is provided for illustration purposes, but you can also call the API directly via common API tools. Within the code, the query and workspace are defined. “csl” represents the query to run against the Sentinel data lake and “db” represents the Sentinel workspace/lake. This value is a combination of the workspace name – workspace ID. Both of these values can be found on the workspace overview blade within Azure.&lt;/P&gt;
&lt;P&gt;You also need to use a token or a service principal.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;import requests
import msal

# ====== SPN / Entra app settings ======
TENANT_ID = ""
CLIENT_ID = ""
CLIENT_SECRET = ""

# Token authority
AUTHORITY = f"https://login.microsoftonline.com/{TENANT_ID}"

# ---- IMPORTANT ----
# Most APIs use the resource + "/.default" pattern for client-credentials.
# Try this first:
SCOPE = ["4500ebfb-89b6-4b14-a480-7f749797bfcd/.default"]

# ====== KQL query payload ======
KQL_QUERY = {
    "csl": "SigninLogs| take 10",
    "db": "workspace1-12345678-abcd-abcd-1234-1234567890ab",
    "properties": {
        "Options": {
            "servertimeout": "00:04:00",
            "queryconsistency": "strongconsistency",
            "query_language": "kql",
            "request_readonly": False,
            "request_readonly_hardline": False
        }
    }
}
# ====== Acquire token using client credentials ======
app = msal.ConfidentialClientApplication(
    client_id=CLIENT_ID,
    authority=AUTHORITY,
    client_credential=CLIENT_SECRET
)

result = app.acquire_token_for_client(scopes=SCOPE)

if "access_token" not in result:
    raise RuntimeError(
        f"Token acquisition failed: {result.get('error')} - {result.get('error_description')}"
    )

access_token = result["access_token"]

# ====== Call the KQL API ======
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json"
}

url = "https://api.securityplatform.microsoft.com/lake/kql/v2/rest/query"  # same endpoint
response = requests.post(url, headers=headers, json=KQL_QUERY)

if response.status_code == 200:
    print("Query Results:")
    print(response.json())
else:
    print(f"Error {response.status_code}: {response.text}")
&lt;/LI-CODE&gt;
&lt;P&gt;In summary, you need the following parameters in your API call:&lt;/P&gt;
&lt;P&gt;Request URI: https://api.securityplatform.microsoft.com/lake/kql/v2/rest/query&lt;/P&gt;
&lt;P&gt;Method: POST&lt;/P&gt;
&lt;P&gt;Sample payload:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;{
    "csl": " SigninLogs | take 10",
    "db": "workspace1-12345678-abcd-abcd-1234-1234567890ab",

 }
&lt;/LI-CODE&gt;
&lt;H4&gt;Limitations and considerations&lt;/H4&gt;
&lt;P&gt;Keep the following considerations in mind when planning to execute KQL queries on a data lake:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Service principal permissions&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;When using a service principal, Azure RBAC roles can be assigned at the Sentinel workspace level. Entra ID roles and XDR unified RBAC roles are not supported for this scenario. Alternatively, user tokens with Entra ID roles can be used.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Result size limits&lt;/STRONG&gt;&lt;BR /&gt;Queries are subject to limits on execution time and response size. Review Microsoft Sentinel data lake &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/sentinel-lake-service-limits#service-parameters-and-limits-for-kql-queries-in-the-lake-tier" target="_blank" rel="noopener"&gt;query service limits&lt;/A&gt; when designing your workflows.&lt;/LI&gt;
&lt;/UL&gt;
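&lt;P&gt;Because queries are subject to execution-time and response-size limits, it is worth wrapping API calls in a retry with backoff. The helper below is a generic client-side pattern (an assumption for illustration, not a documented Sentinel API feature):&lt;/P&gt;

```python
# Generic retry-with-exponential-backoff wrapper for a query callable.
# run_query is any zero-argument function that performs the API call and
# raises on failure (e.g. a throttling or timeout error).
import time

def run_with_retry(run_query, attempts=3, base_delay=1.0):
    """Call run_query, retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return run_query()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))
```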
&lt;H4&gt;Summary&lt;/H4&gt;
&lt;P&gt;Running KQL queries on Sentinel data lake via APIs unlocks a new class of scenarios, from intelligent agents to fully automated analytics pipelines. By decoupling query execution from user interfaces, customers gain flexibility, scalability, and control over how insights are generated and consumed.&lt;/P&gt;
&lt;P&gt;If you’re already using KQL for interactive analysis, API access is the natural next step toward production-grade analytics.&lt;/P&gt;
&lt;P&gt;Happy hunting!&lt;/P&gt;
&lt;H4&gt;Resources&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;Run KQL queries on Sentinel data lake: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/kql-queries" target="_blank" rel="noopener"&gt;Run KQL queries against the Microsoft Sentinel data lake - Microsoft Security | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Service parameters and limits: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/sentinel-lake-service-limits#service-parameters-and-limits-for-kql-queries-in-the-lake-tier" target="_blank" rel="noopener"&gt;Microsoft Sentinel data lake service limits - Microsoft Security | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 14 Apr 2026 17:15:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/running-kql-queries-on-microsoft-sentinel-data-lake-using-api/ba-p/4503128</guid>
      <dc:creator>Zeinab Mokhtarian Koorabbasloo</dc:creator>
      <dc:date>2026-04-14T17:15:33Z</dc:date>
    </item>
    <item>
      <title>Microsoft Sentinel data federation: Expand visibility while preserving governance</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-sentinel-data-federation-expand-visibility-while/ba-p/4511258</link>
      <description>&lt;P&gt;Security data volumes are growing faster than ever, but visibility across the entire digital estate hasn’t kept pace. As organizations expand across cloud, hybrid, and SaaS environments, critical security-relevant data is increasingly stored across multiple data stores due to governance and compliance requirements.&lt;/P&gt;
&lt;P&gt;Microsoft understands this reality. &lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/data-federation-overview" target="_blank"&gt;Microsoft Sentinel data federation&lt;/A&gt;, now in public preview, is designed to meet customers where their data already lives, while preserving governance. Powered by Microsoft Fabric, customers can federate data from Microsoft Fabric, Azure Data Lake Storage (ADLS) Gen 2, and Azure Databricks into the Sentinel data lake—without copying or duplicating the data. Security teams can analyze data in place, unifying detections, investigations, and hunting across a broader digital estate, while data owners retain full ownership and policies remain intact.&lt;/P&gt;
&lt;H1&gt;Sentinel data federation benefits&lt;/H1&gt;
&lt;P&gt;Data federation ensures security data remains at its source while appearing seamlessly alongside native Sentinel data in the Sentinel data lake.&lt;/P&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI&gt;This allows security teams to work with governed data confidently—without duplicating data or disrupting existing data ownership and compliance models.&lt;/LI&gt;
&lt;LI&gt;Using familiar Sentinel tools such as KQL hunting, notebooks, and custom graphs, teams can correlate signals, investigate incidents, and hunt across federated and ingested data in a unified experience. Analysts can seamlessly connect signals across domains and accelerate investigations.&lt;/LI&gt;
&lt;LI&gt;By running analytics on federated data first, customers can evaluate which datasets consistently deliver security value. Over time, high‑value data can be ingested into Sentinel data lake to unlock deeper detections, automation, and AI‑driven insights.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Sentinel data federation allows security teams to start broad, stay governed, and scale security analytics with clarity.&lt;/P&gt;
&lt;H1&gt;Customer use cases enabled by Sentinel data federation&lt;/H1&gt;
&lt;H3&gt;Cross-domain threat hunting across the enterprise&lt;/H3&gt;
&lt;P&gt;With Sentinel data federation, Sentinel becomes the orchestration layer for advanced security analytics, not just a repository for ingested data.&lt;/P&gt;
&lt;P&gt;By querying external data sources in place, security teams can:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Run single KQL threat hunts that span Sentinel-native tables and federated data sources.&lt;/LI&gt;
&lt;LI&gt;Correlate security signals with business, identity, fraud, and application telemetry that may never be ingested into Sentinel.&lt;/LI&gt;
&lt;LI&gt;Perform investigations across years of historical data without migrating or reshaping it.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This enables SOC teams to move beyond siloed investigations and uncover patterns that only emerge when data is analyzed across the full digital estate.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;“Microsoft leads the market in cloud‑native SIEM. Now with support for data federation, Sentinel is enabling security teams to analyze security data wherever it lives, preserving governance. For customers and managed security providers alike, this marks a pivotal shift in what a modern cloud SIEM can enable."&lt;/P&gt;
&lt;P&gt;—&amp;nbsp;&lt;STRONG&gt;Micah Heaton |&lt;/STRONG&gt; &lt;STRONG&gt;Strategy Leader at BlueVoyant&lt;/STRONG&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H3&gt;AI-ready tooling for advanced security analytics&lt;/H3&gt;
&lt;P&gt;By extending Sentinel’s security knowledge layer across federated data, customers can:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Leverage Sentinel custom graphs and AI/MCP tools to run analytics across federated and ingested data together.&lt;/LI&gt;
&lt;LI&gt;Deliver immediate investigative value to SOC teams by enriching incidents with broader context.&lt;/LI&gt;
&lt;LI&gt;Enable data scientists and advanced analysts to perform iterative threat discovery without copying or staging massive datasets into the Sentinel data lake.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This approach allows customers to apply advanced analytics and AI techniques to security problems at scale, while preserving governance and avoiding unnecessary data movement.&lt;/P&gt;
&lt;H1&gt;Data federation as a path to deeper Sentinel value&lt;/H1&gt;
&lt;P&gt;Data federation is not a replacement for ingestion; it’s a &lt;STRONG&gt;governance-first approach&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;Many customers follow a natural progression:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Federate&lt;/STRONG&gt; data to explore and investigate&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Identify high-value signals&lt;/STRONG&gt; through real investigations&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Ingest valuable security data&lt;/STRONG&gt; into Sentinel data lake over time&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Unlock detections, automation, and AI-powered insights &lt;/STRONG&gt;at scale&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;This approach helps customers realize Sentinel value earlier while setting them up for long-term success.&lt;/P&gt;
&lt;H1&gt;Learn more&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://www.microsoft.com/en-us/security/business/siem-and-xdr/microsoft-sentinel/?msockid=302788a71aaf677a36549fae1bf06650" target="_blank"&gt;Microsoft Sentinel—AI-Ready Platform&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://www.youtube.com/playlist?list=PL3ZTgFEc7LyvM-OlDTB8BDV_aARfmBMG9" target="_blank"&gt;Microsoft Sentinel data lake - YouTube&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/data-federation-overview" target="_blank"&gt;Data federation overview in Microsoft Sentinel data lake&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/using-data-federation" target="_blank"&gt;Use federated data sources in Microsoft Sentinel&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://youtu.be/pDA80Gr-0xc?si=GZzwafxpho8s0dy4" target="_blank"&gt;Sentinel data federation ninja training&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/blog/microsoftsentinelblog/microsoft-sentinel-data-lake-faq/4457728" target="_blank"&gt;Microsoft Sentinel data lake FAQ blog&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 14 Apr 2026 16:11:29 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-sentinel-data-federation-expand-visibility-while/ba-p/4511258</guid>
      <dc:creator>chaitra_satish</dc:creator>
      <dc:date>2026-04-14T16:11:29Z</dc:date>
    </item>
    <item>
      <title>How to Ingest Microsoft Intune Logs into Microsoft Sentinel</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-to-ingest-microsoft-intune-logs-into-microsoft-sentinel/ba-p/4508562</link>
      <description>&lt;P&gt;For many organizations using Microsoft Intune to manage devices, integrating Intune logs into Microsoft Sentinel is an essential for security operations (Incorporate the device into the SEIM). By routing Intune’s device management and compliance data into your central SIEM, you gain a unified view of endpoint events and can set up alerts on critical Intune activities e.g. devices falling out of compliance or policy changes. This unified monitoring helps security and IT teams detect issues faster, correlate Intune events with other security logs for threat hunting and improve compliance reporting. We’re publishing these best practices to help unblock common customer challenges in configuring Intune log ingestion. In this step-by-step guide, you’ll learn how to successfully send Intune logs to Microsoft Sentinel, so you can fully leverage Intune data for enhanced security and compliance visibility.&lt;/P&gt;
&lt;H2&gt;Prerequisites and Overview&lt;/H2&gt;
&lt;P&gt;Before configuring log ingestion, ensure the following prerequisites are in place:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Microsoft Sentinel Enabled Workspace&lt;/STRONG&gt;: A Log Analytics Workspace with Microsoft Sentinel enabled; For information regarding setting up a workspace and onboarding Microsoft Sentinel, see: &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/sentinel/quickstart-onboard?tabs=defender-portal" target="_blank" rel="noopener"&gt;Onboard Microsoft Sentinel - Log Analytics workspace overview&lt;/A&gt;. Microsoft Sentinel is now available in the Defender Portal, connect your Microsoft Sentinel Workspace to the Defender Portal:&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/unified-secops/microsoft-sentinel-onboard#unified-security-operations-prerequisites" target="_blank" rel="noopener"&gt;Connect Microsoft Sentinel to the Microsoft Defender portal - Unified security operations&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Intune Administrator permissions:&lt;/STRONG&gt; You need appropriate rights to configure Intune &lt;STRONG&gt;Diagnostic Settings&lt;/STRONG&gt;. For information, see: &lt;A class="lia-external-url" href="https://learn.microsoft.com/entra/identity/role-based-access-control/permissions-reference" target="_blank" rel="noopener"&gt;Microsoft Entra built-in roles - Intune Administrator&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Log Analytics Contributor role:&lt;/STRONG&gt; The account configuring diagnostics should have permission to write to the Log Analytics workspace. For more information on the different roles, and what they can do, go to &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/azure-monitor/logs/manage-access" target="_blank" rel="noopener"&gt;Manage access to log data and workspaces in Azure Monitor&lt;/A&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Intune diagnostic logging enabled:&lt;/STRONG&gt; Ensure that Intune diagnostic settings are configured to send logs to Azure Monitor / Log Analytics, and that devices and users are enrolled in Intune so that relevant management and compliance events are generated. For more information, see: &lt;A class="lia-external-url" href="https://learn.microsoft.com/intune/intune-service/fundamentals/review-logs-using-azure-monitor" target="_blank" rel="noopener"&gt;Send Intune log data to Azure Storage, Event Hubs, or Log Analytics&lt;/A&gt;.&lt;/P&gt;
&lt;H2&gt;Configure Intune to Send Logs to Microsoft Sentinel&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;Sign in to the &lt;A href="https://go.microsoft.com/fwlink/?linkid=2109431" target="_blank" rel="noopener"&gt;Microsoft Intune admin center&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;Select&amp;nbsp;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Reports&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; &amp;gt; &lt;/SPAN&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Diagnostics settings&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;. If it’s the first time here, you may be prompted to “Turn on” diagnostic settings for Intune; enable it if so. Then click “+ Add diagnostic setting” to create a new setting:&lt;BR /&gt;&lt;/SPAN&gt;&lt;img&gt;&lt;EM&gt;Microsoft Intune Diagnostics settings page – Add diagnostic settings.&lt;/EM&gt;&lt;/img&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;BR /&gt;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;STRONG&gt;Select Intune Log Categories.&lt;/STRONG&gt; In the “Diagnostic setting” configuration page, give the setting a name (e.g. “&lt;EM&gt;Microsoft Sentinel Intune Logs Demo&lt;/EM&gt;”). Under &lt;STRONG&gt;Logs to send&lt;/STRONG&gt;, you’ll see checkboxes for each Intune log category. Select the categories you want to forward. For comprehensive monitoring, check &lt;STRONG&gt;AuditLogs&lt;/STRONG&gt;, &lt;STRONG&gt;OperationalLogs&lt;/STRONG&gt;, &lt;STRONG&gt;DeviceComplianceOrg&lt;/STRONG&gt;, and &lt;STRONG&gt;Devices&lt;/STRONG&gt;. The selected log categories will be sent to a table in the Microsoft Sentinel Workspace.&lt;BR /&gt;&lt;/SPAN&gt;&lt;img&gt;&lt;EM&gt;Microsoft Intune Diagnostics settings page – Log categories.&lt;/EM&gt;&lt;/img&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Configure Destination Details – Microsoft Sentinel Workspace.&lt;/STRONG&gt; Under &lt;STRONG&gt;Destination details&lt;/STRONG&gt; on the same page, select your &lt;STRONG&gt;Azure Subscription&lt;/STRONG&gt; then select the &lt;STRONG&gt;Microsoft Sentinel workspace.&lt;BR /&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;img&gt;&lt;EM&gt;Microsoft Intune Diagnostics settings page - Destination details.&lt;/EM&gt;&lt;/img&gt;&lt;STRONG&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;STRONG&gt;Save the Diagnostic Setting.&lt;/STRONG&gt; After you click save, the Microsoft Intune Logs will &amp;nbsp;will be streamed to 4 tables which are in the Analytics Tier.&amp;nbsp; For pricing on the analytic tier check here: &lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/sentinel/billing?tabs=simplified%2Ccommitment-tiers#understand-your-microsoft-sentinel-bill" target="_blank" rel="noopener"&gt;Plan costs and understand pricing and billing&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;img&gt;&lt;EM&gt;Microsoft Intune Diagnostics settings page – Saved Diagnostic settings.&lt;/EM&gt;&lt;/img&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Verify Data in Microsoft Sentinel&lt;/STRONG&gt;.&lt;EM&gt; &lt;/EM&gt;
&lt;P&gt;After configuring Intune to send diagnostic data to a Microsoft Sentinel Workspace, it’s crucial to verify that the Intune logs are successfully flowing into Microsoft Sentinel. You can do this by checking specific Intune log tables both in &lt;STRONG&gt;the Microsoft 365 Defender portal &lt;/STRONG&gt;and in the&lt;STRONG&gt; Azure Portal&lt;/STRONG&gt;. The key tables to verify are:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;IntuneAuditLogs&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;IntuneOperationalLogs&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;IntuneDeviceComplianceOrg&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;IntuneDevices&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-border-style-solid" border="1" style="width: 1014px; height: 655.859px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr style="height: 52.125px;"&gt;&lt;td style="height: 52.125px;"&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;Microsoft 365 Defender Portal (Unified)&lt;/STRONG&gt;&lt;STRONG&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 52.125px;"&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;Azure Portal (Microsoft Sentinel)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 603.734px;"&gt;&lt;td style="height: 603.734px;"&gt;
&lt;P&gt;&lt;STRONG&gt;1. &lt;/STRONG&gt;Open Advanced Hunting: Sign in to the &lt;STRONG&gt;&lt;A href="https://security.microsoft.com" target="_blank" rel="noopener"&gt;https://security.microsoft.com&lt;/A&gt;&lt;/STRONG&gt; (the unified portal). Navigate to &lt;STRONG&gt;Advanced Hunting&lt;/STRONG&gt;. &lt;STRONG&gt;&lt;BR /&gt;&lt;/STRONG&gt;– &lt;EM&gt;This opens the unified query editor where you can search across Microsoft Defender data and any connected Sentinel data.&lt;/EM&gt;&lt;STRONG&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;2. Find Intune Tables&lt;/STRONG&gt;: In the Advanced hunting Schema pane (on the left side of the query editor), scroll down past the &lt;STRONG&gt;Microsoft Sentinel Tables&lt;/STRONG&gt;. Under the &lt;STRONG&gt;LogManagement&lt;/STRONG&gt; section, look for &lt;STRONG&gt;IntuneAuditLogs&lt;/STRONG&gt;, &lt;STRONG&gt;IntuneOperationalLogs&lt;/STRONG&gt;, &lt;STRONG&gt;IntuneDeviceComplianceOrg&lt;/STRONG&gt;, and &lt;STRONG&gt;IntuneDevices&lt;/STRONG&gt; in the list.&lt;/P&gt;
&lt;img /&gt;&lt;EM&gt;Microsoft Sentinel in Defender Portal – Tables&lt;/EM&gt;&lt;/td&gt;&lt;td style="height: 603.734px;"&gt;
&lt;P&gt;&lt;STRONG&gt;1. Navigate to&amp;nbsp;&lt;EM&gt;Logs&lt;/EM&gt;:&lt;/STRONG&gt; Sign in to the &lt;A href="https://portal.azure.com" target="_blank" rel="noopener"&gt;https://portal.azure.com&lt;/A&gt; and open &lt;STRONG&gt;Microsoft Sentinel&lt;/STRONG&gt;. Select your Sentinel workspace, then click &lt;STRONG&gt;Logs&lt;/STRONG&gt; (under &lt;STRONG&gt;General&lt;/STRONG&gt;).&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;2. Find Intune Tables:&lt;/STRONG&gt; In the Logs &lt;STRONG&gt;query editor&lt;/STRONG&gt; that opens, you’ll see a &lt;STRONG&gt;Schema&lt;/STRONG&gt; or tables list on the left. If it’s collapsed, click &lt;STRONG&gt;&amp;gt;&amp;gt;&lt;/STRONG&gt; to expand it. Scroll down to find &lt;STRONG&gt;LogManagement&lt;/STRONG&gt; and &lt;STRONG&gt;expand&lt;/STRONG&gt; it; look for these Intune-related tables: &lt;STRONG&gt;IntuneAuditLogs&lt;/STRONG&gt;, &lt;STRONG&gt;IntuneOperationalLogs&lt;/STRONG&gt;, &lt;STRONG&gt;IntuneDeviceComplianceOrg&lt;/STRONG&gt;, and &lt;STRONG&gt;IntuneDevices&lt;/STRONG&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;img /&gt;&lt;EM&gt;Microsoft Sentinel in Azure Portal – Tables&lt;/EM&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;STRONG&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt; &lt;/EM&gt;&lt;STRONG&gt;Querying Intune Log Tables in Sentinel&lt;/STRONG&gt; – &lt;EM&gt;Once the tables are present, use Kusto Query Language (KQL) in either portal to view and analyze Intune data:&lt;BR /&gt;&lt;BR /&gt;&lt;/EM&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-border-style-solid" border="1" style="width: 96.8269%; height: 918.766px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr style="height: 39px;"&gt;&lt;td style="height: 39px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Microsoft 365 Defender Portal (Unified)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 39px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Portal (Microsoft Sentinel)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 879.766px;"&gt;&lt;td style="height: 879.766px;"&gt;
&lt;P&gt;In the&amp;nbsp;&lt;STRONG&gt;Advanced Hunting&lt;/STRONG&gt; page, ensure the query editor is visible (select &lt;STRONG&gt;New query&lt;/STRONG&gt; if needed). Run a simple KQL query such as:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;IntuneDevice
| take 5&lt;/LI-CODE&gt;
&lt;P&gt;Click&amp;nbsp;&lt;STRONG&gt;Run query&lt;/STRONG&gt; to display sample Intune device records. If results are returned, it confirms that Intune data is being ingested successfully. Note that querying across Microsoft Sentinel data in the unified Advanced Hunting view requires at least the &lt;STRONG&gt;Microsoft Sentinel Reader&lt;/STRONG&gt; role.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;&lt;EM&gt;Microsoft Sentinel in the Microsoft Defender Portal - Advanced hunting query.&lt;/EM&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 879.766px;"&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the&amp;nbsp;&lt;STRONG&gt;Azure Logs&lt;/STRONG&gt; blade, use the query editor to run a simple KQL query such as:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;IntuneDevice
| take 5&lt;/LI-CODE&gt;
&lt;P&gt;Select&amp;nbsp;&lt;STRONG&gt;Run&lt;/STRONG&gt; to view the results in a table showing sample Intune device data. If results appear, your Intune logs are being collected successfully. You can select any record to view full event details and use KQL to explore or filter further - for example, by querying IntuneDeviceComplianceOrg to identify non-compliant devices, adjusting the query as needed.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
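&lt;P&gt;For instance, a quick non-compliance check might look like the following. This is a minimal sketch; it uses the same ComplianceState column as the detection logic later in this post, but adjust the time range and columns to your workspace:&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;IntuneDeviceComplianceOrg
| where TimeGenerated &amp;gt; ago(7d)
| where ComplianceState != "Compliant"
| take 10&lt;/LI-CODE&gt;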
&lt;img /&gt;&lt;EM&gt;Microsoft Sentinel in the Azure Portal - Sentinel logs query.&lt;/EM&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;EM&gt;&lt;BR /&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Once Microsoft Intune logs are flowing into Microsoft Sentinel, the real value comes from transforming that raw device and audit data into actionable security signals.&amp;nbsp;&lt;/STRONG&gt;
&lt;P&gt;To achieve this, set up detection rules that continuously analyze the Intune logs and automatically flag risky or suspicious behavior. In practice, this means creating &lt;STRONG&gt;custom detection rules&lt;/STRONG&gt; in the Microsoft Defender portal (part of the unified XDR experience; see &lt;A href="https://learn.microsoft.com/en-us/defender-xdr/custom-detection-rules" target="_blank" rel="noopener"&gt;Create and manage custom detection rules | Microsoft Learn&lt;/A&gt;) and &lt;STRONG&gt;scheduled analytics rules&lt;/STRONG&gt; in Microsoft Sentinel, in either the Azure Portal or the unified Defender portal (see &lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/create-analytics-rules?tabs=azure-portal" target="_blank" rel="noopener"&gt;Create scheduled analytics rules in Microsoft Sentinel | Microsoft Learn&lt;/A&gt;).&lt;/P&gt;
&lt;P&gt;These detection rules will continuously monitor your Intune telemetry – &lt;STRONG&gt;tracking device compliance status, enrollment activity, and administrative actions&lt;/STRONG&gt; – and will &lt;STRONG&gt;raise alerts whenever they detect suspicious or out-of-policy events&lt;/STRONG&gt;. For example, you can be alerted if a large number of devices fall out of compliance, if an unusual spike in enrollment failures occurs, or if an Intune policy is modified by an unexpected account. Each alert generated by these rules becomes an incident in Microsoft Sentinel (and in the Defender portal’s unified incident queue), enabling your security team to investigate and respond through the standard SOC workflow. In turn, this &lt;STRONG&gt;converts raw Intune log data into high-value security insights&lt;/STRONG&gt;: proactive detection of potential issues, faster investigation by pivoting on the enriched Intune data in each incident, and even automated response across your endpoints (for instance, by triggering &lt;STRONG&gt;playbooks&lt;/STRONG&gt; or other &lt;STRONG&gt;automated remediation&lt;/STRONG&gt; actions when an alert fires).&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Use this detection logic to create a detection rule:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;IntuneDeviceComplianceOrg
| where TimeGenerated &amp;gt; ago(24h)
| where ComplianceState != "Compliant"
| summarize NonCompliantCount = count() by DeviceName, TimeGenerated
| where NonCompliantCount &amp;gt; 3&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;BR /&gt;Additional Tips:&lt;/STRONG&gt; After confirming data ingestion and setting up alerts, you can &lt;STRONG&gt;leverage other Microsoft Sentinel features&lt;/STRONG&gt; to get more value from your Intune logs. For example:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Workbooks for Visualization:&lt;/STRONG&gt; Create custom &lt;STRONG&gt;workbooks&lt;/STRONG&gt; to build dashboards for Intune data (or check if community-contributed Intune workbooks are available). This can help you monitor device compliance trends and Intune activities visually.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Hunting and Queries:&lt;/STRONG&gt; Use &lt;STRONG&gt;advanced hunting&lt;/STRONG&gt; (KQL queries) to proactively search through Intune logs for suspicious activities or trends. The unified Defender portal’s Advanced Hunting page can query both Sentinel (Intune logs) and Defender data together, enabling &lt;STRONG&gt;correlation across Intune and other security data&lt;/STRONG&gt;. For instance, you might join IntuneDevices data with Azure AD sign-in logs to investigate a device associated with risky sign-ins.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Incident Management:&lt;/STRONG&gt; Leverage Sentinel’s &lt;STRONG&gt;Incidents&lt;/STRONG&gt; view (in Azure portal) or the unified &lt;STRONG&gt;Incidents&lt;/STRONG&gt; queue in Defender to investigate alerts triggered by your new rules. Incidents in Sentinel (whether created in Azure or Defender portal) will appear in the connected portal, allowing your security operations team to manage Intune-related alerts just like any other security incident.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Built-in Rules &amp;amp; Content:&lt;/STRONG&gt; Remember that Microsoft Sentinel provides many built-in &lt;STRONG&gt;Analytics Rule templates&lt;/STRONG&gt; and &lt;STRONG&gt;Content Hub&lt;/STRONG&gt; solutions. While there isn’t a native pre-built Intune content pack as of now, you can use general Sentinel features to monitor Intune data.&lt;/LI&gt;
&lt;/UL&gt;
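&lt;P&gt;As a sketch of the correlation idea above, the following query joins risky Entra ID sign-ins with Intune device records. It is illustrative only: RiskLevelDuringSignIn and DeviceDetail.displayName are standard SigninLogs schema fields, but verify the join key and projected columns against your workspace before relying on it.&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;SigninLogs
| where TimeGenerated &amp;gt; ago(7d)
| where RiskLevelDuringSignIn == "high"
// extract the device display name from the dynamic DeviceDetail column
| extend DeviceName = tostring(DeviceDetail.displayName)
| where isnotempty(DeviceName)
// keep only the latest Intune record per device, then correlate
| join kind=inner (
    IntuneDevices
    | summarize arg_max(TimeGenerated, *) by DeviceName
) on DeviceName
| project TimeGenerated, UserPrincipalName, DeviceName&lt;/LI-CODE&gt;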
&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Frequently Asked Questions&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;What should I do if I’ve set everything up but don’t see logs in Sentinel? Run through these checks:
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Check Diagnostic Settings&lt;/STRONG&gt;
&lt;OL&gt;
&lt;LI&gt;Go to the Microsoft Intune admin center → Reports → Diagnostic settings.&lt;/LI&gt;
&lt;LI&gt;Make sure the setting is turned ON and sending the right log categories to the correct Microsoft Sentinel workspace.&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Confirm the Right Workspace&lt;/STRONG&gt;
&lt;OL&gt;
&lt;LI&gt;Double-check that the Azure subscription and Microsoft Sentinel workspace are selected.&lt;/LI&gt;
&lt;LI&gt;If you have multiple tenants/directories, make sure you’re in the right one.&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Verify Permissions&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Make Sure Logs Are Being Generated&lt;/STRONG&gt;
&lt;OL&gt;
&lt;LI&gt;If no devices are enrolled or no actions have been taken, there may be nothing to log yet.&lt;/LI&gt;
&lt;LI&gt;Try enrolling a device or changing a policy to trigger logs.&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Check Your Queries&lt;/STRONG&gt;
&lt;OL&gt;
&lt;LI&gt;Make sure you’re querying the correct workspace and time range in Microsoft Sentinel.&lt;/LI&gt;
&lt;LI&gt;Try a direct query like:&lt;BR /&gt;&lt;LI-CODE lang="json"&gt;IntuneAuditLogs | take 5&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Still Nothing?&lt;/STRONG&gt;
&lt;OL&gt;
&lt;LI&gt;Try deleting and re-adding the diagnostic setting.&lt;/LI&gt;
&lt;LI&gt;Most issues come down to permissions or selecting the wrong workspace.&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;How long are Intune logs retained, and how can I keep them longer?&lt;BR /&gt;
&lt;OL&gt;
&lt;LI&gt;The &lt;STRONG&gt;analytics tier&lt;/STRONG&gt; keeps data in the &lt;STRONG&gt;interactive retention&lt;/STRONG&gt; state for &lt;STRONG&gt;90 days&lt;/STRONG&gt; by default, extensible to up to two years. The interactive state costs more, but lets you run unlimited, high-performance queries at no per-query charge:&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/azure/sentinel/log-plans#analytics-tier" target="_blank" rel="noopener"&gt;Log retention tiers in Microsoft Sentinel&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;We hope this helps you successfully connect your resources and ingest Intune logs end to end into Microsoft Sentinel. If you have any questions, leave a comment below or reach out to us on X &lt;A class="lia-external-url" href="https://aka.ms/MSFTSecSuppTeam" target="_blank" rel="noopener"&gt;@MSFTSecSuppTeam&lt;/A&gt;!&lt;/P&gt;</description>
      <pubDate>Fri, 10 Apr 2026 16:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-to-ingest-microsoft-intune-logs-into-microsoft-sentinel/ba-p/4508562</guid>
      <dc:creator>PaulineMbabu</dc:creator>
      <dc:date>2026-04-10T16:00:00Z</dc:date>
    </item>
    <item>
      <title>Estimate Microsoft Sentinel Costs with Confidence Using the New Sentinel Cost Estimator</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/estimate-microsoft-sentinel-costs-with-confidence-using-the-new/ba-p/4507062</link>
      <description>&lt;P&gt;One of the first questions teams ask when evaluating Microsoft Sentinel is simple: what will this actually cost? Today, many customers and partners estimate Sentinel costs using the Azure Pricing Calculator, but it doesn’t provide the Sentinel-specific usage guidance needed to understand how each Sentinel meter contributes to overall spend. As a result, it can be hard to produce accurate, trustworthy estimates, especially early on, when you may not know every input upfront. To make these conversations easier and budgets more predictable, Microsoft is introducing the new Sentinel Cost Estimator (public preview) for Microsoft customers and partners.&lt;/P&gt;
&lt;P&gt;The Sentinel Cost Estimator gives organizations better visibility into spend and more confidence in budgeting as they operate at scale.&lt;/P&gt;
&lt;P&gt;You can access the Microsoft Sentinel Cost Estimator here: &lt;A href="https://microsoft.com/en-us/security/pricing/microsoft-sentinel/cost-estimator" target="_blank" rel="noopener"&gt;https://microsoft.com/en-us/security/pricing/microsoft-sentinel/cost-estimator&lt;/A&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;What the Sentinel Cost Estimator does &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The new Sentinel Cost Estimator makes pricing transparent and predictable for Microsoft customers and partners.&amp;nbsp;&amp;nbsp;The Sentinel Cost Estimator helps you understand what drives costs at a meter level and ensures your estimates are accurate with step-by-step guidance.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can model multi-year estimates with built-in projections for up to three years, making it easy to anticipate data growth, plan for future spend, and avoid budget surprises as your security operations mature. Estimates can be easily shared with finance and security teams to support better budgeting and planning.&lt;/P&gt;
&lt;H2&gt;When to Use the Sentinel Cost Estimator&lt;/H2&gt;
&lt;P&gt;Use the Sentinel Cost Estimator to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Model ingestion growth over time as new data sources are onboarded&lt;/LI&gt;
&lt;LI&gt;Explore tradeoffs between Analytics and Data Lake storage tiers&lt;/LI&gt;
&lt;LI&gt;Understand the impact of retention requirements on total spend&lt;/LI&gt;
&lt;LI&gt;Estimate compute usage for notebooks and advanced queries&lt;/LI&gt;
&lt;LI&gt;Project costs across a multi‑year deployment timeline&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;For broader Azure infrastructure cost planning, the Azure Pricing Calculator can still be used alongside the Sentinel Cost Estimator.&lt;/P&gt;
&lt;H2&gt;Cost Estimator Example&lt;/H2&gt;
&lt;P&gt;Let’s walk through a practical example using the Cost Estimator. A medium-sized company that is new to Microsoft Sentinel wants a high-level estimate of expected costs. In their previous SIEM, they performed proactive threat hunting across identity, endpoint, and network logs; ran detections on high-security-value data sources from multiple vendors; built a small set of dashboards; and required three years of retention for compliance and audit purposes. Based on their prior SIEM, they estimate they currently ingest about 2 TB per day.&lt;/P&gt;
&lt;P&gt;In the Cost Estimator, they select their region and enter their daily ingestion volume. As they are not currently using Sentinel data lake, they can explore different ways of splitting ingestion between tiers to understand the potential cost benefit of using the data lake.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Their retention requirement is three years. If they choose to use Sentinel data lake, they can plan to retain 90 days in the Analytics tier (included with Microsoft Sentinel) and keep the remaining data in Sentinel data lake for the full three years.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;As notebooks are new to them, they plan to evaluate notebooks for SOC workflows and graph building. They expect to start in the light usage tier and may move to medium as they mature. Since they occasionally query data older than 90 days to build trends—and anticipate using the Sentinel MCP server for SOC workflows on Sentinel lake data—they expect to start in the medium query volume tier.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; These tiers are for estimation purposes only; they do not lock in pricing when using the Microsoft Sentinel platform.&lt;/P&gt;
&lt;P&gt;Because this customer is upgrading from Microsoft 365 E3 to E5, they may be eligible for free ingestion based on their user count. Combined with their eligible server data from Defender for Servers, this can reduce their billable ingestion.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;In the review step, the Cost Estimator projects costs across a three-year window and breaks down drivers such as data tiers, commitment tiers, and comparisons with alternative storage options. From there, the customer can go back to earlier steps to adjust inputs and explore different scenarios. Once done, the estimate report can be exported for reference with Microsoft representatives and internal leadership when discussing the deployment of Microsoft Sentinel and Sentinel Platform.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;H2&gt;Finalize Your Estimate with Microsoft&lt;/H2&gt;
&lt;P&gt;The Microsoft Sentinel Cost Estimator is designed to provide directional guidance and help organizations understand how architectural decisions may influence cost. Final pricing may vary based on factors such as deployment architecture, commitment tiers, and applicable discounts. We recommend working with your Microsoft account team or a Security sales specialist to develop a formal proposal tailored to your organization’s requirements.&lt;/P&gt;
&lt;H2&gt;Try the Microsoft Sentinel Cost Estimator&lt;/H2&gt;
&lt;P&gt;Start building your Microsoft Sentinel cost estimate today: &lt;A href="https://microsoft.com/en-us/security/pricing/microsoft-sentinel/cost-estimator" target="_blank" rel="noopener"&gt;https://microsoft.com/en-us/security/pricing/microsoft-sentinel/cost-estimator&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Thu, 09 Apr 2026 22:26:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/estimate-microsoft-sentinel-costs-with-confidence-using-the-new/ba-p/4507062</guid>
      <dc:creator>shubh_khandhadia</dc:creator>
      <dc:date>2026-04-09T22:26:18Z</dc:date>
    </item>
    <item>
      <title>Introducing the New Microsoft Sentinel Logstash Output Plugin (Public Preview!)</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/introducing-the-new-microsoft-sentinel-logstash-output-plugin/ba-p/4508904</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Many organizations rely on Logstash as a flexible, trusted data pipeline for collecting, transforming, and forwarding logs from on-premises and hybrid environments. Microsoft Sentinel has long supported a Logstash output plugin, enabling customers to send data directly into Sentinel as part of their existing pipelines. The original plugin was implemented in Ruby, and while it has served its purpose, it no longer meets Microsoft’s Secure Future Initiative (SFI) standards and has limited engineering support. To address both security and sustainability, we have &lt;A class="lia-external-url" href="https://rubygems.org/gems/microsoft-sentinel-log-analytics-logstash-output-plugin/versions/2.0.0-java" target="_blank"&gt;rebuilt the plugin&lt;/A&gt; from the ground up in Java, a language that is more secure, better supported across Microsoft, and aligned with long-term platform investments.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:246,&amp;quot;335559739&amp;quot;:246,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;To ensure a seamless transition, the new implementation is still packaged and distributed as a standard Logstash Ruby gem. This means the installation and usage experience remains unchanged for customers, while benefiting from a more secure and maintainable foundation.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;What's New in This Version&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Java&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;based&amp;nbsp;and SFI&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;compliant&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Same Logstash plugin experience, now rebuilt on a stronger foundation. The new implementation is fully Java&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;based, aligning with Microsoft’s Secure Future Initiative (SFI) and providing improved security, supportability, and&amp;nbsp;long-term&amp;nbsp;maintainability.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559738&amp;quot;:210,&amp;quot;335559739&amp;quot;:210,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Modern,&amp;nbsp;DCR&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;based&amp;nbsp;ingestion&lt;/SPAN&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;The plugin now uses the Azure Monitor&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Logs Ingestion API&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;with&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Data Collection Rules (DCRs)&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, replacing the legacy HTTP Data Collection API (For more info,&amp;nbsp;see&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-monitor/logs/custom-logs-migrate" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Migrate from the HTTP Data Collector API to the Log Ingestion API - Azure Monitor | Microsoft Learn&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;). This gives customers full schema control, enables custom log tables, and supports ingestion into standard Microsoft Sentinel tables as well as Microsoft Sentinel data lake.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559738&amp;quot;:210,&amp;quot;335559739&amp;quot;:210,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Flexible authentication options&lt;/SPAN&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;Authentication is automatically&amp;nbsp;determined&amp;nbsp;based on your configuration, with support for:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559738&amp;quot;:210,&amp;quot;335559739&amp;quot;:210,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559683&amp;quot;:0,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:1080,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Client secret&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;(App registration / service principal)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559683&amp;quot;:0,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:1080,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Managed identity&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;,&amp;nbsp;eliminating&amp;nbsp;the need to store credentials in configuration files&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559683&amp;quot;:0,&amp;quot;335559684&amp;quot;:-2,&amp;quot;335559685&amp;quot;:1080,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Sovereign cloud support:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;The plugin supports Azure sovereign clouds, including&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Azure US Government&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Azure China&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Azure Germany&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Standard Logstash distribution model&lt;/SPAN&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;The plugin is published on&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;RubyGems.org&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, the standard distribution channel for Logstash plugins, and can be installed directly using the Logstash plugin manager, no change to your existing installation workflow.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;SPAN data-contrast="auto"&gt;What the Plugin Does&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Logstash&amp;nbsp;plugin&amp;nbsp;operates&amp;nbsp;as a three-stage&amp;nbsp;data&amp;nbsp;pipeline:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Input → Filter → Output.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:210,&amp;quot;335559739&amp;quot;:210,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Input&lt;/STRONG&gt;:&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;You control how data enters the pipeline,&amp;nbsp;using&amp;nbsp;sources such as&amp;nbsp;syslog,&amp;nbsp;filebeat, Kafka, Event Hubs, databases (via JDBC), files, and more.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Filter&lt;/STRONG&gt;:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;You enrich and transform events using Logstash’s powerful filtering ecosystem, including plugins like&amp;nbsp;grok,&amp;nbsp;mutate, and&amp;nbsp;Json, shaping data to match your security and operational needs.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Output&lt;/STRONG&gt;:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;This is where&amp;nbsp;Microsoft comes in.&amp;nbsp;The Microsoft Sentinel Logstash Output Plugin securely sends your processed events to an Azure Monitor Data Collection Endpoint, where they are ingested into Sentinel via a Data Collection Rule (DCR).&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300,&amp;quot;335559991&amp;quot;:360}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300,&amp;quot;335559991&amp;quot;:360}"&gt;&lt;SPAN data-contrast="auto"&gt;With this model, you&amp;nbsp;retain&amp;nbsp;full control over your Logstash pipeline and data processing logic, while the Sentinel plugin provides a secure, reliable path to ingest data into Microsoft Sentinel.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
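&lt;P&gt;As an illustration of the Input → Filter → Output model, here is a minimal pipeline sketch. The option names shown (client_app_Id, client_app_secret, tenant_id, data_collection_endpoint, dcr_immutable_id, dcr_stream_name) follow the plugin’s documented DCR-based settings; treat the values as placeholders and confirm the exact option names against the new plugin’s documentation before use.&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;input {
  # collect events from a local syslog listener (any Logstash input works here)
  syslog {
    port =&amp;gt; 5514
  }
}
filter {
  # shape events to match your DCR stream schema
  mutate {
    add_field =&amp;gt; { "Source" =&amp;gt; "logstash" }
  }
}
output {
  microsoft-sentinel-log-analytics-logstash-output-plugin {
    client_app_Id =&amp;gt; "YOUR-APP-ID"
    client_app_secret =&amp;gt; "YOUR-APP-SECRET"
    tenant_id =&amp;gt; "YOUR-TENANT-ID"
    data_collection_endpoint =&amp;gt; "https://YOUR-DCE.REGION.ingest.monitor.azure.com"
    dcr_immutable_id =&amp;gt; "YOUR-DCR-IMMUTABLE-ID"
    dcr_stream_name =&amp;gt; "Custom-YourTable_CL"
  }
}&lt;/LI-CODE&gt;
&lt;P&gt;With managed identity, the app registration fields can be omitted in favor of the identity assigned to the host, as described in the plugin documentation.&lt;/P&gt;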
&lt;H4&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300,&amp;quot;335559991&amp;quot;:360}"&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:0}"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Getting Started&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Prerequisites&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:210,&amp;quot;335559739&amp;quot;:210,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="3" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Logstash installed and running&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="3" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;An Azure Monitor Data Collection Endpoint (DCE) and Data Collection Rule (DCR) in your subscription&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="3" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Contributor role&amp;nbsp;on your Log Analytics workspace&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Who Is This For?&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Organizations that already have Logstash pipelines, need to collect from on-premises or legacy systems,&amp;nbsp;and&amp;nbsp;operate&amp;nbsp;in distributed/hybrid environments including air-gapped networks.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:210,&amp;quot;335559739&amp;quot;:210,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;To learn more, see&lt;/STRONG&gt;:&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://rubygems.org/gems/microsoft-sentinel-log-analytics-logstash-output-plugin/versions/2.0.0-java" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;microsoft-sentinel-log-analytics-logstash-output-plugin on RubyGems.org&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 06 Apr 2026 22:28:06 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/introducing-the-new-microsoft-sentinel-logstash-output-plugin/ba-p/4508904</guid>
      <dc:creator>JamesAde</dc:creator>
      <dc:date>2026-04-06T22:28:06Z</dc:date>
    </item>
    <item>
      <title>Accelerate Agent Development: Hacks for Building with Microsoft Sentinel data lake</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/accelerate-agent-development-hacks-for-building-with-microsoft/ba-p/4503039</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;As a Senior Product Manager | Developer Architect&amp;nbsp;on the App Assure team&amp;nbsp;working&amp;nbsp;to bring&amp;nbsp;Microsoft Sentinel&amp;nbsp;and Security Copilot&amp;nbsp;solutions to market, I&amp;nbsp;interact with&amp;nbsp;many&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ISVs building&amp;nbsp;agents&amp;nbsp;on Microsoft Sentinel data lake for the first time.&amp;nbsp;I’ve&amp;nbsp;written&amp;nbsp;this article&amp;nbsp;to walk&amp;nbsp;you through one&amp;nbsp;possible approach&amp;nbsp;for agent development&amp;nbsp;– the&amp;nbsp;process&amp;nbsp;I use when building sample agents internally at Microsoft.&amp;nbsp;If you have questions about this, or other methods for building your agent, App Assure offers guidance through&amp;nbsp;our&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/SentinelAdvisoryService" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Sentinel Advisory Service&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Throughout this post, I include screenshots and examples from&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://securitystore.microsoft.com/solutions/gigamon-inc.gigamon-security-posture-agent" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Gigamon’s Security Posture Insight Agent&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;This&amp;nbsp;article&amp;nbsp;assumes&amp;nbsp;you have:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;An existing SaaS or security product with accessible telemetry.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;A small ISV team (2–3 engineers + 1 PM).&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Focus on a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;single high value scenario&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;for the first&amp;nbsp;agent.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;The Composite Application Model (What You Are Building)&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When I begin designing an agent, I think end-to-end, from data ingestion requirements through agentic logic, following the Composite Application Model.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;The Composite Application Model consists of five layers:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Data&amp;nbsp;Sources&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; – Your product’s raw security, audit, or operational&amp;nbsp;data.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Ingestion&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; – Getting that&amp;nbsp;data&amp;nbsp;into Microsoft Sentinel.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Sentinel data lake &amp;amp; Microsoft Graph&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; – Normalization, storage, and correlation.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Agent&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; – Reasoning logic that queries&amp;nbsp;data&amp;nbsp;and produces outcomes.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;End User&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; – Security Copilot or SaaS experiences that invoke the agent.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This separation&amp;nbsp;allows for&amp;nbsp;evolving&amp;nbsp;data&amp;nbsp;ingestion and agent logic&amp;nbsp;simultaneously. It&amp;nbsp;also&amp;nbsp;helps avoid downstream surprises that require going back and&amp;nbsp;rearchitecting the entire solution.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Optional Prerequisite&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;You are enrolled in the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.microsoft.com/en-us/software-development-companies/offers-benefits/isv-success?msockid=0bcbf8c731f8678d2260ebd5304266ea" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;ISV Success Program&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;,&amp;nbsp;so you can earn Azure Credits to provision Security Compute Units (SCUs) for Security Copilot Agents.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Phase 1:&amp;nbsp;Data&amp;nbsp;Ingestion Design &amp;amp; Implementation&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;U&gt;&lt;SPAN data-contrast="auto"&gt;Choose Your Ingestion Strategy&lt;/SPAN&gt;&lt;/U&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&lt;SPAN data-contrast="auto"&gt;The first&amp;nbsp;choice I face when designing an agent is how&amp;nbsp;the&amp;nbsp;data is going to flow into my&amp;nbsp;Sentinel workspace.&amp;nbsp;Below I document&amp;nbsp;two primary&amp;nbsp;methods for ingestion.&lt;/SPAN&gt; &amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Option&amp;nbsp;A: Codeless Connector Framework (CCF)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;This is the best&amp;nbsp;option&amp;nbsp;for ISVs with REST APIs. To build a CCF solution, reference&amp;nbsp;our&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/create-codeless-connector" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;documentation&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;for getting started.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Option B: &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/microsoftsentinelblog/public-preview-announcement-empower-real-time-security-with-microsoft-sentinel%E2%80%99s/4483884" target="_blank" rel="noopener" data-lia-auto-title="CCF Push (Public Preview)" data-lia-auto-title-active="0"&gt;CCF Push (Public Preview)&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="18" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;In this instance, an&amp;nbsp;ISV pushes events directly to Sentinel&amp;nbsp;via a CCF Push connector.&amp;nbsp;Our&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/create-push-codeless-connector" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;MS Learn documentation&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;is&amp;nbsp;a great place&amp;nbsp;to get started using this method.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
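Behind a CCF Push connector, events reach Sentinel through DCR-based ingestion on a Data Collection Endpoint. Purely as a stdlib sketch of the request an ISV sender might assemble — the endpoint, DCR ID, stream name, event fields, and token below are placeholders, and the request is constructed but not sent:

```python
import json
import urllib.request

# All endpoint, DCR, and stream values below are illustrative placeholders,
# not real Azure resources.
DCE_URI = "https://my-dce.eastus-1.ingest.monitor.azure.com"
DCR_IMMUTABLE_ID = "dcr-00000000000000000000000000000000"
STREAM_NAME = "Custom-MyIsvEvents_CL"
API_VERSION = "2023-01-01"


def build_ingestion_request(events, bearer_token):
    """Build (but do not send) a Logs Ingestion API request for a batch of events."""
    url = (
        f"{DCE_URI}/dataCollectionRules/{DCR_IMMUTABLE_ID}"
        f"/streams/{STREAM_NAME}?api-version={API_VERSION}"
    )
    # The API expects the body to be a JSON array of records matching the
    # DCR stream's declared schema.
    body = json.dumps(events).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {bearer_token}",
            "Content-Type": "application/json",
        },
    )


# Construct a request for one sample event; actually sending it would require
# a real token from a Microsoft Entra app authorized on the DCR.
req = build_ingestion_request(
    [{"TimeGenerated": "2026-04-06T22:28:06Z", "Activity": "sign-in", "Severity": "low"}],
    bearer_token="<token-from-entra-app>",
)
print(req.full_url)
```

In practice you would more likely use the azure-monitor-ingestion SDK rather than hand-building requests; the sketch only shows the shape of the call a push connector depends on.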
&lt;P&gt;&lt;STRONG&gt;Additional Note:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;In the event you find that CCF does not support your needs, &lt;A href="https://aka.ms/AppAssure" target="_blank" rel="noopener"&gt;reach out to App Assure&lt;/A&gt; so we can capture your requirements for future consideration. Once you’ve documented those CCF feature gaps, Azure Functions remains an option.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Phase 2: Onboard to Microsoft Sentinel data lake&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Once&amp;nbsp;my&amp;nbsp;data&amp;nbsp;is flowing into Sentinel,&amp;nbsp;I&amp;nbsp;onboard a single Sentinel workspace to&amp;nbsp;data&amp;nbsp;lake.&amp;nbsp;This is a one-time action and cannot be repeated for additional workspaces.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;SPAN data-contrast="auto"&gt;Onboarding Steps&lt;/SPAN&gt;&lt;/U&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Go to the&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://security.microsoft.com/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Defender portal&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Follow the Sentinel&amp;nbsp;Data&amp;nbsp;lake&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/suchandanreddy/Microsoft-Sentinel-Labs/blob/main/01-Sentinel-DataLake-Onboarding.md" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;onboarding instructions&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Validate that tables are visible in the lake.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;See&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/kql-queries" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;R&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;unning KQL Queries in&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;d&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;ata&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;&amp;nbsp;lake&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;for&amp;nbsp;additional&amp;nbsp;information.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
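To validate step 3, a quick KQL check against the lake tier can confirm tables are actually populated; the table name here is a placeholder for whatever table your connector writes to:

```kql
// Placeholder table name - substitute your connector's output table
MyIsvEvents_CL
| where TimeGenerated > ago(1d)
| summarize EventCount = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated desc
```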
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Phase 3: Build and Test the Agent in Microsoft Foundry&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Once my data is successfully ingested into data lake, I begin the agent development process. There are multiple ways to build agents depending on your needs and tooling preferences. For this example, I chose Microsoft Foundry because it fit my needs for real-time logging, cost efficiency, and greater control.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;&lt;SPAN data-contrast="auto"&gt;1. Create a Microsoft Foundry Instance&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Foundry is used as a tool&amp;nbsp;for&amp;nbsp;your&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;development environment.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Reference our&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/suchandanreddy/Microsoft-Sentinel-Labs/blob/main/04-Building-an-Agent-in-Azure-AI-Foundry.md#step-2%EF%B8%8F%E2%83%A3--create-the-agent-in-ai-foundry" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;QuickStart guid&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;e&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;for&amp;nbsp;setting up&amp;nbsp;your&amp;nbsp;Foundry instance.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;SPAN data-contrast="auto"&gt;Required Permissions:&lt;/SPAN&gt;&lt;/U&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Security Reader (Entra or Subscription)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Azure AI Developer at the resource group&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;After setup, click&amp;nbsp;&lt;STRONG&gt;Create Agent&lt;/STRONG&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;2.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Design the Agent&lt;/SPAN&gt;&lt;/U&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;U&gt;A strong first agent:&lt;/U&gt;&lt;/SPAN&gt;&lt;U&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/U&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Solves one narrow security problem.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Has deterministic outputs.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Uses explicit instructions, not vague prompts.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;U&gt;Example agent responsibilities:&lt;/U&gt;&lt;/SPAN&gt;&lt;U&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/U&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;To query Sentinel data lake (Sentinel data exploration tool).&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;To summarize recent incidents.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;To correlate ISVs specific signals with Sentinel alerts and other ISV tables (Sentinel data exploration tool).&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;3. Implement Agent Instructions&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;U&gt;Well-designed agent instructions should include:&lt;/U&gt;&lt;/SPAN&gt;&lt;U&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/U&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Role definition ("You are a security investigation agent…").&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Data sources it can access.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Step by step reasoning rules.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Output format expectations.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
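Purely as an illustration of those four elements together (the product, table, and field names are invented), a first instruction set might read:

```text
Role: You are a security investigation agent for <ISV product> telemetry.
Data sources: Query only the MyIsvEvents_CL table and Sentinel alert tables,
  via the Microsoft Sentinel Data Exploration tool.
Reasoning rules:
  1. Scope every query to the last 24 hours unless the user gives a range.
  2. Correlate ISV events to Sentinel alerts on UserPrincipalName.
  3. If a query returns no rows, say so - do not speculate.
Output format: A table of (user, event count, first seen, last seen),
  followed by a risk assessment of no more than two sentences.
```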
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&lt;SPAN data-contrast="auto"&gt;Sample Instructions can be found&amp;nbsp;here:&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/suchandanreddy/Microsoft-Sentinel-Labs/blob/main/04-Building-an-Agent-in-Azure-AI-Foundry.md#identitydrift-agent-instructions" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Agent Instructions&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;4. &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Configure the Microsoft Model Context Protocol (MCP) tooling for your agent&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For your agent to query, summarize and correlate all the data your connector has sent to data lake, take the following steps:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Select &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Tools&lt;/STRONG&gt;, &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;and under Catalog, type Sentinel, and then select &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Microsoft Sentinel&amp;nbsp;Data&amp;nbsp;Exploration&lt;/STRONG&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;For more information about the data exploration tool collection in MCP server,&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/sentinel-mcp-data-exploration-tool" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;see our documentation.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;I always test repeatedly with real data until outputs are consistent. For more information on testing and validating the agent, please reference &lt;/SPAN&gt;&lt;A href="https://github.com/suchandanreddy/Microsoft-Sentinel-Labs/blob/main/04-Building-an-Agent-in-Azure-AI-Foundry.md#step-3%EF%B8%8F%E2%83%A3-test-ai-agent" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;our documentation&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Phase 4: Migrate the Agent to Security Copilot&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Once the agent works in Foundry,&amp;nbsp;I&amp;nbsp;migrate it to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Security Copilot&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;. To do this:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Copy the full instruction set from Foundry&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Provision a SCU for your Security Copilot workspace. For instructions, please reference &lt;/SPAN&gt;&lt;A href="https://github.com/suchandanreddy/Microsoft-Sentinel-Labs/blob/main/05-Building-an-Agent-in-Security-Copilot.md#step-1%EF%B8%8F%E2%83%A3--create-a-security-copilot-workspace" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;this documentation&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Make note of this process as you will be charged per hour per SCU&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Once&amp;nbsp;you&amp;nbsp;are done testing you will need to deprovision the capacity to prevent&amp;nbsp;additional&amp;nbsp;charges&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Open Security Copilot and u&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;se &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Create&amp;nbsp;From&amp;nbsp;Scratch Agent Builder&lt;/STRONG&gt;&amp;nbsp;as&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/suchandanreddy/Microsoft-Sentinel-Labs/blob/main/05-Building-an-Agent-in-Security-Copilot.md#navigate-to-agent-creation" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;outlined here.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Add Sentinel data exploration MCP tools (these are the same instructions from the Foundry agent in the previous step).&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;For more information on linking the Sentinel MCP tools,&amp;nbsp;please refer to&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://github.com/suchandanreddy/Microsoft-Sentinel-Labs/blob/main/05-Building-an-Agent-in-Security-Copilot.md#step-4-add-tools" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;this article&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Paste and adapt instructions.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;At this stage,&amp;nbsp;I always&amp;nbsp;validate&amp;nbsp;the following:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Agent Permissions&lt;/STRONG&gt; – I have confirmed the agent has the necessary permissions to interact with the MCP tool and read data from your data lake instance. &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Agent&amp;nbsp;Performance&lt;/STRONG&gt; – I have confirmed a s&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;uccessful interaction with measured latency and benchmark results.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This step intentionally avoids reimplementation. I am reusing proven logic.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Phase 5: Execute,&amp;nbsp;Validate, and Publish&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;After setting up my agent, I navigate to the &lt;STRONG&gt;Agents&lt;/STRONG&gt; tab to manually trigger the agent. For more information on testing an agent you can refer to &lt;A href="https://github.com/suchandanreddy/Microsoft-Sentinel-Labs/blob/main/05-Building-an-Agent-in-Security-Copilot.md#step-3%EF%B8%8F%E2%83%A3--set-up-the-identitydrift-investigation-agent" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;this article&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Now that the agent has been executed successfully, I download the agent Manifest file from the environment so that it can be packaged.&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Click &lt;STRONG&gt;View code&lt;/STRONG&gt; on the Agent under the Build tab as outlined &lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/copilot/security/developer/create-agent-dev#view-code" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;in th&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;is&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;&amp;nbsp;documentation&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Publishing to the Microsoft Security Store&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&lt;SPAN data-contrast="auto"&gt;If I were publishing my agent to the&amp;nbsp;Microsoft Security Store, these&amp;nbsp;are the&amp;nbsp;steps&amp;nbsp;I would follow:&lt;/SPAN&gt; &amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Finalize ingestion reliability.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Document required permissions.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Define supported scenarios clearly.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Package agent instructions and guidance (by&lt;/SPAN&gt;&amp;nbsp;&lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://learn.microsoft.com/en-us/security/store/publish-a-security-copilot-agent-or-analytics-solution-in-security-store" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;following these instructions&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;)&lt;SPAN style="color: rgb(30, 30, 30);" data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN style="color: rgb(30, 30, 30);" data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
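&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For step 1, a quick KQL sketch can quantify end-to-end ingestion latency. The table name MyConnector_CL below is a placeholder for your connector's custom table:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;// Sketch only: measure ingestion latency over the last day for a placeholder custom table.
// ingestion_time() is when a record landed in the workspace; TimeGenerated is when it was produced.
MyConnector_CL
| where TimeGenerated &amp;gt; ago(1d)
| extend LatencySeconds = datetime_diff('second', ingestion_time(), TimeGenerated)
| summarize AvgLatencySeconds = avg(LatencySeconds), MaxLatencySeconds = max(LatencySeconds)&lt;/LI-CODE&gt;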
&lt;H3&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Summary&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Based on my experience developing Security Copilot agents on Microsoft Sentinel data lake,&amp;nbsp;this&amp;nbsp;playbook provides a practical, repeatable framework for ISVs to accelerate their agent development and delivery while&amp;nbsp;maintaining&amp;nbsp;high standards&amp;nbsp;of quality.&amp;nbsp;This foundation enables rapid iteration—future agents can often be built in days, not weeks, by reusing the same ingestion and data lake setup.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When starting on your own agent development journey, keep&amp;nbsp;the following in mind:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;To limit initial&amp;nbsp;scope.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;To reuse Microsoft managed&amp;nbsp;infrastructure.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;To separate ingestion from intelligence.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;What Success Looks Like&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;At the end of this development process, you will have the following:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;A Microsoft Sentinel data connector&amp;nbsp;live&amp;nbsp;in Content Hub (or in process) that provides a&amp;nbsp;data ingestion&amp;nbsp;path. &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Data visible&amp;nbsp;in data lake. &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;A tested agent running&amp;nbsp;in&amp;nbsp;Security Copilot.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN data-contrast="auto"&gt;Clear documentation for customers.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
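&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;To confirm that data is visible, I run a simple freshness check. The table name MyConnector_CL is a placeholder for your connector's custom table:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;// Sketch only: confirm recent rows exist and see how fresh the newest record is.
MyConnector_CL
| where TimeGenerated &amp;gt; ago(24h)
| summarize Rows = count(), NewestRecord = max(TimeGenerated)&lt;/LI-CODE&gt;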
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;A key success factor&amp;nbsp;I look&amp;nbsp;for is clarity over completeness.&amp;nbsp;A focused agent is far more likely to be adopted.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Need help?&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;If you have any issues as you work to develop your agent, please&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/AppAssure" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;reach out to the App Assure team&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;for support via&amp;nbsp;our&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://aka.ms/SentinelAdvisoryService" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Sentinel Advisory Service&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;.&amp;nbsp;Or if you have any other tips, please comment below,&amp;nbsp;I’d&amp;nbsp;love to hear your feedback.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:278}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 02 Apr 2026 21:37:47 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/accelerate-agent-development-hacks-for-building-with-microsoft/ba-p/4503039</guid>
      <dc:creator>MitchellGulledge</dc:creator>
      <dc:date>2026-04-02T21:37:47Z</dc:date>
    </item>
    <item>
      <title>Microsoft Sentinel MCP Server with external AI models (Claude) for natural language investigations</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-sentinel-mcp-server-with-external-ai-models-claude-for/ba-p/4507013</link>
      <description>&lt;P&gt;Security teams are increasingly exploring how AI assistants support them in investigating incidents, asking questions, and exploring their data. At the same time, controlling how data is accessed remains critical. Today, we’re sharing how Sentinel can support a third-party AI assistant like Claude through a new integration approach using&amp;nbsp;&lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/microsoft-security-blog/microsoft-sentinel-mcp-server---generally-available-with-exciting-new-capabiliti/4470125" target="_blank" rel="noopener" data-lia-auto-title="Sentinel’s Model Context Protocol (MCP) server," data-lia-auto-title-active="0"&gt;Sentinel’s Model Context Protocol (MCP) server,&lt;/A&gt; while continuing to rely on &lt;A class="lia-external-url" href="https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id" target="_blank" rel="noopener"&gt;Microsoft Entra ID&lt;/A&gt; for enterprise grade authentication and access control. This approach uses Microsoft Sentinel with Entra ID to let third-party AI tools access Sentinel data.&lt;/P&gt;
&lt;H4&gt;Why this matters&lt;/H4&gt;
&lt;P&gt;Sentinel customers can explore security data using natural language to assist investigations, while preserving strict tenant isolation and access controls, without managing app registrations or shared secrets. Third-party AI assistants like Claude can access Sentinel through Microsoft’s existing security infrastructure. When a query is made, the assistant calls the Sentinel MCP server, which enforces authentication via Entra ID before returning data. This Claude third-party connector is now available.&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/sentinel-mcp-chatgpt-claude-connector" target="_blank" rel="noopener"&gt;Click here for detailed guidance&lt;/A&gt;.&lt;/P&gt;
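&lt;P&gt;As an illustration (not an official mapping), a natural-language question such as "which accounts had the most failed sign-ins in the last day?" might resolve, through the MCP server's tools, to a KQL query along these lines:&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;// Illustrative only: failed sign-ins over the last day, busiest accounts first.
SigninLogs
| where TimeGenerated &amp;gt; ago(1d)
| where ResultType != "0" // in SigninLogs, ResultType "0" indicates a successful sign-in
| summarize FailedAttempts = count() by UserPrincipalName
| top 10 by FailedAttempts&lt;/LI-CODE&gt;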
&lt;H4&gt;Resources&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Use the Microsoft Sentinel MCP connector in ChatGPT or Claude Code&lt;/STRONG&gt;&lt;BR /&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/sentinel-mcp-chatgpt-claude-connector" target="_blank" rel="noopener"&gt;Use the Microsoft Sentinel MCP connector in ChatGPT or Claude&amp;nbsp;- Microsoft Security | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Sentinel MCP overview&lt;/STRONG&gt;&lt;BR /&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/sentinel-mcp-tools-overview" target="_blank" rel="noopener"&gt;What is Microsoft Sentinel MCP server's tool collection? - Microsoft Security | Microsoft Learn&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Get started with Sentinel MCP&lt;/STRONG&gt;&lt;BR /&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/sentinel/datalake/sentinel-mcp-get-started" target="_blank" rel="noopener"&gt;Get started with Microsoft Sentinel MCP server - Microsoft Security | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 22 Apr 2026 21:35:29 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-sentinel-mcp-server-with-external-ai-models-claude-for/ba-p/4507013</guid>
      <dc:creator>mcasgrain</dc:creator>
      <dc:date>2026-04-22T21:35:29Z</dc:date>
    </item>
    <item>
      <title>Stuck looking up a watchlist value</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/stuck-looking-up-a-watchlist-value/m-p/4507743#M12907</link>
      <description>&lt;P&gt;Hiya,&lt;/P&gt;&lt;P&gt;I get stuck working with watchlists sometimes.&lt;/P&gt;&lt;P&gt;In this example, I'm wanting to focus on account activity from a list of UPNs.&lt;/P&gt;&lt;P&gt;If I split the elements up, I get the individual results, but can't seem to pull it all together.&lt;/P&gt;&lt;P&gt;=====================================================&lt;/P&gt;&lt;P&gt;In its entirety, the query returns zero results:&lt;/P&gt;&lt;P&gt;let ServiceAccounts=(_GetWatchlist('ServiceAccounts_Monitoring'))| project SearchKey;&lt;/P&gt;&lt;P&gt;let OpName = dynamic(['Reset password (self-service)','Reset User Password','Change user password','User reset password','User started password reset','Enable Account','Change password (self-service)','Update PasswordProfile','Self-service password reset flow activity progress']);&lt;/P&gt;&lt;P&gt;AuditLogs&lt;/P&gt;&lt;P&gt;| where OperationName has_any (OpName)&lt;/P&gt;&lt;P&gt;| extend upn = TargetResources.[0].userPrincipalName&lt;/P&gt;&lt;P&gt;| where upn in (ServiceAccounts) //&amp;lt;=This is where I think I'm wrong&lt;/P&gt;&lt;P&gt;| project upn&lt;/P&gt;&lt;P&gt;=====================================================&lt;/P&gt;&lt;P&gt;This line on its own, returns the user on the list:&lt;/P&gt;&lt;P&gt;let ServiceAccounts=(_GetWatchlist('ServiceAccounts_Monitoring'))| project SearchKey;&lt;/P&gt;&lt;P&gt;=====================================================&lt;/P&gt;&lt;P&gt;This section on its own, returns all the activity&lt;/P&gt;&lt;P&gt;let OpName = dynamic(['Reset password (self-service)','Reset User Password','Change user password','User reset password','User started password reset','Enable Account','Change password (self-service)','Update PasswordProfile','Self-service password reset flow activity progress']);&lt;/P&gt;&lt;P&gt;AuditLogs&lt;/P&gt;&lt;P&gt;| where OperationName has_any (OpName)&lt;/P&gt;&lt;P&gt;| extend upn = TargetResources.[0].userPrincipalName&lt;/P&gt;&lt;P&gt;| 
where upn contains "username" //This is the name on the watchlist - so I know the activity exists&lt;/P&gt;&lt;P&gt;====================================================&lt;/P&gt;&lt;P&gt;I'm doing something wrong when I'm trying to use the watchlist cache (I think)&lt;/P&gt;&lt;P&gt;Any help/guidance or wisdom would be greatly appreciated!&lt;/P&gt;&lt;P&gt;Many thanks&lt;/P&gt;</description>
      <pubDate>Wed, 01 Apr 2026 14:25:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/stuck-looking-up-a-watchlist-value/m-p/4507743#M12907</guid>
      <dc:creator>MrD</dc:creator>
      <dc:date>2026-04-01T14:25:42Z</dc:date>
    </item>
    <item>
      <title>Security Copilot Integration with Microsoft Sentinel - Why Automation matters now</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/security-copilot-integration-with-microsoft-sentinel-why/m-p/4507293#M12906</link>
      <description>&lt;P&gt;Security Operations Centers face a relentless challenge - the volume of security alerts far exceeds the capacity of human analysts. On average, a mid-sized SOC receives thousands of alerts per day, and analysts spend up to 80% of their time on initial triage. That means determining whether an alert is a true positive, understanding its scope, and deciding on next steps. With Microsoft Security Copilot now deeply integrated into Microsoft Sentinel, there is finally a practical path to automating the most time-consuming parts of this workflow.&lt;/P&gt;&lt;P&gt;So I decided to walk you through how to combine Security Copilot with Sentinel to build an automated incident triage pipeline - complete with KQL queries, automation rule patterns, and practical scenarios drawn from common enterprise deployments.&lt;/P&gt;&lt;P&gt;Traditional triage workflows rely on analysts manually reviewing each incident - reading alert details, correlating entities across data sources, checking threat intelligence, and making a severity assessment. 
This is slow, inconsistent, and does not scale.&lt;/P&gt;&lt;P&gt;Security Copilot changes this equation by providing:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;STRONG&gt;Natural language incident summarization&lt;/STRONG&gt; - turning complex, multi-alert incidents into analyst-readable narratives&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Automated entity enrichment&lt;/STRONG&gt; - pulling threat intelligence, user risk scores, and device compliance state without manual lookups&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Guided response recommendations&lt;/STRONG&gt; - suggesting containment and remediation steps based on the incident type and organizational context&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;The key insight is that Copilot does not replace analysts - it handles the repetitive first-pass triage so analysts can focus on decision-making and complex investigations.&lt;/P&gt;&lt;H2&gt;Architecture - How the Pieces Fit Together&lt;/H2&gt;&lt;P&gt;The automated triage pipeline consists of four layers:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Detection Layer - Sentinel analytics rules generate incidents from log data&lt;/LI&gt;&lt;LI&gt;Enrichment Layer - Automation rules trigger Logic Apps that call Security Copilot&lt;/LI&gt;&lt;LI&gt;Triage Layer - Copilot analyzes the incident, enriches entities, and produces a triage summary&lt;/LI&gt;&lt;LI&gt;Routing Layer - Based on Copilot's assessment, incidents are routed, re-prioritized, or auto-closed&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;(Forgive my AI-painted illustration here, but I find it a nice way to display dependencies.)&lt;/P&gt;&lt;LI-CODE lang=""&gt;+-----------------------------------------------------------+
|                    Microsoft Sentinel                     |
|                                                           |
|  Analytics Rules --&amp;gt; Incidents --&amp;gt; Automation Rules       |
|                                        |                  |
|                                        v                  |
|                              Logic App / Playbook         |
|                                        |                  |
|                                        v                  |
|                              Security Copilot API         |
|                              +-----------------+          |
|                              | Summarize       |          |
|                              | Enrich Entities |          |
|                              | Assess Risk     |          |
|                              | Recommend Action|          |
|                              +--------+--------+          |
|                                       |                   |
|                                       v                   |
|                     +-----------------------------+       |
|                     |  Update Incident            |       |
|                     |  - Add triage summary tag   |       |
|                     |  - Adjust severity          |       |
|                     |  - Assign to analyst/team   |       |
|                     |  - Auto-close false positive|       |
|                     +-----------------------------+       |
+-----------------------------------------------------------+&lt;/LI-CODE&gt;&lt;H2&gt;Step 1 - Identify High-Volume Triage Candidates&lt;/H2&gt;&lt;P&gt;Not every incident type benefits equally from automated triage. Start with alert types that are high in volume but often turn out to be false positives or low severity. Use this KQL query to identify your top candidates:&lt;/P&gt;&lt;LI-CODE lang="kusto"&gt;SecurityIncident
| where TimeGenerated &amp;gt; ago(30d)
| summarize
    TotalIncidents = count(),
    AutoClosed = countif(Classification == "FalsePositive" or Classification == "BenignPositive"),
    AvgTimeToTriageMinutes = avg(datetime_diff('minute', ClosedTime, CreatedTime)) // minutes from incident creation to closure
    by Title
| extend FalsePositiveRate = round(AutoClosed * 100.0 / TotalIncidents, 1)
| where TotalIncidents &amp;gt; 10
| order by TotalIncidents desc
| take 20&lt;/LI-CODE&gt;&lt;P&gt;This query surfaces the incident types where automation will deliver the highest ROI. Based on publicly available data and community reports, the following categories consistently appear at the top:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Impossible travel alerts (high volume, around 60% false positive rate)&lt;/LI&gt;&lt;LI&gt;Suspicious sign-in activity from unfamiliar locations&lt;/LI&gt;&lt;LI&gt;Mass file download and share events&lt;/LI&gt;&lt;LI&gt;Mailbox forwarding rule creation&lt;/LI&gt;&lt;/UL&gt;&lt;H2&gt;Step 2 - Build the Copilot-Powered Triage Playbook&lt;/H2&gt;&lt;P&gt;Create a Logic App playbook that triggers on incident creation and leverages the Security Copilot connector. The core flow looks like this:&lt;/P&gt;&lt;P&gt;Trigger: Microsoft Sentinel Incident - When an incident is created&lt;/P&gt;&lt;P&gt;Action 1 - Get incident entities:&lt;/P&gt;&lt;LI-CODE lang="kusto"&gt;let incidentEntities = SecurityIncident
| where IncidentNumber == &amp;lt;IncidentNumber&amp;gt;
| mv-expand AlertIds
| extend AlertIds = tostring(AlertIds) // mv-expand yields dynamic; cast to string so the join key types match
| join kind=inner (SecurityAlert | extend AlertId = SystemAlertId) on $left.AlertIds == $right.AlertId
| mv-expand Entities
| extend EntityData = parse_json(Entities)
| project EntityType = tostring(EntityData.Type),
          EntityValue = coalesce(
              tostring(EntityData.HostName),
              tostring(EntityData.Address),
              tostring(EntityData.Name),
              tostring(EntityData.DnsDomain)
          );
incidentEntities&lt;/LI-CODE&gt;&lt;P&gt;Note: The &amp;lt;IncidentNumber&amp;gt; placeholder above is a Logic App dynamic content variable. When building your playbook, select the incident number from the trigger output rather than hardcoding a value.&lt;/P&gt;&lt;P&gt;Action 2 - Copilot prompt session:&lt;/P&gt;&lt;P&gt;Send a structured prompt to Security Copilot that requests:&lt;/P&gt;&lt;LI-CODE lang=""&gt;Analyze this Microsoft Sentinel incident and provide a triage assessment:

Incident Title: {IncidentTitle}
Severity: {Severity}
Description: {Description}
Entities involved: {EntityList}
Alert count: {AlertCount}

Please provide:
1. A concise summary of what happened (2-3 sentences)
2. Entity risk assessment for each IP, user, and host
3. Whether this appears to be a true positive, benign positive, or false positive
4. Recommended next steps
5. Suggested severity adjustment (if any)&lt;/LI-CODE&gt;&lt;P&gt;Action 3 - Parse and route:&lt;/P&gt;&lt;P&gt;Use the Copilot response to update the incident. The Logic App parses the structured output and:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Adds the triage summary as an incident comment&lt;/LI&gt;&lt;LI&gt;Tags the incident with copilot-triaged&lt;/LI&gt;&lt;LI&gt;Adjusts severity if Copilot recommends it&lt;/LI&gt;&lt;LI&gt;Routes to the appropriate analyst tier based on the assessment&lt;/LI&gt;&lt;/UL&gt;&lt;H2&gt;Step 3 - Enrich with Contextual KQL Lookups&lt;/H2&gt;&lt;P&gt;Security Copilot's assessment improves dramatically when you feed it contextual data. Before sending the prompt, enrich the incident with organization-specific signals:&lt;/P&gt;&lt;LI-CODE lang="kusto"&gt;// Check if the user has a history of similar alerts (repeat offender vs. first time)
let userAlertHistory = SecurityAlert
| where TimeGenerated &amp;gt; ago(90d)
| mv-expand Entities
| extend EntityData = parse_json(Entities)
| where EntityData.Type == "account"
| where tostring(EntityData.Name) == "&amp;lt;UserPrincipalName&amp;gt;"
| summarize
    PriorAlertCount = count(),
    DistinctAlertTypes = dcount(AlertName),
    LastAlertTime = max(TimeGenerated)
| extend IsRepeatOffender = PriorAlertCount &amp;gt; 5;
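// Additional enrichment sketch (assumption: SigninLogs is connected in this workspace):
// baseline the user's sign-in countries over 90 days to contextualize impossible-travel alerts.
let travelBaseline = SigninLogs
| where TimeGenerated &amp;gt; ago(90d)
| where UserPrincipalName == "&amp;lt;UserPrincipalName&amp;gt;"
| summarize KnownCountries = make_set(tostring(LocationDetails.countryOrRegion));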
userAlertHistory&lt;/LI-CODE&gt;&lt;LI-CODE lang="kusto"&gt;// Check user risk level from Entra ID Protection
AADUserRiskEvents
| where TimeGenerated &amp;gt; ago(7d)
| where UserPrincipalName == "&amp;lt;UserPrincipalName&amp;gt;"
| summarize
    arg_max(TimeGenerated, RiskLevel),
    RecentRiskEvents = count()
| project RiskLevel, RecentRiskEvents&lt;/LI-CODE&gt;&lt;P&gt;Including this context in the Copilot prompt transforms generic assessments into organization-aware triage decisions. A "suspicious sign-in" for a user who travels internationally every week is very different from the same alert for a user who has never left their home country.&lt;/P&gt;&lt;H2&gt;Step 4 - Implement Feedback Loops&lt;/H2&gt;&lt;P&gt;Automated triage is only as good as its accuracy over time. Build a feedback mechanism by tracking Copilot's assessments against analyst final classifications:&lt;/P&gt;&lt;LI-CODE lang="kusto"&gt;SecurityIncident
| where Tags has "copilot-triaged"
| where TimeGenerated &amp;gt; ago(30d)
| where Classification != ""
| mv-expand Comments
| extend CopilotAssessment = extract("Assessment: (True Positive|False Positive|Benign Positive)", 1, tostring(Comments))
| where isnotempty(CopilotAssessment)
| summarize
    Total = dcount(IncidentNumber),
    Correct = dcountif(IncidentNumber,
        (CopilotAssessment == "False Positive" and Classification == "FalsePositive") or
        (CopilotAssessment == "True Positive" and Classification == "TruePositive") or
        (CopilotAssessment == "Benign Positive" and Classification == "BenignPositive")
    )
    by bin(TimeGenerated, 7d)
| extend AccuracyPercent = round(Correct * 100.0 / Total, 1)
| order by TimeGenerated asc&lt;/LI-CODE&gt;&lt;P&gt;For this query to work reliably, the automation playbook must write the assessment in a consistent format within the incident comments. Use a structured prefix such as Assessment: True Positive so the regex extraction remains stable.&lt;/P&gt;&lt;P&gt;According to Microsoft's published benchmarks and community feedback, Copilot-assisted triage typically achieves 85-92% agreement with senior analyst classifications after prompt tuning - significantly reducing the manual triage burden.&lt;/P&gt;&lt;H2&gt;A Note on Licensing and Compute Units&lt;/H2&gt;&lt;P&gt;Security Copilot is licensed through Security Compute Units (SCUs), which are provisioned in Azure. Each prompt session consumes SCUs based on the complexity of the request. For automated triage at scale, plan your SCU capacity carefully - high-volume playbooks can accumulate significant usage. Start with a conservative allocation, monitor consumption through the Security Copilot usage dashboard, and scale up as you validate ROI. Microsoft provides detailed guidance on SCU sizing in the official Security Copilot documentation.&lt;/P&gt;&lt;H2&gt;Example Scenario - Impossible Travel at Scale&lt;/H2&gt;&lt;P&gt;Consider a typical enterprise that generates over 200 impossible travel alerts per week. The SOC team spends roughly 15 hours weekly just triaging these. 
Here is how automated triage addresses this:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Detection - Sentinel's built-in impossible travel analytics rule flags the incidents&lt;/LI&gt;&lt;LI&gt;Enrichment - The playbook pulls each user's typical travel patterns from sign-in logs over the past 90 days, VPN usage, and whether the "impossible" location matches any known corporate office or VPN egress point&lt;/LI&gt;&lt;LI&gt;Copilot Analysis - Security Copilot receives the enriched context and classifies each incident&lt;/LI&gt;&lt;LI&gt;Expected Result - Based on common deployment patterns, around 70-75% of impossible travel incidents are auto-closed as benign (VPN, known travel patterns), roughly 20% are downgraded to informational with a triage note, and only about 5% are escalated to analysts as genuine suspicious activity&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;This type of automation can reclaim over 10 hours per week - time that analysts can redirect to proactive threat hunting.&lt;/P&gt;&lt;H2&gt;Getting Started - Practical Recommendations&lt;/H2&gt;&lt;P&gt;For teams ready to implement automated triage with Security Copilot and Sentinel, here is a recommended approach:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Start small. Pick one high-volume, high-false-positive incident type. Do not try to automate everything at once.&lt;/LI&gt;&lt;LI&gt;Run in shadow mode first. Have the playbook add triage comments but do not auto-close or re-route. Let analysts compare Copilot's assessment with their own for two to four weeks.&lt;/LI&gt;&lt;LI&gt;Tune your prompts. Generic prompts produce generic results. Include organization-specific context - naming conventions, known infrastructure, typical user behavior patterns.&lt;/LI&gt;&lt;LI&gt;Monitor accuracy continuously. Use the feedback loop KQL above. If accuracy drops below 80%, pause automation and investigate.&lt;/LI&gt;&lt;LI&gt;Maintain human oversight. Even at 90%+ accuracy, keep a human review step for high-severity incidents. 
Automation handles volume - analysts handle judgment.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;The combination of Security Copilot and Microsoft Sentinel represents a genuine step forward for SOC efficiency. By automating the initial triage pass - summarizing incidents, enriching entities, and providing classification recommendations - analysts are freed to focus on what humans do best: making nuanced security decisions under uncertainty.&lt;/P&gt;&lt;P&gt;Feel free to like or/and connect :)&lt;/P&gt;</description>
      <pubDate>Tue, 31 Mar 2026 13:09:01 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/security-copilot-integration-with-microsoft-sentinel-why/m-p/4507293#M12906</guid>
      <dc:creator>Marcel_Graewer</dc:creator>
      <dc:date>2026-03-31T13:09:01Z</dc:date>
    </item>
    <item>
      <title>Webinar Cancellation</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/webinar-cancellation/m-p/4507045#M12905</link>
      <description>
&lt;P&gt;Hi everyone!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The webinar originally scheduled for April 14th on "Using distributed content to manage your multi-tenant SecOps" has unfortunately been cancelled for now. We apologize for the inconvenience and hope to reschedule it in the future.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please find other available webinars at: &lt;A class="lia-external-url" href="http://aka.ms/securitycommunity" target="_blank"&gt;http://aka.ms/securitycommunity&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;All the best,&lt;/P&gt;
&lt;P&gt;The Microsoft Security Community Team&lt;/P&gt;</description>
      <pubDate>Mon, 30 Mar 2026 21:40:40 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/webinar-cancellation/m-p/4507045#M12905</guid>
      <dc:creator>emilyfalla</dc:creator>
      <dc:date>2026-03-30T21:40:40Z</dc:date>
    </item>
    <item>
      <title>Your Sentinel AMA Logs &amp; Queries Are Public by Default — AMPLS Architectures to Fix That</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/your-sentinel-ama-logs-queries-are-public-by-default-ampls/m-p/4505699#M12899</link>
      <description>&lt;P&gt;When you deploy Microsoft Sentinel, security log ingestion travels over public Azure Data Collection Endpoints by default. The connection is encrypted, and the data arrives correctly — but the endpoint is publicly reachable, and so is the workspace itself, queryable from any browser on any network.&lt;/P&gt;&lt;P&gt;For many organisations, that trade-off is fine. For others — regulated industries, healthcare, financial services, critical infrastructure — it is the exact problem they need to solve.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Azure Monitor Private Link Scope (AMPLS)&lt;/STRONG&gt;&amp;nbsp;is how you solve it.&lt;/P&gt;&lt;H3&gt;What AMPLS Actually Does&lt;/H3&gt;&lt;P&gt;AMPLS is a single Azure resource that wraps your monitoring pipeline and controls two settings:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;STRONG&gt;Where logs are allowed to go&lt;/STRONG&gt;&amp;nbsp;(ingestion mode:&amp;nbsp;Open&amp;nbsp;or&amp;nbsp;PrivateOnly)&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Where analysts are allowed to query from&lt;/STRONG&gt;&amp;nbsp;(query mode:&amp;nbsp;Open&amp;nbsp;or&amp;nbsp;PrivateOnly)&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Change those two settings and you fundamentally change the security posture — not as a policy recommendation, but as a&amp;nbsp;&lt;STRONG&gt;hard platform enforcement&lt;/STRONG&gt;. Set ingestion to&amp;nbsp;PrivateOnly&amp;nbsp;and the public endpoint stops working. It does not fall back gracefully. It returns an error. That is the point.&lt;/P&gt;&lt;P&gt;It is not a firewall rule someone can bypass or a policy someone can override. Control is baked in at the infrastructure level.&lt;/P&gt;&lt;H3&gt;Three Patterns — One Spectrum&lt;/H3&gt;&lt;P&gt;There is no universally correct answer. The right architecture depends on your organisation's risk appetite, existing network infrastructure, and how much operational complexity your team can realistically manage. 
These three patterns cover the full range:&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;&lt;STRONG&gt;Architecture 1 — Open / Public (Basic)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;No AMPLS. Logs travel to public Data Collection Endpoints over the internet. The workspace is open to queries from anywhere. This is the default — operational in minutes with zero network setup.&lt;/P&gt;&lt;P&gt;Cloud service connectors (Microsoft 365, Defender, third-party) work immediately because they are server-side/API/Graph pulls and are unaffected by AMPLS. Azure Monitor Agents and Azure Arc agents handle ingestion from cloud or on-prem machines via public network.&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;UL&gt;&lt;LI&gt;Simplicity: 9/10 | Security: 6/10&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Good for:&lt;/STRONG&gt;&amp;nbsp;Dev environments, teams getting started, low-sensitivity workloads&lt;/LI&gt;&lt;/UL&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;&lt;STRONG&gt;Architecture 2 — Hybrid: Private Ingestion, Open Queries (Recommended for most)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;AMPLS is in place. Ingestion is locked to&amp;nbsp;PrivateOnly&amp;nbsp;— logs from virtual machines travel through a Private Endpoint inside your own network, never touching a public route. On-premises or hybrid machines connect through Azure Arc over VPN or a dedicated circuit and feed into the same private pipeline.&lt;/P&gt;&lt;P&gt;Query access stays open, so analysts can work from anywhere without needing a VPN/Jumpbox to reach the Sentinel portal — the investigation workflow stays flexible, but the log ingestion path is fully ring-fenced. 
You can also split ingestion mode per DCE if you need some sources public and some private.&lt;/P&gt;&lt;P&gt;This is the architecture most organisations land on as their steady state.&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;UL&gt;&lt;LI&gt;Simplicity: 6/10 | Security: 8/10&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Good for:&lt;/STRONG&gt;&amp;nbsp;Organisations with mixed cloud and on-premises estates that need private ingestion without restricting analyst access&lt;/LI&gt;&lt;/UL&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;&lt;STRONG&gt;Architecture 3 — Fully Private (Maximum Control)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Infrastructure is essentially identical to Architecture 2 — AMPLS, Private Endpoints, Private DNS zones, VPN or dedicated circuit, Azure Arc for on-premises machines. The single difference:&amp;nbsp;&lt;STRONG&gt;query mode is also set to&amp;nbsp;PrivateOnly&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;Analysts can only reach Sentinel from inside the private network. VPN or Jumpbox required to access the portal. 
Both the pipe that carries logs in and the channel analysts use to read them are fully contained within the defined boundary.&lt;/P&gt;&lt;P&gt;This is the right choice when your organisation needs to demonstrate — not just claim — that security data never moves outside a defined network perimeter.&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;UL&gt;&lt;LI&gt;Simplicity: 2/10 | Security: 10/10&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Good for:&lt;/STRONG&gt;&amp;nbsp;Organisations with strict data boundary requirements (regulated industries, audit, compliance mandates)&lt;/LI&gt;&lt;/UL&gt;&lt;img /&gt;&lt;H3&gt;Quick Reference — Which Pattern Fits?&lt;/H3&gt;&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Scenario&lt;/th&gt;&lt;th&gt;Architecture&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;Getting started / low-sensitivity workloads&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;Arch 1&lt;/STRONG&gt;&amp;nbsp;— No network setup, public endpoints accepted&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Private log ingestion, analysts work anywhere&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;Arch 2&lt;/STRONG&gt;&amp;nbsp;— AMPLS&amp;nbsp;PrivateOnly&amp;nbsp;ingestion, query mode open&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Both ingestion and queries must be fully private&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;Arch 3&lt;/STRONG&gt;&amp;nbsp;— Same as Arch 2 + query mode set to&amp;nbsp;PrivateOnly&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;&lt;P&gt;&lt;STRONG&gt;One thing all three share:&lt;/STRONG&gt; Microsoft 365, Entra ID, and Defender connectors work in every pattern — they are server-side pulls by Sentinel and are not affected by your network posture.&lt;/P&gt;&lt;img /&gt;&lt;P&gt;Please feel free to reach out if you have any questions regarding the information provided.&amp;nbsp;&lt;/P&gt;</description>
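The two AMPLS settings described above map directly onto the `accessModeSettings` block of the Azure Monitor Private Link Scope resource. As a minimal sketch of "Architecture 2" (private ingestion, open queries), the following builds the ARM-style payload; the property names follow the `microsoft.insights/privateLinkScopes` schema as I understand it, but treat exact field names and API versions as assumptions to verify against the current template reference.

```python
# Sketch: ARM payload for an AMPLS implementing Architecture 2
# (ingestion PrivateOnly, query Open). Field names assumed from the
# microsoft.insights/privateLinkScopes schema -- verify before use.

def ampls_payload(ingestion_mode: str = "PrivateOnly",
                  query_mode: str = "Open") -> dict:
    """Build the resource body for an Azure Monitor Private Link Scope."""
    assert ingestion_mode in ("Open", "PrivateOnly")
    assert query_mode in ("Open", "PrivateOnly")
    return {
        "location": "global",  # AMPLS is a global resource
        "properties": {
            "accessModeSettings": {
                # Default modes applied to every connected DCE/workspace
                "ingestionAccessMode": ingestion_mode,
                "queryAccessMode": query_mode,
                # Per-resource exclusions allow splitting modes per DCE,
                # as the hybrid pattern above describes
                "exclusions": [],
            }
        },
    }

payload = ampls_payload()
```

Flipping `query_mode` to `"PrivateOnly"` is the only change needed to move from Architecture 2 to Architecture 3, which mirrors the point in the post that the two patterns share the same infrastructure.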
      <pubDate>Wed, 25 Mar 2026 23:19:39 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/your-sentinel-ama-logs-queries-are-public-by-default-ampls/m-p/4505699#M12899</guid>
      <dc:creator>veesamprabhukiran</dc:creator>
      <dc:date>2026-03-25T23:19:39Z</dc:date>
    </item>
    <item>
      <title>What caught you off guard when onboarding Sentinel to the Defender portal?</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/what-caught-you-off-guard-when-onboarding-sentinel-to-the/m-p/4505366#M12897</link>
      <description>&lt;P&gt;Following on from a previous discussion around what actually changes versus what doesn't in the Sentinel to Defender portal migration, I wanted to open a more specific conversation around the onboarding moment itself.&lt;/P&gt;&lt;P&gt;One thing I have been writing about is how much happens automatically the moment you connect your workspace. The Defender XDR connector enables on its own, a bi-directional sync starts immediately, and if your Microsoft incident creation rules are still active across Defender for Endpoint, Identity, Office 365, Cloud Apps, and Entra ID Protection, you are going to see duplicate incidents before you have had a chance to do anything about it.&lt;/P&gt;&lt;P&gt;That is one of the reasons I keep coming back to the inventory phase as the most underestimated part of this migration. Most of the painful post-migration experiences I hear about trace back to things that could have been caught in a pre-migration audit: analytics rules with incident title dependencies, automation conditions that assumed stable incident naming, RBAC gaps that only become visible when someone tries to access the data lake for the first time.&lt;/P&gt;&lt;P&gt;A few things I would genuinely love to hear from practitioners who have been through this:&lt;/P&gt;&lt;P&gt;- When you onboarded, what was the first thing that behaved unexpectedly that you had not anticipated from the documentation?&lt;/P&gt;&lt;P&gt;- For those who have reviewed automation rules post-onboarding: did you find conditions relying on incident title matching that broke, and how did you remediate them?&lt;/P&gt;&lt;P&gt;- For anyone managing access across multiple tenants: how are you currently handling the GDAP gap while Microsoft completes that capability?&lt;/P&gt;&lt;P&gt;I am writing up a detailed pre-migration inventory framework covering all four areas and the community experience here is genuinely useful for making sure the practitioner angle covers the right 
ground.&lt;/P&gt;&lt;P&gt;Happy to discuss anything above in more detail.&lt;/P&gt;</description>
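One of the inventory checks mentioned above, finding automation rules whose conditions rely on incident title matching, can be scripted against an export of your rules. This is a hedged sketch: the nesting (`triggeringLogic`, `conditionProperties`, `propertyName`) follows the `Microsoft.SecurityInsights/automationRules` schema as I recall it, so verify the shape against your own ARM export before relying on it.

```python
# Sketch: scan exported Sentinel automation rules (ARM JSON) for
# conditions matching on incident title, which can break when the
# Defender portal renames correlated incidents. Schema assumed from
# Microsoft.SecurityInsights/automationRules -- verify on your export.

def title_matching_rules(rules: list[dict]) -> list[str]:
    """Return display names of rules with an IncidentTitle condition."""
    flagged = []
    for rule in rules:
        props = rule.get("properties", {})
        conditions = props.get("triggeringLogic", {}).get("conditions", [])
        for cond in conditions:
            prop = cond.get("conditionProperties", {}).get("propertyName", "")
            if prop == "IncidentTitle":
                flagged.append(props.get("displayName", rule.get("name", "?")))
                break
    return flagged
```

Run against the JSON you export before onboarding; anything it flags is a candidate for rewriting the condition to match on alert product name or analytics rule instead of title text.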
      <pubDate>Tue, 24 Mar 2026 23:51:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/what-caught-you-off-guard-when-onboarding-sentinel-to-the/m-p/4505366#M12897</guid>
      <dc:creator>AnthonyPorter</dc:creator>
      <dc:date>2026-03-24T23:51:42Z</dc:date>
    </item>
    <item>
      <title>Sentinel datalake: private link/private endpoint</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/sentinel-datalake-private-link-private-endpoint/m-p/4504688#M12894</link>
      <description>&lt;P&gt;Has anyone already configured Sentinel Datalake with a private link/private endpoint setup? I can't find any instructions for this specific case.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can I use the wizard in the Defender XDR portal, or does it require specific configuration steps?&lt;/P&gt;&lt;P&gt;Or does it require configuring a private link/private endpoint setup on the Datalake component after activation via the wizard?&lt;/P&gt;</description>
      <pubDate>Mon, 23 Mar 2026 11:40:25 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/sentinel-datalake-private-link-private-endpoint/m-p/4504688#M12894</guid>
      <dc:creator>munterweger</dc:creator>
      <dc:date>2026-03-23T11:40:25Z</dc:date>
    </item>
    <item>
      <title>RSAC 2026: What the Sentinel Playbook Generator actually means for SOC automation</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/rsac-2026-what-the-sentinel-playbook-generator-actually-means/m-p/4504463#M12893</link>
      <description>&lt;P&gt;RSAC 2026 brought a wave of Sentinel announcements, but the one I keep coming back to is the playbook generator. Not because it's the flashiest, but because it touches something that's been a real operational pain point for years: the gap between what SOC teams need to automate and what they can realistically build and maintain.&lt;/P&gt;&lt;P&gt;I want to unpack what this actually changes from an operational perspective, because I think the implications go further than "you can now vibe-code a playbook."&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;The problem it solves&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;If you've built and maintained Logic Apps playbooks in Sentinel at any scale, you know the friction. You need a connector for every integration. If there isn't one, you're writing custom HTTP actions with authentication handling, pagination, error handling - all inside a visual designer that wasn't built for complex branching logic. Debugging is painful. Version control is an afterthought. And when something breaks at 2am, the person on call needs to understand both the Logic Apps runtime AND the security workflow to fix it.&lt;/P&gt;&lt;P&gt;The result in most environments I've seen: teams build a handful of playbooks for the obvious use cases (isolate host, disable account, post to Teams) and then stop. The long tail of automation - the enrichment workflows, the cross-tool correlation, the conditional response chains - stays manual because building it is too expensive relative to the time saved.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;What's actually different now&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;The playbook generator produces Python. Not Logic Apps JSON, not ARM templates - actual Python code with documentation and a visual flowchart. 
You describe the workflow in natural language, the system proposes a plan, asks clarifying questions, and then generates the code once you approve.&lt;/P&gt;&lt;P&gt;The Integration Profile concept is where this gets interesting. Instead of relying on predefined connectors, you define a base URL, auth method, and credentials for any service - and the generator creates dynamic API calls against it. This means you can automate against ServiceNow, Jira, Slack, your internal CMDB, or any REST API without waiting for Microsoft or a partner to ship a connector.&lt;/P&gt;&lt;P&gt;The embedded VS Code experience with plan mode and act mode is a deliberate design choice. Plan mode lets you iterate on the workflow before any code is generated. Act mode produces the implementation. You can then validate against real alerts and refine through conversation or direct code edits. This is a meaningful improvement over the "deploy and pray" cycle most of us have with Logic Apps.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Where I see the real impact&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;For environments running Sentinel at scale, the playbook generator could unlock the automation long tail I mentioned above. The workflows that were never worth the Logic Apps development effort might now be worth a 15-minute conversation with the generator. Think: enrichment chains that pull context from three different tools before deciding on a response path, or conditional escalation workflows that factor in asset criticality, time of day, and analyst availability.&lt;/P&gt;&lt;P&gt;There's also an interesting angle for teams that operate across Microsoft and non-Microsoft tooling. 
If your SOC uses Sentinel for SIEM but has Palo Alto, CrowdStrike, or other vendors in the stack, the Integration Profile approach means you can build cross-vendor response playbooks without middleware.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;The questions I'd genuinely like to hear about&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;A few things that aren't clear from the documentation and that I think matter for production use:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;STRONG&gt;Security Copilot dependency:&lt;/STRONG&gt; The prerequisites require a Security Copilot workspace with EU or US capacity. Someone in the blog comments already flagged this as a potential blocker for organizations that have Sentinel but not Security Copilot. Is this a hard requirement going forward, or will there be a path for Sentinel-only customers?&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Code lifecycle management:&lt;/STRONG&gt; The generated Python runs... where exactly? What's the execution runtime? How do you version control, test, and promote these playbooks across dev/staging/prod? Logic Apps had ARM templates and CI/CD patterns. What's the equivalent here?&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Integration Profile security:&lt;/STRONG&gt; You're storing credentials for potentially every tool in your security stack inside these profiles. What's the credential storage model? Is this backed by Key Vault? How do you rotate credentials without breaking running playbooks?&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Debugging in production:&lt;/STRONG&gt; When a generated playbook fails at 2am, what does the troubleshooting experience look like? Do you get structured logs, execution traces, retry telemetry? Or are you reading Python stack traces?&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Coexistence with Logic Apps:&lt;/STRONG&gt; Most environments won't rip and replace overnight. 
What's the intended coexistence model between generated Python playbooks and existing Logic Apps automation rules?&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;I'm genuinely optimistic about this direction. Moving from a low-code visual designer to an AI-assisted coding model with transparent, editable output feels like the right architectural bet for where SOC automation needs to go. But the operational details around lifecycle, security, and debugging will determine whether this becomes a production staple or stays a demo-only feature.&lt;/P&gt;&lt;P&gt;Would be interested to hear from anyone who's been in the preview - what's the reality like compared to the pitch?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 22 Mar 2026 12:22:31 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/rsac-2026-what-the-sentinel-playbook-generator-actually-means/m-p/4504463#M12893</guid>
      <dc:creator>Marcel_Graewer</dc:creator>
      <dc:date>2026-03-22T12:22:31Z</dc:date>
    </item>
    <item>
      <title>How Granular Delegated Admin Privileges (GDAP) allows Sentinel customers to delegate access</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-granular-delegated-admin-privileges-gdap-allows-sentinel/ba-p/4503123</link>
      <description>&lt;H4&gt;&lt;STRONG&gt;Simplifying Defender SIEM and XDR delegated access&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;As Microsoft Sentinel and Defender converge into a unified experience, organizations face a fundamental challenge: the lack of a scalable, comprehensive delegated access model that works seamlessly across Entra ID and Sentinel’s Azure Resource Manager layer, which creates a significant barrier for Managed Security Service Providers (MSSPs) and large enterprises with complex multi-tenant structures.&lt;/P&gt;
&lt;H4&gt;Extending GDAP beyond CSPs: a strategic solution&lt;/H4&gt;
&lt;P&gt;In response to these challenges, we have developed an extension to GDAP that makes it available to all Sentinel and Defender customers, including non-CSP organizations. This expansion enables both MSSPs and customers with multi-tenant organizational structures to establish secure, granular delegated access relationships directly through the Microsoft Defender portal. This is now available in public preview.&lt;/P&gt;
&lt;P&gt;The GDAP extension aligns with zero-trust security principles through a three-way handshake model requiring explicit mutual consent between governing and governed tenants before any relationship is established. This consent-based approach enhances transparency and accountability, reducing risks associated with broad, uncontrolled permissions. By integrating with Microsoft Defender, GDAP enables advanced threat detection and response capabilities across tenant boundaries while maintaining granular permission management through Entra ID roles and Unified RBAC custom permissions.&lt;/P&gt;
&lt;H4&gt;Delivering unified management of delegated access across SIEM and XDR&lt;/H4&gt;
&lt;P&gt;With GDAP, customers gain a truly unified way to manage access across both Microsoft Sentinel and Defender—using a single, consistent delegated access model for SIEM and XDR. For Sentinel customers, this brings parity with the Azure portal experience: where delegated access was previously managed through Azure Lighthouse, it can now be handled directly in the Defender portal using GDAP. More importantly, for organizations running SIEM and XDR together, GDAP eliminates the need to switch between portals—allowing teams to view, manage, and govern security access from one centralized experience. The result is simpler administration, reduced operational friction, and a more cohesive way to secure multi-tenant environments at scale.&lt;/P&gt;
&lt;H4&gt;How GDAP for non-CSPs works: the three-step handshake&lt;/H4&gt;
&lt;P&gt;The GDAP handshake model implements a security-first approach through three distinct steps, each requiring explicit approval to prevent unauthorized access. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Step 1&lt;/STRONG&gt; begins with the governed tenant initiating the relationship, allowing the governing tenant to request GDAP access.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2&lt;/STRONG&gt; shifts control to the governing tenant, which creates and sends a delegated access request with specific requested permissions through the multi-tenant organization (MTO) portal.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3&lt;/STRONG&gt; returns to the governed tenant for final approval.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;The approach provides customers with complete visibility and control over who can access their security data and with what permissions, while giving MSSPs a streamlined, Microsoft-supported mechanism for managing delegated relationships at scale.&lt;/P&gt;
&lt;P&gt;Once the handshake completes, &lt;STRONG&gt;Step 4&lt;/STRONG&gt; assigns Sentinel permissions.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;In Azure resource management, assign Sentinel workspace permissions (in the governed tenant) to the governing tenant’s security groups used in the established relationship.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Learn more here: &lt;A href="https://learn.microsoft.com/en-us/unified-secops/governance-relationships" target="_blank"&gt;Configure delegated access with governance relationships for multitenant organizations - Unified se…&lt;/A&gt;&lt;/P&gt;</description>
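The consent ordering in the handshake above can be made explicit with a minimal sketch; the states and action names are illustrative only, not an API, but they capture the property that a relationship is established only when all three steps occur in order and each tenant acts in turn.

```python
# Toy model of the three-step GDAP handshake described above.
# Action names are illustrative, not an API; the invariant modeled is
# that the relationship exists only after all three consents, in order.
HANDSHAKE = [
    ("governed", "allow_requests"),   # Step 1: governed tenant opens the door
    ("governing", "send_request"),    # Step 2: governing tenant requests permissions
    ("governed", "approve"),          # Step 3: governed tenant gives final approval
]

def run_handshake(actions) -> str:
    """Return 'established' only for the exact required sequence."""
    return "established" if list(actions) == HANDSHAKE else "rejected"
```

Any missing or out-of-order step leaves the relationship unestablished, which is the zero-trust property the post emphasizes: no single tenant can create delegated access unilaterally.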
      <pubDate>Thu, 16 Apr 2026 14:40:52 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-granular-delegated-admin-privileges-gdap-allows-sentinel/ba-p/4503123</guid>
      <dc:creator>Yossi Basha</dc:creator>
      <dc:date>2026-04-16T14:40:52Z</dc:date>
    </item>
  </channel>
</rss>

