<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Microsoft Sentinel topics</title>
    <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/bd-p/MicrosoftSentinel</link>
    <description>Microsoft Sentinel topics</description>
    <pubDate>Sun, 26 Apr 2026 05:41:06 GMT</pubDate>
    <dc:creator>MicrosoftSentinel</dc:creator>
    <dc:date>2026-04-26T05:41:06Z</dc:date>
    <item>
      <title>Sentinel RBAC in the Unified portal: who has activated Unified RBAC, and how did it go?</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/sentinel-rbac-in-the-unified-portal-who-has-activated-unified/m-p/4513181#M12923</link>
      <description>&lt;P&gt;Following the RSAC 2026 announcements last month, I have been working through the full permission picture for the Unified portal and wanted to open a discussion here given how much has shifted in a short period.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;A quick framing of where things stand. The baseline is still that Azure RBAC carries across for Sentinel SIEM access when you onboard, no changes required. But there are now two significant additions in public preview: Unified RBAC for Sentinel SIEM itself (extending the Defender Unified RBAC model to cover Sentinel directly), and a new Defender-native GDAP model for non-CSP organisations managing delegated access across tenants.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The GDAP piece in particular is worth discussing carefully, because I want to be precise about what has and has not changed. The existing limitation from Microsoft's onboarding documentation, that GDAP with Azure Lighthouse is not supported for Sentinel data in the Defender portal, has not changed. What is new is a separate, Defender-portal-native GDAP mechanism announced at RSAC, which is a different thing. These are not the same capability. If you were using Entra B2B as the interim path based on earlier guidance, that guidance was correct and that path remains the generally available option today.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;A few things I would genuinely like to hear from practitioners:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;For those who have activated Unified RBAC for a Sentinel workspace in the Defender portal: what did the migration from Azure RBAC roles look like in practice? Did the import function bring roles across cleanly, or did you find gaps particularly around custom roles?&lt;/LI&gt;&lt;LI&gt;For environments using Playbook Operator, Automation Contributor, or Workbook Contributor role assignments: how are you handling the fact those three roles are not yet in Unified RBAC and still require Azure portal management? Is the dual-management posture creating operational friction?&lt;/LI&gt;&lt;LI&gt;For MSSPs evaluating the new Defender-native GDAP model against their existing Entra B2B setup: what factors are driving the decision either way at your scale?&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Writing this up as Part 3 of the migration series and the community experience here is directly useful for making sure the practitioner angle is grounded.&lt;/P&gt;</description>
      <pubDate>Tue, 21 Apr 2026 06:04:28 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/sentinel-rbac-in-the-unified-portal-who-has-activated-unified/m-p/4513181#M12923</guid>
      <dc:creator>AnthonyPorter</dc:creator>
      <dc:date>2026-04-21T06:04:28Z</dc:date>
    </item>
    <item>
      <title>Stuck looking up a watchlist value</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/stuck-looking-up-a-watchlist-value/m-p/4507743#M12907</link>
      <description>&lt;P&gt;Hiya,&lt;/P&gt;&lt;P&gt;I get stuck working with watchlists sometimes.&lt;/P&gt;&lt;P&gt;In this example, I'm wanting to focus on account activity from a list of UPNs.&lt;/P&gt;&lt;P&gt;If I split the elements up, I get the individual results, but can't seem to pull it all together.&lt;/P&gt;&lt;P&gt;=====================================================&lt;/P&gt;&lt;P&gt;In its entirety, the query returns zero results:&lt;/P&gt;&lt;P&gt;let ServiceAccounts=(_GetWatchlist('ServiceAccounts_Monitoring'))| project SearchKey;&lt;/P&gt;&lt;P&gt;let OpName = dynamic(['Reset password (self-service)','Reset User Password','Change user password','User reset password','User started password reset','Enable Account','Change password (self-service)','Update PasswordProfile','Self-service password reset flow activity progress']);&lt;/P&gt;&lt;P&gt;AuditLogs&lt;/P&gt;&lt;P&gt;| where OperationName has_any (OpName)&lt;/P&gt;&lt;P&gt;| extend upn = TargetResources.[0].userPrincipalName&lt;/P&gt;&lt;P&gt;| where upn in (ServiceAccounts) //&amp;lt;=This is where I think I'm wrong&lt;/P&gt;&lt;P&gt;| project upn&lt;/P&gt;&lt;P&gt;=====================================================&lt;/P&gt;&lt;P&gt;This line on its own returns the user on the list:&lt;/P&gt;&lt;P&gt;let ServiceAccounts=(_GetWatchlist('ServiceAccounts_Monitoring'))| project SearchKey;&lt;/P&gt;&lt;P&gt;=====================================================&lt;/P&gt;&lt;P&gt;This section on its own returns all the activity:&lt;/P&gt;&lt;P&gt;let OpName = dynamic(['Reset password (self-service)','Reset User Password','Change user password','User reset password','User started password reset','Enable Account','Change password (self-service)','Update PasswordProfile','Self-service password reset flow activity progress']);&lt;/P&gt;&lt;P&gt;AuditLogs&lt;/P&gt;&lt;P&gt;| where OperationName has_any (OpName)&lt;/P&gt;&lt;P&gt;| extend upn = TargetResources.[0].userPrincipalName&lt;/P&gt;&lt;P&gt;| where upn contains "username" //This is the name on the watchlist - so I know the activity exists&lt;/P&gt;&lt;P&gt;====================================================&lt;/P&gt;&lt;P&gt;I'm doing something wrong when I'm trying to use the watchlist cache (I think)&lt;/P&gt;&lt;P&gt;Any help/guidance or wisdom would be greatly appreciated!&lt;/P&gt;&lt;P&gt;Many thanks&lt;/P&gt;</description>
      <pubDate>Wed, 01 Apr 2026 14:25:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/stuck-looking-up-a-watchlist-value/m-p/4507743#M12907</guid>
      <dc:creator>MrD</dc:creator>
      <dc:date>2026-04-01T14:25:42Z</dc:date>
    </item>
    <item>
      <title>Security Copilot Integration with Microsoft Sentinel - Why Automation matters now</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/security-copilot-integration-with-microsoft-sentinel-why/m-p/4507293#M12906</link>
      <description>&lt;P&gt;Security Operations Centers face a relentless challenge - the volume of security alerts far exceeds the capacity of human analysts. On average, a mid-sized SOC receives thousands of alerts per day, and analysts spend up to 80% of their time on initial triage. That means determining whether an alert is a true positive, understanding its scope, and deciding on next steps. With Microsoft Security Copilot now deeply integrated into Microsoft Sentinel, there is finally a practical path to automating the most time-consuming parts of this workflow.&lt;/P&gt;&lt;P&gt;So I decided to walk you through how to combine Security Copilot with Sentinel to build an automated incident triage pipeline - complete with KQL queries, automation rule patterns, and practical scenarios drawn from common enterprise deployments.&lt;/P&gt;&lt;P&gt;Traditional triage workflows rely on analysts manually reviewing each incident - reading alert details, correlating entities across data sources, checking threat intelligence, and making a severity assessment. This is slow, inconsistent, and does not scale.&lt;/P&gt;&lt;P&gt;Security Copilot changes this equation by providing:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;STRONG&gt;Natural language incident summarization&lt;/STRONG&gt; - turning complex, multi-alert incidents into analyst-readable narratives&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Automated entity enrichment&lt;/STRONG&gt; - pulling threat intelligence, user risk scores, and device compliance state without manual lookups&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Guided response recommendations&lt;/STRONG&gt; - suggesting containment and remediation steps based on the incident type and organizational context&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;The key insight is that Copilot does not replace analysts - it handles the repetitive first-pass triage so analysts can focus on decision-making and complex investigations.&lt;/P&gt;&lt;H2&gt;Architecture - How the Pieces Fit Together&lt;/H2&gt;&lt;P&gt;The automated triage pipeline consists of four layers:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Detection Layer - Sentinel analytics rules generate incidents from log data&lt;/LI&gt;&lt;LI&gt;Enrichment Layer - Automation rules trigger Logic Apps that call Security Copilot&lt;/LI&gt;&lt;LI&gt;Triage Layer - Copilot analyzes the incident, enriches entities, and produces a triage summary&lt;/LI&gt;&lt;LI&gt;Routing Layer - Based on Copilot's assessment, incidents are routed, re-prioritized, or auto-closed&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;(Forgive my AI-painted illustration here, but I find it a nice way to display dependencies.)&lt;/P&gt;&lt;LI-CODE lang=""&gt;+-----------------------------------------------------------+
|                    Microsoft Sentinel                     |
|                                                           |
|  Analytics Rules --&amp;gt; Incidents --&amp;gt; Automation Rules        |
|                                        |                  |
|                                        v                  |
|                              Logic App / Playbook         |
|                                        |                  |
|                                        v                  |
|                              Security Copilot API         |
|                              +-----------------+          |
|                              | Summarize       |          |
|                              | Enrich Entities |          |
|                              | Assess Risk     |          |
|                              | Recommend Action|          |
|                              +--------+--------+          |
|                                       |                   |
|                                       v                   |
|                     +-----------------------------+       |
|                     |  Update Incident            |       |
|                     |  - Add triage summary tag   |       |
|                     |  - Adjust severity          |       |
|                     |  - Assign to analyst/team   |       |
|                     |  - Auto-close false positive|       |
|                     +-----------------------------+       |
+-----------------------------------------------------------+&lt;/LI-CODE&gt;&lt;H2&gt;Step 1 - Identify High-Volume Triage Candidates&lt;/H2&gt;&lt;P&gt;Not every incident type benefits equally from automated triage. Start with alert types that are high in volume but often turn out to be false positives or low severity. Use this KQL query to identify your top candidates:&lt;/P&gt;&lt;LI-CODE lang="kusto"&gt;SecurityIncident
| where TimeGenerated &amp;gt; ago(30d)
| summarize
    TotalIncidents = count(),
    AutoClosed = countif(Classification == "FalsePositive" or Classification == "BenignPositive"),
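    // Creation-to-close duration as a rough proxy for triage effort (avg ignores rows with a null ClosedTime)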
    AvgTimeToTriageMinutes = avg(datetime_diff('minute', ClosedTime, CreatedTime))
    by Title
| extend FalsePositiveRate = round(AutoClosed * 100.0 / TotalIncidents, 1)
| where TotalIncidents &amp;gt; 10
| order by TotalIncidents desc
| take 20&lt;/LI-CODE&gt;&lt;P&gt;This query surfaces the incident types where automation will deliver the highest ROI. Based on publicly available data and community reports, the following categories consistently appear at the top:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Impossible travel alerts (high volume, around 60% false positive rate)&lt;/LI&gt;&lt;LI&gt;Suspicious sign-in activity from unfamiliar locations&lt;/LI&gt;&lt;LI&gt;Mass file download and share events&lt;/LI&gt;&lt;LI&gt;Mailbox forwarding rule creation&lt;/LI&gt;&lt;/UL&gt;&lt;H2&gt;Step 2 - Build the Copilot-Powered Triage Playbook&lt;/H2&gt;&lt;P&gt;Create a Logic App playbook that triggers on incident creation and leverages the Security Copilot connector. The core flow looks like this:&lt;/P&gt;&lt;P&gt;Trigger: Microsoft Sentinel Incident - When an incident is created&lt;/P&gt;&lt;P&gt;Action 1 - Get incident entities:&lt;/P&gt;&lt;LI-CODE lang="kusto"&gt;let incidentEntities = SecurityIncident
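// &amp;lt;IncidentNumber&amp;gt; is Logic App dynamic content rather than a literal value (see the note after this query)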
| where IncidentNumber == &amp;lt;IncidentNumber&amp;gt;
| mv-expand AlertIds to typeof(string) // cast so the join key matches SystemAlertId's string type
| join kind=inner (SecurityAlert | extend AlertId = SystemAlertId) on $left.AlertIds == $right.AlertId
| mv-expand Entities
| extend EntityData = parse_json(Entities)
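// coalesce() keeps the first non-empty identifier, since each entity type exposes different fields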
| project EntityType = tostring(EntityData.Type),
          EntityValue = coalesce(
              tostring(EntityData.HostName),
              tostring(EntityData.Address),
              tostring(EntityData.Name),
              tostring(EntityData.DnsDomain)
          );
incidentEntities&lt;/LI-CODE&gt;&lt;P&gt;Note: The &amp;lt;IncidentNumber&amp;gt; placeholder above is a Logic App dynamic content variable. When building your playbook, select the incident number from the trigger output rather than hardcoding a value.&lt;/P&gt;&lt;P&gt;Action 2 - Copilot prompt session:&lt;/P&gt;&lt;P&gt;Send a structured prompt to Security Copilot that requests:&lt;/P&gt;&lt;LI-CODE lang=""&gt;Analyze this Microsoft Sentinel incident and provide a triage assessment:

Incident Title: {IncidentTitle}
Severity: {Severity}
Description: {Description}
Entities involved: {EntityList}
Alert count: {AlertCount}

Please provide:
1. A concise summary of what happened (2-3 sentences)
2. Entity risk assessment for each IP, user, and host
3. Whether this appears to be a true positive, benign positive, or false positive
4. Recommended next steps
5. Suggested severity adjustment (if any)&lt;/LI-CODE&gt;&lt;P&gt;Action 3 - Parse and route:&lt;/P&gt;&lt;P&gt;Use the Copilot response to update the incident. The Logic App parses the structured output and:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Adds the triage summary as an incident comment&lt;/LI&gt;&lt;LI&gt;Tags the incident with copilot-triaged&lt;/LI&gt;&lt;LI&gt;Adjusts severity if Copilot recommends it&lt;/LI&gt;&lt;LI&gt;Routes to the appropriate analyst tier based on the assessment&lt;/LI&gt;&lt;/UL&gt;&lt;H2&gt;Step 3 - Enrich with Contextual KQL Lookups&lt;/H2&gt;&lt;P&gt;Security Copilot's assessment improves dramatically when you feed it contextual data. Before sending the prompt, enrich the incident with organization-specific signals:&lt;/P&gt;&lt;LI-CODE lang="kusto"&gt;// Check if the user has a history of similar alerts (repeat offender vs. first time)
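// Hypothetical placeholder: the playbook substitutes &amp;lt;UserPrincipalName&amp;gt; below from the incident's account entity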
let userAlertHistory = SecurityAlert
| where TimeGenerated &amp;gt; ago(90d)
| mv-expand Entities
| extend EntityData = parse_json(Entities)
| where EntityData.Type == "account"
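// Note: == is case-sensitive in KQL; use =~ instead if UPN casing varies across alert providers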
| where tostring(EntityData.Name) == "&amp;lt;UserPrincipalName&amp;gt;"
| summarize
    PriorAlertCount = count(),
    DistinctAlertTypes = dcount(AlertName),
    LastAlertTime = max(TimeGenerated)
| extend IsRepeatOffender = PriorAlertCount &amp;gt; 5;
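// Return the one-row summary so the playbook can embed these fields in the Copilot prompt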
userAlertHistory&lt;/LI-CODE&gt;&lt;LI-CODE lang="kusto"&gt;// Check user risk level from Entra ID Protection
AADUserRiskEvents
| where TimeGenerated &amp;gt; ago(7d)
| where UserPrincipalName == "&amp;lt;UserPrincipalName&amp;gt;"
| summarize
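    // arg_max keeps the RiskLevel from the most recent risk event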
    arg_max(TimeGenerated, RiskLevel),
    RecentRiskEvents = count()
| project RiskLevel, RecentRiskEvents&lt;/LI-CODE&gt;&lt;P&gt;Including this context in the Copilot prompt transforms generic assessments into organization-aware triage decisions. A "suspicious sign-in" for a user who travels internationally every week is very different from the same alert for a user who has never left their home country.&lt;/P&gt;&lt;H2&gt;Step 4 - Implement Feedback Loops&lt;/H2&gt;&lt;P&gt;Automated triage is only as good as its accuracy over time. Build a feedback mechanism by tracking Copilot's assessments against analyst final classifications:&lt;/P&gt;&lt;LI-CODE lang="kusto"&gt;SecurityIncident
| where Labels has "copilot-triaged" // incident tags are stored in the Labels column of SecurityIncident
| where TimeGenerated &amp;gt; ago(30d)
| where Classification != ""
| mv-expand Comments
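// Pull out the structured "Assessment: ..." token the playbook writes into each comment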
| extend CopilotAssessment = extract("Assessment: (True Positive|False Positive|Benign Positive)", 1, tostring(Comments))
| where isnotempty(CopilotAssessment)
| summarize
    Total = dcount(IncidentNumber),
    Correct = dcountif(IncidentNumber,
        (CopilotAssessment == "False Positive" and Classification == "FalsePositive") or
        (CopilotAssessment == "True Positive" and Classification == "TruePositive") or
        (CopilotAssessment == "Benign Positive" and Classification == "BenignPositive")
    )
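    // Weekly buckets make the accuracy trend visible as prompts are tuned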
    by bin(TimeGenerated, 7d)
| extend AccuracyPercent = round(Correct * 100.0 / Total, 1)
| order by TimeGenerated asc&lt;/LI-CODE&gt;&lt;P&gt;For this query to work reliably, the automation playbook must write the assessment in a consistent format within the incident comments. Use a structured prefix such as Assessment: True Positive so the regex extraction remains stable.&lt;/P&gt;&lt;P&gt;According to Microsoft's published benchmarks and community feedback, Copilot-assisted triage typically achieves 85-92% agreement with senior analyst classifications after prompt tuning - significantly reducing the manual triage burden.&lt;/P&gt;&lt;H2&gt;A Note on Licensing and Compute Units&lt;/H2&gt;&lt;P&gt;Security Copilot is licensed through Security Compute Units (SCUs), which are provisioned in Azure. Each prompt session consumes SCUs based on the complexity of the request. For automated triage at scale, plan your SCU capacity carefully - high-volume playbooks can accumulate significant usage. Start with a conservative allocation, monitor consumption through the Security Copilot usage dashboard, and scale up as you validate ROI. Microsoft provides detailed guidance on SCU sizing in the official Security Copilot documentation.&lt;/P&gt;&lt;H2&gt;Example Scenario - Impossible Travel at Scale&lt;/H2&gt;&lt;P&gt;Consider a typical enterprise that generates over 200 impossible travel alerts per week. The SOC team spends roughly 15 hours weekly just triaging these. Here is how automated triage addresses this:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Detection - Sentinel's built-in impossible travel analytics rule flags the incidents&lt;/LI&gt;&lt;LI&gt;Enrichment - The playbook pulls each user's typical travel patterns from sign-in logs over the past 90 days, VPN usage, and whether the "impossible" location matches any known corporate office or VPN egress point&lt;/LI&gt;&lt;LI&gt;Copilot Analysis - Security Copilot receives the enriched context and classifies each incident&lt;/LI&gt;&lt;LI&gt;Expected Result - Based on common deployment patterns, around 70-75% of impossible travel incidents are auto-closed as benign (VPN, known travel patterns), roughly 20% are downgraded to informational with a triage note, and only about 5% are escalated to analysts as genuine suspicious activity&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;This type of automation can reclaim over 10 hours per week - time that analysts can redirect to proactive threat hunting.&lt;/P&gt;&lt;H2&gt;Getting Started - Practical Recommendations&lt;/H2&gt;&lt;P&gt;For teams ready to implement automated triage with Security Copilot and Sentinel, here is a recommended approach:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Start small. Pick one high-volume, high-false-positive incident type. Do not try to automate everything at once.&lt;/LI&gt;&lt;LI&gt;Run in shadow mode first. Have the playbook add triage comments but do not auto-close or re-route. Let analysts compare Copilot's assessment with their own for two to four weeks.&lt;/LI&gt;&lt;LI&gt;Tune your prompts. Generic prompts produce generic results. Include organization-specific context - naming conventions, known infrastructure, typical user behavior patterns.&lt;/LI&gt;&lt;LI&gt;Monitor accuracy continuously. Use the feedback loop KQL above. If accuracy drops below 80%, pause automation and investigate.&lt;/LI&gt;&lt;LI&gt;Maintain human oversight. Even at 90%+ accuracy, keep a human review step for high-severity incidents. 
Automation handles volume - analysts handle judgment.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;The combination of Security Copilot and Microsoft Sentinel represents a genuine step forward for SOC efficiency. By automating the initial triage pass - summarizing incidents, enriching entities, and providing classification recommendations - analysts are freed to focus on what humans do best: making nuanced security decisions under uncertainty.&lt;/P&gt;&lt;P&gt;Feel free to like and/or connect :)&lt;/P&gt;</description>
      <pubDate>Tue, 31 Mar 2026 13:09:01 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/security-copilot-integration-with-microsoft-sentinel-why/m-p/4507293#M12906</guid>
      <dc:creator>Marcel_Graewer</dc:creator>
      <dc:date>2026-03-31T13:09:01Z</dc:date>
    </item>
    <item>
      <title>Webinar Cancellation</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/webinar-cancellation/m-p/4507045#M12905</link>
      <description>
&lt;P&gt;Hi everyone!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The webinar originally scheduled for April 14th on "Using distributed content to manage your multi-tenant SecOps" has unfortunately been cancelled for now. We apologize for the inconvenience and hope to reschedule it in the future.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please find other available webinars at: &lt;A class="lia-external-url" href="http://aka.ms/securitycommunity" target="_blank"&gt;http://aka.ms/securitycommunity&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;All the best,&lt;/P&gt;
&lt;P&gt;The Microsoft Security Community Team&lt;/P&gt;</description>
      <pubDate>Mon, 30 Mar 2026 21:40:40 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/webinar-cancellation/m-p/4507045#M12905</guid>
      <dc:creator>emilyfalla</dc:creator>
      <dc:date>2026-03-30T21:40:40Z</dc:date>
    </item>
    <item>
      <title>Your Sentinel AMA Logs &amp; Queries Are Public by Default — AMPLS Architectures to Fix That</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/your-sentinel-ama-logs-queries-are-public-by-default-ampls/m-p/4505699#M12899</link>
      <description>&lt;P&gt;When you deploy Microsoft Sentinel, security log ingestion travels over public Azure Data Collection Endpoints by default. The connection is encrypted, and the data arrives correctly — but the endpoint is publicly reachable, and so is the workspace itself, queryable from any browser on any network.&lt;/P&gt;&lt;P&gt;For many organisations, that trade-off is fine. For others — regulated industries, healthcare, financial services, critical infrastructure — it is the exact problem they need to solve.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Azure Monitor Private Link Scope (AMPLS)&lt;/STRONG&gt;&amp;nbsp;is how you solve it.&lt;/P&gt;&lt;H3&gt;What AMPLS Actually Does&lt;/H3&gt;&lt;P&gt;AMPLS is a single Azure resource that wraps your monitoring pipeline and controls two settings:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;STRONG&gt;Where logs are allowed to go&lt;/STRONG&gt;&amp;nbsp;(ingestion mode:&amp;nbsp;Open&amp;nbsp;or&amp;nbsp;PrivateOnly)&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Where analysts are allowed to query from&lt;/STRONG&gt;&amp;nbsp;(query mode:&amp;nbsp;Open&amp;nbsp;or&amp;nbsp;PrivateOnly)&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Change those two settings and you fundamentally change the security posture — not as a policy recommendation, but as a&amp;nbsp;&lt;STRONG&gt;hard platform enforcement&lt;/STRONG&gt;. Set ingestion to&amp;nbsp;PrivateOnly&amp;nbsp;and the public endpoint stops working. It does not fall back gracefully. It returns an error. That is the point.&lt;/P&gt;&lt;P&gt;It is not a firewall rule someone can bypass or a policy someone can override. Control is baked in at the infrastructure level.&lt;/P&gt;&lt;H3&gt;Three Patterns — One Spectrum&lt;/H3&gt;&lt;P&gt;There is no universally correct answer. The right architecture depends on your organisation's risk appetite, existing network infrastructure, and how much operational complexity your team can realistically manage. These three patterns cover the full range:&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;&lt;STRONG&gt;Architecture 1 — Open / Public (Basic)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;No AMPLS. Logs travel to public Data Collection Endpoints over the internet. The workspace is open to queries from anywhere. This is the default — operational in minutes with zero network setup.&lt;/P&gt;&lt;P&gt;Cloud service connectors (Microsoft 365, Defender, third-party) work immediately because they are server-side/API/Graph pulls and are unaffected by AMPLS. Azure Monitor Agents and Azure Arc agents handle ingestion from cloud or on-prem machines via public network.&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;UL&gt;&lt;LI&gt;Simplicity: 9/10 | Security: 6/10&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Good for:&lt;/STRONG&gt;&amp;nbsp;Dev environments, teams getting started, low-sensitivity workloads&lt;/LI&gt;&lt;/UL&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;&lt;STRONG&gt;Architecture 2 — Hybrid: Private Ingestion, Open Queries (Recommended for most)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;AMPLS is in place. Ingestion is locked to&amp;nbsp;PrivateOnly&amp;nbsp;— logs from virtual machines travel through a Private Endpoint inside your own network, never touching a public route. On-premises or hybrid machines connect through Azure Arc over VPN or a dedicated circuit and feed into the same private pipeline.&lt;/P&gt;&lt;P&gt;Query access stays open, so analysts can work from anywhere without needing a VPN/Jumpbox to reach the Sentinel portal — the investigation workflow stays flexible, but the log ingestion path is fully ring-fenced. 
You can also split ingestion mode per DCE if you need some sources public and some private.&lt;/P&gt;&lt;P&gt;This is the architecture most organisations land on as their steady state.&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;UL&gt;&lt;LI&gt;Simplicity: 6/10 | Security: 8/10&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Good for:&lt;/STRONG&gt;&amp;nbsp;Organisations with mixed cloud and on-premises estates that need private ingestion without restricting analyst access&lt;/LI&gt;&lt;/UL&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;&lt;STRONG&gt;Architecture 3 — Fully Private (Maximum Control)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Infrastructure is essentially identical to Architecture 2 — AMPLS, Private Endpoints, Private DNS zones, VPN or dedicated circuit, Azure Arc for on-premises machines. The single difference:&amp;nbsp;&lt;STRONG&gt;query mode is also set to&amp;nbsp;PrivateOnly&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;Analysts can only reach Sentinel from inside the private network. VPN or Jumpbox required to access the portal. Both the pipe that carries logs in and the channel analysts use to read them are fully contained within the defined boundary.&lt;/P&gt;&lt;P&gt;This is the right choice when your organisation needs to demonstrate — not just claim — that security data never moves outside a defined network perimeter.&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;UL&gt;&lt;LI&gt;Simplicity: 2/10 | Security: 10/10&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Good for:&lt;/STRONG&gt;&amp;nbsp;Organisations with strict data boundary requirements (regulated industries, audit, compliance mandates)&lt;/LI&gt;&lt;/UL&gt;&lt;H3&gt;Quick Reference — Which Pattern Fits?&lt;/H3&gt;&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Scenario&lt;/th&gt;&lt;th&gt;Architecture&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;Getting started / low-sensitivity workloads&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;Arch 1&lt;/STRONG&gt;&amp;nbsp;— No network setup, public endpoints accepted&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Private log ingestion, analysts work anywhere&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;Arch 2&lt;/STRONG&gt;&amp;nbsp;— AMPLS&amp;nbsp;PrivateOnly&amp;nbsp;ingestion, query mode open&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Both ingestion and queries must be fully private&lt;/td&gt;&lt;td&gt;&lt;STRONG&gt;Arch 3&lt;/STRONG&gt;&amp;nbsp;— Same as Arch 2 + query mode set to&amp;nbsp;PrivateOnly&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;&lt;P&gt;&lt;STRONG&gt;One thing all three share:&lt;/STRONG&gt; Microsoft 365, Entra ID, and Defender connectors work in every pattern — they are server-side pulls by Sentinel and are not affected by your network posture.&lt;/P&gt;&lt;P&gt;Please feel free to reach out if you have any questions regarding the information provided.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 25 Mar 2026 23:19:39 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/your-sentinel-ama-logs-queries-are-public-by-default-ampls/m-p/4505699#M12899</guid>
      <dc:creator>veesamprabhukiran</dc:creator>
      <dc:date>2026-03-25T23:19:39Z</dc:date>
    </item>
    <item>
      <title>Sentinel datalake: private link/private endpoint</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/sentinel-datalake-private-link-private-endpoint/m-p/4504688#M12894</link>
      <description>&lt;P&gt;Has anyone already configured Sentinel Datalake with a private link/private endpoint setup? I can't find any instructions for this specific case.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can I use the wizard in the Defender XDR portal, or does it require specific configuration steps?&lt;/P&gt;&lt;P&gt;Or does it require configuring a private link/private endpoint setup on the Datalake component after activation via the wizard?&lt;/P&gt;</description>
      <pubDate>Mon, 23 Mar 2026 11:40:25 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/sentinel-datalake-private-link-private-endpoint/m-p/4504688#M12894</guid>
      <dc:creator>munterweger</dc:creator>
      <dc:date>2026-03-23T11:40:25Z</dc:date>
    </item>
    <item>
      <title>RSAC 2026: What the Sentinel Playbook Generator actually means for SOC automation</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/rsac-2026-what-the-sentinel-playbook-generator-actually-means/m-p/4504463#M12893</link>
      <description>&lt;P&gt;RSAC 2026 brought a wave of Sentinel announcements, but the one I keep coming back to is the playbook generator. Not because it's the flashiest, but because it touches something that's been a real operational pain point for years: the gap between what SOC teams need to automate and what they can realistically build and maintain.&lt;/P&gt;&lt;P&gt;I want to unpack what this actually changes from an operational perspective, because I think the implications go further than "you can now vibe-code a playbook."&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;The problem it solves&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;If you've built and maintained Logic Apps playbooks in Sentinel at any scale, you know the friction. You need a connector for every integration. If there isn't one, you're writing custom HTTP actions with authentication handling, pagination, error handling - all inside a visual designer that wasn't built for complex branching logic. Debugging is painful. Version control is an afterthought. And when something breaks at 2am, the person on call needs to understand both the Logic Apps runtime AND the security workflow to fix it.&lt;/P&gt;&lt;P&gt;The result in most environments I've seen: teams build a handful of playbooks for the obvious use cases (isolate host, disable account, post to Teams) and then stop. The long tail of automation - the enrichment workflows, the cross-tool correlation, the conditional response chains - stays manual because building it is too expensive relative to the time saved.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;What's actually different now&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;The playbook generator produces Python. Not Logic Apps JSON, not ARM templates - actual Python code with documentation and a visual flowchart. You describe the workflow in natural language, the system proposes a plan, asks clarifying questions, and then generates the code once you approve.&lt;/P&gt;&lt;P&gt;The Integration Profile concept is where this gets interesting. Instead of relying on predefined connectors, you define a base URL, auth method, and credentials for any service - and the generator creates dynamic API calls against it. This means you can automate against ServiceNow, Jira, Slack, your internal CMDB, or any REST API without waiting for Microsoft or a partner to ship a connector.&lt;/P&gt;&lt;P&gt;The embedded VS Code experience with plan mode and act mode is a deliberate design choice. Plan mode lets you iterate on the workflow before any code is generated. Act mode produces the implementation. You can then validate against real alerts and refine through conversation or direct code edits. This is a meaningful improvement over the "deploy and pray" cycle most of us have with Logic Apps.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Where I see the real impact&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;For environments running Sentinel at scale, the playbook generator could unlock the automation long tail I mentioned above. The workflows that were never worth the Logic Apps development effort might now be worth a 15-minute conversation with the generator. Think: enrichment chains that pull context from three different tools before deciding on a response path, or conditional escalation workflows that factor in asset criticality, time of day, and analyst availability.&lt;/P&gt;&lt;P&gt;There's also an interesting angle for teams that operate across Microsoft and non-Microsoft tooling. 
If your SOC uses Sentinel for SIEM but has Palo Alto, CrowdStrike, or other vendors in the stack, the Integration Profile approach means you can build cross-vendor response playbooks without middleware.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;The questions I'd genuinely like to hear about&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;A few things that aren't clear from the documentation and that I think matter for production use:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;STRONG&gt;Security Copilot dependency:&lt;/STRONG&gt; The prerequisites require a Security Copilot workspace with EU or US capacity. Someone in the blog comments already flagged this as a potential blocker for organizations that have Sentinel but not Security Copilot. Is this a hard requirement going forward, or will there be a path for Sentinel-only customers?&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Code lifecycle management:&lt;/STRONG&gt; The generated Python runs... where exactly? What's the execution runtime? How do you version control, test, and promote these playbooks across dev/staging/prod? Logic Apps had ARM templates and CI/CD patterns. What's the equivalent here?&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Integration Profile security:&lt;/STRONG&gt; You're storing credentials for potentially every tool in your security stack inside these profiles. What's the credential storage model? Is this backed by Key Vault? How do you rotate credentials without breaking running playbooks?&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Debugging in production:&lt;/STRONG&gt; When a generated playbook fails at 2am, what does the troubleshooting experience look like? Do you get structured logs, execution traces, retry telemetry? Or are you reading Python stack traces?&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Coexistence with Logic Apps:&lt;/STRONG&gt; Most environments won't rip and replace overnight. What's the intended coexistence model between generated Python playbooks and existing Logic Apps automation rules?&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;I'm genuinely optimistic about this direction. Moving from a low-code visual designer to an AI-assisted coding model with transparent, editable output feels like the right architectural bet for where SOC automation needs to go. But the operational details around lifecycle, security, and debugging will determine whether this becomes a production staple or stays a demo-only feature.&lt;/P&gt;&lt;P&gt;Would be interested to hear from anyone who's been in the preview - what's the reality like compared to the pitch?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 22 Mar 2026 12:22:31 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/rsac-2026-what-the-sentinel-playbook-generator-actually-means/m-p/4504463#M12893</guid>
      <dc:creator>Marcel_Graewer</dc:creator>
      <dc:date>2026-03-22T12:22:31Z</dc:date>
    </item>
    <item>
      <title>Ingest Microsoft XDR Advanced Hunting Data into Microsoft Sentinel</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/ingest-microsoft-xdr-advanced-hunting-data-into-microsoft/m-p/4503622#M12889</link>
      <description>&lt;P&gt;I had difficulty finding a guide that can query Microsoft Defender vulnerability management Advanced Hunting tables in Microsoft Sentinel for alerting and automation. As a result, I put together this guide to demonstrate how to ingest Microsoft XDR Advanced Hunting query results into Microsoft Sentinel using Azure Logic Apps and System‑Assigned Managed Identity.&lt;/P&gt;&lt;P&gt;The solution allows you to:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Run Advanced Hunting queries on a schedule&lt;/LI&gt;&lt;LI&gt;Collect high‑risk vulnerability data (or other hunting results)&lt;/LI&gt;&lt;LI&gt;Send the results to a Sentinel workspace as custom logs&lt;/LI&gt;&lt;LI&gt;Create alerts and automation rules based on this data&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;This approach avoids credential storage and follows least privilege and managed identity best practices.&lt;/P&gt;&lt;P&gt;Prerequisites&lt;/P&gt;&lt;P&gt;Before you begin, ensure you have:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Microsoft Defender XDR access&lt;/LI&gt;&lt;LI&gt;Microsoft Sentinel deployed&lt;/LI&gt;&lt;LI&gt;Azure Logic Apps permission&lt;/LI&gt;&lt;LI&gt;Application Administrator or higher in Microsoft Entra ID&lt;/LI&gt;&lt;LI&gt;PowerShell with Az modules installed&lt;/LI&gt;&lt;LI&gt;Contributor access to the Sentinel workspace&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Architecture at a Glance&lt;/P&gt;&lt;P&gt;Logic App (Managed Identity)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; ↓&lt;/P&gt;&lt;P&gt;Microsoft XDR Advanced Hunting API&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; ↓&lt;/P&gt;&lt;P&gt;Logic App&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; ↓&lt;/P&gt;&lt;P&gt;Log Analytics Data Collector API&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; ↓&lt;/P&gt;&lt;P&gt;Microsoft Sentinel (Custom Log)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Step 1: Create a Logic App&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;In the Azure Portal, go to Logic Apps&lt;/LI&gt;&lt;LI&gt;Create a new Consumption Logic App&lt;/LI&gt;&lt;LI&gt;Choose the appropriate:&lt;/LI&gt;&lt;UL&gt;&lt;LI&gt;Subscription&lt;/LI&gt;&lt;LI&gt;Resource Group&lt;/LI&gt;&lt;LI&gt;Region&lt;/LI&gt;&lt;/UL&gt;&lt;/OL&gt;&lt;P&gt;Step 2: Enable System‑Assigned Managed Identity&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Open the Logic App&lt;/LI&gt;&lt;LI&gt;Navigate to Settings → Identity&lt;/LI&gt;&lt;LI&gt;Enable System‑assigned managed identity&lt;/LI&gt;&lt;LI&gt;Click Save&lt;/LI&gt;&lt;LI&gt;Note the Object ID&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;img /&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This identity will later be granted permission to run Advanced Hunting queries.&lt;/P&gt;&lt;P&gt;Step 3: Locate the Logic App in Entra ID&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Go to Microsoft Entra ID → Enterprise Applications&lt;/LI&gt;&lt;LI&gt;Change filter to All Applications&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;img /&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Search for your Logic App name&lt;/LI&gt;&lt;LI&gt;Select the app to confirm it exists&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;Step 4: Grant Advanced Hunting Permissions (PowerShell)&lt;/P&gt;&lt;P&gt;Advanced Hunting permissions cannot be assigned via the portal and must be done using PowerShell.&lt;/P&gt;&lt;P&gt;Required Permission&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;AdvancedQuery.Read.All&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;PowerShell Script&lt;/P&gt;&lt;LI-CODE lang="powershell"&gt;# Your tenant ID (in the Azure portal, under Azure Active Directory &amp;gt; Overview).
$TenantID = "Your TenantID"
Connect-AzAccount -TenantId $TenantID
# Object ID of the Logic App's system-assigned managed identity (noted in Step 2).
$spID = "Your Managed Identity Object ID"
# Get the service principal of the WindowsDefenderATP application by its well-known AppId.
$GraphServicePrincipal = Get-AzADServicePrincipal -Filter "AppId eq 'fc780465-2017-40d4-a0c5-307022471b92'"
# Extract the AdvancedQuery.Read.All app role ID.
$AppRole = $GraphServicePrincipal.AppRole | `
Where-Object {$_.Value -eq "AdvancedQuery.Read.All"}
# If $AppRole.Id comes up blank, it can be replaced with 93489bf5-0fbc-4f2d-b901-33f2fe08ff05
# Assign the app role so the managed identity can run Advanced Hunting queries
New-AzADServicePrincipalAppRoleAssignment -ServicePrincipalId $spID -ResourceId $GraphServicePrincipal.Id -AppRoleId $AppRole.Id
# Or
New-AzADServicePrincipalAppRoleAssignment -ServicePrincipalId $spID -ResourceId $GraphServicePrincipal.Id -AppRoleId 93489bf5-0fbc-4f2d-b901-33f2fe08ff05&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After successful execution, verify the permission under Enterprise Applications → Permissions.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Step 5: Build the Logic App Workflow&lt;/P&gt;&lt;P&gt;Open Logic App Designer and create the following flow:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Trigger&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Recurrence (e.g., every 24 hours)&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Run Advanced Hunting Query&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Connector: Microsoft Defender ATP&lt;/LI&gt;&lt;LI&gt;Authentication: System‑Assigned Managed Identity&lt;/LI&gt;&lt;LI&gt;Action: Run Advanced Hunting Query&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Sample KQL Query (High‑Risk Vulnerabilities)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Send Data to Log Analytics (Sentinel)&lt;/P&gt;&lt;P&gt;On Send Data, create a new connection and provide the workspace information where the Sentinel log exists. Obtaining the Workspace Key is not straightforward; retrieve it using the following PowerShell command.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="powershell"&gt;Get-AzOperationalInsightsWorkspaceSharedKey `
-ResourceGroupName "&amp;lt;ResourceGroupName&amp;gt;" `
-Name "&amp;lt;WorkspaceName&amp;gt;"&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Configuration Details&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Workspace ID&lt;/LI&gt;&lt;LI&gt;Primary key&lt;/LI&gt;&lt;LI&gt;Log Type (example): XDRVulnerability_CL&lt;/LI&gt;&lt;LI&gt;Request body: Results array from Advanced Hunting&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;img /&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;img /&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Step 6: Run the Logic app to return results&lt;/P&gt;&lt;P&gt;In the logic app designer select run,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;img /&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If the run is successful data will be sent to sentinel workspace.&lt;/P&gt;&lt;P&gt;Step 7: Validate Data in Microsoft Sentinel&lt;/P&gt;&lt;P&gt;In Sentinel, run the query:&lt;/P&gt;&lt;LI-CODE lang="kusto"&gt;XDRVulnerability_CL
| where TimeGenerated &amp;gt; ago(24h)&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If data appears, ingestion is successful.&lt;/P&gt;&lt;P&gt;Step 8: Create Alerts &amp;amp; Automation Rules&lt;/P&gt;&lt;P&gt;Use Sentinel to:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Create analytics rules for:&lt;/LI&gt;&lt;UL&gt;&lt;LI&gt;CVSS &amp;gt; 9&lt;/LI&gt;&lt;LI&gt;Exploit available&lt;/LI&gt;&lt;LI&gt;New vulnerabilities in last 24 hours&lt;/LI&gt;&lt;/UL&gt;&lt;LI&gt;Trigger:&lt;/LI&gt;&lt;UL&gt;&lt;LI&gt;Email notifications&lt;/LI&gt;&lt;LI&gt;Incident creation&lt;/LI&gt;&lt;LI&gt;SOAR playbooks&lt;/LI&gt;&lt;/UL&gt;&lt;/UL&gt;&lt;P&gt;Conclusion&lt;/P&gt;&lt;P&gt;By combining Logic Apps, Managed Identities, Microsoft XDR, and Microsoft Sentinel, you can create a powerful, secure, and scalable pipeline for ingesting hunting intelligence and triggering proactive detections.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 18 Mar 2026 23:38:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/ingest-microsoft-xdr-advanced-hunting-data-into-microsoft/m-p/4503622#M12889</guid>
      <dc:creator>muraly005</dc:creator>
      <dc:date>2026-03-18T23:38:49Z</dc:date>
    </item>
    <item>
      <title>Clarification on UEBA Behaviors Layer Support for Zscaler and Fortinet Logs</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/clarification-on-ueba-behaviors-layer-support-for-zscaler-and/m-p/4496720#M12879</link>
      <description>&lt;P&gt;I would like to confirm whether the new UEBA Behaviors Layer in Microsoft Sentinel currently supports generating behavior insights for Zscaler and Fortinet log sources.&amp;nbsp;&lt;/P&gt;&lt;P&gt;Based on the documentation, the preview version of the Behaviors Layer only supports specific vendors under CommonSecurityLog (CyberArk Vault and Palo Alto Threats), AWS CloudTrail services, and GCP Audit Logs. Since Zscaler and Fortinet are not listed among the supported vendors, I want to verify:&lt;/P&gt;&lt;P&gt;Does the UEBA Behaviors Layer generate behavior records for Zscaler and Fortinet logs, or are these vendors currently unsupported for behavior generation? I ask because Zscaler and Fortinet logs are also ingested into the CommonSecurityLog table.&lt;/P&gt;</description>
      <pubDate>Tue, 24 Feb 2026 16:54:36 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/clarification-on-ueba-behaviors-layer-support-for-zscaler-and/m-p/4496720#M12879</guid>
      <dc:creator>RohitN026</dc:creator>
      <dc:date>2026-02-24T16:54:36Z</dc:date>
    </item>
    <item>
      <title>McasShadowItReporting / Cloud Discovery in Azure Sentinel</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/mcasshadowitreporting-cloud-discovery-in-azure-sentinel/m-p/4495351#M12871</link>
      <description>&lt;P&gt;Hi!&lt;BR /&gt;&lt;BR /&gt;I'm trying to query the&amp;nbsp;McasShadowItReporting table for Cloud App discovery events.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;The table is empty at the moment, and the connector warns me that the workspace is onboarded to the Unified Security Operations Platform,&lt;BR /&gt;so I can't activate it here.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I can't manage it via &lt;A class="lia-external-url" href="https://security.microsoft.com/" target="_blank"&gt;https://security.microsoft.com/&lt;/A&gt; either.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;The documentation ( &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/defender-cloud-apps/siem-sentinel#integrating-with-microsoft-sentinel" target="_blank"&gt;https://learn.microsoft.com/en-us/defender-cloud-apps/siem-sentinel#integrating-with-microsoft-sentinel&lt;/A&gt; )&amp;nbsp;&lt;/P&gt;&lt;P&gt;leads me to the SIEM integration, which has been configured for a while.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;I wonder if something is misconfigured here, why there is no log ingress, and how I can query these logs.&amp;nbsp;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 17 Feb 2026 09:48:47 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/mcasshadowitreporting-cloud-discovery-in-azure-sentinel/m-p/4495351#M12871</guid>
      <dc:creator>Felix87</dc:creator>
      <dc:date>2026-02-17T09:48:47Z</dc:date>
    </item>
    <item>
      <title>CrowdStrike API Data Connector (via Codeless Connector Framework) (Preview)</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/crowdstrike-api-data-connector-via-codeless-connector-framework/m-p/4494852#M12870</link>
      <description>&lt;P&gt;API scopes created and added to the connector; however, the only streams observed are from Alerts and Hosts.&amp;nbsp;&lt;/P&gt;&lt;P&gt;Detections are not logging. Anyone experiencing this issue? GitHub has a post about it that appears to have been escalated as a feature request.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;CrowdStrikeDetections not ingested&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Anyone have this setup and working?&lt;/P&gt;</description>
      <pubDate>Fri, 13 Feb 2026 17:04:32 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/crowdstrike-api-data-connector-via-codeless-connector-framework/m-p/4494852#M12870</guid>
      <dc:creator>logger2115</dc:creator>
      <dc:date>2026-02-13T17:04:32Z</dc:date>
    </item>
    <item>
      <title>Salesforce Service Cloud (via Codeless Connector Framework)</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/salesforce-service-cloud-via-codeless-connector-framework/m-p/4494850#M12869</link>
      <description>&lt;P&gt;We have 3 environments. Does this connector support multiple tenants, or is it limited to only one FQDN?&lt;/P&gt;</description>
      <pubDate>Fri, 13 Feb 2026 16:53:06 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/salesforce-service-cloud-via-codeless-connector-framework/m-p/4494850#M12869</guid>
      <dc:creator>logger2115</dc:creator>
      <dc:date>2026-02-13T16:53:06Z</dc:date>
    </item>
    <item>
      <title>Dedicated cluster for Sentinels in different tenants</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/dedicated-cluster-for-sentinels-in-different-tenants/m-p/4494529#M12868</link>
      <description>&lt;P&gt;Hello&lt;BR /&gt;&lt;BR /&gt;I see that there is a possibility to use a dedicated cluster for a workspace in the same Azure region. What about workspaces that reside in different tenants but are in the same Azure region? Is that possible?&lt;/P&gt;&lt;P&gt;We are utilizing multiple tenants, and we want to keep this operational model. However, there is a central SOC, and we wonder if there is a possibility to utilize a dedicated cluster for cost optimization.&lt;/P&gt;</description>
      <pubDate>Thu, 12 Feb 2026 08:47:30 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/dedicated-cluster-for-sentinels-in-different-tenants/m-p/4494529#M12868</guid>
      <dc:creator>de3no2</dc:creator>
      <dc:date>2026-02-12T08:47:30Z</dc:date>
    </item>
    <item>
      <title>How Should a Fresher Learn Microsoft Sentinel Properly?</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/how-should-a-fresher-learn-microsoft-sentinel-properly/m-p/4494249#M12866</link>
      <description>&lt;P&gt;Hello everyone,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am a fresher interested in learning Microsoft Sentinel and preparing for SOC roles.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Since Sentinel is a cloud-native enterprise tool and usually used inside organizations, I am unsure how individuals without company access are expected to gain real hands-on experience.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I would like to hear from professionals who actively use Sentinel:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;- How do freshers typically learn and practice Sentinel?&lt;/P&gt;&lt;P&gt;- What learning resources or environments are commonly used by beginners?&lt;/P&gt;&lt;P&gt;- What level of hands-on experience is realistically expected at entry level?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am looking for guidance based on real industry practice.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you for your time.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 11 Feb 2026 02:38:02 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/how-should-a-fresher-learn-microsoft-sentinel-properly/m-p/4494249#M12866</guid>
      <dc:creator>Arjun34</dc:creator>
      <dc:date>2026-02-11T02:38:02Z</dc:date>
    </item>
    <item>
      <title>How do I import Purview Unified Audit Log data related to the use of the Audit Log into Sentinel?</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/how-do-i-import-purview-unified-audit-log-data-related-to-the/m-p/4488430#M12863</link>
      <description>&lt;P&gt;Dear Community, I would like to implement the following scenario on an environment with Microsoft 365 E5 licenses:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Scenario&lt;/STRONG&gt;: I want to import audit activities into an Azure Log Analytics workspace linked to Sentinel to generate alerts/incidents as soon as a search is performed in the Microsoft 365 Purview Unified Audit Log (primarily for IRM purposes).&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Challenge&lt;/STRONG&gt;: Neither the "Microsoft 365" connector nor the "Defender XDR" or "Purview" connectors (the latter appears to cover Azure Purview exclusively) import the necessary data.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Question&lt;/STRONG&gt;: Which connector do I have to use in order to obtain Purview Unified Audit Log activities about the use of the Purview Unified Audit Log itself, so that I can identify...&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;...which&lt;/EM&gt;&amp;nbsp;user conducted an audit log search, &lt;EM&gt;when&lt;/EM&gt;, and with&amp;nbsp;&lt;EM&gt;what&lt;/EM&gt; kind of search query.&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;</description>
      <pubDate>Thu, 22 Jan 2026 09:29:43 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/how-do-i-import-purview-unified-audit-log-data-related-to-the/m-p/4488430#M12863</guid>
      <dc:creator>BM-HV</dc:creator>
      <dc:date>2026-01-22T09:29:43Z</dc:date>
    </item>
    <item>
      <title>Issue connecting Azure Sentinel GitHub app to Sentinel Instance when IP allow list is enabled</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/issue-connecting-azure-sentinel-github-app-to-sentinel-instance/m-p/4486172#M12862</link>
      <description>&lt;P&gt;Hi everyone,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I’m running into an issue connecting the Azure Sentinel GitHub app to my Sentinel workspace in order to create our CI/CD pipelines for our detection rules, and I’m hoping someone can point me in the right direction.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Symptoms:&lt;/P&gt;&lt;P&gt;When configuring the GitHub connection in Sentinel, the repository dropdown does not populate.&lt;/P&gt;&lt;P&gt;There are no explicit errors, but the connection clearly isn’t completing.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If I disable my organization’s IP allow list, everything works as expected and the repos appear immediately.&lt;/P&gt;&lt;P&gt;I’ve seen that some GitHub Apps automatically add the IP ranges they require to an organization’s allow list. However, from what I can tell, the Azure Sentinel GitHub app does not seem to have this capability, and requires manual allow listing instead.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What I’ve tried / researched:&lt;/P&gt;&lt;P&gt;Reviewed Microsoft documentation for Sentinel ↔ GitHub integrations&lt;/P&gt;&lt;P&gt;Looked through Azure IP range and Service Tag documentation&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I’ve seen recommendations to allow list the IP ranges published at https://api.github.com/meta, as many GitHub apps rely on these ranges&lt;/P&gt;&lt;P&gt;I’ve already tried allow listing multiple ranges from the GitHub meta endpoint, but the issue persists&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My questions:&lt;/P&gt;&lt;P&gt;Does anyone know which IP ranges are used by the Azure Sentinel GitHub app specifically?&lt;/P&gt;&lt;P&gt;Is there an official or recommended approach for using this integration in environments with strict IP allow lists?&lt;/P&gt;&lt;P&gt;Has anyone successfully configured this integration without fully disabling IP restrictions?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any insight, references, or firsthand experience would be greatly appreciated. Thanks in advance!&lt;/P&gt;</description>
      <pubDate>Fri, 16 Jan 2026 04:33:45 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/issue-connecting-azure-sentinel-github-app-to-sentinel-instance/m-p/4486172#M12862</guid>
      <dc:creator>JingleDingle</dc:creator>
      <dc:date>2026-01-16T04:33:45Z</dc:date>
    </item>
    <item>
      <title>How to Prevent Workspace Details from Appearing in LAQueryLogs During Cross-Workspace Queries</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/how-to-prevent-workspace-details-from-appearing-in-laquerylogs/m-p/4483079#M12860</link>
      <description>&lt;P&gt;I’ve onboarded multiple workspaces using Azure Lighthouse, and I’m running cross-workspace KQL queries using the workspace() function.&lt;BR /&gt;However, I’ve noticed that LAQueryLogs records the query in every referenced workspace, and the RequestContext field includes details about all other workspaces involved in the query.&lt;BR /&gt;Is there any way to run cross-workspace queries without having all workspace details logged in LAQueryLogs for each referenced workspace?&lt;/P&gt;
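&lt;P&gt;To see the behaviour in question, this is roughly what each referenced workspace records. A minimal sketch (LAQueryLogs with AADEmail, RequestContext, and QueryText are standard query-audit columns, though the exact RequestContext contents vary by environment):&lt;/P&gt;&lt;P&gt;// List recent cross-workspace queries and the workspace context each request carried&lt;BR /&gt;LAQueryLogs&lt;BR /&gt;| where TimeGenerated &amp;gt; ago(1d)&lt;BR /&gt;| where QueryText contains "workspace("&amp;nbsp; // cross-workspace queries only&lt;BR /&gt;| project TimeGenerated, AADEmail, RequestContext, QueryText&lt;/P&gt;</description>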
      <pubDate>Mon, 05 Jan 2026 14:09:29 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/how-to-prevent-workspace-details-from-appearing-in-laquerylogs/m-p/4483079#M12860</guid>
      <dc:creator>ParthPatel50</dc:creator>
      <dc:date>2026-01-05T14:09:29Z</dc:date>
    </item>
    <item>
      <title>I'm stuck!</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/i-m-stuck/m-p/4476042#M12854</link>
      <description>&lt;P&gt;I'm not sure how, or if, I can do this.&lt;/P&gt;&lt;P&gt;I want to monitor for Entra ID group additions. I can get this to work for a single entry using this:&lt;/P&gt;&lt;P&gt;AuditLogs&lt;BR /&gt;| where TimeGenerated &amp;gt; ago(7d)&lt;BR /&gt;| where OperationName == "Add member to group"&lt;BR /&gt;| where TargetResources[0].type == "User"&lt;BR /&gt;| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))&lt;BR /&gt;| where GroupName == "NameOfGroup" // this returns the single entry&lt;BR /&gt;| extend User = tostring(TargetResources[0].userPrincipalName)&lt;BR /&gt;| summarize ['Count of Users Added']=dcount(User), ['List of Users Added']=make_set(User) by GroupName&lt;BR /&gt;| sort by GroupName asc&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;However, I have a list of 20 privileged groups that I need to monitor.&amp;nbsp; I can do this using:&lt;/P&gt;&lt;P&gt;let PrivGroups = dynamic(['name1','name2','name3']);&lt;/P&gt;&lt;P&gt;and then call that like this:&lt;/P&gt;&lt;P&gt;blahblah&lt;/P&gt;&lt;P&gt;| where TargetResources[0].type == "User"&lt;BR /&gt;| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))&lt;BR /&gt;| where GroupName has_any (PrivGroups)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But that's a bit dirty to update, so I wanted to use a watchlist instead.&amp;nbsp; I've tried defining it with:&lt;/P&gt;&lt;P&gt;let PrivGroup = (_GetWatchlist('TestList'));&lt;/P&gt;&lt;P&gt;and tried calling it like:&lt;/P&gt;&lt;P&gt;blahblah&lt;/P&gt;&lt;P&gt;| where TargetResources[0].type == "User"&lt;BR /&gt;| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))&lt;BR /&gt;| where GroupName has_any ('PrivGroup')&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I've also tried dropping the let and looking up the watchlist directly:&lt;/P&gt;&lt;P&gt;| where GroupName has_any (_GetWatchlist('TestList'))&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The query runs but doesn't return any results (obviously I know the matching entry exists). How do I look up that extracted value against a watchlist?&lt;/P&gt;&lt;P&gt;Any ideas or pointers on where I'm going wrong would be appreciated!&lt;/P&gt;&lt;P&gt;Many thanks&lt;/P&gt;
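&lt;P&gt;For what it's worth, here is a working pattern for the watchlist step: _GetWatchlist() returns a table rather than a scalar list, so project the column that holds the group names and match with "in" instead of has_any against a quoted name (quoting 'PrivGroup' makes it a literal string, which is why no rows come back). A minimal sketch, assuming the group names live in the watchlist's SearchKey column:&lt;/P&gt;&lt;P&gt;// Assumes the group names are in the watchlist's SearchKey column; adjust if yours differ&lt;BR /&gt;let PrivGroups = _GetWatchlist('TestList') | project SearchKey;&lt;BR /&gt;AuditLogs&lt;BR /&gt;| where TimeGenerated &amp;gt; ago(7d)&lt;BR /&gt;| where OperationName == "Add member to group"&lt;BR /&gt;| where TargetResources[0].type == "User"&lt;BR /&gt;| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))&lt;BR /&gt;| where GroupName in (PrivGroups)&lt;BR /&gt;| extend User = tostring(TargetResources[0].userPrincipalName)&lt;BR /&gt;| summarize ['Count of Users Added']=dcount(User), ['List of Users Added']=make_set(User) by GroupName&lt;BR /&gt;| sort by GroupName asc&lt;/P&gt;</description>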
      <pubDate>Mon, 08 Dec 2025 14:06:14 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/i-m-stuck/m-p/4476042#M12854</guid>
      <dc:creator>MrD</dc:creator>
      <dc:date>2025-12-08T14:06:14Z</dc:date>
    </item>
    <item>
      <title>Webinar Rescheduled: AI-Powered Entity Analysis in Sentinel's MCP Server</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/webinar-rescheduled-ai-powered-entity-analysis-in-sentinel-s-mcp/m-p/4475369#M12853</link>
      <description>&lt;P&gt;Hi folks!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The webinar &lt;STRONG&gt;AI-Powered Entity Analysis in Sentinel's MCP Server&lt;/STRONG&gt;, previously scheduled for &lt;SPAN class="lia-text-color-8"&gt;January 13th, 2026&lt;/SPAN&gt;, has been rescheduled to &lt;STRONG&gt;&lt;SPAN class="lia-text-color-11"&gt;January 27th, 2026, at 9:00 AM PT&lt;/SPAN&gt;&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please delete the old invite from your calendar and find the new one at &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/microsoft-security-blog/welcome-to-the-microsoft-security-community/4471927" data-lia-auto-title="aka.ms/securitycommunity." data-lia-auto-title-active="0" target="_blank"&gt;aka.ms/securitycommunity.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We apologize for the inconvenience and hope to see you there!&lt;/P&gt;</description>
      <pubDate>Fri, 05 Dec 2025 00:50:30 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/webinar-rescheduled-ai-powered-entity-analysis-in-sentinel-s-mcp/m-p/4475369#M12853</guid>
      <dc:creator>emilyfalla</dc:creator>
      <dc:date>2025-12-05T00:50:30Z</dc:date>
    </item>
    <item>
      <title>Understand New Sentinel Pricing Model with Sentinel Data Lake Tier</title>
      <link>https://techcommunity.microsoft.com/t5/microsoft-sentinel/understand-new-sentinel-pricing-model-with-sentinel-data-lake/m-p/4473020#M12852</link>
      <description>&lt;H1&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-10"&gt;Introduction on Sentinel and its New Pricing Model&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H1&gt;
&lt;P&gt;Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the &lt;STRONG data-start="335" data-end="379"&gt;Analytics tier (Log Analytics workspace)&lt;/STRONG&gt;, which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a &lt;STRONG data-start="542" data-end="573"&gt;new dual-tier pricing model&lt;/STRONG&gt; consisting of the &lt;STRONG data-start="592" data-end="610"&gt;Analytics tier&lt;/STRONG&gt; and the &lt;STRONG data-start="619" data-end="637"&gt;Data Lake tier&lt;/STRONG&gt;. The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides &lt;STRONG data-start="785" data-end="810"&gt;very low-cost storage&lt;/STRONG&gt; for long-term retention and high-volume datasets. Customers can now choose where each data type lands—analytics for high-value detections and investigations, and data lake for large or archival types—allowing organizations to significantly lower cost while still retaining all their security data for analytics, compliance, and hunting.&lt;/P&gt;
&lt;P&gt;Please flow diagram depicts new sentinel pricing model:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Now let's walk through this new pricing model using the scenarios below:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Scenario 1A (PAY GO)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Scenario 1B (Usage Commitment)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Scenario 2 (Data Lake Tier Only)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;H1&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-10"&gt;Scenario 1A (PAY GO)&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H1&gt;
&lt;H3 data-start="129" data-end="150"&gt;&lt;STRONG data-start="133" data-end="148"&gt;Requirement&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P data-start="151" data-end="350"&gt;Suppose you need to ingest &lt;STRONG data-start="178" data-end="203"&gt;10 GB of data per day&lt;/STRONG&gt;, and you must retain that data for &lt;STRONG data-start="239" data-end="250"&gt;2 years&lt;/STRONG&gt;. However, you will only &lt;STRONG data-start="275" data-end="313"&gt;frequently use, query, and analyze&lt;/STRONG&gt; the data for the &lt;STRONG data-start="331" data-end="349"&gt;first 6 months&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H3 data-start="352" data-end="370"&gt;&lt;STRONG data-start="356" data-end="368"&gt;Solution&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P data-start="371" data-end="705"&gt;To optimize cost, you can ingest the data into the &lt;STRONG data-start="422" data-end="440"&gt;Analytics tier&lt;/STRONG&gt; and retain it there for the first &lt;STRONG data-start="475" data-end="487"&gt;6 months&lt;/STRONG&gt;, where active querying and investigation happen. After that period, the remaining &lt;STRONG data-start="570" data-end="596"&gt;18 months of retention&lt;/STRONG&gt; can be shifted to the &lt;STRONG data-start="619" data-end="637"&gt;Data Lake tier&lt;/STRONG&gt;, which provides low-cost storage for compliance and auditing needs. But you will be charged separately for data lake tier querying and analytics which depicted as &lt;STRONG&gt;Compute (D)&lt;/STRONG&gt; in pricing flow diagram.&lt;/P&gt;
&lt;H3 data-start="707" data-end="735"&gt;&lt;STRONG data-start="711" data-end="735"&gt;Pricing Flow / Notes&lt;/STRONG&gt;&lt;/H3&gt;
&lt;UL data-start="737" data-end="1231"&gt;
&lt;LI data-start="737" data-end="852"&gt;&lt;STRONG data-start="739" data-end="762"&gt;The first 10 GB/day&lt;/STRONG&gt; ingested into the Analytics tier is &lt;STRONG data-start="799" data-end="819"&gt;free for 31 days&lt;/STRONG&gt; under the Analytics logs plan.&lt;/LI&gt;
&lt;LI data-start="853" data-end="1000"&gt;&lt;STRONG data-start="855" data-end="926"&gt;All data ingested into the Analytics tier is automatically mirrored&lt;/STRONG&gt; to the Data Lake tier &lt;STRONG data-start="949" data-end="997"&gt;at no additional ingestion or retention cost&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI data-start="1001" data-end="1122"&gt;&lt;STRONG data-start="1003" data-end="1029"&gt;For the first 6 months&lt;/STRONG&gt;, you pay only for &lt;STRONG data-start="1048" data-end="1090"&gt;Analytics tier ingestion and retention&lt;/STRONG&gt;, excluding any free capacity.&lt;/LI&gt;
&lt;LI data-start="1123" data-end="1231"&gt;&lt;STRONG data-start="1125" data-end="1151"&gt;For the next 18 months&lt;/STRONG&gt;, you pay only for &lt;STRONG data-start="1170" data-end="1198"&gt;Data Lake tier retention&lt;/STRONG&gt;, which is significantly cheaper.&lt;/LI&gt;
&lt;/UL&gt;
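&lt;P&gt;To make the ingestion arithmetic concrete, here is a back-of-envelope sketch in KQL using only this article's own figures; the ~$5.00/GB effective blended rate and the 30.408 days/month averaging are back-derived from the $1,520.40 monthly figure quoted later, not official list prices:&lt;/P&gt;
&lt;P&gt;// Back-derived from this article's figures, not official Azure pricing&lt;BR /&gt;print DailyGB = 10.0, DaysPerMonth = 30.408, EffectiveRatePerGB = 5.00, MonthlyIngestCost = 10.0 * 30.408 * 5.00&amp;nbsp; // = $1,520.40, matching the savings table later in this post&lt;/P&gt;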
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;&lt;STRONG&gt;Azure Pricing Calculator Equivalent&amp;nbsp;&lt;/STRONG&gt;&lt;/H1&gt;
&lt;P data-start="107" data-end="199"&gt;Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period:&lt;/P&gt;
&lt;P data-start="201" data-end="429"&gt;Although the Analytics tier retention is set to &lt;STRONG data-start="249" data-end="261"&gt;6 months&lt;/STRONG&gt;, the &lt;STRONG data-start="267" data-end="334"&gt;first 3 months of retention fall under the free retention limit&lt;/STRONG&gt;, so retention charges apply only for the remaining 3 months of the analytics retention window. Azure pricing calculator will adjust accordingly.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-10"&gt;Scenario 1B (Usage Commitment)&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H1&gt;
&lt;P data-start="112" data-end="296"&gt;Now, suppose you are ingesting &lt;STRONG data-start="143" data-end="161"&gt;100 GB per day&lt;/STRONG&gt;. If you follow the same pay-as-you-go pricing model described above, your estimated cost would be approximately &lt;STRONG data-start="274" data-end="295"&gt;$15,204 per month&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P data-start="298" data-end="588"&gt;However, you can reduce this cost by choosing a &lt;STRONG data-start="346" data-end="365"&gt;Commitment Tier&lt;/STRONG&gt;, where Analytics tier &lt;STRONG data-start="388" data-end="401"&gt;ingestion&lt;/STRONG&gt; is billed at a &lt;STRONG data-start="417" data-end="436"&gt;discounted rate&lt;/STRONG&gt;. Note that the discount applies &lt;STRONG data-start="469" data-end="490"&gt;only to Analytics tier ingestion&lt;/STRONG&gt;—it does &lt;STRONG data-start="499" data-end="506"&gt;not&lt;/STRONG&gt; apply to Analytics tier retention costs or to any Data Lake tier–related charges.&lt;/P&gt;
&lt;P data-start="590" data-end="681"&gt;Please refer to the pricing flow and the equivalent pricing calculator results shown below.&lt;/P&gt;
&lt;P data-start="683" data-end="751"&gt;&lt;STRONG data-start="683" data-end="708"&gt;Monthly cost savings:&lt;/STRONG&gt;&lt;BR data-start="708" data-end="711" /&gt;&lt;STRONG data-start="711" data-end="751"&gt;$15,204 – $11,184 = $4,020 per month&lt;/STRONG&gt;&lt;/P&gt;
&lt;P data-start="91" data-end="229"&gt;Now the question is: &lt;EM data-start="112" data-end="164"&gt;What happens if your usage reaches 150 GB per day?&lt;/EM&gt;&lt;BR data-start="164" data-end="167" /&gt;Will the additional 50 GB be billed at the Pay-As-You-Go rate?&lt;/P&gt;
&lt;P data-start="231" data-end="367"&gt;&lt;STRONG data-start="231" data-end="238"&gt;No.&lt;/STRONG&gt; The entire 150 GB/day will still be billed at the &lt;STRONG data-start="289" data-end="308"&gt;discounted rate&lt;/STRONG&gt; associated with the &lt;STRONG data-start="329" data-end="366"&gt;100 GB/day commitment tier bucket&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;H1&gt;&lt;STRONG&gt;Azure Pricing Calculator Equivalent (100 GB/ Day)&lt;/STRONG&gt;&lt;/H1&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;&lt;STRONG&gt;Azure Pricing Calculator Equivalent (150 GB/ Day)&lt;/STRONG&gt;&lt;/H1&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;&lt;SPAN class="lia-text-color-10"&gt;&lt;STRONG&gt;Scenario 2 (Data Lake Tier Only)&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H1&gt;
&lt;H3 data-start="90" data-end="111"&gt;&lt;STRONG data-start="94" data-end="109"&gt;Requirement&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P data-start="112" data-end="394"&gt;Suppose you need to store certain &lt;STRONG data-start="146" data-end="174"&gt;audit or compliance logs&lt;/STRONG&gt; amounting to &lt;STRONG data-start="188" data-end="205"&gt;10 GB per day&lt;/STRONG&gt;. These logs are &lt;STRONG data-start="222" data-end="277"&gt;not used for querying, analytics, or investigations&lt;/STRONG&gt; on a regular basis, but must be retained for &lt;STRONG data-start="323" data-end="334"&gt;2 years&lt;/STRONG&gt; as per your organization’s compliance or forensic policies.&lt;/P&gt;
&lt;H3 data-start="396" data-end="414"&gt;&lt;STRONG data-start="400" data-end="412"&gt;Solution&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P data-start="415" data-end="732"&gt;Since these logs are not actively analyzed, you should &lt;STRONG data-start="470" data-end="518"&gt;avoid ingesting them into the Analytics tier&lt;/STRONG&gt;, which is more expensive and optimized for active querying.&lt;BR data-start="578" data-end="581" /&gt;Instead, send them &lt;STRONG data-start="600" data-end="634"&gt;directly to the Data Lake tier&lt;/STRONG&gt;, where they can be retained cost-effectively for future &lt;STRONG data-start="691" data-end="725"&gt;audit, compliance, or forensic&lt;/STRONG&gt; needs.&lt;/P&gt;
&lt;H3 data-start="734" data-end="754"&gt;&lt;STRONG data-start="738" data-end="754"&gt;Pricing Flow&lt;/STRONG&gt;&lt;/H3&gt;
&lt;UL data-start="756" data-end="1215"&gt;
&lt;LI data-start="756" data-end="913"&gt;Because the data is ingested &lt;STRONG data-start="787" data-end="823"&gt;directly into the Data Lake tier&lt;/STRONG&gt;, you pay &lt;STRONG data-start="833" data-end="865"&gt;both ingestion and retention&lt;/STRONG&gt; costs there for the &lt;STRONG data-start="886" data-end="910"&gt;entire 2-year period&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI data-start="914" data-end="1084"&gt;If, at any point in the future, you need to perform &lt;STRONG data-start="968" data-end="1011"&gt;advanced analytics, querying, or search&lt;/STRONG&gt;, you will incur &lt;STRONG data-start="1028" data-end="1058"&gt;additional compute charges&lt;/STRONG&gt;, based on actual usage.&lt;/LI&gt;
&lt;LI data-start="1085" data-end="1215"&gt;Even with occasional compute charges, the cost remains &lt;STRONG data-start="1142" data-end="1165"&gt;significantly lower&lt;/STRONG&gt; than storing the same data in the Analytics tier.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 data-start="1222" data-end="1246"&gt;&lt;STRONG data-start="1226" data-end="1246"&gt;Realized Savings&lt;/STRONG&gt;&lt;/H3&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Scenario&lt;/th&gt;&lt;th&gt;Cost per Month&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG data-start="1309" data-end="1324"&gt;Scenario 1:&lt;/STRONG&gt; 10 GB/day in Analytics tier&lt;/td&gt;&lt;td&gt;&lt;STRONG data-start="1355" data-end="1368"&gt;$1,520.40&lt;/STRONG&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG data-start="1373" data-end="1388"&gt;Scenario 2:&lt;/STRONG&gt; 10 GB/day directly into Data Lake tier&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG data-start="1430" data-end="1441"&gt;$202.20 (without compute)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG data-start="1430" data-end="1441"&gt;&lt;STRONG data-start="1597" data-end="1624"&gt;$257.20 (with sample compute price)&lt;/STRONG&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;UL data-start="1445" data-end="1624"&gt;
&lt;LI data-start="1445" data-end="1534"&gt;&lt;STRONG data-start="1447" data-end="1484"&gt;Savings with no compute activity:&lt;/STRONG&gt;&lt;BR data-start="1484" data-end="1487" /&gt;&lt;STRONG data-start="1489" data-end="1534"&gt;$1,520.40 – $202.20 = $1,318.20 per month&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI data-start="1536" data-end="1624"&gt;&lt;STRONG data-start="1538" data-end="1592"&gt;Savings with some compute activity (sample value):&lt;/STRONG&gt;&lt;BR data-start="1592" data-end="1595" /&gt;&lt;STRONG data-start="1489" data-end="1534"&gt;$1,520.40 – $257.20 = $1,263.20 per month&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;&lt;STRONG&gt;Azure calculator equivalent without compute&lt;/STRONG&gt;&lt;/H1&gt;
&lt;img /&gt;
&lt;H1&gt;&lt;STRONG&gt;Azure calculator equivalent with Sample Compute&lt;/STRONG&gt;&lt;/H1&gt;
&lt;img /&gt;
&lt;H1&gt;&lt;SPAN class="lia-text-color-10"&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;The combination of the &lt;STRONG&gt;Analytics tier&lt;/STRONG&gt; and the &lt;STRONG&gt;Data Lake tier&lt;/STRONG&gt; in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used. High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the &lt;STRONG&gt;Analytics tier&lt;/STRONG&gt;, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs—such as audit, compliance, or long-term retention data—can be directed to the &lt;STRONG&gt;Data Lake tier&lt;/STRONG&gt;, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data, and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios—active investigation, archival storage, compliance retention, or large-scale telemetry ingestion—to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.&lt;/P&gt;</description>
      <pubDate>Wed, 26 Nov 2025 12:43:46 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/microsoft-sentinel/understand-new-sentinel-pricing-model-with-sentinel-data-lake/m-p/4473020#M12852</guid>
      <dc:creator>Aaida_Aboobakkar</dc:creator>
      <dc:date>2025-11-26T12:43:46Z</dc:date>
    </item>
  </channel>
</rss>

