Understand New Sentinel Pricing Model with Sentinel Data Lake Tier
Introduction to Sentinel and Its New Pricing Model

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (Log Analytics workspace), which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a new dual-tier pricing model consisting of the Analytics tier and the Data Lake tier. The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands—analytics for high-value detections and investigations, and data lake for large or archival types—allowing organizations to significantly lower cost while still retaining all their security data for analytics, compliance, and hunting.

The flow diagram below depicts the new Sentinel pricing model. Now let's walk through this pricing model using the following scenarios:

Scenario 1A (Pay-As-You-Go)
Scenario 1B (Usage Commitment)
Scenario 2 (Data Lake Tier Only)

Scenario 1A (Pay-As-You-Go)

Requirement
Suppose you need to ingest 10 GB of data per day, and you must retain that data for 2 years. However, you will only frequently use, query, and analyze the data for the first 6 months.

Solution
To optimize cost, you can ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. After that period, the remaining 18 months of retention can be shifted to the Data Lake tier, which provides low-cost storage for compliance and auditing needs.
However, you will be charged separately for Data Lake tier querying and analytics, which is depicted as Compute (D) in the pricing flow diagram.

Pricing Flow / Notes

The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan.
All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost.
For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity.
For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.

Azure Pricing Calculator Equivalent

Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period: although the Analytics tier retention is set to 6 months, the first 3 months of retention fall under the free retention limit, so retention charges apply only for the remaining 3 months of the analytics retention window. The Azure pricing calculator adjusts accordingly.

Scenario 1B (Usage Commitment)

Now, suppose you are ingesting 100 GB per day. If you follow the same pay-as-you-go pricing model described above, your estimated cost would be approximately $15,204 per month. However, you can reduce this cost by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. Note that the discount applies only to Analytics tier ingestion—it does not apply to Analytics tier retention costs or to any Data Lake tier–related charges. Please refer to the pricing flow and the equivalent pricing calculator results shown below.

Monthly cost savings: $15,204 – $11,184 = $4,020 per month

Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the Pay-As-You-Go rate? No. The entire 150 GB/day will still be billed at the discounted rate associated with the 100 GB/day commitment tier bucket.
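The commitment-tier arithmetic above can be sanity-checked with a short sketch. The dollar figures are the sample estimates quoted in this scenario, not an official rate card, and the linear scaling ignores free allowances and retention charges, so treat it as an illustration only; use the Azure pricing calculator for real estimates.

```python
# Sample monthly estimates quoted in Scenario 1B (illustrative, not a rate card).
PAYG_MONTHLY_100GB = 15204    # pay-as-you-go at 100 GB/day, $/month
COMMIT_MONTHLY_100GB = 11184  # 100 GB/day commitment tier, $/month

def monthly_cost(ingest_gb_per_day, commit_tier_gb=None):
    """Scale the quoted 100 GB/day figures linearly to other volumes.

    With a commitment tier, the entire ingested volume (including overage
    above the committed amount) is billed at the tier's discounted rate,
    as the 150 GB/day example explains.
    """
    if commit_tier_gb is None:
        return (ingest_gb_per_day / 100) * PAYG_MONTHLY_100GB
    return (ingest_gb_per_day / commit_tier_gb) * COMMIT_MONTHLY_100GB

savings = monthly_cost(100) - monthly_cost(100, commit_tier_gb=100)
print(f"Savings at 100 GB/day: ${savings:,.0f}/month")  # $4,020
print(f"150 GB/day on the 100 GB tier: ${monthly_cost(150, 100):,.0f}/month")
```

Note how the overage rule is captured: 150 GB/day on a 100 GB/day commitment is billed entirely at the committed tier's effective rate, not split between discounted and pay-as-you-go pricing.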
Azure Pricing Calculator Equivalent (100 GB/day)
Azure Pricing Calculator Equivalent (150 GB/day)

Scenario 2 (Data Lake Tier Only)

Requirement
Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years as per your organization’s compliance or forensic policies.

Solution
Since these logs are not actively analyzed, you should avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.

Pricing Flow
Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period. If, at any point in the future, you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage. Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.

Realized Savings

Scenario                                             Cost per Month
Scenario 1: 10 GB/day in Analytics tier              $1,520.40
Scenario 2: 10 GB/day directly into Data Lake tier   $202.20 (without compute) / $257.20 (with sample compute price)

Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month

Azure calculator equivalent without compute
Azure calculator equivalent with sample compute

Conclusion

The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used.
High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs—such as audit, compliance, or long-term retention data—can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data, and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios—active investigation, archival storage, compliance retention, or large-scale telemetry ingestion—to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.

Announcing Public Preview: Security Copilot’s Email Summary in Microsoft Defender
Co-Authors: Cristina Da Gama Henriquez and Ajaj Shaikh

AI is rapidly reshaping both sides of the security landscape, and email remains one of the most common and complex entry points for attacks. As adversaries use AI to scale more sophisticated phishing and email-based threats, defenders are under pressure not just to detect them, but to quickly understand what actually happened. Microsoft continues to apply generative and agentic AI across the email protection stack to help stop threats before they reach the inbox and catch what inevitably gets through in the SOC. Still, for security analysts, understanding an email threat requires piecing together context across the incident and its related artifacts. Much of that context exists within the Email entity experience, but it is spread across metadata, timelines, URLs, and attachments, making it time-consuming to connect the dots and act with confidence. Today, we are excited to announce the public preview of Security Copilot’s Email summary capability, designed to bring those insights together and make email threat investigations faster, clearer, and more actionable. With Security Copilot included in Microsoft 365 E5, organizations will be able to bring AI directly into their flow of work—extending these benefits across the SOC at no additional cost.*

Bringing clarity into the investigation workflow

Email summary brings AI-generated context directly into the Email entity page, transforming fragmented detection data into a clear, natural-language explanation of what happened and why. Analysts can access it from the Security Copilot right-side pane, the same place where Copilot activity across Microsoft Defender is surfaced. Instead of navigating across multiple views to reconstruct the story, analysts can generate a summary that connects the signals and highlights what matters most. And it all happens in seconds.
Built on Security Copilot’s summarization capabilities, Email summary uses the same data analysts already rely on, like email metadata, timeline events, URLs, and attachments, and turns it into a cohesive narrative. It explains how a message was evaluated, what actions were taken, and where risk exists, without requiring manual correlation.

A summary that follows how analysts think

The experience is intentionally embedded in the Email entity page, where investigations already happen, so analysts don’t have to change how they work to benefit from it. The output is structured to match how analysts approach an investigation. It starts with a concise overview of the email, including what was detected, what actions were taken, and any key indicators. From there, it walks through the timeline of events, helping reconstruct how the email was delivered, interacted with, and remediated. It also breaks down URLs and attachments, calling out malicious signals and explaining associated risks in plain language. Importantly, this is a user-triggered experience. Analysts generate a summary when they need it, ensuring the capability is both intentional and efficient.

From fragmented data to confident decisions

Email summary is a foundational step toward making email threat investigations more explainable and efficient. Today, it brings together existing signals into a clear, actionable narrative. Over time, it will evolve to incorporate additional signal depth: detonation (sandboxing) results, submission responses, and more granular insights from the filtering stack, further strengthening the completeness and fidelity of each investigation. As threats continue to grow in speed and sophistication, the ability to quickly understand and act is just as critical as detection itself. Email summary helps close that gap, giving analysts the clarity they need to respond with confidence.
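The included-capacity rule spelled out in the footnote below (400 SCUs per 1,000 user licenses, capped at 10,000 per month) reduces to simple arithmetic. This is a minimal sketch assuming the allocation accrues in whole 1,000-license increments; the exact proration for partial thousands is not specified here.

```python
def included_scus(user_licenses):
    """Monthly Security Compute Units included with Microsoft 365 E5:
    400 per 1,000 user licenses, capped at 10,000 per month
    (whole-thousand increments assumed)."""
    return min((user_licenses // 1000) * 400, 10_000)

print(included_scus(5_000))   # 2000
print(included_scus(30_000))  # 10000 (cap reached)
```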
*Eligible Microsoft 365 E5 customers will have 400 Security Compute Units (SCUs) per month for every 1,000 user licenses, up to 10,000 SCUs per month. This included capacity is expected to support typical scenarios. Customers will have an option to pay for scaling beyond the allocated amount at a future date at $6 per SCU on a pay-as-you-go basis, and will get a 30-day advance notification when this option is available. Learn more.

MDO query of EmailEvents is not accepted in the flow, causing a bad gateway error
The following MDO query of EmailEvents works in the Defender portal, but when applied through the 'Advanced Hunting' action in the Power Automate application it returns a bad gateway error. Is this query supported in this application?

Full Automation Capabilities in Linux OS
Hello everyone, We have configured Defender to detect viruses, and our goal is that if one of our assets downloads or encounters a virus, it is automatically quarantined or removed. Based on the documentation regarding the automation levels in Automated Investigation and Remediation capabilities, we have set it to "Full - remediate threats automatically." While this works correctly on Windows devices, we have noticed that on Linux devices Defender still detects the virus, but it is not remediated. I was wondering if anyone has encountered this issue and, if so, how it was resolved? Additionally, as I am new to the Defender platform, I wanted to ask whether this issue could potentially be resolved through specific Linux policies or functionalities? Best regards, Mathiew

Microsoft Sentinel MCP Entity Analyzer: Explainable risk analysis for URLs and identities
What makes this release important is not just that it adds another AI feature to Sentinel. It changes the implementation model for enrichment and triage. Instead of building and maintaining a chain of custom playbooks, KQL lookups, threat intel checks, and entity correlation logic, SOC teams can call a single analyzer that returns a reasoned verdict and supporting evidence. Microsoft positions the analyzer as available through Sentinel MCP server connections for agent platforms and through Logic Apps for SOAR workflows, which makes it useful both for interactive investigations and for automated response pipelines.

Why this matters

First, it formalizes Entity Analyzer as a production feature rather than a preview experiment. Second, it introduces a real cost model, which means organizations now need to govern usage instead of treating it as a free enrichment helper. Third, Microsoft’s documentation is now detailed enough to support repeatable implementation patterns, including prerequisites, limits, required tables, Logic Apps deployment, and cost behavior. From a SOC engineering perspective, Entity Analyzer is interesting because it focuses on explainability. Microsoft describes the feature as generating clear, explainable verdicts for URLs and user identities by analyzing multiple modalities, including threat intelligence, prevalence, and organizational context. That is a much stronger operational model than simple point-enrichment because it aims to return an assessment that analysts can act on, not just more raw evidence.

What Entity Analyzer actually does

The Entity Analyzer tools are described as AI-powered tools that analyze data in the Microsoft Sentinel data lake and provide a verdict plus detailed insights on URLs, domains, and user entities. Microsoft explicitly says these tools help eliminate the need for manual data collection and complex integrations usually required for investigation and enrichment. That positioning is important.
In practice, many SOC teams have built enrichment playbooks that fetch sign-in history, query TI feeds, inspect click data, read watchlists, and collect relevant alerts. Those workflows work, but they create maintenance overhead and produce inconsistent analyst experiences. Entity Analyzer centralizes that reasoning layer. For user entities, Microsoft’s preview architecture explains that the analyzer retrieves sign-in logs, security alerts, behavior analytics, cloud app events, identity information, and Microsoft Threat Intelligence, then correlates those signals and applies AI-based reasoning to produce a verdict. Microsoft lists verdict examples such as Compromised, Suspicious activity found, and No evidence of compromise, and also warns that AI-generated content may be incorrect and should be checked for accuracy. That warning matters. The right way to think about Entity Analyzer is not “automatic truth,” but “high-value, explainable triage acceleration.” It should reduce analyst effort and improve consistency, while still fitting into human review and response policy.

Under the hood: the implementation model

Technically, Entity Analyzer is delivered through the Microsoft Sentinel MCP data exploration tool collection. Microsoft documents that entity analysis is asynchronous: you start analysis, receive an identifier, and then poll for results. The docs note that analysis may take a few minutes and that the retrieval step may need to be run more than once if the internal timeout is not enough for long operations. That design has two immediate implications for implementers. First, this is not a lightweight synchronous enrichment call you should drop carelessly into every automation branch. Second, any production workflow should include retry logic, timeouts, and concurrency controls. If you ignore that, you will create fragile playbooks and unnecessary SCU burn.
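The start-then-poll contract described above can be wrapped in a small, reusable helper. This is a sketch under stated assumptions: `start_fn` and `poll_fn` are hypothetical placeholders for whatever calls your MCP client or HTTP layer actually exposes (Microsoft does not publish a Python SDK shape for this), with `poll_fn` returning `None` while the analysis is still pending.

```python
import time

def analyze_with_polling(start_fn, poll_fn, timeout_s=300.0, interval_s=10.0):
    """Start an asynchronous entity analysis and poll until a result arrives.

    start_fn() begins the analysis and returns its identifier; poll_fn(id)
    returns the result once ready, or None while the analysis is pending.
    Raises TimeoutError so callers can rerun the retrieval step, as the
    docs suggest for long-running operations, instead of restarting the
    analysis and burning extra SCUs.
    """
    analysis_id = start_fn()
    deadline = time.monotonic() + timeout_s
    while True:
        result = poll_fn(analysis_id)
        if result is not None:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(
                f"analysis {analysis_id} still pending after {timeout_s}s")
        time.sleep(interval_s)
```

A playbook branch would call this once per selected entity and catch TimeoutError to schedule a later retrieval of the same analysis identifier rather than starting over.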
The supported access path for the data exploration collection requires Microsoft Sentinel data lake and one of the supported MCP-capable platforms. Microsoft also states that access to the tools is supported for identities with at least Security Administrator, Security Operator, or Security Reader. The data exploration collection is hosted at the Sentinel MCP endpoint, and the same documentation notes additional Entity Analyzer roles related to Security Copilot usage.

The prerequisite many teams will miss

The most important prerequisite is easy to overlook: Microsoft Sentinel data lake is required. This is more than a licensing footnote. It directly affects data quality, analyzer usefulness, and rollout success. If your organization has not onboarded the right tables into the data lake, Entity Analyzer will either fail or return reduced-confidence output. For user analysis, the following tables are required to ensure accuracy: AlertEvidence, SigninLogs, CloudAppEvents, and IdentityInfo. The documentation also notes that IdentityInfo depends on Defender for Identity, Defender for Cloud Apps, or Defender for Endpoint P2 licensing. The analyzer works best with AADNonInteractiveUserSignInLogs and BehaviorAnalytics as well. For URL analysis, the analyzer works best with EmailUrlInfo, UrlClickEvents, ThreatIntelIndicators, Watchlist, and DeviceNetworkEvents. If those tables are missing, the analyzer returns a disclaimer identifying the missing sources.

A practical architecture view

1. An incident, hunting workflow, or analyst identifies a high-interest URL or user.
2. A Sentinel MCP client or Logic App calls Entity Analyzer.
3. Entity Analyzer queries relevant Sentinel data lake sources and correlates the findings.
4. AI reasoning produces a verdict, evidence narrative, and recommendations.
5. The result is returned to the analyst, incident record, or automation workflow for next-step action.
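Before wiring up this flow, it is worth verifying the table prerequisites from the previous section. A minimal sketch: the table names come from the documentation quoted above, while how you enumerate the tables actually onboarded to your data lake (portal, CLI, or API) is left as an input to the check.

```python
# Table names taken from the documented Entity Analyzer prerequisites.
REQUIRED_USER_TABLES = {"AlertEvidence", "SigninLogs", "CloudAppEvents", "IdentityInfo"}
RECOMMENDED_USER_TABLES = {"AADNonInteractiveUserSignInLogs", "BehaviorAnalytics"}
RECOMMENDED_URL_TABLES = {
    "EmailUrlInfo", "UrlClickEvents", "ThreatIntelIndicators",
    "Watchlist", "DeviceNetworkEvents",
}

def missing_tables(onboarded, wanted):
    """Return the tables from `wanted` not yet onboarded to the data lake."""
    return set(wanted) - set(onboarded)

# Example inventory of onboarded tables (hypothetical).
onboarded = {"SigninLogs", "IdentityInfo", "UrlClickEvents"}
print(sorted(missing_tables(onboarded, REQUIRED_USER_TABLES)))
# ['AlertEvidence', 'CloudAppEvents']
```

Running a check like this during rollout avoids discovering missing sources only when the analyzer starts returning reduced-confidence disclaimers.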
This model is especially valuable because it collapses a multi-query, multi-tool investigation pattern into a single explainable decisioning step.

Where it fits in real Sentinel operations

Entity Analyzer is not a replacement for analytics rules, UEBA, or threat intelligence. It is a force multiplier for them. For identity triage, it fits naturally after incidents triggered by sign-in anomaly detections, UEBA signals, or Defender alerts because it already consumes sign-in logs, cloud app events, and behavior analytics as core evidence sources. For URL triage, it complements phishing and click-investigation workflows because it uses TI, URL activity, watchlists, and device/network context.

Implementation path 1: MCP clients and security agents

Microsoft states that Entity Analyzer integrates with agents through Sentinel MCP server connections to first-party and third-party AI runtime platforms. In practice, this makes it attractive for analyst copilots, engineering-side investigation agents, and guided triage experiences. The benefit of this model is speed. A security engineer or analyst can invoke the analyzer directly from an MCP-capable client without building a custom orchestration layer. The tradeoff is governance: once you make the tool widely accessible, you need a clear policy for who can run it, when it should be used, and how results are validated before action is taken.

Implementation path 2: Logic Apps and SOAR playbooks

For SOC teams, Logic Apps is likely the most immediately useful deployment model. Microsoft documents an entity analyzer action inside the Microsoft Sentinel MCP tools connector and provides the required parameters for adding it to an existing logic app.
These include:

Workspace ID
Look Back Days
Properties payload for either URL or User

The documented payloads are straightforward:

{ "entityType": "Url", "url": "[URL]" }

And:

{ "entityType": "User", "userId": "[Microsoft Entra object ID or User Principal Name]" }

The documentation also states that the connector supports Microsoft Entra ID, service principals, and managed identities, and that the Logic App identity requires Security Reader to operate. This makes playbook integration a strong pattern for incident enrichment. A high-severity incident can trigger a playbook, extract entities, invoke Entity Analyzer, and post the verdict back to the incident as a comment or decision artifact.

The concurrency lesson most people will learn the hard way

Microsoft gives unusually direct guidance on concurrency: to avoid timeouts and threshold issues, turn on Concurrency control in Logic Apps loops and start with a degree of parallelism of 5. The data exploration doc repeats the same guidance, stating that running multiple instances at once can increase latency and recommending starting with a maximum of five concurrent analyses. This is a strong indicator that the correct implementation pattern is selective analysis, not blanket analysis. Do not analyze every entity in every incident. Analyze the entities that matter most:

external URLs in phishing or delivery chains
accounts tied to high-confidence alerts
entities associated with high-severity or high-impact incidents
suspicious users with multiple correlated signals

That keeps latency, quota pressure, and SCU consumption under control.

KQL still matters

Entity Analyzer does not eliminate KQL. It changes where KQL adds value. Before running the analyzer, KQL is still useful for scoping and selecting the right entities. After the analyzer returns, KQL is useful for validation, deeper hunting, and building custom evidence views around the analyzer’s verdict.
For example, a simple sign-in baseline for a target user:

let TargetUpn = "email address removed for privacy reasons";
SigninLogs
| where TimeGenerated between (ago(7d) .. now())
| where UserPrincipalName == TargetUpn
| summarize
    Total = count(),
    Failures = countif(ResultType != "0"),
    Successes = countif(ResultType == "0"),
    DistinctIPs = dcount(IPAddress),
    Apps = make_set(AppDisplayName, 20)
    by bin(TimeGenerated, 1d)
| order by TimeGenerated desc

And a lightweight URL prevalence check:

let TargetUrl = "omicron-obl.com";
UrlClickEvents
| where TimeGenerated between (ago(7d) .. now())
| where Url has TargetUrl
| take 50

Cost, billing, and governance

GA is where technical excitement meets budget reality. Microsoft’s Sentinel billing documentation says there is no extra cost for the MCP server interface itself. However, for Entity Analyzer, customers are charged for the SCUs used for AI reasoning and also for the KQL queries executed against the Microsoft Sentinel data lake. Microsoft further states that existing Security Copilot entitlements apply. The April 2026 “What’s new” entry also explicitly says that starting April 1, 2026, customers are charged for the SCUs required when using Entity Analyzer. That means every rollout should include a governance plan:

define who can invoke the analyzer
decide when playbooks are allowed to call it
monitor SCU consumption
limit unnecessary repeat runs
preserve results in incident records so you do not rerun the same analysis within a short period

Microsoft’s MCP billing documentation also defines service limits: 200 total runs per hour, 500 total runs per day, and around 15 concurrent runs every five minutes, with analysis results available for one hour. Those are not just product limits. They are design requirements.

Limitations you should state clearly

The analyze_user_entity tool supports a maximum time window of seven days and only works for users with a Microsoft Entra object ID.
On-premises Active Directory-only users are not supported for user analysis. Microsoft also says Entity Analyzer results expire after one hour and that the tool collection currently supports English prompts only.

Recommended rollout pattern

If I were implementing this in a production SOC, I would phase it like this:

1. Start with a narrow set of high-value use cases, such as suspicious user identities and phishing-related URLs.
2. Confirm that the required tables are present in the data lake.
3. Deploy a Logic App enrichment pattern for incident-triggered analysis.
4. Add concurrency control and retry logic.
5. Persist returned verdicts into incident comments or case notes.
6. Review SCU usage and analyst value before expanding coverage.

Custom data collection in MDE - what is default?
So you just announced the preview of "Custom data collection in Microsoft Defender for Endpoint (Preview)", which lets me ingest custom data into Sentinel. Is there also an overview of what is collected by default and what I can add? For example, we want to examine repeated disconnects from Azure VPN clients (yes, it's most likely just Microsoft's fault, as the app ratings show 'everyone' is having them). How do I know which data I can add to DeviceCustomNetworkEvents that isn't already in DeviceNetworkEvents?

Observed Automation Discrepancies
Hi Team, I want to know the logic behind the Defender XDR Automation Engine. How does it work? Contrary to expectations of identical incident and automation handling in both environments, I have observed discrepancies in the Defender XDR Automation Engine's behavior. Specifically, incidents with high-severity alerts were automatically closed by Defender XDR's automation engine before reaching the SOC for review, raising concerns among clients and colleagues. Automation rules are clearly logged in the activity log, whereas actions performed by Microsoft Defender XDR are less transparent. A high-severity alert related to a phishing incident was closed by Defender XDR's automation, resulting in the associated incident being closed and removed from SOC review. The automation was not triggered by our own rules but by Microsoft's Defender XDR, and we are seeking clarification on the underlying logic.

Missing details in Azure Activity Logs – MICROSOFT.SECURITYINSIGHTS/ENTITIES/ACTION
The Azure Activity Logs are crucial for tracking access and actions within Sentinel. However, I’m encountering a significant lack of documentation and clarity regarding some specific operation types.

Resources consulted:
https://learn.microsoft.com/en-us/azure/sentinel/audit-sentinel-data
https://learn.microsoft.com/en-us/rest/api/securityinsights/entities?view=rest-securityinsights-2024-01-01-preview
https://learn.microsoft.com/en-us/rest/api/securityinsights/operations/list?view=rest-securityinsights-2024-09-01&tabs=HTTP

My issue: I observed unauthorized activity on our Sentinel workspace. The Azure Activity Logs clearly indicate the user involved, the resource, and the operation type: "MICROSOFT.SECURITYINSIGHTS/ENTITIES/ACTION". But that’s it. No detail about what the action was, what entity it targeted, or how it was triggered. This makes auditing extremely difficult. It's clear the person was in Sentinel and performed an activity through it, using search, KQL, or logs to find an entity from a KQL query. But that's all... Strangely, this operation is not even listed in the official Sentinel Operations documentation linked above.

My question: Has anyone encountered this and found a way to interpret this operation type properly? Any insight into how to retrieve more meaningful details (action context, target entity, etc.) from these events would be greatly appreciated.

RSAC 2026: New Microsoft Sentinel Connectors Announcement
Microsoft Sentinel helps organizations detect, investigate, and respond to security threats across increasingly complex environments. With the rollout of the Microsoft Sentinel data lake in the fall, and the App Assure-backed Sentinel promise that supports it, customers now have access to long-term, cost-effective storage for security telemetry, creating a solid foundation for emerging Agentic AI experiences. Since our last announcement at Ignite 2025, the Microsoft Sentinel connector ecosystem has expanded rapidly, reflecting continued investment from software development partners building for our shared customers. These connectors bring diverse security signals together, enabling correlation at scale and delivering richer investigation context across the Sentinel platform. Below is a snapshot of Microsoft Sentinel connectors newly available or recently enhanced since our last announcement, highlighting the breadth of partner solutions contributing data into, and extending the value of, the Microsoft Sentinel ecosystem.

New and notable integrations

Acronis Cyber Protect Cloud

Acronis Cyber Protect Cloud integrates with Microsoft Sentinel to bring data protection and security telemetry into a centralized SOC view. The connector streams alerts, events, and activity data - spanning backup, endpoint protection, and workload security - into Microsoft Sentinel for correlation with other signals. This integration helps security teams investigate ransomware and data-centric threats more effectively, leverage built-in hunting queries and detection rules, and improve visibility across managed environments without adding operational complexity.

Anvilogic

Anvilogic integrates with Microsoft Sentinel to help security teams operationalize detection engineering at scale. The connector streams Anvilogic alerts into Microsoft Sentinel, giving SOC analysts centralized visibility into high-fidelity detections and faster context for investigation and triage.
By unifying detection workflows, reducing alert noise, and improving prioritization, this integration supports more efficient threat detection and response while helping teams extend coverage across evolving attack techniques.

BigID

BigID integrates with Microsoft Sentinel to extend data security posture management (DSPM) insights into security operations workflows. The solution brings visibility into sensitive, regulated, and critical data across cloud, SaaS, and on‑premises environments, helping security teams understand where high‑risk data resides and how it may be exposed. By incorporating data‑centric risk context into Sentinel, this integration supports more informed investigation and prioritization, enabling organizations to reduce data‑related risk and align security operations with data protection and compliance objectives.

Commvault Cloud

Commvault Cloud integrates with Microsoft Sentinel to bring data protection and cyber‑resilience telemetry into security operations workflows. The connector ingests security‑relevant signals from Commvault Cloud—such as backup anomalies, malware and ransomware indicators, and other threat‑related events—into Sentinel, enabling centralized detection, investigation, and automated response. By correlating backup intelligence with broader Sentinel telemetry, this integration helps security teams reduce blind spots, validate the scope of incidents, and improve coordination between security and recovery operations.

CyberArk Audit

CyberArk Audit integrates with Microsoft Sentinel to centralize visibility into privileged identity and access activity. By streaming detailed audit logs - covering system events, user actions, and administrative activity - into Microsoft Sentinel, security teams can correlate identity-driven risks with broader security telemetry.
This integration supports faster investigations, improved monitoring of privileged access, and more effective incident response through automated workflows and enriched context for SOC analysts.

Cyera

Cyera integrates with Microsoft Sentinel to extend AI-native data security posture management into security operations. The connector brings Cyera’s data context and actionable intelligence across multi-cloud, on-premises, and SaaS environments into Microsoft Sentinel, helping teams understand where sensitive data resides and how it is accessed, exposed, and used. Built on Sentinel’s modern framework, the integration feeds context-rich data risk signals into the Sentinel data lake, enabling more informed threat hunting, automation, and decision-making around data, user, and AI-related risk.

TacitRed CrowdStrike IOC Automation

Data443 TacitRed CS IOC Automation integrates with Microsoft Sentinel to streamline the operationalization of compromised credential intelligence. The solution uses Sentinel playbooks to automatically push TacitRed indicators of compromise into CrowdStrike, helping security teams turn identity-based threat intelligence into action. By automating IOC handling and reducing manual effort, this integration supports faster response to credential exposure and strengthens protection against account-driven attacks across the environment.

TacitRed SentinelOne IOC Automation

Data443 TacitRed SentinelOne IOC Automation integrates with Microsoft Sentinel to help operationalize identity-focused threat intelligence at the endpoint layer. The solution uses Sentinel playbooks to automatically consume TacitRed indicators and push curated indicators into SentinelOne via API-based enforcement, enabling faster enforcement of high-risk IOCs without manual handling.
By automating the flow of compromised credential intelligence from Sentinel into EDR, this integration supports quicker response to identity-driven attacks and improves coordination between threat intelligence and endpoint protection workflows.

TacitRed Threat Intelligence

Data443 TacitRed Threat Intelligence integrates with Microsoft Sentinel to provide enhanced visibility into identity-based risks, including compromised credentials and high-risk user exposure. The solution ingests curated TacitRed intelligence directly into Sentinel, enriching incidents with context that helps SOC teams identify credential-driven threats earlier in the attack lifecycle. With built-in analytics, workbooks, and hunting queries, this integration supports proactive identity threat detection, faster triage, and more informed response across the SOC.

Cyren Threat Intelligence

Cyren Threat Intelligence integrates with Microsoft Sentinel to enhance detection of network-based threats using curated IP reputation and malware URL intelligence. The connector ingests Cyren threat feeds into Sentinel using the Codeless Connector Framework (CCF), transforming raw indicators into actionable insights, dashboards, and enriched investigations. By adding context to suspicious traffic and phishing infrastructure, this integration helps SOC teams improve alert accuracy, accelerate triage, and make more confident response decisions across their environments.

TacitRed Defender Threat Intelligence

Data443 TacitRed Defender Threat Intelligence integrates with Microsoft Sentinel to surface early indicators of credential exposure and identity-driven risk. The solution automatically ingests compromised credential intelligence from TacitRed into Sentinel and can support synchronization of validated indicators with Microsoft Defender Threat Intelligence through Sentinel workflows, helping SOC teams detect account compromise before abuse occurs.
By enriching Sentinel incidents with actionable identity context, this integration supports faster triage, proactive remediation, and stronger protection against credential-based attacks.

Datawiza Access Proxy (DAP)

Datawiza Access Proxy integrates with Microsoft Sentinel to provide centralized visibility into application access and authentication activity. By streaming access and MFA logs from Datawiza into Sentinel, security teams can correlate identity and session-level events with broader security telemetry. This integration supports detection of anomalous access patterns, faster investigation through session traceability, and more effective response using Sentinel automation, helping organizations strengthen Zero Trust controls and meet auditing and compliance requirements.

Endace

Endace integrates with Microsoft Sentinel to provide deep network visibility through always-on, packet-level evidence. The connector enables one-click pivoting from Sentinel alerts directly to recorded packet data captured by EndaceProbes. This helps SOC and NetOps teams reconstruct events and validate threats with confidence. By combining Sentinel’s AI-driven analytics with Endace’s always-on, full-packet capture across on-premises, hybrid, and cloud environments, this integration supports faster investigations, improved forensic accuracy, and more decisive incident response.

Feedly

Feedly integrates with Microsoft Sentinel to ingest curated threat intelligence directly into security operations workflows. The connector automatically imports Indicators of Compromise (IoCs) from Feedly Team Boards and folders into Sentinel, enriching detections and investigations with context from the original intelligence articles. By bringing analyst‑curated threat intelligence into Sentinel in a structured, automated way, this integration helps security teams stay current on emerging threats and reduce the manual effort required to operationalize external intelligence.
Gigamon

Gigamon integrates with Microsoft Sentinel through a new connector that provides access to Gigamon Application Metadata Intelligence (AMI), delivering high-fidelity network-derived telemetry with rich application metadata from inspected traffic directly into Sentinel. This added context helps security teams detect suspicious activity, encrypted threats, and lateral movement faster and with greater precision. By enriching analytics without requiring full packet ingestion, organizations can reduce noise, manage SIEM costs, and extend visibility across hybrid cloud infrastructure.

Halcyon

Halcyon integrates with Microsoft Sentinel to provide purpose-built ransomware detection and automated containment across the Microsoft security ecosystem. The connector surfaces Halcyon ransomware alerts directly within Sentinel, enabling SOC teams to correlate ransomware behavior with Microsoft Defender and broader Microsoft telemetry. By supporting Sentinel analytics and automation workflows, this integration helps organizations detect ransomware earlier, investigate faster using native Sentinel tools, and isolate affected endpoints to prevent lateral spread and reinfection.

Illumio

The Illumio platform identifies and contains threats across hybrid multi-cloud environments. By integrating AI-driven insights with Microsoft Sentinel and Microsoft Graph, Illumio Insights enables SOC analysts to visualize attack paths, prioritize high-risk activity, and investigate threats with greater precision. Illumio Segmentation secures critical assets, workloads, and devices and then publishes segmentation policy back into Microsoft Sentinel to ensure compliance monitoring.

Joe Sandbox

Joe Sandbox integrates with Microsoft Sentinel to enrich incidents with dynamic malware and URL analysis.
The connector ingests Joe Sandbox threat intelligence and automatically detonates suspicious files and URLs associated with Sentinel incidents, returning behavioral and contextual analysis results directly into investigation workflows. By adding sandbox-driven insights to indicators, alerts, and incident comments, this integration helps SOC teams validate threats faster, reduce false positives, and improve response decisions using deeper visibility into malicious behavior.

Keeper Security

The Keeper Security integration with Microsoft Sentinel brings advanced password and secrets management telemetry into your SIEM environment. By streaming audit logs and privileged access events from Keeper into Sentinel, security teams gain centralized visibility into credential usage and potential misuse. The connector supports custom queries and automated playbooks, helping organizations accelerate investigations, enforce Zero Trust principles, and strengthen identity security across hybrid environments.

Lookout Mobile Threat Defense (MTD)

Lookout Mobile Threat Defense integrates with Microsoft Sentinel to extend SOC visibility to mobile endpoints across Android, iOS, and Chrome OS. The connector streams device, threat, and audit telemetry from Lookout into Sentinel, enabling security teams to correlate mobile risk signals, such as phishing, malicious apps, and device compromise, with broader enterprise security data. By incorporating mobile threat intelligence into Sentinel analytics, dashboards, and alerts, this integration helps organizations detect mobile-driven attacks earlier and strengthen protection for an increasingly mobile workforce.

Miro

Miro integrates with Microsoft Sentinel to provide centralized visibility into collaboration activity across Miro workspaces.
The connector ingests organization-wide audit logs and content activity logs into Sentinel, enabling security teams to monitor authentication events, administrative actions, and content changes alongside other enterprise signals. By bringing Miro collaboration telemetry into Sentinel analytics and dashboards, this integration helps organizations detect suspicious access patterns, support compliance and eDiscovery needs, and maintain stronger oversight of collaborative environments without disrupting productivity.

Obsidian Activity Threat

The Obsidian Threat and Activity Feed for Microsoft Sentinel delivers deep visibility into SaaS and AI applications, helping security teams detect account compromise and insider threats. By streaming user behavior and configuration data into Sentinel, organizations can correlate application risks with enterprise telemetry for faster investigations. Prebuilt analytics and dashboards enable proactive monitoring, while automated playbooks simplify response workflows, strengthening security posture across critical cloud apps.

OneTrust for Purview DSPM

OneTrust integrates with Microsoft Sentinel to bring privacy, compliance, and data governance signals into security operations workflows. The connector enriches Sentinel with privacy-relevant events and risk indicators from OneTrust, helping organizations detect sensitive data exposure, oversharing, and compliance risks across cloud and non-Microsoft data sources. By unifying privacy intelligence with Sentinel analytics and automation, this integration enables security and privacy teams to respond more quickly to data risk events and support responsible data use and AI-ready governance.

Pathlock

Pathlock integrates with Microsoft Sentinel to bring SAP-specific threat detection and response signals into centralized security operations.
The connector forwards security-relevant SAP events into Sentinel, enabling SOC teams to correlate SAP activity with broader enterprise telemetry and investigate threats using familiar SIEM workflows. By enriching Sentinel with SAP security context and focused detection logic, this integration helps organizations improve visibility into SAP landscapes, reduce noise, and accelerate detection and response for risks affecting critical business systems.

Quokka Q-scout

Quokka Q-scout integrates with Microsoft Sentinel to centralize mobile application risk intelligence across Microsoft Intune-managed devices. The connector automatically ingests app inventories from Intune, analyzes them using Quokka’s mobile app vetting engines, and streams security, privacy, and compliance risk findings into Sentinel. By surfacing app-level risks through Sentinel analytics and alerts, this integration helps security teams identify malicious or high-risk mobile apps, prioritize remediation, and strengthen mobile security posture without deploying agents or disrupting users.

Semperis Lightning

Semperis Lightning integrates with Microsoft Sentinel to deliver deep visibility into identity‑centric risk across Active Directory and Microsoft Entra environments. The connector ingests identity security telemetry such as indicators of exposure, Tier 0 assets, and attack path insights into Sentinel, enabling security teams to correlate identity risks with broader security signals. By bringing rich identity context into Sentinel analytics, hunting, and investigations, this integration helps organizations detect, prioritize, and respond to identity‑driven attacks more effectively across hybrid identity infrastructures.

Synqly

Synqly integrates with Microsoft Sentinel to simplify and scale security integrations through a unified API approach.
The connector enables organizations and security vendors to establish a bi‑directional connection with Sentinel without relying on brittle, point‑to‑point integrations. By abstracting common integration challenges such as authentication handling, retries, and schema changes, Synqly helps teams orchestrate security data flows into and out of Sentinel more reliably, supporting faster onboarding of new data sources and more maintainable integrations at scale.

Versasec vSEC:CMS

Versasec vSEC:CMS integrates with Microsoft Sentinel to provide centralized visibility into credential lifecycle and system health events. The connector securely streams vSEC:CMS and vSEC:CLOUD alerts and status data into Sentinel using the Codeless Connector Framework (CCF), transforming credential management activity into correlation-ready security signals. By bringing smart card, token, and passkey management telemetry into Sentinel, this integration helps security teams monitor authentication infrastructure health, investigate credential-related incidents, and unify identity security operations within their SIEM workflows.

VirtualMetric DataStream

VirtualMetric DataStream integrates with Microsoft Sentinel to optimize how security telemetry is collected, normalized, and routed across the Microsoft security ecosystem. Acting as a high-performance telemetry pipeline, DataStream intelligently filters and enriches logs, sending high-value security data to Sentinel while routing less-critical data to Sentinel data lake or Azure Blob Storage for cost-effective retention. By reducing noise upstream and standardizing logs to Sentinel-ready schemas, this integration helps organizations control ingestion costs, improve detection quality, and streamline threat hunting and compliance workflows.

VMRay

VMRay integrates with Microsoft Sentinel to enrich SIEM and SOAR workflows with automated sandbox analysis and high-fidelity, behavior-based threat intelligence.
The connector enables suspicious files and phishing URLs to be submitted directly from Sentinel to VMRay for dynamic analysis, while validated, high-confidence indicators of compromise (IOCs) are streamed back into Sentinel’s Threat Intelligence repository for correlation and detection. By adding detailed attack-chain visibility and enriched incident context, this integration helps SOC teams reduce investigation time, improve detection accuracy, and strengthen automated response workflows across Sentinel environments.

XBOW

XBOW integrates with Microsoft Sentinel to bring autonomous penetration testing insights directly into security operations workflows. The connector ingests automated penetration test findings from the XBOW platform into Sentinel, enabling security teams to analyze validated exploit activity alongside alerts, incidents, and other security telemetry. By correlating offensive testing results with Sentinel detections, this integration helps organizations identify monitoring gaps, validate detection coverage, and strengthen defensive controls using real‑world, continuously generated attack evidence.

Zero Networks Segment Audit

Zero Networks Segment integrates with Microsoft Sentinel to provide visibility into micro-segmentation and access-control activity across the network. The connector can collect audit logs or activities from Zero Networks Segment, enabling security teams to monitor policy changes, administrative actions, and access events related to MFA-based network segmentation. By bringing segmentation audit telemetry into Sentinel, this integration supports compliance monitoring, investigation of suspicious changes, and faster detection of attempts to bypass lateral-movement controls within enterprise environments.

Zscaler Internet Access (ZIA)

Zscaler Internet Access integrates with Microsoft Sentinel to centralize cloud security telemetry from web and firewall traffic.
The connector enables ZIA logs to be ingested into Sentinel, allowing security teams to correlate Zscaler Internet Access signals with other enterprise data for improved threat detection, investigation, and response. By bringing ZIA web, firewall, and security events into Sentinel analytics and hunting workflows, this integration helps organizations gain broader visibility into internet-based threats and strengthen Zero Trust security operations.

In addition to these solutions from our third-party partners, we are also excited to announce the following connector published by the Microsoft Sentinel team:

GitHub Enterprise Audit Logs

Microsoft’s Sentinel Promise

For Customers

Every connector in the Microsoft Sentinel ecosystem is built to work out of the box. In the unlikely event a customer encounters any issue with a connector, the App Assure team stands ready to assist.

For Software Developers

Software partners in need of assistance in creating or updating a Sentinel solution can also leverage Microsoft’s Sentinel Promise to support our shared customers. For developers seeking to build agentic experiences utilizing Sentinel data lake, we are excited to announce the launch of our Sentinel Advisory Service to guide developers across their Sentinel journey. Customers and developers alike can reach out to us via our intake form.
Learn More

Microsoft Sentinel data lake

Microsoft Sentinel data lake: Unify signals, cut costs, and power agentic AI
Introducing Microsoft Sentinel data lake
What is Microsoft Sentinel data lake
Unlocking Developer Innovation with Microsoft Sentinel data lake

Microsoft Sentinel Codeless Connector Framework (CCF)

Create a codeless connector for Microsoft Sentinel
Public Preview Announcement: Microsoft Sentinel CCF Push
What’s New in Microsoft Sentinel Monthly Blog

Microsoft App Assure

App Assure home page
App Assure services
App Assure blog
App Assure Request Assistance Form
App Assure Sentinel Advisory Services announcement
App Assure’s promise: Migrate to Sentinel with confidence
App Assure’s Sentinel promise now extends to Microsoft Sentinel data lake
Ignite 2025 new Microsoft Sentinel connectors announcement

Microsoft Security

Microsoft’s Secure Future Initiative
Microsoft Unified SecOps

Editor's Note - April 7th, 2026: This blog was updated to include connector descriptions for BigID, Commvault, Semperis, and XBOW.

I'm stuck!
Logically, I'm not sure how/if I can do this. I want to monitor for Entra ID group additions - I can get this to work for a single entry using this:

AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName == "Add member to group"
| where TargetResources[0].type == "User"
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| where GroupName == "NameOfGroup" // <-- This returns the single entry
| extend User = tostring(TargetResources[0].userPrincipalName)
| summarize ['Count of Users Added']=dcount(User), ['List of Users Added']=make_set(User) by GroupName
| sort by GroupName asc

However, I have a list of 20 priv groups that I need to monitor. I can do this using:

let PrivGroups = dynamic(['name1','name2','name3']);

and then call that like this:

blahblah
| where TargetResources[0].type == "User"
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| where GroupName has_any (PrivGroups)

But that's a bit dirty to update - I wanted to call a watchlist. I've tried defining it with:

let PrivGroup = (_GetWatchlist('TestList'));

and tried calling it like:

blahblah
| where TargetResources[0].type == "User"
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| where GroupName has_any ('PrivGroup')

I've tried dropping the let and attempted to look up the watchlist directly:

| where GroupName has_any (_GetWatchlist('TestList'))

The query runs but doesn't return any results (obviously I know the result exists) - how do I look up that extracted value against a watchlist? Any ideas or pointers why I'm wrong would be appreciated!

Many thanks
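For reference, here is a sketch of one pattern that typically works for this kind of lookup, assuming the watchlist stores the group names in a custom column named GroupName (adjust to your watchlist's actual column, or use the built-in SearchKey column). The key point is that _GetWatchlist() returns a table, not a dynamic array, so the extracted value can be matched with the in operator against a projected single-column table; has_any with a quoted string like 'PrivGroup' only matches the literal text:

```kusto
// Sketch: match the extracted group name against a watchlist column.
// Assumes the watchlist 'TestList' has a column named 'GroupName'.
let PrivGroups = _GetWatchlist('TestList') | project GroupName;
AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName == "Add member to group"
| where TargetResources[0].type == "User"
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
// 'in' accepts a single-column tabular expression, so this filters
// to only the groups listed in the watchlist:
| where GroupName in (PrivGroups)
| extend User = tostring(TargetResources[0].userPrincipalName)
| summarize ['Count of Users Added']=dcount(User), ['List of Users Added']=make_set(User) by GroupName
| sort by GroupName asc
```

If has_any is preferred, the watchlist column can instead be collapsed into a dynamic array first, e.g. `let PrivGroups = toscalar(_GetWatchlist('TestList') | summarize make_list(GroupName));` - but for exact group-name matches, in is usually the simpler and faster choice.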