Update: Changing the Account Name Entity Mapping in Microsoft Sentinel
The upcoming update introduces more consistent and predictable entity data across analytics, incidents, and automation by standardizing how the Account Name property is populated when using UPN-based mappings in analytic rules. Going forward, the Account Name property will consistently contain only the UPN prefix, with new dedicated fields added for the full UPN and the UPN suffix. While this improves consistency and enables more granular automation, customers who rely on specific Account Name values in automation rules or Logic App playbooks may need to take action.

Timeline
Effective date: July 1, 2026. The change will apply automatically - no opt-in is required.

Scope of impact
Analytics rules that map a User Principal Name (UPN) to the Account Name entity field, where the resulting alerts are processed by automation rules or Logic App playbooks that reference the AccountName property.

What's changing
Currently: When an analytic rule maps a full UPN (for example, 'user@domain.com') to the Account Name field, the resulting input value for the automation rule and/or Logic App playbook is inconsistent. In some cases it contains only the UPN prefix ('user'), and in other cases the full UPN ('user@domain.com').
After July 1, 2026: The Account Name property will consistently contain only the UPN prefix, and the following new fields will be added to the entity object:
AccountName (UPN prefix)
UPNSuffix
UserPrincipalName (full UPN)
This change enables more granular filtering and automation logic based on these new fields.

Example
Before: The analytics rule maps 'user@domain.com'; the automation rule receives Account Name: 'user' or 'user@domain.com' (inconsistent).
After: The analytics rule maps 'user@domain.com'; the automation rule receives Account Name: 'user' and UPNSuffix: 'domain.com'. The Logic App playbook and the SecurityAlert table receive AccountName: 'user', UPNSuffix: 'domain.com', and UserPrincipalName: 'user@domain.com'.
(The original post includes a before/after comparison table covering the SecurityAlert table, Logic App Playbook, and Entity views.)

Why does it matter?
If your automation logic relies on exact string comparisons against the full UPN stored in Account Name, those conditions may no longer match after the update. This most commonly affects:
Automation rules using an "Equals" condition on Account Name
Logic App playbooks comparing the entity field 'accountName' to a full UPN value

Call to action
Avoid strict equality checks against Account Name. Use flexible operators such as Contains or Starts with, and leverage the new UPNSuffix field for clearer intent.

Example update
Before the change, Account Name shows as 'user' or 'user@domain.com'; after the change, it shows as 'user'. Recommended conditions:
Account Name Contains/Startswith 'user'
UPNSuffix Equals/Startswith/Contains 'domain.com'
This approach ensures compatibility both before and after the change takes effect.

Where to update
Review any filters, conditions, or branching logic that depend on Account Name values.
Automation rules: use the 'Account name' field.
Logic App playbooks: update conditions referencing the entity field 'accountName'.
(Screenshots of an automation rule before and after the change are included in the original post.)

Summary
A consistency improvement to Account Name mapping is coming on July 1, 2026.
The change affects automation rules and Logic App playbooks that rely on UPN to Account Name mappings.
New UPN-related fields provide better structure and control.
Customers should follow the recommendations above before the effective change date.
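To check how the mapped account entity will look in your own workspace once the change rolls out, a minimal KQL sketch over the SecurityAlert table is shown below. It assumes account entities carry Name, UPNSuffix, and UserPrincipalName properties as described above; the UserPrincipalName property will only be populated after the rollout, so adjust the property names to what your alerts actually contain.

SecurityAlert
| where TimeGenerated > ago(7d)
| extend EntitiesJson = todynamic(Entities)
| mv-expand Entity = EntitiesJson
| where tostring(Entity.Type) == "account"
| project TimeGenerated, AlertName,
          AccountName = tostring(Entity.Name),
          UPNSuffix = tostring(Entity.UPNSuffix),
          UserPrincipalName = tostring(Entity.UserPrincipalName)

Running this before and after the effective date is a quick way to validate that your automation rule and playbook conditions keep matching.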
What's new in Microsoft Sentinel: February 2026

February brings a set of new innovations to Sentinel that helps you work with security content across your SOC. This month's updates focus on how security teams ingest, manage, and operationalize content, with new connectors, multi-tenant content distribution capabilities, and an enhanced UEBA Essentials solution to surface high-risk behavior faster across cloud and identity environments. We're also introducing new partner-built agentic experiences available through Microsoft Security Store, enabling customers to extend Sentinel with specialized expertise directly inside their existing workflows. Together, these innovations help SOC teams move faster, scale smarter, and unlock deeper security insight without added complexity.

Expand your visibility and capabilities with Sentinel content

Seamlessly onboard security data with growing out-of-the-box connectors (general availability)
Sentinel continues to expand its connector ecosystem, making it easier for security teams to bring together data from across cloud, SaaS, and on-premises environments so nothing critical slips through the cracks. With broader coverage and faster onboarding, SOCs can unlock unified visibility, stronger analytics, and deeper context across their entire security stack. Customers can now use out-of-the-box connectors and solutions for:
Mimecast Audit Logs
CrowdStrike Falcon Endpoint Protection
Vectra XDR
Palo Alto Networks Cloud NGFW
SocPrime
Proofpoint on Demand (POD) Email Security
Pathlock
MongoDB
Contrast ADR
For the full list of connectors, see our documentation. Share your input on what to prioritize next with our App Assure team.

Microsoft 365 Copilot data connector (public preview)
The Microsoft 365 Copilot connector brings Microsoft 365 Copilot audit logs and activity data into Sentinel, giving security teams visibility into how Microsoft 365 Copilot is being used across their organization. Once ingested, this data can power analytics rules, custom detections, workbooks, automation, and investigations, helping SOC teams quickly spot anomalies, misuse, and policy violations. Customers can also send this data to the Sentinel data lake for advanced scenarios, such as custom graphs and MCP integrations, while benefiting from lower cost ingestion and flexible retention. Learn more here.
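As a rough sketch of the hunting this connector enables, the query below flags users whose daily Copilot activity spikes well above their own baseline. The table name MicrosoftCopilotActivity_CL and the UserId_s column are placeholders, not the connector's documented schema - substitute the table and columns the connector actually creates in your workspace.

// Placeholder table/column names - replace with the schema created by the
// Microsoft 365 Copilot connector in your workspace.
MicrosoftCopilotActivity_CL
| where TimeGenerated > ago(14d)
| summarize DailyActions = count() by UserId = tostring(UserId_s), Day = bin(TimeGenerated, 1d)
| summarize AvgDaily = avg(DailyActions), MaxDaily = max(DailyActions) by UserId
| where MaxDaily > 3 * AvgDaily and MaxDaily > 50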
Transition your Sentinel connectors to the codeless connector framework (CCF)
Microsoft is modernizing data connectors by shifting from Azure Function-based connectors to the codeless connector framework (CCF). CCF enables partners, customers, and developers to build custom connectors that ingest data into Sentinel with a fully SaaS-managed experience, built-in health monitoring, centralized credential management, and enhanced performance. We recommend that customers review their deployed connectors and move to the latest CCF versions to ensure uninterrupted data collection and continued access to the latest Sentinel capabilities. As part of Azure's modernization of custom data collection, the legacy custom data collection API will be retired in September 2026.

Centrally manage and distribute Sentinel content across multiple tenants (public preview)
For partners and SOCs managing multiple Sentinel tenants, you can centrally manage and distribute Sentinel content across multiple tenants from the Microsoft Defender portal. With multi-tenant content distribution, you can replicate analytics rules, automation rules, workbooks, and alert tuning rules across tenants instead of rebuilding the same detections, automation, and dashboards in one environment at a time. This helps you onboard new tenants faster, reduce configuration drift, and maintain a consistent security baseline while still keeping local execution in each target tenant under centralized control. Learn more: New content types supported in multi-tenant content distribution

Find high-risk anomalous behavior faster with an enhanced UEBA Essentials solution (public preview)
The UEBA Essentials solution now helps SOC teams uncover high-risk anomalous behavior faster across Azure, AWS, GCP, and Okta. With expanded multi-cloud anomaly detection and new queries powered by the anomalies table, analysts can quickly surface the riskiest activity, establish reliable behavioral baselines, and understand anomalies in context without chasing noisy or disconnected signals. UEBA Essentials aligns activity to MITRE ATT&CK, highlights complex malicious IP patterns, and builds a comprehensive anomaly profile for users in seconds, reducing investigation time while improving signal quality across identity and cloud environments. UEBA Essentials is available directly from the Sentinel content hub, with 30+ prebuilt UEBA queries ready to deploy. Behavior analytics can be enabled automatically from the connectors page as new data sources are added, making it easy to turn deeper insight into immediate action. For more information, see: UEBA Solution Power Boost: Practical Tools for Anomaly Detection
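A minimal sketch of the kind of anomaly triage these prebuilt queries build on, ranking users by recent anomaly volume and score from the Anomalies table. The column names used here (UserPrincipalName, AnomalyTemplateName, Score, Tactics) follow the commonly documented Anomalies schema but should be verified in your workspace.

Anomalies
| where TimeGenerated > ago(7d)
| where isnotempty(UserPrincipalName)
| summarize AnomalyCount = count(),
            MaxScore = max(Score),
            Templates = make_set(AnomalyTemplateName, 10),
            TacticsSeen = make_set(tostring(Tactics), 10)
    by UserPrincipalName
| top 20 by MaxScore desc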
Extend Sentinel with partner-built Security Copilot agents in Microsoft Security Store (general availability)
You can extend Sentinel with partner-built Security Copilot agents that are discoverable and deployable through Microsoft Security Store in the Defender experience. These AI-powered agents are created by trusted partners specifically to work with Sentinel to deliver packaged expertise for investigation, triage, and response without requiring you to build your own agentic workflows from scratch. These partner-built agents work with Sentinel analytics and incidents to help SOC teams triage faster, investigate deeper, and surface insights that would otherwise take hours of manual effort. For example, these agents can review Sentinel and Defender environments, map attacker activity, or automate forensic analysis and SOC reporting. BlueVoyant's Watchtower agent helps optimize Sentinel and Defender configurations, AdaQuest's Data Leak agent accelerates response by surfacing risky data exposure and identity misuse, and Glueckkanja's Attack Mapping agent automatically maps fragmented entities and attacker behavior into a coherent investigation story. Together, these agents show how the Security Store turns partner innovation into enterprise-ready, Security Copilot-powered capabilities that you can use in your existing SOC workflows. Browse these and more partner-built Security Copilot agents in the Security Store within the Defender portal. At Ignite, we announced the native integration of Security Store within the Defender portal. Read more about the GA announcement here: Microsoft Security Store: Now Generally Available

Explore Sentinel experience

Enhanced reports in the Threat Intelligence Briefing Agent (general availability)
The Threat Intelligence Briefing Agent now applies a structured knowledge graph to Microsoft Defender for Threat Intelligence, enabling it to surface fresher, more relevant threats tailored to a customer's specific industry and region. Building on this foundation, the agent also features embedded, high-fidelity Microsoft Threat Intelligence citations, providing authoritative context directly within each insight. With these advancements, security teams gain clearer, more actionable guidance and mitigation steps through context-rich insights aligned to their environment, helping them focus on what matters most and respond more confidently to emerging threats. Learn more: Microsoft Security Copilot Threat Intelligence Briefing Agent in Microsoft Defender

Microsoft Purview Data Security Investigations (DSI) integrated with Sentinel graph (general availability)
Sentinel now brings together data-centric and threat-centric insights to help teams understand risk faster and respond with more confidence. By combining AI-powered deep content analysis from Microsoft Purview with activity-centric graph analytics in Sentinel, security teams can identify sensitive or risky data, see how it was accessed, moved, or exposed, and take action from a single experience. This gives SOC and data security teams a full, contextual view of the potential blast radius, connecting what happened to the data with who accessed it and how, so investigations are faster, clearer, and more actionable. Start using the Microsoft Purview Data Security Investigations (DSI) integration with the Sentinel graph to give your analysts richer context and streamline end-to-end data risk investigations.

Deadline to migrate the Sentinel experience from Azure to Defender extended to March 2027
To reduce friction and support customers of all sizes, we are extending the sunset date for managing Sentinel in the Azure portal to March 31, 2027. This additional time ensures customers can transition confidently while taking advantage of new capabilities that are becoming available in the Defender portal. Learn more about this decision, why you should start planning your move today, and find helpful resources here: UPDATE: New timeline for transitioning Sentinel experience to Defender portal

Events and webinars
Stay connected with the latest security innovations and best practices through global conferences and expert-led sessions that bring the community together to learn, connect, and explore how Microsoft is delivering AI-driven, end-to-end security for the modern enterprise.

Join us at RSAC, March 23-26, 2026 at the Moscone Center in San Francisco
Register for RSAC and stop by the Microsoft booth to see our latest security innovations in action. Learn how Sentinel SIEM and platform help organizations stay ahead of threats, simplify operations, and protect what matters most. Register today!

Microsoft Security Webinars
Discover upcoming sessions on Sentinel SIEM & platform, Defender, and more. Sign up today and be part of the conversation that shapes security for everyone. Learn more about upcoming webinars.
Additional resources
Blogs: UPDATE: New timeline for transitioning Sentinel experience to Defender portal; Accelerate your move to Microsoft Sentinel with AI-powered SIEM migration tool; Automating Microsoft Sentinel: A blog series on enabling Smart Security; The Agentic SOC Era: How Sentinel MCP Enables Autonomous Security Reasoning
Documentation: What Is a Security Graph?; SIEM migration tool; Onboarding to Microsoft Sentinel data lake from the Defender portal

Stay connected
Check back each month for the latest innovations, updates, and events to ensure you're getting the most out of Sentinel. We'll see you in the next edition!

SAP & Business-Critical App Security Connectors
I validated what it takes to make SAP and SAP-adjacent security signals operational in a SOC: reliable ingestion, stable schemas, and detections that survive latency and schema drift. I focus on four integrations into Microsoft Sentinel: SAP Enterprise Threat Detection (ETD) cloud edition (SAPETDAlerts_CL, SAPETDInvestigations_CL), SAP S/4HANA Cloud Public Edition agentless audit ingestion (ABAPAuditLog), Onapsis Defend (Onapsis_Defend_CL), and SecurityBridge (also ABAPAuditLog). Because vendor API specifics for the ETD Retrieval API / Audit Retrieval API aren't publicly detailed in the accessible primary sources I could retrieve, I explicitly label pagination/rate/time-window behaviors as unspecified where appropriate.

Connector architectures and deployment patterns
For SAP-centric telemetry I separate two planes:
First is SAP application telemetry that lands in SAP-native tables, especially ABAPAuditLog, ABAPChangeDocsLog, ABAPUserDetails, and ABAPAuthorizationDetails. These tables are the foundation for ABAP-layer monitoring and are documented with typed columns in the Azure Monitor Logs reference.
Second is external "security product" telemetry (ETD alerts, Onapsis findings). These land in custom tables (*_CL) and typically require a SOC-owned normalization layer to avoid brittle detections.
Within Microsoft's SAP solution itself, there are two deployment models: agentless and the containerized connector agent. The agentless connector uses SAP Cloud Connector and SAP Integration Suite to pull logs, and Microsoft documents it as the recommended approach; the containerized agent is being deprecated and disabled on September 14, 2026.
On the "implementation technology" axis, Sentinel integrations generally show up as:
- Codeless Connector Framework (CCF) pollers/pushers (SaaS-managed ingestion definitions with DCR support).
- Function/Logic App custom pipelines using the Logs Ingestion API when you need custom polling, enrichment, or a vendor endpoint that isn't modeled in CCF.
In my view, ETD and S/4HANA Cloud connectors are "agentless" from the Sentinel side (API credentials only), while Onapsis Defend and SecurityBridge connectors behave like push pipelines because Microsoft requires an Entra app + DCR permissions (the typical Logs Ingestion API pattern).

Authentication and secrets handling
Microsoft documents the required credentials per connector:
- The ETD cloud connector requires a Client Id + Client Secret for the ETD Retrieval API (token mechanics unspecified).
- The S/4HANA Cloud Public Edition connector requires a Client Id + Client Secret for the Audit Retrieval API (token mechanics unspecified), and Microsoft notes "alternative authentication mechanisms" exist (details in the linked repo are unspecified in accessible sources).
- The Onapsis Defend and SecurityBridge connectors require a Microsoft Entra ID app registration and Azure permission to assign Monitoring Metrics Publisher on DCRs. This maps directly to the Logs Ingestion API guidance, where a service principal is granted DCR access via that role (or the Microsoft.Insights/Telemetry/Write data action).
For production, I treat these as "SOC platform secrets":
- Store client secrets/certificates in Key Vault when you own the pipeline (Function/Logic App); rotate on an operational schedule; alert on auth failures and sudden ingestion drops (a sketch of such an alert follows this list).
- For vendor-managed ingestion (Onapsis/SecurityBridge), I still require: documented ownership of the Entra app, explicit RBAC scope for the DCR, and change control for credential rotation, because a rotated secret is effectively a data outage.
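A minimal sketch of that kind of alert on the Sentinel side, assuming the SentinelHealth table is enabled for the workspace; the status values and columns follow the documented SentinelHealth schema and should be verified before productionizing.

SentinelHealth
| where TimeGenerated > ago(1d)
| where SentinelResourceType == "Data connector"
| where Status != "Success"
| summarize Failures = count(), LastFailure = max(TimeGenerated), Reasons = make_set(Description, 5)
    by SentinelResourceName, Status
| order by LastFailure desc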
API behaviors and ingestion reliability
For the ETD Retrieval API and Audit Retrieval API, pagination/rate limits/time windows are unspecified in the accessible vendor documentation I could retrieve. I therefore design ingestion and detections assuming non-ideal API behavior: late-arriving events, cursor/page limitations, and throttling. CCF's RestApiPoller model supports explicit retry policy, windowing, and multiple paging strategies, so if/when you can obtain vendor API semantics, you can encode them declaratively (rather than writing fragile code).
For the SAP solution's telemetry plane, Microsoft provides strong operational cues: agentless collection flows through Integration Suite, and troubleshooting typically happens in the Integration Suite message log; this is where I validate delivery failures before debugging Sentinel-side parsers.
For scheduled detections, I always account for ingestion delay explicitly. Microsoft's guidance is to widen the event lookback by the expected delay and then constrain on ingestion_time() to prevent duplicates from overlap.

Schema, DCR transformations, and normalization layer

Connector attribute comparison (Auth method | Sentinel tables | Default polling | Backfill | Pagination | Rate limits):
- SAP ETD (cloud): Client ID + Secret (ETD Retrieval API) | SAPETDAlerts_CL, SAPETDInvestigations_CL | unspecified | unspecified | unspecified | unspecified
- SAP S/4HANA Cloud (agentless): Client ID + Secret (Audit Retrieval API); alternative auth referenced | ABAPAuditLog | unspecified | unspecified | unspecified | unspecified
- Onapsis Defend: Entra app + DCR permission (Monitoring Metrics Publisher) | Onapsis_Defend_CL | n/a (push pattern) | unspecified | n/a | unspecified
- SecurityBridge: Entra app + DCR permission (Monitoring Metrics Publisher) | ABAPAuditLog | n/a (push pattern) | unspecified | n/a | unspecified

Ingestion-time DCR transformations
Sentinel supports ingestion-time transformations through DCRs to filter, enrich, and mask data before it's stored. Example: I remove low-signal audit noise and mask email identifiers in ABAPAuditLog:

source
| where isnotempty(TransactionCode) and isnotempty(User)
| where TransactionCode !in ("SM21","ST22") // example noise; tune per tenant
| extend Email = iif(Email has "@", strcat(substring(Email,0,2),"***@", tostring(split(Email,"@")[1])), Email)

Normalization functions
Microsoft explicitly recommends using the SAP solution functions instead of raw tables, because the infrastructure beneath the functions can change without breaking detections. I follow the same pattern for ETD/Onapsis custom tables: I publish SOC-owned functions as a schema contract.
.create-or-alter function with (folder="SOC/SAP") Normalize_ABAPAudit() {
    ABAPAuditLog
    | project TimeGenerated, SystemId, ClientId, User, TransactionCode, TerminalIpV6, MessageId, MessageClass, MessageText, AlertSeverityText, UpdatedOn
}

.create-or-alter function with (folder="SOC/SAP") Normalize_ETDAlerts() {
    SAPETDAlerts_CL
    | extend AlertId = tostring(coalesce(column_ifexists("AlertId",""), column_ifexists("id",""))),
             Severity = tostring(coalesce(column_ifexists("Severity",""), column_ifexists("severity",""))),
             SapUser = tostring(coalesce(column_ifexists("SAP_User",""), column_ifexists("User",""), column_ifexists("user","")))
    | project-reorder TimeGenerated, AlertId, Severity, SapUser
}

.create-or-alter function with (folder="SOC/SAP") Normalize_Onapsis() {
    Onapsis_Defend_CL
    | extend FindingId = tostring(coalesce(column_ifexists("FindingId",""), column_ifexists("id",""))),
             Severity = tostring(coalesce(column_ifexists("Severity",""), column_ifexists("severity",""))),
             SapUser = tostring(coalesce(column_ifexists("SAP_User",""), column_ifexists("user","")))
    | project-reorder TimeGenerated, FindingId, Severity, SapUser
}

Health/lag monitoring and anti-gap
I monitor both connector health and ingestion delay. SentinelHealth is the native health table, and Microsoft provides a health workbook and a schema reference for the fields.

let lookback=24h;
union isfuzzy=true
    (ABAPAuditLog | extend T="ABAPAuditLog"),
    (SAPETDAlerts_CL | extend T="SAPETDAlerts_CL"),
    (Onapsis_Defend_CL | extend T="Onapsis_Defend_CL")
| where TimeGenerated > ago(lookback)
| summarize LastEvent=max(TimeGenerated),
    P95DelaySec=percentile(datetime_diff("second", ingestion_time(), TimeGenerated), 95),
    Events=count()
    by T

Anti-gap scheduled-rule frame (Microsoft pattern):

let ingestion_delay=10m;
let rule_lookback=5m;
ABAPAuditLog
| where TimeGenerated >= ago(ingestion_delay + rule_lookback)
| where ingestion_time() > ago(rule_lookback)

SOC detections for ABAP privilege abuse, fraud/insider behavior, and audit readiness

Privileged ABAP transaction monitoring
ABAPAuditLog includes TransactionCode, User, SystemId, and terminal/IP fields, so I start with a curated high-risk tcode set and then add baselines.

let PrivTCodes=dynamic(["SU01","PFCG","SM59","RZ10","SM49","SE37","SE16","SE16N"]);
Normalize_ABAPAudit()
| where TransactionCode in (PrivTCodes)
| summarize Actions=count(), Ips=make_set(TerminalIpV6,5) by SystemId, User, TransactionCode, bin(TimeGenerated, 1h)
| where Actions >= 3

Fraud/insider scenario: sensitive object change near privileged audit activity
ABAPChangeDocsLog exposes ObjectClass, ObjectId, and change types; I correlate sensitive object changes to privileged transactions in a tight window.

let w=10m;
let Sensitive=dynamic(["BELEG","BPAR","PFCG","IDENTITY"]);
ABAPChangeDocsLog
| where ObjectClass in (Sensitive)
| project ChangeTime=TimeGenerated, SystemId, User=tostring(column_ifexists("User","")), ObjectClass, ObjectId, TypeOfChange=tostring(column_ifexists("ItemTypeOfChange",""))
| join kind=innerunique (
    Normalize_ABAPAudit()
    | project AuditTime=TimeGenerated, SystemId, User, TransactionCode
) on SystemId, User
| where AuditTime between (ChangeTime-w .. ChangeTime+w)
| project ChangeTime, AuditTime, SystemId, User, ObjectClass, ObjectId, TransactionCode, TypeOfChange

Audit-ready pipeline: monitoring continuity and configuration touchpoints
I treat audit logging itself as a monitored control. A simple SOC-safe control is "volume drop" by system; it's vendor-agnostic and catches pipeline breaks and deliberate suppression.
Normalize_ABAPAudit()
| summarize PerHour=count() by SystemId, bin(TimeGenerated, 1h)
| summarize Avg=avg(PerHour), Latest=arg_max(TimeGenerated, PerHour) by SystemId
| where Latest_PerHour < (Avg * 0.2)

Where Onapsis/ETD are present, I increase fidelity by requiring "privileged ABAP activity" plus an external SAP-security product finding (field mappings are tenant-specific; normalize first):

let win=1h;
Normalize_ABAPAudit()
| where TransactionCode in ("SU01","PFCG","SM59","SE16N")
| join kind=leftouter (Normalize_Onapsis()) on $left.User == $right.SapUser
| where isempty(FindingId) == false and TimeGenerated1 between (TimeGenerated .. TimeGenerated+win)
| project TimeGenerated, SystemId, User, TransactionCode, FindingId, OnapsisSeverity=Severity

Production validation, troubleshooting, and runbook
For acceptance, I validate in this order: table creation, freshness/lag percentiles, connector health state, and a cross-check of event counts against the upstream system for the same UTC window (where available). Connector health monitoring is built around SentinelHealth plus the Data collection health workbook. For SAP agentless ingestion, Microsoft states most troubleshooting happens in Integration Suite message logs - this is where I triage authentication/networking failures before tuning KQL. For Onapsis/SecurityBridge-style ingestion, I validate Entra app auth, DCR permission assignment (Monitoring Metrics Publisher), and a minimal ingestion test payload using the Logs Ingestion API tutorial flow.
Operational runbook items I treat as non-optional: health alerts on connector failure and freshness drift; scheduled rule anti-gap logic; playbooks that capture evidence bundles (ABAPAuditLog slice + user context from ABAPUserDetails/ABAPAuthorizationDetails - a query sketch follows below); DCR filters to reduce noise and cost; and change control for normalization functions and watchlists.
SOC "definition of done" checklist (short):
1) Tables present and steadily ingesting;
2) P95 ingestion delay measured and rules use the anti-gap pattern;
3) SentinelHealth enabled with alerts;
4) SOC-owned normalization functions deployed;
5) at least one privileged-tcode rule + one change-correlation rule + one audit-continuity rule in production.
(Mermaid ingestion flow diagram omitted from this text version.)
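Returning to the evidence-bundle runbook item above: a minimal sketch that pulls a 24-hour ABAPAuditLog slice for one user and attaches user context from ABAPUserDetails. The ABAPUserDetails column names are accessed defensively here because they vary by solution version, and the user value is an example only.

let suspectUser = "JSMITH";  // example user under investigation
ABAPAuditLog
| where TimeGenerated >= ago(24h) and User == suspectUser
| project TimeGenerated, SystemId, ClientId, User, TransactionCode, MessageId, MessageText, TerminalIpV6
| join kind=leftouter (
    ABAPUserDetails
    | extend UserKey = tostring(column_ifexists("User", ""))
    | summarize UserDetails = make_bag(pack_all()) by UserKey
) on $left.User == $right.UserKey
| project-away UserKey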
How Should a Fresher Learn Microsoft Sentinel Properly?
Hello everyone, I am a fresher interested in learning Microsoft Sentinel and preparing for SOC roles. Since Sentinel is a cloud-native enterprise tool usually used inside organizations, I am unsure how individuals without company access are expected to gain real hands-on experience. I would like to hear from professionals who actively use Sentinel:
- How do freshers typically learn and practice Sentinel?
- What learning resources or environments are commonly used by beginners?
- What level of hands-on experience is realistically expected at entry level?
I am looking for guidance based on real industry practice. Thank you for your time.
Observed Automation Discrepancies
Hi Team, I want to understand the logic behind the Defender XDR automation engine and how it works. I expected identical incident and automation handling in both environments, but I have observed discrepancies in the Defender XDR automation engine's behavior. Specifically, incidents with high-severity alerts were automatically closed by Defender XDR's automation engine before reaching the SOC for review, which raised concerns among clients and colleagues. Our own automation rules are clearly logged in the activity log, whereas actions performed by Microsoft Defender XDR are less transparent. For example, a high-severity alert related to a phishing incident was closed by Defender XDR's automation, resulting in the associated incident being closed and removed from SOC review. The automation was not triggered by our own rules but by Microsoft's Defender XDR, so I am seeking clarification on the underlying logic.
Microsoft Sentinel for SAP Agentless connector GA
Dear Community, today is the day: our new agentless connector for the Microsoft Sentinel Solution for SAP applications is Generally Available now! Fully onboarded to SAP's official Business Accelerator Hub and ready for prime time wherever your SAP systems are waiting - on-premises, hyperscalers, RISE, or GROW - to be protected.
Let's hear from an agentless customer: "With the Microsoft Sentinel Solution for SAP and its new agentless connector, we accelerated deployment across our SAP landscape without the complexity of containerized agents. This streamlined approach elevated our SOC's visibility into SAP security events, strengthened our compliance posture, and enabled faster, more informed incident response" - SOC Specialist, North American aviation company
Use the video below to kick off your own agentless deployment today. #Kudos to the amazing mvigilante for showing us around the new connector!
But we didn't stop there! Security is being reengineered for the AI era - moving from static, rule-based controls to platform-driven, machine-speed defence that anticipates threats before they strike. Attackers think in graphs - Microsoft does too. We're bringing relationship-aware context to Microsoft Security - so defenders and AI can see connections, understand the impact of a potential compromise (blast radius), and act faster across pre-breach and post-breach scenarios, including SAP systems - your crown jewels. See it in action in the phishing compromise below, which led to an SAP login bypassing MFA, followed by operating-system activities on the SAP host downloading trojan software. Enjoy this clickable experience for more details on the scenario.
(Image: shows how a phishing compromise escalated to an SAP MFA bypass, highlighting cross-domain correlation.)
The Sentinel Solution for SAP has AI-first in mind and directly integrates with our security platform on the Defender portal for enterprise-wide signal correlation, Security Copilot reasoning, and Sentinel Data Lake usage. Your real-time SAP detections operate on the Analytics tier for instant results and threat hunting, while the same SAP logs get mirrored to the lake for cost-efficient long-term storage (up to 12 years). Access that data for compliance reporting or historic analysis through KQL jobs on the lake (a query sketch appears below). No more "yeah, I have the data stored somewhere to tick the audit report check box" - you can actually query and use your SAP telemetry in long-term storage at scale. Learn more here.
Findings from the Agentless Connector preview
During our preview we learned that the majority of customers immediately profit from the far smoother onboarding experience compared to the Docker-based approach. Deployment efforts and time to first SAP log arrival in Sentinel went from days and weeks to hours.
⚠️ Deprecation notice for containerized data connector agent ⚠️
The containerised SAP data connector will be deprecated on September 14th, 2026. This change aligns with the discontinuation of the SAP RFC SDK, SAP's strategic integration roadmap, and customer demand for simpler integration. Migrate to the new agentless connector for simplified onboarding and compliance with SAP's roadmap. All new deployments starting October 31, 2025, will only have the new agentless connector option, and existing customers should plan their migration using the guidance on Microsoft Learn. It will be billed at the same price as the containerized agent, ensuring no cost impact for customers.
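As a concrete example of the lake-tier usage mentioned above, here is a minimal KQL sketch of an annual compliance summary over long-retention SAP audit data. It assumes ABAPAuditLog is mirrored to the data lake tier and that the KQL job itself (schedule, output destination) is configured separately in the portal; the query body is plain KQL, with column names taken from the ABAPAuditLog schema.

// Example compliance summary over long-retention SAP audit data.
ABAPAuditLog
| where TimeGenerated between (datetime(2025-01-01) .. datetime(2025-12-31))
| summarize Events = count(),
            DistinctUsers = dcount(User),
            DistinctTransactions = dcount(TransactionCode)
    by SystemId, Month = startofmonth(TimeGenerated)
| order by SystemId asc, Month asc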
Note 📌: To support the transition for those of you on the Docker-based data connector, we have enhanced our built-in KQL functions for SAP to work across data sources for hybrid and parallel execution.
Spotlight on new Features
Inspired by the feedback of early adopters, we are shipping two of the most requested new capabilities with GA right away.
Customizable polling frequency: Balance threat detection value (1-minute intervals offer the best value) with utilization of SAP Integration Suite resources based on your needs. ⚠️ Warning! Increasing the intervals may result in message processing truncation to avoid SAP CPI saturation. See this blog for more insights. Refer to the max-rows parameter and SAP documentation to make informed decisions.
Customizable API endpoint path suffix: Flexible endpoints allow running all your SAP security integration flows from the agentless connector and adherence to your naming strategies. Furthermore, you can add community extensions like SAP S/4HANA Cloud public edition (GROW), the SAP Table Reader, and more.
(Image: displays the simplified onboarding flow for the agentless SAP connector.)
You want more? Here is your chance to share additional feature requests to influence our backlog. We would like to hear from you!
Getting Started with Agentless
The new agentless connector automatically appears in your environment - make sure to upgrade to the latest version 3.4.05 or higher.
(Image: Sentinel Content Hub view, highlighting the agentless SAP connector tile in the Microsoft Defender portal, ready for one-click deployment and integration with your security platform.)
The deployment experience in Sentinel is fully automatic with a single button click: it creates the Azure Data Collection Endpoint (DCE), Data Collection Rule (DCR), and Microsoft Entra ID app registration assigned the RBAC role "Monitoring Metrics Publisher" on the DCR to allow SAP log ingest.
Explore partner add-ons that build on top of agentless
The ISV partner ecosystem for the Microsoft Sentinel Solution for SAP is growing to tailor the agentless offering even further. The current cohort has flagship providers like our co-engineering partner SAP SE themselves with their security products SAP LogServ & SAP Enterprise Threat Detection (ETD), and our mutual partners Onapsis and SecurityBridge.
Ready to go agentless?
➤ Get started from here
➤ Explore partner add-ons here.
➤ Share feature requests here.
Next Steps
Once deployed, I recommend checking AryaG's insightful blog series for details on how to move to production with the built-in SAP content of agentless. Looking to expand protection to SAP Business Technology Platform? Here you go. #Kudos to the amazing Sentinel for SAP team and our incredible community contributors!
That's a wrap 🎬. Remember: bringing SAP under the protection of your central SIEM isn't just a checkbox - it's essential for comprehensive security and compliance across your entire IT estate.
Cheers, Martin
Automating Microsoft Sentinel: Part 2: Automate the mundane away
Welcome to the second entry of our blog series on automating Microsoft Sentinel. In this series, we're showing you how to automate various aspects of Microsoft Sentinel, from simple automation of Sentinel alerts and incidents to more complicated response scenarios with multiple moving parts. So far, we've covered Part 1: Introduction to Automating Microsoft Sentinel, where we talked about why you would want to automate as well as an overview of the different types of automation you can do in Sentinel. Here is a preview of what you can expect in the upcoming posts [we'll be updating this post with links to new posts as they happen]:
Part 1: Introduction to Automating Microsoft Sentinel
Part 2: Automation Rules [You are here] – Automate the mundane away
Part 3: Playbooks 1 – Playbooks Part I – Fundamentals
Part 4: Playbooks 2 – Playbooks Part II – Diving Deeper
Part 5: Azure Functions / Custom Code
Part 6: Capstone Project (Art of the Possible) – Putting it all together

Part 2: Automation Rules – Automate the mundane away
Automation rules can be used to automate Sentinel itself. For example, let's say there is a group of machines that have been classified as business critical, and if there is an alert related to those machines, then the incident needs to be assigned to a Tier 3 response team and the severity of the alert needs to be raised to at least "high". Using an automation rule, you can take one analytic rule, apply it to the entire enterprise, but then have an automation rule that only applies to those business-critical systems to make those changes. That way only the items that need that immediate escalation receive it, quickly and efficiently.

Automation Rules In Depth
So, now that we know what automation rules are, let's dive into them a bit deeper to better understand how to configure them and how they work.

Creating Automation Rules
There are three main places where we can create an automation rule:
1) Navigating to Automation under the left menu
2) In an existing incident via the "Actions" button
3) When writing an analytic rule, under the "Automated response" tab
The process for each is generally the same, except for the incident route, and we'll break that down more in a bit. When we create an automation rule, we need to give the rule a name. It should be descriptive and indicative of what the rule is going to do and what conditions it applies to. For example, a rule that automatically resolves an incident based on a known false positive condition on a server named SRV02021 could be titled "Automatically Close Incident When Affected Machine is SRV02021", but really it's up to you to decide what you want to name your rules.

Trigger
The next thing we need to define for our automation rule is the trigger. Triggers are what cause the automation rule to begin running. They can fire when an incident is created or updated, or when an alert is created. Of the two options (incident based or alert based), it's preferred to use incident triggers, as they're potentially the aggregation of multiple alerts, and the odds are that you're going to want to take the same automation steps for all of the alerts since they're all related. It's better to reserve alert-based triggers for scenarios where an analytic rule is firing an alert but is set to not create an incident.

Conditions
Conditions are, well, the conditions to which this rule applies. There are two conditions that are always present: the incident provider and the analytic rule name. You can choose multiple criteria and steps.
For example, you could have it apply to all incident providers and all rules (as shown in the picture above), or only a specific provider and all rules, or not apply to a particular provider, etc. You can also add additional conditions that will either include or exclude the rule from running. When you create a new condition, you can build it out from multiple properties, ranging from information about the incident all the way to information about the entities that are tagged in the incident.
Remember our earlier automation rule title where we said this was a false positive about a server named SRV02021? This is where we make the rule match that title by setting the condition to only fire this automation if the entity has a host name of "SRV02021". By combining AND and OR group clauses with the built-in conditional filters, you can make the rule as specific as you need it to be.
You might be thinking to yourself that while there is a lot of power in creating these conditions, it might be a bit onerous to create them for each condition. Recall earlier where I said the process for the three ways of creating automation rules was generally the same except for the incident Actions route? Well, that route will pre-fill variables for that selected instance. For example, in the image below, the rule automatically took the rule name, the rules it applies to, as well as the entities that were mapped in the incident. You can add, remove, or modify any of the variables that the process auto-maps.
NOTE: In the new Unified Security Operations Platform (Defender XDR + Sentinel) there is some new best practice guidance: If you've created an automation using "Title", use "Analytic rule name" instead, because the Title value could change with Defender's correlation engine. The option for "incident provider" has been removed and replaced by "Alert product names" to filter based on the alert provider.

Actions
Now that we've tuned our automation rule to only fire for the situations we want, we can set up what actions we want the rule to execute. Clicking the "Actions" drop-down list will show you the options you can choose. When you select an option, the user interface will change to match your selected option. For example, if I choose to change the status of the incident, the UX will update to show me a drop-down menu with options about which status I would like to set. If I choose other options (Run playbook, change severity, assign owner, add tags, add task), the UX will change to reflect my option. You can assign multiple actions within one automation rule by clicking the "Add action" button and selecting the next action you want the system to take. For example, you might want to assign an incident to a particular user or group, change its severity to "High", and then set the status to Active.
Notably, when you create an automation rule from an incident, Sentinel automatically sets a default action of Change Status. It sets the automation up to set the status to "Closed" with a classification of "Benign Positive – Suspicious but expected". This default action can be deleted, and you can then set up your own action.
In a future episode of this blog we're going to be talking about playbooks in detail, but for now just know that this is the place where you can assign a playbook to your automation rules.
There is one other option in the Actions menu that I wanted to specifically talk about in this blog post though: Incident Tasks.

Incident Tasks
Like most cybersecurity teams, you probably have a runbook of the different tasks or steps that your analysts and responders should take for different situations. By using incident tasks, you can now embed those runbook steps directly in the incident. Incident tasks can be as lightweight or as detailed as you need them to be and can include rich formatting, links to external content, images, etc. When an incident with tasks is generated, the SOC team will see these tasks attached as part of the incident and can then take the defined actions and check off that they've been completed.

Rule Lifetime and Order
There is one last section of automation rules that we need to define before we can start automating the mundane away: when should the rule expire and in what order should the rule run compared to other rules. When you create a rule in the standalone automation UX, the default is for the rule to expire at an indefinite date and time in the future, i.e. never. You can change the expiration date and time to any date and time in the future. If you are creating the automation rule from an incident, Sentinel will automatically assume that this rule should have an expiration date and time and sets it to 24 hours in the future. Just as with the default action when created from an incident, you can change the date and time of expiration to any datetime in the future, or set it to "Indefinite" by deleting the date.

Conclusion
In this blog post, we talked about automation rules in Sentinel and how you can use them to automate mundane tasks as well as help your SOC analysts be more effective and consistent in their day-to-day work with capabilities like incident tasks. Stay tuned for more updates and tips on automating Microsoft Sentinel!
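As a quick operational check on the automation rules described in this post, the sketch below summarizes recent automation rule runs from the SentinelHealth table. It assumes health monitoring is enabled for automation in your workspace; verify the resource type and status values against the documented SentinelHealth schema before relying on it.

SentinelHealth
| where TimeGenerated > ago(7d)
| where SentinelResourceType == "Automation rule"
| summarize Runs = count(), Failures = countif(Status != "Success"), LastRun = max(TimeGenerated)
    by SentinelResourceName
| order by LastRun desc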
Understand New Sentinel Pricing Model with Sentinel Data Lake Tier

Introduction to Sentinel and its New Pricing Model
Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (Log Analytics workspace), which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a new dual-tier pricing model consisting of the Analytics tier and the Data Lake tier. The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands - analytics for high-value detections and investigations, and data lake for large or archival types - allowing organizations to significantly lower cost while still retaining all their security data for analytics, compliance, and hunting.
(The flow diagram in the original post depicts the new Sentinel pricing model.)
Now let's understand this new pricing model with the scenarios below:
Scenario 1A (Pay-As-You-Go)
Scenario 1B (Usage Commitment)
Scenario 2 (Data Lake Tier Only)

Scenario 1A (Pay-As-You-Go)
Requirement
Suppose you need to ingest 10 GB of data per day, and you must retain that data for 2 years. However, you will only frequently use, query, and analyze the data for the first 6 months.
Solution
To optimize cost, you can ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. After that period, the remaining 18 months of retention can be shifted to the Data Lake tier, which provides low-cost storage for compliance and auditing needs. However, you will be charged separately for Data Lake tier querying and analytics, which is depicted as Compute (D) in the pricing flow diagram.
Pricing Flow / Notes
The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan.
All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost.
For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity.
For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.
Azure Pricing Calculator Equivalent
Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period: although the Analytics tier retention is set to 6 months, the first 3 months of retention fall under the free retention limit, so retention charges apply only for the remaining 3 months of the analytics retention window. The Azure pricing calculator adjusts accordingly.

Scenario 1B (Usage Commitment)
Now, suppose you are ingesting 100 GB per day. If you follow the same pay-as-you-go pricing model described above, your estimated cost would be approximately $15,204 per month. However, you can reduce this cost by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. Note that the discount applies only to Analytics tier ingestion - it does not apply to Analytics tier retention costs or to any Data Lake tier-related charges. Please refer to the pricing flow and the equivalent pricing calculator results shown below.
Monthly cost savings: $15,204 – $11,184 = $4,020 per month
Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the pay-as-you-go rate? No. The entire 150 GB/day will still be billed at the discounted rate associated with the 100 GB/day commitment tier bucket.
Azure Pricing Calculator Equivalent (100 GB/day)
Azure Pricing Calculator Equivalent (150 GB/day)

Scenario 2 (Data Lake Tier Only)
Requirement
Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years as per your organization's compliance or forensic policies.
Solution
Since these logs are not actively analyzed, you should avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.
Pricing Flow
Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period. If, at any point in the future, you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage. Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.
Realized Savings
Scenario 1 (10 GB/day in the Analytics tier): $1,520.40 per month
Scenario 2 (10 GB/day directly into the Data Lake tier): $202.20 per month without compute; $257.20 per month with a sample compute price
Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month
Azure calculator equivalent without compute
Azure calculator equivalent with sample compute

Conclusion
The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used. High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs - such as audit, compliance, or long-term retention data - can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data, and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios - active investigation, archival storage, compliance retention, or large-scale telemetry ingestion - to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.
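To ground the tiering decision in your own workspace, a rough KQL sketch over the built-in Usage table ranks billable tables by ingestion volume - high-volume tables you rarely query are good candidates for the Data Lake tier. The 30-day window and the MB-to-GB conversion are illustrative.

Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1024 by DataType
| extend ApproxGBPerDay = round(IngestedGB / 30, 2)
| order by IngestedGB desc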
Integrating Proofpoint and Mimecast Email Security with Microsoft Sentinel
Microsoft Sentinel can ingest rich email security telemetry from Proofpoint and Mimecast to power advanced phishing detection. The Proofpoint On Demand (POD) Email Security and Proofpoint Targeted Attack Protection (TAP) connectors pull threat logs (quarantines, spam, phishing attempts) and user click data into Sentinel. Similarly, the Mimecast Secure Email Gateway connector ingests detailed mail flow and targeted-threat logs (attachment/URL scans, impersonation events). These integrations use Azure-hosted ingestion (via Logic Apps or Azure Functions) and the new Codeless Connector framework to call vendor APIs on a schedule. The result is a consolidated dataset in Sentinel's Log Analytics, enabling correlated alerting and hunting across email, identity, and endpoint signals.
Figure: Phishing emails are processed by Mimecast's gateway and Proofpoint POD/TAP services. Security logs (delivery/quarantine events, malicious attachments/links, user clicks) flow into Microsoft Sentinel. In Sentinel, these mail signals are correlated with identity (Azure AD), endpoint (Defender) and network telemetry for end-to-end phishing detection.

Proofpoint POD (Email Protection) Connector
The Proofpoint POD connector ingests core email protection logs. It creates two tables, ProofpointPODMailLog_CL and ProofpointPODMessage_CL. These logs include per-message metadata (senders, recipients, subject, message size, timestamps), threat scores (spamScore, phishScore, malwareScore, impostorScore), and attachment details (number of attachments, names, hash values and sandbox verdicts). Quarantine actions are recorded (quarantine folder/rule), and malicious indicators (URL or file hash) and campaign IDs are tagged in the threatsInfoMap field. For example, each ProofpointPODMessage_CL record may carry a sender_s (sender email domain hashed), recipient list, subject, and any detected threat type (Phish/Malware/Spam/Impostor) with the associated threat hash or URL.
Deployment: Proofpoint POD uses Sentinel's codeless connector (an Azure Function behind the scenes). You must provide Proofpoint API credentials (Cluster ID and API token) in the connector UI. The connector periodically calls the Proofpoint SIEM API to fetch new log events (typically in 1–2 hour batches). The data lands in the above tables. (Older custom Logic App approaches similarly parse JSON output from the /v2/siem/messages endpoints.)

Proofpoint TAP (Targeted Attack Protection) Connector
Proofpoint TAP provides user-click and message-delivery events. Its connector creates four tables: ProofPointTAPMessagesDeliveredV2_CL, ProofPointTAPMessagesBlockedV2_CL, ProofPointTAPClicksPermittedV2_CL, and ProofPointTAPClicksBlockedV2_CL. The message tables report emails with detected threats (URL or attachment defense) that were delivered or blocked by TAP. They include the same fields as POD (message GUID, sender, recipients, subject, threat campaign ID, scores, attachment info). The click tables log when users click on URLs: each record has the URL, click timestamp (clickTime), the user's IP (clickIP), user-agent, the message GUID, and the threat ID/category. These fields allow you to see who clicked which malicious link and when. As the connector description notes, these logs give "visibility into Message and Click events in Microsoft Sentinel" for hunting.
Deployment: The TAP connector also uses the codeless framework. You supply a TAP API service principal and secret (Proofpoint SIEM API credentials) in the Sentinel content connector.
The function app calls TAP's /v2/siem/clicks/blocked, /permitted, /messages/blocked, and /delivered endpoints. The Proofpoint SIEM API limits queries to 1-hour windows and 7-day history, with no paging (all events in the interval are returned). (A Logic App approach could also be used, as shown in the Tech Community blog: one HTTP GET per event type and a JSON Parse before sending to Log Analytics.)

Mimecast Secure Email Gateway Connector
The Mimecast connector ingests the Secure Email Gateway (SEG) logs and targeted-threat (TTP) logs. Inbound, outbound and internal mail events from the Mimecast MTA (receipt, processing, delivery stages) are pulled via the API. Typical fields include the unique message ID (aCode), sender, recipient, subject, attachment count/names, and the policy actions or holds (e.g. spam quarantine). For example, the Mimecast "Process" log shows AttCnt, AttNames, and whether the message was held (Hld) for review. Delivery logs include the success/failure and TLS details. In addition, Mimecast TTP logs are collected: URL Protect logs (when a user clicks a blocked URL) include the clicked URL (url), category (urlCategory), sender/recipient, and block reason. Impersonation Protect logs capture spoofing detections (e.g. if an internal name is impersonated), with fields like Sender, Recipient, Definition and Action (hold/quarantine). Attachment Protect logs record malicious file detections (filename, hash, threat type).
Deployment: Like Proofpoint, Mimecast's connector uses Azure Functions via the Sentinel content hub. You install the Mimecast solution, open the connector page, then enter Azure app credentials and Mimecast API keys (API Application ID/Key and Access/Secret for the service account). As shown in the deployment guide, you must provide the Azure Subscription, Resource Group, Log Analytics Workspace and the Azure Client (App) ID, Tenant ID and Object ID of the admin performing the setup. On the Mimecast side, you supply the API Base URL (regional), App ID/Secret and user Access/Secret. The connector creates a Function App that polls Mimecast's SIEM APIs on a cron schedule (default every 30 minutes). You can optionally specify a start date for backfilling up to 7 days of logs. The default tables are MimecastSIEM_CL (for email flow logs) and MimecastDLP_CL (for DLP/TTP events), though custom names can be set.

Ingestion Considerations
Data Latency: All these connectors are pull-based and typically run on a schedule (often 30–60 minutes). For example, the Proofpoint POD docs note hourly log increments, and Mimecast logs are aggregated every 30 minutes. Expect a delay of up to an hour or more from event occurrence to Sentinel ingestion.
Schema Nuances: The APIs often return nested arrays and optional fields. For instance, the Proofpoint blog warns that some JSON fields can be null or vary in type, so the parse schema should account for all possibilities. Similarly, Mimecast logs come in pipe-delimited or JSON format, with values sometimes empty (e.g. no attachments). In KQL, use tostring() or parse_json() on the raw _CL columns, and mv-expand on any multivalue fields such as message parts or threat lists (a parsing sketch follows this list).
Table Names: Use the connector's tables as listed. For Proofpoint: ProofpointPODMailLog_CL and ProofpointPODMessage_CL; for TAP: ProofPointTAPMessagesDeliveredV2_CL, ProofPointTAPMessagesBlockedV2_CL, ProofPointTAPClicksPermittedV2_CL, ProofPointTAPClicksBlockedV2_CL. For Mimecast SEG/TTP: MimecastSIEM_CL (SEG logs) and MimecastDLP_CL (TTP logs).
API Behavior: The Proofpoint TAP API has no paging. Be aware of timezones (Proofpoint uses UTC) and use the Sentinel ingestion TimeGenerated or event timestamp fields for binning.
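As a small illustration of the schema-nuance point, here is a defensive parse of the threatsInfoMap array on POD messages. The field names (classification, threat) mirror the flattened column examples later in this post but should be validated against what your connector actually emits.

ProofpointPODMessage_CL
| extend Threats = parse_json(tostring(column_ifexists("threatsInfoMap", dynamic([]))))
| mv-expand Threat = Threats
| project TimeGenerated,
          Sender = tostring(column_ifexists("sender_s", "")),
          Subject = tostring(column_ifexists("subject_s", "")),
          ThreatType = tostring(Threat.classification),
          ThreatValue = tostring(Threat.threat)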
API Behavior: The Proofpoint TAP API has no paging. Be aware of timezones (Proofpoint uses UTC), and use the Sentinel ingestion timestamp (TimeGenerated) or the event timestamp fields for binning.

Detection Engineering and Correlation

To detect phishing effectively, correlate these email logs with identity, endpoint and threat-intelligence data:

Identity (Azure AD): Mail logs contain recipient addresses and (hashed) sender user parts. A common tactic is to correlate SMTP recipients or sender domains with Azure AD user records. For example, join TAP clicks by recipient to the user's UPN. The Proofpoint logs also include the clicker's IP (clickIP); you can match that to Azure AD sign-in logs or VPN logs to find which device or location clicked a malicious link. Likewise, anomalous Azure AD sign-ins (impossible travel, MFA failure) after a suspicious email strengthen the case.

Endpoints (Defender): Once a user clicks a bad link or opens a malicious attachment (captured in the TAP or Mimecast logs), watch for follow-on behaviors. For instance, use Sentinel's DeviceSecurityEvents or DeviceProcessEvents tables to see whether that user's machine launched unusual processes. The threatID or URL hash from email events can be looked up in Defender's file data. Correlate by username (if available) or IP: if the email log shows a link click from IP X, check whether any endpoint alerts or logon events occurred from X around the same time. As the Mimecast integration touts, this enables "correlation across Mimecast events, cloud, endpoint, and network data".

Threat Intelligence: Use Sentinel's ThreatIntelligenceIndicator table or Microsoft's TI feeds to tag known-bad URLs and domains in the email logs. For example, join ProofPointTAPClicksBlockedV2_CL on the clicked url against ThreatIntelligenceIndicator (URL-type indicators) to automatically flag hits. Proofpoint's logs already classify threats (malware/phish) and provide a threatID; you can enrich that with external intel (e.g. check whether the hash appears in TI feeds). Mimecast's URL logs include a urlCategory field, which can be mapped to known malicious categories. Automated playbooks can also pull intel: e.g. use Sentinel's TI REST API or watchlists containing phishing domains to annotate events.

In summary, a robust detection strategy might look like: (1) Identify malicious email events (high phish scores, quarantines, URL clicks). (2) Correlate these events by user with Azure AD logs (did the user sign in from a new IP after a phish click?). (3) Correlate with endpoint alerts (Defender found malware on that device). (4) Augment with threat intelligence lookups on the URLs and attachments from the email logs. By linking the Proofpoint/Mimecast signals to identity and endpoint events, you can detect the full attack chain from email compromise to endpoint breach.

KQL Queries

Here are representative Kusto queries for common phishing scenarios (adapt table and field names as needed).

Malicious URL Click Detection: Identify users who clicked known-malicious URLs. For example, join TAP click logs to TI indicators:
let TI = ThreatIntelligenceIndicator
| where Active == true and isnotempty(Url)
| project IndicatorUrl = Url, Description;
ProofPointTAPClicksPermittedV2_CL
| where isnotempty(url_s)
| project ClickTime = TimeGenerated, Recipient = recipient_s, URL = url_s, SenderIP = senderIP_s
| join kind=inner (TI) on $left.URL == $right.IndicatorUrl
| project ClickTime, Recipient, URL, Description

This flags any permitted click where the URL matches a known threat indicator. Alternatively, aggregate by domain:

ProofPointTAPClicksPermittedV2_CL
| extend clickedDomain = extract(@"https?://([^/]+)", 1, url_s)
| summarize ClickCount = count() by clickedDomain
| where clickedDomain has "maliciousdomain.com" or clickedDomain has "phish.example.com"

Quarantine Spike (Burst) Detection: Detect sudden spikes in quarantined messages. For example, using the POD mail log:

ProofpointPODMailLog_CL
| where action_s == "Held"
| summarize HeldCount = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated desc
| where HeldCount > 100

This finds hours with an unusually high number of held (quarantined) emails, which may indicate a phishing campaign. You could similarly use ProofPointTAPMessagesBlockedV2_CL.

Targeted User Phishing: Find whether a specific user received multiple malicious emails. E.g., for user email address removed for privacy reasons:

ProofpointPODMessage_CL
| where recipient has "email address removed for privacy reasons"
| where array_length(threatsInfoMap) > 0 and threatsInfoMap_classification_s == "Phish"
| project TimeGenerated, sender_s, subject_s, threat = threatsInfoMap_threat_s

This lists recent phish attempts targeting that user. You might also join with the TAP click logs to see whether the user clicked anything.

Campaign-Level Analysis: Group emails by Proofpoint campaign ID to see the scope of each campaign:

ProofpointPODMessage_CL
| mv-expand threatsInfoMap
| summarize Recipients = make_set(recipient), RecipientCount = dcount(recipient), SampleSubject = any(subject_s) by CampaignID = threatsInfoMap_campaignId_s

This shows each campaign ID with how many unique recipients were hit and one example subject. Combining the TAP and POD tables on GUID_s or QID_s can further link click events back to the originating message or campaign.

Each query can be refined (for instance, by filtering to a recent time window) and embedded in Sentinel analytics rules or hunting queries. The key is using the connectors' fields – URLs, sender/recipient addresses, campaign IDs – to pivot between email data and other security signals.
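One more correlation worth sketching ties the click telemetry back to identity, as described in the Detection Engineering section: look for Azure AD sign-ins shortly after a permitted click. The SigninLogs columns below are standard; the Proofpoint column names (recipient_s, clickIP_s, url_s) follow the connector's naming pattern but are assumptions to verify in your workspace.

// Sign-ins within two hours of a permitted click by the same user
let clicks = ProofPointTAPClicksPermittedV2_CL
| where TimeGenerated > ago(1d)
| project ClickTime = TimeGenerated, Recipient = tolower(recipient_s), ClickIP = clickIP_s, URL = url_s;
SigninLogs
| where TimeGenerated > ago(1d)
| extend Upn = tolower(UserPrincipalName)
| project SigninTime = TimeGenerated, Upn, IPAddress, ResultType, Location
| join kind=inner (clicks) on $left.Upn == $right.Recipient
| where SigninTime between (ClickTime .. ClickTime + 2h)
| project ClickTime, SigninTime, Upn, URL, ClickIP, IPAddress, ResultType, Location

A sign-in from an unfamiliar IP or location, or a burst of failures, in that window is a strong pivot point for the rest of the chain.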
Cloud Posture + Attack Surface Signals in Microsoft Sentinel (Prisma Cloud + Cortex Xpanse)

Microsoft expanded Microsoft Sentinel's connector ecosystem with Palo Alto integrations that pull cloud posture, cloud workload runtime, and external attack surface signals into the SIEM, so your SOC can correlate "what's exposed" and "what's misconfigured" with "what's actively being attacked." Specifically, the Ignite connectors list includes Palo Alto: Cortex Xpanse CCF and Palo Alto: Prisma Cloud CWPP.

Why these connectors matter for Sentinel detection engineering

Traditional SIEM pipelines ingest "events." But exposure and posture are just as important as the events—because they tell you which incidents actually matter.

Attack surface (Xpanse) tells you what's reachable from the internet and what attackers can see.
Posture (Prisma CSPM) tells you which controls are broken (public storage, permissive IAM, weak network paths).
Runtime (Prisma CWPP) tells you what's actively happening inside workloads (containers/hosts/serverless).

In Sentinel, these become powerful when you can join them with your "classic" telemetry (cloud activity logs, NSG flow logs, DNS, endpoint, identity). The result: fewer false positives, faster triage, better prioritization.

Connector overview (what each one ingests)

1) Palo Alto Prisma Cloud CSPM Solution

What comes in: Prisma Cloud CSPM alerts + audit logs via the Prisma Cloud CSPM API.
What it ships with: connector + parser + workbook + analytics rules + hunting queries + playbooks (prebuilt content).
Best for:
Misconfiguration alerts: public storage, overly permissive IAM, weak encryption, risky network exposure.
Compliance posture drift + audit readiness (prove you're monitoring and responding).

2) Palo Alto Prisma Cloud CWPP (Preview)

What comes in: CWPP alerts via the Prisma Cloud API (Compute/runtime side).
Implementation detail: built on the Codeless Connector Platform (CCP).
Best for:
Runtime detections (host/container/serverless security alerts).
"Exploit succeeded" signals that you need to correlate with posture and exposure.

3) Palo Alto Cortex Xpanse CCF

What comes in: alert logs fetched from the Cortex Xpanse API, ingested using the Microsoft Sentinel Codeless Connector Framework (CCF).
Important: supports DCR-based ingestion-time transformations that parse into a custom table for better performance (a minimal transform sketch appears after Step 0 below).
Best for:
External exposure findings and "internet-facing risk" detection.
Turning exposure into incidents only when the asset is critical / actively targeted.

Reference architecture (how the data lands in Sentinel)

Here's the mental model you want for all three:

flowchart LR
    A[Palo Alto Prisma Cloud CSPM] -->|CSPM API: alerts + audit logs| S[Sentinel Data Connector]
    B[Palo Alto Prisma Cloud CWPP] -->|Prisma API: runtime alerts| S
    C[Cortex Xpanse] -->|Xpanse API: exposure alerts| S
    S -->|CCF/CCP + DCR Transform| T[(Custom Tables)]
    T --> K[KQL Analytics + Hunting]
    K --> I[Incidents]
    I --> P[SOAR Playbooks]
    K --> W[Workbooks / Dashboards]

Key design point: Xpanse explicitly emphasizes DCR transformations at ingestion time; use that to normalize fields early so your queries stay fast under load.

Deployment patterns (practical, SOC-friendly setup)

Step 0 — Decide what goes to "analytics" vs "storage"

If you're using Sentinel's data lake strategy, posture/exposure data is a perfect candidate for longer retention (trend + audit), while only "high severity" findings may need real-time analytics.
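As an illustration of the ingestion-time shaping mentioned above, here is a minimal transformKql sketch for a DCR: it normalizes severity to a consistent set and drops informational findings before they reach the analytics table. The raw column name severity_s is an assumption that depends on the connector's actual schema, and DCR transformations support a restricted KQL subset, so verify the functions you use against the transformation documentation.

source
| extend Severity = case(
    tolower(severity_s) == "critical", "Critical",   // severity_s is a placeholder for the raw severity column
    tolower(severity_s) == "high", "High",
    tolower(severity_s) == "medium", "Medium",
    tolower(severity_s) == "low", "Low",
    "Informational")
| where Severity != "Informational"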
Step 1 — Install solutions from Content Hub

Install:
Palo Alto Prisma Cloud CSPM Solution
Palo Alto Prisma Cloud CWPP (Preview)
Palo Alto Cortex Xpanse CCF

Step 2 — Credentials & least privilege

Create dedicated service accounts / API keys in the Palo Alto products with read-only scope for:
CSPM alerts + audit
CWPP alerts
Xpanse alerts/exposures

Step 3 — Validate ingestion (don't skip this)

In Sentinel Logs:
Locate the custom tables created by each solution (Tables blade).
Run a basic sanity query: "All events last 1h", "Top 20 alert types", "Distinct severities".
Tip: save "ingestion smoke tests" as Hunting queries so you can re-run them after upgrades.

Step 4 — Turn on included analytics content (then tune)

The Prisma Cloud CSPM solution comes with multiple analytics rules, hunting queries, and playbooks out of the box—enable them gradually and tune thresholds before going wide.

Detection engineering: high-signal correlation recipes

Below are patterns that consistently outperform "single-source alerts." I'm giving them as KQL templates using placeholder table names because your exact custom table names/columns are workspace-dependent (you'll see them after install).

Recipe 1 — "Internet-exposed + actively probed" (Xpanse + network logs)

Goal: only fire when the exposure is real and there's traffic evidence.

let xpanse = <XpanseTable>
| where TimeGenerated > ago(24h)
| where Severity in ("High", "Critical")
| project AssetIp = <ip_field>, Finding = <finding_field>, Severity, TimeGenerated;
let net = <NetworkFlowTable>
| where TimeGenerated > ago(24h)
| where Direction == "Inbound"
| summarize Hits = count(), SrcIps = make_set(SrcIp, 50) by DstIp;
xpanse
| join kind=inner (net) on $left.AssetIp == $right.DstIp
| where Hits > 50
| project TimeGenerated, Severity, Finding, AssetIp, Hits, SrcIps

Why it works: Xpanse gives you exposure. Flow/WAF/firewall logs give you intent.

Recipe 2 — "Misconfiguration that creates a breach path" (CSPM + identity or cloud activity)

Goal: prioritize posture findings that coincide with suspicious access or admin changes.

let posture = <PrismaCSPMTable>
| where TimeGenerated > ago(7d)
| where PolicySeverity in ("High", "Critical")
| where FindingType has_any ("Public", "OverPermissive", "NoMFA", "EncryptionDisabled")
| project ResourceId = <resource_id>, Finding = <finding>, PolicySeverity, FirstSeen = TimeGenerated;
let activity = <CloudActivityTable>
| where TimeGenerated > ago(7d)
| where OperationName has_any ("RoleAssignmentWrite", "SetIamPolicy", "AddMember", "CreateAccessKey")
| project ResourceId = <resource_id>, Actor = <caller>, OperationName, TimeGenerated;
posture
| join kind=inner (activity) on ResourceId
| project PolicySeverity, Finding, OperationName, Actor, FirstSeen, TimeGenerated
| order by PolicySeverity desc, TimeGenerated desc

Recipe 3 — "Runtime alert on a workload that was already high-risk" (CWPP + CSPM)

Goal: raise severity when runtime alerts occur on assets with known posture debt.

let risky_assets = <PrismaCSPMTable>
| where TimeGenerated > ago(30d)
| where PolicySeverity in ("High", "Critical")
| summarize RiskyFindings = count() by AssetId = <asset_id>;
<CWPPTable>
| where TimeGenerated > ago(24h)
| project AssetId = <asset_id>, AlertName = <alert>, Severity = <severity>, TimeGenerated, Details = <details>
| join kind=leftouter (risky_assets) on AssetId
| extend RiskScore = coalesce(RiskyFindings, 0)
| order by Severity desc, RiskScore desc, TimeGenerated desc

SOC outcome: same runtime alert, different priority depending on posture risk.
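A fourth pattern follows the same placeholder style and the section's core idea of pairing exposure with posture: surface assets that are both internet-facing per Xpanse and carrying high-severity CSPM findings. Joining on IP is an assumption here—CSPM findings often key on a resource ID rather than an address, so in practice you may need an asset-inventory watchlist to bridge the two; the tables and columns below are stand-ins to map to your workspace.

// Exposed (Xpanse) AND misconfigured (CSPM) on the same asset
let exposed = <XpanseTable>
| where TimeGenerated > ago(7d)
| where Severity in ("High", "Critical")
| summarize ExposureFindings = make_set(<finding_field>, 20) by AssetIp = <ip_field>;
let posture = <PrismaCSPMTable>
| where TimeGenerated > ago(30d)
| where PolicySeverity in ("High", "Critical")
| summarize PostureFindings = make_set(<finding>, 20) by AssetIp = <asset_ip_field>;
exposed
| join kind=inner (posture) on AssetIp
| project AssetIp, ExposureFindings, PostureFindings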
Operational notes (in real life)

1) Normalize severities early

If Xpanse is using DCR transforms (it is), normalize severity to a consistent enum ("Informational/Low/Medium/High/Critical") to simplify analytics.

2) Deduplicate exposure findings

Attack surface tools can generate repeated findings. Use a dedup function (hash of asset + finding type + port/service) and alert only on "new or changed exposure" (see the sketch at the end of this section).

3) Don't incident-everything

Treat CSPM findings as:
Incidents only when: critical + reachable + targeted, OR tied to privileged activity.
Tickets when: high risk but not active.
Backlog when: medium/low with compensating controls.

4) Make SOAR "safe by default"

Automations should prefer reversible actions:
Block IP (temporary)
Add to watchlist
Notify owners
Open ticket with evidence bundle
…and only escalate to destructive actions after confidence thresholds are met.
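To make the dedup idea in point 2 concrete, here is a minimal KQL sketch that keys each exposure on a hash of asset + finding + port and surfaces only keys not seen in the preceding lookback window. The table and column names are the same placeholders used in the recipes above and must be mapped to your workspace.

// Alert only on new or changed exposure (not previously seen in the lookback window)
let lookback = 14d;
let recent = 1d;
let known = <XpanseTable>
| where TimeGenerated between (ago(lookback) .. ago(recent))
| extend ExposureKey = hash_sha256(strcat(<ip_field>, "|", <finding_field>, "|", tostring(<port_field>)))
| distinct ExposureKey;
<XpanseTable>
| where TimeGenerated > ago(recent)
| extend ExposureKey = hash_sha256(strcat(<ip_field>, "|", <finding_field>, "|", tostring(<port_field>)))
| join kind=leftanti (known) on ExposureKey
| summarize arg_max(TimeGenerated, *) by ExposureKey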