Understand New Sentinel Pricing Model with Sentinel Data Lake Tier
Introduction: Microsoft Sentinel and Its New Pricing Model

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (the Log Analytics workspace), which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a dual-tier pricing model consisting of the Analytics tier and the Data Lake tier. The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands: the Analytics tier for high-value detections and investigations, and the Data Lake tier for large or archival data types. This allows organizations to significantly lower cost while still retaining all their security data for analytics, compliance, and hunting.

The flow diagram below depicts the new Sentinel pricing model. Let's walk through it with three scenarios:

- Scenario 1A (Pay-As-You-Go)
- Scenario 1B (Usage Commitment)
- Scenario 2 (Data Lake Tier Only)

Scenario 1A (Pay-As-You-Go)

Requirement: Suppose you need to ingest 10 GB of data per day and must retain that data for 2 years, but you will only frequently use, query, and analyze the data during the first 6 months.

Solution: To optimize cost, ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. The remaining 18 months of retention can then be shifted to the Data Lake tier, which provides low-cost storage for compliance and auditing needs. Note that you are charged separately for Data Lake tier querying and analytics, depicted as Compute (D) in the pricing flow diagram.

Pricing flow / notes:

- The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan.
- All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost.
- For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity.
- For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.

Azure Pricing Calculator equivalent (assuming no data is queried or analyzed during the 18-month Data Lake retention period): although the Analytics tier retention is set to 6 months, the first 3 months fall under the free retention limit, so retention charges apply only to the remaining 3 months of the Analytics retention window. The Azure pricing calculator adjusts accordingly.

Scenario 1B (Usage Commitment)

Now suppose you are ingesting 100 GB per day. Under the pay-as-you-go model described above, your estimated cost would be approximately $15,204 per month. You can reduce this cost by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. Note that the discount applies only to Analytics tier ingestion; it does not apply to Analytics tier retention costs or to any Data Lake tier charges. Please refer to the pricing flow and the equivalent pricing calculator results shown below.
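Before settling on a commitment tier, it helps to verify how much you actually ingest per day and which tables drive the volume. A minimal sketch using the standard Log Analytics Usage table (the 30-day window and the GB conversion are illustrative choices):

// Average daily billable ingestion per table over the last 30 days
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize TotalGB = sum(Quantity) / 1024.0 by DataType   // Quantity is reported in MB
| extend AvgDailyGB = round(TotalGB / 30.0, 2)
| order by TotalGB desc

If the total hovers near a commitment-tier boundary (for example, 100 GB/day), the commitment discount usually wins, since overage is billed at the same discounted rate.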
Monthly cost savings: $15,204 – $11,184 = $4,020 per month.

Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the pay-as-you-go rate? No. The entire 150 GB/day will still be billed at the discounted rate associated with the 100 GB/day commitment tier bucket.

Azure Pricing Calculator equivalent (100 GB/day)
Azure Pricing Calculator equivalent (150 GB/day)

Scenario 2 (Data Lake Tier Only)

Requirement: Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years per your organization's compliance or forensic policies.

Solution: Since these logs are not actively analyzed, avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.

Pricing flow: Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period. If, at any point in the future, you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage. Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.

Realized savings:

| Scenario | Cost per month |
| --- | --- |
| Scenario 1: 10 GB/day in the Analytics tier | $1,520.40 |
| Scenario 2: 10 GB/day directly into the Data Lake tier | $202.20 (without compute); $257.20 (with a sample compute price) |

- Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
- Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month

Azure calculator equivalent without compute
Azure calculator equivalent with sample compute

Conclusion

The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used. High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs, such as audit, compliance, or long-term retention data, can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios (active investigation, archival storage, compliance retention, large-scale telemetry ingestion) to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.
Threat Intelligence & Identity Ecosystem Connectors

Microsoft Sentinel's capabilities can be greatly enhanced by integrating third-party threat intelligence (TI) feeds (e.g. GreyNoise, Team Cymru) with identity and access logs (e.g. OneLogin, PingOne). This article provides a detailed dive into each connector, its data types, and best practices for enrichment and false-positive reduction. We cover how GreyNoise (including PureSignal/Scout), Team Cymru, OneLogin IAM, PingOne, and Keeper integrate with Sentinel, including available connectors, ingested schemas, and configuration. We then outline technical patterns for building TI-based lookup pipelines, scoring, and suppression rules to filter benign noise (e.g. GreyNoise's known scanners) and enrich alerts with context from identity logs. We map attack chains (credential stuffing, lateral movement, account takeover) to Sentinel data and propose KQL analytics rules and playbooks with MITRE ATT&CK mappings (e.g. T1110: Brute Force, T1595: Active Scanning). The report also includes deployment guidance, performance considerations for high-volume TI ingestion, and a comparison table of connector features. A flowchart illustrates the data flow from TI and identity sources into Sentinel analytics. All recommendations are drawn from official documentation and industry sources.

Threat Intel & Identity Connectors Overview

GreyNoise (TI feed): GreyNoise provides "internet background noise" intelligence on IPs seen scanning or probing the Internet. The Sentinel GreyNoise Threat Intelligence connector (Azure Marketplace) pulls data via GreyNoise's API into Sentinel's ThreatIntelligenceIndicator table. It uses a daily Azure Function to fetch indicators (IP addresses and metadata like classification, noise, last_seen) and injects them as STIX-format indicators (network IPs with provider "GreyNoise"). This feed can then be queried in KQL. Authentication requires a GreyNoise API key and a Sentinel workspace app with Contributor rights. GreyNoise's goal is to help "filter out known opportunistic traffic" so analysts can focus on real threats. Official docs describe deploying the content pack and workbook template.

Ingested data: IP-based indicators (malicious vs. benign scans), classifications (noise, riot, etc.), organization names, and last-seen dates. All fields from GreyNoise's IP lookup (e.g. classification, last_seen) appear in ThreatIntelligenceIndicator (NetworkDestinationIP, IndicatorProvider == "GreyNoise", and related fields). Query:

ThreatIntelligenceIndicator
| where IndicatorProvider == "GreyNoise"
| summarize arg_max(TimeGenerated, *) by NetworkDestinationIP

This yields the latest GreyNoise record per IP.

Team Cymru Scout (TI context): Team Cymru's PureSignal™ Scout is a TI enrichment platform. The Team Cymru Scout connector (via Azure Marketplace) ingests contextual data (not raw logs) about IPs, domains, and account usage into Sentinel custom tables. It runs via an Azure Function that, given IP or domain inputs, populates tables like Cymru_Scout_IP_Data_* and Cymru_Scout_Domain_Data_CL. For example, an IP query yields multiple tables (Cymru_Scout_IP_Data_Foundation_CL, ..._OpenPorts_CL, ..._PDNS_CL, etc.) containing open ports, passive DNS history, X.509 certificate info, fingerprint data, and more. This feed requires a Team Cymru account (username/password) to access the Scout API.

Data types: Structured TI metadata by IP/domain. There is no native threat indicator insertion; instead, analysts query these tables to enrich events (e.g. join on SourceIP), as shown in the sketch below.
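For example, an analyst can join firewall or proxy events against the Scout IP tables. A minimal sketch: the Scout columns (ip_s, asn_info_s) are hypothetical placeholders here, so check the actual schema in your workspace after installing the solution:

// Enrich inbound firewall events (CommonSecurityLog) with Team Cymru Scout context
CommonSecurityLog
| where TimeGenerated > ago(1d)
| join kind=inner (
    Cymru_Scout_IP_Data_Foundation_CL
    | extend IP = tostring(ip_s), ASInfo = tostring(asn_info_s)   // placeholder field names
    | project IP, ASInfo
) on $left.SourceIP == $right.IP
| project TimeGenerated, SourceIP, DestinationIP, ASInfo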
The Sentinel Tech Community notes that Scout "enriches alerts with real-time context on IPs, domains, and adversary infrastructure" and can help "reduce false positives".

OneLogin IAM (identity logs): The OneLogin IAM solution (a Microsoft Sentinel content pack) ingests OneLogin platform events and user info via OneLogin's REST API. Using the Codeless Connector Framework, it pulls from OneLogin's Events API and Users API, storing data in the custom tables OneLoginEventsV2_CL and OneLoginUsersV2_CL. Typical events include user sign-ins, MFA actions, app accesses, and admin changes.

Prerequisites: create an OpenID Connect app in OneLogin (for a client ID/secret) and register it in Azure (Global Admin). The connector queries hourly (or on a schedule), within OneLogin's rate limit of 5,000 calls/hour.

Data mapping: OneLoginEventsV2_CL (the _CL suffix indicates a custom log table) holds event records (time, user, IP, event type, result, etc.); OneLoginUsersV2_CL contains user account attributes. These can be joined or used in analytics. For example, a query might look for failed login events:

OneLoginEventsV2_CL
| where Event_type_s == "UserSessionStart" and Result_s == "Failed"

(Actual field names depend on the schema.)

PingOne (identity logs): The PingOne Audit connector ingests audit activity from the PingOne identity platform via its REST API. It creates the table PingOne_AuditActivitiesV2_CL, which includes administrator actions, user logins, console events, and more. You configure a PingOne API client (client ID/secret) and set up the Codeless Connector Framework. Logs are retrieved (with attention to PingOne's license-based rate limits) and appended to the custom table. Analysts can query, for instance, PingOne_AuditActivitiesV2_CL for events like MFA failures or profile changes.

Keeper (password vault logs, optional): Keeper, a password management platform, can forward security events to Sentinel via Azure Monitor. As of the latest docs, logs are sent to a custom log table (commonly KeeperLogs_CL) using Azure Data Collection Rules. In Keeper's guide, you register an Azure AD app ("KeeperLogging") and configure Azure Monitor data collection; then, in the Keeper Admin Console, you specify the DCR endpoint. Keeper events (e.g. user logins, vault actions, admin changes) are ingested into the table named (e.g.) Custom-KeeperLogs_CL. Authentication uses the app's client ID/secret and a monitor endpoint URL. This is a bulk push of records rather than a scheduled pull.

Data ingested: custom Keeper events with fields like user, action, and timestamp. Keeper's integration is essentially via Azure Monitor (following the older Azure Sentinel approach).

Connector Configuration & Data Ingestion

Authentication and rate limits: Most connectors require API keys or OAuth credentials. GreyNoise and Team Cymru use single keys/credentials, with the Azure Function secured by a managed identity. OneLogin and PingOne use a client ID/secret and must respect their API limits (OneLogin ~5K calls/hour; PingOne depends on licensing). GreyNoise's enterprise API allows bulk lookups; the community API is limited (10 lookups/day for free), so production integration requires an Enterprise plan.

Sentinel tables: Data is inserted either into built-in tables or custom tables. GreyNoise feeds the ThreatIntelligenceIndicator table, populating fields like NetworkDestinationIP and ThreatSeverity (higher if classified "malicious"). Team Cymru's Scout connector creates many Cymru_Scout_*_CL tables. OneLogin's solution populates OneLoginEventsV2_CL and OneLoginUsersV2_CL.
PingOne yields PingOne_AuditActivitiesV2_CL. Keeper logs appear in a custom table (e.g. KeeperLogs_CL) as shown in Keeper's guide. Note: Sentinel's built-in identity tables (IdentityInfo, SigninLogs) are typically for Microsoft identities; third-party logs can be mapped to them via parsers or custom analytics rules, but by default they arrive in these custom tables.

Data Types & Schema

- Threat indicators: In ThreatIntelligenceIndicator, GreyNoise IPs appear as NetworkDestinationIP with associated fields (e.g. ThreatSeverity, IndicatorProvider == "GreyNoise", ConfidenceScore). (Future STIX tables may be used after 2025.)
- Custom CL logs: OneLogin events may include fields such as user_id_s, user_login_s, client_ip_s, and event_time. (The published parser issues indicate fields like app_name_s and role_id_d.) PingOne logs include eventType, user, clientIP, and result. Keeper logs contain Action, UserName, etc. These raw fields can be normalized in analytics rules or parsed via data transformations.
- Identity info: Although not directly ingested, identity attributes from OneLogin/PingOne (e.g. user roles, group IDs) could be periodically fetched and synced to Sentinel (via custom logic) to populate IdentityInfo records, aiding user-centric hunts.

Configuration Steps

- GreyNoise: In the Sentinel Content Hub, install the GreyNoise ThreatIntel solution and enter your GreyNoise API key when prompted. The solution deploys an Azure Function (requires write access to Functions) and sets up an ingestion schedule. Verify the ThreatIntelligenceIndicator table is receiving GreyNoise entries.
- Team Cymru: From the Marketplace, install "Team Cymru Scout" and provide Scout credentials. The solution creates an Azure Function app and defines a workflow to ingest or look up IPs/domains. (Often, analysts trigger lookups rather than scheduled ingestion, since Scout is lookup-based.) Ensure roles: the Function's managed identity needs Sentinel Contributor rights.
- OneLogin: Use the Data Connectors UI. Authenticate OneLogin by creating a new Sentinel web API authentication (with OneLogin's client ID/secret). Enable both "OneLogin Events" and "OneLogin Users". No agent is needed. After setup, data flows into OneLoginEventsV2_CL.
- PingOne: Similarly, configure the PingOne connector. Use the PingOne administrative console to register an OAuth client. In Sentinel's connector blade, enter the client ID/secret and specify the desired log types (Audit, possibly IdP logs). Confirm PingOne_AuditActivitiesV2_CL populates hourly.
- Keeper: Register an Azure AD app ("KeeperLogging") and assign it Monitoring roles (Publisher/Contributor) on your workspace and data collection endpoint. Create an Azure Data Collection Rule (DCR) and table (e.g. KeeperLogs_CL). In Keeper's Admin Console (Reporting & Alerts → Azure Monitor), enter the tenant ID, client ID/secret, and the DCR endpoint URL (format: https://<DCE>/dataCollectionRules/<DCR_ID>/streams/<table>?api-version=2023-01-01). Keeper will then push logs.

KQL lookup: To enrich a Sentinel event with these feeds, you might write:

OneLoginEventsV2_CL
| where EventType == "UserLogin" and Result == "Success"
| extend UserIP = ClientIP_s
| join kind=inner (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and ThreatSeverity >= 3
    | project NetworkDestinationIP, Category
) on $left.UserIP == $right.NetworkDestinationIP

This joins OneLogin sign-ins with GreyNoise's list of malicious scanners.
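Once the connectors are configured, a quick smoke test confirms that each destination table is actually receiving data. A hedged sketch (table names as described above; isfuzzy lets the union succeed even if a table has not been created yet):

// Row counts and latest record per custom table over the last 24 hours
union withsource=TableName isfuzzy=true
    OneLoginEventsV2_CL, OneLoginUsersV2_CL, PingOne_AuditActivitiesV2_CL, KeeperLogs_CL
| where TimeGenerated > ago(24h)
| summarize Rows = count(), Latest = max(TimeGenerated) by TableName

// Run separately: confirm the GreyNoise TI feed is current
ThreatIntelligenceIndicator
| where IndicatorProvider == "GreyNoise"
| summarize LatestIndicator = max(TimeGenerated)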
Enrichment & False-Positive Reduction

IOC enrichment pipelines: A robust TI pipeline in Sentinel often uses lookup tables and functions. For example, ingested TI (from GreyNoise or Team Cymru) can be stored in reference data or scheduled lookup tables to enrich incoming logs. Patterns include:

- Normalization: Normalize diverse feeds into common STIX schema fields (e.g. all IPs to NetworkDestinationIP, all domains to DomainName) so rules can treat them uniformly.
- Confidence scoring: Assign a confidence score to each indicator (from the vendor, or based on recency/frequency). For GreyNoise, for instance, you might use the classification (e.g. "malicious" vs. "benign") and history to score IP reputation. In Sentinel's ThreatIntelligenceIndicator.ConfidenceScore field you can set values (higher for high-confidence IOCs, lower for noisy ones).
- TTL & freshness: Some indicators (e.g. active C2 domains) expire, so setting a time-to-live is critical. Sentinel ingestion rules or parsers should use ExpirationDateTime or ValidUntil on indicators to avoid stale IOCs. For example, extend ValidUntil only if confidence is high.
- Conflict resolution: When the same IOC comes from multiple sources (e.g. an IP in both GreyNoise and Team Cymru), you can either merge metadata or choose the highest confidence. One approach: use the highest threat severity from any source. Sentinel's ThreatType tags (e.g. malicious-traffic) can accommodate multiple providers. (See the conflict-resolution sketch at the end of this section.)

False-positive reduction techniques:

- GreyNoise noise scoring: GreyNoise's primary utility is filtering. If an IP is labeled noise=true (i.e. just scanning, not actively malicious), rules can deprioritize alerts involving that IP. For example, suppress an alert if its source IP appears in GreyNoise as a benign scanner.
- Team Cymru reputation: Use Scout data to gauge risk; e.g. if an IP's open-port fingerprint or domain history shows no malicious tags, it may be low-risk. Conversely, a known hostile IP (e.g. one seen in ransomware networks) should raise the alert level. Scout's thousands of context tags help refine a binary IOC.
- Contextual identity signals: Leverage OneLogin/PingOne context to filter alerts. For instance, if a sign-in event is associated with a high-risk location (e.g. a new country) and the IP is a GreyNoise scanner, flag it. If an IP is marked benign, drop or suppress the alert. Correlate login failures: if a single IP causes many failures across multiple users, it might be credential stuffing (T1110); but if that IP is a known benign scanner, consider it low priority.
- Thresholding & suppression: Build analytic suppression rules. Example: only alert on more than 5 failed logins in 5 minutes from an IP, and only when that IP is not noise. Or ignore DNS queries to domains that TI flags as benign/whitelisted. Apply tag-based rules: some connectors allow tagging known internal assets or trusted scan ranges to avoid alerts.

Use GreyNoise to suppress alerts:

SecurityEvent
| where EventID == 4625 and Account != "SYSTEM"
| join kind=leftanti (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and Classification == "benign"
    | project NetworkSourceIP
) on $left.IPAddress == $right.NetworkSourceIP

This rule filters out Windows 4625 login failures originating from GreyNoise-known benign scanners.
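The TTL and conflict-resolution patterns above can be combined into one hygiene query that downstream rules join against instead of the raw table. A minimal sketch, assuming the classic ThreatIntelligenceIndicator schema described earlier:

// Active (non-expired) IP indicators, collapsed across providers, keeping the highest severity/confidence
ThreatIntelligenceIndicator
| where isnotempty(NetworkDestinationIP)
| summarize arg_max(TimeGenerated, *) by IndicatorId        // latest version of each indicator
| where ExpirationDateTime > now()                          // TTL: drop stale IOCs
| summarize Providers = make_set(IndicatorProvider),
            MaxSeverity = max(ThreatSeverity),
            MaxConfidence = max(ConfidenceScore) by NetworkDestinationIP
| extend MultiSource = array_length(Providers) > 1          // same IOC reported by multiple feeds

Saving this as a KQL function gives every analytics rule a single, deduplicated view of each IOC with its best available score.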
Identity Attack Chains & Detection Rules

Modern account attacks often involve sequential activities. By combining identity logs with TI, we can detect advanced patterns. Below are common chains and rule ideas.

Credential stuffing (MITRE T1110): Often seen as many login failures followed by a success. Detection: Look for multiple failed OneLogin/PingOne sign-ins for the same or different accounts from a single IP, followed by a success. Enrich with GreyNoise: if the source IP is in GreyNoise (indicating scanning), raise the severity. Rule:

let SuspiciousIP = OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Failed"
| summarize CountFailed = count() by ClientIP_s
| where CountFailed > 5;
OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Success" and ClientIP_s in (SuspiciousIP | project ClientIP_s)
| join kind=inner (
    ThreatIntelligenceIndicator
    | where ThreatType == "ip"
    | extend GreyNoiseClass = tostring(Classification)
    | project IP = NetworkSourceIP, GreyNoiseClass
) on $left.ClientIP_s == $right.IP
| where GreyNoiseClass == "malicious"
| project TimeGenerated, Account_s, ClientIP_s, GreyNoiseClass

Tactics: Initial Access (T1110). Severity: High.

Account takeover / impossible travel (T1078, Valid Accounts): Sign-ins from unusual geographies or devices. Detection: Compare the user's current sign-in location against a historical baseline using OneLogin/PingOne logs: if two logins by the same user occur in different countries with insufficient time to travel between them, trigger. Enrich: if the login IP is also known infrastructure (Team Cymru PDNS, etc.), raise the alert. Rule:

PingOne_AuditActivitiesV2_CL
| where TimeGenerated > ago(24h)                      // check the last day
| where EventType_s == "UserLogin"
| extend loc = strcat(tostring(City_s), ", ", tostring(Country_s))
| summarize Logins = count(), Locations = make_set(loc),
            FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated) by User_s
| where Logins > 1 and array_length(Locations) > 1 and LastSeen - FirstSeen < 1h
| project User_s, FirstSeen, LastSeen, Locations

(This query flags users who sign in from more than one location within an hour.) Tactics: Reconnaissance / Initial Access. Severity: Medium.

Lateral movement (T1021): Use of an account on multiple systems/apps. Detection: Two or more distinct application/service authentications by the same user within a short time. Use OneLogin app-ID fields or audit logs for access. If these are followed by suspicious network activity (e.g. contacting C2 infrastructure flagged by GreyNoise), escalate. Tactics: Lateral Movement. Severity: High.

Privilege escalation (T1098, Account Manipulation): An admin account is changed or MFA factors are reset in OneLogin/PingOne, especially after an anomalous login. Detection: Monitor OneLogin admin events ("User updated", "MFA enrolled/removed") and cross-check the actor's IP against threat feeds. Tactics: Persistence / Privilege Escalation. Severity: High.

Analytics Rules (KQL)

Below are six illustrative Sentinel analytics rules combining TI and identity logs. Each rule shows the logic, tactics, severity, and MITRE IDs. (Adjust field names to your schemas and normalize the _CL tables as needed.)

1. Multiple failed logins from a malicious scanner (T1110) – High severity. Detect credential stuffing by identifying more than 5 failed login attempts from the same IP, where that IP is classified as malicious by GreyNoise.
let BadIP = OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Failed"
| summarize attempts = count() by SourceIP_s
| where attempts >= 5;
OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Success" and SourceIP_s in (BadIP | project SourceIP_s)
| join (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and ThreatSeverity >= 4
    | project MaliciousIP = NetworkDestinationIP
) on $left.SourceIP_s == $right.MaliciousIP
| extend AttackFlow = "CredentialStuffing", MITRE = "T1110"
| project TimeGenerated, UserName_s, SourceIP_s, MaliciousIP

Logic: Correlate a failed-then-successful login from the same IP with a GreyNoise "malicious" classification.

2. Impossible travel / anomalous geo (T1078) – Medium severity. A user signs in from two distant locations within an hour.

// Compare each successful login with the same user's previous login
PingOne_AuditActivitiesV2_CL
| where EventType_s == "UserLogin" and Outcome_s == "Success"
| sort by User_s asc, TimeGenerated asc
| extend PrevUser = prev(User_s), PrevTime = prev(TimeGenerated), PrevCountry = prev(Country_s)
| where User_s == PrevUser and Country_s != PrevCountry and TimeGenerated - PrevTime < 1h
| project Time = TimeGenerated, User = User_s, From = PrevCountry, To = Country_s, MITRE = "T1078"

Logic: Flag consecutive logins by the same user from different countries too close together in time. (A true distance check with geo_distance_2points would require latitude/longitude fields in the audit log; the country comparison above avoids that dependency.)

3. Suspicious admin change (T1098) – High severity. Detect a change to admin settings (such as a role assignment or MFA reset) in PingOne from a high-risk IP (per Team Cymru or GreyNoise) or after failed logins.

PingOne_AuditActivitiesV2_CL
| where EventType_s in ("UserMFAReset", "UserRoleChange")   // example admin events
| extend ActorIP = tostring(InitiatingIP_s)
| join (
    ThreatIntelligenceIndicator
    | where ThreatSeverity >= 3
    | project BadIP = NetworkDestinationIP
) on $left.ActorIP == $right.BadIP
| extend MITRE = "T1098"
| project TimeGenerated, ActorUser_s, Action = EventType_s, ActorIP

Logic: Raise an alert if an admin action originates from a known bad IP.

4. Malicious domain access (T1071) – Medium severity. Internal logs (e.g. DNS or web proxy) show access to a domain listed by Team Cymru Scout as C2 or reconnaissance infrastructure.

DnsEvents
| where isnotempty(Name)
| join kind=inner (
    Cymru_Scout_Domain_Data_CL
    | where ThreatTag_s == "Command-and-Control"
    | project DomainName_s
) on $left.Name == $right.DomainName_s
| extend MITRE = "T1071"
| project TimeGenerated, Computer, Name

Logic: Correlate internal DNS queries (here from the Sentinel DnsEvents table; substitute your DNS source) with Scout's flagged C2 domains. (Requires that the domain data is ingested or synced.)

5. Brute-force firewall-blocked IP (T1110) – Low to Medium severity. Firewall logs show an IP blocked for many attempts, and that IP is not noise per GreyNoise (i.e. it is not a known benign scanner).
AzureDiagnostics
| where Category == "NetworkSecurityGroupFlowEvent" and msg_s contains "DIRECTION=Inbound" and Action_s == "Deny"
| summarize attemptCount = count() by IP = SourceIp_s, FlowTime = bin(TimeGenerated, 1h)
| where attemptCount > 50
| join kind=leftanti (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and Classification == "benign"
    | project NoiseIP = NetworkDestinationIP
) on $left.IP == $right.NoiseIP
| extend MITRE = "T1110"
| project IP, attemptCount, FlowTime

Logic: Many inbound denies (a possible brute force) from an IP that GreyNoise does not classify as a benign scanner.

6. New device enrolled (T1078) – Low severity. A user enrolls a new device or location for MFA after an unusual login.

OneLoginEventsV2_CL
| where EventType == "NewDeviceEnrollment"
| join kind=inner (
    OneLoginEventsV2_CL
    | where EventType == "UserSessionStart" and Result == "Success"
    | summarize arg_min(TimeGenerated, ClientIP_s) by User_s   // earliest prior successful login per user
    | project User_s, loginTime = TimeGenerated, loginIP = ClientIP_s
) on User_s
| where loginIP != DeviceIP_s
| extend MITRE = "T1078"
| project TimeGenerated, User_s, DeviceIP_s, loginIP

Logic: Flag a new device enrollment from an IP that differs from the user's prior login IP (possible account takeover).

Note: The above rules are illustrative. Tune threshold values (e.g. attempt counts) to your environment, map the event fields (EventType, Result, etc.) to your actual schema, set the severity mapping in the rule configuration as indicated, and tag rules with MITRE IDs for context.

TI-Driven Playbooks and Automation

Automated response can amplify TI. Patterns include:

- IOC blocking: On an alert (e.g. a suspicious IP login), an automation runbook can call Azure Firewall, Microsoft Defender, or external firewall APIs to block the offending IP. For instance, a Logic App could trigger on the analytics alert, take the TI feed IP, and call the Azure Firewall network-rule PowerShell cmdlets to add a deny rule.
- Enrichment workflow: After an alert triggers, an Azure Logic App playbook can enrich the incident by querying TI APIs. For example, given an IP from the alert, call the GreyNoise API or Team Cymru Scout API in real time (via an HTTP action), add the classification to the incident details, and tag the incident accordingly (e.g. GreyNoiseStatus: malicious). This adds context for the analyst.
- Alert suppression: Implement playbook-driven suppression. For example, an alert triggered by an external IP can invoke a playbook that checks GreyNoise; if the IP is benign, the playbook can auto-close the alert or mark it as a false positive, reducing analyst load. (A watchlist-based variant is sketched below.)
- Automated TI feed updates: Periodically fetch open-source or commercial TI and use a playbook to push new indicators into Sentinel's TI store via the Graph API.
- Incident enrichment: On incident creation, a playbook can query OneLogin/PingOne for additional user details (like department or location, via their APIs) and add them as a note on the incident.
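As a concrete sketch of the suppression pattern: suppose a playbook maintains a Sentinel watchlist (hypothetically named "BenignScanners") of IPs that GreyNoise classifies as benign; analytics rules can then exclude those IPs up front:

// Exclude playbook-curated benign scanner IPs from a failed-login rule
let BenignScanners = _GetWatchlist('BenignScanners')   // hypothetical watchlist name
    | project IP = tostring(SearchKey);
OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Failed"
| where ClientIP_s !in (BenignScanners)
| summarize FailedLogins = count() by ClientIP_s, bin(TimeGenerated, 5m)
| where FailedLogins > 5

Because the playbook only ever adds to the watchlist, the suppression stays reversible: removing an entry immediately restores alerting for that IP.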
Performance, Scalability & Costs

TI feeds and identity logs can be high-volume. Key considerations:

- Data ingestion costs: Every log and TI indicator ingested into Sentinel is billable by the GB. Bulk TI indicator ingestion (like GreyNoise pulling thousands of IPs per day) adds storage cost. Use Sentinel's Data Collection Rules (DCRs) to apply ingestion-time filters (e.g. only store indicators above a confidence threshold) to reduce volume. The GreyNoise feed is typically modest (a daily pull of perhaps thousands of IPs); identity log volume (OneLogin/PingOne) depends on organization size and could be megabytes per day. Use Sentinel ingestion-time transformations and analytics filters to drop low-value logs.
- Query performance: Custom log tables (OneLogin, PingOne, KeeperLogs_CL) can grow large. Periodically archive old data (e.g. export data older than 90 days to storage, then purge). Use materialized views or scheduled summary tables for heavy queries (e.g. pre-aggregate hourly login counts). For threat indicator tables, leverage the built-in indices on IndicatorId and the network IP fields for fast joins. Use project-away (e.g. project-away _*) to remove metadata columns from large join queries.
- Retention & storage: Configure retention per table. If historical TI is less needed, set shorter retention. Use Azure Monitor's tiering/archive for seldom-used data. For large TI volumes (e.g. feeding multiple TIPs), consider using the Sentinel data lake (or connecting Log Analytics to ADLS Gen2) to offload raw ingest cheaply.
- Scale-out architecture: For very large environments, use multiple Sentinel workspaces (e.g. regional) and aggregate logs via Azure Lighthouse or Sentinel Fusion. TI feeds can be shared: one workspace collects TI, then distributes it to others via Sentinel's TI management (feeds can be published and shared across workspaces).
- Connector limits: API rate limits dictate update frequency. Schedule connectors accordingly (e.g. daily for TI, hourly for identity events). Avoid hourly pulls of essentially static data (a users list can be daily). For OneLogin/PingOne, use incremental tokens or webhooks if possible to reduce load.
- Monitoring health: Use Sentinel's Log Analytics and Azure Monitor metrics to track ingestion volume and connector errors. For example, monitor the Functions running GreyNoise/Scout for failures or throttling.

Deployment Checklist & Guide

1. Prepare the Sentinel workspace: Ensure a Log Analytics workspace with Sentinel enabled. Record the workspace ID and region.
2. Register applications: In Azure AD, create and note any service principal needed for Functions or connectors (e.g. a Sentinel-managed identity for Functions). In each vendor portal, register API apps and credentials (OneLogin OIDC app, PingOne API client, Keeper AD app).
3. Network & security: If needed, configure firewall rules to allow outbound access to the vendor APIs.
4. Install connectors: In the Sentinel Content Hub or Marketplace, install the solutions for GreyNoise TI, Team Cymru Scout, OneLogin IAM, and PingOne. Follow each wizard to input credentials. Verify the "Data Types" (logs, alerts, etc.) are enabled.
5. Create tables & parsers (if manual): For Keeper or other unsupported logs, manually create custom tables (via DCR in the Azure portal). Import JSON to define fields as shown in Keeper's docs.
6. Test data flow: After each setup, wait 1–24 hours and run a simple query on the destination table (e.g. OneLoginEventsV2_CL | take 5) to confirm ingestion.
7. Deploy ingestion rules: Use Sentinel threat intelligence ingestion rules to fine-tune feeds (e.g. mark high-confidence feeds to extend expiration). Optionally tag or allow-list known-good sources.
8. Configure analytics: Enable or create rules using the KQL above. Place them in the correct threat hunting or incident rule categories (Credential Access, Lateral Movement, etc.) and assign appropriate alert severities.
9. Set up playbooks: For automated actions (alert enrichment, IOC blocking), create Logic App playbooks. Test with mock alerts (dry run) to ensure correct API calls.
10. Tuning & baseline: After the initial alerts, tune queries (thresholds, allow-lists) to reduce noise. Maintain suppression lists (e.g. internal pentest IPs). Use the MITRE mapping in the rule details for clarity.
11. Documentation & training: Document field mappings (e.g. OneLoginEvents fields) and train SOC staff on the new TI-enriched alert fields.
Connectors Comparison

| Connector | Data Sources | Sentinel Tables | Update Freq. | Auth Method | Key Fields Enriched | Limits/Cost | Pros/Cons |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GreyNoise | IP intelligence (scanners) | ThreatIntelligenceIndicator | Daily (scheduled pull) | API key | IP classification, noise | API key required; paid license for large usage | Pros: filters benign scans; broad scan visibility. Cons: IP-based only (no domain/file). |
| Team Cymru Scout | Global IP/domain telemetry | Cymru_Scout_*_CL (custom tables) | On-demand or daily | Account credentials | Detailed IP/domain context (ports, PDNS, ASN, etc.) | Requires Team Cymru subscription; potentially high feed cost | Pros: rich context (open ports, DNS, certs); great for IOC enrichment. Cons: complex setup; data lands in custom tables only. |
| OneLogin IAM | OneLogin user/auth logs | OneLoginEventsV2_CL, OneLoginUsersV2_CL | Polls hourly | OAuth2 (client ID/secret) | User, app, IP, event type (login, MFA, etc.) | OneLogin API: 5K calls/hour; moderate data volume | Pros: direct insight into cloud identity use; built-in parser available. Cons: limited to the OneLogin environment. |
| PingOne Audit | PingOne audit logs | PingOne_AuditActivitiesV2_CL | Polls hourly | OAuth2 (client ID/secret) | User actions, admin events, MFA logs | Rate limited by Ping license; moderate data volume | Pros: captures critical identity events; widely used product. Cons: requires a PingOne Advanced license for audit logs. |
| Keeper (custom) | Keeper security events | KeeperLogs_CL (custom) | Push (continuous) | OAuth2 (client ID/secret) + Azure DCR | Vault logins, record accesses, admin changes | None (push model); storage costs | Pros: visibility into password vault activity (often a blind spot). Cons: more manual setup; custom logs not parsed by default. |

Data Flow Diagram

This flowchart shows GreyNoise (GN) feeding the Threat Intelligence table, Team Cymru feeding enrichment tables, and identity sources pushing logs. All data converges into Sentinel, where enrichment lookups inform analytics and automated responses.
Observed Automation Discrepancies

Hi Team, I want to understand the logic behind the Defender XDR automation engine. How does it work? I have observed Defender XDR automation engine behavior contrary to expectations: although we expected identical incident and automation handling in both environments, discrepancies were observed. Specifically, incidents with high-severity alerts were automatically closed by Defender XDR's automation engine before reaching our SOC for review, raising concerns among clients and colleagues. Automation rules are clearly logged in the activity log, whereas actions performed by Microsoft Defender XDR are less transparent. For example, a high-severity alert related to a phishing incident was closed by Defender XDR's automation, resulting in the associated incident being closed and removed from SOC review. The automation was not triggered by our own rules but by Microsoft's Defender XDR, and we are seeking clarification on the underlying logic.
Protection Against Email Bombs with Microsoft Defender for Office 365

In today's digital age, email remains a critical communication tool for businesses and individuals. However, with the increasing sophistication of cyberattacks, email security has become more important than ever. One growing threat is email bombing, a form of net abuse that sends large volumes of email to an address to overflow the mailbox, overwhelm the server, or distract attention from important email messages indicating a security breach (see Email bomb - Wikipedia).

Understanding Email Bombing

Email bombing typically involves subscribing victims to a large number of legitimate newsletter and subscription services. Each subscription service sends email notifications, which in aggregate create a large stream of messages into the victim's inbox, making triage of legitimate email very difficult. This form of attack is essentially a denial-of-service (DDoS) attack on the victim's email-triaging attention budget.

Hybrid Attacks

More recently, email subscription bombs have been coupled with simultaneous lures on Microsoft Teams, Zoom, or via phone calls. Attackers impersonate IT support and offer to help solve the email problem caused by the spike of unwanted emails, ultimately compromising the victim's system or installing malware on it. This type of attack is effective because it creates a sense of urgency and legitimacy, making victims more likely to accept remote assistance and inadvertently allow malware planting or data theft. Read about the use of mail bombs where threat actors misused Quick Assist in social engineering attacks leading to ransomware (Microsoft Security Blog).

Incidence and Purpose of Email Bombing

Email bombing attacks have been around for many years but can have significant impact on targeted individuals, such as enterprise executives and HR or finance representatives. These attacks are often used as precursors to more serious security incidents, including malware planting, ransomware, and data exfiltration. They can also mute important security alerts, making it easier for attackers to carry out fraudulent activities without detection.

New Detection Technology for Mail Bombing Attacks

To address these attacks, Microsoft Defender has released a comprehensive solution involving a durable block to limit the influx of emails, the majority of which are often spam. By intelligently tracking message volumes across different sources and time intervals, the new detection leverages historical patterns of the sender and signals related to spam content. It prevents mail bombs from being dropped into the user's inbox; the messages are instead sent to the Junk folder in Outlook.

Note: Safe-sender lists in Outlook continue to be honored, so emails from trustworthy sources are not unexpectedly moved to the Junk folder (preventing false positives).

Since the initial rollout started in early May, we have seen a tremendous impact in blocking mail bombing attacks out of our customers' inboxes.

How to Leverage the New Mail Bombing Detection Technology in SOC Experiences

1. Investigation and hunting: SOC analysts can now view the new detection technology as "Mail bombing" within Threat Explorer, the Email entity page, and Advanced Hunting, empowering them to investigate, filter, and hunt for threats related to mail bombing.
2. Custom detection rule: To analyze the frequency and volume of attacks via the mail bombing vector, or to receive automated alerts whenever a mail bombing attack occurs, SOC analysts can create custom detection rules in Advanced Hunting by writing a KQL query over the DetectionMethods column of the EmailEvents table. Here's a sample query to get you started:

EmailEvents
| where Timestamp > ago(1d)
| where DetectionMethods contains "Mail bombing"
| project Timestamp, NetworkMessageId, SenderFromAddress, Subject, ReportId
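To quantify frequency and volume over time, the same filter can be aggregated into a daily trend. A small variation on the sample above (the 30-day window and daily binning are illustrative choices):

// Daily mail bombing volume, targeted mailboxes, and distinct senders
EmailEvents
| where Timestamp > ago(30d)
| where DetectionMethods contains "Mail bombing"
| summarize Messages = count(),
            TargetedMailboxes = dcount(RecipientEmailAddress),
            DistinctSenders = dcount(SenderFromAddress) by bin(Timestamp, 1d)
| order by Timestamp asc

The same aggregation can back a custom detection rule that fires when a single day's volume crosses a threshold you choose.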
The SOC experiences are rolled out worldwide to all customers.

Conclusion

Email bombs represent a serious threat in the world of cybersecurity. With the new detection technology for mail bombing, Microsoft Defender for Office 365 protects users from these attacks and gives Security Operations Center analysts visibility into them, so they can take quick action and keep their organizations safe.

Note: Mail bombing protection is available by default in Exchange Online Protection and the Microsoft Defender for Office 365 plans. This blog post is associated with Message Center post MC1096885. Also read Part 2 of our blog series to learn more about protection against multi-modal attacks involving mail bombing and correlation of Microsoft Teams activity in Defender. Watch this video to learn more: Microsoft Defender for Office 365 | Mail Bombing and Mixed-Mode Attack Protection.

Learn more:
- Detection technology details table
- What's on the Email entity page
- Filterable properties in the All email view in Threat Explorer

Part 3: Build custom email security reports with Power BI and workbooks in Microsoft Sentinel

TL;DR: We're releasing a brand-new Power BI template for email security reporting and a major update (v3) to the Microsoft Sentinel workbook. Both solutions share the same rich visuals and insights. Choose Power BI for quick deployment without Sentinel, or the Sentinel workbook for extended data retention and multi-tenant scenarios. Get started in minutes with either option.

Introduction

Security teams in both small and large organizations track key metrics to make critical security decisions and identify meaningful trends. While Microsoft Defender for Office 365 provides rich, built-in reporting capabilities, many security teams need custom reporting solutions to create dedicated views, combine multiple data sources, and derive deeper insights tailored to their unique requirements. Last year (in Part 1 and Part 2) we shared examples of how you can use workbooks in Microsoft Sentinel to build a custom email security insights dashboard for Microsoft Defender for Office 365. Today, we are excited to announce the release of a new Power BI template file for Microsoft Defender for Office 365 customers, along with an updated version of the Microsoft Defender for Office 365 Detections and Insights workbook in Microsoft Sentinel. Both solutions share the same visual design and structure, giving you a consistent experience regardless of which platform you choose.

Power BI template file - Microsoft Defender for Office 365 Detections and Insights:
Microsoft Sentinel workbook - Microsoft Defender for Office 365 Detections and Insights:

NEW: Power BI template file for Microsoft Defender for Office 365 Detections and Insights

This custom reporting template uses Power BI and Microsoft Defender XDR Advanced Hunting through the Microsoft Graph security API. It is designed for Microsoft Defender for Office 365 customers who have access to Advanced Hunting but are not using Microsoft Sentinel. Advanced Hunting data in the Microsoft Defender for Office 365 tables is available for up to 30 days; the template uses these same tables to visualize insights into an organization's email security, including the protection, detection, and response metrics provided by Microsoft Defender for Office 365.

Note: If data retention beyond 30 days is required, customers can use the Microsoft Defender for Office 365 Detections and Insights workbook in Microsoft Sentinel.

You can find the new .pbit template file and detailed setup instructions in the unified Microsoft Sentinel and Microsoft 365 Defender GitHub repository. The Power BI template uses the same visuals and structure as the Microsoft Defender for Office 365 Detections and Insights workbook in Microsoft Sentinel, providing an easy way to gain deep email security insights across a wide range of use cases.

UPDATED: Microsoft Defender for Office 365 Detections and Insights workbook in Microsoft Sentinel

We are excited to announce the release of a new version (3.0.0) of the Microsoft Defender for Office 365 Detections and Insights workbook in Microsoft Sentinel. The workbook is part of the Microsoft Defender XDR solution in Microsoft Sentinel and can be installed and put to use with a few clicks. In this release, we incorporated feedback received from many customers over the past few months to add new visuals, update existing ones, and add insights focused on security operations.
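Under the hood, both the Power BI template and the workbook are driven by Advanced Hunting queries over the Defender for Office 365 email tables. As a hedged illustration (not the template's actual query), a visual that summarizes detected threats by day might run something like:

// Daily email volume by threat type over the last 30 days
EmailEvents
| where Timestamp > ago(30d)
| where isnotempty(ThreatTypes)
| summarize Emails = count() by ThreatTypes, bin(Timestamp, 1d)
| order by Timestamp asc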
What's New

Here are some notable changes and capabilities in the updated workbook template:

- Improved structure: Headings and grouped insights have been added to tabs for easier navigation and understanding of metrics.
- Contextual explanations: Each tab, section, and visual now includes descriptions to help users interpret insights effectively.
- Drill-down capability: A single "Open query link" action lets users view the underlying KQL query for each visual, enabling quick investigation and hunting by modifying conditions or removing summaries to access raw data.
- Detection Dashboard tab enhancements: Added an example effectiveness metric, updated visuals to focus on overall Microsoft Defender for Office 365 protection values, and introduced new sections for Emerging Threats and Microsoft 365 Secure Email Gateway Performance.
- New Security Operations Center (SOC) Insights tab: Provides operational metrics such as security incident response, investigation, and response actions for SOC teams.
- Advanced threat insights: Includes new LLM-based content analysis detections and threat classification insights on the Emails – Phish Detections tab.
- External forwarding insights: Adds deep visibility into inbox rules and SMTP forwarding in Outlook, including destination details to assess potential data leakage risks.
- Geo-location improvements: Sender IPv4 insights now include top countries for better geographic context for each threat type (malware, spam, phish).
- Enhanced top attacked users and top senders: Added TotalEmailCount and Bad_Traffic_Percentage for richer context in the top attacked users and top senders charts.
- Expanded URL click insights: URL click-based threat detection visuals now include Microsoft 365 Copilot as a workload.

How to use the workbook across multiple tenants

If you manage multiple environments with Microsoft Sentinel, or you are a Managed Security Service Provider (MSSP) working across multiple customer tenants, you can also use the workbook in multi-tenant scenarios. Once the required configuration is in place, you can change the Subscription and Workspace parameters in the workbook to multi-select and load data from one or more tenants. This enables deep email security insights in multi-tenant environments, including:

- Aggregated multi-tenant view: You can view aggregated insights across tenants in a single workbook view. By multi-selecting tenants in the Subscription and Workspace parameters, the workbook automatically loads and combines data from all selected environments for all visuals on all tabs.
- Side-by-side comparison: For example, you can compare phishing detection trends or top attacked users across two or more tenants simply by opening the workbook in two browser windows placed side by side.

Note: For the multi-select option to work in the current workbook version, you need to manually adjust the Subscription and Workspace parameters. This configuration is planned to become the default in the next release of the workbook; until then, you can apply the change using the workbook's Edit mode.

How to get the updated workbook version

The latest version of the Microsoft Defender for Office 365 Detections and Insights workbook is available as part of the Microsoft Defender XDR solution in the Microsoft Sentinel Content hub. Version 3.0.13 of the solution includes the updated workbook template. If you already have the Microsoft Defender XDR solution deployed, version 3.0.13 is available now as an update.
After you install the update, the new workbook template is available to use. Note: If you saved the workbook from a previous template version, delete the old workbook and use the save button on the new template to recreate a local version with the latest updates. If you are installing the Microsoft Defender XDR solution for the first time, you are deploying the latest version and will have the updated template ready to use.

How to edit and share the workbook with others

You can customize each visual easily: edit the workbook after saving, then adjust the underlying KQL query, change the visual type, or create new insights. More information: Visualize your data using workbooks in Microsoft Sentinel | Microsoft Learn. Granting other users access to the workbook is also possible; see Manage Access to Microsoft Sentinel Workbooks with Lower Scoped RBAC on the Microsoft Sentinel Blog.

Do you have feedback related to reporting in Microsoft Defender for Office 365? You can provide it directly via the form at aka.ms/mdoreportingfeedback. Do you have questions or feedback about Microsoft Defender for Office 365? Engage with the community and Microsoft experts in the Defender for Office 365 forum.

More information:
- Integrate Microsoft Defender XDR with Microsoft Sentinel
- Learn more about Microsoft Sentinel workbooks
- Learn more about Microsoft Defender XDR
Question malware autodelete

Malware like Trojan:Win32/Wacatac.C!ml can download other malware, and that second payload can perform the malicious action and then delete itself. If the downloaded malware deletes itself, will the next antivirus scan find no trace of it, meaning it goes undetected by the scan?
Cloud Posture + Attack Surface Signals in Microsoft Sentinel (Prisma Cloud + Cortex Xpanse)

Microsoft expanded Microsoft Sentinel's connector ecosystem with Palo Alto integrations that pull cloud posture, cloud workload runtime, and external attack surface signals into the SIEM, so your SOC can correlate "what's exposed" and "what's misconfigured" with "what's actively being attacked." Specifically, the Ignite connectors list includes Palo Alto: Cortex Xpanse CCF and Palo Alto: Prisma Cloud CWPP.

Why these connectors matter for Sentinel detection engineering

Traditional SIEM pipelines ingest "events." But exposure and posture are just as important as the events, because they tell you which incidents actually matter.

- Attack surface (Xpanse) tells you what's reachable from the internet and what attackers can see.
- Posture (Prisma CSPM) tells you which controls are broken (public storage, permissive IAM, weak network paths).
- Runtime (Prisma CWPP) tells you what's actively happening inside workloads (containers/hosts/serverless).

In Sentinel, these become powerful when you can join them with your "classic" telemetry (cloud activity logs, NSG flow logs, DNS, endpoint, identity). Result: fewer false positives, faster triage, better prioritization.

Connector overview (what each one ingests)

1) Palo Alto Prisma Cloud CSPM Solution
- What comes in: Prisma Cloud CSPM alerts and audit logs via the Prisma Cloud CSPM API.
- What it ships with: connector + parser + workbook + analytics rules + hunting queries + playbooks (prebuilt content).
- Best for: misconfiguration alerts (public storage, overly permissive IAM, weak encryption, risky network exposure) and compliance posture drift plus audit readiness (prove you're monitoring and responding).

2) Palo Alto Prisma Cloud CWPP (Preview)
- What comes in: CWPP alerts via the Prisma Cloud API (Compute/runtime side).
- Implementation detail: built on the Codeless Connector Platform (CCP).
- Best for: runtime detections (host/container/serverless security alerts) and "exploit succeeded" signals that you need to correlate with posture and exposure.

3) Palo Alto Cortex Xpanse CCF
- What comes in: alert logs fetched from the Cortex Xpanse API, ingested using the Microsoft Sentinel Codeless Connector Framework (CCF).
- Important: supports DCR-based ingestion-time transformations that parse to a custom table for better performance.
- Best for: external exposure findings and "internet-facing risk" detection; turning exposure into incidents only when the asset is critical or actively targeted.

Reference architecture (how the data lands in Sentinel)

Here's the mental model you want for all three:

flowchart LR
    A[Palo Alto Prisma Cloud CSPM] -->|CSPM API: alerts + audit logs| S[Sentinel Data Connector]
    B[Palo Alto Prisma Cloud CWPP] -->|Prisma API: runtime alerts| S
    C[Cortex Xpanse] -->|Xpanse API: exposure alerts| S
    S -->|CCF/CCP + DCR Transform| T[(Custom Tables)]
    T --> K[KQL Analytics + Hunting]
    K --> I[Incidents]
    I --> P[SOAR Playbooks]
    K --> W[Workbooks / Dashboards]

Key design point: Xpanse explicitly emphasizes DCR transformations at ingestion time; use that to normalize fields early so your queries stay fast under load.

Deployment patterns (practical, SOC-friendly setup)

Step 0 — Decide what goes to "analytics" vs "storage"

If you're using Sentinel's data lake strategy, posture/exposure data is a perfect candidate for longer retention (trend + audit), while only high-severity findings may need real-time analytics.
Step 1 — Install solutions from Content Hub

Install:
- Palo Alto Prisma Cloud CSPM Solution
- Palo Alto Prisma Cloud CWPP (Preview)
- Palo Alto Cortex Xpanse CCF

Step 2 — Credentials & least privilege

Create dedicated service accounts / API keys in the Palo Alto products with read-only scope for:
- CSPM alerts + audit
- CWPP alerts
- Xpanse alerts/exposures

Step 3 — Validate ingestion (don't skip this)

In Sentinel Logs:
- Locate the custom tables created by each solution (Tables blade).
- Run basic sanity queries: "all events last 1h", "top 20 alert types", "distinct severities".

Tip: Save "ingestion smoke tests" as hunting queries so you can re-run them after upgrades.

Step 4 — Turn on included analytics content (then tune)

The Prisma Cloud CSPM solution comes with multiple analytics rules, hunting queries, and playbooks out of the box—enable them gradually and tune thresholds before going wide.

Detection engineering: high-signal correlation recipes

Below are patterns that consistently outperform "single-source alerts." I'm giving them as KQL templates using placeholder table names, because your exact custom table names and columns are workspace-dependent (you'll see them after install).

Recipe 1 — "Internet-exposed + actively probed" (Xpanse + network logs)

Goal: Only fire when the exposure is real and there's traffic evidence.

let xpanse = <XpanseTable>
| where TimeGenerated > ago(24h)
| where Severity in ("High","Critical")
| project AssetIp=<ip_field>, Finding=<finding_field>, Severity, TimeGenerated;
let net = <NetworkFlowTable>
| where TimeGenerated > ago(24h)
| where Direction == "Inbound"
| summarize Hits=count(), SrcIps=make_set(SrcIp, 50) by DstIp;
xpanse
| join kind=inner (net) on $left.AssetIp == $right.DstIp
| where Hits > 50
| project TimeGenerated, Severity, Finding, AssetIp, Hits, SrcIps

Why it works: Xpanse gives you exposure; flow/WAF/firewall logs give you intent.

Recipe 2 — "Misconfiguration that creates a breach path" (CSPM + identity or cloud activity)

Goal: Prioritize posture findings that coincide with suspicious access or admin changes.

let posture = <PrismaCSPMTable>
| where TimeGenerated > ago(7d)
| where PolicySeverity in ("High","Critical")
| where FindingType has_any ("Public", "OverPermissive", "NoMFA", "EncryptionDisabled")
| project ResourceId=<resource_id>, Finding=<finding>, PolicySeverity, FirstSeen=TimeGenerated;
let activity = <CloudActivityTable>
| where TimeGenerated > ago(7d)
| where OperationName has_any ("RoleAssignmentWrite","SetIamPolicy","AddMember","CreateAccessKey")
| project ResourceId=<resource_id>, Actor=<caller>, OperationName, TimeGenerated;
posture
| join kind=inner (activity) on ResourceId
| project PolicySeverity, Finding, OperationName, Actor, FirstSeen, TimeGenerated
| order by PolicySeverity desc, TimeGenerated desc

Recipe 3 — "Runtime alert on a workload that was already high-risk" (CWPP + CSPM)

Goal: Raise severity when runtime alerts occur on assets with known posture debt.

let risky_assets = <PrismaCSPMTable>
| where TimeGenerated > ago(30d)
| where PolicySeverity in ("High","Critical")
| summarize RiskyFindings=count() by AssetId=<asset_id>;
<CWPPTable>
| where TimeGenerated > ago(24h)
| project AssetId=<asset_id>, AlertName=<alert>, Severity=<severity>, TimeGenerated, Details=<details>
| join kind=leftouter (risky_assets) on AssetId
| extend RiskScore = coalesce(RiskyFindings, 0)
| order by Severity desc, RiskScore desc, TimeGenerated desc

SOC outcome: the same runtime alert gets a different priority depending on posture risk.
Operational guidance (in real life)

1) Normalize severities early

If Xpanse is using DCR transforms (it is), normalize severity to a consistent enum (“Informational/Low/Medium/High/Critical”) to simplify analytics.

2) Deduplicate exposure findings

Attack surface tools can generate repeated findings. Use a dedup key (a hash of asset + finding type + port/service) and alert only on new or changed exposure (see the sketch at the end of this section).

3) Don’t “incident” everything

Treat CSPM findings as:
- Incidents only when: critical + reachable + targeted, OR tied to privileged activity.
- Tickets when: high risk but not active.
- Backlog when: medium/low with compensating controls.

4) Make SOAR “safe by default”

Automations should prefer reversible actions:
- Block IP (temporary)
- Add to watchlist
- Notify owners
- Open ticket with evidence bundle

…and only escalate to destructive actions after confidence thresholds are met.
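To close out, here is the dedup sketch promised in tip 2, with the same placeholder conventions as before (every <...> name is workspace-dependent). The idea is to hash the exposure tuple and surface only keys not seen in the prior lookback window:

let lookback = 14d;
let recent = 1d;
// keys we have already seen in the lookback window
let known = <XpanseTable>
| where TimeGenerated between (ago(lookback) .. ago(recent))
| extend DedupKey = hash_sha256(strcat(<asset_field>, "|", <finding_type_field>, "|", <port_field>))
| distinct DedupKey;
// today's findings, minus anything already known
<XpanseTable>
| where TimeGenerated > ago(recent)
| extend DedupKey = hash_sha256(strcat(<asset_field>, "|", <finding_type_field>, "|", <port_field>))
| join kind=leftanti (known) on DedupKey
| project TimeGenerated, <asset_field>, <finding_type_field>, <port_field>, Severity

A changed port or service produces a new hash, so "changed exposure" fires naturally alongside "new exposure."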
Build custom email security reports and dashboards with workbooks in Microsoft Sentinel

Security teams in both small and large organizations track key metrics to make critical security decisions and identify meaningful trends. Defender for Office 365 has rich, built-in reporting capabilities that provide insights into your security posture to support these needs. However, security teams sometimes require custom reporting solutions to create dedicated views, combine multiple data sources, and get additional insights. We previously shared an example of how you can leverage Power BI and the Microsoft Defender XDR Advanced Hunting APIs to build a custom dashboard, along with a template that you can customize and extend. In this blog, we will showcase how you can use workbooks in Microsoft Sentinel to build a custom dashboard for Defender for Office 365. We will also share an example workbook that is now available and can be customized based on your organization’s needs.

Why use workbooks in Microsoft Sentinel?

There are many potential benefits to using workbooks if you already use Microsoft Sentinel and already stream the hunting data tables for Defender for Office 365:

- You can store data for a longer period of time by configuring longer retention for the tables you use in your workbooks. For example, you can store Defender for Office 365 EmailEvents table data for 1 year and build visuals over a longer period of time.
- You can customize your visuals easily based on your organization’s needs.
- You can configure auto-refresh for the workbook to keep the data shown up to date.
- You can access ready-to-use workbook templates and customize them as needed.

Getting started

After you connect your data sources to Microsoft Sentinel, you can visualize and monitor the data using workbooks in Microsoft Sentinel. Ensure that Microsoft Defender XDR is installed in your Microsoft Sentinel instance, so you can use Defender for Office 365 data with a few simple steps. Detection and other Defender for Office 365 insights are already available as raw data in the Microsoft Defender XDR advanced hunting tables:

- EmailEvents - contains information about all emails
- EmailAttachmentInfo - contains information about attachments in emails
- EmailUrlInfo - contains information about URLs in emails
- EmailPostDeliveryEvents - contains information about Zero-hour auto purge (ZAP) and manual remediation events
- UrlClickEvents - contains information about Safe Links clicks from email messages, Microsoft Teams, and Office 365 apps in supported desktop, mobile, and web apps
- CloudAppEvents - can be used to visualize user-reported phish emails and admin submissions with Defender for Office 365

The Microsoft Defender XDR solution in Microsoft Sentinel provides a connector to stream the above data continuously into Microsoft Sentinel. Microsoft Sentinel then allows you to create custom workbooks across your data or use existing workbook templates, available with packaged solutions or as standalone content from the content hub. A simple example query over these tables is shown below.
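For instance, a query like the following could drive a workbook time-chart of daily phishing detections. This is a minimal sketch assuming the default EmailEvents schema streamed by the Microsoft Defender XDR connector; verify the ThreatTypes and DeliveryAction columns in your workspace before building on it:

EmailEvents
| where TimeGenerated > ago(30d)
| where ThreatTypes has "Phish"                     // phishing verdicts only
| summarize PhishEmails = count(),
    Blocked = countif(DeliveryAction == "Blocked")  // how many never reached the mailbox
    by bin(TimeGenerated, 1d)
| render timechart

Because the table is retained in your workspace, the same query works over whatever retention window you configure, which is exactly the longer-lookback benefit described above.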
How to access the workbook template

We are excited to share a new workbook template for Defender for Office 365 detection and data visualization, which is available in the Microsoft Sentinel Content hub. The workbook is part of the Microsoft Defender XDR solution. If you are already using our solution, this update is now available to you. If you are installing the Microsoft Defender XDR solution for the first time, this workbook will be available automatically after installation.

After the Microsoft Defender XDR solution is installed (or updated to the latest available version), simply navigate to the Workbooks area in Microsoft Sentinel and, on the Templates tab, select Microsoft Defender for Office 365 Detection and Insights. Using the “View Template” action loads the workbook.

What insights are available in the template?

The template has the following sections, with each section diving deep into a different area of email security and providing details and insights to security team members:

- Detection overview
- Email - Malware Detections
- Email - Phish Detections
- Email - Spam Detections
- Email - Business Email Compromise (BEC) Detections
- Email - Sender Authentication based Detections
- URL Detections and Clicks
- Email - Top Users/Senders
- Email - Detection Overrides
- False Negative/Positive Submissions
- File - Malware Detections (SharePoint, Teams and OneDrive)
- Post Delivery Detections and Admin Actions

Can I customize the workbook?

Yes, absolutely. Based on the email attributes in the Advanced Hunting schema, you can define more functions and visuals as needed. For example, you can use the DetectionMethods field to analyse detections caught by capabilities like spoof detection, Safe Attachments, and detection for emails containing URLs extracted from QR codes (see the sketch at the end of this post). You can also bring other data sources into Microsoft Sentinel as tables and use them when creating visuals in the workbook.

This sample workbook is a powerful showcase of how you can use the Defender for Office 365 raw detection data to visualize email security detection insights directly in Microsoft Sentinel. It enables organizations to easily create customized dashboards that help them analyse and track their threat landscape and respond quickly, based on their unique requirements.

Do you have questions or feedback about Microsoft Defender for Office 365? Engage with the community and Microsoft experts in the Defender for Office 365 forum.

More information

- Integrate Microsoft Defender XDR with Microsoft Sentinel.
- Learn more about Microsoft Sentinel workbooks.
- Microsoft Defender for Office 365 Detection Details Report - Updated Power BI template for Microsoft Sentinel and Log Analytics
- Learn more about Microsoft Defender XDR.
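As a closing illustration of the DetectionMethods customization mentioned above, the sketch below counts emails by top-level detection category. The JSON shape shown in the comment is an assumption, so inspect a few rows of DetectionMethods in your workspace before building a visual on it:

EmailEvents
| where TimeGenerated > ago(30d)
| where isnotempty(DetectionMethods)
| extend Methods = todynamic(DetectionMethods)   // assumed shape: {"Phish":["Spoof DMARC"],"Spam":["URL reputation"]}
| mv-expand ThreatCategory = bag_keys(Methods)   // one row per top-level category
| summarize Emails = count() by Category = tostring(ThreatCategory)
| order by Emails desc
| render barchart

Swapping bag_keys for the nested values would instead break the chart down by individual detection technology, such as Safe Attachments or spoof detection.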