Threat Intelligence & Identity Ecosystem Connectors
Microsoft Sentinel’s capability can be greatly enhanced by integrating third-party threat intelligence (TI) feeds (e.g. GreyNoise, Team Cymru) with identity and access logs (e.g. OneLogin, PingOne). This article provides a detailed dive into each connector, its data types, and best practices for enrichment and false-positive reduction. We cover how GreyNoise (including PureSignal/Scout), Team Cymru, OneLogin IAM, PingOne, and Keeper integrate with Sentinel – including available connectors, ingested schemas, and configuration. We then outline technical patterns for building TI-based lookup pipelines, scoring, and suppression rules to filter benign noise (e.g. GreyNoise’s known scanners) and enrich alerts with context from identity logs. We map attack chains (credential stuffing, lateral movement, account takeover) to Sentinel data, and propose KQL analytics rules and playbooks with MITRE ATT&CK mappings (e.g. T1110: Brute Force, T1595: Active Scanning). The report also includes guidance on deployment (ARM/Bicep examples), performance considerations for high-volume TI ingestion, and comparison tables of connector features. A mermaid flowchart illustrates the data flow from TI and identity sources into Sentinel analytics. All recommendations are drawn from official documentation and industry sources.

Threat Intel & Identity Connectors Overview

GreyNoise (TI Feed): GreyNoise provides “internet background noise” intelligence on IPs seen scanning or probing the Internet. The Sentinel GreyNoise Threat Intelligence connector (Azure Marketplace) pulls data via GreyNoise’s API into Sentinel’s ThreatIntelligenceIndicator table. It uses a daily Azure Function to fetch indicators (IP addresses and metadata such as classification, noise, last_seen) and injects them as STIX-format indicators (network IPs with provider “GreyNoise”). This feed can then be queried in KQL. Authentication requires a GreyNoise API key and a Sentinel workspace app with Contributor rights. GreyNoise’s goal is to help “filter out known opportunistic traffic” so analysts can focus on real threats. Official docs describe deploying the content pack and workbook template.

Ingested data: IP-based indicators (malicious vs. benign scans), classifications (noise, riot, etc.), organization names, last-seen dates. All fields from GreyNoise’s IP lookup (e.g. classification, last_seen) appear in ThreatIntelligenceIndicator as NetworkDestinationIP, IndicatorProvider="GreyNoise", and related fields. Query:

ThreatIntelligenceIndicator
| where IndicatorProvider == "GreyNoise"
| summarize arg_max(TimeGenerated, *) by NetworkDestinationIP

This yields the latest GreyNoise record per IP.

Team Cymru Scout (TI Context): Team Cymru’s PureSignal™ Scout is a TI enrichment platform. The Team Cymru Scout connector (via Azure Marketplace) ingests contextual data (not raw logs) about IPs, domains, and account usage into Sentinel custom tables. It runs via an Azure Function that, given IP or domain inputs, populates tables such as Cymru_Scout_IP_Data_* and Cymru_Scout_Domain_Data_CL. For example, an IP query yields multiple tables – Cymru_Scout_IP_Data_Foundation_CL, ..._OpenPorts_CL, ..._PDNS_CL, etc. – containing open ports, passive DNS history, X.509 certificate info, fingerprint data, and more. This feed requires a Team Cymru account (username/password) to access the Scout API. Data types: structured TI metadata by IP/domain. There is no native ThreatIntelligenceIndicator insertion; instead, analysts query these tables to enrich events (e.g. join on SourceIP).
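A minimal sketch of such an enrichment join is shown below. The Scout foundation table name comes from the connector description above, but the column names (IP_s, Tags_s) and the OneLogin fields are assumptions for illustration – adjust them to the schema you actually see in your workspace.

// Enrich OneLogin sign-in source IPs with Team Cymru Scout context (field names are illustrative)
let ScoutIPs =
    Cymru_Scout_IP_Data_Foundation_CL
    | summarize arg_max(TimeGenerated, *) by IP_s;    // keep the latest Scout record per IP
OneLoginEventsV2_CL
| where isnotempty(ClientIP_s)
| join kind=leftouter (ScoutIPs) on $left.ClientIP_s == $right.IP_s
| project TimeGenerated, UserName_s, ClientIP_s, ScoutTags = Tags_s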
The Sentinel TechCommunity notes that Scout “enriches alerts with real-time context on IPs, domains, and adversary infrastructure” and can help “reduce false positives”.

OneLogin IAM (Identity Logs): The OneLogin IAM solution (Microsoft Sentinel content pack) ingests OneLogin platform events and user info via OneLogin’s REST API. Using the Codeless Connector Framework, it pulls from OneLogin’s Events API and Users API, storing data in the custom tables OneLoginEventsV2_CL and OneLoginUsersV2_CL. Typical events include user sign-ins, MFA actions, app accesses, admin changes, etc. Prerequisites: create an OpenID Connect app in OneLogin (for a client ID/secret) and register it in Azure (Global Admin). The connector queries hourly (or on schedule), within OneLogin’s rate limit of 5,000 calls/hour. Data mapping: OneLoginEventsV2_CL (the CL suffix indicates a custom log) holds event records (time, user, IP, event type, result, etc.); OneLoginUsersV2_CL contains user account attributes. These can be joined or used in analytics. For example, a query might look for failed login events:

OneLoginEventsV2_CL
| where Event_type_s == "UserSessionStart" and Result_s == "Failed"

(Actual field names depend on the schema.)

PingOne (Identity Logs): The PingOne Audit connector ingests audit activity from the PingOne identity platform via its REST API. It creates the table PingOne_AuditActivitiesV2_CL. This includes administrator actions, user logins, console events, etc. You configure a PingOne API client (client ID/secret) and set up the Codeless Connector Framework. Logs are retrieved (with attention to PingOne’s license-based rate limits) and appended to the custom table. Analysts can query PingOne_AuditActivitiesV2_CL for events such as MFA failures or profile changes (see the example query later in this section).

Keeper (Password Vault Logs – optional): Keeper, a password management platform, can forward security events to Sentinel via Azure Monitor. As of the latest docs, logs are sent to a custom log table (commonly KeeperLogs_CL) using Azure Data Collection Rules. In Keeper’s guide, you register an Azure AD app (“KeeperLogging”) and configure Azure Monitor data collection; then in the Keeper Admin Console you specify the DCR endpoint. Keeper events (e.g. user logins, vault actions, admin changes) are ingested into the table named (e.g.) Custom-KeeperLogs_CL. Authentication uses the app’s client ID/secret and a monitor endpoint URL. This is a bulk ingest of records rather than a scheduled pull. Data ingested: custom Keeper events with fields like user, action, timestamp. Keeper’s integration is essentially via Azure Monitor (in the older Azure Sentinel approach).

Connector Configuration & Data Ingestion

Authentication and Rate Limits: Most connectors require API keys or OAuth credentials. GreyNoise and Team Cymru use single keys/credentials, with the Azure Function secured by a managed identity. OneLogin and PingOne use a client ID/secret and must respect their API limits (OneLogin ~5K calls/hour; PingOne depends on licensing). GreyNoise’s enterprise API allows bulk lookups; the community API is limited (10 lookups/day for free), so production integration requires an Enterprise plan.

Sentinel Tables: Data is inserted either into built-in tables or custom tables. GreyNoise feeds the ThreatIntelligenceIndicator table, populating fields like NetworkDestinationIP and ThreatSeverity (higher if classified “malicious”). Team Cymru’s Scout connector creates many Cymru_Scout_*_CL tables. OneLogin’s solution populates OneLoginEventsV2_CL and OneLoginUsersV2_CL.
PingOne yields PingOne_AuditActivitiesV2_CL. Keeper logs appear in a custom table (e.g. KeeperLogs_CL) as shown in Keeper’s guide. Note: Sentinel’s built-in identity tables (IdentityInfo, SigninLogs) are typically for Microsoft identities; third-party logs can be mapped to them via parsers or custom analytics rules, but by default they arrive in these custom tables.

Data Types & Schema:
- Threat Indicators: In ThreatIntelligenceIndicator, GreyNoise IPs appear as NetworkDestinationIP with associated fields (e.g. ThreatSeverity, IndicatorProvider="GreyNoise", ConfidenceScore, etc.). (Future STIX tables may be used after 2025.)
- Custom CL Logs: OneLogin events may include fields such as user_id_s, user_login_s, client_ip_s, event_time, etc. (The published parser issues indicate fields like app_name_s, role_id_d, etc.) PingOne logs include eventType, user, clientIP, result. Keeper logs contain Action, UserName, etc. These raw fields can be normalized in analytics rules or parsed by data transformations.
- Identity Info: Although not directly ingested, identity attributes from OneLogin/PingOne (e.g. user roles, group IDs) could be periodically fetched and synced to Sentinel (via custom logic) to populate IdentityInfo records, aiding user-centric hunts.

Configuration Steps:
- GreyNoise: In the Sentinel Content Hub, install the GreyNoise ThreatIntel solution. Enter your GreyNoise API key when prompted. The solution deploys an Azure Function (requires write access to Functions) and sets up an ingestion schedule. Verify the ThreatIntelligenceIndicator table is receiving GreyNoise entries.
- Team Cymru: From the Marketplace, install “Team Cymru Scout”. Provide Scout credentials. The solution creates an Azure Function app and defines a workflow to ingest or look up IPs/domains. (Often, analysts trigger lookups rather than scheduled ingestion, since Scout is lookup-based.) Ensure roles: the Function’s managed identity needs Sentinel Contributor rights.
- OneLogin: Use the Data Connectors UI. Authenticate OneLogin by creating a new Sentinel web API authentication (with OneLogin’s client ID/secret). Enable both “OneLogin Events” and “OneLogin Users”. No agent is needed. After setup, data flows into OneLoginEventsV2_CL.
- PingOne: Similarly, configure the PingOne connector. Use the PingOne administrative console to register an OAuth client. In Sentinel’s connector blade, enter the client ID/secret and specify the desired log types (Audit, possibly IDP logs). Confirm PingOne_AuditActivitiesV2_CL populates hourly.
- Keeper: Register an Azure AD app (“KeeperLogging”) and assign it Monitoring roles (Publisher/Contributor) on your workspace and data collection endpoint. Create an Azure Data Collection Rule (DCR) and table (e.g. KeeperLogs_CL). In Keeper’s Admin Console (Reporting & Alerts → Azure Monitor), enter the tenant ID, client ID/secret, and the DCR endpoint URL (format: https://<DCE>/dataCollectionRules/<DCR_ID>/streams/<table>?api-version=2023-01-01). Keeper will then push logs.

KQL Lookup: To enrich a Sentinel event with these feeds, you might write:

OneLoginEventsV2_CL
| where EventType == "UserLogin" and Result == "Success"
| extend UserIP = ClientIP_s
| join kind=inner (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and ThreatSeverity >= 3
    | project NetworkDestinationIP, Category
) on $left.UserIP == $right.NetworkDestinationIP

This joins OneLogin sign-ins with GreyNoise’s list of malicious scanners.
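The PingOne query referenced earlier can follow the same pattern. A minimal example looking for repeated MFA failures per user is sketched below; the field names (EventType_s, Outcome_s, User_s, ClientIP_s) are assumptions and should be adjusted to the schema the connector creates in your workspace.

// Repeated PingOne MFA failures per user in the last day (field names are illustrative)
PingOne_AuditActivitiesV2_CL
| where TimeGenerated > ago(1d)
| where EventType_s has "MFA" and Outcome_s == "Failed"
| summarize Failures = count(), SourceIPs = make_set(ClientIP_s, 10) by User_s, bin(TimeGenerated, 1h)
| where Failures >= 3
| order by Failures desc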
Enrichment & False-Positive Reduction

IOC Enrichment Pipelines: A robust TI pipeline in Sentinel often uses lookup tables and functions. For example, ingested TI (from GreyNoise or Team Cymru) can be stored in reference data or scheduled lookup tables to enrich incoming logs. Patterns include:
- Normalization: Normalize diverse feeds into common STIX schema fields (e.g. all IPs to NetworkDestinationIP, all domains to DomainName) so rules can treat them uniformly.
- Confidence Scoring: Assign a confidence score to each indicator (from the vendor or based on recency/frequency). For GreyNoise, for instance, you might use classification (e.g. “malicious” vs. “benign”) and history to score IP reputation. In Sentinel’s ThreatIntelligenceIndicator.ConfidenceScore field you can set values (higher for high-confidence IOCs, lower for noisy ones).
- TTL & Freshness: Some indicators (e.g. active C2 domains) expire, so setting a time-to-live is critical. Sentinel ingestion rules or parsers should use ExpirationDateTime or ValidUntil on indicators to avoid stale IOCs. For example, extend ValidUntil only if confidence is high.
- Conflict Resolution: When the same IOC comes from multiple sources (e.g. an IP in both GreyNoise and Team Cymru), you can either merge metadata or choose the highest confidence. One approach: use the highest threat severity from any source. Sentinel’s ThreatType tags (e.g. malicious-traffic) can accommodate multiple providers.

False-Positive Reduction Techniques:
- GreyNoise Noise Scoring: GreyNoise’s primary utility is filtering. If an IP is labeled noise=true (i.e. just scanning, not actively malicious), rules can deprioritize alerts involving that IP – e.g. suppress an alert if its source IP appears in GreyNoise as a benign scanner.
- Team Cymru Reputation: Use Scout data to gauge risk; e.g. if an IP’s open-port fingerprint or domain history shows no malicious tags, it may be low-risk. Conversely, a known hostile IP (e.g. seen in ransomware networks) should raise the alert level. Scout’s thousands of context tags help refine a binary IOC.
- Contextual Identity Signals: Leverage OneLogin/PingOne context to filter alerts. For instance, if a sign-in event is associated with a high-risk location (e.g. a new country) and the IP is a GreyNoise scan, flag it. If an IP is marked benign, drop or suppress. Correlate login failures: if a single IP causes many failures across multiple users, it might be credential stuffing (T1110) – but if that IP is a known benign scanner, consider it low priority.
- Thresholding & Suppression: Build analytic suppression rules. Example: only alert on >5 failed logins in 5 minutes from an IP, and only if that IP is not noise. Or ignore DNS queries to domains that TI flags as benign/whitelisted. Apply tag-based rules: some connectors allow tagging known internal assets or trusted scan ranges to avoid alerts. Use GreyNoise to suppress alerts:

SecurityEvent
| where EventID == 4625 and Account != "SYSTEM"
| join kind=leftanti (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and Classification == "benign"
    | project NetworkSourceIP
) on $left.IPAddress == $right.NetworkSourceIP

This rule filters out Windows 4625 login failures originating from GreyNoise-known benign scanners.

Identity Attack Chains & Detection Rules

Modern account attacks often involve sequential activities. By combining identity logs with TI, we can detect advanced patterns. Below are common chains and rule ideas:

Credential Stuffing (MITRE T1110): Often seen as many login failures followed by a success.
Detection: Look for multiple failed OneLogin/PingOne sign-ins for the same or different accounts from a single IP, then a success. Enrich with GreyNoise: if the source IP is in GreyNoise (indicating scanning), raise severity. Rule:

let SuspiciousIP = OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Failed"
| summarize CountFailed=count() by ClientIP_s
| where CountFailed > 5;
OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Success" and ClientIP_s in (SuspiciousIP | project ClientIP_s)
| join kind=inner (
    ThreatIntelligenceIndicator
    | where ThreatType == "ip"
    | extend GreyNoiseClass = tostring(Classification)
    | project IP=NetworkSourceIP, GreyNoiseClass
) on $left.ClientIP_s == $right.IP
| where GreyNoiseClass == "malicious"
| project TimeGenerated, Account_s, ClientIP_s, GreyNoiseClass

Tactics: Initial Access (T1110) – Severity: High.

Account Takeover / Impossible Travel (T1078): Sign-ins from unusual geographies or devices. Detection: Compare the user’s current sign-in location against a historical baseline. Use OneLogin/PingOne logs: if two logins by the same user occur in different countries with insufficient time to travel, trigger. Enrich: if the login IP is also known adversary infrastructure (Team Cymru PDNS, etc.), raise the alert. Rule:

PingOne_AuditActivitiesV2_CL
| where EventType_s == "UserLogin"
| extend loc = tostring(City_s) + ", " + tostring(Country_s)
| sort by TimeGenerated desc
| partition by User_s (
    where TimeGenerated < ago(24h) // check last day
    | summarize count(), min(TimeGenerated), max(TimeGenerated)
)
| where max_TimeGenerated - min_TimeGenerated < 1h and count_ > 1 and (range(loc) contains ",")
| project User_s, TimeGenerated, loc

(This pseudo-query checks for multiple locations in under 1 hour.) Tactics: Reconnaissance / Initial Access – Severity: Medium.

Lateral Movement (T1021): Use of an account on multiple systems/apps. Detection: Two or more distinct application/service authentications by the same user within a short time. Use OneLogin app-id fields or audit logs for access. If these are followed by suspicious network activity (e.g. contacting C2 infrastructure flagged by GreyNoise), escalate. Tactics: Lateral Movement – Severity: High.

Privilege Escalation (T1098): An admin account is changed or MFA factors are reset in OneLogin/PingOne, especially after an anomalous login. Detection: Monitor OneLogin admin events (“User updated”, “MFA enrolled/removed”). Cross-check the actor’s IP against threat feeds. Tactics: Persistence / Privilege Escalation – Severity: High.

Analytics Rules (KQL)

Below are six illustrative Sentinel analytics rules combining TI and identity logs. Each rule shows logic, tactics, severity, and MITRE IDs. (Adjust field names per your schemas and normalize CL tables as needed.)

Multiple Failed Logins from Malicious Scanner (T1110) – High severity. Detect credential stuffing by identifying >5 failed login attempts from the same IP, where that IP is classified as malicious by GreyNoise.
let BadIP = OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Failed"
| summarize attempts=count() by SourceIP_s
| where attempts >= 5;
OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Success" and SourceIP_s in (BadIP | project SourceIP_s)
| join (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and ThreatSeverity >= 4
    | project MaliciousIP=NetworkDestinationIP
) on $left.SourceIP_s == $right.MaliciousIP
| extend AttackFlow="CredentialStuffing", MITRE="T1110"
| project TimeGenerated, UserName_s, SourceIP_s, MaliciousIP

Logic: Correlate a failed-then-successful login from the same IP plus a GreyNoise “malicious” classification.

Impossible Travel / Anomalous Geo (T1078) – Medium severity. A user signs in from two distant locations within an hour.

// Get last two logins per user
let lastLogins = PingOne_AuditActivitiesV2_CL
| where EventType_s == "UserLogin" and Outcome_s == "Success"
| sort by TimeGenerated desc
| summarize first_place=arg_max(TimeGenerated, City_s, Country_s, SourceIP_s, TimeGenerated) by User_s;
let prevLogins = PingOne_AuditActivitiesV2_CL
| where EventType_s == "UserLogin" and Outcome_s == "Success"
| sort by TimeGenerated desc
| summarize last_place=arg_min(TimeGenerated, City_s, Country_s, SourceIP_s, TimeGenerated) by User_s;
lastLogins
| join kind=inner prevLogins on User_s
| extend dist=geo_distance_2points(first_place_City_s, first_place_Country_s, last_place_City_s, last_place_Country_s)
| where dist > 1000 and (first_place_TimeGenerated - last_place_TimeGenerated) < 1h
| project Time=first_place_TimeGenerated, User=User_s, From=last_place_Country_s, To=first_place_Country_s, MITRE="T1078"

Logic: Compute the geographic distance between the last two logins; flag if too far, too fast.

Suspicious Admin Change (T1098) – High severity. Detect a change to admin settings (such as a role assignment or MFA reset) via PingOne, from a high-risk IP (Team Cymru or GreyNoise) or after failed logins.

PingOne_AuditActivitiesV2_CL
| where EventType_s in ("UserMFAReset", "UserRoleChange") // example admin events
| extend ActorIP = tostring(InitiatingIP_s)
| join (
    ThreatIntelligenceIndicator
    | where ThreatSeverity >= 3
    | project BadIP=NetworkDestinationIP
) on $left.ActorIP == $right.BadIP
| extend MITRE="T1098"
| project TimeGenerated, ActorUser_s, Action=EventType_s, ActorIP

Logic: Raise an alert if an admin action originates from a known bad IP.

Malicious Domain Access (T1071) – Medium severity. Internal logs (e.g. DNS or web proxy) show access to a domain listed by Team Cymru Scout as C2 or reconnaissance infrastructure.

DeviceDnsEvents
| where QueryType == "A"
| join kind=inner (
    Cymru_Scout_Domain_Data_CL
    | where ThreatTag_s == "Command-and-Control"
    | project DomainName_s
) on $left.QueryText == $right.DomainName_s
| extend MITRE="T1071"
| project TimeGenerated, DeviceName, QueryText

Logic: Correlate internal DNS queries with Scout’s flagged C2 domains. (Requires that domain data is ingested or synced.)

Brute-Force Firewall Blocked IP (T1110) – Low to Medium severity. Firewall logs show an IP blocked for many attempts, and that IP is not noise per GreyNoise (i.e. it is a malicious scanner).
AzureDiagnostics
| where Category == "NetworkSecurityGroupFlowEvent" and msg_s contains "DIRECTION=Inbound" and Action_s == "Deny"
| summarize attemptCount=count() by IP = SourceIp_s, FlowTime=bin(TimeGenerated, 1h)
| where attemptCount > 50
| join kind=leftanti (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and Classification == "benign"
    | project NoiseIP=NetworkDestinationIP
) on $left.IP == $right.NoiseIP
| extend MITRE="T1110"
| project IP, attemptCount, FlowTime

Logic: Many inbound denies (possible brute force) from an IP not whitelisted by GreyNoise.

New Device Enrolled (T1078) – Low severity. A user enrolls a new device or location for MFA after an unusual login.

OneLoginEventsV2_CL
| where EventType == "NewDeviceEnrollment"
| join kind=inner (
    OneLoginEventsV2_CL
    | where EventType == "UserSessionStart" and Result == "Success"
    | top 1 by TimeGenerated asc // assume prior login
    | project User_s, loginTime=TimeGenerated, loginIP=ClientIP_s
) on User_s
| where loginIP != DeviceIP_s
| extend MITRE="T1078"
| project TimeGenerated, User_s, DeviceIP_s, loginIP

Logic: Flag when a new device is added after an anomalous login (strong evidence of account compromise).

Note: The above rules are illustrative. Tune threshold values (e.g. attempt counts) to your environment. Map the event fields (EventType, Result, etc.) to your actual schema. Use severity mapping in the rule configs as indicated and tag with MITRE IDs for context.

TI-Driven Playbooks and Automation

Automated response can amplify TI. Patterns include:
- IOC Blocking: On alert (e.g. a suspicious IP login), an automation runbook can call Azure Firewall, Azure Defender, or external firewall APIs to block the offending IP. For instance, a Logic App could trigger on the analytics alert, use the TI feed IP, and call the Azure Firewall PowerShell cmdlets to add a deny rule.
- Enrichment Workflow: After an alert triggers, an Azure Logic App playbook can enrich the incident by querying TI APIs. E.g., given an IP from the alert, call the GreyNoise API or Team Cymru Scout API in real time (via an HTTP action), add the classification to the incident details, and tag the incident accordingly (e.g. GreyNoiseStatus: malicious). This adds context for the analyst.
- Alert Suppression: Implement playbook-driven suppression. For example, an alert triggered by an external IP can invoke a playbook that checks GreyNoise; if the IP is benign, the playbook can auto-close the alert or mark it as a false positive, reducing analyst load.
- Automated TI Feed Updates: Periodically fetch open-source or commercial TI and use a playbook to push new indicators into Sentinel’s TI store via the Graph API.
- Incident Enrichment: On incident creation, a playbook could query OneLogin/PingOne for additional user details (such as department or location via their APIs) and add them as a note in the incident.

Performance, Scalability & Costs

TI feeds and identity logs can be high-volume. Key considerations:
- Data Ingestion Costs: Every log and TI indicator ingested into Sentinel is billable by the GB. Bulk TI indicator ingestion (like GreyNoise pulling thousands of IPs/day) can add storage costs. Use Sentinel’s Data Collection Rules (DCRs) to apply ingestion-time filters (e.g. only store indicators above a confidence threshold) to reduce volume. The GreyNoise feed is typically modest (since it’s daily, perhaps thousands of IPs). Identity log volume (OneLogin/PingOne) depends on org size – it could be megabytes per day. Use Sentinel ingestion-time and analytics-rule filters to drop low-value logs.
- Query Performance: Custom log tables (OneLogin, PingOne, KeeperLogs_CL) can grow large. Periodically archive old data (e.g. export data older than 90 days to storage, then purge). Use materialized views or scheduled summary tables for heavy queries, e.g. pre-aggregate hourly login counts (a sketch appears after the data flow diagram below). For threat indicator tables, leverage the built-in indices on IndicatorId and network IP fields for fast joins. Use project-away _* to remove metadata columns from large join queries.
- Retention & Storage: Configure retention per table. If historical TI is less needed, set a shorter retention. Use Azure Monitor’s tiering/archive for seldom-used data. For large TI volumes (e.g. feeding multiple TIPs), consider using the Sentinel data lake (or connecting Log Analytics to ADLS Gen2) to offload raw ingest cheaply.
- Scale-Out Architecture: For very large environments, use multiple Sentinel workspaces (e.g. regional) and aggregate logs via Azure Lighthouse or Sentinel Fusion. TI feeds can be shared: one workspace collects TI, then distributes it to others via Sentinel’s TI management (feeds can be published and shared across workspaces).
- Connector Limits: API rate limits dictate update frequency. Schedule connectors accordingly (e.g. daily for TI, hourly for identity events). Avoid hourly pulls of already static data (a users list can be daily). For OneLogin/PingOne, use incremental tokens or webhooks if possible to reduce load.
- Monitoring Health: Use Sentinel’s Log Analytics and Monitor metrics to track ingestion volume and connector errors. For example, monitor the Functions running GreyNoise/Scout for failures or throttling.

Deployment Checklist & Guide
- Prepare Sentinel Workspace: Ensure a Log Analytics workspace with Sentinel enabled. Record the workspace ID and region.
- Register Applications: In Azure AD, create and note any service principal needed for functions or connectors (e.g. a Sentinel-managed identity for Functions). In each vendor portal, register API apps and credentials (OneLogin OIDC app, PingOne API client, Keeper AD app).
- Network & Security: If needed, configure firewall rules to allow outbound access to vendor APIs.
- Install Connectors: In the Sentinel Content Hub or Marketplace, install the solutions for GreyNoise TI, Team Cymru Scout, OneLogin IAM, and PingOne. Follow each wizard to input credentials. Verify the “Data Types” (Logs, Alerts, etc.) are enabled.
- Create Tables & Parsers (if manual): For Keeper or unsupported logs, manually create custom tables (via DCR in the Azure portal). Import JSON to define fields as shown in Keeper’s docs.
- Test Data Flow: After each setup, wait 1–24 hours and run a simple query on the destination table (e.g. OneLoginEventsV2_CL | take 5) to confirm ingestion.
- Deploy Ingestion Rules: Use Sentinel threat intelligence ingestion rules to fine-tune feeds (e.g. mark high-confidence feeds to extend expiration). Optionally tag/whitelist known-good indicators.
- Configure Analytics: Enable or create rules using the KQL above. Place them in the correct threat hunting or incident rule categories (Credential Access, Lateral Movement, etc.). Assign appropriate alert severity.
- Set up Playbooks: For automated actions (alert enrichment, IOC blocking), create Logic App playbooks. Test with mock alerts (dry run) to ensure correct API calls.
- Tuning & Baseline: After initial alerts, tune queries (thresholds, whitelists) to reduce noise. Maintain suppression lists (e.g. internal pentest IPs). Use the MITRE mapping in rule details for clarity.
- Documentation & Training: Document field mappings (e.g.
OneLoginEvents fields), and train SOC staff on new TI-enriched alert fields.

Connectors Comparison

| Connector | Data Sources | Sentinel Tables | Update Freq. | Auth Method | Key Fields Enriched | Limits/Cost | Pros/Cons |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GreyNoise | IP intelligence (scanners) | ThreatIntelligenceIndicator | Daily (scheduled pull) | API key | IP classification, noise | API key required; paid license for large usage | Pros: filters benign scans, broad scan visibility. Con: only IP-based (no domain/file). |
| Team Cymru Scout | Global IP/domain telemetry | Cymru_Scout_*_CL (custom tables) | On-demand or daily | Account credentials | Detailed IP/domain context (ports, PDNS, ASN, etc.) | Requires Team Cymru subscription; potentially high feed cost | Pros: rich context (open ports, DNS, certs); great for IOC enrichment. Con: complex setup; data in custom tables only. |
| OneLogin IAM | OneLogin user/auth logs | OneLoginEventsV2_CL, OneLoginUsersV2_CL | Polls hourly | OAuth2 (client ID/secret) | User, app, IP, event type (login, MFA, etc.) | OneLogin API: 5K calls/hour; data volume moderate | Pros: direct insight into cloud identity use; built-in parser available. Con: limited to the OneLogin environment. |
| PingOne Audit | PingOne audit logs | PingOne_AuditActivitiesV2_CL | Polls hourly | OAuth2 (client ID/secret) | User actions, admin events, MFA logs | Rate limited by Ping license; data volume moderate | Pros: captures critical identity events; widely used product. Con: requires PingOne Advanced license for audit logs. |
| Keeper (custom) | Keeper security events | KeeperLogs_CL (custom) | Push (continuous) | OAuth2 (client ID/secret) + Azure DCR | Vault logins, record accesses, admin changes | None (push model); storage costs | Pros: visibility into password vault activity (often a blind spot). Con: more manual setup; custom logs not parsed by default. |

Data Flow Diagram

This flowchart shows GreyNoise (GN) feeding the Threat Intelligence table, Team Cymru feeding enrichment tables, and identity sources pushing logs. All data converges into Sentinel, where enrichment lookups inform analytics and automated responses.
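The pre-aggregation referenced in the Query Performance note above can be as simple as the sketch below; the table and field names mirror the earlier OneLogin examples and are assumptions, so adjust them to your schema. The aggregated output could then be written to a summary table on a schedule (for example via a Sentinel summary rule or a scheduled Logic App) so heavier correlation queries read the compact aggregate instead of the raw table.

// Hourly pre-aggregation of failed OneLogin sign-ins (field names are illustrative)
OneLoginEventsV2_CL
| where TimeGenerated > ago(1d)
| where Event_type_s == "UserSessionStart" and Result_s == "Failed"
| summarize FailedLogins = count(), DistinctUsers = dcount(User_s) by ClientIP_s, Hour = bin(TimeGenerated, 1h)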
Observed Automation Discrepancies

Hi Team ... I want to know the logic behind the Defender XDR automation engine. How does it work? I have observed Defender XDR automation engine behavior contrary to expectations: although incident and automation handling should be identical in both environments, discrepancies were observed. Specifically, incidents with high-severity alerts were automatically closed by Defender XDR's automation engine before reaching the SOC for review, raising concerns among clients and colleagues. Automation rules are clearly logged in the activity log, whereas actions performed by Microsoft Defender XDR are less transparent. A high-severity alert related to a phishing incident was closed by Defender XDR's automation, resulting in the associated incident being closed and removed from SOC review. The automation was not triggered by our own rules but by Microsoft's Defender XDR, and we are seeking clarification on the underlying logic.
SAP & Business-Critical App Security Connectors

I validated what it takes to make SAP and SAP-adjacent security signals operational in a SOC: reliable ingestion, stable schemas, and detections that survive latency and schema drift. I focus on four integrations into Microsoft Sentinel: SAP Enterprise Threat Detection (ETD) cloud edition (SAPETDAlerts_CL, SAPETDInvestigations_CL), SAP S/4HANA Cloud Public Edition agentless audit ingestion (ABAPAuditLog), Onapsis Defend (Onapsis_Defend_CL), and SecurityBridge (also ABAPAuditLog). Because vendor API specifics for the ETD Retrieval API / Audit Retrieval API aren’t publicly detailed in the accessible primary sources I could retrieve, I explicitly label pagination/rate/time-window behaviors as unspecified where appropriate.

Connector architectures and deployment patterns

For SAP-centric telemetry I separate two planes:
- First, SAP application telemetry that lands in SAP-native tables, especially ABAPAuditLog, ABAPChangeDocsLog, ABAPUserDetails, and ABAPAuthorizationDetails. These tables are the foundation for ABAP-layer monitoring and are documented with typed columns in the Azure Monitor Logs reference.
- Second, external “security product” telemetry (ETD alerts, Onapsis findings). These land in custom tables (*_CL) and typically require a SOC-owned normalization layer to avoid brittle detections.

Within Microsoft’s SAP solution itself, there are two deployment models: agentless and the containerized connector agent. The agentless connector uses SAP Cloud Connector and SAP Integration Suite to pull logs, and Microsoft documents it as the recommended approach; the containerized agent is being deprecated and will be disabled on September 14, 2026.

On the “implementation technology” axis, Sentinel integrations generally show up as:
- Codeless Connector Framework (CCF) pollers/pushers (SaaS-managed ingestion definitions with DCR support).
- Function/Logic App custom pipelines using the Logs Ingestion API when you need custom polling, enrichment, or a vendor endpoint that isn’t modeled in CCF.

In my view, the ETD and S/4HANA Cloud connectors are “agentless” from the Sentinel side (API credentials only), while the Onapsis Defend and SecurityBridge connectors behave like push pipelines because Microsoft requires an Entra app + DCR permissions (the typical Logs Ingestion API pattern).

Authentication and secrets handling

Microsoft documents the required credentials per connector:
- The ETD cloud connector requires a Client ID + Client Secret for the ETD Retrieval API (token mechanics unspecified).
- The S/4HANA Cloud Public Edition connector requires a Client ID + Client Secret for the Audit Retrieval API (token mechanics unspecified), and Microsoft notes “alternative authentication mechanisms” exist (details in the linked repo are unspecified in accessible sources).
- The Onapsis Defend and SecurityBridge connectors require a Microsoft Entra ID app registration and Azure permission to assign Monitoring Metrics Publisher on DCRs. This maps directly to the Logs Ingestion API guidance, where a service principal is granted DCR access via that role (or the Microsoft.Insights/Telemetry/Write data action).

For production, I treat these as “SOC platform secrets”:
- Store client secrets/certificates in Key Vault when you own the pipeline (Function/Logic App); rotate on an operational schedule; alert on auth failures and sudden ingestion drops.
- For vendor-managed ingestion (Onapsis/SecurityBridge), I still require: documented ownership of the Entra app, explicit RBAC scope for the DCR, and change control for credential rotation, because a rotated secret is effectively a data outage.

API behaviors and ingestion reliability

For the ETD Retrieval API and Audit Retrieval API, pagination/rate limits/time windows are unspecified in the accessible vendor documentation I could retrieve. I therefore design ingestion and detections assuming non-ideal API behavior: late-arriving events, cursor/page limitations, and throttling. CCF’s RestApiPoller model supports explicit retry policy, windowing, and multiple paging strategies, so if/when you can obtain vendor API semantics, you can encode them declaratively (rather than writing fragile code).

For the SAP solution’s telemetry plane, Microsoft provides strong operational cues: agentless collection flows through Integration Suite, and troubleshooting typically happens in the Integration Suite message log; this is where I validate delivery failures before debugging Sentinel-side parsers.

For scheduled detections, I always account for ingestion delay explicitly. Microsoft’s guidance is to widen the event lookback by the expected delay and then constrain on ingestion_time() to prevent duplicates from overlap.

Schema, DCR transformations, and normalization layer

Connector attribute comparison:

| Connector | Auth method | Sentinel tables | Default polling | Backfill | Pagination | Rate limits |
| --- | --- | --- | --- | --- | --- | --- |
| SAP ETD (cloud) | Client ID + Secret (ETD Retrieval API) | SAPETDAlerts_CL, SAPETDInvestigations_CL | unspecified | unspecified | unspecified | unspecified |
| SAP S/4HANA Cloud (agentless) | Client ID + Secret (Audit Retrieval API); alt auth referenced | ABAPAuditLog | unspecified | unspecified | unspecified | unspecified |
| Onapsis Defend | Entra app + DCR permission (Monitoring Metrics Publisher) | Onapsis_Defend_CL | n/a (push pattern) | unspecified | n/a | unspecified |
| SecurityBridge | Entra app + DCR permission (Monitoring Metrics Publisher) | ABAPAuditLog | n/a (push pattern) | unspecified | n/a | unspecified |

Ingestion-time DCR transformations

Sentinel supports ingestion-time transformations through DCRs to filter, enrich, and mask data before it’s stored. Example: I remove low-signal audit noise and mask email identifiers in ABAPAuditLog:

source
| where isnotempty(TransactionCode) and isnotempty(User)
| where TransactionCode !in ("SM21","ST22") // example noise; tune per tenant
| extend Email = iif(Email has "@", strcat(substring(Email,0,2),"***@", tostring(split(Email,"@")[1])), Email)

Normalization functions

Microsoft explicitly recommends using the SAP solution functions instead of raw tables because the underlying infrastructure can change without breaking detections. I follow the same pattern for the ETD/Onapsis custom tables: I publish SOC-owned functions as a schema contract.
.create-or-alter function with (folder="SOC/SAP") Normalize_ABAPAudit() {
    ABAPAuditLog
    | project TimeGenerated, SystemId, ClientId, User, TransactionCode, TerminalIpV6, MessageId, MessageClass, MessageText, AlertSeverityText, UpdatedOn
}

.create-or-alter function with (folder="SOC/SAP") Normalize_ETDAlerts() {
    SAPETDAlerts_CL
    | extend
        AlertId = tostring(coalesce(column_ifexists("AlertId",""), column_ifexists("id",""))),
        Severity = tostring(coalesce(column_ifexists("Severity",""), column_ifexists("severity",""))),
        SapUser = tostring(coalesce(column_ifexists("SAP_User",""), column_ifexists("User",""), column_ifexists("user","")))
    | project TimeGenerated, AlertId, Severity, SapUser, *
}

.create-or-alter function with (folder="SOC/SAP") Normalize_Onapsis() {
    Onapsis_Defend_CL
    | extend
        FindingId = tostring(coalesce(column_ifexists("FindingId",""), column_ifexists("id",""))),
        Severity = tostring(coalesce(column_ifexists("Severity",""), column_ifexists("severity",""))),
        SapUser = tostring(coalesce(column_ifexists("SAP_User",""), column_ifexists("user","")))
    | project TimeGenerated, FindingId, Severity, SapUser, *
}

Health/lag monitoring and anti-gap

I monitor both connector health and ingestion delay. SentinelHealth is the native health table, and Microsoft provides a health workbook and a schema reference for the fields.

let lookback=24h;
union isfuzzy=true
    (ABAPAuditLog | extend T="ABAPAuditLog"),
    (SAPETDAlerts_CL | extend T="SAPETDAlerts_CL"),
    (Onapsis_Defend_CL | extend T="Onapsis_Defend_CL")
| where TimeGenerated > ago(lookback)
| summarize LastEvent=max(TimeGenerated),
            P95DelaySec=percentile(datetime_diff("second", ingestion_time(), TimeGenerated), 95),
            Events=count() by T

Anti-gap scheduled-rule frame (Microsoft pattern):

let ingestion_delay=10m;
let rule_lookback=5m;
ABAPAuditLog
| where TimeGenerated >= ago(ingestion_delay + rule_lookback)
| where ingestion_time() > ago(rule_lookback)

SOC detections for ABAP privilege abuse, fraud/insider behavior, and audit readiness

Privileged ABAP transaction monitoring: ABAPAuditLog includes TransactionCode, User, SystemId, and terminal/IP fields, so I start with a curated high-risk tcode set and then add baselines.

let PrivTCodes=dynamic(["SU01","PFCG","SM59","RZ10","SM49","SE37","SE16","SE16N"]);
Normalize_ABAPAudit()
| where TransactionCode in (PrivTCodes)
| summarize Actions=count(), Ips=make_set(TerminalIpV6,5) by SystemId, User, TransactionCode, bin(TimeGenerated, 1h)
| where Actions >= 3

Fraud/insider scenario: sensitive object change near privileged audit activity. ABAPChangeDocsLog exposes ObjectClass, ObjectId, and change types; I correlate sensitive object changes to privileged transactions in a tight window.

let w=10m;
let Sensitive=dynamic(["BELEG","BPAR","PFCG","IDENTITY"]);
ABAPChangeDocsLog
| where ObjectClass in (Sensitive)
| project ChangeTime=TimeGenerated, SystemId, User=tostring(column_ifexists("User","")), ObjectClass, ObjectId, TypeOfChange=tostring(column_ifexists("ItemTypeOfChange",""))
| join kind=innerunique (
    Normalize_ABAPAudit()
    | project AuditTime=TimeGenerated, SystemId, User, TransactionCode
) on SystemId, User
| where AuditTime between (ChangeTime-w .. ChangeTime+w)
| project ChangeTime, AuditTime, SystemId, User, ObjectClass, ObjectId, TransactionCode, TypeOfChange

Audit-ready pipeline: monitoring continuity and configuration touchpoints. I treat audit logging itself as a monitored control. A simple SOC-safe control is “volume drop” by system; it’s vendor-agnostic and catches pipeline breaks and deliberate suppression.
Normalize_ABAPAudit()
| summarize PerHour=count() by SystemId, bin(TimeGenerated, 1h)
| summarize Avg=avg(PerHour), Latest=arg_max(TimeGenerated, PerHour) by SystemId
| where Latest_PerHour < (Avg * 0.2)

Where Onapsis/ETD are present, I increase fidelity by requiring “privileged ABAP activity” plus an external SAP-security product finding (field mappings are tenant-specific; normalize first):

let win=1h;
Normalize_ABAPAudit()
| where TransactionCode in ("SU01","PFCG","SM59","SE16N")
| join kind=leftouter (Normalize_Onapsis()) on $left.User == $right.SapUser
| where isempty(FindingId) == false and TimeGenerated1 between (TimeGenerated .. TimeGenerated+win)
| project TimeGenerated, SystemId, User, TransactionCode, FindingId, OnapsisSeverity=Severity

Production validation, troubleshooting, and runbook

For acceptance, I validate in this order: table creation, freshness/lag percentiles, connector health state, and a cross-check of event counts against the upstream system for the same UTC window (where available). Connector health monitoring is built around SentinelHealth plus the Data collection health workbook. For SAP agentless ingestion, Microsoft states most troubleshooting happens in Integration Suite message logs – this is where I triage authentication/networking failures before tuning KQL. For Onapsis/SecurityBridge-style ingestion, I validate Entra app auth, the DCR permission assignment (Monitoring Metrics Publisher), and a minimal ingestion test payload using the Logs Ingestion API tutorial flow.

Operational runbook items I treat as non-optional: health alerts on connector failure and freshness drift; scheduled-rule anti-gap logic; playbooks that capture evidence bundles (an ABAPAuditLog slice plus user context from ABAPUserDetails/ABAPAuthorizationDetails – see the sketch after the checklist below); DCR filters to reduce noise and cost; and change control for normalization functions and watchlists.

SOC “definition of done” checklist (short):
1) Tables present and steadily ingesting;
2) P95 ingestion delay measured and rules using the anti-gap pattern;
3) SentinelHealth enabled with alerts;
4) SOC-owned normalization functions deployed;
5) at least one privileged-tcode rule, one change-correlation rule, and one audit-continuity rule in production.

Mermaid ingestion flow: (diagram not reproduced here)
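The evidence-bundle query referenced in the runbook items above might look like the sketch below. The ABAPAuditLog columns come from the normalization function earlier in this post; the ABAPUserDetails join key (User) and the playbook-supplied placeholder value are assumptions, so verify them against your workspace schema.

// Evidence bundle for a flagged SAP user: recent audit slice plus latest user context (column names may differ per workspace)
let suspectUser = "LOCKED_USER";          // placeholder value supplied by the playbook
ABAPAuditLog
| where TimeGenerated > ago(24h) and User == suspectUser
| project TimeGenerated, SystemId, ClientId, User, TransactionCode, MessageText, TerminalIpV6
| join kind=leftouter (
    ABAPUserDetails
    | summarize arg_max(TimeGenerated, *) by User     // keep the latest known user record
) on User
| order by TimeGenerated desc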
Invalidating kerberos tickets via XDR?

Since we get alerts every now and then about suspected Pass-the-Ticket incidents, I want to know if there's a way to make a user's Kerberos ticket invalid. Like the "Revoke Session" option we have in Entra ID, is there anything similar we can do in XDR?
Integrating Proofpoint and Mimecast Email Security with Microsoft Sentinel

Microsoft Sentinel can ingest rich email security telemetry from Proofpoint and Mimecast to power advanced phishing detection. The Proofpoint On Demand (POD) Email Security and Proofpoint Targeted Attack Protection (TAP) connectors pull threat logs (quarantines, spam, phishing attempts) and user click data into Sentinel. Similarly, the Mimecast Secure Email Gateway connector ingests detailed mail flow and targeted-threat logs (attachment/URL scans, impersonation events). These integrations use Azure-hosted ingestion (via Logic Apps or Azure Functions) and the new Codeless Connector framework to call vendor APIs on a schedule. The result is a consolidated dataset in Sentinel’s Log Analytics, enabling correlated alerting and hunting across email, identity, and endpoint signals.

Figure: Phishing emails are processed by Mimecast’s gateway and Proofpoint POD/TAP services. Security logs (delivery/quarantine events, malicious attachments/links, user clicks) flow into Microsoft Sentinel. In Sentinel, these mail signals are correlated with identity (Azure AD), endpoint (Defender) and network telemetry for end-to-end phishing detection.

Proofpoint POD (Email Protection) Connector

The Proofpoint POD connector ingests core email protection logs. It creates two tables, ProofpointPODMailLog_CL and ProofpointPODMessage_CL. These logs include per-message metadata (senders, recipients, subject, message size, timestamps), threat scores (spamScore, phishScore, malwareScore, impostorScore), and attachment details (number of attachments, names, hash values and sandbox verdicts). Quarantine actions are recorded (quarantine folder/rule), and malicious indicators (URL or file hash) and campaign IDs are tagged in the threatsInfoMap field. For example, each ProofpointPODMessage_CL record may carry a sender_s (sender email domain hashed), a recipient list, a subject, and any detected threat type (Phish/Malware/Spam/Impostor) with the associated threat hash or URL.

Deployment: Proofpoint POD uses Sentinel’s codeless connector (an Azure Function behind the scenes). You must provide Proofpoint API credentials (Cluster ID and API token) in the connector UI. The connector periodically calls the Proofpoint SIEM API to fetch new log events (typically in 1–2 hour batches). The data lands in the above tables. (Older custom Logic App approaches similarly parse JSON output from the /v2/siem/messages endpoints.)

Proofpoint TAP (Targeted Attack Protection) Connector

Proofpoint TAP provides user-click and message-delivery events. Its connector creates four tables: ProofPointTAPMessagesDeliveredV2_CL, ProofPointTAPMessagesBlockedV2_CL, ProofPointTAPClicksPermittedV2_CL, and ProofPointTAPClicksBlockedV2_CL. The message tables report emails with detected threats (URL or attachment defense) that were delivered or blocked by TAP. They include the same fields as POD (message GUID, sender, recipients, subject, threat campaign ID, scores, attachment info). The click tables log when users click on URLs: each record has the URL, click timestamp (clickTime), the user’s IP (clickIP), user agent, the message GUID, and the threat ID/category. These fields allow you to see who clicked which malicious link and when. As the connector description notes, these logs give “visibility into Message and Click events in Microsoft Sentinel” for hunting.

Deployment: The TAP connector also uses the codeless framework. You supply a TAP API service principal and secret (Proofpoint SIEM API credentials) in the Sentinel content connector.
The function app calls TAP’s /v2/siem/clicks/blocked, /permitted, /messages/blocked, and /delivered endpoints. The Proofpoint SIEM API limits queries to 1-hour windows and 7-day history, with no paging (all events in the interval are returned). (A Logic App approach could also be used, as shown in the Tech Community blog: one HTTP GET per event type and a JSON Parse before sending to Log Analytics.)

Mimecast Secure Email Gateway Connector

The Mimecast connector ingests the Secure Email Gateway (SEG) logs and targeted-threat protection (TTP) logs. Inbound, outbound and internal mail events from the Mimecast MTA (receipt, processing, delivery stages) are pulled via the API. Typical fields include the unique message ID (aCode), sender, recipient, subject, attachment count/names, and the policy actions or holds (e.g. spam quarantine). For example, the Mimecast “Process” log shows AttCnt, AttNames, and whether the message was held (Hld) for review. Delivery logs include success/failure and TLS details. In addition, Mimecast TTP logs are collected: URL Protect logs (when a user clicks a blocked URL) include the clicked URL (url), category (urlCategory), sender/recipient, and block reason. Impersonation Protect logs capture spoofing detections (e.g. if an internal name is impersonated), with fields like Sender, Recipient, Definition and Action (hold/quarantine). Attachment Protect logs record malicious file detections (filename, hash, threat type). (An example query over these logs follows the Proofpoint KQL examples below.)

Deployment: Like Proofpoint, Mimecast’s connector uses Azure Functions via the Sentinel Content Hub. You install the Mimecast solution, open the connector page, then enter Azure app credentials and Mimecast API keys (API Application ID/Key and Access/Secret for the service account). As shown in the deployment guide, you must provide the Azure Subscription, Resource Group, Log Analytics Workspace and the Azure Client (App) ID, Tenant ID and Object ID of the admin performing the setup. On the Mimecast side, you supply the API Base URL (regional), App ID/Secret and user Access/Secret. The connector creates a Function App that polls Mimecast’s SIEM APIs on a cron schedule (default every 30 minutes). You can optionally specify a start date for backfilling up to 7 days of logs. The default tables are MimecastSIEM_CL (for email flow logs) and MimecastDLP_CL (for DLP/TTP events), though custom names can be set.

Ingestion Considerations
- Data Latency: All these connectors are pull-based and typically run on a schedule (often 30–60 minutes). For example, the Proofpoint POD docs note hourly log increments, and Mimecast logs are aggregated every 30 minutes. Expect a delay of up to an hour or more from event occurrence to Sentinel ingestion.
- Schema Nuances: The APIs often return nested arrays and optional fields. For instance, the Proofpoint blog warns that some JSON fields can be null or vary in type, so the parse schema should account for all possibilities. Similarly, Mimecast logs come in pipe-delimited or JSON format, with values sometimes empty (e.g. no attachments). In KQL, use tostring() or parse_json() on the raw _CL columns, and mv-expand on any multivalue fields (like message parts or threat lists).
- Table Names: Use the connector’s tables as listed. For Proofpoint: ProofpointPODMailLog_CL and ProofpointPODMessage_CL; for TAP: ProofPointTAPMessagesDeliveredV2_CL, ProofPointTAPMessagesBlockedV2_CL, ProofPointTAPClicksPermittedV2_CL, ProofPointTAPClicksBlockedV2_CL. For Mimecast SEG/TTP: MimecastSIEM_CL (SEG logs) and MimecastDLP_CL (TTP logs).
- API Behavior: The Proofpoint TAP API has no paging. Be aware of time zones (Proofpoint uses UTC) and use the Sentinel ingestion TimeGenerated or event timestamp fields for binning.

Detection Engineering and Correlation

To detect phishing effectively, we correlate these email logs with identity, endpoint and intel data:

Identity (Azure AD): Mail logs contain recipient addresses and (hashed) sender user parts. A common tactic is to correlate SMTP recipients or sender domains with Azure AD user records. For example, join TAP clicks by recipient to the user’s UPN. The Proofpoint logs also include the clicker’s IP (clickIP); we can match that to Azure AD sign-in logs or VPN logs to find which device/location clicked a malicious link. Likewise, anomalous Azure AD sign-ins (impossible travel, MFA failure) after a suspicious email strengthen the case.

Endpoints (Defender): Once a user clicks a bad link or opens a malicious attachment (captured in TAP or Mimecast logs), watch for follow-on behaviors. For instance, use Sentinel’s DeviceSecurityEvents or DeviceProcessEvents to see if that user’s machine launched unusual processes. The threatID or URL hash from email events can be looked up in Defender’s file data. Correlate by username (if available) or IP: if the email log shows a link click from IP X, see if any endpoint alerts or logon events occurred from X around the same time. As the Mimecast integration touts, this enables “correlation across Mimecast events, cloud, endpoint, and network data”.

Threat Intelligence: Use Sentinel’s ThreatIntelligenceIndicator tables or Microsoft’s TI feeds to tag known bad URLs/domains in the email logs. For example, join ProofPointTAPClicksBlockedV2_CL on the clicked url against ThreatIntelligenceIndicator (type=URL) to automatically flag hits. Proofpoint’s logs already classify threats (malware/phish) and provide a threatID; one can enrich that with external intel (e.g. check whether the hash appears in TI feeds). Mimecast’s URL logs include a urlCategory field, which can be mapped to known malicious categories. Automated playbooks can also pull intel: e.g. use Sentinel’s TI REST API or Azure Sentinel watchlists containing phishing domains to annotate events.

In summary, a robust detection strategy might look like: (1) identify malicious email events (high phish scores, quarantines, URL clicks); (2) correlate these events by user with Azure AD logs (did the user log in from a new IP after a phish click?); (3) correlate with endpoint alerts (Defender found malware on that device); (4) augment with threat intelligence lookups on URLs and attachments from the email logs. By linking the Proofpoint/Mimecast signals to identity and endpoint events, one can detect the full attack chain from email compromise to endpoint breach.

KQL Query Examples

Here are representative Kusto queries for common phishing scenarios (adapt table/field names as needed):

Malicious URL Click Detection: Identify users who clicked known-malicious URLs. For example, join TAP click logs to TI indicators:
let TI = ThreatIntelligenceIndicator
| where Active == true and isnotempty(Url)
| project TIUrl = Url, Description;
ProofPointTAPClicksPermittedV2_CL
| where url_s != ""
| project ClickTime=TimeGenerated, Recipient=recipient_s, URL=url_s, SenderIP=senderIP_s
| join kind=inner TI on $left.URL == $right.TIUrl
| project ClickTime, Recipient, URL, Description

This flags any permitted click where the URL matches a known threat indicator. Alternatively, aggregate by clicked domain:

ProofPointTAPClicksPermittedV2_CL
| extend clickedDomain = extract(@"https?://([^/]+)", 1, url_s)
| summarize ClickCount=count() by clickedDomain
| where clickedDomain has "maliciousdomain.com" or clickedDomain has "phish.example.com"

Quarantine Spike (Burst) Detection: Detect sudden spikes in quarantined messages. For example, using the POD mail log:

ProofpointPODMailLog_CL
| where action_s == "Held"
| summarize HeldCount=count() by bin(TimeGenerated, 1h)
| order by TimeGenerated desc
| where HeldCount > 100

This finds hours with an unusually high number of held (quarantined) emails, which may indicate a phishing campaign. You could similarly use ProofPointTAPMessagesBlockedV2_CL.

Targeted User Phishing: Find whether a specific user received multiple malicious emails. E.g., for user email address removed for privacy reasons:

ProofpointPODMessage_CL
| where recipient has "email address removed for privacy reasons"
| where array_length(threatsInfoMap) > 0 and threatsInfoMap_classification_s == "Phish"
| project TimeGenerated, sender_s, subject_s, threat=threatsInfoMap_threat_s

This lists recent phish attempts targeting that user. You might also join with the TAP click logs to see whether she clicked anything.

Campaign-Level Analysis: Group emails by Proofpoint campaign ID to see the scope of each campaign:

ProofpointPODMessage_CL
| mv-expand threatsInfoMap
| summarize Recipients=make_set(recipient), Count=dcount(recipient), SampleSubject=any(subject_s) by CampaignID=threatsInfoMap_campaignId_s
| project CampaignID, RecipientCount=Count, Recipients, SampleSubject

This shows each campaign ID with how many unique recipients were hit and one example subject. Combining the TAP and POD tables on GUID_s or QID_s can further link click events back to the originating message/campaign.

Each query can be refined (for instance, filtering only within a recent time window) and embedded in Sentinel analytics rules or hunting queries. The key is using the connectors’ fields – URLs, sender/recipient addresses, campaign IDs – to pivot between email data and other security signals.
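As noted in the Mimecast section above, a comparable hunt can be run over the Mimecast TTP data. The sketch below assumes the TTP URL Protect events land in MimecastDLP_CL with fields like url_s, urlCategory_s and user_s – these names are assumptions based on the connector description, so verify them against the schema in your workspace.

// Users who repeatedly hit URLs blocked by Mimecast URL Protect (table/field names are illustrative)
MimecastDLP_CL
| where TimeGenerated > ago(7d)
| where isnotempty(url_s)
| summarize Clicks = count(), Categories = make_set(urlCategory_s, 10), LastSeen = max(TimeGenerated) by user_s, url_s
| order by Clicks desc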
Cloud Posture + Attack Surface Signals in Microsoft Sentinel (Prisma Cloud + Cortex Xpanse)

Microsoft expanded Microsoft Sentinel’s connector ecosystem with Palo Alto integrations that pull cloud posture, cloud workload runtime, and external attack surface signals into the SIEM, so your SOC can correlate “what’s exposed” and “what’s misconfigured” with “what’s actively being attacked.” Specifically, the Ignite connectors list includes Palo Alto: Cortex Xpanse CCF and Palo Alto: Prisma Cloud CWPP.

Why these connectors matter for Sentinel detection engineering

Traditional SIEM pipelines ingest “events.” But exposure and posture are just as important as the events – because they tell you which incidents actually matter.
- Attack surface (Xpanse) tells you what’s reachable from the internet and what attackers can see.
- Posture (Prisma CSPM) tells you which controls are broken (public storage, permissive IAM, weak network paths).
- Runtime (Prisma CWPP) tells you what’s actively happening inside workloads (containers/hosts/serverless).

In Sentinel, these become powerful when you can join them with your “classic” telemetry (cloud activity logs, NSG flow logs, DNS, endpoint, identity). Result: fewer false positives, faster triage, better prioritization.

Connector overview (what each one ingests)

1) Palo Alto Prisma Cloud CSPM Solution
What comes in: Prisma Cloud CSPM alerts + audit logs via the Prisma Cloud CSPM API.
What it ships with: connector + parser + workbook + analytics rules + hunting queries + playbooks (prebuilt content).
Best for:
- Misconfiguration alerts: public storage, overly permissive IAM, weak encryption, risky network exposure.
- Compliance posture drift + audit readiness (prove you’re monitoring and responding).

2) Palo Alto Prisma Cloud CWPP (Preview)
What comes in: CWPP alerts via the Prisma Cloud API (Compute/runtime side).
Implementation detail: built on the Codeless Connector Platform (CCP).
Best for:
- Runtime detections (host/container/serverless security alerts).
- “Exploit succeeded” signals that you need to correlate with posture and exposure.

3) Palo Alto Cortex Xpanse CCF
What comes in: alert logs fetched from the Cortex Xpanse API, ingested using the Microsoft Sentinel Codeless Connector Framework (CCF).
Important: supports DCR-based ingestion-time transformations that parse into a custom table for better performance.
Best for:
- External exposure findings and “internet-facing risk” detection.
- Turning exposure into incidents only when the asset is critical / actively targeted.

Reference architecture (how the data lands in Sentinel)

Here’s the mental model you want for all three:

flowchart LR
    A[Palo Alto Prisma Cloud CSPM] -->|CSPM API: alerts + audit logs| S[Sentinel Data Connector]
    B[Palo Alto Prisma Cloud CWPP] -->|Prisma API: runtime alerts| S
    C[Cortex Xpanse] -->|Xpanse API: exposure alerts| S
    S -->|CCF/CCP + DCR Transform| T[(Custom Tables)]
    T --> K[KQL Analytics + Hunting]
    K --> I[Incidents]
    I --> P[SOAR Playbooks]
    K --> W[Workbooks / Dashboards]

Key design point: Xpanse explicitly emphasizes DCR transformations at ingestion time – use that to normalize fields early so your queries stay fast under load.

Deployment patterns (practical, SOC-friendly setup)

Step 0 — Decide what goes to “analytics” vs “storage”
If you’re using Sentinel’s data lake strategy, posture/exposure data is a perfect candidate for longer retention (trend + audit), while only “high severity” findings may need real-time analytics.
Step 1 — Install solutions from Content Hub
Install:
- Palo Alto Prisma Cloud CSPM Solution
- Palo Alto Prisma Cloud CWPP (Preview)
- Palo Alto Cortex Xpanse CCF

Step 2 — Credentials & least privilege
Create dedicated service accounts / API keys in the Palo Alto products with read-only scope for:
- CSPM alerts + audit
- CWPP alerts
- Xpanse alerts/exposures

Step 3 — Validate ingestion (don’t skip this)
In Sentinel Logs:
- Locate the custom tables created by each solution (Tables blade).
- Run basic sanity queries: “All events last 1h”, “Top 20 alert types”, “Distinct severities”.
Tip: Save these “ingestion smoke tests” as hunting queries so you can re-run them after upgrades.

Step 4 — Turn on included analytics content (then tune)
The Prisma Cloud CSPM solution comes with multiple analytics rules, hunting queries, and playbooks out of the box; enable them gradually and tune thresholds before going wide.

Detection engineering: high-signal correlation recipes

Below are patterns that consistently outperform “single-source alerts.” I’m giving them as KQL templates using placeholder table names because your exact custom table names/columns are workspace-dependent (you’ll see them after install).

Recipe 1 — “Internet-exposed + actively probed” (Xpanse + network logs)
Goal: Only fire when exposure is real and there’s traffic evidence.

let xpanse = <XpanseTable>
| where TimeGenerated > ago(24h)
| where Severity in ("High","Critical")
| project AssetIp=<ip_field>, Finding=<finding_field>, Severity, TimeGenerated;
let net = <NetworkFlowTable>
| where TimeGenerated > ago(24h)
| where Direction == "Inbound"
| summarize Hits=count(), SrcIps=make_set(SrcIp, 50) by DstIp;
xpanse
| join kind=inner (net) on $left.AssetIp == $right.DstIp
| where Hits > 50
| project TimeGenerated, Severity, Finding, AssetIp, Hits, SrcIps

Why it works: Xpanse gives you exposure. Flow/WAF/firewall data gives you intent.

Recipe 2 — “Misconfiguration that creates a breach path” (CSPM + identity or cloud activity)
Goal: Prioritize posture findings that coincide with suspicious access or admin changes.

let posture = <PrismaCSPMTable>
| where TimeGenerated > ago(7d)
| where PolicySeverity in ("High","Critical")
| where FindingType has_any ("Public", "OverPermissive", "NoMFA", "EncryptionDisabled")
| project ResourceId=<resource_id>, Finding=<finding>, PolicySeverity, FirstSeen=TimeGenerated;
let activity = <CloudActivityTable>
| where TimeGenerated > ago(7d)
| where OperationName has_any ("RoleAssignmentWrite","SetIamPolicy","AddMember","CreateAccessKey")
| project ResourceId=<resource_id>, Actor=<caller>, OperationName, TimeGenerated;
posture
| join kind=inner (activity) on ResourceId
| project PolicySeverity, Finding, OperationName, Actor, FirstSeen, TimeGenerated
| order by PolicySeverity desc, TimeGenerated desc

Recipe 3 — “Runtime alert on a workload that was already high-risk” (CWPP + CSPM)
Goal: Raise severity when runtime alerts occur on assets with known posture debt.

let risky_assets = <PrismaCSPMTable>
| where TimeGenerated > ago(30d)
| where PolicySeverity in ("High","Critical")
| summarize RiskyFindings=count() by AssetId=<asset_id>;
<CWPPTable>
| where TimeGenerated > ago(24h)
| project AssetId=<asset_id>, AlertName=<alert>, Severity=<severity>, TimeGenerated, Details=<details>
| join kind=leftouter (risky_assets) on AssetId
| extend RiskScore = coalesce(RiskyFindings, 0)
| order by Severity desc, RiskScore desc, TimeGenerated desc

SOC outcome: same runtime alert, different priority depending on posture risk.
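One way to make that prioritization explicit is to weight runtime alerts against a curated list of crown-jewel assets. The sketch below is a variant of Recipe 3 that assumes a Sentinel watchlist named CriticalAssets with an AssetId column; the watchlist name and column are illustrative, and the placeholder table/field names follow the recipe conventions above.

// Hypothetical variant of Recipe 3: boost priority for runtime alerts on watchlisted assets.
let critical = _GetWatchlist('CriticalAssets')
| project AssetId = tostring(AssetId), IsCritical = true;
<CWPPTable>
| where TimeGenerated > ago(24h)
| project AssetId=<asset_id>, AlertName=<alert>, Severity=<severity>, TimeGenerated
| join kind=leftouter (critical) on AssetId
| extend IsCritical = coalesce(IsCritical, false)
| extend Priority = case(IsCritical and Severity in ("High", "Critical"), "P1",
                         IsCritical, "P2",
                         "P3")
| order by Priority asc, TimeGenerated desc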
Operational guidance (in real life)

1) Normalize severities early
If Xpanse is using DCR transforms (it is), normalize severity to a consistent enum (“Informational/Low/Medium/High/Critical”) to simplify analytics.

2) Deduplicate exposure findings
Attack surface tools can generate repeated findings. Use a dedup function (hash of asset + finding type + port/service) and alert only on “new or changed exposure” (a minimal sketch follows after this list).

3) Don’t turn everything into an incident
Treat CSPM findings as:
- Incidents only when: critical + reachable + targeted, OR tied to privileged activity
- Tickets when: high risk but not active
- Backlog when: medium/low with compensating controls

4) Make SOAR “safe by default”
Automations should prefer reversible actions:
- Block IP (temporary)
- Add to watchlist
- Notify owners
- Open ticket with evidence bundle
…and only escalate to destructive actions after confidence thresholds are met.
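As referenced in item 2, here is a minimal dedup sketch that keys each exposure on a hash of asset + finding + port and only surfaces findings not seen in the preceding lookback window. The placeholder table and field names follow the recipe conventions above and should be mapped to your actual Xpanse custom table.

// Hypothetical “new or changed exposure” filter for attack surface findings.
let lookback = 14d;
let recent = 1d;
let known = <XpanseTable>
| where TimeGenerated between (ago(lookback) .. ago(recent))
| extend ExposureKey = hash_sha256(strcat(<asset_field>, "|", <finding_field>, "|", tostring(<port_field>)))
| distinct ExposureKey;
<XpanseTable>
| where TimeGenerated > ago(recent)
| extend ExposureKey = hash_sha256(strcat(<asset_field>, "|", <finding_field>, "|", tostring(<port_field>)))
| join kind=leftanti (known) on ExposureKey
| project TimeGenerated, Asset=<asset_field>, Finding=<finding_field>, Port=<port_field>, Severity

Only rows that survive the leftanti join represent exposure that is new or changed relative to the lookback period, which keeps alert volume proportional to real change rather than scan cadence.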
Strengthening calendar security through enhanced remediation

In today’s evolving threat landscape, phishing attacks are becoming increasingly sophisticated, often leveraging meeting invites to bypass traditional defenses. While Security Operations (SOC) teams rely on Microsoft Defender’s remediation actions to remove malicious emails, a hidden risk persists: calendar entries created by Outlook during email delivery. These entries can remain active even after the email is deleted, leaving users exposed to harmful content. This update addresses that gap.

Remediation supports cleaning up calendar entries

SOC teams currently use remediation actions such as Move to Junk, Delete, Soft Delete, and Hard Delete to quickly eliminate email threats from user inboxes. However, meeting invite emails introduce an additional challenge. Even after the email is removed, Outlook automatically creates a calendar entry during delivery, which remains accessible to users. For example, consider a phishing email sent as a meeting invite. Despite the admin removing the email from the user’s inbox, the user can still interact with the same malicious content via the calendar entry. This residual entry may contain harmful links or phishing content, creating a security gap.

With this update, we’re taking the first step toward closing that gap. Hard Delete will now also remove the associated calendar entry for any meeting invite email. This ensures threats are fully eradicated, not just from the inbox but also from the calendar, reducing the risk of user interaction with malicious content. This change applies to Hard Delete actions taken from any surface, including Explorer, Advanced Hunting, and API (a hunting sketch for locating such messages follows at the end of this post).

Note:
1) Deleted calendar entries can be restored by resending the meeting invite.
2) This action does not remove calendar entries manually added by users via .ics files.

Ability to Block URL domains via submission/TABL actions from Explorer

SOC teams can currently add senders and URLs to the TABL block list when submitting false negatives to Microsoft. However, phishing campaigns often use variations of URLs under the same parent domain, making full URL blocking less effective. With this update, TABL options for URL domains are now dynamically surfaced, enabling SOC teams to block entire domains without leaving their workflow. This enhancement simplifies remediation and strengthens defenses against domain-based phishing attacks.

These updates strengthen SOC remediation workflows by closing critical security gaps and ensuring threats are fully neutralized across all user touchpoints. By extending remediation to calendar entries and enabling domain-level URL blocking, we deliver comprehensive protection that reduces risk, streamlines operations, and safeguards user experiences. At Microsoft, our priority is your security, and we remain committed to empowering SOC teams with tools that make defense smarter and more effective.

Learn more: Remediate malicious email that was delivered in Office 365 - Microsoft Defender for Office 365 | Microsoft Learn
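For teams that pivot from Advanced Hunting into Hard Delete, the sketch below shows one way to surface delivered phishing emails that likely carried a meeting invite. It assumes the standard EmailEvents and EmailAttachmentInfo schemas; the .ics attachment heuristic is an illustration rather than an official indicator of a meeting invite, so validate it against your tenant’s data before acting on the results.

// Hypothetical hunt: delivered phish that appears to include a calendar invite (.ics attachment).
EmailEvents
| where Timestamp > ago(7d)
| where ThreatTypes has "Phish"
| where DeliveryAction == "Delivered"
| join kind=inner (
    EmailAttachmentInfo
    | where Timestamp > ago(7d)
    | where FileName endswith ".ics"
    | project NetworkMessageId, FileName
) on NetworkMessageId
| project Timestamp, NetworkMessageId, SenderFromAddress, RecipientEmailAddress, Subject, FileName
| order by Timestamp desc

From these results, the affected messages can be selected for Hard Delete, which now also removes the associated calendar entries.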
Unifying AWS and Azure Security Operations with Microsoft Sentinel

The Multi-Cloud Reality

Most modern enterprises operate in multi-cloud environments, using Azure for core workloads and AWS for development, storage, or DevOps automation. While this approach increases agility, it also expands the attack surface. Each platform generates its own telemetry:
- Azure: Activity Logs, Defender for Cloud, Entra ID sign-ins, Sentinel analytics
- AWS: CloudTrail, GuardDuty, Config, and CloudWatch
Without a unified view, security teams struggle to detect cross-cloud threats promptly. That’s where Microsoft Sentinel comes in, bridging Azure and AWS into a single, intelligent Security Operations Center (SOC).

Architecture Overview

Connect AWS Logs to Sentinel
AWS CloudTrail via S3 Connector
- Enable the AWS CloudTrail connector in Sentinel.
- Provide your S3 bucket and IAM role ARN with read access.
- Sentinel will automatically normalize logs into the AWSCloudTrail table.
AWS GuardDuty Connector
- Use the AWS GuardDuty API integration for threat detection telemetry.
- Detected threats, such as privilege escalation or reconnaissance, appear in Sentinel in the AWSGuardDuty table.

Normalize and Enrich Data
Once logs are flowing, enrich them to align with Azure activity data. Example KQL for mapping CloudTrail to Sentinel entities:

AWSCloudTrail
| extend AccountId = tostring(parse_json(Resources)[0].accountId)
| extend User = UserIdentityUserName
| extend IPAddress = SourceIpAddress
| project TimeGenerated, EventName, User, AccountId, IPAddress, AWSRegion

Then correlate AWS and Azure activities:

let AWS = AWSCloudTrail
| summarize AWSActivity = count() by User = UserIdentityUserName, bin(TimeGenerated, 1h);
let Azure = AzureActivity
| summarize AzureActivity = count() by Caller, bin(TimeGenerated, 1h);
AWS
| join kind=inner (Azure) on $left.User == $right.Caller
| where AWSActivity > 0 and AzureActivity > 0
| project TimeGenerated, User, AWSActivity, AzureActivity

Automate Cross-Cloud Response
Once incidents are correlated, Microsoft Sentinel playbooks (Logic Apps) can automate your response.
Example playbook: “CrossCloud-Containment.json”
1. Disable the user in Entra ID
2. Send a command to the AWS API via Lambda to deactivate the IAM access key
3. Notify the SOC in Teams
4. Create a ServiceNow ticket

POST https://api.aws.amazon.com/iam/disable-access-key

PATCH https://graph.microsoft.com/v1.0/users/{user-id}
{ "accountEnabled": false }

Build a Multi-Cloud SOC Dashboard
Use Sentinel Workbooks to visualize unified operations:

Query 1 – CloudTrail Events by Region
AWSCloudTrail
| summarize Count = count() by AWSRegion
| render barchart

Query 2 – Unified Security Alerts
union SecurityAlert, AWSGuardDuty
| summarize TotalAlerts = count() by ProviderName, Severity
| render piechart

Scenario
Incident: A compromised developer account accesses EC2 instances on AWS and then logs into Azure via the same IP.
Detection Flow:
1. CloudTrail logs → Sentinel detects unusual API calls
2. Entra ID sign-ins → Sentinel correlates IP and user
3. Sentinel incident triggers a playbook → disables the user in Entra ID, suspends the AWS IAM key, notifies the SOC
A correlation sketch for this scenario appears at the end of this post.

Strengthen Governance with Defender for Cloud
Enable Microsoft Defender for Cloud to:
- Monitor both Azure and AWS accounts from a single portal
- Apply CIS benchmarks for AWS resources
- Surface findings in Sentinel’s SecurityRecommendations table
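Tying the scenario together, here is a minimal correlation sketch that looks for the same source IP appearing in AWS CloudTrail activity and Entra ID sign-ins within the same hourly window. It assumes the built-in AWSCloudTrail and SigninLogs tables; the 1-hour bin and 1-day lookback are illustrative starting points to tune for your environment.

// Hypothetical cross-cloud pivot: the same source IP active in AWS API calls and Entra ID sign-ins.
let binSize = 1h;
let awsIps = AWSCloudTrail
| where TimeGenerated > ago(1d)
| summarize AwsEvents = count(), AwsUsers = make_set(UserIdentityUserName, 20)
    by IPAddress = SourceIpAddress, bin(TimeGenerated, binSize);
SigninLogs
| where TimeGenerated > ago(1d)
| summarize AzureSignins = count(), AzureUsers = make_set(UserPrincipalName, 20)
    by IPAddress, bin(TimeGenerated, binSize)
| join kind=inner (awsIps) on IPAddress, TimeGenerated
| project TimeGenerated, IPAddress, AwsEvents, AwsUsers, AzureSignins, AzureUsers
| order by TimeGenerated desc

Hits from this query are strong candidates for the automated containment playbook described above.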
You may be right after all! Disputing Submission Responses in Microsoft Defender for Office 365

Introduction

As a Microsoft MVP (Most Valuable Professional) specializing in SIEM, XDR, and Cloud Security, I have witnessed the rapid evolution of cybersecurity technologies, especially those designed to protect organizations from sophisticated threats targeting email and collaboration tools. About a year ago, Microsoft Defender for Office 365 introduced an LLM-based engine to better classify phishing emails, which these days are themselves mostly written with AI anyway. Today, I’m excited to spotlight a new place AI has been inserted into a workflow to make it better: a feature that elevates the transparency and responsiveness of threat management, the ability to dispute a submission response directly within Microsoft Defender for Office 365.

Understanding the Challenge

While the automated and human-driven analyses in Defender for Office 365 are robust, there are occasions where the response, be it a verdict of “benign” or “malicious”, doesn’t fully align with the security team’s context or threat intelligence. If you are a Microsoft 365 organization with Exchange Online mailboxes, you’re probably familiar with how admins can use the Submissions page in the Microsoft Defender portal to submit messages, URLs, and attachments to Microsoft for analysis. As a recent enhancement, all admin submissions now receive an LLM-based response for better explainability. In the past, disputing such verdicts required separate support channels, Community support, or manual email processes, often delaying resolution and impacting the speed of cyber operations.

Introducing the Dispute Submission Response Feature

With the new dispute submission response feature, Microsoft Defender for Office 365 bridges a critical gap in the incident response workflow. Now, when a security analyst or administrator receives a verdict on a submitted item, they have the option to dispute the response directly within the Microsoft 365 Defender portal. This feature streamlines feedback, allowing teams to quickly flag disagreements and provide additional context for review at the speed of operations.

How It Works

Upon submission of a suspicious item, Microsoft Defender for Office 365 provides a response indicating its assessment: malicious, benign, or another categorization. If the security team disagrees with the verdict, they can select the “Dispute” option and submit their rationale, including supporting evidence and threat intelligence. The disputed case is escalated directly to Microsoft’s threat research team for further review, and the team is notified of progress and outcomes. This direct feedback loop not only empowers security teams to advocate for their organization’s unique context, but also enables Microsoft to continually refine detection algorithms and verdict accuracy based on real-world input, because security is a team sport.

Benefits for Security Operations

- Faster Resolution: Streamlined dispute submission eliminates the need for external support tickets and escalations, reducing turnaround time for critical cases.
- Greater Transparency: The feature fosters a collaborative relationship between customers and Microsoft, ensuring that verdicts are not final judgments but points in an ongoing dialogue.
- Continuous Improvement: Feedback from disputes enhances Microsoft’s threat intelligence and improves detection for all Defender for Office 365 users.
- Empowerment: Security teams gain a stronger voice in the protection of their environment, reinforcing trust in automated defenses.
MVP Insights: Real-World Impact

Having worked with global enterprises, I’ve seen how nuanced and context-specific threats can be. Sometimes, what appears benign to one organization may be a targeted attack for another; a slight modification to a URL may mean one email is caught but not others, as small changes are introduced across the billions of emails sent. We are only as good as the consortium. The ability to dispute submission responses creates a vital safety net, ensuring that security teams are not forced to accept verdicts that could expose them to risk. It’s a welcome step toward adaptive, user-driven security operations.

Conclusion

The dispute submission response feature in Microsoft Defender for Office 365 is one of the most exciting features for me, because it enables organizations striving for agility and accuracy in threat management. By enabling direct, contextual feedback, Microsoft empowers security teams to play an active role in shaping their defenses. As an MVP, I encourage all users to leverage this feature, provide detailed feedback, and help drive the future of secure collaboration in the cloud. You may be right after all.

_________

This blog has been generously and expertly authored by Microsoft Security MVP Mona Ghadiri, with support from the Microsoft Defender for Office 365 product team.
Mona Ghadiri
Microsoft Security MVP

Learn More and Meet the Author

1) December 16th Ask the Experts Webinar: Microsoft Defender for Office 365 | Ask the Experts: Tips and Tricks (REGISTER HERE)
DECEMBER 16, 8 AM US Pacific
You’ve watched the latest Microsoft Defender for Office 365 best practices videos and read the blog posts by the esteemed Microsoft Most Valuable Professionals (MVPs). Now bring your toughest questions or unique situations straight to the experts. In this interactive panel discussion, Microsoft MVPs will answer your real-world scenarios, clarify best practices, and highlight practical tips surfaced in the recent series. We’ll kick off with a who’s who and a recap of the recent blog/video series, then dedicate most of the time to your questions across migration, SOC optimization, fine-tuning configuration, Teams protection, and even Microsoft community engagement. Come ready with your questions (or pre-submit here) for the expert Security MVPs on camera, or the Microsoft Defender for Office 365 product team in the chat! REGISTER NOW for 12/16.

2) Additional MVP Tips and Tricks Blogs and Videos in this Four-Part Series:
1. Microsoft Defender for Office 365: Migration & Onboarding by Purav Desai
2. Safeguarding Microsoft Teams with Microsoft Defender for Office 365 by Pierre Thoor
3. (This blog post) You may be right after all! Disputing Submission Responses in Microsoft Defender for Office 365 by Mona Ghadiri
4. Microsoft Defender for Office 365: Fine-Tuning | Real-world Defender for Office 365 tuning that closes real attack paths by Joe Stocker

Learn and Engage with the Microsoft Security Community
- Log in and follow this Microsoft Defender for Office 365 blog and follow/post in the Microsoft Defender for Office 365 discussion space. Follow = click the heart in the upper right when you’re logged in 🤍
- Learn more about the Microsoft MVP Program.
- Join the Microsoft Security Community and be notified of upcoming events, product feedback surveys, and more.
- Get early access to Microsoft Security products and provide feedback to engineers by joining the Microsoft Customer Connection Community.
- Join the Microsoft Security Community LinkedIn