Security and AI Essentials
Protect your organization with AI-powered, end-to-end security.
Defend Against Threats
Get ahead of threat actors with integrated solutions.
Secure All Your Clouds
Protection from code to runtime.
Secure All Access
Secure access for any identity, anywhere, to any resource.
Protect Your Data
Comprehensive data security across your entire estate.
Recent Blogs
We have reviewed the new settings in Microsoft Edge version 145 and determined that there are no additional security settings that require enforcement. The Microsoft Edge version 139 security baselin...
Feb 14, 2026
AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such ...
Feb 13, 2026
In today’s rapidly evolving threat landscape, organizations need security solutions that deliver actionable insights in real time, not minutes or hours after the fact. Microsoft Sentinel continues to...
Feb 12, 2026
From classroom to workforce: Helping higher ed faculty prepare students for what’s next
Team Blog: Microsoft Learn
Author: RWortmanMorris
Publish...
Feb 12, 2026
Recent Discussions
Lifecycle using Custom Protection with Purview Sensitivity Labels
Organizations using Purview Sensitivity Labels with custom protection face a fundamental governance challenge: there is no lifecycle-ready way to maintain, audit, or update per-document user rights as teams evolve. This affects compliance, need-to-know enforcement, and operational security.

Document lifecycle challenges:
- Team growth: new members do not inherit document-specific rights.
- Team shrinkage: departing members retain access unless manually removed.
- Employee offboarding: accounts are disabled, but compliance may require explicit removal from protected documents.
- Audit requirements: organizations need to answer "Who has what rights on document X?" and today no native tool provides this for custom-protected files.

Existing method | Limitation
Purview PowerShell | Overwrites all existing assignments; no granular updates
MIP Client | Not yet capable of bulk lifecycle operations
OlaProeis/FileLabeler | Great tool, but limited by the same PowerShell constraints

What the tool enables:
- Rights audit trail per document
- Controlled lifecycle updates (add/remove/transfer rights)
- Preservation of original files for rollback
- Multi-action batch processing
- Admin-only delegated workflow with MIP superuser role
- Full logging for compliance

Supported operations:
- ListRightAssignments – extract all rights from each document under a given label GUID
- SetOwner / AddOwner – assign or add owners
- AddEditor / AddRestrictedEditor / AddViewer – role-based additions
- RemoveAccess – remove any user from all roles
- AddAccessAs – map one user's role to one or more new users
- Multi-action execution – combine operations in a single run
- Safe mode – original files preserved; updated copies created with a trailer

Because this tool can modify access to highly sensitive content, it must be embedded in a controlled workflow: ticket-based approval, delegated admin, MIP superuser assignment, and retention of all logs as part of the audit trail. This ensures compliance with need-to-know, separation of duties, and legal requirements.

I would appreciate feedback from the community and Microsoft product teams on:
- whether similar lifecycle capabilities are planned for Purview
- whether the MIP SDK is the right long-term approach
- how others handle custom-protected document lifecycle today
- interest in collaborating on a more robust open-source version

Display On-prem Password Policy on SSPR Page
Hi All, we are beginning to roll out SSPR with on-prem writeback. So far so good. Is there a way we can display our on-prem password policy requirements on the SSPR screen? I have seen the MS docs but can't really make any sense of them, so any help would be greatly appreciated.

CrowdStrike API Data Connector (via Codeless Connector Framework) (Preview)
API scopes are created and added to the connector; however, the only streams observed are from Alerts and Hosts. Detections is not logging. Is anyone experiencing this issue? GitHub has a post about it that appears to have been escalated as a feature request: CrowdStrikeDetections is not ingested. Anyone have this setup and working?
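A quick way to confirm whether the Detections stream is landing at all is to compare per-table freshness. This is a minimal sketch; the table names are assumptions based on the connector's stream naming, so verify the exact table names in your workspace:

union isfuzzy=true CrowdStrikeAlerts, CrowdStrikeHosts, CrowdStrikeDetections // assumed table names - verify in Log Analytics
| where TimeGenerated > ago(24h)
| summarize Events = count(), LastEvent = max(TimeGenerated) by Type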
SAP & Business-Critical App Security Connectors

I validated what it takes to make SAP and SAP-adjacent security signals operational in a SOC: reliable ingestion, stable schemas, and detections that survive latency and schema drift. I focus on four integrations into Microsoft Sentinel: SAP Enterprise Threat Detection (ETD) cloud edition (SAPETDAlerts_CL, SAPETDInvestigations_CL), SAP S/4HANA Cloud Public Edition agentless audit ingestion (ABAPAuditLog), Onapsis Defend (Onapsis_Defend_CL), and SecurityBridge (also ABAPAuditLog). Because vendor API specifics for the ETD Retrieval API and Audit Retrieval API aren't publicly detailed in the accessible primary sources I could retrieve, I explicitly label pagination, rate-limit, and time-window behaviors as unspecified where appropriate.

Connector architectures and deployment patterns

For SAP-centric telemetry I separate two planes. First is SAP application telemetry that lands in SAP-native tables, especially ABAPAuditLog, ABAPChangeDocsLog, ABAPUserDetails, and ABAPAuthorizationDetails. These tables are the foundation for ABAP-layer monitoring and are documented with typed columns in the Azure Monitor Logs reference. Second is external security-product telemetry (ETD alerts, Onapsis findings). These land in custom tables (*_CL) and typically require a SOC-owned normalization layer to avoid brittle detections.

Within Microsoft's SAP solution itself, there are two deployment models: agentless and containerized connector agent. The agentless connector uses SAP Cloud Connector and SAP Integration Suite to pull logs, and Microsoft documents it as the recommended approach; the containerized agent is being deprecated and will be disabled on September 14, 2026.

On the implementation-technology axis, Sentinel integrations generally show up as:
- Codeless Connector Framework (CCF) pollers/pushers (SaaS-managed ingestion definitions with DCR support).
- Function/Logic App custom pipelines using the Logs Ingestion API when you need custom polling, enrichment, or a vendor endpoint that isn't modeled in CCF.

In my view, the ETD and S/4HANA Cloud connectors are "agentless" from the Sentinel side (API credentials only), while the Onapsis Defend and SecurityBridge connectors behave like push pipelines because Microsoft requires an Entra app plus DCR permissions (the typical Logs Ingestion API pattern).

Authentication and secrets handling

Microsoft documents the required credentials per connector:
- The ETD cloud connector requires a Client Id + Client Secret for the ETD Retrieval API (token mechanics unspecified).
- The S/4HANA Cloud Public Edition connector requires a Client Id + Client Secret for the Audit Retrieval API (token mechanics unspecified), and Microsoft notes that "alternative authentication mechanisms" exist (details in the linked repo are unspecified in accessible sources).
- The Onapsis Defend and SecurityBridge connectors require a Microsoft Entra ID app registration and Azure permission to assign Monitoring Metrics Publisher on DCRs. This maps directly to the Logs Ingestion API guidance, where a service principal is granted DCR access via that role (or the Microsoft.Insights/Telemetry/Write data action).

For production, I treat these as SOC platform secrets:
- Store client secrets/certificates in Key Vault when you own the pipeline (Function/Logic App); rotate on an operational schedule; alert on auth failures and sudden ingestion drops (see the SentinelHealth sketch after this list).
- For vendor-managed ingestion (Onapsis/SecurityBridge), I still require documented ownership of the Entra app, explicit RBAC scope for the DCR, and change control for credential rotation, because a rotated secret is effectively a data outage.
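A minimal sketch of that failure alert over SentinelHealth, assuming the workspace health feature is enabled (field names follow Microsoft's SentinelHealth schema reference; the lookback and grouping are starting points to tune per tenant):

SentinelHealth
| where TimeGenerated > ago(1h)
| where SentinelResourceType == "Data connector" and Status == "Failure"
| summarize Failures = count(), LastFailure = max(TimeGenerated), Sample = take_any(Description)
    by SentinelResourceName, OperationName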
API behaviors and ingestion reliability

For the ETD Retrieval API and Audit Retrieval API, pagination, rate limits, and time windows are unspecified in the accessible vendor documentation I could retrieve. I therefore design ingestion and detections assuming non-ideal API behavior: late-arriving events, cursor/page limitations, and throttling. CCF's RestApiPoller model supports explicit retry policy, windowing, and multiple paging strategies, so if/when you can obtain vendor API semantics, you can encode them declaratively rather than writing fragile code.

For the SAP solution's telemetry plane, Microsoft provides strong operational cues: agentless collection flows through Integration Suite, and troubleshooting typically happens in the Integration Suite message log; this is where I validate delivery failures before debugging Sentinel-side parsers.

For scheduled detections, I always account for ingestion delay explicitly. Microsoft's guidance is to widen the event lookback by the expected delay and then constrain on ingestion_time() to prevent duplicates from overlap.

Schema, DCR transformations, and normalization layer

Connector attribute comparison:

Connector | Auth method | Sentinel tables | Default polling | Backfill | Pagination | Rate limits
SAP ETD (cloud) | Client ID + Secret (ETD Retrieval API) | SAPETDAlerts_CL, SAPETDInvestigations_CL | unspecified | unspecified | unspecified | unspecified
SAP S/4HANA Cloud (agentless) | Client ID + Secret (Audit Retrieval API); alt auth referenced | ABAPAuditLog | unspecified | unspecified | unspecified | unspecified
Onapsis Defend | Entra app + DCR permission (Monitoring Metrics Publisher) | Onapsis_Defend_CL | n/a (push pattern) | unspecified | n/a | unspecified
SecurityBridge | Entra app + DCR permission (Monitoring Metrics Publisher) | ABAPAuditLog | n/a (push pattern) | unspecified | n/a | unspecified

Ingestion-time DCR transformations

Sentinel supports ingestion-time transformations through DCRs to filter, enrich, and mask data before it's stored. For example, I remove low-signal audit noise and mask email identifiers in ABAPAuditLog:

source
| where isnotempty(TransactionCode) and isnotempty(User)
| where TransactionCode !in ("SM21","ST22") // example noise; tune per tenant
| extend Email = iif(Email has "@", strcat(substring(Email,0,2),"***@", tostring(split(Email,"@")[1])), Email)

Normalization functions

Microsoft explicitly recommends querying through the SAP solution functions instead of raw tables, because the infrastructure beneath the functions can change without breaking detections. I follow the same pattern for the ETD/Onapsis custom tables: I publish SOC-owned functions as a schema contract.
.create-or-alter function with (folder="SOC/SAP") Normalize_ABAPAudit() {
    ABAPAuditLog
    | project TimeGenerated, SystemId, ClientId, User, TransactionCode, TerminalIpV6, MessageId, MessageClass, MessageText, AlertSeverityText, UpdatedOn
}

.create-or-alter function with (folder="SOC/SAP") Normalize_ETDAlerts() {
    SAPETDAlerts_CL
    | extend
        AlertId = tostring(coalesce(column_ifexists("AlertId",""), column_ifexists("id",""))),
        Severity = tostring(coalesce(column_ifexists("Severity",""), column_ifexists("severity",""))),
        SapUser = tostring(coalesce(column_ifexists("SAP_User",""), column_ifexists("User",""), column_ifexists("user","")))
    | project TimeGenerated, AlertId, Severity, SapUser, *
}

.create-or-alter function with (folder="SOC/SAP") Normalize_Onapsis() {
    Onapsis_Defend_CL
    | extend
        FindingId = tostring(coalesce(column_ifexists("FindingId",""), column_ifexists("id",""))),
        Severity = tostring(coalesce(column_ifexists("Severity",""), column_ifexists("severity",""))),
        SapUser = tostring(coalesce(column_ifexists("SAP_User",""), column_ifexists("user","")))
    | project TimeGenerated, FindingId, Severity, SapUser, *
}

Health/lag monitoring and anti-gap

I monitor both connector health and ingestion delay. SentinelHealth is the native health table, and Microsoft provides a health workbook and a schema reference for the fields.

let lookback=24h;
union isfuzzy=true
    (ABAPAuditLog | extend T="ABAPAuditLog"),
    (SAPETDAlerts_CL | extend T="SAPETDAlerts_CL"),
    (Onapsis_Defend_CL | extend T="Onapsis_Defend_CL")
| where TimeGenerated > ago(lookback)
| summarize LastEvent=max(TimeGenerated), P95DelaySec=percentile(datetime_diff("second", ingestion_time(), TimeGenerated), 95), Events=count() by T

Anti-gap scheduled-rule frame (Microsoft pattern):

let ingestion_delay=10m;
let rule_lookback=5m;
ABAPAuditLog
| where TimeGenerated >= ago(ingestion_delay + rule_lookback)
| where ingestion_time() > ago(rule_lookback)

SOC detections for ABAP privilege abuse, fraud/insider behavior, and audit readiness

Privileged ABAP transaction monitoring

ABAPAuditLog includes TransactionCode, User, SystemId, and terminal/IP fields, so I start with a curated high-risk tcode set and then add baselines (a baseline sketch follows the query below).

let PrivTCodes=dynamic(["SU01","PFCG","SM59","RZ10","SM49","SE37","SE16","SE16N"]);
Normalize_ABAPAudit()
| where TransactionCode in (PrivTCodes)
| summarize Actions=count(), Ips=make_set(TerminalIpV6,5) by SystemId, User, TransactionCode, bin(TimeGenerated, 1h)
| where Actions >= 3
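To add the baselines, one option is to compare each user's current-hour volume against their own trailing history. This is a minimal sketch; the 14-day window and the 3x multiplier are arbitrary starting points to tune per tenant:

let baseline = Normalize_ABAPAudit()
    | where TimeGenerated between (ago(14d) .. ago(1d))
    // average events per hour over the 13-day baseline window
    | summarize BaselineHourly = count() / (13.0 * 24.0) by User, TransactionCode;
Normalize_ABAPAudit()
| where TimeGenerated > ago(1h)
| summarize Recent = count() by User, TransactionCode
| join kind=leftouter baseline on User, TransactionCode
| where Recent > 3 * coalesce(BaselineHourly, 0.5) // flag bursts well above the user's own norm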
Fraud/insider scenario: sensitive object change near privileged audit activity

ABAPChangeDocsLog exposes ObjectClass, ObjectId, and change types; I correlate sensitive object changes to privileged transactions in a tight window.

let w=10m;
let Sensitive=dynamic(["BELEG","BPAR","PFCG","IDENTITY"]);
ABAPChangeDocsLog
| where ObjectClass in (Sensitive)
| project ChangeTime=TimeGenerated, SystemId, User=tostring(column_ifexists("User","")), ObjectClass, ObjectId, TypeOfChange=tostring(column_ifexists("ItemTypeOfChange",""))
| join kind=innerunique (
    Normalize_ABAPAudit()
    | project AuditTime=TimeGenerated, SystemId, User, TransactionCode
) on SystemId, User
| where AuditTime between (ChangeTime-w .. ChangeTime+w)
| project ChangeTime, AuditTime, SystemId, User, ObjectClass, ObjectId, TransactionCode, TypeOfChange

Audit-ready pipeline: monitoring continuity and configuration touchpoints

I treat audit logging itself as a monitored control. A simple SOC-safe control is a volume-drop check by system; it's vendor-agnostic and catches both pipeline breaks and deliberate suppression.

Normalize_ABAPAudit()
| summarize PerHour=count() by SystemId, bin(TimeGenerated, 1h)
| summarize Avg=avg(PerHour), Latest=arg_max(TimeGenerated, PerHour) by SystemId
| where Latest_PerHour < (Avg * 0.2)

Where Onapsis/ETD are present, I increase fidelity by requiring "privileged ABAP activity" plus an external SAP-security product finding (field mappings are tenant-specific; normalize first):

let win=1h;
Normalize_ABAPAudit()
| where TransactionCode in ("SU01","PFCG","SM59","SE16N")
| join kind=leftouter (Normalize_Onapsis()) on $left.User == $right.SapUser
| where isempty(FindingId) == false and TimeGenerated1 between (TimeGenerated .. TimeGenerated+win)
| project TimeGenerated, SystemId, User, TransactionCode, FindingId, OnapsisSeverity=Severity

Production validation, troubleshooting, and runbook

For acceptance, I validate in this order: table creation, freshness/lag percentiles, connector health state, and a cross-check of event counts against the upstream system for the same UTC window (where available).
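For the count cross-check, a fixed-UTC-window tally per system is enough to compare against an upstream extract. A minimal sketch; the window below is an arbitrary example:

ABAPAuditLog
| where TimeGenerated between (datetime(2026-02-12T00:00:00Z) .. datetime(2026-02-12T01:00:00Z)) // example UTC hour
| summarize Events = count() by SystemId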
Connector health monitoring is built around SentinelHealth plus the Data collection health workbook. For SAP agentless ingestion, Microsoft states most troubleshooting happens in Integration Suite message logs; this is where I triage authentication/networking failures before tuning KQL. For Onapsis/SecurityBridge-style ingestion, I validate the Entra app auth, the DCR permission assignment (Monitoring Metrics Publisher), and a minimal ingestion test payload using the Logs Ingestion API tutorial flow.

Operational runbook items I treat as non-optional: health alerts on connector failure and freshness drift; scheduled-rule anti-gap logic; playbooks that capture evidence bundles (an ABAPAuditLog slice plus user context from ABAPUserDetails/ABAPAuthorizationDetails); DCR filters to reduce noise and cost; and change control for normalization functions and watchlists.

SOC "definition of done" checklist (short):
1) Tables present and steadily ingesting;
2) P95 ingestion delay measured and rules use the anti-gap pattern;
3) SentinelHealth enabled with alerts;
4) SOC-owned normalization functions deployed;
5) at least one privileged-tcode rule + one change-correlation rule + one audit-continuity rule in production.

Mermaid ingestion flow: (diagram omitted)

How can I discover my AI Foundry Agents?
Apologies for being new to Purview, but I cannot figure this out... So far I've created a Microsoft Purview account and added myself to a number of roles. I've enabled Microsoft Purview in the new Microsoft Foundry portal by navigating to Operate => Compliance => Security posture and enabling "Microsoft Purview". Back in Purview, I can see Copilot apps and agents, but I cannot see any agents deployed in Microsoft Foundry. What am I missing?

eDiscovery - Issues exploring groups & users related to a hybrid data source
Hi all, first time posting; unusually, I could find nothing out there that helped. I work in an organisation that has an on-premises domain which syncs to our tenant. I don't manage the domain or the sync, but I'm assured that the settings are vanilla and there are no errors being logged. 99% of our users are hybrid. The tenant is shared across multiple legal entities, so I'm using eDiscovery to fulfil our GDPR subject access requests.

The issue I am hitting is straightforward: in eDiscovery searches with hybrid users as the data source, I cannot add related objects (manager, direct reports, groups the user is in). The properties are present in Entra but not visible to Purview, so I'm not investigating sync errors at the moment. For cloud objects, I can see manager, teams, etc., and it works fine. Does anyone have any insights they can share on the "explore and add" mechanics in eDiscovery search data sources? I'm drawing a complete blank on this one. Where should I be looking?

Dedicated cluster for Sentinels in different tenants
Hello, I see that it is possible to use a dedicated cluster for a workspace in the same Azure region. What about workspaces that reside in different tenants but in the same Azure region? Is that possible? We are utilizing multiple tenants and want to keep this operational model. However, there is a central SOC, and we wonder whether we can utilize a dedicated cluster for cost optimization.

How Should a Fresher Learn Microsoft Sentinel Properly?
Hello everyone, I am a fresher interested in learning Microsoft Sentinel and preparing for SOC roles. Since Sentinel is a cloud-native enterprise tool that is usually used inside organizations, I am unsure how individuals without company access are expected to gain real hands-on experience. I would like to hear from professionals who actively use Sentinel:
- How do freshers typically learn and practice Sentinel?
- What learning resources or environments are commonly used by beginners?
- What level of hands-on experience is realistically expected at entry level?
I am looking for guidance based on real industry practice. Thank you for your time.

Cannot find sensitivity label Confidential - Internal Employee
Dear Microsoft Community, per our review, the sensitivity label "Confidential - Internal Employees" does not exist in Microsoft Purview, nor in any SIT or rule. Yet we notice that when email comes from Department A to all internal users, it always arrives with the label "Confidential - Internal Employees". Can you share good practice for troubleshooting this and what the next action should be, without interrupting users' daily operations? Thanks,

Default Sensitivity Label to be added to migrated files (from Local Network Server)
Hi Experts, we are migrating our file-sharing services from a local network file server to MS Teams/SPO. The requirement is to apply default sensitivity labels to the migrated files. Manually assigning sensitivity labels to over a TB of files is hectic and could be prone to error as well. MS Purview MIP labels and label policies are configured; however, at present, only new documents and/or revised files are having sensitivity labels assigned. Any suggestions, guides, and tips will be highly appreciated. Thanks, Rhey

Can't Sign confidential documents
Hello, I have a problem. I want to send confidential contracts to customers for signing with Adobe DocuSign. These contracts have a "confidential" label from Purview and are encrypted. But now the customer can't sign the contract with DocuSign because of the encryption. Is there a way for them to sign the document? We must encrypt the documents for compliance and ISMS reasons. Thank you.

Adaptive Scope
I created an adaptive scope that uses CustomAttribute10 -eq "Leaver" to build the user scope. The accounts are hybrid AD accounts, therefore we need to populate ExtensionAttribute10 with the string "Leaver". The on-prem AD account is updated accordingly:

Set-ADUser $User.DistinguishedName -Add @{ExtensionAttribute10="Leaver"}

The extension attribute has been added to the Entra ID sync, through which the attribute is synced to Entra ID. When I verify the synced account in Entra ID, I can confirm that custom attribute 10 is indeed synced:

(Get-MgUser -Filter "DisplayName eq '$($AdUser.Name)'" -Property OnPremisesExtensionAttributes | Select-Object -ExpandProperty OnPremisesExtensionAttributes).ExtensionAttribute10
Leaver

This is my adaptive scope:

Get-AdaptiveScope | Select-Object FilterConditions

FilterConditions
----------------
{"Conditions":[{"Value":"Leaver","Operator":"Equals","Name":"CustomAttribute10"}],"Conjunction":"And"}

I created the adaptive scope about a week ago, so it should be populated. However, when I check my scope, it is still empty. What did I miss?

Whom to report when several days after a 'file submission' displayed status is 'In progress'...
Hello all, could anyone be so kind as to tell me whom I should report these issues to, when:
- after several days (the submission took place on June 29, 2024), the reported Status for the below-mentioned Submission ID is still displayed as 'In progress'?
- no details for an already submitted 'File submission' are available (after clicking on '90d794a0-3a0d-4bc2-9d8f-2169d477fb30', only this error message is shown: 'The details for the submission were not found or the submission has expired. You can view recent items in your submission history.')?

P.S. If I'd better submit this post/question into another discussion space, please just let me know ASAP. Below you find a screenshot showing the main issue I described. Please also note that I already tried to report the same issue on the day of submission and again today via the same 'Provide feedback' smiley icon (also shown in the screenshot below, highlighted in a squared box), but with no results so far. Thanks in advance for any update. Best Regards, Rob

Multitenant organization (MTO): user licenses
Hello everyone, as described at https://learn.microsoft.com/en-us/microsoft-365/enterprise/set-up-multi-tenant-org, I have created an MTO. It seems to have worked, because I can see users from tenant A in tenant B. Everything looks correct, as the users have #EXT# in their usernames, their type is "Member", and their identity is "ExternalAzureAD". BUT they are all unlicensed. My question: is there a way to synchronize the licenses of the users, or do I really have to purchase the same license twice for a single user? Specifically, I am interested in the following licenses: Microsoft 365 Business Premium (access to Teams, SharePoint, Exchange Online shared mailboxes, etc.) and Dynamics 365 licenses (e.g., Business Central). Thank you very much for your assistance, and warm regards, Nico

Onboarding MDE with Defender for Cloud (Problem)
Hello Community, at our customer I have a strange problem. We onboarded servers with Azure Arc and activated the Defender for Cloud services only for endpoint protection. Some of these devices onboarded into the Microsoft Defender portal but do not appear as devices; in fact, I don't have the option to put them into a group to apply policy. I have checked the Azure Arc sensor and all works fine (the devices are in Azure Arc, are in the Defender portal, and are visible in Intune (managed by MDE)).

[Screenshot: From Intune portal]
[Screenshot: From Defender portal]

But unlike other devices, in Entra ID only the enterprise application exists and not the device. I show the example of a device that works correctly (same onboarding method). Is there anyone who has or has had this problem? Thanks and Regards, Guido

Copilot Studio Auditing
Hey team, while doing research around Copilot Studio auditing and logging, I noticed a few discrepancies. This article describes auditing in Microsoft Copilot Studio: https://learn.microsoft.com/en-us/microsoft-copilot-studio/admin-logging-copilot-studio?utm_source=chatgpt.com

I did a few simulations in Copilot Studio in my test tenant, and I don't see some of the operations generated that are mentioned in the article. For example: for updating authentication details, it generated a "BotUpdateOperation-BotIconUpdate" event. Ideally it should have generated "BotUpdateOperation-BotAuthUpdate". I expected different operations for instructions, tools, and knowledge updates; I believe all of these are currently covered under "BotComponentUpdate". Any security experts' suggestions/thoughts on this?

XDR RBAC missing Endpoint & Vulnerability Management
I've been looking at ways to provide a user with access to the Vulnerability Dashboard and associated reports without giving them access to anything else within Defender (Email, Cloud Apps, etc.). The article https://learn.microsoft.com/en-us/defender-xdr/activate-defender-rbac has a slider for Endpoint Management which I don't appear to have. I have Business Premium licences, which give me GA access to see the data, so I know I'm licensed for it and it works, but I can't figure out how to assign permissions. When looking at creating a custom permission here, https://learn.microsoft.com/en-us/defender-xdr/custom-permissions-details#security-posture--posture-management, it mentions that Security Posture Management would give them Vulnerability Management Level Read, which is what I'm after, but that doesn't appear to be working. The test account I'm using to try this out just gets the error "Error getting device data". I'm assuming it's because it doesn't have permissions for the device details?

Question malware autodelete
A malware like Trojan:Win32/Wacatac.C!ml can download other malware, and that second piece of malware can perform the malicious action and then delete itself. In the next scan by a free antivirus, will this self-deleted malware leave no trace and therefore go undetected by the scan?
Events
As AI becomes embedded in everyday work, traditional data security models break down. Copilots and agents can search, summarize, and recombine information at machine speed, creating new exposure path...
Tuesday, Feb 17, 2026, 09:00 AM PST (Online)