Microsoft 365
Interested in Being a Guest on the Microsoft Security Podcast at RSA?
Going to RSA? We're recording short, 15-minute Microsoft Security podcast conversations live at the conference, focused on real-world practitioner experience. No pitches, no slides, no marketing. Just an honest conversation about what you're seeing, what's changed, and what you'd tell a peer. If you're doing the work and want to share your perspective, we'd love to hear from you. Take the survey here to let us know you are interested!
Microsoft Purview Data Security Investigations is now generally available

Every data security investigation starts with the same question: what data security risks are buried in this data? Exposed credentials in thousands of files across a data estate. Evidence of fraud hidden in vendor communications. Sensitive documents accidentally shared to a large group. Finding these risks manually — reviewing content file by file, message by message — is no longer viable when organizations are managing 220 zettabytes of data [1] and facing over 12,000 confirmed breaches annually [2]. That's why we built Microsoft Purview Data Security Investigations, now generally available.

Microsoft Purview Data Security Investigations enables data security teams to identify investigation-relevant data, investigate that data with AI-powered deep content analysis, and mitigate risk — all within one unified solution. Teams can quickly analyze data at scale to surface sensitive data and security risks, then collaborate securely to address them. By streamlining complex, time-consuming investigative workflows, admins can resolve investigations in hours instead of weeks or months.

Proactive and reactive investigation scenarios

Organizations are using Data Security Investigations to tackle diverse data security challenges — from reactive incident response to proactive risk assessment. Some of our top use cases from preview include:

- Data breach and leak: Understand the severity, sensitivity, and significance of data that has been leaked or breached, including risks buried in the impacted data, to take action and mitigate the impact to the organization.
- Credentials exposure: Proactively scan thousands of SharePoint sites to identify files containing credentials such as passwords, which can give a threat actor prolonged access to an organization's environment.
- Internal fraud and bribery: Uncover suspicious communications tied to vendor payments or client interactions, surfacing hard-to-find patterns in large volumes of emails and messages.
- Sensitive data exposure in Teams: Determine who accessed classified documents after accidental sharing — and whether that data was further distributed.
- Inappropriate content investigations: Quickly find what was posted, where, and by whom, even when teams only know a timeframe or channel name.

Investigations that once took weeks — or weren't possible at all — can now be completed in hours. By eliminating manual effort and surfacing hidden risks across sprawling data estates, Data Security Investigations empowers teams to investigate more efficiently and confidently, making deep, scalable investigations a reality.

What Microsoft Purview Data Security Investigations does – and what's new

Since launching public preview, we've listened closely to customer feedback and made significant enhancements to help teams investigate faster, mitigate more effectively, and manage costs with confidence. Data Security Investigations addresses three critical stages of an investigation.

Identify impacted data

Data security admins can efficiently identify relevant data by searching their Microsoft 365 data estate to locate emails, Teams messages, Copilot prompts and responses, and documents. Investigators can also launch pre-scoped investigations from a Microsoft Defender XDR incident or a Microsoft Purview Insider Risk Management case. We've recently added a new integration that allows admins to launch a Data Security Investigation from Microsoft Purview Data Security Posture Management as well.
This capability can help a data security admin investigate an objective, such as preventing data exfiltration.

Investigate using deep content analysis

Once the investigation is scoped, the solution's generative AI capabilities allow admins to gain deeper insights into the data, analyzing across 95+ languages to uncover critical sensitive data and security risks. Teams can quickly answer three questions: What data security risks exist within the data? Why do they matter? And what actions can be taken to mitigate them? To help answer these questions, two new investigative capabilities, AI search and AI context input, as well as enhancements to existing features, were added in November. Data Security Investigations helps admins scale their impact and accelerate investigations with the following features:

- AI search: Using a new AI-powered natural language search experience, admins can find key risks using keywords, metadata, and semantic embeddings — making it easier to locate investigation-relevant content across large data estates.
- Categorization: By automatically classifying investigation data into meaningful categories, admins can quickly understand incident severity, what types of content are at risk, and trends within an investigation.
- Vector search: Using semantic search, admins can find contextually related content — even when exact keywords don't match.
- Risk examination: Using deep content analysis, admins can examine content for sensitive data and security risks, providing a risk score, recommended mitigation steps, and AI-generated rationale for each analyzed asset.
- AI context input: Admins can now add investigation-specific context before analysis, resulting in more efficient, higher-quality insights tailored to the specific incident.

(Image: AI search in action, finding credentials present in the dataset being investigated.)

Mitigate identified risks

Investigators can use Data Security Investigations to securely collaborate with partner teams to mitigate identified risks, simplifying tasks that have traditionally been time consuming and complex. In September, we launched an integration with the Microsoft Sentinel graph, the data risk graph, allowing admins to visualize correlations between investigation data, users, and their activities. This automatically combines unified audit logs, Entra audit logs, and threat intelligence that would otherwise need to be manually correlated, saving time, providing critical context, and allowing investigators to understand all nodes in their investigation. At the start of January 2026, we launched a new mitigation action, purge, that helps admins quickly and efficiently delete sensitive or overshared content directly within the investigation workflow in the product interface. This reduces exposure immediately and keeps incidents from escalating or recurring.

Built-in cost management tools

To help customers predict and manage the costs associated with using Data Security Investigations, we recently released a lightweight cost estimator and usage dashboard. The in-product cost estimator is now available to help analysts model and forecast both storage and compute unit costs based on specific use cases, enabling more accurate budget planning. Additionally, the usage dashboard provides granular breakdowns of billed storage and compute unit usage, empowering data security admins to identify cost-saving opportunities and optimize resource allocation. For detailed guidance on managing costs, see https://aka.ms/DSIcostmanagementtips.
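To make the storage-plus-compute model concrete, here is a hypothetical back-of-the-envelope estimate in PowerShell. The rates, data volume, and CU count are placeholder assumptions, not published prices; use the in-product estimator and aka.ms/purviewpricing for real figures.

$storageRatePerGbMonth = 0.10   # hypothetical rate in $/GB/month, NOT a published price
$computeRatePerCu      = 2.00   # hypothetical rate in $/compute unit (CU)
$investigationGb       = 250    # GB of data stored for the investigation
$computeUnits          = 40     # CUs consumed by AI analysis and actions

$estimate = ($investigationGb * $storageRatePerGbMonth) + ($computeUnits * $computeRatePerCu)
"Estimated monthly cost: {0:N2} (placeholder rates)" -f $estimate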
Refined business model for general availability

These cost management tools are designed to support our updated business model, which offers greater flexibility and transparency. Customers need the freedom to scale investigations without overcommitting resources. To better align with how customers investigate data risk at scale, we refined the Data Security Investigations business model as part of general availability. The product now uses two consumptive meters:

- Data Security Investigations Storage Meter – for storing investigation-related data, charged by GB
- Data Security Investigations Compute Meter – for the computational capacity required to complete AI-powered data analysis and actions, charged by Compute Units (CUs)

Monthly charges are determined by the amount of data stored and the number of CUs consumed per hour. This pay-as-you-go model ensures customers only pay for what they need, when they need it, providing the flexibility, scalability, and cost efficiency needed for both urgent incident response and proactive data security hygiene assessments. Find more information on pricing at aka.ms/purviewpricing.

Get started today

As data security threats evolve, so must the way we investigate them. Microsoft Purview Data Security Investigations is now generally available, giving organizations a modern, AI-powered approach to uncovering and mitigating risk — without the complexity of disconnected tools or manual workflows. Whether investigating an active breach or proactively hunting for hidden risks, Data Security Investigations gives data security teams the speed and precision needed to act decisively in today's threat landscape.

- Join a live Ask Me Anything with the people behind the product on Thursday, February 5th at 10am PST; more details at aka.ms/PurviewDSIAMA2
- Learn more about Data Security Investigations at aka.ms/DSIdocs
- View pricing details at aka.ms/purviewpricing
- Try Data Security Investigations today! Visit the product at https://purview.microsoft.com/dsi and find setup instructions at aka.ms/DSIsetup

[1] IDC, Worldwide Global DataSphere Forecast, 2025–2029
[2] Verizon, 2025 Data Breach Investigations Report (DBIR)
Search and Purge workflow in the new modern eDiscovery experience

With the retirement of Content Search (Classic) and eDiscovery Standard (Classic) in May, and alongside the future retirement of eDiscovery Premium (Classic) in August, organizations may be wondering how this will impact their existing search and purge workflow. The good news is that it will not impact your organization's ability to search for and purge email, Teams, and M365 Copilot messages; however, there are some additional points to be careful about when working with purge via the cmdlets and Graph APIs alongside the modern eDiscovery experience.

We have made some recent updates to our documentation regarding this topic to reflect the changes in the new modern eDiscovery experience. These can be found below, and you should ensure that you read them in full as they are packed with important information on the process.

- Find and delete email messages in eDiscovery | Microsoft Learn
- Find and delete Microsoft Teams chat messages in eDiscovery | Microsoft Learn
- Search for and delete Copilot data in eDiscovery | Microsoft Learn

The intention of this first blog post in the series is to cover the high-level points, including some best practices, when it comes to running search and purge operations using Microsoft Purview eDiscovery. Please stay tuned for further blog posts intended to provide a more detailed step-by-step of the following search and purge scenarios:

- Search and purge email and Teams messages using the Microsoft Graph eDiscovery APIs
- Search and purge email messages using the Security and Compliance PowerShell cmdlets

I will update this blog post with links to the follow-on posts in this series.

So let's start by looking at the two methods available to issue a purge command with Microsoft Purview eDiscovery: the Microsoft Graph eDiscovery APIs and the Security and Compliance PowerShell cmdlets. The licenses you have dictate which options are available to you and which types of items you can purge from Microsoft 365 workloads.

For E3/G3 customers, and cases which have the premium features disabled:

- You can only use the PowerShell cmdlets to issue the purge command
- You should only purge email items from mailboxes, not Teams messages
- You are limited to deleting 10 items per location with a purge command

For E5/G5 customers, and cases which have the premium features enabled:

- You can only use the Graph API to issue the purge command
- You can purge email items and Teams messages
- You can delete up to 100 items per location with a purge command

To undertake a search and then purge, you must have the correct permissions assigned to your account. There are two key Purview roles that you must be assigned:

- Compliance Search: This role lets users run the Content Search tool in the Microsoft Purview portal to search mailboxes and public folders, SharePoint Online sites, OneDrive for Business sites, Skype for Business conversations, Microsoft 365 groups, Microsoft Teams, and Viva Engage groups. This role allows a user to get an estimate of the search results and create export reports, but other roles are needed to initiate content search actions such as previewing, exporting, or deleting search results.
- Search and Purge: This role lets users perform bulk removal of data matching the criteria of a search.
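Because the next section recommends a custom role group over the highly privileged Organization Management group, here is a minimal sketch of creating one in Security and Compliance PowerShell. The group name and member UPN are hypothetical; verify the built-in role names in your tenant before running.

# Connect to Security and Compliance PowerShell
Connect-IPPSession

# Create a custom role group carrying only the two roles needed for
# search and purge. Verify the built-in role names in your tenant first,
# e.g. with: Get-ManagementRole | Select-Object Name
New-RoleGroup -Name "Purge Operators" `
    -Roles "Compliance Search","Search And Purge" `
    -Description "Least-privilege group for eDiscovery search and purge"

# Add an authorised admin (UPN is hypothetical)
Add-RoleGroupMember -Identity "Purge Operators" -Member "admin@contoso.com"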
To learn more about permissions in eDiscovery, along with the different eDiscovery Purview roles, please refer to the following Microsoft Learn article: Assign permissions in eDiscovery | Microsoft Learn

By default, eDiscovery Manager and eDiscovery Administrators have the "Compliance Search" role assigned. For search and purge, only the Organization Management Purview role group has the role assigned by default. However, this is a highly privileged Purview role group, and customers should consider using a custom role group to assign the Search and Purge Purview role to authorised administrators. Details on how to create a custom role group in Purview can be found in the following article: Permissions in the Microsoft Purview portal | Microsoft Learn

It is also important to consider the impact that any retention policies or legal holds will have when attempting to purge email items from a mailbox where you want to hard delete the items and remove them completely from the mailbox. When a retention policy or legal hold is applied to a mailbox, email items that are hard deleted via the purge process are moved to and retained in the Recoverable Items folder of the mailbox. These purged items will be retained until all holds are lifted and the retention period defined in the retention policy has expired. It is important to note that items retained in the Recoverable Items folder are not visible to users but are returned in eDiscovery searches. For some search and purge use cases this is not a concern; if the primary goal is simply to remove the item from the user's view, then no additional steps are required. However, if the goal is to completely remove the email item from the mailbox in Exchange Online so it doesn't appear in the user's view and is not returned by future eDiscovery searches, then additional steps are required. They are:

1. Disable client access to the mailbox
2. Modify retention settings on the mailbox
3. Disable the Exchange Online Managed Folder Assistant for the mailbox
4. Remove all legal holds and retention policies from the mailbox
5. Perform the search and purge operation
6. Revert the mailbox to its previous state

These steps should be followed carefully, as any mistake could result in other data that is being retained being permanently deleted from the service. The full detailed steps can be found in the article linked below.
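As a hedged outline of what steps 1-4 might look like in Exchange Online PowerShell (the cmdlets and parameters are standard Exchange Online ones, but confirm each step against the article linked below before running, and record the existing values so you can revert in step 6):

Connect-ExchangeOnline
$mbx = "user@contoso.com"   # hypothetical target mailbox

# 1. Disable client access to the mailbox
Set-CASMailbox $mbx -OWAEnabled $false -ActiveSyncEnabled $false `
    -MAPIEnabled $false -EwsEnabled $false -ImapEnabled $false -PopEnabled $false

# 2. Modify retention settings: disable single item recovery and
#    shorten the deleted item retention window
Set-Mailbox $mbx -SingleItemRecoveryEnabled $false -RetainDeletedItemsFor 0

# 3. Disable the Managed Folder Assistant for the mailbox
Set-Mailbox $mbx -ElcProcessingDisabled $true

# 4. Remove holds (litigation hold shown here; any retention policies
#    scoped to the mailbox must also be released)
Set-Mailbox $mbx -LitigationHoldEnabled $false

# 5. Perform the search and purge, then 6. revert every setting above.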
Delete items in the Recoverable Items folder of mailboxes on hold in eDiscovery | Microsoft Learn

Now for some best practices when running search and purge operations:

- Where possible, target the specific locations containing the items you wish to purge and avoid tenant-wide searches
- If a tenant-wide search is used to initially locate the items, once the locations containing the items are known, modify the search to target the specific locations and rerun the steps
- Always validate the item report against the statistics prior to issuing the purge command to ensure you are only purging items you intend to remove
- If the item counts do not align, do not proceed with the purge command
- Ensure admins undertaking search and purge operations are appropriately trained and equipped with up-to-date guidance and processes on how to safely execute the purge process
- The search conditions Identifier, Sensitivity Label, and Sensitive Information Type do not support purge operations and, if used, can cause unintended results

Organizations with E5/G5 licenses should also take this opportunity to review whether other Microsoft Purview and Defender offerings can help them achieve the same outcomes. When considering the right approach or tool to meet your desired outcomes, you should become familiar with the following additional options for removing email items:

- Priority cleanup (link): Use the Priority cleanup feature under Data Lifecycle Management in Microsoft Purview when you need to expedite the permanent deletion of sensitive content from Exchange mailboxes, overriding any existing retention settings or eDiscovery holds. This process might be implemented for security or privacy in response to an incident, or for compliance with regulatory requirements.
- Threat Explorer (link): Threat Explorer in Microsoft Defender for Office 365 is a powerful tool that enables security teams to investigate and remediate malicious emails in near real time. It allows users to search for and filter email messages based on various criteria - such as sender, recipient, subject, or threat type - and take direct actions like soft delete, hard delete, or moving messages to junk or deleted folders. For manual remediation, Threat Explorer supports actions on emails delivered within the past 30 days.

In my next posts I will be delving further into how to use both the Graph APIs and the Security and Compliance PowerShell module to safely execute your purge commands.
Security baseline for Microsoft 365 Apps for enterprise (v2512, December 2025)

Microsoft is pleased to announce that the latest security baseline for Microsoft 365 Apps for enterprise, version 2512, is now available as part of the Microsoft Security Compliance Toolkit. This release builds on previous baselines and introduces updated, security-hardened recommendations aligned with modern threat landscapes and the latest Office administrative templates. As with prior releases, this baseline is intended to help enterprise administrators quickly deploy Microsoft-recommended security configurations, reduce configuration drift, and ensure consistent protection across user environments. Download the updated baseline today from the Microsoft Security Compliance Toolkit, test the recommended configurations, and implement as appropriate.

This release introduces and updates several security-focused policies designed to strengthen protections in Microsoft Excel, PowerPoint, and core Microsoft 365 Apps components. These changes reflect evolving attacker techniques, partner feedback, and Microsoft's secure-by-design engineering standards. The recommended settings in this security baseline correspond with the administrative templates released in version 5516. Below are the updated settings included in this baseline.

Excel: File Block Includes External Link Files
Policy path: User Configuration\Administrative Templates\Microsoft Excel 2016\Excel Options\Security\Trust Center\File Block Settings\File Block includes external link files
The baseline will ensure that external links to workbooks blocked by File Block will no longer refresh. Attempts to create or update links to blocked files return an error. This prevents data ingestion from untrusted or potentially malicious sources.

Block Insecure Protocols Across Microsoft 365 Apps
Policy path: User Configuration\Administrative Templates\Microsoft Office 2016\Security Settings\Block Insecure Protocols
The baseline will block all non-HTTPS protocols when opening documents, eliminating downgrade paths and unsafe connections. This aligns with Microsoft's broader effort to enforce TLS-secure communication across productivity and cloud services.

Block OLE Graph Functionality
Policy path: User Configuration\Administrative Templates\Microsoft Office 2016\Security Settings\Block OLE Graph
This setting will prevent MSGraph.Application and MSGraph.Chart (classic OLE Graph components) from executing. Microsoft 365 Apps will instead render a static image, mitigating a historically risky automation interface.

Block OrgChart Add-in
Policy path: User Configuration\Administrative Templates\Microsoft Office 2016\Security Settings\Block OrgChart
The legacy OrgChart add-in is disabled, preventing execution and replacing output with an image. This reduces exposure to outdated automation frameworks while maintaining visual fidelity.

Restrict FPRPC Fallback in Microsoft 365 Apps
Policy path: User Configuration\Administrative Templates\Microsoft Office 2016\Security Settings\Restrict Apps from FPRPC Fallback
The baseline disables the ability for Microsoft 365 Apps to fall back to FrontPage Server Extensions RPC, an aging protocol not designed for modern security requirements. Avoiding fallback ensures consistent use of modern, authenticated file-access methods.
PowerPoint: OLE Active Content Controls Updated
Policy path: User Configuration\Administrative Templates\Microsoft PowerPoint 2016\PowerPoint Options\Security\OLE Active Content
This baseline enforces disabling interactive OLE actions; no OLE content will be activated. The recommended baseline selection ensures secure-by-default OLE activation, reducing risk from embedded legacy objects.

Deployment options for the baseline

IT admins can apply baseline settings in different ways. Depending on the method(s) chosen, different registry keys will be written, and they will be observed in order of precedence: Office cloud policies override ADMX/Group Policies, which override end-user settings in the Trust Center. (A short verification sketch appears at the end of this post.)

- Cloud policies may be deployed with the Office cloud policy service for policies in HKCU. Cloud policies apply to a user on any device accessing files in Office apps with their AAD account. In the Office cloud policy service, you can create a filter on the Area column to display the current security baselines, and within each policy's context pane the recommended baseline setting is set by default. Learn more about the Office cloud policy service.
- ADMX policies may be deployed with Microsoft Intune for both HKCU and HKLM policies. These settings are written to the same place as Group Policy, but managed from the cloud. There are two methods to create and deploy policy configurations: administrative templates or the settings catalog.
- Group Policy may be deployed with on-premises AD DS to deploy Group Policy Objects (GPOs) to users and computers. The downloadable baseline package includes importable GPOs, a script to apply the GPOs to local policy, a script to import the GPOs into Active Directory Group Policy, an updated custom administrative template (SecGuide.ADMX/L) file, all the recommended settings in spreadsheet form, and a Policy Analyzer rules file.

GPOs included in the baseline

Most organizations can implement the baseline's recommended settings without any problems. However, there are a few settings that will cause operational issues for some organizations. We've broken out related groups of such settings into their own GPOs to make it easier for organizations to add or remove these restrictions as a set. The local-policy script (Baseline-LocalInstall.ps1) offers command-line options to control whether these GPOs are installed. The "MSFT Microsoft 365 Apps v2512" GPO set includes "Computer" and "User" GPOs that represent the "core" settings that should be trouble free, plus each of these potentially challenging GPOs:

- "DDE Block - User" is a User Configuration GPO that blocks using DDE to search for existing DDE server processes or to start new ones.
- "Legacy File Block - User" is a User Configuration GPO that prevents Office applications from opening or saving legacy file formats.
- "Legacy JScript Block - Computer" disables legacy JScript execution for websites in the Internet Zone and Restricted Sites Zone.
- "Require Macro Signing - User" is a User Configuration GPO that disables unsigned macros in each of the Office applications.

If you have questions or issues, please let us know via the Security Baseline Community or this post. Related: Learn about Microsoft Security Baselines.
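As a rough way to verify the precedence order described in the deployment options above, the following sketch checks which registry hive a given baseline value is coming from. The registry roots and the value name are assumptions based on commonly documented Office cloud policy service and Office GPO behavior; confirm the exact value names against the baseline's settings spreadsheet before relying on this.

$valueName = "blockinsecureprotocols"   # hypothetical policy value name

$roots = @(
    "HKCU:\Software\Policies\Microsoft\Cloud\Office\16.0",  # Office cloud policy (assumed root)
    "HKCU:\Software\Policies\Microsoft\Office\16.0"         # ADMX / Group Policy (assumed root)
)

# Walk both hives; the first root listed wins under the stated precedence
foreach ($root in $roots) {
    Get-ChildItem $root -Recurse -ErrorAction SilentlyContinue |
        Where-Object { $_.Property -contains $valueName } |
        ForEach-Object {
            "{0} -> {1}" -f $_.PSPath, (Get-ItemProperty -Path $_.PSPath -Name $valueName).$valueName
        }
}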
MFA Issue blocks Global Admin / Data Protection Team disconnects calls

Hi. I have just learned that the Microsoft Authenticator app allows you to create MFA for multiple Global Administrator accounts, but those accounts will not properly transfer when you move to a new smartphone. I have one tenant that has only one Global Admin account, secured using MFA and the Microsoft Authenticator app. The MFA is no longer working. I have been told to work with the Microsoft Data Protection Team by calling them at 800-865-9408. The weird thing is they keep disconnecting the call before the issue gets addressed. It has happened multiple times. Calling them back results in hold times averaging over 2 hours. Does anyone have ideas how I can get my MFA issue solved, perhaps by reaching the proper group at Microsoft in another fashion? Is there some customer advocate resource at Microsoft I can contact?
Migrating from Hybrid to pure Azure AD

We've currently got our domain/environment set up as a hybrid AD. We've got a DC with Azure AD Connect installed and syncing to Azure. The plan is to uninstall Azure AD Connect, demote the DC server, and manually join computers to Azure AD. Will this work? I'm trying to understand if there are any considerations when uninstalling Azure AD Connect or disconnecting the server from Azure. Thanks!
Search and Purge using the Security and Compliance PowerShell cmdlets

Welcome back to the series of blogs covering search and purge in Microsoft Purview eDiscovery! If you are new to this series, please first visit the earlier post in the series: Search and Purge workflow in the new modern eDiscovery experience. Also, please ensure you read in full the Microsoft Learn documentation on this topic, as I will not be covering some of the steps in full (permissions, releasing holds, all limitations): Find and delete email messages in eDiscovery | Microsoft Learn

So, as a reminder, E3/G3 customers must use the Security and Compliance PowerShell cmdlets to execute the purge operation. Searches can continue to be created using the New-ComplianceSearch cmdlet and then run using the Start-ComplianceSearch cmdlet. Once a search has run, the statistics can be reviewed before executing the New-ComplianceSearchAction cmdlet with the Purge switch to remove the items from the targeted locations. However, some organizations may want to initially run the search, review statistics, and export an item report in the new user experience before using the New-ComplianceSearchAction cmdlet to purge the items from the mailbox.

Before starting:

- Ensure you have version 3.9.0 or later of the Exchange Online Management PowerShell module installed (link). If multiple versions of the module are installed alongside version 3.9.0, remove the older versions to avoid potential conflicts.
- When connecting using the Connect-IPPSession cmdlet, ensure you include the EnableSearchOnlySession parameter; otherwise the purge command will not run and may generate an error (link).

1. Create the case. If you will be using the new Content Search case, you can skip this step. However, if you want to create a new case to host the search, you must create the case via PowerShell. This ensures any searches created within the case in the Purview portal will support the PowerShell-based purge command. Use the Connect-IPPSession command to connect to Security and Compliance PowerShell before running the following command to create a new case.

New-ComplianceCase "Test Case"

2. Select the new Purview Content Search case, or the new case you created in step 1, and create a new search.

3. Within your new search, use the Add sources option to search for and select the mailboxes containing the items to be purged, adding them to the data sources of your newly created search. Note: Make sure only Exchange mailboxes are selected, as you can only purge items contained within Exchange mailboxes. If you added both the mailbox and associated sites, you can remove the sites using the three-dot menu next to the data source under User Options. Alternatively, use the Manage sources button to remove the sites associated with the data source.

4. Within the condition builder, define the conditions required to target the items you wish to purge. In this example, I am targeting an email with a specific subject, from a specific sender, on a specific day. (For reference, a sketch of the equivalent search created directly in PowerShell follows.)
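For admins who prefer to stay in PowerShell end to end, here is a minimal sketch of creating the same search directly with New-ComplianceSearch rather than in the portal. The case name, search name, mailbox, and query values are illustrative only; verify your KQL against the search conditions you defined.

# Create the search inside the case made earlier; all values are illustrative.
# The ContentMatchQuery mirrors the example conditions: specific subject,
# specific sender, specific day.
New-ComplianceSearch -Case "Test Case" -Name "My search and purge" `
    -ExchangeLocation "user@contoso.com" `
    -ContentMatchQuery 'subject:"Payroll update" AND from:sender@contoso.com AND sent=2026-01-15'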
5. To understand the estimated number of items that would be returned by the search, run a statistics job first to gain confidence that the query is correct. Do this by selecting Run Query from the search itself, then selecting Statistics and Run Query to trigger the statistics job. Note: you can view the progress of the job via the Process Manager.

6. Once completed, view the statistics to confirm the query looks accurate and is returning the numbers you were expecting.

7. To further verify that the items returned by the search are what you are looking for, run a Sample job to review a sample of the items matching the search query. Once the Sample job is completed, review samples for locations with hits to determine if these are indeed the items you want to purge.

8. To go further and generate a report of the items that match the search (not just statistics and sampling), run an export to generate a report of the items that match the search criteria. Note: It is important to run the export report to review the results that the purge action will remove from the mailbox. This will ensure that we purge only the items of interest. Download the report for the export job via the Process Manager or the Export tab to review the items that were a match. Note: If very few locations have hits, it is recommended to reduce the scope of your search by updating the data sources to include only the locations with hits.

9. Switch back to PowerShell and use the Get-ComplianceSearch cmdlet as below, ensuring the query is as you specified in the Purview portal:

Get-ComplianceSearch -Identity "My search and purge" | fl

10. As the search hasn't been run yet in PowerShell – the Items count is 0 and the JobEndTime is not set – the search needs to be run via PowerShell, as per the example shown below:

Start-ComplianceSearch "My search and purge"

11. Give it a few minutes to complete and use Get-ComplianceSearch to check the status of the search. If the status is not "Completed" and JobEndTime is not set, you may need to give it more time.

12. Check that the search returned the same results once it has finished running:

Get-ComplianceSearch -Identity "My search and purge" | fl name,status,searchtype,items,searchstatistics

CRITICAL: It is important to make sure the Items count matches the number of items returned in the item report generated from the Purview portal. If the number of items returned in PowerShell does not match, do not continue with the purge action.

13. Issue the purge command using the New-ComplianceSearchAction cmdlet:

New-ComplianceSearchAction -SearchName "My search and purge" -Purge -PurgeType HardDelete

14. Once completed, check the status of the purge command to confirm that the items have been deleted:

Get-ComplianceSearchAction "My search and purge_purge" | fl

Now that the purge operation has completed successfully, the item has been removed from the target mailbox and is no longer accessible by the user.
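To reduce the chance of human error during the count validation in step 12, the check can be scripted. Here is a condensed sketch of the validate-then-purge sequence; the expected item count is illustrative and should come from the item report exported from the Purview portal.

$searchName    = "My search and purge"
$expectedItems = 12   # illustrative; take this from the exported item report

# Confirm the search finished and the counts line up before purging
$search = Get-ComplianceSearch -Identity $searchName
if ($search.Status -ne "Completed") {
    throw "Search has not finished running; check again later."
}
if ($search.Items -ne $expectedItems) {
    throw "Item count ($($search.Items)) does not match the report ($expectedItems); do not purge."
}

# Counts match: issue the purge, then confirm its status
New-ComplianceSearchAction -SearchName $searchName -Purge -PurgeType HardDelete
Get-ComplianceSearchAction "$($searchName)_purge" | Format-List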
Microsoft Purview Data Risk Assessments: M365 vs Fabric

Why Data Risk Assessments matter more in the AI era

Generative AI changes the oversharing equation. It can surface data faster, to more people, with less friction, which means existing permission mistakes become more visible, more quickly. Microsoft Purview Data Risk Assessments are designed to identify and help you remediate oversharing risks before (or while) AI experiences like Copilot and analytics copilots accelerate access patterns.

Quick decision guide: when to use which?

Use Microsoft 365 Data Risk Assessments when:
- You're rolling out Microsoft 365 Copilot (or Copilot Chat/agents grounded in SharePoint/OneDrive).
- Your biggest exposure risk is oversharing: SharePoint sites with broad internal access, anonymous links, or unlabeled sensitive files.

Use Fabric Data Risk Assessments when:
- You're rolling out Copilot in Fabric and want visibility into sensitive data exposure in workspaces and Fabric items (Dashboard, Report, DataExploration, DataPipeline, KQLQuerySet, Lakehouse, Notebook, SQLAnalyticsEndpoint, and Warehouse).
- Your governance teams need to reduce risk in analytics estates without waiting for a full data governance program to mature.

Use both when (most enterprises):
- Data is spread across collaboration, analytics, and AI interactions, and you want a unified posture and remediation motion under DSPM objectives.

At a high level: Microsoft 365 Data Risk Assessments focus on oversharing risk in SharePoint and OneDrive content, a primary readiness step for Microsoft 365 Copilot rollouts. Fabric Data Risk Assessments focus on oversharing risk in Microsoft Fabric workspaces and items, especially relevant for Copilot in Fabric and Power BI / Fabric artifacts. Both experiences show up under the newer DSPM (preview) guided objectives (and also remain available via DSPM for AI (classic) paths depending on your tenant rollout).

The differences that matter

NOTE: Assessments are a posture snapshot, not a live feed. Default assessments automatically re-scan every week, while custom assessments need to be manually recreated or duplicated to get a new snapshot. Use them to prioritize remediation and then re-run on a cadence to measure improvement.

Default scope & frequency
- M365: Surfaces the top 100 most active SharePoint sites (and OneDrives) weekly. Surfaces org-wide oversharing issues in M365 content.
- Fabric: The default data risk assessment automatically runs weekly for the top 100 Fabric workspaces based on usage in your organization. Focuses on oversharing in Fabric items.

Supported item types
- M365: SharePoint Online sites (incl. Teams files) and OneDrive documents. Focus on files/folders and their sharing links or permissions.
- Fabric: Fabric content: Dashboards, Power BI Reports, Data Explorations, Data Pipelines, KQL Querysets, Lakehouses, Notebooks, SQL Analytics Endpoints, Warehouses (as of preview).

Oversharing signals
- M365: Unprotected sensitive files (those without a sensitivity label) that have broad or external access (e.g., "everyone" or public link sharing). Also flags sites with many active users (high exposure).
- Fabric: Unprotected sensitive data in Fabric workspaces. For example, reports or datasets with SITs but no sensitivity label, or Fabric items accessible to many users (or shared externally via link, if applicable).

Freshness and re-run behavior
- M365: Custom assessments can be rerun by duplicating the assessment to create a new run; results can expire after a defined window (30-day expiration, with "duplicate" used to re-run).
- Fabric: Fabric custom assessments are also active for 30 days and can be duplicated to continue scanning the scoped list of workspaces. To rerun, and to see results after the 30-day expiration, use the duplicate option to create a new assessment with the same selections.

Setup requirements
- M365: Deeper capabilities like item-level scanning in custom assessments require an Entra app registration with specific Microsoft Graph application permissions, plus admin consent for your tenant.
- Fabric: The Fabric default assessment requires a one-time service principal setup and enabling service principal authentication for Fabric admin APIs in the Fabric admin portal.

Remediation options
- M365: Advanced: item-level scanning identifies potentially overshared files with direct remediation actions (e.g., remove sharing link, apply sensitivity label, notify owner) for SharePoint sites. Site-level actions are also available, such as enabling default sensitivity labels or configuring DLP policies.
- Fabric: Workspace-level Purview controls: e.g., create DLP policies, apply default sensitivity labels to new items, or review user access in Entra. (The actual permission changes in Fabric must be done by admins or owners in Fabric.)

User experience
- M365: Default assessment: site-level insights such as the site's label, how many times the site was accessed, and how many sensitive items were found in the site. Custom assessments: site-level and item-level insights when using item-level scan. Shows counts of sensitive files, share links (anyone links, external), label coverage, last-accessed info, etc., plus possible remediation actions.
- Fabric: Results organized by Fabric workspace. Shows counts of sensitive items found and how many are labeled vs unlabeled, broad access indicators (e.g., large viewer counts or "anyone link" usage for data in that workspace), plus recommended mitigations (DLP/sensitivity label policies).

Licensing
- M365: DSPM/DSPM for AI M365 features typically require Microsoft 365 E5 / E5 Compliance entitlements for relevant user-based scenarios.
- Fabric: Copilot in Fabric governance (starting with Copilot for Power BI) requires E5 licensing, enabling the Fabric tenant option "Allow Microsoft Purview to secure AI interactions" (enabled by default), and pay-as-you-go billing for Purview management of those AI interactions.

Common pitfalls organizations face when securing Copilot in Fabric (and how to avoid them)

Not completing prerequisites for Purview to access the Fabric environment (example: skipping the Entra app setup for Fabric assessment scanning)
- Follow the setup checklist and enable the Purview-Fabric integration. Without it, your Fabric workspaces won't be scanned for oversharing risk. Dedicate time pre-deployment to configure required roles, app registrations, and settings.

Fabric setup stalls due to missing admin ownership
- Fabric assessments require Entra app admin and Fabric admin collaboration (service principal plus tenant settings).

Skipping the "label strategy" and jumping straight to DLP
- DLP is strongest when paired with a clear sensitivity labeling strategy; labels provide durable semantics across M365 and Fabric.

Fragmented labeling strategy (example: Fabric assets have a different labeling schema, or none, separate from M365)
- Align on a unified sensitivity label taxonomy across M365 and Fabric. Re-use labels in Fabric via Purview publishing so that classifications mean the same thing everywhere. This ensures consistent DLP and retention behavior across all data locations.
Too-broad DLP blocks disrupt users (example: blocking every sharing action involving any internal data causes frustration)
- Take a risk-based approach: start with monitor-only DLP rules to gather data, then refine (see the sketch at the end of this post). Focus on high-impact scenarios (for instance, blocking external sharing of highly sensitive data) rather than blanket rules. Use user education (via policy tips) to drive awareness alongside enforcement.

Ignoring the human element (owners and users) (example: IT implements controls but doesn't inform data owners or train users)
- Involve workspace owners and end users early. For each high-risk workspace, engage the owner to verify whether the data is truly sensitive and to help implement least-privilege access. Provide training to users about how Copilot uses data and why labeling and proper sharing are important. This fosters a culture of "shared responsibility" for AI security.

No ongoing plan after initial fixes (example: a one-time scan and label with no follow-up, so new issues emerge unchecked)
- Operationalize the process: treat this as a continuous cycle. Leverage the "Operate" phase – schedule regular re-assessments (e.g., monthly Fabric risk scans) and quarterly reviews of Copilot-related incidents. Consider appointing a Copilot Governance Board, or expanding an existing data governance committee to include AI oversight, so there's accountability for long-term upkeep.

Resources
- Prevent oversharing with data risk assessments (DSPM)
- Prerequisites for Fabric Data risk assessments
- Fabric: enable service principal authentication for admin APIs
- Beyond Visibility: new Purview DSPM experience
- Microsoft Purview licensing guidance
- New Purview pricing options for protecting AI apps and agents (PAYG context)

Acknowledgements

Sincere regards to Sunil Kadam, Principal Squad Leader, and Jenny Li, Product Lead, for their review and feedback.
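Following up on the monitor-only DLP guidance in the pitfalls above, here is a minimal sketch of a test-mode DLP policy in Security and Compliance PowerShell. The policy and rule names and the sensitive information type are illustrative, and only M365 locations are shown; -Mode TestWithNotifications audits matches and surfaces policy tips without blocking anyone.

Connect-IPPSession

# Monitor-only policy across M365 locations (Fabric/Power BI DLP locations
# exist as well but are omitted here; names and scope are illustrative)
New-DlpCompliancePolicy -Name "Copilot readiness - monitor" `
    -ExchangeLocation All -SharePointLocation All -OneDriveLocation All `
    -Mode TestWithNotifications

# Rule that only audits and notifies the owner; refine before enforcing
New-DlpComplianceRule -Name "Flag credit card oversharing" `
    -Policy "Copilot readiness - monitor" `
    -ContentContainsSensitiveInformation @{Name = "Credit Card Number"} `
    -NotifyUser Owner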
Reachability of a domain across multiple tenants

I have a general question about an Entra scenario that we currently need to implement. Our company consists of three companies (companyA.com, companyB.com, companyC.com), each with their own Microsoft tenant. Here, A is the parent company and B and C are subsidiaries. Is it somehow possible, perhaps with cross-tenant synchronization from B, C -> A, for users from the subsidiaries to log in with the parent company's domain name in Entra, Teams & Co., and for Teams invitations to also be sent via an email address of the parent company? So I have [email address removed for privacy reasons] and I would like this user to also be known as [email address removed for privacy reasons] in the Microsoft ecosystem. From a marketing perspective, it is important that all employees log in and are reachable with the same domain. A migration into one tenant is probably not easily possible for legal reasons. Thank you in advance for your assistance. Christian