Microsoft Security Community Blog

Microsoft Purview Data Risk Assessments: M365 vs Fabric

Safeena Begum Lepakshi
Jan 20, 2026

Use cases, best practices, and how to operationalize both

Why Data Risk Assessments matter more in the AI era:

Generative AI changes the oversharing equation. It can surface data faster, to more people, with less friction, which means existing permission mistakes become more visible, more quickly. Microsoft Purview Data Risk Assessments are designed to identify and help you remediate oversharing risks before (or while) AI experiences like Copilot and analytics copilots accelerate access patterns.

Quick Decision Guide: When to Use Which?

Use Microsoft 365 Data Risk Assessments when: 

  • You’re rolling out Microsoft 365 Copilot (or Copilot Chat/agents grounded in SharePoint/OneDrive).  
  • Your biggest exposure risk is overshared SharePoint sites, broad internal access, anonymous links, or unlabeled sensitive files.

Use Fabric Data Risk Assessments when: 

  • You’re rolling out Copilot in Fabric and want visibility into sensitive data exposure in workspaces and Fabric items (Dashboard, Report, DataExploration, DataPipeline, KQLQuerySet, Lakehouse, Notebook, SQLAnalyticsEndpoint, and Warehouse).
  • Your governance teams need to reduce risk in analytics estates without waiting for a full data governance program to mature. 

Use both when (most enterprises): 

Data is spread across collaboration + analytics + AI interactions, and you want a unified posture and remediation motion under DSPM objectives.

At a high level: 

  • Microsoft 365 Data Risk Assessments focus on oversharing risk in SharePoint and OneDrive content, a primary readiness step for Microsoft 365 Copilot rollouts. 
  • Fabric Data Risk Assessments focus on oversharing risk in Microsoft Fabric workspaces and items, especially relevant for Copilot in Fabric and Power BI / Fabric artifacts. 

Both experiences show up under the newer DSPM (preview) guided objectives (and also remain available via DSPM for AI (classic) paths depending on your tenant rollout).

The Differences That Matter

NOTE: Assessments are a posture snapshot, not a live feed. Default assessments automatically re-scan every week, while custom assessments must be duplicated (or recreated) to produce a new snapshot. Use them to prioritize remediation, then re-run on a cadence to measure improvement.
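
Because each assessment is a point-in-time snapshot, it helps to record the headline numbers from every run so you can show the trend over time. Below is a minimal sketch, assuming you keep each run's summary counts in a simple CSV that you maintain yourself; the file name and column names are hypothetical, not a Purview export format.

```python
import csv
from pathlib import Path

# Hypothetical log you maintain yourself after each assessment run;
# Purview does not produce this file. Columns (also hypothetical):
# run_date,unlabeled_sensitive_items,anyone_links
LOG = Path("assessment_runs.csv")


def load_runs(path: Path) -> list[dict]:
    """Read the per-run summary counts, oldest run first."""
    with path.open(newline="") as f:
        return [
            {
                "run_date": row["run_date"],
                "unlabeled_sensitive_items": int(row["unlabeled_sensitive_items"]),
                "anyone_links": int(row["anyone_links"]),
            }
            for row in csv.DictReader(f)
        ]


def report_trend(runs: list[dict]) -> None:
    """Print the change in key oversharing counts between consecutive runs."""
    for prev, curr in zip(runs, runs[1:]):
        d_items = curr["unlabeled_sensitive_items"] - prev["unlabeled_sensitive_items"]
        d_links = curr["anyone_links"] - prev["anyone_links"]
        print(
            f"{prev['run_date']} -> {curr['run_date']}: "
            f"unlabeled sensitive items {d_items:+d}, 'anyone' links {d_links:+d}"
        )


if __name__ == "__main__":
    report_trend(load_runs(LOG))
```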


Each area below contrasts Microsoft 365 Data Risk Assessments (M365) with Fabric Data Risk Assessments (Fabric).

Default scope & frequency
  • M365: Surfaces the top 100 most active SharePoint sites (and OneDrives) weekly, highlighting org-wide oversharing issues in Microsoft 365 content.
  • Fabric: The default assessment runs weekly for the top 100 Fabric workspaces by usage in your organization, focusing on oversharing in Fabric items.

Supported item types
  • M365: SharePoint Online sites (including Teams files) and OneDrive documents; the focus is on files/folders and their sharing links or permissions.
  • Fabric: Fabric content: Dashboards, Power BI Reports, Data Explorations, Data Pipelines, KQL Querysets, Lakehouses, Notebooks, SQL Analytics Endpoints, and Warehouses (as of the preview).

Oversharing signals
  • M365: Sensitive files with no sensitivity label that have broad or external access (for example, “everyone” permissions or public link sharing). Also flags sites with many active users (high exposure).
  • Fabric: Unprotected sensitive data in Fabric workspaces; for example, reports or datasets that match sensitive information types (SITs) but carry no sensitivity label, or Fabric items accessible to many users (or shared externally via link, where applicable).

Freshness and re-run behavior
  • M365: Custom assessment results expire after 30 days; to re-run, duplicate the assessment to create a new run.
  • Fabric: Custom assessments are likewise active for 30 days and can be duplicated to continue scanning the scoped list of workspaces. To re-run, or to see results after the 30-day expiration, use the duplicate option to create a new assessment with the same selections.

Setup Requirements
  • M365: Deeper capabilities, such as item-level scanning in custom assessments, require an Entra app registration with specific Microsoft Graph application permissions, plus admin consent for your tenant (see the verification sketch after this comparison).
  • Fabric: The default assessment requires a one-time service principal setup and enabling service principal authentication for Fabric admin APIs in the Fabric admin portal.

Remediation options
  • M365: Item-level scanning identifies potentially overshared files and offers direct remediation actions (for example, remove a sharing link, apply a sensitivity label, notify the owner). Site-level actions are also available, such as enabling default sensitivity labels or configuring DLP policies.
  • Fabric: Workspace-level Purview controls: for example, create DLP policies, apply default sensitivity labels to new items, or review user access in Entra. (The actual permission changes in Fabric must be made by admins or owners in Fabric.)

User experience
  • M365: The default assessment shows site-level insights, such as each site's label, how many times the site was accessed, and how many sensitive items were found in it. Custom assessments add item-level insights when item-level scanning is used: counts of sensitive files, sharing links (“anyone” links, external shares), label coverage, last-accessed information, and available remediation actions.
  • Fabric: Results are organized by Fabric workspace, showing counts of sensitive items found, how many are labeled vs. unlabeled, broad-access indicators (for example, large viewer counts or “anyone” link usage for data in that workspace), plus recommended mitigations (DLP and sensitivity label policies).

Licensing
  • M365: DSPM / DSPM for AI features for Microsoft 365 typically require Microsoft 365 E5 or E5 Compliance entitlements for the relevant user-based scenarios.
  • Fabric: Copilot in Fabric governance (starting with Copilot for Power BI) requires E5 licensing, the Fabric tenant setting “Allow Microsoft Purview to secure AI interactions” (enabled by default), and pay-as-you-go billing for Purview management of those AI interactions.
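
To sanity-check the Entra app registration used for M365 item-level scanning, you can confirm that an app-only Microsoft Graph token can be acquired and spot-check a site's default document library for “anyone” sharing links. Below is a minimal sketch using MSAL and the Microsoft Graph REST API; it assumes an app registration with admin-consented application permissions such as Sites.Read.All, and the tenant, client, and site values are placeholders. This is a complementary spot check, not how the Purview assessment itself runs.

```python
import msal
import requests

# Placeholders for your Entra tenant and the app registration used for scanning.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<client-secret>"  # prefer a certificate or managed identity in production

GRAPH = "https://graph.microsoft.com/v1.0"


def get_graph_token() -> str:
    """Acquire an app-only Microsoft Graph token via the client credentials flow."""
    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
    if "access_token" not in result:
        raise RuntimeError(f"Token request failed: {result.get('error_description')}")
    return result["access_token"]


def items_with_anyone_links(site_id: str, token: str) -> list[str]:
    """Return names of top-level items in the site's default document library
    that carry an anonymous ('anyone') sharing link. First page of items only."""
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.get(f"{GRAPH}/sites/{site_id}/drive/root/children", headers=headers)
    resp.raise_for_status()
    flagged = []
    for item in resp.json().get("value", []):
        perms = requests.get(
            f"{GRAPH}/sites/{site_id}/drive/items/{item['id']}/permissions",
            headers=headers,
        )
        perms.raise_for_status()
        if any(p.get("link", {}).get("scope") == "anonymous" for p in perms.json().get("value", [])):
            flagged.append(item.get("name", item["id"]))
    return flagged


if __name__ == "__main__":
    token = get_graph_token()
    # The site ID is a placeholder; resolve it first, e.g. via GET /sites/{hostname}:/{site-path}.
    print(items_with_anyone_links("<site-id>", token))
```

If the token call succeeds but the permissions requests return 403, the Graph application permissions or admin consent are likely incomplete.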

Common pitfalls organizations face when securing Copilot in Fabric (and how to avoid them) 

Pitfall: Not completing prerequisites for Purview to access the Fabric environment (for example, skipping the Entra app setup for Fabric assessment scanning).
How to avoid or mitigate: Follow the setup checklist and enable the Purview–Fabric integration; without it, your Fabric workspaces won’t be scanned for oversharing risk. Dedicate time before deployment to configure the required roles, app registrations, and settings. (A verification sketch appears after this list.)

Pitfall: Fabric setup stalls due to missing admin ownership.
How to avoid or mitigate: Plan for collaboration between an Entra app admin and a Fabric admin up front, since the setup needs both a service principal and Fabric tenant settings.

Pitfall: Skipping the “label strategy” and jumping straight to DLP.
How to avoid or mitigate: Pair DLP with a clear sensitivity labeling strategy; labels provide durable semantics across M365 and Fabric, and DLP is strongest when built on them.

Pitfall: Fragmented labeling strategy (for example, Fabric assets follow a different labeling schema than M365, or none at all).
How to avoid or mitigate: Align on a unified sensitivity label taxonomy across M365 and Fabric. Reuse labels in Fabric via Purview publishing so that classifications mean the same thing everywhere; this keeps DLP and retention behavior consistent across all data locations.

Pitfall: Overly broad DLP blocks disrupt users (for example, blocking every sharing action involving any internal data causes frustration).
How to avoid or mitigate: Take a risk-based approach: start with monitor-only DLP rules to gather data, then refine. Focus on high-impact scenarios (for instance, blocking external sharing of highly sensitive data) rather than blanket rules, and use policy tips to educate users alongside enforcement.

Pitfall: Ignoring the human element, both owners and users (for example, IT implements controls but doesn’t inform data owners or train users).
How to avoid or mitigate: Involve workspace owners and end users early. For each high-risk workspace, engage the owner to verify whether the data is truly sensitive and to help implement least-privilege access. Train users on how Copilot uses data and why labeling and proper sharing matter; this fosters a culture of shared responsibility for AI security.

Pitfall: No ongoing plan after the initial fixes (for example, a one-time scan-and-label pass with no follow-up, so new issues emerge unchecked).
How to avoid or mitigate: Operationalize the process and treat it as a continuous cycle. In the “Operate” phase, schedule regular re-assessments (for example, monthly Fabric risk scans) and quarterly reviews of Copilot-related incidents. Consider appointing a Copilot governance board, or expanding an existing data governance committee to include AI oversight, so there is accountability for long-term upkeep.
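
As referenced in the first pitfall above, one way to confirm the Fabric prerequisites are in place (service principal created, consent granted, and service principal access to Fabric admin APIs enabled in the Fabric admin portal) is to call a Fabric admin endpoint with an app-only token. This is a minimal sketch, assuming the Fabric admin List Workspaces endpoint and the placeholder credentials shown; a 401/403 response usually means the service principal or tenant settings are not finished.

```python
import msal
import requests

# Placeholders for your Entra tenant and the service principal set up for Fabric.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<client-secret>"

FABRIC_API = "https://api.fabric.microsoft.com/v1"


def get_fabric_token() -> str:
    """Acquire an app-only token scoped to the Fabric REST API."""
    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    result = app.acquire_token_for_client(scopes=["https://api.fabric.microsoft.com/.default"])
    if "access_token" not in result:
        raise RuntimeError(f"Token request failed: {result.get('error_description')}")
    return result["access_token"]


def list_admin_workspaces(token: str) -> list[dict]:
    """Call the Fabric admin List Workspaces endpoint (first page only).
    A 401/403 here usually means the service principal or the Fabric tenant
    setting for admin API access is not configured yet."""
    resp = requests.get(
        f"{FABRIC_API}/admin/workspaces",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json().get("workspaces", [])


if __name__ == "__main__":
    workspaces = list_admin_workspaces(get_fabric_token())
    print(f"Fabric admin API reachable; {len(workspaces)} workspaces visible to the service principal.")
```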

Acknowledgements

Sincere thanks to Sunil Kadam, Principal Squad Leader, and Jenny Li, Product Lead, for their review and feedback.

Updated Jan 19, 2026
Version 1.0