From Oversharing to Enforcement: A Practical Guide to AI Data Security with Microsoft Purview
Why AI Changed the Data Security Problem

AI does not create entirely new categories of risk—it supercharges existing ones. Traditional data leakage stems from ordinary behavior: sharing a document too broadly, sending an email to the wrong person, copying regulated data to an uncontrolled device. Generative AI amplifies all of these because of the power and speed with which it can proactively surface content that may be obsolete, over-permissioned, or ungoverned. DSPM exists to help with exactly this challenge: it continuously scans your environment to identify sensitive data, assess risk, and recommend actions to reduce exposure.

Oversharing at Scale
Before AI, an overshared SharePoint file might sit unnoticed. Now, Copilot can summarize it in response to a casual prompt, distributing its contents far beyond the original audience.

Prompt Leakage
Users can inadvertently expose sensitive information—financial account numbers, health records, project code names—simply by typing them into a Copilot prompt. Because AI interactions feel conversational, users tend to drop their guard.

Shadow AI
Beyond sanctioned tools, employees experiment with unapproved AI services.

Autonomous Agents
Autonomous agents expand the data security threat surface by acting independently on sensitive information across systems and boundaries. Their ability to access and share data without direct user interaction increases the risk of oversharing, exfiltration, and unauthorized access, while also introducing complex behavior patterns that are harder to monitor, govern, and control using traditional security models.

What Microsoft Purview Now Brings Together

Data Security Posture Management (DSPM)
DSPM consolidates insights from Data Loss Prevention (DLP), Insider Risk Management, Information Protection, and Data Security Investigations into a single view for monitoring data risks, policy coverage, and posture trends.
Now also in Public Preview, DSPM extends coverage to third-party SaaS and IaaS platforms such as Google Cloud Platform, Snowflake, and Databricks, and integrates with partner solutions including Cyera, BigID, and OneTrust for comprehensive risk insights.

A central innovation in this version is data security objectives—prominent, selectable cards that each represent a specific security goal. Selecting an objective guides administrators through an end-to-end workflow that groups together the most relevant Purview solutions—information protection, DLP, Insider Risk Management, and eDiscovery—so teams can focus on achieving a specific data security outcome rather than navigating separate solutions. Each Outcome card displays key metrics such as the percentage of data covered by policies, the number of risky sharing incidents, and improvements over time. Within each outcome, DSPM surfaces suggested prioritized actions—applying sensitivity labels, configuring DLP policies, or investigating alerts—all tailored to the organization's data. Administrators can take action directly from the workflow, including remediating oversharing, configuring one-click policies, or launching investigations into suspicious activity.

DLP Integration for AI Interactions
DLP is one of the core solutions integrated into DSPM's unified approach. The Activity Explorer's AI activities tab captures events where DLP rules were matched during AI interactions—including prompts, responses, and browsing to generative AI sites. DSPM can automate remediation steps such as removing public sharing links or applying data loss prevention policies to help prevent incidents before they happen.

AI Observability and Agent Governance
Dedicated dashboards and metrics monitor risks associated with AI apps and agents. AI observability enables tracking of agent-specific activities—oversharing, exfiltration, and unusual access patterns—across both Microsoft and third-party environments.
Enhanced reporting provides advanced filtering and customizable views, supporting granular analysis of sensitive data usage, DLP activity, and posture trends. Audit logs and activity explorer features help track interactions with AI apps and agents, supporting compliance investigations and incident response.

AI-Powered Security Operations
DSPM not only secures and governs AI apps and agents but also uses Microsoft Security Copilot and AI agents to help secure and govern data. AI analyzes access patterns, sharing behaviors, and policy gaps to surface actionable risks and can detect unusual activity such as excessive sharing or suspicious downloads. Under administrator guidance, AI agents can take direct action on detected risks—removing public sharing links, applying DLP policies, or revoking permissions. These actions are always audited. To streamline investigations, AI-driven triage agents review alerts from DLP and Insider Risk Management solutions, filtering out noise and highlighting the most critical threats.

Three Practical Starting Points
For many organizations adopting generative AI, the biggest hurdle isn't recognizing new risks—it's figuring out where to begin. A "boil the ocean" approach can stall progress, while tackling a few targeted areas delivers quicker wins. The best early moves are those that reduce exposure quickly, improve visibility, and build a foundation for stronger governance over time.

Starting Point 1: Enable prompt-level protection for Microsoft 365 Copilot
An effective first step is to put guardrails on the prompts users enter into AI. Microsoft Purview DLP allows administrators to restrict Microsoft 365 Copilot and Copilot Chat from processing prompts that contain sensitive information. In practice, users are often more comfortable pasting data into a chat prompt than attaching it to an email, which means a well-meaning employee could inadvertently feed a confidential file or personal data into Copilot.
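Under the hood, prompt-level DLP amounts to detecting sensitive information types inside free text before the prompt is processed. The sketch below is a rough illustration of the kind of check involved, not Purview's implementation: a credit card detector built on the Luhn checksum, the same class of checksum validation that payment card detection typically relies on.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum: double every second digit from the right."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_credit_card(prompt: str) -> bool:
    """Flag prompts containing a plausible (Luhn-valid) card number."""
    # Look for 13-16 digit runs, allowing spaces or dashes as separators.
    for match in re.finditer(r"(?:\d[ -]?){13,16}", prompt):
        digits = re.sub(r"\D", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            return True
    return False

print(contains_credit_card("Summarize this: card 4111 1111 1111 1111, exp 12/26"))  # True
```

A production sensitive-information type combines patterns, checksums, and surrounding contextual evidence; the point here is only that the decision happens on the prompt text itself, before any content is processed or shared.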
Enabling prompt-level DLP creates an immediate safety net: if a user's prompt includes, say, a credit card number or a customer's national ID, Copilot will detect it and refuse to process or share that content. DSPM provides suggested prioritized actions—including configuring DLP policies—that can be activated directly from the workflow, and recommended policies can start in simulation mode. Simulation mode lets you see what would have been blocked or flagged, without actually interrupting users, so you can fine-tune the policy and prepare your helpdesk for any questions. Once you're comfortable with the results, switching to enforcement mode will actively block disallowed prompts and log those events for review. By activating this one control, you've significantly reduced the most immediate oversharing risk—the "oops, I pasted the wrong data" scenario—within hours of starting your AI governance program.

Tradeoff: Simulation mode provides safety but delays enforcement. For organizations with imminent regulatory exposure, consider shortening the simulation window and monitoring alert volumes closely.

Starting Point 2: Gain visibility into shadow AI usage before broad enforcement
The second step is to illuminate what's happening in the shadows. Before rushing into blocking every unsanctioned AI tool, it's crucial to understand how and where AI is being used across the organization. In most enterprises, there's an official layer of AI usage and an often larger, unofficial layer—employees experimenting with free online AI chatbots, writing assistants, or code generators. DSPM provides this visibility. The Discover > Apps and agents dashboard shows AI apps used across the organization, including the top 20 most recently used agents, with details about sensitive data they accessed and how they are protected by Purview policies.
The AI observability page provides a broader inventory of all AI apps and agents with activity in the last 30 days, including how many are high risk and the total with sensitive interactions. The Activity Explorer's AI activities tab shows when users browsed to generative AI sites, the prompts and responses involved, whether sensitive information was present, and whether DLP rules were matched. Armed with this insight, you can make informed decisions. If you discover that the majority of "AI consumption" comes from just two external apps, you might focus your immediate controls on those two. Conversely, if the data shows most unsanctioned usage is low-risk, you might decide to monitor rather than block it. The key is visibility first, enforcement second—letting real data guide where to tighten controls versus where to offer secure alternatives.

Tradeoff: Visibility without timely follow-through can create a false sense of security. Set a defined window (e.g., 30 days) after which findings must translate into at least one concrete policy action.

Starting Point 3: Operationalize DSPM objectives for Copilot
A stronger third starting point is to use DSPM as your operational guide, not just a dashboard of charts. DSPM introduces data security objectives—each one a focused end-to-end workflow for a specific outcome. Rather than configuring individual features in isolation, you select an objective and let Purview navigate you through achieving that outcome with the relevant tools. For generative AI, the key objective to leverage early is "Prevent data exposure in Microsoft 365 Copilot and Microsoft Copilot interactions". By selecting this objective in the Purview portal, you're effectively telling Purview, "help me implement whatever is needed to make Copilot safe with our data."
The DSPM interface then groups together the critical pieces: it may prompt you to enable a DLP policy, suggest applying or refining sensitivity labels on content, or surface an Insider Risk Management policy template for detecting AI-related risky behavior. It also surfaces metrics so you can track progress—for example, the percentage of data covered by policies, or the number of risky sharing incidents that have been remediated. Using DSPM objectives keeps your team aligned on a clear goal from day one. It shifts the conversation from "what knobs do we turn on?" to "how do we achieve this outcome?" You follow a guided plan curated by the platform's intelligence rather than navigating five different admin pages and hoping it adds up to protection.

Tradeoff: Objectives streamline the path but can obscure the underlying complexity. Teams should periodically step outside the guided workflow to review the full policy landscape and ensure no coverage gaps exist between objectives.

From Visibility to Remediation: Turning Insights into Action

Automated Remediation at Scale
DSPM can automate remediation steps such as removing public sharing links or applying data loss prevention policies to prevent incidents before they happen. Under administrator guidance, AI agents within DSPM can take direct action on detected risks—removing sharing links, applying DLP policies, or revoking permissions—and these actions are always audited. This moves the operating model from manual, one-at-a-time fixes to systematic, policy-driven remediation.

Closing the Loop: From Risk to Standing Policy
DSPM's data security objectives surface suggested prioritized actions such as applying sensitivity labels, configuring DLP policies, or investigating alerts, all tailored to the organization's data. Reporting and analytics are organized by outcome, making it easier to identify and report improvements, compliance, and risk reduction. This turns recurring findings into standing preventive controls.
Instead of re-running assessments and manually fixing the same patterns, administrators create durable policies that enforce the desired state going forward.

Alert-Driven Investigation and Tuning
Audit logs and activity explorer features help track interactions with AI apps and agents, supporting compliance investigations and incident response. Integrated investigation and forensics tools support rapid incident response and root cause analysis for data security events. Impact prediction visuals and progress tracking for remediation steps are surfaced throughout DSPM, enabling administrators to quantify the effect of their actions and adjust course. The closed-loop process is: Discover (DSPM scans and risk assessments) → Remediate (automated actions and bulk fixes) → Prevent (create or tighten DLP and auto-labeling policies) → Monitor (alert review, investigation, and policy tuning).

What "Good" Looks Like in a Regulated or Risk-Aware Organization
A mature AI governance posture is defined by measurable outcomes and sustainable operating rhythms—not feature count:

Clear, communicated AI usage policies. Users know what is and is not acceptable in AI interactions because the tools reinforce the rules. DLP policy tips delivered at the moment of a violation are a primary training mechanism—they remind users in context why their prompt was blocked and what to do instead.

Measured enablement over blanket bans. Leading organizations allow Copilot with appropriate controls and restrict only truly unacceptable scenarios. Policies deployed initially in simulation mode provide data to calibrate enforcement thresholds before blocking. This avoids productivity backlash while preserving security posture.

High data hygiene and classification rates. Purview's AI protections depend heavily on sensitivity labels. If everything is unlabeled or "General," label-based controls have nothing to act on.
Mature organizations invest in auto-labeling and mandatory labeling to close this gap before deploying AI at scale. DSPM's data security objectives include suggested actions such as applying sensitivity labels, directly tying classification to governance outcomes.

Quantifiable risk reduction. Security leadership can produce metrics from Purview that show trend lines: DSPM Outcome cards display the percentage of data covered by policies, the number of risky sharing incidents, and improvements over time. These figures feed directly into compliance reporting and audit evidence. Key metrics are tracked over time, supporting continuous improvement of the organization's data security posture.

Cross-functional governance. AI governance is not a solo IT Security effort. Stakeholders from security, compliance, legal, and business units review AI usage patterns, discuss policy tuning, and evaluate new Purview capabilities as they release. Role-based access controls within DSPM provide granular access to features and AI content for delegated administration and compliance, enabling this cross-functional model without overexposing sensitive data to every participant.

Tradeoff: Strict enforcement can frustrate power users and slow AI adoption. Organizations should explicitly define escalation paths—if a legitimate use case is blocked by DLP, there must be a fast process to review and adjust, rather than a permanent "no."

A Phased Adoption Model

Phase 1 — Quick Wins (weeks). Focus: visibility and baseline safeguards. Key activities: enable prompt-level DLP for Copilot in simulation mode; run the first DSPM data risk assessment for oversharing; enable shadow AI discovery via DSPM's Apps and agents dashboard and AI observability page; start from the DSPM objective "Prevent data exposure in Microsoft 365 Copilot and Microsoft Copilot interactions."

Phase 2 — Broad Enforcement (months). Focus: acting on findings. Key activities: switch DLP policies from simulation to enforcement; use automated remediation actions (removing sharing links, applying DLP policies, revoking permissions); expand sensitive information type definitions and add custom types; roll out user communications explaining new controls and escalation paths.

Phase 3 — Mature Governance (ongoing). Focus: continuous improvement and AI-powered operations. Key activities: leverage AI-driven triage agents to filter alert noise and highlight critical threats; conduct periodic DSPM posture reviews using Outcome card metrics; tune policies based on impact prediction visuals and progress tracking; extend protections to new AI apps and agents as they are adopted—DSPM's AI observability tracks agent-specific activities across Microsoft and third-party environments; formalize a cross-functional AI governance cadence.

Phase 1 should take weeks, not months—the objective is to establish a baseline before risk accumulates. Phase 2 is where enforcement generates measurable risk reduction. Phase 3 is ongoing: as Microsoft continues extending Purview to additional AI apps and agent types, the governance framework must evolve in tandem. The DSPM preview's integration with third-party SaaS and IaaS platforms (Google Cloud Platform, Snowflake, Databricks) and partner solutions (Cyera, BigID, OneTrust) means the governance perimeter can expand alongside the organization's AI footprint.

Conclusion
AI adoption and data protection are not opposing forces. Microsoft Purview now provides the visibility, policy controls, and remediation workflows to move from discovering AI risk to actively governing Copilot, third-party AI apps, and agents at scale. DSPM surfaces oversharing and AI usage patterns through unified dashboards, data risk assessments, and AI observability. DLP blocks sensitive data in prompts and restricts AI access to labeled content. Insider Risk Management detects adversarial AI behavior.
AI-driven triage and remediation agents close the gap between identifying a problem and fixing it—with every automated action audited. The path forward starts with practical actions: enable prompt-level DLP, illuminate shadow AI usage, and operationalize DSPM's "Prevent data exposure in Microsoft 365 Copilot and Microsoft Copilot interactions" objective. From there, enforce what you find, measure the results using DSPM's outcome-based metrics, and progressively mature your governance posture. Organizations that operationalize this loop will be in a strong position: able to say, "We use AI to work smarter—and we have the safeguards in place to do it safely."

Governing Entra‑Registered AI Apps with Microsoft Purview
As enterprise adoption of AI agents and intelligent applications continues to accelerate, organizations are rapidly moving beyond simple productivity tools toward autonomous, Entra‑registered AI workloads that can access, reason over, and act on enterprise data. While these capabilities unlock significant business value, they also introduce new governance, security, and compliance risks—particularly around data oversharing, identity trust boundaries, and auditability. In this context, it becomes imperative to govern AI interactions at the data layer, not just the identity layer. This is where Microsoft Purview, working alongside Microsoft Entra ID, provides a critical foundation for securing AI adoption—ensuring that AI agents can operate safely, compliantly, and transparently without undermining existing data protection controls. Let's look at the role of each solution.

Entra ID vs Microsoft Purview
A very common misconception is that Purview "manages AI apps." In reality, Purview and Entra serve distinct but complementary roles:

Microsoft Entra ID:
Registers the AI app
Controls authentication and authorization
Enforces Conditional Access and identity governance

Microsoft Purview:
Governs data interactions once access is granted
Applies classification, sensitivity labels, DLP, auditing, and compliance controls
Monitors and mitigates oversharing risks in AI prompts and responses

Microsoft formally documents this split in its guidance for Entra‑registered AI apps, where Purview operates as the data governance and compliance layer on top of Entra‑secured identities. Let's look at how Purview governs Entra‑registered AI apps. Below is the high-level reference architecture, which can be extended to low-level details.

1. Visibility and inventory of AI usage
Once an AI app is registered in Entra ID and integrated with Microsoft Purview APIs or SDK, Purview can surface AI interaction telemetry through Data Security Posture Management (DSPM).
DSPM for AI provides:
Visibility into which AI apps are being used
Which users are invoking them
What data locations and labels are touched during interactions
Early indicators of oversharing risk

This observability layer becomes increasingly important as organizations adopt Copilot extensions, custom agents, and third‑party AI apps.

2. Classification and sensitivity awareness
Purview does not rely on the AI app to "understand" sensitivity. Instead, data remains classified and labeled at rest, AI interactions inherit that metadata at runtime, and prompts and responses are evaluated against existing sensitivity labels. If an AI app accesses content labeled Confidential or Highly Confidential, that classification travels with the interaction and becomes enforceable through policy. This ensures AI does not silently bypass years of data classification work already in place.

3. DLP for AI prompts and responses
One of the most powerful yet misunderstood Purview capabilities is AI‑aware DLP. Using DSPM for AI and standard Purview DLP:
Prompts sent to AI apps are inspected
Responses generated by AI can be validated
Sensitive data types (PII, PCI, credentials, etc.) can be blocked, warned, or audited
Policies are enforced consistently across M365 and AI workloads

Microsoft specifically highlights this capability to prevent sensitive data from leaving trust boundaries via AI interactions.

4. Auditing and investigation
Every AI interaction governed by Purview can be recorded in the Unified Audit Log, enabling:
Forensic investigation
Compliance validation
Insider risk analysis
eDiscovery for legal or regulatory needs

This becomes critical when AI output influences business decisions and regulatory scrutiny increases. Audit records treat AI interactions as first‑class compliance events, not opaque system actions.

5. Oversharing risk management
Rather than waiting for a breach, Purview proactively highlights oversharing patterns using DSPM:
AI repeatedly accessing broadly shared SharePoint sites
High volumes of sensitive data referenced in prompts
Excessive AI access to business‑critical repositories

These insights feed remediation workflows, enabling administrators to tighten permissions, re‑scope access, or restrict AI visibility into specific datasets.

In a nutshell: with agentic AI accelerating rapidly, Microsoft has made it clear that organizations must move governance closer to data, not embed it into individual AI apps. Purview provides a scalable way to enforce governance without rewriting every AI workload, while Entra continues to enforce who is allowed to act in the first place. This journey pushes every organization to adopt Zero Trust at scale, as it is no longer limited to users, devices, and applications; it must now extend to AI apps and autonomous agents that act on behalf of the business. If you find the article insightful and you appreciate my time, please do not forget to like it 🙂

Integrate MS Purview with ServiceNow for Data Governance
Hi team, We are planning to leverage Microsoft Purview for core Data Governance (DG) capabilities and build the remaining DG functions on ServiceNow. We have two key questions as we design the target‑state architecture:
1. What is the recommended split of DG capabilities between Microsoft Purview and ServiceNow?
2. How should data be shared and synchronized between Purview and ServiceNow to keep governance processes aligned and up to date?
Thanks!

Purview Lightning Talks | Presented by the Microsoft Security Community
Purview Lightning Talks
Join the Microsoft Security Community for Purview Lightning Talks: quick technical sessions delivered by the community, for the community. You'll pick up practical Purview gems: must-know Compliance Manager tips, smart data security tricks, real-world scenarios, and actionable governance recommendations, all in one energizing event. Hear directly from Purview customers, partners, and community members, and walk away with ideas you can put to work right away. Register now; full agenda coming soon!

When: Thursday, April 30, 2026 | 8:00AM - 9:30AM (PT, Redmond Time)
Where: Join here: https://aka.ms/JOIN-WEBINAR-23-MICROSOFT-PURVIEW

To stay informed about future webinars and other events, join our Security Community at https://aka.ms/SecurityCommunity. We hope you will join us! This event may be recorded and shared publicly with others, including Microsoft's global customers, partners, employees, and service providers. The recording may include your name and any questions you submit to Q&A.

Fine print: This event is certified fluff-free. There will be no sales pitches, marketing, or recruitment during this compilation of lightning-fast sessions proudly presented by members of the Microsoft Security Community.

AI‑Powered Troubleshooting for Microsoft Purview Data Lifecycle Management
Announcing the DLM Diagnostics MCP Server!

Microsoft Purview Data Lifecycle Management (DLM) policies are critical for meeting compliance and governance requirements across Microsoft 365 workloads. However, when something goes wrong – such as retention policies not applying, archive mailboxes not expanding, or inactive mailboxes not getting purged – diagnosing the issue can be challenging and time‑consuming. To simplify and accelerate this process, we are excited to announce the open‑source release of the DLM Diagnostics Model Context Protocol (MCP) Server, an AI‑powered diagnostic server that allows AI assistants to safely investigate Microsoft Purview DLM issues using read‑only PowerShell diagnostics.

GitHub repository: https://github.com/microsoft/purview-dlm-mcp

The troubleshooting challenge
When you notice issues such as:
"Retention policy shows Success, but content isn't being deleted"
"Archiving is enabled, but items never move to the archive mailbox"

The investigation typically involves:
Connecting to Exchange Online and Security & Compliance PowerShell sessions
Running 5–15 diagnostic cmdlets in a specific order
Interpreting command output using multiple troubleshooting reference guides (TSGs)
Correlating policy distribution, holds, archive configuration, and workload behavior
Producing a root‑cause summary and recommended remediation steps

This workflow requires deep familiarity with DLM internals and is largely manual.

Introducing the DLM Diagnostics MCP Server
The DLM Diagnostics MCP Server automates this diagnostic workflow by allowing AI assistants – such as GitHub Copilot, Claude Desktop, and other MCP‑compatible clients – to investigate DLM issues step by step. An administrator simply describes the symptom in natural language.
The AI assistant then:
Executes read‑only PowerShell diagnostics
Evaluates results against known troubleshooting patterns
Identifies likely root causes
Presents recommended remediation steps (never executed automatically)
Produces a complete audit trail of the investigation

All diagnostics are performed under a strict security model to ensure safety and auditability.

What is the Model Context Protocol (MCP)?
The Model Context Protocol (MCP) is an open standard that enables AI assistants to interact with external tools and data sources in a secure and structured way. You can think of MCP as a "USB port for AI":
Any MCP‑compatible client can connect to an MCP server
The server exposes well‑defined tools
The AI can use those tools safely and deterministically

The DLM Diagnostics MCP Server exposes Purview DLM diagnostics as MCP tools, enabling AI assistants to run PowerShell diagnostics, retrieve execution logs, and surface Microsoft Learn documentation. More information: https://modelcontextprotocol.io

Diagnostic tools exposed by the server
The server exposes four MCP tools.

1. Run read‑only PowerShell diagnostics
This tool executes PowerShell commands against Exchange Online and Security & Compliance sessions using a strict allow list. Only read‑only cmdlets are permitted:
Allowed verbs: Get-*, Test-*, Export-*
Blocked verbs: Set-*, New-*, Remove-*, Enable-*, Invoke-*, and others

Every command is validated before execution.
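The shape of that verb-based validation can be sketched in a few lines. This is an illustration rather than the server's actual code; in particular, the inclusion of output-shaping verbs such as Format and Select in the allow list is an assumption.

```python
import re

# Verbs named in the server's allow list, plus output-shaping verbs
# (Format, Select) assumed safe for this illustration.
ALLOWED_VERBS = {"Get", "Test", "Export", "Format", "Select"}

def is_command_allowed(command: str) -> bool:
    """Fail closed: allow a command only if every Verb-Noun cmdlet
    it contains uses an allow-listed verb."""
    # PowerShell cmdlets follow the Verb-Noun convention, e.g. Get-Mailbox.
    verbs = re.findall(r"\b([A-Z][a-z]+)-[A-Za-z]+", command)
    if not verbs:
        return False  # nothing recognizable to validate
    return all(v in ALLOWED_VERBS for v in verbs)

print(is_command_allowed("Get-Mailbox -Identity john.doe@contoso.com | Format-List ArchiveStatus"))  # True
print(is_command_allowed("Set-Mailbox -Identity john.doe@contoso.com -RetentionPolicy P1"))          # False
```

Note the fail-closed default: anything that does not parse as a recognizable allow-listed cmdlet is rejected, which is the safer posture for a tool operating in compliance environments.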
Example: Archive mailbox not working

Admin: "Archiving is not working for john.doe@contoso.com"

The AI follows the archive troubleshooting guide:

```powershell
# Step 1 – Check archive mailbox status
Get-Mailbox -Identity john.doe@contoso.com |
    Format-List ArchiveStatus, ArchiveState

# Step 2 – Check archive mailbox size
Get-MailboxStatistics -Identity john.doe@contoso.com -Archive |
    Format-List TotalItemSize, ItemCount

# Step 3 – Check auto-expanding archive
Get-Mailbox -Identity john.doe@contoso.com |
    Format-List AutoExpandingArchiveEnabled
```

Finding: The archive mailbox is not enabled.

Recommended action (not executed automatically):

```powershell
Enable-Mailbox <user mailbox> -Archive
```

All remediation steps are presented as text only for administrator review.

2. Retrieve the execution log
Every diagnostic session is fully logged, including:
Command executed
Timestamp
Duration
Status
Output

Admins can retrieve the complete investigation as a Markdown‑formatted audit trail, making it easy to attach to incident records or compliance documentation.

3. Microsoft Learn documentation lookup
If a question does not match a diagnostic scenario – such as "How do I create a retention policy?" – the server falls back to curated Microsoft Learn documentation. The documentation lookup covers 11 Purview areas, including:
Retention policies and labels
Archive and inactive mailboxes
eDiscovery
Audit
Communication compliance
Records management
Adaptive scopes

4. Create a GitHub issue (create_issue)
create_issue lets the assistant open a feature request in the project's GitHub repo and attach key session details (such as the commands run and any failures) to help maintainers reproduce and prioritize the request.
Example: File a feature request from a failed diagnostic

✅ Created GitHub issue #42
Title: Allowlist should allow Get-ComplianceTag cmdlet
Category: feature request
Labels: enhancement
URL: https://github.com/microsoft/purview-dlm-mcp/issues/42
Session context included: 3 commands executed, 1 failure

Security and safety model
Security is enforced at multiple layers:
Read‑only allow list: Only approved diagnostic cmdlets can run
No stored credentials: Authentication uses MSAL interactive sign‑in
Session isolation: Each server instance runs in its own PowerShell process
Full audit trail: Every command and result is logged
No automatic remediation: Fixes are never executed by the server

This design ensures diagnostics are safe to run even in sensitive compliance environments.

Supported diagnostic scenarios
The server currently includes 12 troubleshooting reference guides, covering common DLM issues such as:
Retention policy shows Success but content is not retained or deleted
Policy status shows Error or PolicySyncTimeout
Items do not move to archive mailbox
Auto‑expanding archive not triggering
Inactive mailbox creation failures
SubstrateHolds and Recoverable Items growth
Teams messages not deleting
Conflicts between MRM and Purview retention
Adaptive scope misconfiguration
Auto‑apply label failures
SharePoint site deletion blocked by retention
Unified Audit Configuration validation

Each guide maps symptoms to diagnostic checks and remediation guidance.

Getting started

Prerequisites:
Node.js 18 or later
PowerShell 7
ExchangeOnlineManagement module (v3.4+)
Exchange Online administrator permissions

Required permissions:
Least-privilege: Global Reader + Compliance Administrator. Recommended; covers both EXO and S&C read access.
Single role group: Organization Management. Covers both workloads but broader than necessary.
Full admin: Global Administrator. Works but overly broad; not recommended.
Why both workloads? The server connects to two PowerShell sessions:
Exchange Online (Connect-ExchangeOnline): cmdlets like Get-Mailbox, Get-MailboxStatistics, Export-MailboxDiagnosticLogs, Get-OrganizationConfig
Security & Compliance (Connect-IPPSSession): cmdlets like Get-RetentionCompliancePolicy, Get-RetentionComplianceRule, Get-AdaptiveScope, Get-ComplianceTag

Exchange cmdlets require EXO roles; compliance cmdlets require S&C roles. Without both, some diagnostics will fail with permission errors. The authenticating user (DLM_UPN) needs read access to both Exchange Online and Security & Compliance PowerShell sessions.

MCP client configuration
The server can be connected to an IDE such as Claude Desktop or Visual Studio Code (GitHub Copilot) using MCP configuration. Include this configuration in your MCP config JSON file (for VS Code, use .vscode/mcp.json; for Claude Desktop, use claude_desktop_config.json):

```json
{
  "mcpServers": {
    "dlm-diagnostics": {
      "command": "npx",
      "args": ["-y", "@microsoft/purview-dlm-mcp"],
      "env": {
        "DLM_UPN": "admin@yourtenant.onmicrosoft.com",
        "DLM_ORGANIZATION": "yourtenant.onmicrosoft.com",
        "DLM_COMMAND_TIMEOUT_MS": "180000"
      }
    }
  }
}
```

Summary
The DLM Diagnostics MCP Server brings AI‑assisted, auditable, and safe troubleshooting to Microsoft Purview Data Lifecycle Management. By combining structured troubleshooting guides with read‑only PowerShell diagnostics and MCP, it significantly reduces the time and expertise required to diagnose complex DLM issues. We invite you to try it out, provide feedback, and contribute to the project via GitHub.

GitHub repository: https://github.com/microsoft/purview-dlm-mcp

Rishabh Kumar, Victor Legat & Purview Data Lifecycle Management Team

How to Unassign Assets from Data Products in Microsoft Purview at Once
Hello,

I've assigned around 100 assets to a specific data product and would now like to unassign all of them at once, rather than removing them individually.

Using the Purview REST API with Python, I was able to retrieve the list of my data products and successfully identify the target data product. However, I haven't been able to fetch the list of assets currently assigned to it, which prevents me from performing a bulk unassignment.

Could anyone please advise how to retrieve and unassign all assets from a data product programmatically?

Scaling Data Governance - Does a Purview in a Day Framework Exist?
Hello Purview Community,

I've been exploring the available acceleration resources for Microsoft Purview, and one thing I noticed is a potential gap in the "In a Day" workshop series. While we have excellent programs like Power BI in a Day or Fabric in a Day, I haven't yet seen a formalized Purview in a Day framework designed to help organizations jumpstart their governance journey in a single, cohesive session.

I am reaching out because my team is currently preparing something in this area that we believe will be very useful to the community and Microsoft in the future. Rather than working in isolation, we want to ensure we are aligned with the official roadmap. I wanted to reach out to the community and the Microsoft product team to ask:

- Is there an official "In a Day" initiative for Purview currently in the works?
- If not, who would be the best point of contact to discuss alignment?

Looking forward to hearing your thoughts and seeing if we can build something impactful together!

Workaround Enabling Purview Data Quality & Profiling for Cross-Tenant Microsoft Fabric Assets
The Challenge: Cross-Tenant Data Quality Blockers

Like many of you, I have been managing a complex architecture where Microsoft Purview sits in Tenant A and Microsoft Fabric resides in Tenant B. While we can achieve basic metadata scanning (with some configuration), I hit a hard wall when trying to enable Data Quality (DQ) scanning. Purview's native Data Quality scan for Fabric currently faces limitations in cross-tenant scenarios, preventing us from running Profiling or applying DQ Rules directly on the remote Delta tables.

The Experiment: "Governance Staging" Architecture

Rather than waiting for a native API fix, I conducted an experiment to bridge this gap using a "Data Staging" approach. The goal was to bring the data's "physicality" into the same tenant as Purview to unlock the full DQ engine.

The Solution Steps:

1. Data Movement (Tenant B to Tenant A): Inside the Fabric workspace (Tenant B), I created a Fabric Data Pipeline. I used this to export the critical Delta tables as Parquet files to an ADLS Gen2 account located in Tenant A (the same tenant as Purview). Note: you can schedule this to run daily to keep the "Governance Copy" fresh.
2. Native Scanning (Tenant A): I registered this ADLS Gen2 account as a source in Purview. Because both Purview and the ADLS account are in the same tenant, the scan was seamless, instantaneous, and required no complex authentication hurdles.
3. Activating Data Quality: Once the Parquet files were scanned, I attached these assets to a Data Product in the Purview Data Governance portal.

The Results:

The results were immediate and successful. Because the data now resides on a fully supported, same-tenant ADLS Gen2 surface:

✅ Data Profiling: I could instantly see column statistics, null distributions, and value patterns.
✅ DQ Rules: I was able to apply custom logic and business rules to the data.
✅ Scans: The DQ scan ran successfully, generating a Data Quality Score for our Fabric data.
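One practical detail for step 1: the daily export is easiest to scan when every table lands in a deterministic, dated folder in the Tenant A storage account. The helper below is a minimal sketch; the storage account, container name, and folder pattern are assumptions for illustration, not the exact layout of my pipeline:

```python
from datetime import date

# Illustrative helper only: the storage account, container, and folder
# pattern below are assumptions for this sketch, not the exact layout of
# my pipeline. The point is that every table lands in a deterministic,
# dated path so the same registered Purview source always finds the
# latest "Governance Copy".
def staging_path(table: str, run_date: date,
                 container: str = "governance-staging") -> str:
    return (
        f"abfss://{container}@tenantastorage.dfs.core.windows.net/"
        f"fabric_export/{table}/snapshot_date={run_date.isoformat()}/"
    )
```

In the Fabric pipeline, each copy activity can compute its sink path with this convention, so the Purview scan scope and the DQ rules attached to the assets stay stable from one daily refresh to the next.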
Conclusion:

While we await native cross-tenant "Live View" support for DQ in Fabric, this workaround works today. It allows you to leverage the full power of Microsoft Purview's Data Quality engine immediately. If you are blocked by tenant boundaries, I highly recommend setting up a lightweight "Governance Staging" container in your primary tenant.

Has anyone else experimented with similar staging patterns for Governance? Let's discuss below.

Data Security Posture Management for AI
A special thanks to Chris Jeffrey for his contributions as a peer reviewer to this blog post.

Microsoft Purview Data Security Posture Management (DSPM) for AI provides a unified location to monitor how AI applications interact with organizational data, including Microsoft Copilot, AI systems created in Azure AI Foundry, AI agents, and AI applications using third-party Large Language Models. This blog post aims to give the reader a holistic understanding of achieving data security and governance using the Purview DSPM for AI offering. Purview DSPM is not to be confused with Defender Cloud Security Posture Management (CSPM), which is covered in the blog post Demystifying Cloud Security Posture Management for AI.

Benefits

When an organization adopts Microsoft Purview Data Security Posture Management (DSPM), it unlocks a powerful suite of AI-focused security benefits that help it adopt AI more securely.

Unified Visibility into AI Activities & Agents

DSPM centralizes visibility across both Microsoft Copilots and third-party AI tools, capturing prompt-level interactions, identifying AI agents in use, and detecting shadow AI deployments across the enterprise.

One-Click AI Security & Data Loss Prevention Policies

Prebuilt policies simplify deployment with a single click, including:

- Automatic detection and blocking of sensitive data in AI prompts,
- Controls to prevent data leakage to third-party LLMs, and
- Endpoint-level DLP enforcement across browsers (Edge, Chrome, Firefox) for third-party AI site usage.

Sensitive Data Risk Assessments & Risky Usage Alerts

DSPM runs regular automated and on-demand scans of top-priority SharePoint sites, AI interactions, and agent behavior to identify high-risk data exposures. This helps detect oversharing of confidential content, highlight compliance gaps and misconfigurations, and provide actionable remediation guidance.
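To make the prompt-screening policy above concrete: conceptually, "detect and block sensitive data in AI prompts" means inspecting the prompt text against sensitive-information patterns before it reaches the model. The toy sketch below is purely illustrative; Purview uses its own sensitive information types and trainable classifiers, not these deliberately simplistic regexes:

```python
import re

# Toy illustration of prompt screening only. Microsoft Purview relies on
# its own sensitive information types and classifiers; this sketch merely
# shows the concept of flagging a prompt before it reaches an LLM.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]
```

A DLP-style policy would then block or warn on any prompt for which this screening step returns a non-empty list.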
Actionable Insights & Prioritized Remediation

The DSPM for AI overview dashboard offers actionable insights, including:

- Real-time analytics, usage trends, and risk scoring for AI interactions, and
- Integration with Security Copilot to guide investigations and remediation during AI-driven incidents.

Features and Coverage

Data Security Posture Management for AI (DSPM-AI) helps you gain insight into AI usage within the organization; the starting point is activating the recommended preconfigured policies using single-click activation. The default behavior for DSPM-AI is to run weekly data risk assessments on the top 100 SharePoint sites (based on usage) and provide data security admins with relevant insights. Organizations get an overview of how data is being accessed and used by AI tools.

Data security administrators can also use on-demand classifiers to ensure that all content is properly classified, or scan items that were not previously scanned to identify whether they contain any sensitive information.

AI access to data in a SharePoint site can be controlled by the data security administrator using DSPM-AI. The admin can specify restrictions based on data labels or apply a blanket restriction to all data in a specific site. Organizations can further expand the risk assessments with their own custom data risk assessments, a feature that is currently in preview.

Thanks to its recommendations section, DSPM-AI helps data security administrators achieve faster time to value. Below is a sample of the policy to "Capture interactions for enterprise AI apps" that can be created from recommendations. More details about the recommendations a data security administrator can expect can be found in the DSPM-AI documentation; the recommendations shown may differ per environment based on what is relevant to each organization.
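The default weekly assessment scope, the top 100 SharePoint sites by usage, can be pictured as a simple ranking step. The sketch below is conceptual only; DSPM-AI computes this ranking internally from tenant usage signals:

```python
# Conceptual illustration only: DSPM-AI ranks sites internally from tenant
# usage signals. This sketch just shows the idea of scoping weekly data
# risk assessments to the top-N most used SharePoint sites.
def top_sites(usage_by_site: dict[str, int], n: int = 100) -> list[str]:
    """Return the n site URLs with the highest usage, busiest first."""
    return sorted(usage_by_site, key=usage_by_site.get, reverse=True)[:n]
```

Custom data risk assessments (currently in preview) let you widen or narrow this scope yourself instead of relying on the default ranking.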
Following customer feedback, Microsoft announced during Ignite 2025 (18-21 Nov 2025, San Francisco, California) the inclusion of these recommendations in the Data Security Posture Management (DSPM) recommendations section. This helps data security administrators view all relevant data security recommendations in one place, whether they apply to human, tool, or AI interactions with the data. More details about the new Microsoft Purview DSPM experience are published on the Purview technical blog site under the article Beyond Visibility: The new Microsoft Purview Data Security Posture Management (DSPM) experience.

After creating or enabling the data security policies, data security administrators can view reports that show AI usage patterns in the organization. In these reports they have visibility into interaction activities, including the ability to dig into details. In the same reports view, they can also see AI interactions with data, including sensitive and unethical interactions, and, as with activities, drill down into the data interactions. Under reports, data security administrators also gain visibility into risky user interaction patterns, again with the ability to drill down into details.

Adoption

This section provides an overview of the requirements to enable Data Security Posture Management for AI in an organization's tenant.

License Requirements

The license requirements for Data Security Posture Management for AI depend on what features the organization needs and what AI workloads it expects to cover.
To cover interactions, prompts, and responses in DSPM for AI, the organization needs a Microsoft 365 E5 license; this covers activities from:

- Microsoft 365 Copilot,
- Microsoft 365 Copilot Chat,
- Security Copilot,
- Copilot in Fabric (for Power BI only),
- Custom Copilot Studio agents,
- Entra-registered AI applications,
- ChatGPT Enterprise,
- Azure AI services,
- Purview browser extension,
- Browser data security, and
- Network data security.

Information regarding licensing in this article is provided for guidance purposes only and doesn't constitute any contractual commitment. This list and the license requirements are subject to change without prior notice, and readers are encouraged to consult their account executive for up-to-date information regarding license requirements and coverage.

User Access Rights Requirements

To view, create, and edit in Data Security Posture Management for AI, the user should have one of these roles or role groups:

- Microsoft Entra Compliance Administrator role
- Microsoft Entra Global Administrator role
- Microsoft Purview Compliance Administrator role group

To have view-only access to Data Security Posture Management for AI, the user should have one of these roles or role groups:

- Microsoft Purview Security Reader role group
- Purview Data Security AI Viewer role
- AI Administrator role from Entra
- Purview Data Security AI Content Viewer role (for AI interactions only)
- Purview Data Security Content Explorer Content Viewer role (for AI interactions and file details for data risk assessments only)

For more details, including the permissions needed per activity, please refer to the Permissions for Data Security Posture Management for AI documentation page.
Technical Requirements

To start using Data Security Posture Management for AI, a set of technical requirements needs to be met to achieve the desired visibility. These include:

- Activating Microsoft Purview Audit: Microsoft Purview Audit is an integrated solution that helps organizations effectively respond to security events, forensic investigations, internal investigations, and compliance obligations.
- Enterprise version of Microsoft Purview data governance: Needed to support the required APIs to cover Copilot in Fabric and Security Copilot.
- Installing the Microsoft Purview browser extension: The Microsoft Purview compliance extension for Edge, Chrome, and Firefox collects signals that help you detect sharing of sensitive data with AI websites and risky user activities on AI websites.
- Onboarding devices to Microsoft Purview: Onboarding user devices to Microsoft Purview allows activity monitoring and enforcement of data protection policies when users interact with AI apps.
- Entra-registered AI applications: Should be integrated with the Microsoft Purview SDK.

More details regarding considerations for deploying Data Security Posture Management for AI can be found on the Data Security Posture Management for AI considerations documentation page.

Conclusion

Data Security Posture Management for AI helps data security administrators gain more visibility into how AI applications (systems, agents, Copilot, etc.) interact with their data. Based on the license entitlements an organization has under its agreement with Microsoft, the organization might already have access to these capabilities and can immediately start leveraging them to reduce the potential impact of data-associated risks originating from its AI systems.