Microsoft Security Community Blog

Safeguarding Sensitive Data in Microsoft 365 Copilot Interactions: DLP for Microsoft 365 Copilot

Aaron_Thorp, Microsoft
Apr 21, 2026
Microsoft 365 Copilot is redefining how organizations work, bringing the power of generative AI directly into our secure productivity tools. As Copilot adoption accelerates, we’ve heard that you want more control over how your sensitive data can be used in interactions with Copilot. At Ignite 2025, Microsoft announced a major enhancement, Microsoft Purview Data Loss Prevention (DLP) for Microsoft 365 Copilot, which safeguards Microsoft 365 Copilot and Copilot Chat prompts and is now entering General Availability. Even better, this capability is included for all users of Microsoft 365 Copilot and Copilot Chat.

Why DLP for Copilot Prompts Is a Game-Changer

As organizations adopt Copilot, their ways of sharing, creating, and interacting with data expand. With just a prompt, users can have Copilot summarize documents, analyze spreadsheets, or help brainstorm presentations. However, this raises an important question: what if the prompt itself includes sensitive information, like project code names, financial account numbers, or health records?

Over the last two years, Microsoft has been building a set of Data Loss Prevention (DLP) controls specifically designed for Copilot. Below is a quick overview of these related capabilities, ranging from already available to newly in preview, before we dive into today's GA announcement:

Prevent Copilot processing of files & emails based on sensitivity labels

In November 2024, Microsoft introduced the ability to create a DLP policy that restricts Microsoft 365 Copilot and Copilot Chat from processing sensitive files and emails, identified by their sensitivity labels, as grounding data. This capability gives you control over whether content with the sensitivity labels you specify can be used by Microsoft 365 Copilot and Copilot Chat to generate summaries and responses.

Prevent web searches for prompts containing Sensitive Information Types (SITs)

The latest feature entering Public Preview is DLP for Microsoft 365 Copilot and Copilot Chat to prevent web searches for prompts containing sensitive data. This real-time control helps organizations mitigate data leakage and oversharing risks by preventing Microsoft 365 Copilot and agents from using sensitive data for external web searches. If a sensitive information type (SIT) is detected in a user prompt, Copilot can still leverage your enterprise data to form a response without sending the sensitive data to external search engines for web grounding. This capability extends to Microsoft 365 Copilot and agents built in Copilot Studio that are published to Microsoft 365 Copilot.
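Conceptually, this behaves like a gate that removes web grounding, but not enterprise grounding, when a prompt contains a SIT. The following minimal Python sketch illustrates that decision; the `SIT_PATTERNS` detector and the `grounding_sources` function are hypothetical stand-ins for illustration, not Purview's actual classification engine:

```python
import re

# Hypothetical stand-ins for Purview's SIT classifiers; Purview ships
# many built-in sensitive information types beyond these two.
SIT_PATTERNS = {
    "U.S. Social Security Number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit Card Number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def grounding_sources(prompt: str) -> list[str]:
    """Decide which grounding sources a prompt may use.

    When a SIT is detected, enterprise (Graph) grounding can still proceed,
    but the prompt is not sent to external search engines for web grounding.
    """
    sit_detected = any(pattern.search(prompt) for pattern in SIT_PATTERNS.values())
    return ["graph"] if sit_detected else ["graph", "web"]

print(grounding_sources("Summarize our travel policy"))               # ['graph', 'web']
print(grounding_sources("Check SSN 123-45-6789 against the roster"))  # ['graph']
```

The key design point is that detection degrades the response gracefully: the sensitive prompt still gets an answer from enterprise data, it just never leaves the tenant boundary for a web query.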

DLP to Safeguard Copilot Prompts with Sensitive Information Types (SITs)

The rest of this blog focuses on a key addition to this capability set: DLP for Microsoft 365 Copilot and Copilot Chat prompts to prevent processing of prompts containing sensitive information, now entering General Availability. Unlike the web search capability above, which prevents sensitive data from being sent externally during a web query, this capability evaluates the user's text input directly, before any processing occurs, to determine whether grounding against either enterprise data or the web can proceed at all.

This feature uses Sensitive Information Types (SITs) as a condition within a Purview DLP policy to assess whether a user prompt sent to Copilot contains sensitive data, even if the data is unlabeled. With DLP for Copilot prompts, a user’s text input is scanned in real time for SITs, whether built-in (like Social Security Numbers, credit card numbers, etc.) or custom-defined by your organization (such as confidential terms or project names). If a text prompt contains one of the SITs you specify, Copilot restricts processing, halts any Graph or web grounding, and displays a clear message to the end user that the request cannot be completed.

 

A user enters a prompt in Microsoft 365 Copilot Chat containing sensitive information.

 

Microsoft 365 Copilot Chat detects a SIT within the user prompt and restricts a response.
How DLP for Copilot Protects Prompts: Real-Time, Intelligent Protection

The new DLP capability integrates seamlessly with Microsoft Purview, leveraging its powerful data classification and detection engine for sensitive information types. Here’s how it works:

  • Input: When a user submits a prompt, Copilot checks the prompt for sensitive information using built-in or organization-defined sensitive information types (SITs).
  • Immediate Action: If a SIT is detected, Copilot restricts the prompt from being processed. No AI response is generated, and no data is sent for Graph or web grounding.
  • Output: Users receive a clear notification that their request cannot be completed due to company policies.

This real-time protection ensures that sensitive data is not leaked or overshared, even as users explore new ways to work with AI.
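The Input, Immediate Action, and Output steps above can be sketched in a few lines of Python. Everything here is illustrative: the SIT list, the policy message, and the downstream `generate_response` call are placeholders, not Purview's actual classifiers or Copilot's internals:

```python
import re

# Illustrative SIT patterns; a real tenant would rely on Purview's
# built-in and custom sensitive information types.
SIT_PATTERNS = {
    "U.S. Social Security Number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit Card Number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def generate_response(prompt: str) -> str:
    # Placeholder for normal Copilot processing (grounding + generation).
    return f"[Copilot response to: {prompt!r}]"

def handle_prompt(prompt: str) -> str:
    # Input: scan the prompt for sensitive information types.
    detected = [name for name, pat in SIT_PATTERNS.items() if pat.search(prompt)]
    if detected:
        # Immediate action: no AI response is generated, and nothing is
        # sent for Graph or web grounding.
        # Output: the user sees a clear policy notification instead.
        return ("Your request can't be completed because it contains "
                f"sensitive information ({', '.join(detected)}) restricted by policy.")
    # No SIT detected: normal processing proceeds.
    return generate_response(prompt)

print(handle_prompt("Draft a welcome email for new hires"))
print(handle_prompt("My card is 4111 1111 1111 1111, can you check it?"))
```

The first prompt flows through to normal processing; the second is stopped before any grounding or generation occurs, which mirrors the block-at-the-prompt behavior described above.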

Overview of how the feature works.
Setting Up DLP for Copilot Prompts: Data Security Admin Experience

The easiest way to get started is through the new Microsoft Purview Data Security Posture Management (DSPM) portal, which provides a guided, one-click setup experience:

1. In Purview, go to Solutions > DSPM (preview).

2. Select the "Prevent data exposure in Microsoft 365 Copilot and Microsoft Copilot interactions" objective.

3. Follow the guided workflow and apply the recommended one-click DLP policy. The policy starts in simulation mode so you can review activity before enforcing it.

Alternatively, you can configure and customize this policy directly from the Purview DLP portal Policies page or enable it from the Microsoft 365 Admin Center.

Navigate to the Objectives tab in the Data Security Posture Management (preview) portal. Locate the objective “Prevent data exposure in Microsoft 365 Copilot and Microsoft Copilot interactions” and click View the remediation plan.

 

View the remediation plan details and estimated impact on risk pattern.

 

Click View policy details and review. Then click Create a custom policy in DLP simulation mode to protect sensitive data referenced in Microsoft 365 Copilot and Microsoft Copilot.

 

IT and AI admins can enable DLP protection for Copilot prompts directly from the Security section of the Microsoft 365 Admin Center using a simplified setup experience.

 

To configure policies in DLP, navigate to the Purview DLP portal. Then select the Policies tab to create a new policy.

 

Create a DLP Custom policy.

 

Choose where to apply the policy (Microsoft 365 Copilot and Copilot Chat).

 

Create a rule with a name and optional description.

 

Add Sensitive Information Types as part of the conditions.

 

Select the desired Sensitive Information Types (built-in or custom).

 

Identify the confidence level and instance count.

 

Add the action to restrict Copilot from processing content and complete the policy configuration. Confirm the rule was set up correctly by testing it out.
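The confidence level and instance count selected in the rule control how strongly each match must resemble the SIT and how many matches must appear before the rule fires. A rough Python sketch of that idea, using a Luhn checksum to raise confidence for credit-card-like numbers; the thresholds and the low/high scoring here are illustrative, not Purview's exact scoring model:

```python
import re

CARD = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def luhn_ok(number: str) -> bool:
    """Standard Luhn checksum used to validate card-like numbers."""
    digits = [int(c) for c in number if c.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def rule_matches(text: str, min_confidence: str = "high", min_instances: int = 1) -> bool:
    """Fire only when enough matches reach the required confidence level."""
    confidences = []
    for m in CARD.finditer(text):
        # Pattern alone = low confidence; pattern plus valid checksum = high.
        confidences.append("high" if luhn_ok(m.group()) else "low")
    qualifying = confidences if min_confidence == "low" else [c for c in confidences if c == "high"]
    return len(qualifying) >= min_instances

print(rule_matches("Card on file: 4111 1111 1111 1111"))  # valid checksum -> True
print(rule_matches("Ticket id 1234 5678 9012 3456"))      # fails checksum -> False
```

Tuning these two knobs is how you trade false positives against coverage: a high confidence requirement with a low instance count catches a single real card number while ignoring ticket IDs that merely look like one.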

 

Practical Scenarios: Protecting What Matters Most
  • Protect PII, financial data, and intellectual property: Financial institutions can block prompts containing deal terms, account numbers, or other sensitive data, preventing leaks through AI interactions. Similarly, healthcare organizations can safeguard patient information, and manufacturers can secure intellectual property and trade secrets from exposure, along with many other practical use cases. Once a sensitive prompt is detected and blocked, both Microsoft Graph grounding and Bing web grounding are restricted.
  • Safeguard sensitive non-public information: Imagine an organization involved in a confidential merger. By using DLP for Copilot prompts, administrators can set up a custom SIT that includes the project’s code name. If a user asks Copilot about the merger using the project’s code name, their request will be blocked, keeping sensitive information secure and protected.
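A custom SIT for a scenario like the merger above can be as simple as a keyword or regular expression the organization defines. The sketch below assumes a hypothetical code name, "Project Aurora", purely for illustration; a real deployment would define the custom SIT in Purview rather than in code:

```python
import re

# Hypothetical custom SIT: a confidential project code name defined
# by the organization. "Project Aurora" is an invented example.
CUSTOM_SITS = {
    "Confidential Project Code Name": re.compile(r"\bProject Aurora\b", re.IGNORECASE),
}

def prompt_blocked(prompt: str) -> bool:
    """Return True when any custom SIT appears in the user's prompt."""
    return any(pattern.search(prompt) for pattern in CUSTOM_SITS.values())

print(prompt_blocked("What's the latest status of project aurora?"))  # True
print(prompt_blocked("Summarize last week's sales meeting"))          # False
```

Because matching is case-insensitive, casual variations of the code name in a prompt are still caught, while unrelated prompts pass through untouched.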
Visibility into DLP for Microsoft 365 Copilot Prompts

When a user’s prompt triggers a DLP policy, notifications and alerts are surfaced directly in the Microsoft Purview and Defender portals for security administrators. These alerts provide detailed information about which policy was activated, the type of sensitive information detected, and the context of the attempted Copilot interaction.

Using these alert queues in Purview and Defender XDR, administrators can efficiently track policy activity, investigate potential incidents, and refine DLP rules to better align with organizational needs. The ability to review historical alerts and track ongoing enforcement empowers admins to maintain strong data security and proactively safeguard sensitive information.

 

DLP policy alert within the Alerts page.

 

Defender XDR portal investigation of prompt DLP based incident.

Takeaways

The introduction of this latest enhancement to DLP for Copilot represents a key advancement in secure Copilot deployment and adoption. By empowering organizations to block sensitive data at the prompt level, Microsoft is helping customers unlock the full potential of Copilot, without compromising security or compliance.

This innovation reflects Microsoft’s commitment to responsible AI, continuous improvement, and customer-driven development. As Copilot evolves, so will the tools to protect your data, ensuring that productivity and security go hand in hand.

For more details, stay tuned for updates to the Product Roadmap and Learn documentation.

 

 

Updated Apr 17, 2026
Version 1.0