Microsoft Purview helps organizations scale agentic AI with confidence by embedding security, compliance, and governance controls across the AI stack, from prompts and data to copilots and autonomous agents, so innovation doesn’t come at the cost of risk.
AI is rapidly reshaping how organizations work. From AI apps embedded in everyday productivity tools to autonomous agents orchestrating complex, multi‑step workflows, AI is moving from experimentation to execution at enterprise scale. With that shift comes a fundamental change in the risk landscape: AI systems can access, reason over, and act on sensitive data faster and more broadly than ever before. According to the latest Microsoft Data Security Index 2025 report, 73% of organizations are already implementing AI-dedicated controls and 82% plan to use AI to power their data security programs.
As organizations adopt agentic AI, new challenges emerge. Agents can act on behalf of users, operate autonomously, invoke other apps, and work continuously, even without direct human interaction. They can access sensitive data at scale, trigger cascading actions, and interact with external systems, dramatically expanding an organization’s data risk surface.
With AI becoming more agentic, data security and compliance cannot be bolted on after the fact—or applied to just one layer of AI. They must be deeply integrated and enforced where AI touches data, and this is exactly where Microsoft Purview plays a critical role. Purview extends enterprise-grade data security, compliance, and risk management across multiple layers of the AI stack—from data and prompts, to AI-powered apps and copilots, to autonomous agents—so organizations can scale AI adoption with confidence.
Purview embeds familiar, trusted controls directly where organizations interact with AI, extending capabilities to meet organizations wherever they are in their AI adoption journey.
Securing data in an agentic AI world
To help organizations manage this complexity and move forward with AI confidently, earlier this month we announced the general availability of Agent 365 (A365) and Microsoft 365 E7: The Frontier Suite. A365, the control plane for agents, provides a unified platform that enables IT, security, and business teams to work together to observe, govern, and secure agents across the organization.
At Microsoft Ignite in November, we shared how Microsoft Purview capabilities in A365 provide comprehensive data security and compliance: preventing agents from improperly accessing sensitive data, preventing data leaks caused by risky insiders, and helping ensure agents process data responsibly to support compliance with global regulations[1]. These capabilities include:
- Data security insights and controls embedded directly into the Copilot Control System, enabling IT admins to understand relevant risks and apply default policies across AI apps and agents.
- Data Security Posture Management (DSPM) provides visibility and insights into data risks for agents so data security admins can proactively mitigate those risks.
- Information Protection helps ensure that agents inherit and honor Microsoft 365 data sensitivity labels so they follow the same rules as users for handling sensitive data to prevent agent-led data leaks.
- Data Loss Prevention (DLP) protects sensitive data by allowing user protections to be extended to agents, and by blocking sensitive files from being used as grounding data by the agent.
- Insider Risk Management (IRM) extends detections to help identify risky agent behavior and ensure that risky agent interactions with sensitive data are flagged to data security admins.
- Data Lifecycle Management (DLM) enables data retention and deletion policies for prompts and agent-generated data so you can manage risk by keeping the data that you need and deleting what you don’t.
- Audit and eDiscovery extend core compliance and records management capabilities to agents, treating AI agents as auditable entities alongside users and applications. This helps ensure that organizations can audit, investigate, and defensibly manage AI agent activity across the enterprise.
- Communication Compliance detects potentially risky behavior performed by the agent, with prompts and responses evaluated against existing compliance policies and classifiers, ensuring agent interactions align with the organization’s values and compliance standards.
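To make the Information Protection and DLP bullets above concrete, here is a minimal sketch of label-aware filtering of agent grounding data. This is a toy illustration under assumed semantics, not the Purview API; the label ranks, class names, and function names are all hypothetical.

```python
# Illustrative only: a toy model of an agent inheriting label-based limits,
# so it can ground answers only on data the invoking user may access.
from dataclasses import dataclass

# Hypothetical label hierarchy, lowest to highest sensitivity.
LABEL_RANK = {"Public": 0, "General": 1, "Confidential": 2, "Highly Confidential": 3}


@dataclass
class Document:
    name: str
    label: str  # sensitivity label inherited from the source system


def filter_grounding(docs, max_label):
    """Return only documents the agent may use as grounding data,
    mirroring how an agent honors the invoking user's label ceiling."""
    ceiling = LABEL_RANK[max_label]
    return [d for d in docs if LABEL_RANK[d.label] <= ceiling]


docs = [
    Document("handbook.docx", "General"),
    Document("payroll.xlsx", "Highly Confidential"),
]
allowed = filter_grounding(docs, max_label="Confidential")
print([d.name for d in allowed])  # prints ['handbook.docx']
```

The key design point is that the check happens before retrieval results reach the model, so an agent cannot leak content its user could not have opened directly.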
We also extended Purview features, enabling developers to build data security and compliance into their applications:
- Purview SDK embedded in the Agent Framework SDK to help developers build AI agents with enterprise‑grade security by enabling automatic classification and protection of sensitive data, preventing data leaks and oversharing, and providing visibility and control for regulatory compliance.
- Azure AI Search honors sensitivity labels and their corresponding protection policies for source documents across multiple connectors, such as content indexed from Microsoft 365, Microsoft OneLake, and Azure Data Lake Storage. Once labels are synced, AI Search respects label-based protection policies so users can access only the data they are authorized to see.
These integrations help organizations keep pace with the rapid growth of developer-built custom agents.
What’s New?
We’ve recently announced, in public preview, inline DLP for Copilot Studio agents[2], which detects sensitive information—such as PII, credit card numbers, and custom sensitive information types—directly within prompts sent to the agent. Sensitive data is identified and blocked at input, before the agent is invoked. This enables safer deployment of custom-built agents by preventing accidental misuse of sensitive or regulated data, such as pasting real customer IDs into prompts, and it adds a critical layer of runtime protection so agents never receive or process data that violates legal, regulatory, or organizational policy.
Figure 1: Inline DLP for Copilot Studio agents
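The screening step described above, detecting a credit card number in a prompt before the agent runs, can be sketched as follows. This is a simplified illustration, not Purview’s implementation: a real DLP engine evaluates many sensitive information types, and the pattern and function names here are hypothetical.

```python
# Illustrative only: screen a prompt for likely payment card numbers
# before an agent is invoked, blocking at input on a match.
import re


def luhn_valid(number):
    """Standard Luhn checksum, used to cut false positives on digit runs."""
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0


# 13-16 digits, optionally separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")


def screen_prompt(prompt):
    """Return (allowed, findings); allowed is False when the prompt
    contains a likely card number and should be blocked at input."""
    findings = []
    for match in CARD_PATTERN.finditer(prompt):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            findings.append(match.group())
    return (len(findings) == 0, findings)


ok, found = screen_prompt("Refund card 4539 1488 0343 6467 for order 8812")
print(ok)  # prints False: the prompt would be blocked before agent invocation
```

Checksum validation on top of the regex is what keeps ordinary numeric strings, like the order number above, from triggering a false block.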
We are excited to announce a new Purview Data Lifecycle Management (DLM) capability that helps customers strengthen their data security and compliance posture with insights and policy recommendations in DSPM for Microsoft Copilot and AI app interactions. Generally available in June 2026, this capability will help reduce AI-driven data risk with visibility into how Microsoft Copilot and AI apps interact with enterprise data. Customers can see which AI apps and agents are in use across their tenant and apply consistent retention and deletion policies to AI prompts and outputs, helping meet regulatory obligations and maintain control over sensitive information as AI adoption scales.
Strengthening data security for Microsoft 365 Copilot
Purview provides deep, built-in data security and compliance capabilities for Microsoft 365 Copilot, and we continue to expand those capabilities as Copilot adoption grows.
Today, Purview ensures that Microsoft 365 Copilot operates within established organizational data boundaries. Sensitivity labels are enforced so Copilot can’t access content that users aren’t authorized to use, and sensitive information types (SITs) are detected directly in prompts, so risky prompts can be blocked in real time. Purview helps organizations identify and remediate data oversharing risks in SharePoint and OneDrive before Copilot can process overshared data, and customers can benefit from compliance capabilities like Audit, eDiscovery, and Data Lifecycle Management (DLM) for Copilot‑generated data, enabling investigations, legal holds, and regulatory compliance on that data using familiar tools.
What’s New: We’re now expanding DLP for Microsoft 365 Copilot to web search, preventing sensitive information in prompts from being sent to external web search while still allowing Copilot to answer questions using Microsoft 365 enterprise data, when permitted. For example, if a user asks Copilot to analyze a scenario that includes a customer’s credit card number, the prompt will not be sent to Bing, but Copilot can still generate an answer based on an internal SharePoint site, if allowed. For organizations, this mitigates a key AI-driven data leakage risk without sacrificing productivity, enabling safer adoption of Copilot.
Figure 2: DLP for Microsoft 365 Copilot web search
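The routing decision described above can be sketched as a simple gate: external web search is withheld when a prompt contains sensitive data, while internal grounding remains available. This is an illustrative sketch only, not a Microsoft API; the function name, the single card-like pattern standing in for a full set of sensitive information types, and the policy keys are all hypothetical.

```python
# Illustrative only: gate external web search on a sensitive-data check,
# while leaving tenant-internal grounding available either way.
import re

# A single card-like pattern standing in for a real DLP engine's many
# sensitive information types (SITs).
SENSITIVE = re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b")


def route_query(prompt):
    """Decide which grounding sources a Copilot-style assistant may use
    for this prompt. Sensitive prompts never leave the tenant boundary."""
    has_sensitive = bool(SENSITIVE.search(prompt))
    return {
        "internal_grounding": True,                 # tenant data, if permitted
        "external_web_search": not has_sensitive,   # withheld on sensitive input
    }
```

Note that the gate degrades gracefully: a sensitive prompt is not rejected outright, it simply cannot reach the external search path, which matches the productivity-preserving behavior described above.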
Extending data protections to the network and browser
AI risk doesn’t start and end within Microsoft environments. Employees often use third-party, potentially unsanctioned AI apps to analyze data, generate content, and automate tasks. Purview extends data protections to these experiences as well.
By extending Purview data security capabilities to the web traffic layer, including network and browser traffic, organizations can protect sensitive data inline in interactions with third-party AI services such as ChatGPT or Google Gemini. Sensitive data can be detected and blocked in prompts and responses in real time, helping prevent accidental data leaks even when AI apps sit outside Microsoft 365. These protections also apply to third-party AI services running in Agent Mode, so that agents can’t be prompted to generate responses or execute tasks using sensitive data. This ensures organizational data protection policies remain effective regardless of where AI is being used.
What’s New: Previously, we announced our secure access service edge (SASE) partner ecosystem to bring data security to the network. This extends Purview protections to AI interactions occurring between users and AI desktop apps, AI plugins, and AI apps accessed over the web. Now, in public preview, Purview integrates with Palo Alto Networks Prisma SASE to detect and block sensitive data in transit, including data shared with unmanaged AI apps and agents. Additionally, Purview is expanding its partner ecosystem to additional enterprise browsers like Island and Palo Alto Networks Prisma Browser, enabling enforcement of Purview classifiers, sensitivity labels, and DLP policies in third-party browsers to prevent risky data sharing with consumer AI sites accessed through the web. Learn more here.
AI is becoming core to how work gets done—but trust is what determines whether it can scale. By extending data security and compliance to different layers of the AI stack, Microsoft Purview helps organizations observe agent risk, prevent oversharing, govern AI interactions, and enforce policy in real time. Securing AI starts with securing data, and Microsoft Purview ensures AI can reason and act on data safely. This allows organizations to scale AI with confidence, knowing protection, policy, and accountability are built into every AI interaction.
What's Next?
Check out the other Microsoft Purview Data Security announcements and everything Microsoft is doing to secure AI for teams across your organization.
Explore Microsoft Purview capabilities, review documentation, or start a free trial to see how Purview can help you secure your data, wherever it lives and travels.
[1] Supported agent types/platforms during Frontier Preview: Agent Builder agents, Microsoft Copilot Studio agents, Productivity App agents (Word, PPT, etc.), Foundry, Security Copilot, Partner ecosystem agents through Purview SDK.
[2] Copilot Studio agents published through Microsoft 365 Copilot. Also available for: Agent Builder agents, Productivity App agents (Word, PPT, etc.), Microsoft-built agents, and Foundry agents published to Microsoft 365.