Insider Risk Management

Deep Dive: Insider Risk Management in Microsoft Purview
Hi everyone, I recently explored the Insider Risk Management (IRM) workflow in Microsoft Purview and how it connects across governance, compliance, and security. This end-to-end process helps organizations detect risky activities, triage alerts, investigate incidents, and take corrective action.

Key phases in the IRM workflow:
- Policy: Define rules to detect both accidental risks (data spillage) and malicious risks (IP theft, fraud, insider trading).
- Alerts: Generate alerts when policies are violated.
- Triage: Prioritize and classify alerts by severity.
- Investigate: Use dashboards, Content Explorer, and Activity Explorer to dig into context.
- Action: Take remediation steps such as user training, legal escalation, or SIEM integration.

Key takeaways from my lab:
- Transparency is essential (balancing privacy vs. protection).
- Integration across Microsoft 365 apps makes IRM policies actionable.
- Defender + Purview together unify detection and governance for insider risk.

This was part of my ongoing security lab series. Curious to hear from the community — how are you applying Insider Risk Management in your environments or labs?

Securing Data with Microsoft Purview IRM + Defender: A Hands-On Lab
Hi everyone, I recently explored how Microsoft Purview Insider Risk Management (IRM) integrates with Microsoft Defender to secure sensitive data. This lab demonstrates how these tools work together to identify, investigate, and mitigate insider risks.

What I covered in this lab:
- Set up Insider Risk Management policies in Microsoft Purview
- Connected Microsoft Defender to monitor risky activities
- Walkthrough of alerts triggered → triaged → escalated into cases
- Key governance and compliance insights

Key learnings from the lab:
- Purview IRM policies detect both accidental risks (like data spillage) and malicious ones (IP theft, fraud, insider trading)
- IRM principles include transparency (balancing privacy vs. protection), configurable policies, integrations across Microsoft 365 apps, and actionable alerts
- The IRM workflow follows: define policies → trigger alerts → triage by severity → investigate cases (dashboards, Content Explorer, Activity Explorer) → take action (training, legal escalation, or SIEM integration)
- Defender + Purview together provide unified coverage: Defender detects and responds to threats, while Purview governs compliance and insider risk

This was part of my ongoing series of security labs. Curious to hear from others — how are you approaching Insider Risk Management in your organizations or labs?

From Traditional Security to AI-Driven Cyber Resilience: Microsoft's Approach to Securing AI
By Chirag Mehta, Vice President and Principal Analyst, Constellation Research

AI is changing the way organizations work. It helps teams write code, detect fraud, automate workflows, and make complex decisions faster than ever before. But as AI adoption increases, so do the risks, many of which traditional security tools were not designed to address. Cybersecurity leaders are starting to see that AI security is not just another layer of defense. It is becoming essential to building trust, ensuring resilience, and maintaining business continuity.

Earlier this year, after many conversations with CISOs and CIOs, I saw a clear need to bring more attention to this topic. That led to my report on AI Security, which explores how AI-specific vulnerabilities differ from traditional cybersecurity risks and why securing AI systems calls for a more intentional approach.

Why AI Changes the Security Landscape

AI systems do not behave like traditional software. They learn from data instead of following pre-defined logic. This makes them powerful, but also vulnerable. For example, an AI model can:
- Misinterpret input in ways that humans cannot easily detect
- Be tricked into producing harmful or unintended responses through crafted prompts
- Leak sensitive training data in its outputs
- Take actions that go against business policies or legal requirements

These are not coding flaws. They are risks that originate from how AI systems process information and act on it. These risks become more serious with agentic AI. These systems act on behalf of humans, interact with other software, and sometimes with other AI agents. They can make decisions, initiate actions, and change configurations. If one is compromised, the consequences can spread quickly.

A key challenge is that many organizations still rely on traditional defenses to secure AI systems. While those tools remain necessary, they are no longer enough.
AI introduces new risks across every layer of the stack, including data, networks, endpoints, applications, and cloud infrastructure. As I explained in my report, the security focus must shift from defending the perimeter to governing the behavior of AI systems, the data they use, and the decisions they make.

The Shift Toward AI-Aware Cyber Resilience

Cyber resilience is the ability to withstand, adapt to, and recover from attacks. Meeting that standard today requires understanding how AI is developed, deployed, and used by employees, customers, and partners. To get there, organizations must answer questions such as:
- Where is our sensitive data going, and is it being used safely to train models?
- What non-human identities, such as AI agents, are accessing systems and data?
- Can we detect when an AI system is being misused or manipulated?
- Are we in compliance with new AI regulations and data usage rules?

Let's look at how Microsoft has evolved its mature security portfolio to help protect AI workloads and support this shift toward resilience.

Microsoft's Approach to Secure AI

Microsoft has taken a holistic and integrated approach to AI security. Rather than creating entirely new tools, it is extending existing products already used by millions to support AI workloads. These features span identity, data, endpoint, and cloud protection.

1. Microsoft Defender: Treating AI Workloads as Endpoints

AI models and applications are emerging as a new class of infrastructure that needs visibility and protection. Defender for Cloud secures AI workloads across Azure and other cloud platforms such as AWS and GCP by monitoring model deployments and detecting vulnerabilities.
- Defender for Cloud Apps extends protection to AI-enabled apps running at the edge
- Defender for APIs supports AI systems that use APIs, which are often exposed to risks such as prompt injection or model manipulation

Additionally, Microsoft has launched tools to support AI red-teaming, content safety, and continuous evaluation capabilities to ensure agents operate safely and as intended. This allows teams to identify and remediate risks such as jailbreaks or prompt injection before models are deployed.

2. Microsoft Entra: Managing Non-Human Identities

As organizations roll out more AI agents and copilots, non-human identities are becoming more common. These digital identities need strong oversight.
- Microsoft Entra helps create and manage identities for AI agents
- Conditional Access ensures AI agents only access the resources they need, based on real-time signals and context
- Privileged Identity Management manages, controls, and monitors AI agents' access to important resources within an organization

3. Microsoft Purview: Securing Data Used in AI

Purview plays an important role in securing both the data that powers AI apps and agents, and the data they generate through interactions.
- Data discovery and classification helps label sensitive information and track its use
- Data Loss Prevention policies help prevent leaks or misuse of data in tools such as Copilot or agents built in Azure AI Foundry
- Insider Risk Management alerts security teams when employees feed sensitive data into AI systems without approval

Purview also helps organizations meet transparency and compliance requirements as regulations like the EU AI Act take effect, extending the same policies they already use today to AI workloads without requiring separate configurations. Here's a video that explains the above Microsoft security products:

Securing AI Is Now a Strategic Priority

AI is evolving quickly, and the risks are evolving with it.
Traditional tools still matter, but they were not built for systems that learn, adapt, and act independently. They also weren't designed for the pace and development approaches AI requires, where securing from the first line of code is critical to staying protected at scale. Microsoft is adapting its security portfolio to meet this shift. By strengthening identity, data, and endpoint protections, it is helping customers build a more resilient foundation. Whether you are launching your first AI-powered tool or managing dozens of agents across your organization, the priority is clear: secure your AI systems before they become a point of weakness. You can read more in my AI Security report and learn how Microsoft is supporting these efforts across its security portfolio.

Insider Risk Management Alerts/Activities issue
Hello, we have a problem where Insider Risk Management is generating activity data and alerts based on false data (sort of). There is an activity called EPOFILEARCHIVED, or FileArchived, that is performed by the SenseCE.exe application. SenseCE is the "Windows Defender Advanced Threat Protection Sense CE module" according to one third-party source and "Data Loss Prevention Classification" according to another, so I assume it also acts as a service application for Endpoint DLP. In any case, it generates a lot of false activity, and there is no way to exclude this activity (as an app or as an activity type) from Purview, so it introduces false data into Insider Risk Management (which picks it up as an Archive activity). Has anyone seen similar issues, or is there another explanation for why this activity appears? Are there ways to remedy this somehow? Example:

Unlocking the Power of Microsoft Purview for ChatGPT Enterprise
In today's rapidly evolving technology landscape, data security and compliance are key. Microsoft Purview offers a robust solution for managing and securing interactions with AI-based solutions. This integration not only enhances data governance but also ensures that sensitive information is handled with the appropriate controls. Let's dive into the benefits of this integration and outline the steps to integrate with ChatGPT Enterprise specifically. The integration works for Entra-connected users on the ChatGPT workspace; if you have needs that go beyond this, please tell us why and how it impacts you.

Important update 1: Effective May 1, these capabilities require you to enable pay-as-you-go billing in your organization.
Important update 2: From May 19, you are required to create a collection policy to ingest ChatGPT Enterprise information. In DSPM for AI you will find this one-click process.

Benefits of Integrating ChatGPT Enterprise with Microsoft Purview
- Enhanced Data Security: By integrating ChatGPT Enterprise with Microsoft Purview, organizations can ensure that interactions are securely captured and stored within their Microsoft 365 tenant. This includes user text prompts and AI app text responses, providing a comprehensive record of communications.
- Compliance and Governance: Microsoft Purview offers a range of compliance solutions, including Insider Risk Management, eDiscovery, Communication Compliance, and Data Lifecycle & Records Management. These tools help organizations meet regulatory requirements and manage data effectively.
- Customizable Detection: The integration allows detection with built-in and custom classifiers for sensitive information, which can be customized to meet the specific needs of the organization. This helps ensure that sensitive data is identified and protected. The audit data streams into Advanced Hunting and the Unified Audit events, which can generate visualisations of trends and other insights.
- Seamless Integration: The ChatGPT Enterprise integration uses the Purview API to push data into compliant storage, ensuring that external data sources cannot access and push data directly. This provides an additional layer of security and control.

Step-by-Step Guide to Setting Up the Integration

1. Get the Object ID for the Purview account in your tenant:
- Go to portal.azure.com and search for "Microsoft Purview" in the search bar.
- Click on "Microsoft Purview accounts" from the search results.
- Select the Purview account you are using and copy the account name.
- Go to portal.azure.com and search for "Enterprise" in the search bar.
- Click on Enterprise applications.
- Remove the filter for Enterprise Applications.
- Select All applications under Manage, search for the account name, and copy the Object ID.

2. Assign Graph API roles to your managed identity application:
Assign Purview API roles to your managed identity application by connecting to Microsoft Graph using Cloud Shell in the Azure portal.
- Open a PowerShell window in portal.azure.com and run the command Connect-MgGraph.
- Authenticate and sign in to your account.
- Run the following cmdlet to get the ServicePrincipal ID for the Purview API app in your organization:

```powershell
(Get-MgServicePrincipal -Filter "AppId eq '9ec59623-ce40-4dc8-a635-ed0275b5d58a'").id
```

The next command grants the Purview.ProcessConversationMessages.All permission to the Microsoft Purview account, allowing classification processing. Update the ObjectId in the command and body parameter to the one retrieved in step 1, and update the ResourceId to the ServicePrincipal ID retrieved in the last step.
```powershell
$bodyParam = @{
    "PrincipalId" = "{ObjectID}"
    "ResourceId"  = "{ResourceId}"
    "AppRoleId"   = "a4543e1f-6e5d-4ec9-a54a-f3b8c156163f"
}
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam
```

We also need to add permission for the application to read user accounts so that ChatGPT Enterprise users can be correctly mapped to Entra accounts. First run the following command to get the ServicePrincipal ID for the Graph app in your organization:

```powershell
(Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'").id
```

The following step adds the User.Read.All permission to the Purview application. Update the ObjectId with the one retrieved in step 1, and the ResourceId with the ServicePrincipal ID retrieved in the last step.

```powershell
$bodyParam = @{
    "PrincipalId" = "{ObjectID}"
    "ResourceId"  = "{ResourceId}"
    "AppRoleId"   = "df021288-bdef-4463-88db-98f22de89214"
}
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam
```

3. Store the ChatGPT Enterprise API key in Key Vault:
The steps for setting up Key Vault integration for Data Map can be found here: Create and manage credentials for scans in the Microsoft Purview Data Map | Microsoft Learn. Once set up, the credential will appear in Key Vault.

4. Integrate the ChatGPT Enterprise workspace with Purview:
Create a new data source in Purview Data Map that connects to the ChatGPT Enterprise workspace.
- Go to purview.microsoft.com and select Data Map (search for it if you do not see it on the first screen).
- Select Data sources.
- Select Register.
- Search for ChatGPT Enterprise and select it.
- Provide your ChatGPT Enterprise ID.
- Create the first scan by selecting Table view and filtering on ChatGPT.
- Add your Key Vault credentials to the scan.
- Test the connection and, once complete, click Continue.
- When everything looks correct, click Save and run.
Validate the progress by clicking on the scan name. The first full scan may take an extended period; depending on size, it may take more than 24 hours to complete. Clicking the scan name expands all the runs for that scan. When the scan completes, you can start to use the DSPM for AI experience to review interactions with ChatGPT Enterprise. The mapping to users is based on the ChatGPT Enterprise connection to Entra, with prompts and responses stored in the user's mailbox.

5. Review and monitor data:
Please see this article for required permissions and guidance around Microsoft Purview Data Security Posture Management (DSPM) for AI: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn. Use Purview DSPM for AI analytics and Activity Explorer to review interactions and classifications. You can expand on prompts and responses in ChatGPT Enterprise.

6. Microsoft Purview Communication Compliance:
Communication Compliance (hereafter CC) is a feature of Microsoft Purview that allows you to monitor and detect inappropriate or risky interactions with ChatGPT Enterprise. You can monitor and detect requests and responses that are inappropriate based on ML models, regular Sensitive Information Types, and other classifiers in Purview. This can help you identify jailbreak and prompt injection attacks and flag them to IRM and for case management. Detailed steps to configure CC policies and supported configurations can be found here.

7. Microsoft Purview Insider Risk Management:
We believe that Microsoft Purview Insider Risk Management (hereafter IRM) can serve a key role in protecting your AI workloads long term. With its adaptive protection capabilities, IRM dynamically adjusts user access based on evolving risk levels.
In the event of heightened risk, IRM can enforce Data Loss Prevention (DLP) policies on sensitive content, apply tailored Entra Conditional Access policies, and initiate other necessary actions to effectively mitigate potential risks. This strategic approach helps you apply more stringent policies where it matters, avoiding a boil-the-ocean approach, so your team can get started using AI. To get started, use the signals available to you, including CC signals, to raise IRM tickets and enforce adaptive protection. You should create your own custom IRM policy for this, and include Defender signals as well. Based on elevated risk, you may choose to block users from accessing certain assets such as ChatGPT Enterprise. Please see this article for more detail: Block access for users with elevated insider risk - Microsoft Entra ID | Microsoft Learn.

8. eDiscovery:
eDiscovery of AI interactions is crucial for legal compliance, transparency, accountability, risk management, and data privacy protection. Many industries must preserve and discover electronic communications and interactions to meet regulatory requirements. Including AI interactions in eDiscovery ensures organizations comply with these obligations and preserves relevant evidence for litigation. This process also helps maintain trust by enabling the review of AI decisions and actions, demonstrating due diligence to regulators. See Microsoft Purview eDiscovery solutions | Microsoft Learn.

9. Data Lifecycle Management:
Microsoft Purview offers robust solutions to manage AI data from creation to deletion, including classification, retention, and secure disposal. This ensures that AI interactions are preserved and retrievable for audits, litigation, and compliance purposes. Please see this article for more information: Automatically retain or delete content by using retention policies | Microsoft Learn.
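One automation note before closing: the app-role grants in step 2 can also be scripted outside PowerShell, since New-MgServicePrincipalAppRoleAssignment wraps the Microsoft Graph appRoleAssignments REST endpoint with the same three-field body. Below is a minimal, standard-library-only Python sketch; the helper names and placeholder IDs are my own illustrations, not part of the official guidance, and you still need a valid Graph access token:

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def build_app_role_assignment(principal_id: str, resource_id: str, app_role_id: str) -> dict:
    # Same three fields as the $bodyParam hashtable in step 2.
    return {
        "principalId": principal_id,
        "resourceId": resource_id,
        "appRoleId": app_role_id,
    }

def assign_app_role(token: str, principal_id: str, resource_id: str, app_role_id: str) -> dict:
    # POST /servicePrincipals/{principal-id}/appRoleAssignments mirrors
    # New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId {ObjectId}.
    body = build_app_role_assignment(principal_id, resource_id, app_role_id)
    req = urllib.request.Request(
        f"{GRAPH}/servicePrincipals/{principal_id}/appRoleAssignments",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example payload for Purview.ProcessConversationMessages.All (role ID from step 2);
# the two object IDs below are placeholders you retrieve as described in steps 1-2.
payload = build_app_role_assignment(
    principal_id="<purview-account-object-id>",
    resource_id="<purview-api-service-principal-id>",
    app_role_id="a4543e1f-6e5d-4ec9-a54a-f3b8c156163f",
)
print(payload)
```

The second grant (User.Read.All, role ID df021288-bdef-4463-88db-98f22de89214) uses the same call with the Graph app's service principal as the resource.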
Closing
By following these steps, organizations can leverage the full potential of Microsoft Purview to enhance the security and compliance of their ChatGPT Enterprise interactions. This integration not only provides peace of mind but also empowers organizations to manage their data more effectively. Some of the features listed are still in preview and not fully integrated; please reach out to us if you have any questions or if you have additional requirements.

Using Copilot in Fabric with Confidence: Data Security, Compliance & Governance with DSPM for AI
Introduction
As organizations embrace AI to drive innovation and productivity, ensuring data security, compliance, and governance becomes paramount. Copilot in Microsoft Fabric offers powerful AI-driven insights, but without proper oversight, users can misuse Copilot to expose sensitive data or violate regulatory requirements. Enter Microsoft Purview's Data Security Posture Management (DSPM) for AI: a unified solution that empowers enterprises to monitor, protect, and govern AI interactions across Microsoft and third-party platforms. We are excited to announce the general availability of Microsoft Purview capabilities for Copilot in Fabric, starting with Copilot in Power BI. This blog explores how Purview DSPM for AI integrates with Copilot in Fabric to deliver robust data protection and governance, and provides a step-by-step guide to enable this integration.

Capabilities of Purview DSPM for AI
As organizations adopt AI, implementing data controls and a Zero Trust approach is crucial to mitigate risks like data oversharing and leakage, and potential non-compliant usage in AI. By combining Microsoft Purview and Copilot for Power BI, users can:
- Discover data risks, such as sensitive data in user prompts and responses, in Activity Explorer, and receive recommended actions in Microsoft Purview DSPM for AI Reports to reduce these risks. If you find Copilot in Fabric actions in DSPM for AI Activity Explorer or reports to be potentially inappropriate or malicious, you can look for further information in Insider Risk Management (IRM), through an eDiscovery case, Communication Compliance (CC), or Data Lifecycle Management (DLM).
- Identify risky AI usage with Microsoft Purview Insider Risk Management, such as an inadvertent user who has neglected security best practices and shared sensitive data in AI.
- Govern AI usage with Microsoft Purview Audit, Microsoft Purview eDiscovery, retention policies, and non-compliant or unethical AI usage detection with Purview Communication Compliance.

In more detail:
- Purview Audit provides a detailed log of user and admin activity within Copilot in Fabric, enabling organizations to track access, monitor usage patterns, and support forensic investigations.
- Purview eDiscovery enables legal and investigative teams to identify, collect, and review Copilot in Fabric interactions as part of case workflows, supporting defensible investigations.
- Communication Compliance helps detect potential policy violations or risky behavior in administrator interactions, enabling proactive monitoring and remediation for Copilot in Fabric.
- Data Lifecycle Management allows teams to automate the retention, deletion, and classification of Copilot in Fabric data, reducing storage costs and minimizing risk from outdated or unnecessary information.

Steps to Enable the Integration
To use DSPM for AI from the Microsoft Purview portal, you must first activate Purview Audit, which requires the Entra Compliance Administrator or Entra Global Administrator role. More details on DSPM prerequisites can be found here: Considerations for deploying Microsoft Purview Data Security Posture Management (DSPM) for AI | Microsoft Learn

To enable Purview DSPM for AI for Copilot for Power BI:

Step 1: Enable DSPM for AI Policies
- Navigate to Microsoft Purview DSPM for AI.
- Enable the one-click policy "DSPM for AI – Capture interactions for Copilot experiences".
- Optionally enable the additional policies "Detect risky AI usage" and "Detect unethical behavior in AI apps". These policies can be configured in the Microsoft Purview DSPM for AI portal and tailored to your organization's risk profile.

Step 2: Monitor and Act
- Use DSPM for AI Reports and Activity Explorer to monitor AI interactions.
- Apply IRM, DLM, CC, and eDiscovery actions as needed.

Purview Roles and Permissions Needed by Users
To manage and operate DSPM for AI effectively, assign the following roles:

| Role | Responsibilities |
| --- | --- |
| Purview Compliance Administrator | Full access to configure policies and DSPM for AI setup |
| Purview Security Reader | View reports, dashboards, policies, and AI activity |
| Content Explorer Content Viewer | Additional permission to view the actual prompts and responses, on top of the above |

More details on Purview DSPM for AI roles and permissions can be found here: Permissions for Microsoft Purview Data Security Posture Management for AI | Microsoft Learn

Purview Costs
Microsoft Purview now offers a combination of entitlement-based (per-user-per-month) and pay-as-you-go (PAYG) pricing models. The PAYG model applies to a broader set of Purview capabilities, including Insider Risk Management, Communication Compliance, eDiscovery, and other data security and governance solutions, based on Copilot for Power BI usage volume or complexity. Purview Audit logging of Copilot for Power BI activity remains included at no additional cost as part of Microsoft 365 E5 licensing. This flexible pricing structure ensures that organizations only pay for what they use as data flows through AI models, networks, and applications. For further details, please refer to this blog: New Purview pricing options for protecting AI apps and agents | Microsoft Community Hub

Conclusion
Microsoft Purview DSPM for AI is a game-changer for organizations looking to adopt AI responsibly.
By integrating with Copilot in Fabric, it provides a comprehensive framework to discover, protect, and govern AI interactions, ensuring compliance, reducing risk, and enabling secure innovation. Whether you're a Fabric admin, compliance admin, or security admin, enabling this integration is a strategic step toward building a secure, AI-ready enterprise.

Additional resources
- Use Microsoft Purview to manage data security & compliance for Microsoft Copilot in Fabric | Microsoft Learn
- How to deploy Microsoft Purview DSPM for AI to secure your AI apps
- Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview Data Security Posture Management (DSPM) for AI | Microsoft Learn
- Learn about Microsoft Purview billing models | Microsoft Learn

Secure and govern AI apps and agents with Microsoft Purview
The Microsoft Purview family is here to help you secure and govern data across third-party IaaS and SaaS, multi-platform data environments, while helping you meet the compliance requirements you may be subject to. Purview brings simplicity with a comprehensive set of solutions built on a platform of shared capabilities that helps keep your most important asset, data, safe. With the introduction of AI technology, Purview also expanded its data coverage to include discovering, protecting, and governing the interactions of AI apps and agents, such as Microsoft Copilots (like Microsoft 365 Copilot and Security Copilot), enterprise-built AI apps (like ChatGPT Enterprise), and other consumer AI apps (like DeepSeek) accessed through the browser. To help you view and investigate interactions with all those AI apps, and to create and manage policies to secure and govern them in one centralized place, we have launched Purview Data Security Posture Management (DSPM) for AI. You can learn more about DSPM for AI here with short video walkthroughs: Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn

Purview capabilities for AI apps and agents
To understand our current set of capabilities within Purview to discover, protect, and govern various AI apps and agents, please refer to our Learn doc here: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn

Here is a quick reference guide for the capabilities available today. Note that DLP for Copilot and adherence to sensitivity labels are currently designed to protect content in Microsoft 365. Thus, Security Copilot and Copilot in Fabric, along with Copilot Studio custom agents that do not use Microsoft 365 as a content source, do not have these features available.
Please see the list of AI sites supported by Microsoft Purview DSPM for AI here.

Conclusion
Microsoft Purview can help you discover, protect, and govern the prompts and responses from AI applications in Microsoft Copilot experiences, enterprise AI apps, and other AI apps through its data security and data compliance solutions, while allowing you to view, investigate, and manage interactions in one centralized place in DSPM for AI.

Follow-up reading
Check out the deployment guides for DSPM for AI:
- How to deploy DSPM for AI - https://aka.ms/DSPMforAI/deploy
- How to use DSPM for AI data risk assessment to address oversharing - https://aka.ms/dspmforai/oversharing
- Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing

Explore the Purview SDK:
- Microsoft Purview SDK Public Preview | Microsoft Community Hub (blog)
- Microsoft Purview documentation - purview-sdk | Microsoft Learn
- Build secure and compliant AI applications with Microsoft Purview (video)

References for DSPM for AI:
- Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
- Block Users From Sharing Sensitive Information to Unmanaged AI Apps Via Edge on Managed Devices (preview) | Microsoft Learn, as part of Scenario 7 of Create and deploy a data loss prevention policy | Microsoft Learn
- Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
- Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
- Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
- Downloadable whitepaper: Data Security for AI Adoption | Microsoft

Explore the roadmap for DSPM for AI:
- Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365

Empowering Secure AI Innovation: Data Security and Compliance for AI Agents
As organizations embrace the transformative power of generative AI, agentic AI is quickly becoming a core part of enterprise innovation. Whether organizations are just beginning their AI journey or scaling advanced solutions, one thing is clear: agents are poised to transform every function and workflow across organizations. IDC predicts that over 1 billion new business process agents will be created in the next four years [1]. This surge in AI adoption is empowering employees across roles, from low-code makers to pro-code developers, to build and use AI in new ways. Business leaders are eager to support this momentum, but they also recognize the need to innovate responsibly with AI.

Microsoft Purview's evolution
When Microsoft 365 Copilot launched in November 2023, it sparked a wave of excitement and an immediate question: how do we secure and govern the data powering these AI experiences? Microsoft Purview quickly evolved to meet this need, extending its data security and compliance capabilities to the Microsoft 365 Copilot ecosystem. It delivered discoverability, protection, and governance value that helped customers discover data risks such as data oversharing, protect sensitive data to prevent data loss and insider risks, and govern AI usage to meet regulations and policies. Now, as customers move beyond pre-built agents like Copilot to develop their own AI agents and applications, Microsoft Purview has evolved to extend the same data protections built for Microsoft 365 Copilot to AI agents. Today, those protections span the entire development spectrum, from no-code and low-code tools like Copilot Studio to pro-code environments such as Azure AI Foundry.

Microsoft Purview helps address challenges across the development spectrum
Makers, typically business users or citizen developers who build solutions using low-code or no-code tools, shouldn't need to become security experts to build AI responsibly.
Yet, without proper safeguards, these agents can inadvertently expose sensitive data or violate compliance policies. That is why, with Microsoft Purview, security and IT teams can feel confident about the agents being built in their organizations. When makers build agents through the Agent Builder or directly in Copilot Studio, security admins can set up Microsoft Purview’s data security and compliance controls that work behind the scenes to support makers in building secure and compliant agents. These controls automatically enforce policies, monitor data access, and ensure compliance without requiring makers to take additional actions or become security experts.

Pro-code developers are under increasing pressure to deliver fast, flexible, and seamlessly integrated solutions, yet data security often becomes a deployment blocker or an afterthought. In fact, a recent Microsoft study found that 71% of developer decision-makers acknowledge that these constraints result in security trade-offs and development delays 2. Building enterprise-grade data security and compliance capabilities from scratch is not only time-consuming but also requires deep domain expertise. This is where Microsoft Purview steps in. As an industry leader in data security and compliance, Purview does the heavy lifting so developers don’t have to. Now in preview, the Purview SDK lets developers embed robust, enterprise-ready data protections directly into their AI applications instead of building complex security frameworks on their own. The Purview SDK is a comprehensive set of REST APIs, documentation, and code samples that developers can incorporate into their workflows regardless of their integrated development environment (IDE). This empowers them to move fast without compromising on security or compliance, while Microsoft Purview helps security teams remain in control.
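Because the SDK is a set of REST APIs, it can be called from any stack. The sketch below assembles a hypothetical "process content" request in Python; the endpoint path, header set, and payload field names are illustrative assumptions rather than the documented API contract, so treat this as a shape sketch, not working integration code.

```python
# Hedged sketch: the endpoint path, payload fields, and operation name are
# assumptions for illustration, not the documented Purview SDK contract.
import json

def build_process_content_request(base_url, token, user_id, text):
    """Assemble (but do not send) a REST request asking Purview to evaluate
    a piece of AI-bound content against tenant data security policies."""
    url = f"{base_url}/users/{user_id}/dataSecurityAndGovernance/processContent"
    headers = {
        "Authorization": f"Bearer {token}",  # Entra ID token acquired by the app
        "Content-Type": "application/json",
    }
    body = {
        "contentToProcess": {
            "contentEntries": [{"content": text, "correlationId": "demo-001"}],
            "activityMetadata": {"activity": "uploadText"},
        }
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_process_content_request(
    "https://graph.microsoft.com/v1.0", "<token>",
    "maker@contoso.com", "Summarize the Q3 board deck")
```

An app would POST such a payload and act on the returned policy verdict (for example, blocking the prompt or redacting matches) before handing the content to a model.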
Figure: By embedding Purview APIs into the IDE, developers enable their AI apps to be secured and governed at runtime.

Startups, ISVs, and partners can leverage the Purview SDK to seamlessly integrate Purview’s industry-leading features into their AI agents and applications. This enables their offerings to become Purview-aware, empowering customers to more easily secure and govern data within their AI environments. For example, Christian Veillette, Chief Technology Officer at Arthur Health, a Quisitive customer, states, “The synergistic integration of MazikCare, the Quisitive Intelligence Platform, and the data compliance power of Purview SDK, including its DSPM for AI, forms a foundational pillar for trustworthy and safe AI-driven healthcare transformations. This powerful combination ensures continuous oversight and instant enforcement of compliance policies, giving IT leadership full assurance in the output of every AI model and upholding the highest safety standards. By centralizing policy enforcement, security concerns are significantly eased, empowering leadership to confidently steer their organizations through the AI transformation journey.” Microsoft partner Infotechtion has also leveraged the new Purview SDK to embed Purview value into their GenAI initiatives. Vivek Bhatt, Infotechtion’s Chief Technology Officer, says, “Embedding Purview SDK into Infotechtion's AI governance solution improved trust and security by aligning Gen-AI interactions with Microsoft Purview's enterprise policies.”

Microsoft Purview also natively integrates with Azure AI Foundry, enabling seamless, built-in security and compliance for AI workloads without requiring additional development effort. With this integration, signals from Azure AI Foundry are automatically surfaced in Microsoft Purview’s Data Security Posture Management (DSPM) for AI, Insider Risk Management, and compliance solutions.
This means security teams can monitor AI usage, detect data risks, and enforce compliance policies across AI agents and applications—whether they’re built in-house or with Azure AI Foundry models. This reinforces Microsoft’s commitment to delivering secure-by-default AI innovation—empowering organizations to scale responsibly with confidence.

Figure: Data security admins can now find data security and compliance insights across Microsoft Copilots, agents built with Agent Builder and Copilot Studio, and custom AI apps and agents in Microsoft Purview DSPM for AI.

Explore more partner case studies from Ernst & Young and Infosys to see how they’re leveraging Purview SDK. Learn more about Purview SDK and Microsoft Purview for Azure AI Foundry.

Unified visibility and control

Whether supporting pro-code developers or low-code makers, Microsoft Purview enables organizations to secure and govern AI across the enterprise. With Purview, security teams can discover data security risks, protect sensitive data against data leakage and insider risks, and govern AI interactions.

Discover data security risks

With Data Security Posture Management (DSPM) for AI, data security teams can discover detailed data risk insights in AI interactions across Microsoft Copilots, agents built in Agent Builder and Copilot Studio, and custom AI apps and agents.

Protect sensitive data against data leaks and insider risks

In DSPM for AI, data security admins can also get recommended insights to improve their organization’s security posture, like minimizing risks of data oversharing.
For example, an admin might get a recommendation to set up a data loss prevention (DLP) policy that prevents agents in Microsoft 365 Copilot from using certain labeled documents as grounding data to generate summaries or responses. By setting up this policy, organizations can prevent confidential legal documents—with specific language that could lead to improper guidance—from being summarized. It also ensures that “Internal only” documents aren’t used to create content that might be shared outside the organization.

Figure: Extend data loss prevention (DLP) policies to agents in Microsoft 365 to protect sensitive data.

Agents often pull data from sources like SharePoint and Dataverse, and Microsoft Purview helps protect that data every step of the way. It honors sensitivity labels, enforces access permissions, and applies label inheritance so that AI-generated content carries the same protections as its source. With auto-labeling in Dataverse, sensitive data is classified as soon as it’s ingested—reducing manual effort and maintaining consistent protection. When responses draw from multiple sources with different labels, the most restrictive label is applied to uphold compliance and minimize risk.

Figure: Sensitivity labels are automatically applied to data in Dataverse.

Figure: AI-generated responses inherit and honor the source data’s sensitivity labels.

In addition to data and permission controls that help address data oversharing or leakage, security teams also need ways to detect risky user activities in AI apps and agents that could lead to data security incidents. With risky AI usage indicators, a policy template, and an analytics report in Microsoft Purview Insider Risk Management, security teams with appropriate permissions can detect risky activities—for example, a departing employee receiving an unusual number of AI responses containing sensitive data across Copilots and agents, deviating from their past activity patterns.
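That departing-employee example is essentially a deviation-from-baseline signal. The toy sketch below flags a day whose count of sensitive AI responses spikes far above a user's own history; the z-score method, baseline window, and threshold are illustrative assumptions, not how Insider Risk Management actually computes risk.

```python
# Toy baseline-deviation sketch. All thresholds and the z-score approach are
# illustrative assumptions, not Insider Risk Management's actual scoring.
from statistics import mean, stdev

def is_anomalous(history, today, z_threshold=3.0):
    """history: recent daily counts of AI responses containing sensitive data.
    Returns True when today's count deviates sharply from the user's baseline."""
    if len(history) < 7:          # need a minimal baseline before judging
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:                # perfectly flat history: any increase stands out
        return today > mu
    return (today - mu) / sigma > z_threshold

baseline = [2, 3, 1, 2, 4, 2, 3, 2]   # typical daily counts for this user
print(is_anomalous(baseline, 25))      # a sudden spike before departure
```

A real detector would also weigh content sensitivity, exfiltration channels, and HR signals such as a resignation date, which is what makes the product's correlated indicators more useful than any single metric.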
Security teams can then detect and respond to these potential incidents to minimize their impact. For example, they can configure Adaptive Protection to automatically block a high-risk user from accessing sensitive data.

Figure: An Insider Risk Management alert from a Risky AI usage policy shows a user with anomalous activities.

Govern AI interactions to detect non-compliant usage

Microsoft Purview provides a comprehensive set of tools to govern AI usage and detect non-compliant user activities. AI interactions across Microsoft Copilots, AI apps, and agents are recorded in Audit logs. eDiscovery enables legal and compliance teams with appropriate permissions to collect and review AI-generated content for internal investigations or litigation. Data Lifecycle Management enables teams to set policies to retain or dispose of AI interactions, while Communication Compliance helps detect risky or inappropriate use of AI, such as harmful content or other violations of code-of-conduct policies. Together, these capabilities give organizations the visibility and control they need to innovate responsibly with AI.

Figure: AI interactions across Microsoft Copilots, AI apps, and agents are recorded in Audit logs.

Figure: AI interactions across Microsoft Copilots, AI apps, and agents can be collected and reviewed in eDiscovery.

Figure: Microsoft Purview Communication Compliance can detect non-compliant content in AI prompts across Microsoft Copilots, AI apps, and agents.

Securing the Future of AI Innovation — Explore Additional Resources

As organizations accelerate their adoption of agentic AI, the need for built-in security and compliance has never been more critical. Microsoft Purview empowers both makers and developers to innovate with confidence—ensuring that every AI interaction is secure, compliant, and aligned with enterprise standards.
By embedding protection across the entire development lifecycle, Purview helps organizations unlock the full potential of AI while maintaining the trust, transparency, and control that responsible innovation demands. To dive deeper into how Microsoft Purview supports secure AI development, explore our additional resources, documentation, and integration guides:

Learn more about Security for AI solutions on our webpage
Learn more about Microsoft Purview SDK
Learn more about Purview pricing
Get started with Azure AI Foundry
Get started with Microsoft Purview

1 IDC, 1 Billion New Logical Applications: More Background, Gary Chen, Jim Mercer, April 2024, https://blogs.idc.com/2025/04/04/the-agentic-evolution-of-enterprise-applications/
2 Microsoft, AI App Security Quantitative Study, April 2025

Modern, unified data security in the AI era: New capabilities in Microsoft Purview
AI is transforming how organizations work—but it’s also changing how data moves, who can access it, and how easily it can be exposed. Sensitive data now appears in AI prompts, Copilot responses, and across a growing ecosystem of SaaS and GenAI tools. To keep up, organizations need data security that’s built for how people work with AI today. Microsoft Purview brings together native classification, visibility, protection, and automated workflows across your data estate—all in one integrated platform. Today, we’re highlighting some of our new capabilities that help you:

Uncover data blind spots: Discover hidden risks and improve data security posture, and find sensitive data on endpoints with on-demand classification
Strengthen protection across data flows: Enhance oversharing controls for Microsoft 365 Copilot, expand protection to more Azure data sources, and extend data security to the network layer
Respond faster with automation: Automate investigation workflows with Alert agents in Data Loss Prevention (DLP) and Insider Risk Management (IRM)

Discover hidden risks and improve data security posture

Many security teams struggle with fragmented tools that silo sensitive data visibility across apps and clouds. According to recent studies, 21% of decision-makers cite the lack of unified visibility as a top barrier to effective data security. This leads to gaps in protection and inefficient incident response—ultimately weakening the organization’s overall data security posture. To help organizations address these challenges, last November at Ignite we launched Microsoft Purview Data Security Posture Management (DSPM), and we’re excited to share that this capability is now available. DSPM continuously assesses your data estate, surfaces contextual insights into sensitive data and its usage, and recommends targeted controls to reduce risk and strengthen your data security program.
We’re also bringing new signals from email exfiltration and from user activity in the browser and network into DSPM’s insights and policy recommendations, helping organizations improve their protections and address potential data security gaps. You can now also run deeper investigations in DSPM with 3x more suggested prompts, outcome-based promptbooks, and a new guidance experience that helps interpret unsupported user queries and offers helpful alternatives, increasing usability without hard stops.

Figure: New Security Copilot task-based promptbooks in Purview DSPM.

Learn more about how DSPM can help your organization strengthen its data security posture.

Find sensitive data on endpoints with on-demand classification

Security teams often struggle to uncover sensitive data sitting for a long time on endpoints, one of the most overlooked and unmanaged surfaces in the data estate. Typically, data gets classified when a file is created, modified, or accessed. As a result, older data at rest that hasn’t been touched in a while can remain outside the scope of classification workflows. This lack of visibility increases the risk of exposure, especially for sensitive data that is not actively used or monitored. To tackle this challenge, we are introducing on-demand classification for endpoints. Coming to public preview in July, on-demand classification for endpoints gives security teams a targeted way to scan data at rest on Windows devices, without relying on file activity, to uncover sensitive files that have never been classified or reviewed.
This means you can:

Discover sensitive data on endpoints, including older, unclassified data that may never have been scanned, giving admins visibility into files that typically fall outside traditional classification workflows
Support audit and compliance efforts by identifying sensitive data
Focus scans on specific users, file types, or timelines to get the visibility that really matters
Get the insights needed to prioritize remediation or protection strategies

Security teams can define where or what to focus on by selecting specific users, file types, or last-modified dates. This allows teams to prioritize scans for high-priority scenarios, like users handling sensitive data. Because on-demand classification scans are manually triggered and scoped without complex configuration, organizations can get targeted visibility into sensitive data on endpoints with minimal performance impact.

Complements just-in-time protection

On-demand classification for endpoints also works hand in hand with existing endpoint DLP capabilities like just-in-time (JIT) protection:

JIT protection kicks in during file access, blocking or alerting based on real-time content evaluation
On-demand classification works ahead of time, identifying sensitive data that hasn’t been modified or accessed in an extended period

Used together, they form a layered endpoint protection strategy, ensuring full visibility and protection.

Choosing the right tool

On-demand classification for endpoints is purpose-built for discovering sensitive data at rest on endpoints, especially files that haven’t been accessed or modified for a long time. It gives admins targeted visibility—no user action required. If you’re looking to apply labels, enforce protection policies, or scan files stored on on-premises servers, the Microsoft Purview Information Protection Scanner may be a better fit.
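To make the scoping idea above concrete, the sketch below selects files at rest by extension and last-modified age, the same dimensions the preview lets admins scope by. It is purely illustrative: Purview's scan engine is not driven by a script like this, and the function names are invented for the example.

```python
# Illustrative only: emulate scan scoping by extension and last-modified age.
# Purview's on-demand classification is configured in the portal, not via code.
import os
import time

def scope_files(root, extensions, min_age_days):
    """Yield paths under `root` whose extension is in `extensions` and whose
    last-modified time is at least `min_age_days` in the past (data at rest)."""
    cutoff = time.time() - min_age_days * 86400
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.splitext(name)[1].lower() in extensions \
                    and os.path.getmtime(path) <= cutoff:
                yield path
```

In practice this is the kind of pre-filtering that keeps a targeted scan cheap: only long-untouched files of interesting types are ever opened for content classification.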
It is designed for ongoing policy enforcement and label application across your hybrid environment. Learn more here.

Get started with on-demand classification

On-demand classification is easy to set up, with no agents to install or complex rules to configure. It runs only when you choose, rather than continuously in the background. You stay in control of when and where scans happen, making it a simple and efficient way to extend visibility to endpoints. On-demand classification for endpoints enters public preview in July. Stay tuned for setup guidance and more details as we get closer to launch.

Streamlining technical issue resolution with always-on diagnostics for endpoint devices

Historically, resolving technical support tickets for Purview DLP required admins to manually collect logs and have end users reproduce the original issue at the time of the request. This could lead to delays, extended resolution times, and repeated communication cycles, especially for non-reproducible issues. Today, we’re introducing a new way to capture and share endpoint diagnostics: always-on diagnostics, available in public preview. When submitting support requests for Purview endpoint DLP, customers can now share rich diagnostic data with Microsoft without needing to reproduce the issue at the time of submitting an investigation request such as a support ticket. This capability can be enabled through your endpoint DLP settings. Learn more about always-on diagnostics here.

Strengthening DLP for Microsoft 365 Copilot

As organizations adopt Microsoft 365 Copilot, DLP plays a critical role in minimizing the risk of sensitive data exposure through AI. New enhancements give security teams greater control, visibility, and flexibility when protecting sensitive content in Copilot scenarios.

Expanded protection to labeled emails

DLP for Microsoft 365 Copilot now supports labeled email, available today, in addition to files in SharePoint and OneDrive.
This helps prevent sensitive emails from being processed by Copilot and used as grounding data. This capability applies to emails sent after 1/1/2025.

Alerts and investigations for Copilot access attempts

Security teams can now configure DLP alerts for Microsoft 365 Copilot activity, surfacing attempts to access emails or files with sensitivity labels that match DLP policies. Alert reports include key details like user identity, policy match, and file name, enabling admins to quickly assess what happened, determine if further investigation is needed, and take appropriate follow-up actions. Admins can also choose to notify users directly, reinforcing responsible data use. The rollout will start on June 30 and is expected to be completed by the end of July.

Simulation mode for Copilot DLP policies

As part of the rollout starting on June 30, simulation mode lets admins test Copilot-specific DLP policies before enforcement. By previewing matches without impacting users, security teams can fine-tune rules, reduce false positives, and deploy policies with greater confidence. Learn more about DLP for Microsoft 365 Copilot here.

Extended protection to more Azure data sources

AI development is only as secure as the data that feeds it. That’s why Microsoft Purview Information Protection is expanding its auto-labeling capabilities to cover more Azure data sources. Now in public preview, security teams can automatically apply sensitivity labels to additional Azure data sources, including Azure Cosmos DB, PostgreSQL, KustoDB, MySQL, Azure Files, Azure Databricks, Azure SQL Managed Instances, and Azure Synapse. These additions build on existing coverage for Azure Blob Storage, Azure Data Lake Storage, and Azure SQL Database. These sources commonly fuel analytics pipelines and AI training workloads. With auto-labeling extended to more high-value data sources, sensitivity labels are applied to the data before it’s copied, shared, or integrated into downstream systems.
These labels help enforce protection policies and limit unauthorized access, ensuring sensitive data is handled appropriately across apps and AI workflows. Secure your AI training data: learn how to set up auto-labeling here.

Extending data security to the network layer

With more sensitive data moving through unmanaged SaaS apps and personal AI tools, your network is now a critical security surface. Earlier this year, we announced Purview data security controls for the network layer. With inline data discovery for the network, organizations can detect sensitive data moving outside the trusted boundaries of the organization, such as to unmanaged SaaS apps and cloud services. This helps admins understand how sensitive data can be intentionally or inadvertently exfiltrated to personal instances of apps, unsanctioned GenAI apps, cloud storage boxes, and more. This capability is now available in public preview — learn more here. Visibility into sensitive data sent through the network also includes insights into how users may be sharing data in risky ways. User activities such as file uploads or AI prompt submissions are captured in Insider Risk Management to build richer, more comprehensive profiles of user risk. In turn, these signals will better contextualize future data interactions and enrich policy verdicts. These user risk indicators will become available in the coming weeks.

Automate investigation workflows with Alert Triage Agents in DLP and IRM

Security teams today face a high volume of alerts, often spending hours sorting through false positives and low-priority flags to find the threats that matter. To help security teams focus on what’s truly high risk, we’re excited to share that the Alert Triage Agents in Microsoft Purview Data Loss Prevention (DLP) and Insider Risk Management (IRM) are now available in public preview. These autonomous, Security Copilot-powered agents prioritize the alerts that pose the greatest risk to organizations.
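As a loose illustration of risk-based alert prioritization, the sketch below scores alerts from a few signals and sorts them. Every field name and weight here is an assumption invented for the example; the actual agents reason over far richer content and intent context with Security Copilot.

```python
# Illustrative-only triage scoring. Field names and weights are assumptions,
# not how the Alert Triage Agents actually rank alerts.
def triage_score(alert):
    """Combine content sensitivity and behavioral signals into a priority."""
    score = {"low": 1, "medium": 3, "high": 5}[alert["sensitivity"]]
    # Exfiltration to an untrusted destination raises priority sharply.
    score += 4 if alert.get("exfil_channel") in {"personal_cloud", "usb"} else 0
    # An already-elevated user risk level (e.g., from Adaptive Protection)
    # compounds the score further.
    score += 3 if alert.get("user_risk") == "elevated" else 0
    return score

alerts = [
    {"id": "A1", "sensitivity": "low"},
    {"id": "A2", "sensitivity": "high",
     "exfil_channel": "usb", "user_risk": "elevated"},
]
ranked = sorted(alerts, key=triage_score, reverse=True)
print([a["id"] for a in ranked])
```

The point of the sketch is the shape of the problem: a fixed scoring rule like this is exactly what generates the false-positive load described above, which is why the agents instead analyze content and intent and learn from reviewer feedback.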
Whether it’s identifying high-impact exfiltration attempts in DLP or surfacing potential insider threats in IRM, the agents analyze both content and intent to deliver transparent, explainable findings. Built to learn from user feedback, these agents not only accelerate investigations but also improve over time, empowering teams to prioritize real threats, reduce time spent on false positives, and adapt to evolving risks. Watch the new Mechanics video, or learn more about how to get started here.

A unified approach to modern data security

Disjointed security tools create gaps and increase operational overhead. Microsoft Purview offers a unified data security platform designed to keep pace with how your organization works with AI today. From endpoint visibility to automated security workflows, Purview unifies data security across your estate, giving you one platform for end-to-end data security. As your data estate grows and AI reshapes the way you work, Purview helps you stay ahead—so you can scale securely, reduce risk, and unlock the full productivity potential of AI with confidence. Ready to unify your data security into one integrated platform? Try Microsoft Purview free for 90 days.

How business conduct violations can help understand data security risks
Discover how the integration of Communication Compliance and Insider Risk Management enhances understanding of data security risks by providing deeper insights into user intent on potentially risky activities, ultimately aiding proactive management and safeguarding of sensitive assets within organizations.