microsoft purview communication compliance
33 TopicsArtificial Intelligence & Security
Understanding Artificial Intelligence Artificial intelligence (AI) is a computational system that perform human‑intelligence tasks, learning, reasoning, problem‑solving, perception, and language understanding by leveraging algorithmic and statistical methods to analyse data and make informed decisions. Artificial Intelligence (AI) can also be abbreviated as is the simulation of human intelligence through machines programmed to learn, reason, and act. It blends statistics, machine learning, and robotics to deliver following outcomes: Prediction: The application of statistical modelling and machine learning techniques to anticipate future outcomes, such as detecting fraudulent transactions. Automation: The utilisation of robotics and artificial intelligence to streamline and execute routine processes, exemplified by automated invoice processing. Augmentation: The enhancement of human decision-making and operational capabilities through AI-driven tools, for instance, AI-assisted sales enablement. Artificial Intelligence: Core Capabilities and Market Outlook Key capabilities of AI include: Data-driven decision-making: Analysing large datasets to generate actionable insights and optimise outcomes. Anomaly detection: Identifying irregular patterns or deviations in data for risk mitigation and quality assurance. Visual interpretation: Processing and understanding visual inputs such as images and videos for applications like computer vision. Natural language understanding: Comprehending and interpreting human language to enable accurate information extraction and contextual responses. Conversational engagement: Facilitating human-like interactions through chatbots, virtual assistants, and dialogue systems. With the exponential growth of data, ML learning models and computing power. AI is advancing much faster and as According to industry analyst reports breakthroughs in deep learning and neural network architectures have enabled highly sophisticated applications across diverse sectors, including healthcare, finance, manufacturing, and retail. The global AI market is on a trajectory of significant expansion, projected to increase nearly 5X by 2030, from $391 billion in 2025 to $1.81 trillion. This growth corresponds to a compound annual growth rate (CAGR) of 35.9% during the forecast period. These projections are estimates and subject to change as per rapid growth and advancement in the AI Era. AI and Cloud Synergy AI, and cloud computing form a powerful technological mixture. Digital assistants are offering scalable, cloud-powered intelligence. Cloud platforms such as Azure provide pre-trained models and services, enabling businesses to deploy AI solutions efficiently. Core AI Workloads Capabilities Machine Learning Machine learning (ML) underpins most AI systems by enabling models to learn from historical and real-time data to make predictions, classifications, and recommendations. These models adapt over time as they are exposed to new data, improving accuracy and robustness. Example use cases: Credit risk scoring in banking, demand forecasting in retail, and predictive maintenance in manufacturing. Anomaly Detection Anomaly detection techniques identify deviations from expected patterns in data, systems, or processes. This capability is critical for risk management and operational resilience, as it enables early detection of fraud, security breaches, or equipment failures. Example use cases: Fraud detection in financial transactions, network intrusion monitoring in cybersecurity, and quality control in industrial production. Natural Language Processing (NLP) NLP focuses on enabling machines to understand, interpret, and generate human language in both text and speech formats. This capability powers a wide range of applications that require contextual comprehension and semantic accuracy. Example use cases: Sentiment analysis for customer feedback, document summarisation for legal and compliance teams, and multilingual translation for global operations. Principles of Responsible AI To ensure ethical and trustworthy AI, organisations must embrace: Reliability & Safety Privacy & Security Inclusiveness Fairness Transparency Accountability These principles are embedded in frameworks like the Responsible-AI-Standard and reinforced by governance models such as Microsoft AI Governance Framework. Responsible AI Principles and Approach | Microsoft AI AI and Security AI introduces both opportunities and risks. A responsible approach to AI security involves three dimensions: Risk Mitigation: It Is addressing threats from immature or malicious AI applications. Security Applications: These are used to enhance AI security and public safety. Governance Systems: Establishing frameworks to manage AI risks and ensure safe development. Security Risks and Opportunities Due to AI Transformation AI’s transformative nature brings new challenges: Cybersecurity: This brings the opportunities and advancement to track, detect and act against Vulnerabilities in infrastructure and learning models. Data Security: This helps the tool and solutions such as Microsoft Purview to prevent data security by performing assessments, creating Data loss prevention policies applying sensitivity labels. Information Security: The biggest risk is securing the information and due to the AI era of transformation securing IS using various AI security frameworks. These concerns are echoed in The Crucial Role of Data Security Posture Management in the AI Era, which highlights insider threats, generative AI risks, and the need for robust data governance. AI in Security Applications AI’s capabilities in data analysis and decision-making enable innovative security solutions: Network Protection: applications include use of AI algorithms for intrusion detection, malware detection, security situational awareness, and threat early warning, etc. Data Management: applications refer to the use of AI technologies to achieve data protection objectives such as hierarchical classification, leak prevention, and leak traceability. Intelligent Security: applications refer to the use of AI technology to upgrade the security field from passive defence toward the intelligent direction, developing of active judgment and timely early warning. Financial Risk Control: applications use AI technology to improve the efficiency and accuracy of credit assessment, risk management, etc., and assisting governments in the regulation of financial transactions. AI Security Management Effective AI security requires: Regulations & Policies: Establish and safety management laws specifically designed to for governance by regulatory authorities and management policies for key application domains of AI and prominent security risks. Standards & Specifications: Industry-wide benchmarks, along with international and domestic standards can be used to support AI safety. Technological Methods: Early detection with Modern set of tools such as Defender for AI can be used to support to detect and mitigate and remediate AI threats. Security Assessments: Organization should use proper tools and platforms for evaluating AI risks and perform assessments regularly using automated tools approach Conclusion AI is transforming how organizations operate, innovate, and secure their environments. As AI capabilities evolve, integrating security and governance considerations from the outset remains critical. By combining responsible AI principles, effective governance, and appropriate security measures, organizations can work toward deploying AI technologies in a manner that supports both innovation and trust. Industry projections suggest continued growth in AI‑related security investments over the coming years, reflecting increased focus on managing AI risks alongside its benefits. These estimates are subject to change and should be interpreted in the context of evolving technologies and regulatory developments. Disclaimer References to Microsoft products and frameworks are for informational purposes only and do not imply endorsement, guarantee, or contractual commitment. Market projections referenced are based on publicly available industry analyses and are subject to change.How business conduct violations can help understand data security risks
Discover how the integration of Communication Compliance and Insider Risk Management enhances understanding of data security risks by providing deeper insights into user intent on potentially risky activities, ultimately aiding proactive management and safeguarding of sensitive assets within organizations.Security as the core primitive - Securing AI agents and apps
This week at Microsoft Ignite, we shared our vision for Microsoft security -- In the agentic era, security must be ambient and autonomous, like the AI it protects. It must be woven into and around everything we build—from silicon to OS, to agents, apps, data, platforms, and clouds—and throughout everything we do. In this blog, we are going to dive deeper into many of the new innovations we are introducing this week to secure AI agents and apps. As I spend time with our customers and partners, there are four consistent themes that have emerged as core security challenges to secure AI workloads. These are: preventing agent sprawl and access to resources, protecting against data oversharing and data leaks, defending against new AI threats and vulnerabilities, and adhering to evolving regulations. Addressing these challenges holistically requires a coordinated effort across IT, developers, and security leaders, not just within security teams and to enable this, we are introducing several new innovations: Microsoft Agent 365 for IT, Foundry Control Plane in Microsoft Foundry for developers, and the Security Dashboard for AI for security leaders. In addition, we are releasing several new purpose-built capabilities to protect and govern AI apps and agents across Microsoft Defender, Microsoft Entra, and Microsoft Purview. Observability at every layer of the stack To facilitate the organization-wide effort that it takes to secure and govern AI agents and apps – IT, developers, and security leaders need observability (security, management, and monitoring) at every level. IT teams need to enable the development and deployment of any agent in their environment. To ensure the responsible and secure deployment of agents into an organization, IT needs a unified agent registry, the ability to assign an identity to every agent, manage the agent’s access to data and resources, and manage the agent’s entire lifecycle. In addition, IT needs to be able to assign access to common productivity and collaboration tools, such as email and file storage, and be able to observe their entire agent estate for risks such as over-permissioned agents. Development teams need to build and test agents, apply security and compliance controls by default, and ensure AI models are evaluated for safety guardrails and security vulnerabilities. Post deployment, development teams must observe agents to ensure they are staying on task, accessing applications and data sources appropriately, and operating within their cost and performance expectations. Security & compliance teams must ensure overall security of their AI estate, including their AI infrastructure, platforms, data, apps, and agents. They need comprehensive visibility into all their security risks- including agent sprawl and resource access, data oversharing and leaks, AI threats and vulnerabilities, and complying with global regulations. They want to address these risks by extending their existing security investments that they are already invested in and familiar with, rather than using siloed or bolt-on tools. These teams can be most effective in delivering trustworthy AI to their organizations if security is natively integrated into the tools and platforms that they use every day, and if those tools and platforms share consistent security primitives such as agent identities from Entra; data security and compliance controls from Purview; and security posture, detections, and protections from Defender. With the new capabilities being released today, we are delivering observability at every layer of the AI stack, meeting IT, developers, and security teams where they are in the tools they already use to innovate with confidence. For IT Teams - Introducing Microsoft Agent 365, the control plane for agents, now in preview The best infrastructure for managing your agents is the one you already use to manage your users. With Agent 365, organizations can extend familiar tools and policies to confidently deploy and secure agents, without reinventing the wheel. By using the same trusted Microsoft 365 infrastructure, productivity apps, and protections, organizations can now apply consistent and familiar governance and security controls that are purpose-built to protect against agent-specific threats and risks. gement and governance of agents across organizations Microsoft Agent 365 delivers a unified agent Registry, Access Control, Visualization, Interoperability, and Security capabilities for your organization. These capabilities work together to help organizations manage agents and drive business value. The Registry powered by the Entra provides a complete and unified inventory of all the agents deployed and used in your organization including both Microsoft and third-party agents. Access Control allows you to limit the access privileges of your agents to only the resources that they need and protect their access to resources in real time. Visualization gives organizations the ability to see what matters most and gain insights through a unified dashboard, advanced analytics, and role-based reporting. Interop allows agents to access organizational data through Work IQ for added context, and to integrate with Microsoft 365 apps such as Outlook, Word, and Excel so they can create and collaborate alongside users. Security enables the proactive detection of vulnerabilities and misconfigurations, protects against common attacks such as prompt injections, prevents agents from processing or leaking sensitive data, and gives organizations the ability to audit agent interactions, assess compliance readiness and policy violations, and recommend controls for evolving regulatory requirements. Microsoft Agent 365 also includes the Agent 365 SDK, part of Microsoft Agent Framework, which empowers developers and ISVs to build agents on their own AI stack. The SDK enables agents to automatically inherit Microsoft's security and governance protections, such as identity controls, data security policies, and compliance capabilities, without the need for custom integration. For more details on Agent 365, read the blog here. For Developers - Introducing Microsoft Foundry Control Plane to observe, secure and manage agents, now in preview Developers are moving fast to bring agents into production, but operating them at scale introduces new challenges and responsibilities. Agents can access tools, take actions, and make decisions in real time, which means development teams must ensure that every agent behaves safely, securely, and consistently. Today, developers need to work across multiple disparate tools to get a holistic picture of the cybersecurity and safety risks that their agents may have. Once they understand the risk, they then need a unified and simplified way to monitor and manage their entire agent fleet and apply controls and guardrails as needed. Microsoft Foundry provides a unified platform for developers to build, evaluate and deploy AI apps and agents in a responsible way. Today we are excited to announce that Foundry Control Plane is available in preview. This enables developers to observe, secure, and manage their agent fleets with built-in security, and centralized governance controls. With this unified approach, developers can now identify risks and correlate disparate signals across their models, agents, and tools; enforce consistent policies and quality gates; and continuously monitor task adherence and runtime risks. Foundry Control Plane is deeply integrated with Microsoft’s security portfolio to provide a ‘secure by design’ foundation for developers. With Microsoft Entra, developers can ensure an agent identity (Agent ID) and access controls are built into every agent, mitigating the risk of unmanaged agents and over permissioned resources. With Microsoft Defender built in, developers gain contextualized alerts and posture recommendations for agents directly within the Foundry Control Plane. This integration proactively prevents configuration and access risks, while also defending agents from runtime threats in real time. Microsoft Purview’s native integration into Foundry Control Plane makes it easy to enable data security and compliance for every Foundry-built application or agent. This allows Purview to discover data security and compliance risks and apply policies to prevent user prompts and AI responses from safety and policy violations. In addition, agent interactions can be logged and searched for compliance and legal audits. This integration of the shared security capabilities, including identity and access, data security and compliance, and threat protection and posture ensures that security is not an afterthought; it’s embedded at every stage of the agent lifecycle, enabling you to start secure and stay secure. For more details, read the blog. For Security Teams - Introducing Security Dashboard for AI - unified risk visibility for CISOs and AI risk leaders, coming soon AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions across their entire AI estate, spanning AI platforms, apps, and agents. 90% of security professionals, including CISOs, report that their responsibilities have expanded to include data governance and AI oversight within the past year. 1 At the same time, 86% of risk managers say disconnected data and systems lead to duplicated efforts and gaps in risk coverage. 2 To address these needs, we are excited to introduce the Security Dashboard for AI. This serves as a unified dashboard that aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview. This unified dashboard allows CISOs and AI risk leaders to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. For example, you can see your full AI inventory and get visibility into a quarantined agent, flagged for high data risk due to oversharing sensitive information in Purview. The dashboard then correlates that signal with identity insights from Entra and threat protection alerts from Defender to provide a complete picture of exposure. From there, you can delegate tasks to the appropriate teams to enforce policies and remediate issues quickly. With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, there’s nothing new to buy. If you’re already using Microsoft security products to secure AI, you’re already a Security Dashboard for AI customer. Figure 5: Security Dashboard for AI provides CISOs and AI risk leaders with a unified view of their AI risk by bringing together their AI inventory, AI risk, and security recommendations to strengthen overall posture Together, these innovations deliver observability and security across IT, development, and security teams, powered by Microsoft’s shared security capabilities. With Microsoft Agent 365, IT teams can manage and secure agents alongside users. Foundry Control Plane gives developers unified governance and lifecycle controls for agent fleets. Security Dashboard for AI provides CISOs and AI risk leaders with a consolidated view of AI risks across platforms, apps, and agents. Added innovation to secure and govern your AI workloads In addition to the IT, developer, and security leader-focused innovations outlined above, we continue to accelerate our pace of innovation in Microsoft Entra, Microsoft Purview, and Microsoft Defender to address the most pressing needs for securing and governing your AI workloads. These needs are: Manage agent sprawl and resource access e.g. managing agent identity, access to resources, and permissions lifecycle at scale Prevent data oversharing and leaks e.g. protecting sensitive information shared in prompts, responses, and agent interactions Defend against shadow AI, new threats, and vulnerabilities e.g. managing unsanctioned applications, preventing prompt injection attacks, and detecting AI supply chain vulnerabilities Enable AI governance for regulatory compliance e.g. ensuring AI development, operations, and usage comply with evolving global regulations and frameworks Manage agent sprawl and resource access 76% of business leaders expect employees to manage agents within the next 2–3 years. 3 Widespread adoption of agents is driving the need for visibility and control, which includes the need for a unified registry, agent identities, lifecycle governance, and secure access to resources. Today, Microsoft Entra provides robust identity protection and secure access for applications and users. However, organizations lack a unified way to manage, govern, and protect agents in the same way they manage their users. Organizations need a purpose-built identity and access framework for agents. Introducing Microsoft Entra Agent ID, now in preview Microsoft Entra Agent ID offers enterprise-grade capabilities that enable organizations to prevent agent sprawl and protect agent identities and their access to resources. These new purpose-built capabilities enable organizations to: Register and manage agents: Get a complete inventory of the agent fleet and ensure all new agents are created with an identity built-in and are automatically protected by organization policies to accelerate adoption. Govern agent identities and lifecycle: Keep the agent fleet under control with lifecycle management and IT-defined guardrails for both agents and people who create and manage them. Protect agent access to resources: Reduce risk of breaches, block risky agents, and prevent agent access to malicious resources with conditional access and traffic inspection. Agents built in Microsoft Copilot Studio, Microsoft Foundry, and Security Copilot get an Entra Agent ID built-in at creation. Developers can also adopt Entra Agent ID for agents they build through Microsoft Agent Framework, Microsoft Agent 365 SDK, or Microsoft Entra Agent ID SDK. Read the Microsoft Entra blog to learn more. Prevent data oversharing and leaks Data security is more complex than ever. Information Security Media Group (ISMG) reports that 80% of leaders cite leakage of sensitive data as their top concern. 4 In addition to data security and compliance risks of generative AI (GenAI) apps, agents introduces new data risks such as unsupervised data access, highlighting the need to protect all types of corporate data, whether it is accessed by employees or agents. To mitigate these risks, we are introducing new Microsoft Purview data security and compliance capabilities for Microsoft 365 Copilot and for agents and AI apps built with Copilot Studio and Microsoft Foundry, providing unified protection, visibility, and control for users, AI Apps, and Agents. New Microsoft Purview controls safeguard Microsoft 365 Copilot with real-time protection and bulk remediation of oversharing risks Microsoft Purview and Microsoft 365 Copilot deliver a fully integrated solution for protecting sensitive data in AI workflows. Based on ongoing customer feedback, we’re introducing new capabilities to deliver real-time protection for sensitive data in M365 Copilot and accelerated remediation of oversharing risks: Data risk assessments: Previously, admins could monitor oversharing risks such as SharePoint sites with unprotected sensitive data. Now, they can perform item-level investigations and bulk remediation for overshared files in SharePoint and OneDrive to quickly reduce oversharing exposure. Data Loss Prevention (DLP) for M365 Copilot: DLP previously excluded files with sensitivity labels from Copilot processing. Now in preview, DLP also prevents prompts that include sensitive data from being processed in M365 Copilot, Copilot Chat, and Copilot agents, and prevents Copilot from using sensitive data in prompts for web grounding. Priority cleanup for M365 Copilot assets: Many organizations have org-wide policies to retain or delete data. Priority cleanup, now generally available, lets admins delete assets that are frequently processed by Copilot, such as meeting transcripts and recordings, on an independent schedule from the org-wide policies while maintaining regulatory compliance. On-demand classification for meeting transcripts: Purview can now detect sensitive information in meeting transcripts on-demand. This enables data security admins to apply DLP policies and enforce Priority cleanup based on the sensitive information detected. & bulk remediation Read the full Data Security blog to learn more. Introducing new Microsoft Purview data security capabilities for agents and apps built with Copilot Studio and Microsoft Foundry, now in preview Microsoft Purview now extends the same data security and compliance for users and Copilots to agents and apps. These new capabilities are: Enhanced Data Security Posture Management: A centralized DSPM dashboard that provides observability, risk assessment, and guided remediation across users, AI apps, and agents. Insider Risk Management (IRM) for Agents: Uniquely designed for agents, using dedicated behavioral analytics, Purview dynamically assigns risk levels to agents based on their risky handing of sensitive data and enables admins to apply conditional policies based on that risk level. Sensitive data protection with Azure AI Search: Azure AI Search enables fast, AI-driven retrieval across large document collections, essential for building AI Apps. When apps or agents use Azure AI Search to index or retrieve data, Purview sensitivity labels are preserved in the search index, ensuring that any sensitive information remains protected under the organization’s data security & compliance policies. For more information on preventing data oversharing and data leaks - Learn how Purview protects and governs agents in the Data Security and Compliance for Agents blog. Defend against shadow AI, new threats, and vulnerabilities AI workloads are subject to new AI-specific threats like prompt injections attacks, model poisoning, and data exfiltration of AI generated content. Although security admins and SOC analysts have similar tasks when securing agents, the attack methods and surfaces differ significantly. To help customers defend against these novel attacks, we are introducing new capabilities in Microsoft Defender that deliver end-to-end protection, from security posture management to runtime defense. Introducing Security Posture Management for agents, now in preview As organizations adopt AI agents to automate critical workflows, they become high-value targets and potential points of compromise, creating a critical need to ensure agents are hardened, compliant, and resilient by preventing misconfigurations and safeguarding against adversarial manipulation. Security Posture Management for agents in Microsoft Defender now provides an agent inventory for security teams across Microsoft Foundry and Copilot Studio agents. Here, analysts can assess the overall security posture of an agent, easily implement security recommendations, and identify vulnerabilities such as misconfigurations and excessive permissions, all aligned to the MITRE ATT&CK framework. Additionally, the new agent attack path analysis visualizes how an agent’s weak security posture can create broader organizational risk, so you can quickly limit exposure and prevent lateral movement. Introducing Threat Protection for agents, now in preview Attack techniques and attack surfaces for agents are fundamentally different from other assets in your environment. That’s why Defender is delivering purpose-built protections and detections to help defend against them. Defender is introducing runtime protection for Copilot Studio agents that automatically block prompt injection attacks in real time. In addition, we are announcing agent-specific threat detections for Copilot Studio and Microsoft Foundry agents coming soon. Defender automatically correlates these alerts with Microsoft’s industry-leading threat intelligence and cross-domain security signals to deliver richer, contextualized alerts and security incident views for the SOC analyst. Defender’s risk and threat signals are natively integrated into the new Microsoft Foundry Control Plane, giving development teams full observability and the ability to act directly from within their familiar environment. Finally, security analysts will be able to hunt across all agent telemetry in the Advanced Hunting experience in Defender, and the new Agent 365 SDK extends Defender’s visibility and hunting capabilities to third-party agents, starting with Genspark and Kasisto, giving security teams even more coverage across their AI landscape. To learn more about how you can harden the security posture of your agents and defend against threats, read the Microsoft Defender blog. Enable AI governance for regulatory compliance Global AI regulations like the EU AI Act and NIST AI RMF are evolving rapidly; yet, according to ISMG, 55% of leaders report lacking clarity on current and future AI regulatory requirements. 5 As enterprises adopt AI, they must ensure that their AI innovation aligns with global regulations and standards to avoid costly compliance gaps. Introducing new Microsoft Purview Compliance Manager capabilities to stay ahead of evolving AI regulations, now in preview Today, Purview Compliance Manager provides over 300 pre-built assessments for common industry, regional, and global standards and regulations. However, the pace of change for new AI regulations requires controls to be continuously re-evaluated and updated so that organizations can adapt to ongoing changes in regulations and stay compliant. To address this need, Compliance Manager now includes AI-powered regulatory templates. AI-powered regulatory templates enable real-time ingestion and analysis of global regulatory documents, allowing compliance teams to quickly adapt to changes as they happen. As regulations evolve, the updated regulatory documents can be uploaded to Compliance Manager, and the new requirements are automatically mapped to applicable recommended actions to implement controls across Microsoft Defender, Microsoft Entra, Microsoft Purview, Microsoft 365, and Microsoft Foundry. Automated actions by Compliance Manager further streamline governance, reduce manual workload, and strengthen regulatory accountability. Introducing expanded Microsoft Purview compliance capabilities for agents and AI apps now in preview Microsoft Purview now extends its compliance capabilities across agent-generated interactions, ensuring responsible use and regulatory alignment as AI becomes deeply embedded across business processes. New capabilities include expanded coverage for: Audit: Surface agent interactions, lifecycle events, and data usage with Purview Audit. Unified audit logs across user and agent activities, paired with traceability for every agent using an Entra Agent ID, support investigation, anomaly detection, and regulatory reporting. Communication Compliance: Detect prompts sent to agents and agent-generated responses containing inappropriate, unethical, or risky language, including attempts to manipulate agents into bypassing policies, generating risky content, or producing noncompliant outputs. When issues arise, data security admins get full context, including the prompt, the agent’s output, and relevant metadata, so they can investigate and take corrective action Data Lifecycle Management: Apply retention and deletion policies to agent-generated content and communication flows to automate lifecycle controls and reduce regulatory risk. Read about Microsoft Purview data security for agents to learn more. Finally, we are extending our data security, threat protection, and identity access capabilities to third-party apps and agents via the network. Advancing Microsoft Entra Internet Access Secure Web + AI Gateway - extend runtime protections to the network, now in preview Microsoft Entra Internet Access, part of the Microsoft Entra Suite, has new capabilities to secure access to and usage of GenAI at the network level, marking a transition from Secure Web Gateway to Secure Web and AI Gateway. Enterprises can accelerate GenAI adoption while maintaining compliance and reducing risk, empowering employees to experiment with new AI tools safely. The new capabilities include: Prompt injection protection which blocks malicious prompts in real time by extending Azure AI Prompt Shields to the network layer. Network file filtering which extends Microsoft Purview to inspect files in transit and prevents regulated or confidential data from being uploaded to unsanctioned AI services. Shadow AI Detection that provides visibility into unsanctioned AI applications through Cloud Application Analytics and Defender for Cloud Apps risk scoring, empowering security teams to monitor usage trends, apply Conditional Access, or block high-risk apps instantly. Unsanctioned MCP server blocking prevents access to MCP servers from unauthorized agents. With these controls, you can accelerate GenAI adoption while maintaining compliance and reducing risk, so employees can experiment with new AI tools safely. Read the Microsoft Entra blog to learn more. As AI transforms the enterprise, security must evolve to meet new challenges—spanning agent sprawl, data protection, emerging threats, and regulatory compliance. Our approach is to empower IT, developers, and security leaders with purpose-built innovations like Agent 365, Foundry Control Plane, and the Security Dashboard for AI. These solutions bring observability, governance, and protection to every layer of the AI stack, leveraging familiar tools and integrated controls across Microsoft Defender, Microsoft Entra, and Microsoft Purview. The future of security is ambient, autonomous, and deeply woven into the fabric of how we build, deploy, and govern AI systems. Explore additional resources Learn more about Security for AI solutions on our webpage Learn more about Microsoft Agent 365 Learn more about Microsoft Entra Agent ID Get started with Microsoft 365 Copilot Get started with Microsoft Copilot Studio Get started with Microsoft Foundry Get started with Microsoft Defender for Cloud Get started with Microsoft Entra Get started with Microsoft Purview Get started with Microsoft Purview Compliance Manager Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial 1 Bedrock Security, 2025 Data Security Confidence Index, published Mar 17, 2025. 2 AuditBoard & Ascend2, Connected Risk Report 2024; as cited by MIT Sloan Management Review, Spring 2025. 3 KPMG AI Quarterly Pulse Survey | Q3 2025. September 2025. n= 130 U.S.-based C-suite and business leaders representing organizations with annual revenue of $1 billion or more 4 First Annual Generative AI study: Business Rewards vs. Security Risks, , Q3 2023, ISMG, N=400 5 First Annual Generative AI study: Business Rewards vs. Security Risks, Q3 2023, ISMG, N=400Secure and govern AI apps and agents with Microsoft Purview
The Microsoft Purview family is here to help you secure and govern data across third party IaaS and Saas, multi-platform data environment, while helping you meet compliance requirements you may be subject to. Purview brings simplicity with a comprehensive set of solutions built on a platform of shared capabilities, that helps keep your most important asset, data, safe. With the introduction of AI technology, Purview also expanded its data coverage to include discovering, protecting, and governing the interactions of AI apps and agents, such as Microsoft Copilots like Microsoft 365 Copilot and Security Copilot, Enterprise built AI apps like Chat GPT enterprise, and other consumer AI apps like DeepSeek, accessed through the browser. To help you view, investigate interactions with all those AI apps, and to create and manage policies to secure and govern them in one centralized place, we have launched Purview Data Security Posture Management (DSPM) for AI. You can learn more about DSPM for AI here with short video walkthroughs: Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn Purview capabilities for AI apps and agents To understand our current set of capabilities within Purview to discover, protect, and govern various AI apps and agents, please refer to our Learn doc here: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Here is a quick reference guide for the capabilities available today: Note that currently, DLP for Copilot and adhering to sensitivity label are currently designed to protect content in Microsoft 365. Thus, Security Copilot and Copilot in Fabric, along with Copilot studio custom agents that do not use Microsoft 365 as a content source, do not have these features available. Please see list of AI sites supported by Microsoft Purview DSPM for AI here Conclusion Microsoft Purview can help you discover, protect, and govern the prompts and responses from AI applications in Microsoft Copilot experiences, Enterprise AI apps, and other AI apps through its data security and data compliance solutions, while allowing you to view, investigate, and manage interactions in one centralized place in DSPM for AI. Follow up reading Check out the deployment guides for DSPM for AI How to deploy DSPM for AI - https://aka.ms/DSPMforAI/deploy How to use DSPM for AI data risk assessment to address oversharing - https://aka.ms/dspmforai/oversharing Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing Explore the Purview SDK Microsoft Purview SDK Public Preview | Microsoft Community Hub (blog) Microsoft Purview documentation - purview-sdk | Microsoft Learn Build secure and compliant AI applications with Microsoft Purview (video) References for DSPM for AI Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn Block Users From Sharing Sensitive Information to Unmanaged AI Apps Via Edge on Managed Devices (preview) | Microsoft Learn as part of Scenario 7 of Create and deploy a data loss prevention policy | Microsoft Learn Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn Downloadable whitepaper: Data Security for AI Adoption | Microsoft Explore the roadmap for DSPM for AI Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365PMPurEmpowering organizations with integrated data security: What’s new in Microsoft Purview
Today, data moves across clouds, apps, and devices at an unprecedented speed, often outside the visibility of siloed legacy tools. The rise of autonomous agents, generative AI, and distributed data ecosystems means that traditional perimeter-based security models are no longer sufficient. Even though companies are spending more than $213 billion globally, they still face several persistent security challenges: Fragmented tools don’t integrate together well and leave customers lacking full visibility of their data security risks The growing use of AI in the workplace is creating new data risks for companies to manage The shortage of skilled cybersecurity professionals is making it difficult to accomplish data security objectives Microsoft is a global leader in cloud, productivity, and security solutions. Microsoft Purview benefits from this breadth of offerings, integrating seamlessly across Microsoft 365, Azure, Microsoft Fabric, and other Microsoft platforms — while also working in harmony with complementary security tools. Unlike fragmented point solutions, Purview delivers an end-to-end data security platform built into the productivity and collaboration tools organizations already rely on. This deep understanding of data within Microsoft environments, combined with continually improving external data risk detections, allows customers to simplify their security stack, increase visibility, and act on data risks more quickly. At Ignite, we’re introducing the next generation of data security — delivering advanced protection and operational efficiency, so security teams can move at business speed while maintaining control of their data. Go beyond visibility into action, across your data estate Many customers today lack a comprehensive view of how to holistically address data security risks and properly manage their data security posture. To help customers strengthen data security across their data estate, we are excited to announce the new, enhanced Microsoft Purview Data Security Posture Management (DSPM). This new AI-powered DSPM experience unifies current Purview DSPM and DSPM for AI capabilities to create a central entry point for data security insights and controls, from which organizations can take action to continually improve their data security posture and prioritize risks. The new capabilities in the enhanced DSPM experience are: Outcome-Based workflows: Choose a data security objective and see related metrics, risk patterns, a recommended action plan and its impact - going from insight to action. Expanded coverage and remediation on Data Risk Assessments: Conduct item-level analysis with new remediation actions like bulk disabling of overshared SharePoint links. Out-of-box posture reports: Uncover data protection gaps and track security posture improvements with out-of-box reports that provide rich context on label usage, auto-labeling effectiveness, posture drift through label transitions, and DLP policy activities. AI Observability: Surface an organization’s agent inventory with assigned agent risk level and agent posture metrics based on agentic interactions with the organization’s data. New Security Copilot Agent: Accelerate the discovery and analysis of sensitive data to uncover hidden risks across files, emails, and messages. Gain visibility of non-Microsoft data within your data estate: Enable a unified view of data risks by gaining visibility into Salesforce, Snowflake, Google Cloud Platform, and Databricks – available through integrations with external partners via Microsoft Sentinel. These DSPM enhancements will be available in Public Preview within the upcoming weeks. Learn more in our blog dedicated to the announcement of the new Microsoft Purview DSPM. Together, these innovations reflect a larger shift: data security is no longer about silos—it’s about unified visibility and control everywhere data lives and having a comprehensive understanding of the data estate to detect and prevent data risks. Organizations trust Microsoft for their productivity and security platforms, but their footprint spans across third-party data environments too. That’s why Purview continues to expand protection beyond Microsoft environments. In addition to bringing in 3rd party data into DSPM, we are also expanding auto-labeling to three new Data Map sources, adding to the data sources we previously announced. Currently in public preview, the new sources include Snowflake, SQL Server, and Amazon S3. Once connected to Purview, admins gain an “at-a-glance” view of all data sources and can automatically apply sensitivity labels, enforcing consistent security policies without manual effort. This helps organizations discover sensitive information at scale, reduce the risk of data exposure, and ensure safer AI adoption all while simplifying governance through centralized policy management and visibility across their entire data estate. Enable AI adoption and prevent data oversharing As organizations adopt more autonomous agents, new risks emerge, such as unsupervised data access and creation, cascading agent interactions, and unclear data activity accountability. Besides AI Observability in DSPM providing details on the inventory and risk level of the agents, Purview is expanding its industry-leading data security and compliance capabilities to secure and govern agents that inherit users’ policies and controls, as well as agents that have their own unique IDs, policies, and controls. This includes agent types across Microsoft 365 Copilot, Copilot Studio, Microsoft Foundry, and third-party platforms. Key enhancements include: Extension of Purview Information Protection and Data Loss Prevention policies to autonomous agents: Scope autonomous agents with an Agent ID into Purview policies that work for users across Microsoft 365 apps, including Exchange, SharePoint, and Teams. Microsoft Purview Insider Risk Management for Agents: With dedicated indicators and behavioral analytics to flag specific risky agent activities, enable proactive investigation by assigning risk levels to each agent. Extension of Purview data compliance capabilities to agent interactions: Microsoft Purview Communication Compliance, Data Lifecycle Management, Audit, and eDiscovery extend to agent interactions, supporting responsible use, secure retention, and agentic accountability. Purview SDK embedded in Agent Framework SDK: Purview SDK embedded in Agent Framework SDK enables developers to integrate enterprise-grade security, compliance, and governance into AI agents. It delivers automatic data classification, prevents sensitive data leaks and oversharing, and provides visibility and control for regulatory compliance, empowering secure adoption of AI agents in complex environments. Purview integration with Foundry: Purview is now enabled within Foundry, allowing Foundry admins to activate Microsoft Purview on their subscription. Once enabled, interaction data from all apps and agents flows into Purview for centralized compliance, governance, and posture management of AI data. Azure AI Search honors Purview labels and policies: Azure AI Search now ingests Microsoft Purview sensitivity labels and enforces corresponding protection policies through built-in indexers (SharePoint, OneLake, Azure Blob, ADLS Gen2). This ensures secure, policy-aligned search over enterprise data, enabling agentic RAG scenarios where only authorized documents are returned or sent to LLMs, preventing oversharing and aligning with enterprise data protection standards. Extension of Purview Data Loss Prevention policies to Copilot Mode in Edge for Business: This week, Microsoft Edge for Business introduced Copilot Mode, transforming the browser into a proactive, agentic partner. This is AI-assisted browsing will honor the user’s existing DLP protections, such as endpoint DLP policies that prevent pasting to sensitive service domains, or summarizing sensitive page content. Learn more in our blog dedicated to the announcements of Microsoft Purview for Agents. New capabilities in Microsoft Purview, now in public preview, to help prevent data oversharing and leakage through AI include: Expansion of Microsoft Purview Data Loss Prevention (DLP) for Microsoft 365 Copilot: Previously, we introduced DLP for Microsoft 365 Copilot to prevent labeled files & emails from being used as grounding data for responses, therefore reducing the risk of oversharing. Today, we are expanding DLP for Microsoft 365 Copilot to safeguard prompts containing sensitive data. This real-time control helps organizations mitigate data leakage and oversharing risks by preventing Microsoft 365 Copilot, Copilot Chat, and Microsoft 365 Copilot agents from returning a response when prompts contain sensitive data or using that sensitive data for grounding in Microsoft 365 or the web. For example, if a user searches, “Can you tell me more about my customer based on their address: 1234 Main Street,” Copilot will both inform the user that organizational policies prevent it from responding to their prompt, as well as block any web queries to Bing for “1234 Main Street.” Enhancements to inline data protection in Edge for Business: Earlier this year, we introduced inline data protection in Edge for Business to prevent sensitive data from being leaked to unmanaged consumer AI apps, starting with ChatGPT, Google Gemini, and DeepSeek. We are not only making this capability generally available for the initial set of AI apps, but also expanding the capability to 30+ new apps in public preview and supporting file upload activity in addition to text. This addresses potential data leakage that can occur when employees send organizational files or data to consumer AI apps for help with work-related tasks, such as document creation or code reviews. Inline data protection for the network: For user activity outside of the browser, we are also enabling inline data protection at the network layer. Earlier this year, we introduced integrations with supported secure service edge (SSE) providers to detect when sensitive data is shared to unmanaged cloud locations, such as consumer AI apps or personal cloud storage, even if sharing occurs outside of the Edge browser. In addition to the discovery of sensitive data, these integrations now support protection controls that block sensitive data from leaving a user device and reaching an unmanaged cloud service or application. These capabilities are now generally available through the Netskope and iboss integrations, and inline data discovery is available in public preview through the Palo Alto Networks integration. Extension of Purview protection to on-device AI: Purview DLP policies now extend to the Recall experience in Copilot+ PC devices to prevent sensitive organizational data from being undesirably captured and retained. Admins can now block Recall snapshots based on sensitivity label or the presence of Purview sensitive information types (SITs) in a document open on the device, or simply honor and display the sensitivity labels of content captured in the Recall snapshot library. For example, a DLP policy can be configured to prevent recall from taking snapshots of any documents labeled “Highly Confidential,” or a product design file that contains intellectual property. Learn more in the Windows IT Pro blog. Best-in-class data security for Microsoft environments Microsoft Purview sets the standard for data security within its own ecosystem. Organizations benefit from unified security policies and seamless compliance controls that are purpose-built for Microsoft environments, ensuring sensitive data remains secure without compromising productivity. We also are constantly investing in expanding protections and controls to Microsoft collaboration tools including SharePoint, Teams, Fabric, Azure and across Microsoft 365. On-demand classification adds meeting transcript coverage and new enhancements: To help organizations protect sensitive data sitting in data-at-rest, on-demand classification now extends to meeting transcripts, enabling the discovery and classification of sensitive information shared in existing recorded meeting transcripts. Once classified, admins can set up DLP or Data Lifecycle Management (DLM) policies to properly protect and retain this data according to organizational policies. This is now generally available, empowering organizations to strengthen data security, streamline compliance, and ensure even sensitive information in data-at-rest is discovered, protected, and governed more effectively. In addition, on-demand classification for endpoints is also generally available, giving organizations even broader coverage across their data estate. New usage posture and consumption reports: We’re introducing new usage posture and consumption reports, now in public preview. Admins can quickly identify compliance gaps, optimize Purview seat assignments, and understand how consumptive features are driving spend. With granular insights by feature, policy, and user type, admins can analyze usage trends, forecast costs, and toggle consumptive features on and off directly, all from a unified dashboard. The result: stronger compliance, easier cost management, and better alignment of Purview investments to your organization’s needs. Enable DLP and Copilot protection with extended SharePoint permissions: Extended SharePoint permissions, now generally available, make it simple to protect and manage files in SharePoint by allowing library owners to apply a default sensitivity label to an entire document library. When this is enabled, the label is dynamically enforced across all unprotected files in the library, both new and existing, within the library. Downloaded files are automatically encrypted, and access is managed based on SharePoint site membership, giving organizations powerful, scalable access control. With extended SharePoint permissions, teams can consistently apply labels at scale, automate DLP policy enforcement, and confidently deploy Copilot, all without the need for manually labeling files. Whether for internal teams, external partners, or any group where permissions need to be tightly controlled, extended SharePoint permissions streamline protection and compliance in SharePoint. Network file filtering via Entra GSA integration: We are integrating Purview with Microsoft Entra to enable file filtering at the network layer. These filtering controls help prevent sensitive content from being shared to unauthorized services based on properties such as sensitivity labels or presence of Purview sensitive information types (SITs) within the file. For example, Entra admins can now create a file policy to block files containing credit card numbers from passing through the network. Learn more here. Expanded protection scenarios enabled by Purview endpoint DLP: We are introducing several noteworthy enhancements to Purview endpoint DLP to protect an even broader range of exfiltration or leakage scenarios from organizational devices, without hindering user productivity. These enhancements, initially available on Windows devices, include: Extending protection to unsaved files: Files no longer need to be saved to disk to be protected under a DLP policy. With this improvement, unsaved files will undergo a point-in-time evaluation to detect the presence of sensitive data and apply the appropriate protections. Expanded support for removable media: Admins can now prevent data exfiltration to broader list of removable media devices, including iPhones, Android devices, and CD-ROMs. Protection for Outlook attachments downloaded to removable media or network shares: Admins can now prevent exfiltration of email attachments when users attempt to drag and drop them into USB devices, network shares, and other removable media. Expanded capability support for macOS: In addition to the new endpoint DLP protections introduced above, we are also expanding the following capabilities, already available for Windows devices, to devices running on macOS: Expanded file type coverage to 110+ file types, blanket protections for non-Office or PDF file types, addition of “allow” and “off” policy actions, device-based policy scoping to scope policies to specific devices or device groups (or apply exclusions), and integration with Power Automate. Manageability and alert investigation improvements in Purview DLP: Lastly, we are also introducing device manageability and alert investigation improvements in Purview DLP to simplify the day-to-day experience for admins. These improvements include: Reporting and troubleshooting improvements for devices onboarded to endpoint DLP: We are introducing additional tools for admins to build confidence in their Purview DLP protections for endpoint devices. These enhancements, designed to maximize reliability and enable better troubleshooting of potential issues, include near real-time reporting of policy syncs initiated on devices and policy health insights into devices’ compliance status and readiness to receive policies. Enhancements to always-on diagnostics: Earlier this year, we introduced always-on diagnostics to automatically collect logs from Windows endpoint devices, eliminating the need to reproduce issues when submitting an investigation request or raising a support ticket. This capability is expanding so that admins now have on-demand access to diagnostic logs from users’ devices without intervening in their operations. This further streamlines the issue resolution process for DLP admins while minimizing end user disruption. Simplified DLP alert investigation, including easier navigation to crucial alert details in just 1 click, and the ability to aggregate alerts originating from a single user for more streamlined investigation and response. For organizations who manage Purview DLP alerts within their broader incident management process in Microsoft Defender, we are pleased to share that alert severities will now be synced between the Purview portal and the Defender portal. Expanding enterprise-grade data security to small and medium businesses (SMBs): Purview is extending its reach beyond large enterprises by introducing a new add-on for Microsoft 365 Business Premium, bringing advanced data security and compliance capabilities to SMBs. The Microsoft Purview suite for Business Premium brings the same enterprise-grade protection, such as sensitivity labeling, data loss prevention, and compliance management, to organizations with up to 300 users. This enables SMBs to operate with the same level of compliance and data security as large enterprises, all within a simplified, cost-effective experience built for smaller teams. Stepping into the new era of technology with AI-powered data security Globally, there is a shortage of skilled cybersecurity professionals. Simultaneously, the volume of alerts and incidents is ever growing. By infusing AI into data security solutions, admins can scale their impact. By reducing manual workloads, they enhance operational effectiveness and strengthen overall security posture – allowing defenders to stay ahead. In 2025, 82% of organizations have developed plans to use GenAI to fortify their data security programs. With its cutting-edge generative AI-powered investigative capabilities, Microsoft Purview Data Security Investigations (DSI) is transforming and scaling how data security admins analyze incident-related data. Since being released into public preview in April, the product has made a big impact with customers like Toyota Motors North America. "Data Security Investigations eliminates manual work, automating investigations in minutes. It’s designed to handle the scale and complexity of large data sets by correlating user activity with data movement, giving analysts a faster, more efficient path to meaningful insights,” said solution architect Dan Garawecki. This Ignite, we are introducing several new capabilities in DSI, including: DSI integration with DSPM: View proactive, summary insights and launch a Data Security Investigation directly from DSPM. This integration brings the full power of DSI analysis to your fingertips, enabling admins to drill into data risks surfaced in DSPM with speed and precision. Enhancements in DSI AI-powered deep content analysis capabilities: Admins can now add context before AI analysis for higher-quality, more efficient investigations. A new AI-powered natural language search function lets admins locate specific files using keywords, metadata, and embeddings. Vector search and content categorization enhancements allow admins to better identify risky assets. Together, these enhancements equip admins with sharper, faster tools for identifying buried data risks – both proactively and reactively. DSI cost transparency report and in-product estimator: To help customers manage pay-as-you-go billing, DSI is adding a new lightweight in-product cost estimator and transparency report. We are also expanding Security Copilot in Microsoft Purview with AI-powered capabilities that strengthen both the protection and investigation of sensitive data by introducing the Data Security Posture Agent and Data Security Triage Agent. Data Security Posture Agent: Available in preview, the new Data Security Posture Agent uses LLMs to help admins answer “Is this happening?” across thousands of files—delivering fast, intent-driven discovery and risk profiling, even when explicit keywords are absent. Integrated with Purview DSPM, it surfaces actionable insights and improves compliance, helping teams reduce risk and respond to threats before they escalate. Data Security Triage Agent: Alongside this, the Data Security Triage Agent, now generally available, enables analysts to efficiently triage and remediate the most critical alerts, automating incident response and surfacing the threats that matter most. Together, these agentic capabilities convert high-volume signals into consistent, closed-loop action, accelerate investigations and remediation, reduce policy-violation dwell time, and improve audit readiness, all natively integrated within Microsoft 365 and Purview so security teams can scale outcomes without scaling headcount. To make the agents easily accessible and help teams get started more quickly, we are excited to announce that Security Copilot will be available to all Microsoft 365 E5 customers. Rollout starts today for existing Security Copilot customers with Microsoft 365 E5 and will continue in the upcoming months for all Microsoft 365 E5 customers. Customers will receive advanced notice before activation. Learn more: https://aka.ms/SCP-Ignite25 Data security that keeps innovating alongside you As we look ahead, Microsoft Purview remains focused on empowering organizations with scalable solutions that address the evolving challenges of data security. While we deliver best-in-class security for Microsoft, we recognize that today’s organizations rarely operate in a single cloud, many businesses rely on a diverse mix of platforms to power their operations and innovation. That’s why we have been extending Purview’s capabilities beyond Microsoft environments, helping customers protect data across their entire digital estate. In a world where data is the lifeblood of innovation, securing it must be more than a checkbox—it must be a catalyst for progress. As organizations embrace AI, autonomous agents, and increasingly complex digital ecosystems, Microsoft Purview empowers them to move forward with confidence. By unifying visibility, governance, and protection across the entire data estate, Purview transforms security from a fragmented challenge into a strategic advantage. The future of data security isn’t just about defense—it’s about enabling bold, responsible innovation at scale. Let’s build that future together.Unlocking the Power of Microsoft Purview for ChatGPT Enterprise
In today's rapidly evolving technology landscape, data security and compliance are key. Microsoft Purview offers a robust solution for managing and securing interactions with AI based solutions. This integration not only enhances data governance but also ensures that sensitive information is handled with the appropriate controls. Let's dive into the benefits of this integration and outline the steps to integrate with ChatGPT Enterprise in specific. The integration works for Entra connected users on the ChatGPT workspace, if you have needs that goes beyond this, please tell us why and how it impacts you. Important update 1: Effective May 1, these capabilities require you to enable pay-as-you-go billing in your organization. Important update 2: From May 19, you are required to create a collection policy to ingest ChatGPT Enterprise information. In DSPM for AI you will find this one click process. Benefits of Integrating ChatGPT Enterprise with Microsoft Purview Enhanced Data Security: By integrating ChatGPT Enterprise with Microsoft Purview, organizations can ensure that interactions are securely captured and stored within their Microsoft 365 tenant. This includes user text prompts and AI app text responses, providing a comprehensive record of communications. Compliance and Governance: Microsoft Purview offers a range of compliance solutions, including Insider Risk Management, eDiscovery, Communication Compliance, and Data Lifecycle & Records Management. These tools help organizations meet regulatory requirements and manage data effectively. Customizable Detection: The integration allows for the detection of built in can custom classifiers for sensitive information, which can be customized to meet the specific needs of the organization. To help ensures that sensitive data is identified and protected. The audit data streams into Advanced Hunting and the Unified Audit events that can generate visualisations of trends and other insights. Seamless Integration: The ChatGPT Enterprise integration uses the Purview API to push data into Compliant Storage, ensuring that external data sources cannot access and push data directly. This provides an additional layer of security and control. Step-by-Step Guide to Setting Up the Integration 1. Get Object ID for the Purview account in Your Tenant: Go to portal.azure.com and search for "Microsoft Purview" in the search bar. Click on "Microsoft Purview accounts" from the search results. Select the Purview account you are using and copy the account name. Go to portal.azure.com and search for “Enterprise" in the search bar. Click on Enterprise applications. Remove the filter for Enterprise Applications Select All applications under manage, search for the name and copy the Object ID. 2. Assign Graph API Roles to Your Managed Identity Application: Assign Purview API roles to your managed identity application by connecting to MS Graph utilizing Cloud Shell in the Azure portal. Open a PowerShell window in portal.azure.com and run the command Connect-MgGraph. Authenticate and sign in to your account. Run the following cmdlet to get the ServicePrincipal ID for your organization for the Purview API app. (Get-MgServicePrincipal -Filter "AppId eq '9ec59623-ce40-4dc8-a635-ed0275b5d58a'").id This command provides the permission of Purview.ProcessConversationMessages.All to the Microsoft Purview Account allowing classification processing. Update the ObjectId to the one retrieved in step 1 for command and body parameter. Update the ResourceId to the ServicePrincipal ID retrieved in the last step. $bodyParam= @{ "PrincipalId"= "{ObjectID}" "ResourceId" = "{ResourceId}" "AppRoleId" = "{a4543e1f-6e5d-4ec9-a54a-f3b8c156163f}" } New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam It will look something like this from the command line We also need to add the permission for the application to read the user accounts to correctly map the ChatGPT Enterprise user with Entra accounts. First run the following command to get the ServicePrincipal ID for your organization for the GRAPH app. (Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'").id The following step adds the permission User.Read.All to the Purview application. Update the ObjectId with the one retrieved in step 1. Update the ResourceId with the ServicePrincipal ID retrieved in the last step. $bodyParam= @{ "PrincipalId"= "{ObjectID}" "ResourceId" = "{ResourceId}" "AppRoleId" = "{df021288-bdef-4463-88db-98f22de89214}" } New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam 3. Store the ChatGPT Enterprise API Key in Key Vault The steps for setting up Key vault integration for Data Map can be found here Create and manage credentials for scans in the Microsoft Purview Data Map | Microsoft Learn When setup you will see something like this in Key vault. 4. Integrate ChatGPT Enterprise Workspace to Purview: Create a new data source in Purview Data Map that connects to the ChatGPT Enterprise workspace. Go to purview.microsoft.com and select Data Map, search if you do not see it on the first screen. Select Data sources Select Register Search for ChatGPT Enterprise and select Provide your ChatGPT Enterprise ID Create the first scan by selecting Table view and filter on ChatGPT Add your key vault credentials to the scan Test the connection and once complete click continue When you click continue the following screen will show up, if everything is ok click Save and run. Validate the progress by clicking on the name, completion of the first full scan may take an extended period of time. Depending on size it may take more than 24h to complete. If you click on the scan name you expand to all the runs for that scan. When the scan completes you can start to make use of the DSPM for AI experience to review interactions with ChatGPT Enterprise. The mapping to the users is based on the ChatGPT Enterprise connection to Entra, with prompts and responses stored in the user's mailbox. 5. Review and Monitor Data: Please see this article for required permissions and guidance around Microsoft Purview Data Security Posture Management (DSPM) for AI, Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Use Purview DSPM for AI analytics and Activity Explorer to review interactions and classifications. You can expand on prompts and responses in ChatGPT Enterprise 6. Microsoft Purview Communication Compliance Communication Compliance (here after CC) is a feature of Microsoft Purview that allows you to monitor and detect inappropriate or risky interactions with ChatGPT Enterprise. You can monitor and detect requests and responses that are inappropriate based on ML models, regular Sensitive Information Types, and other classifiers in Purview. This can help you identify Jailbreak and Prompt injection attacks and flag them to IRM and for case management. Detailed steps to configure CC policies and supported configurations can be found here. 7. Microsoft Purview Insider Risk Management We believe that Microsoft Purview Insider Risk Management (here after IRM) can serve a key role in protecting your AI workloads long term. With its adaptive protection capabilities, IRM dynamically adjusts user access based on evolving risk levels. In the event of heightened risk, IRM can enforce Data Loss Prevention (DLP) policies on sensitive content, apply tailored Entra Conditional Access policies, and initiate other necessary actions to effectively mitigate potential risks. This strategic approach will help you to apply more stringent policies where it matters avoiding a boil the ocean approach to allow your team to get started using AI. To get started use the signals that are available to you including CC signals to raise IRM tickets and enforce adaptive protection. You should create your own custom IRM policy for this. Do include Defender signals as well. Based on elevated risk you may select to block users from accessing certain assets such as ChatGPT Enterprise. Please see this article for more detail Block access for users with elevated insider risk - Microsoft Entra ID | Microsoft Learn. 8. eDiscovery eDiscovery of AI interactions is crucial for legal compliance, transparency, accountability, risk management, and data privacy protection. Many industries must preserve and discover electronic communications and interactions to meet regulatory requirements. Including AI interactions in eDiscovery ensures organizations comply with these obligations and preserves relevant evidence for litigation. This process also helps maintain trust by enabling the review of AI decisions and actions, demonstrating due diligence to regulators. Microsoft Purview eDiscovery solutions | Microsoft Learn 9. Data Lifecycle Management Microsoft Purview offers robust solutions to manage AI data from creation to deletion, including classification, retention, and secure disposal. This ensures that AI interactions are preserved and retrievable for audits, litigation, and compliance purposes. Please see this article for more information Automatically retain or delete content by using retention policies | Microsoft Learn. Closing By following these steps, organizations can leverage the full potential of Microsoft Purview to enhance the security and compliance of their ChatGPT Enterprise interactions. This integration not only provides peace of mind but also empowers organizations to manage their data more effectively. We are still in preview some of the features listed are not fully integrated, please reach out to us if you have any questions or if you have additional requirements.Elevating Trust in Data through Data Quality in the AI Era
Data collection and utilization are growing rapidly, and organizations are increasingly relying on data as they transition into the era of AI. However, many face significant challenges in effectively managing investments across cloud, data, and AI. This is largely due to a lack of visibility across their entire data estate—which is often fragmented across silos, heterogeneous systems, on-premises environments, and the cloud. Concerns about data trustworthiness, AI-readiness, and uncertainty around security and compliance further complicate the ability to drive timely business insights. Elevating data quality means going beyond merely identifying issues—it's about equipping data stewards, analysts, and engineers with the right tools to proactively improve trust, consistency, and readiness of data for AI, analytics, and operations. Despite the critical importance of data quality, many organizations struggle to activate the full value of their data estate. Research shows that 75% of companies do not have a formal data quality program. This is alarming, especially as data quality has become a cornerstone of successful AI initiatives. Simply put: your AI is only as good as your data. In the past, when humans were always in the loop, minor data quality issues could be corrected manually. But in today’s world—where AI interprets not just the structure but the content of the data—any inconsistencies, inaccuracies, or noncompliance in the data can directly lead to flawed insights, unreliable AI outputs, and poor business decisions. Bad data can make your AI wrong. It can make your BI reports misleading. And it can impact your organization's credibility—as well as your reputation as a data professional. After all, there’s nothing worse than building something that no one trusts or uses. That’s why defining and deploying a robust data quality framework is more critical now than ever before. Organizations must establish a data quality maturity model, track quality across their data estate, and take continuous improvement and remediation actions. Key steps to maintaining data quality and ensuring the health of your data estate include: Define the scope – Identify which data is needed for specific business use cases. Measure quality – Assess whether data meets expected standards. Analyze findings – Understand patterns, gaps, and root causes. Improve quality – Take corrective actions to meet business needs. Control quality – Continuously monitor and govern data to maintain high standards. To ensure your data is fit for purpose, it's essential to establish strong data quality practices within your organization, supported by the right tools, roles, and governance structures. Profile your data to understand the distribution Data profiling is the process of analyzing data to understand its structure, content, and quality. It helps uncover patterns, anomalies, missing values, duplicates, and data types—providing valuable insight into the trustworthiness and usability of data for analytics, decision-making, and quality improvement. By examining datasets for structure, relationships, and inconsistencies, data practitioners can identify potential issues and define validation rules for ongoing data quality assurance. Microsoft Purview Unified Data Catalog provides an integrated data quality experience that supports profiling as a foundational step. With Purview, users can profile data to understand its distribution, patterns, and data types—helping inform data quality programs and define rules for continuous monitoring and improvement. Purview's data profiling leverages AI to recommend which columns in a dataset are most critical and should be profiled. Users remain in control and can adjust these recommendations by deselecting suggested columns or adding others. Additionally, all profiling history is preserved, allowing users to compare current profiles with historical patterns to detect changes over time and continuously assess data health. Define and apply rules to validate your data Applying rules and constraints is essential to ensure that data conforms to predefined requirements or business logic—such as data type, completeness, uniqueness, and consistency. Data profiling results can be leveraged to define these rules and validate data continuously, helping ensure that it is trustworthy and ready for both business and AI use cases. To achieve this, data quality should be measured across all stages: data at creation, data in motion, and data at rest. While many CRM and web-based applications perform UI-level validations to check user inputs before submission, a significant amount of poor-quality data still enters systems through bulk upload processes. These low-quality records often bypass front-end checks and propagate downstream through the data supply chain, leading to broader data integrity issues. In Medallion architecture, you can validate and correct data directly in the pipeline. Bad-quality data detected in the bronze layer can trigger notifications to upstream systems to fix issues at the source. Purview Unified Catalog Data Quality capability provides a user-friendly UI for managing data quality rules. You can configure rules for any supported data sources in cloud or on-premises or datasets in Fabric Lakehouse's bronze layer, schedule DQ jobs, and send notifications to data engineers and stewards when issues arise. This proactive monitoring ensures data quality is addressed early—before data progresses through the silver and gold layers of your architecture. Visualizing Data Quality Metrics and Driving Action Visualizing data quality metrics and trends provides critical insights into the overall health of your data and supports informed, data-driven decision-making. Microsoft Purview Unified Catalog publishes all metadata—including data quality rules and scores—into Fabric OneLake. Data analysts can link Purview metadata with raw data to generate actionable insights. They can also leverage Fabric AI skills to enhance intelligence and integrate with Data Activator to trigger notifications for upstream data publishers and downstream consumers. Alerts and notifications can be configured directly in the Purview Unified Catalog. Data stewards can set thresholds for one or many data assets to automatically notify upstream and downstream contacts if those thresholds are breached. Notifications can be directed to specific individuals or groups (e.g., a support team). These alerts empower data providers to resolve issues at the source, and data engineers can address problems within the bronze layer of their analytical storage—such as in Microsoft Fabric. Additionally, alerts can be used to pause data movement from the bronze to the silver and gold layers, ensuring only high-quality data flows downstream. Users can configure the storage location in the Purview Data Quality solution to publish failed or error records. This allows data stewards and data engineers to review and fix issues, improving data quality before using it for analytics or as input for ML model training. Integrated Data Observability with Data Quality Scores Data observability in Microsoft Purview Unified Catalog offers a comprehensive, bird’s-eye view into the health of the data estate as data flows across various sources. Data stewards, domain experts, and those responsible for data health can monitor their entire data landscape from a single unified interface. This centralized view provides visibility into data lineage—from source to consumption—and reveals how data assets map to governance domains. It enables users to understand where data originates and terminates, pinpoint data quality issues, and assess the impact on reporting and compliance obligations. By consolidating metadata into a single, accessible location, users can explore how data quality is evolving, track usage patterns, and understand who is interacting with the data. With full visibility across the data estate, both central and federated data teams can efficiently identify opportunities to improve metadata quality, clarify ownership, enhance data quality, and optimize data architecture. Summary Defining and implementing a data quality framework has become more critical than ever. Organizations must establish a data quality maturity model and continuously monitor the health of their data estate to enable ongoing improvement and remediation. Microsoft Purview Unified Catalog empowers governance domain owners and data stewards to assess and manage data quality across the ecosystem, driving informed and targeted improvements. In today’s AI-driven world, the trustworthiness of data is directly linked to the effectiveness of AI insights and recommendations. Unreliable data not only weakens AI outcomes but can also diminish trust in the systems themselves and slow their adoption. Poor data quality or inconsistent data structures can disrupt business operations and impair decision-making. Microsoft Purview addresses these challenges with a no-code/low-code approach to defining data quality rules—including out-of-the-box (OOB) and AI-generated rules. These rules are applied at the column level and rolled up into scores at the data asset, data product, and governance domain levels, offering full visibility into data quality across the enterprise. Purview’s AI-powered data profiling capabilities intelligently recommend which columns to profile, while still allowing human review and refinement. This human-in-the-loop process not only improves the relevance and accuracy of profiling but also feeds back into improving AI model performance over time. Elevating data quality is more than identifying problems—it's about equipping stewards, analysts, and engineers with the tools to proactively build trust, ensure consistency, and prepare data for AI, analytics, and business success.Using Copilot in Fabric with Confidence: Data Security, Compliance & Governance with DSPM for AI
Introduction As organizations embrace AI to drive innovation and productivity, ensuring data security, compliance, and governance becomes paramount. Copilot in Microsoft Fabric offers powerful AI-driven insights. But without proper oversight, users can misuse copilot to expose sensitive data or violate regulatory requirements. Enter Microsoft Purview’s Data Security Posture Management (DSPM) for AI—a unified solution that empowers enterprises to monitor, protect, and govern AI interactions across Microsoft and third-party platforms. We are excited to announce the general availability of Microsoft Purview capabilities for Copilot in Fabric, starting with Copilot in Power BI. This blog explores how Purview DSPM for AI integrates with Copilot in Fabric to deliver robust data protection and governance and provides a step-by-step guide to enable this integration. Capabilities of Purview DSPM for AI As organizations adopt AI, implementing data controls and Zero Trust approach is crucial to mitigate risks like data oversharing and leakage, and potential non-compliant usage in AI. We are excited to announce Microsoft Purview capabilities for Copilot in Fabric, starting with Copilot for Power BI, By combining Microsoft Purview and Copilot for Power BI, users can: Discover data risks such as sensitive data in user prompts and responses in Activity Explorer and receive recommended actions in their Microsoft Purview DSPM for AI Reports to reduce these risks. DSPM for AI Activity Explorer DSPM for AI Reports If you find Copilot in Fabric actions in DSPM for AI Activity Explorer or reports to be potentially inappropriate or malicious, you can look for further information in Insider Risk Management (IRM), through an eDiscovery case, Communication Compliance (CC), or Data Lifecycle Management (DLM). Identify risky AI usage with Microsoft Purview Insider Risk Management to investigate risky AI usage, such as an inadvertent user who has neglected security best practices and shared sensitive data in AI. Govern AI usage with Microsoft Purview Audit, Microsoft Purview eDiscovery, retention policies, and non-compliant or unethical AI usage detection with Purview Communication Compliance. Purview Audit provides a detailed log of user and admin activity within Copilot in Fabric, enabling organizations to track access, monitor usage patterns, and support forensic investigations. Purview eDiscovery enables legal and investigative teams to identify, collect, and review Copilot in Fabric interactions as part of case workflows, supporting defensible investigations Communication Compliance helps detect potential policy violations or risky behavior in administrator interactions, enabling proactive monitoring and remediation for Copilot in Fabric Data Lifecycle Management allows teams to automate the retention, deletion, and classification of Copilot in Fabric data—reducing storage costs and minimizing risk from outdated or unnecessary information Steps to Enable the Integration To use DSPM for AI from the Microsoft Purview portal, you must have the following prerequisites, Activate Purview Audit which requires user to have the role of Entra Compliance Admin or Entra Global admin to enable Purview Audit. More details on DSPM pre-requisites can be found here, Considerations for deploying Microsoft Purview Data Security Posture Management (DSPM) for AI | Microsoft Learn To enable Purview DSPM for AI for Copilot for Power BI, Step 1: Enable DSPM for AI Policies Navigate to Microsoft Purview DSPM for AI. Enable the one-click policy: “DSPM for AI – Capture interactions for Copilot experiences”. Optionally enable additional policies: Detect risky AI usage Detect unethical behavior in AI apps These policies can be configured in the Microsoft Purview DSPM for AI portal and tailored to your organization’s risk profile. Step 2: Monitor and Act Use DSPM for AI Reports and Activity Explorer to monitor AI interactions. Apply IRM, DLM, CC and eDiscovery actions as needed. Purview Roles and Permissions Needed by Users To manage and operate DSPM for AI effectively, assign the following roles: Role Responsibilities Purview Compliance Administrator Full access to configure policies and DSPM for AI setup Purview Security Reader View reports, dashboards, policies and AI Activity Content Explorer Content Viewer Additional Permission to view the actual prompts and responses on top of the above permissions More details on Purview DSPM for AI Roles & permissions can be found here, Permissions for Microsoft Purview Data Security Posture Management for AI | Microsoft Learn Purview Costs Microsoft Purview now offers a combination of entitlement-based (per-user-per-month) and Pay-As-You-Go (PAYG) pricing models. The PAYG model applies to a broader set of Purview capabilities—including Insider Risk Management, Communication Compliance, eDiscovery, and other data security and governance solutions—based on copilot for Power BI usage volume or complexity. Purview Audit logging of Copilot for Power BI activity remains included at no additional cost as part of Microsoft 365 E5 licensing. This flexible pricing structure ensures that organizations only pay for what they use as data flows through AI models, networks, and applications. For further details, please refer to this blog: New Purview pricing options for protecting AI apps and agents | Microsoft Community Hub Conclusion Microsoft Purview DSPM for AI is a game-changer for organizations looking to adopt AI responsibly. By integrating with Copilot in Fabric, it provides a comprehensive framework to discover, protect, and govern AI interactions—ensuring compliance, reducing risk, and enabling secure innovation. Whether you're a Fabric Admin, compliance admin or security admin, enabling this integration is a strategic step toward building a secure, AI-ready enterprise. Additional resources Use Microsoft Purview to manage data security & compliance for Microsoft Copilot in Fabric | Microsoft Learn How to deploy Microsoft Purview DSPM for AI to secure your AI apps Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn Considerations for deploying Microsoft Purview Data Security Posture Management (DSPM) for AI | Microsoft Learn Learn about Microsoft Purview billing models | Microsoft Learn1.2KViews0likes0CommentsMicrosoft Purview Powering Data Security and Compliance for Security Copilot
Microsoft Purview provides Security and Compliance teams with extensive visibility into admin actions within Security Copilot. It offers tools for enriched users and data insights to identify, review, and manage Security Copilot interaction data in DSPM for AI. Data security and compliance administrators can also utilize Purview’s capabilities for data lifecycle management and information protection, advanced retention, eDiscovery, and more. These features support detailed investigations into logs to demonstrate compliance within the Copilot tenant. Prerequisites Please refer to the prerequisites for Security Copilot and DSPM for AI in the Microsoft Learn Docs. Key Capabilities and Features Heightened Context and Clarity As organizations adopt AI, implementing data controls and a Zero Trust approach is essential to mitigate risks like data oversharing, leakage, and non-compliant usage. Microsoft Purview, combined with Data Security Posture Management (DSPM) for AI, empowers security and compliance teams to manage these risks across Security Copilot interactions. With this integration, organizations can: Discover data risks by identifying sensitive information in user prompts and responses. Microsoft Purview surfaces these insights in the DSPM for AI dashboard and recommends actions to reduce exposure. Identify risky AI usage using Microsoft Purview Insider Risk Management to investigate behaviors such as inadvertent sharing of sensitive data or to detect suspicious activity within Security Copilot usage. These capabilities provide heightened visibility into how AI is used across the organization, helping teams proactively address potential risks before they escalate. Compliance and Governance Building on this visibility, organizations can take action using Microsoft Purview’s integrated compliance and governance solutions. Here are some examples of how teams are leveraging these capabilities to govern Security Copilot interactions: Audit provides a detailed log of user and admin activity within Security Copilot, enabling organizations to track access, monitor usage patterns, and support forensic investigations. eDiscovery enables legal and investigative teams to identify, collect, and review Security Copilot interactions as part of case workflows, supporting defensible investigations. Communication Compliance helps detect potential policy violations or risky behavior in administrator interactions, enabling proactive monitoring and remediation. Data Lifecycle Management allows teams to automate the retention, deletion, and classification of Security Copilot data—reducing storage costs and minimizing risk from outdated or unnecessary information. Together, these tools provide a comprehensive governance framework that supports secure, compliant, and responsible AI adoption across the enterprise. Getting Started Enable Purview Audit for Security Copilot Sign into your Copilot tenant at https://securitycopilot.microsoft.com/, and with the Security Administrator permissions, navigate to the Security Copilot owner settings and ensure Audit logging is enabled. Microsoft Purview To start using DSPM for AI and the Microsoft Purview capabilities, please complete the following steps to get set up and then feel free to experiment yourself. Navigate to Purview (Purview.Microsoft.com) and ensure you have adequate permissions to access the different Purview solutions as described here. DSPM for AI Select the DSPM for AI “Solution” option on the left-most navigation. Go to the policies or recommendations tab turn on the following: a. “DSPM for AI – Capture interactions for Copilot Experiences”: Captures prompts and responses for data security posture and regulatory compliance from Security Copilot and other Copilot experiences. b. “Detect Risky AI Usage”: Helps to calculate user risk by detecting risky prompts and responses in Copilot experiences. c. “Detect unethical behavior in AI apps”: Detects sensitive info and inappropriate use of AI in prompts and responses in Copilot experiences. To begin reviewing Security Copilot usage within your organization and identifying interactions that contain sensitive information, select Reports from the left navigation panel. a. The "Sensitive interactions per AI app" report shows the most common sensitive information types used in Security Copilot interactions and their frequency. For instance, this tenant has a significant amount of IT and IP Address information within these interactions. Therefore, it is important to ensure that all sensitive information used in Security Copilot interactions is utilized for legitimate workplace purposes and does not involve any malicious or non-compliant use of Security Copilot. b. “Top unethical AI interactions” will show an overview of any potentially unsafe or inappropriate interactions with AI apps. In this case, Security Copilot only has seven potentially unsafe interactions that included unauthorized disclosure and regulatory collusion. c. “Insider risk severity per AI app” shows the number of high risk, medium risk, low risk and no risk users that are interacting with Security Copilot. In this tenant, there are about 1.9K Security Copilot users, but very few of them have an insider risk concern. d. To check the interaction details of this potentially risky activity, head over to Activity Explorer for more information. 5. In Activity Explorer, you should filter the App to Security Copilot. You will also have the option to filter based on the user risk level and sensitive information type. To identify the highest risk behaviors, filter for users with a medium to high risk level or those associated with the most sensitive information types. a. Once you have filtered, you can start looking through the activity details for more information like the user details, the sensitive information types, the prompt and response data, and more. b. Based on the details shown, you may decide to investigate the activity and the user further. To do so, we have data security investigation and governance tools. Data Security Investigations and Governance If you find Security Copilot actions in DSPM for AI Activity Explorer to be potentially inappropriate or malicious, you can look for further information in Insider Risk Management (IRM), through an eDiscovery case, Communication Compliance (CC), or Data Lifecycle Management (DLM). Insider Risk Management By enabling the quick policy in DSPM for AI to monitor risky Copilot usage, alerts will start appearing in IRM. Customize this policy based on your organization's risk tolerance by adjusting triggering events, thresholds, and indicators for detected activity. Examine the alerts associated with the "DSPM for AI – Detect risky AI usage" policy, potentially sorting them by severity from high to low. For these alerts, you will find a User Activity scatter plot that provides insights into the activities preceding and following the user's engagement with a risky prompt in Security Copilot. This assists the Data Security administrator in understanding the necessary triage actions for this user/alert. After thoroughly investigating these details and determining whether the activity was malicious or an inadvertent insider risk, appropriate actions can be taken, including issuing a user warning, resolving the case, sharing the case with an email recipient, or escalating the case to eDiscovery for further investigation. eDiscovery To identify, review and manage your Security Copilot logs to support your investigations, use the eDiscovery tool. Here are the steps to take in eDiscovery: a. Create an eDiscovery Case b. Create a new search c. In Search, go to condition builder and select Add conditions -> KeyQL d. Enter the query as: - KQL Equal (ItemClass=IPM.SkypeTeams.Message.Copilot.Security.SecurityCopilot) e. Run the query f. Once completed, add the search to a review set (Button at the top) g. In the review set, view details of the Security Copilot conversation Communication Compliance In Communication Compliance, like IRM, you can investigate details around the Security Copilot interactions. Specifically, in CC, you can determine if these interactions contained non-compliant usage of Security Copilot or inappropriate text. After identifying the sentiment of the Security Copilot communication, you can take action by resolving the alert, sending a warning notice to the user, escalating the alert to a reviewer, or escalating the alert for investigation, which will create a new eDiscovery case. Data Lifecycle Management For regulatory compliance or investigation purposes, navigate to Data Lifecycle Management to create a new retention policy for Security Copilot activities. a. Provide a friendly name for the retention policy and select Next b. Skip Policy Scope section for this validation c. Select “Static” type of retention policy and select Next d. Choose “Microsoft Copilot Experiences” to apply retention policy to Security Copilot interactions Billing Model Microsoft Purview audit logging of Security Copilot activity remains included at no additional cost as part of Microsoft 365 E5 licensing. However, Microsoft Purview now offers a combination of entitlement-based (per-user-per-month) and Pay-As-You-Go (PAYG) pricing models. The PAYG model applies to a broader set of Purview capabilities—including Insider Risk Management, Communication Compliance, eDiscovery, and other data security and governance solutions—based on usage volume or complexity. This flexible pricing structure ensures that organizations only pay for what they use as data flows through AI models, networks, and applications. For further details, please refer to this Microsoft Security Community Blog: New Purview pricing options for protecting AI apps and agents | Microsoft Community Hub Looking Ahead By following these steps, organizations can leverage the full potential of Microsoft Purview to enhance the security and compliance of their Security Copilot interactions. This integration not only provides peace of mind but also empowers organizations to manage their data more effectively. Please reach out to us if you have any questions or additional requirements. Additional Resources Use Microsoft Purview to manage data security & compliance for Microsoft Security Copilot | Microsoft Learn How to deploy Microsoft Purview DSPM for AI to secure your AI apps Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn Considerations for deploying Microsoft Purview Data Security Posture Management (DSPM) for AI | Microsoft Learn Learn about Microsoft Purview billing models | Microsoft LearnHow to deploy Microsoft Purview DSPM for AI to secure your AI apps
Microsoft Purview Data Security Posture Management (DSPM for AI) is designed to enhance data security for the following AI applications: Microsoft Copilot experiences, including Microsoft 365 Copilot. Enterprise AI apps, including ChatGPT enterprise integration. Other AI apps, including all other AI applications like ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini, accessed through the browser. In this blog, we will dive into the different policies and reporting we have to discover, protect and govern these three types of AI applications. Prerequisites Please refer to the prerequisites for DSPM for AI in the Microsoft Learn Docs. Login to the Purview portal To begin, start by logging into Microsoft 365 Purview portal with your admin credentials: In the Microsoft Purview portal, go to the Home page. Find DSPM for AI under solutions. 1. Securing Microsoft 365 Copilot Be sure to check out our blog on How to use the DSPM for AI data assessment report to help you address oversharing concerns when you deploy Microsoft 365 Copilot. Discover potential data security risks in Microsoft 365 Copilot interactions In the Overview tab of DSPM for AI, start with the tasks in “Get Started” and Activate Purview Audit if you have not yet activated it in your tenant to get insights into user interactions with Microsoft Copilot experiences In the Recommendations tab, review the recommendations that are under “Not Started”. Create the following data discovery policy to discover sensitive information in AI interactions by clicking into it. Detect risky interactions in AI apps - This public preview Purview Insider Risk Management policy helps calculate user risk by detecting risky prompts and responses in Microsoft 365 Copilot experiences. Click here to learn more about Risky AI usage policy. With the policies to discover sensitive information in Microsoft Copilot experiences in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter to Microsoft Copilot Experiences, and review the following for Microsoft Copilot experiences: Total interactions over time (Microsoft Copilot) Sensitive interactions per AI app Top unethical AI interactions Top sensitivity labels references in Microsoft 365 Copilot Insider Risk severity Insider risk severity per AI app Potential risky AI usage Protect sensitive data in Microsoft 365 Copilot interactions From the Reports tab, click on “View details” for each of the report graphs to view detailed activities in the Activity Explorer. Using available filters, filter the results to view activities from Microsoft Copilot experiences based on different Activity type, AI app category and App type, Scope, which support administrative units for DSPM for AI, and more. Then drill down to each activity to view details including the capability to view prompts and response with the right permissions. To protect the sensitive data in interactions for Microsoft 365 Copilot, review the Not Started policies in the Recommendations tab and create these policies: Information Protection Policy for Sensitivity Labels - This option creates default sensitivity labels and sensitivity label policies. If you've already configured sensitivity labels and their policies, this configuration is skipped. Protect sensitive data referenced in Microsoft 365 Copilot - This guides you through the process of creating a Purview Data Loss Prevention (DLP) policy to restrict the processing of content with specific sensitivity labels in Copilot interactions. Click here to learn more about Data Loss Prevention for Microsoft 365 Copilot. Protect sensitive data referenced in Copilot responses - Sensitivity labels help protect files by controlling user access to data. Microsoft 365 Copilot honors sensitivity labels on files and only shows users files they already have access to in prompts and responses. Use Data assessments to identify potential oversharing risks, including unlabeled files. Stay tuned for an upcoming blog post on using DSPM for AI data assessments! Use Copilot to improve your data security posture - Data Security Posture Management combines deep insights with Security Copilot capabilities to help you identify and address security risks in your org. Once you have created policies from the Recommendations tab, you can go to the Policies tab to review and manage all the policies you have created across your organization to discover and safeguard AI activity in one centralized place, as well as edit the policies or investigate alerts associated with those policies in solution. Note that additional policies not from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies to Secure and govern all AI apps. Govern the prompts and responses in Microsoft 365 Copilot interactions Understand and comply with AI regulations by selecting “Guided assistance to AI regulations” in the Recommendations tab and walking through the “Actions to take”. From the Recommendations tab, create a Control unethical behavior in AI Purview Communications Compliance policy to detect sensitive information in prompts and responses and address potentially unethical behavior in Microsoft Copilot experiences and ChatGPT for Enterprise. This policy covers all users and groups in your organization. To retain and/or delete Microsoft 365 Copilot prompts and responses, setup a Data Lifecycle policy by navigating to Microsoft Purview Data Lifecycle Management and find Retention Policies under the Policies header. You can also preserve, collect, analyze, review, and export Microsoft 365 Copilot interactions by creating an eDiscovery case. 2. Securing Enterprise AI apps Please refer to this amazing blog on Unlocking the Power of Microsoft Purview for ChatGPT Enterprise | Microsoft Community Hub for detailed information on how to integrate with ChatGPT for enterprise, the Purview solutions it currently supports through Purview Communication Compliance, Insider Risk Management, eDiscovery, and Data Lifecycle Management. Learn more about the feature also through our public documentation. 3. Securing other AI Microsoft Purview DSPM for AI currently supports the following list of AI sites. Be sure to also check out our blog on the new Microsoft Purview data security controls for the browser & network to secure other AI apps. Discover potential data security risks in prompts sent to other AI apps In the Overview tab of DSPM for AI, go through these three steps in “Get Started” to discover potential data security risk in other AI interactions: Install Microsoft Purview browser extension For Windows users: The Purview extension is not necessary for the enforcement of data loss prevention on the Edge browser but required for Chrome to detect sensitive info pasted or uploaded to AI sites. The extension is also required to detect browsing to other AI sites through an Insider Risk Management policy for both Edge and Chrome browser. Therefore, Purview browser extension is required for both Edge and Chrome in Windows. For MacOS users: The Purview extension is not necessary for the enforcement of data loss prevention on macOS devices, and currently, browsing to other AI sites through Purview Insider Risk Management is not supported on MacOS, therefore, no Purview browser extension is required for MacOS. Extend your insights for data discovery – this one-click collection policy will setup three separate Purview detection policies for other AI apps: Detect sensitive info shared in AI prompts in Edge – a Purview collection policy that detects prompts sent to ChatGPT consumer, Micrsoft Copilot, DeepSeek, and Google Gemini in Microsoft Edge and discovers sensitive information shared in prompt contents. This policy covers all users and groups in your organization in audit mode only. Detect when users visit AI sites – a Purview Insider Risk Management policy that detects when users use a browser to visit AI sites. Detect sensitive info pasted or uploaded to AI sites – a Purview Endpoint Data loss prevention (eDLP) policy that discovers sensitive content pasted or uploaded in Microsoft Edge, Chrome, and Firefox to AI sites. This policy covers all users and groups in your org in audit mode only. With the policies to discover sensitive information in other AI apps in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter by Other AI Apps, and review the following for other AI apps: Total interactions over time (other AI apps) Total visits (other AI apps) Sensitive interactions per AI app Insider Risk severity Insider risk severity per AI app Protect sensitive info shared with other AI apps From the Reports tab, click on “View details” for each of the report graphs to view detailed activities in the Activity Explorer. Using available filters, filter the results to view activities based on different Activity type, AI app category and App type, Scope, which support administrative units for DSPM for AI, and more. To protect the sensitive data in interactions for other AI apps, review the Not Started policies in the Recommendations tab and create these policies: Fortify your data security – This will create three policies to manage your data security risks with other AI apps: 1) Block elevated risk users from pasting or uploading sensitive info on AI sites – this will create a Microsoft Purview endpoint data loss prevention (eDLP) policy that uses adaptive protection to give a warn-with-override to elevated risk users attempting to paste or upload sensitive information to other AI apps in Edge, Chrome, and Firefox. This policy covers all users and groups in your org in test mode. Learn more about adaptive protection in Data loss prevention. 2) Block elevated risk users from submitting prompts to AI apps in Microsoft Edge – this will create a Microsoft Purview browser data loss prevention (DLP) policy, and using adaptive protection, this policy will block elevated, moderate, and minor risk users attempting to put information in other AI apps using Microsoft Edge. This integration is built-in to Microsoft Edge. Learn more about adaptive protection in Data loss prevention. 3) Block sensitive info from being sent to AI apps in Microsoft Edge - this will create a Microsoft Purview browser data loss prevention (DLP) policy to detect inline for a selection of common sensitive information types and blocks prompts being sent to AI apps while using Microsoft Edge. This integration is built-in to Microsoft Edge. Once you have created policies from the Recommendations tab, you can go to the Policies tab to review and manage all the policies you have created across your organization to discover and safeguard AI activity in one centralized place, as well as edit the policies or investigate alerts associated with those policies in solution. Note that additional policies not from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies to Secure and govern all AI apps. Conclusion Microsoft Purview DSPM for AI can help you discover, protect, and govern the interactions from AI applications in Microsoft Copilot experiences, Enterprise AI apps, and other AI apps. We recommend you review the Reports in DSPM for AI routinely to discover any new interactions that may be of concern, and to create policies to secure and govern those interactions as necessary. We also recommend you utilize the Activity Explorer in DSPM for AI to review different Activity explorer events while users interacting with AI, including the capability to view prompts and response with the right permissions. We will continue to update this blog with new features that become available in DSPM for AI, so be sure to bookmark this page! Follow-up Reading Check out this blog on the details of each recommended policies in DSPM for AI: Microsoft Purview – Data Security Posture Management (DSPM) for AI | Microsoft Community Hub Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn Downloadable whitepaper: Data Security for AI Adoption | Microsoft Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365