Secure and govern AI apps and agents with Microsoft Purview
The Microsoft Purview family is here to help you secure and govern data across third-party IaaS and SaaS in a multi-platform data environment, while helping you meet the compliance requirements you may be subject to. Purview brings simplicity with a comprehensive set of solutions built on a platform of shared capabilities that helps keep your most important asset, data, safe. With the introduction of AI technology, Purview has also expanded its data coverage to include discovering, protecting, and governing the interactions of AI apps and agents: Microsoft Copilots like Microsoft 365 Copilot and Security Copilot, enterprise-built AI apps like ChatGPT Enterprise, and other consumer AI apps like DeepSeek accessed through the browser.

To help you view and investigate interactions with all those AI apps, and to create and manage policies to secure and govern them in one centralized place, we have launched Purview Data Security Posture Management (DSPM) for AI. You can learn more about DSPM for AI here, with short video walkthroughs: Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn

Purview capabilities for AI apps and agents

To understand our current set of capabilities within Purview to discover, protect, and govern various AI apps and agents, please refer to our Learn doc here: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn

Here is a quick reference guide for the capabilities available today. Note that DLP for Copilot and sensitivity label inheritance are currently designed to protect content in Microsoft 365. Thus, Security Copilot and Copilot in Fabric, along with Copilot Studio custom agents that do not use Microsoft 365 as a content source, do not have these features available.
Please see the list of AI sites supported by Microsoft Purview DSPM for AI here.

Conclusion

Microsoft Purview can help you discover, protect, and govern the prompts and responses from AI applications in Microsoft Copilot experiences, enterprise AI apps, and other AI apps through its data security and data compliance solutions, while allowing you to view, investigate, and manage interactions in one centralized place in DSPM for AI.

Follow-up reading

Check out the deployment guides for DSPM for AI:
- How to deploy DSPM for AI - https://aka.ms/DSPMforAI/deploy
- How to use DSPM for AI data risk assessment to address oversharing - https://aka.ms/dspmforai/oversharing
- Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing

Explore the Purview SDK:
- Microsoft Purview SDK Public Preview | Microsoft Community Hub (blog)
- Microsoft Purview documentation - purview-sdk | Microsoft Learn
- Build secure and compliant AI applications with Microsoft Purview (video)

References for DSPM for AI:
- Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
- Block Users From Sharing Sensitive Information to Unmanaged AI Apps Via Edge on Managed Devices (preview) | Microsoft Learn, as part of Scenario 7 of Create and deploy a data loss prevention policy | Microsoft Learn
- Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
- Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
- Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
- Downloadable whitepaper: Data Security for AI Adoption | Microsoft

Explore the roadmap for DSPM for AI: Public roadmap for
DSPM for AI - Microsoft 365 Roadmap | Microsoft 365

Welcome to the Microsoft Security Community!
Microsoft Security Community Hub | Protect it all with Microsoft Security

Eliminate gaps and get the simplified, comprehensive protection, expertise, and AI-powered solutions you need to innovate and grow in a changing world. The Microsoft Security Community is your gateway to connect, learn, and collaborate with peers, experts, and product teams. Gain access to technical discussions and webinars, and help shape Microsoft's security products.

Get there fast

To stay up to date on upcoming opportunities and the latest Microsoft Security Community news, make sure to subscribe to our email list. Find the latest skilling content and on-demand videos by subscribing to the Microsoft Security Community YouTube channel. Catch the latest announcements and connect with us on LinkedIn - Microsoft Security Community and Microsoft Entra Community. Read the latest in the Microsoft Security Community blog.

Upcoming Community Calls

April 2026

CANCELLED: Apr. 14 | 8:00am | Microsoft Sentinel | Using distributed content to manage your multi-tenant SecOps

Content distribution is a powerful multi-tenant feature that enables scalable management of security content across tenants. With this capability, you can create content distribution profiles in the multi-tenant portal that allow you to seamlessly replicate existing content, such as custom detection rules and endpoint security policies, from a source tenant to designated target tenants. Once distributed, the content runs on the target tenant, enabling centralized control with localized execution. This allows you to onboard new tenants quickly and maintain a consistent security baseline across tenants. In this session we'll walk through how you can use this new capability to scale your security operations.

Apr. 23 | 8:00am | Security Copilot Skilling Series | Getting started with Security Copilot

New to Security Copilot?
This session walks through what you actually need to get started, including E5 inclusion requirements and a practical overview of the core experiences and agents you will use on day one.

RESCHEDULED: Apr. 28 | 8:00am | Security Copilot Skilling Series | Security Copilot Agents, DSPM AI Observability, and IRM for Agents

This session covers how Microsoft Purview supports AI risk visibility and investigation through Data Security Posture Management (DSPM) and Insider Risk Management (IRM), alongside Security Copilot-powered agents. It will explain what AI Observability in DSPM is, as well as IRM for Agents in Copilot Studio and Azure AI Foundry. Attendees will learn about the IRM Triage Agent and DSPM Posture Agent and their deployment, and will gain an understanding of how DSPM and IRM capabilities can be leveraged to improve visibility, context, and response for AI-related data risks in Microsoft Purview.

Apr. 30 | 8:00am | Microsoft Security Community Presents | Purview Lightning Talks

Join the Microsoft Security Community for Purview Lightning Talks: quick technical sessions delivered by the community, for the community. You'll pick up practical Purview gems: must-know Compliance Manager tips, smart data security tricks, real-world scenarios, and actionable governance recommendations, all in one energizing event. Hear directly from Purview customers, partners, and community members and walk away with ideas you can put to work immediately. Register now; full agenda coming soon!
May 2026

May 12 | 9:00am | Microsoft Sentinel | Hyper-scale your SOC: Manage delegated access and role-based scoping in Microsoft Defender

In this session we'll discuss unified role-based access control (RBAC) and granular delegated admin privileges (GDAP) expansions, including how to use RBAC to:
- Allow multiple SOC teams to operate securely within a shared Sentinel environment
- Support granular, row-level access without requiring workspace separation
- Get consistent and reusable scope definitions across tables and experiences

And how to use GDAP to:
- Manage MSSPs and hyper-scaler organizations with delegated access to governed tenants within the Defender portal
- Manage delegated access for Sentinel

Looking for more? Join the Security Advisors!

As a Security Advisor, you'll gain early visibility into product roadmaps, participate in focus groups, and access private preview features before public release. You'll have a direct channel to share feedback with engineering teams, influencing the direction of Microsoft Security products. The program also offers opportunities to collaborate and network with fellow end users and Microsoft product teams. Join the Security Advisors program that best fits your interests: www.aka.ms/joincommunity.

Additional resources
- Microsoft Security Hub on Tech Community
- Virtual Ninja Training Courses
- Microsoft Security Documentation
- Azure Network Security GitHub
- Microsoft Defender for Cloud GitHub
- Microsoft Sentinel GitHub
- Microsoft Defender XDR GitHub
- Microsoft Defender for Cloud Apps GitHub
- Microsoft Defender for Identity GitHub
- Microsoft Purview GitHub

Authorization and Identity Governance Inside AI Agents
Designing Authorization-Aware AI Agents: Enforcing Microsoft Entra ID RBAC in Copilot Studio

As AI agents move from experimentation to enterprise execution, authorization becomes the defining line between innovation and risk. AI agents are rapidly evolving from experimental assistants into enterprise operators: retrieving user data, triggering workflows, and invoking protected APIs. While many early implementations rely on prompt-level instructions to control access, regulated enterprise environments require authorization to be enforced by identity systems, not language models. This article presents a production-ready, identity-first architecture for building authorization-aware AI agents using Copilot Studio, Power Automate, Microsoft Entra ID, and Microsoft Graph, ensuring every agent action executes strictly within the requesting user's permissions.

Why Prompt-Level Security Is Not Enough

Large language models interpret intent; they do not enforce policy. Even the most carefully written prompts cannot:
- Validate Microsoft Entra ID group or role membership
- Reliably distinguish delegated user identity from application identity
- Enforce deterministic access decisions
- Produce auditable authorization outcomes

Relying on prompts for authorization introduces silent security failures, over-privileged access, and compliance gaps, particularly in Financial Services, Healthcare, and other regulated industries. Authorization is not a reasoning problem. It is an identity enforcement problem.
Common Authorization Anti-Patterns in AI Agents

The following patterns frequently appear in early AI agent implementations and should be avoided in enterprise environments:
- Hard-coded role or group checks embedded in prompts
- Trusting group names passed as plain-text parameters
- Using application permissions for user-initiated actions
- Skipping verification of the user's Entra ID identity
- Lacking an auditable authorization decision point

These approaches may work in demos, but they do not survive security reviews, compliance audits, or real-world misuse scenarios.

Authorization-Aware Agent Architecture

In an authorization-aware design, the agent never decides access. Authorization is enforced externally, by identity-aware workflows that sit outside the language model's reasoning boundary.

High-Level Flow
1. The Copilot Studio agent receives a user request
2. The agent passes the User Principal Name (UPN) and intended action
3. A Power Automate flow validates permissions using Microsoft Entra ID via Microsoft Graph
4. Only authorized requests are allowed to proceed
5. Unauthorized requests fail fast with a deterministic outcome

An authorization-aware Copilot Studio architecture enforces Entra ID RBAC before executing any business action. The agent orchestrates intent; identity systems enforce access.

Enforcing Entra ID RBAC with Microsoft Graph

Power Automate acts as the authorization enforcement layer:
1. Resolve user identity from the supplied UPN
2. Retrieve group or role memberships using Microsoft Graph
3. Normalize and compare memberships against approved RBAC groups
4. Explicitly deny execution when authorization fails

This keeps authorization logic centralized, deterministic, auditable, and independent of the AI model.

Reference Implementation: Power Automate RBAC Enforcement Flow

The following import-ready Power Automate cloud flow demonstrates a secure RBAC enforcement pattern for Copilot Studio agents. It validates Microsoft Entra ID group membership before allowing any business action.
Scenario
- Trigger: User-initiated agent action
- Identity model: Delegated user identity
- Input: userUPN, requestedAction
- Outcome: Authorized or denied based on Entra ID RBAC

```json
{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
  "contentVersion": "1.0.0.0",
  "triggers": {
    "Copilot_Request": {
      "type": "Request",
      "kind": "Http",
      "inputs": {
        "schema": {
          "type": "object",
          "properties": {
            "userUPN": { "type": "string" },
            "requestedAction": { "type": "string" }
          },
          "required": [ "userUPN" ]
        }
      }
    }
  },
  "actions": {
    "Get_User_Groups": {
      "type": "Http",
      "inputs": {
        "method": "GET",
        "uri": "https://graph.microsoft.com/v1.0/users/@{triggerBody()?['userUPN']}/memberOf?$select=displayName",
        "authentication": { "type": "ManagedServiceIdentity" }
      }
    },
    "Normalize_Group_Names": {
      "type": "Select",
      "inputs": {
        "from": "@body('Get_User_Groups')?['value']",
        "select": { "groupName": "@toLower(item()?['displayName'])" }
      },
      "runAfter": { "Get_User_Groups": [ "Succeeded" ] }
    },
    "Check_Authorization": {
      "type": "Condition",
      "expression": "@contains(body('Normalize_Group_Names'), 'ai-authorized-users')",
      "runAfter": { "Normalize_Group_Names": [ "Succeeded" ] },
      "actions": {
        "Authorized_Action": {
          "type": "Compose",
          "inputs": "User authorized via Entra ID RBAC"
        }
      },
      "else": {
        "actions": {
          "Access_Denied": {
            "type": "Terminate",
            "inputs": {
              "status": "Failed",
              "message": "Access denied. User not authorized via Entra ID RBAC."
            }
          }
        }
      }
    }
  }
}
```

This pattern enforces authorization outside the agent, aligns with Zero Trust principles, and creates a clear audit boundary suitable for enterprise and regulated environments.
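The authorization decision in the flow reduces to a case-insensitive membership test over the Graph memberOf response. As a rough, non-authoritative sketch of that same decision logic in Python (the group name ai-authorized-users and the payload shape mirror the flow example; this is an illustration of the logic, not the flow itself):

```python
def normalize_group_names(member_of_response: dict) -> list[str]:
    """Mirror the flow's Select action: lowercase each group displayName
    from a Graph /memberOf response body."""
    return [
        (entry.get("displayName") or "").lower()
        for entry in member_of_response.get("value", [])
    ]


def is_authorized(member_of_response: dict,
                  approved_group: str = "ai-authorized-users") -> bool:
    """Mirror the flow's Condition action: allow only if the approved
    RBAC group appears in the user's normalized memberships."""
    return approved_group in normalize_group_names(member_of_response)


# Payload shaped like a Graph memberOf response (illustrative only).
graph_response = {
    "value": [
        {"displayName": "Finance-Readers"},
        {"displayName": "AI-Authorized-Users"},
    ]
}

print(is_authorized(graph_response))  # True: group matches after normalization
```

Keeping the comparison normalized (lowercased on both sides) is what makes the check deterministic; comparing raw display names, as in the anti-patterns above, breaks on casing alone.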
Flow Diagram: Agent Integrated with RBAC Authorization Flow and Sample Prompt

Execution: Delegated vs Application Permissions

Scenario | Recommended Permission Model
User-initiated agent actions | Delegated permissions
Background or system automation | Application permissions

Using delegated permissions ensures agent execution remains strictly within the requesting user's identity boundary.

Auditing and Compliance Benefits
- Deterministic and explainable authorization decisions
- Centralized enforcement aligned with identity governance
- Clear audit trails for security and compliance reviews
- Readiness for SOC, ISO, PCI, and FSI assessments

Enterprise Security Takeaways
- Authorization belongs in Microsoft Entra ID, not prompts
- AI agents must respect enterprise identity boundaries
- Copilot Studio + Power Automate + Microsoft Graph enable secure-by-design AI agents

By treating AI agents as first-class enterprise actors and enforcing authorization at the identity layer, organizations can scale AI adoption with confidence, trust, and compliance.

Microsoft Sentinel MCP server - Generally Available With Exciting New Capabilities
Today, we're excited to announce the general availability of Microsoft Sentinel MCP (Model Context Protocol) server, a fully managed cloud service built on an open standard that empowers AI agents to seamlessly access your entire security context through natural language, eliminating the need for complex data engineering as you build agents. This unlocks new levels of AI agent performance and effectiveness, enabling them to do more for you.

Since the public preview launch on September 30, hundreds of customers have explored MCP tools that provide semantic access to their entire security context. These tools allow security AI agents to operate with unprecedented precision by understanding your unique security context in natural language. Today, we're introducing multiple innovations and new capabilities designed to help even more customers unlock more with AI-driven security. This post offers a high-level overview of what's new; stay tuned for deep-dive blogs that will unpack each feature in detail.

Connect to Sentinel MCP server from Multiple AI Platforms

By adopting the MCP open standard, we can progress on our mission to empower effective AI agents wherever you choose to run them. Beyond Security Copilot and VS Code GitHub Copilot, Sentinel MCP server is now natively integrated with Copilot Studio and Microsoft Foundry agent-building experiences. When creating an agent in any of these platforms, you can easily select Sentinel MCP tools, with no pre-configuration required. It's ready to use, so if you are using any of these platforms, dive in and give it a try. Click here for detailed guidance.

Additionally, you can now connect OpenAI ChatGPT to Sentinel MCP server through secured OAuth authentication with a simple configuration in Entra.
Learn how here.

Custom KQL Tools

Many organizations rely on a curated library of KQL queries for incident triage, investigation, and threat hunting, used in manual Standard Operating Procedures (SOPs) or SOAR playbooks and often managed within Defender Advanced Hunting. Now, with Sentinel MCP server, you can instantly transform these saved KQL queries into custom tools with just a click. This new capability allows you to empower your AI agents with precise, actionable data tailored to your unique security workflows. Once a KQL query is saved as a tool, Sentinel MCP server automatically creates and maintains a corresponding MCP tool, ensuring it's always in sync with the latest version of your saved query in Defender Advanced Hunting. Any connected agent can invoke this tool, confident it reflects your most current logic and requirements. Learn more here.

Entity Analyzer

Assessing the risk of entities is a core task for SOC teams, whether triaging incidents, investigating threats, or automating response workflows. Traditionally, this has required building complex playbooks or custom logic to gather and analyze fragmented security data from multiple sources. With entity analyzer, this complexity is eliminated. The tool leverages your organization's security data in Sentinel to deliver comprehensive, reasoned risk assessments for any entity your agents encounter, starting with users and URLs. By providing a unified, out-of-the-box solution for entity analysis, entity analyzer enables your AI agents to make smarter decisions and automate more tasks, without the need to manually engineer risk evaluation logic for each entity type. This not only accelerates agent development, but also ensures your agents are always working with the most relevant and up-to-date context from across your security environment. Entity Analyzer is now available to any MCP client integrated with Sentinel MCP server.
And for those building SOAR workflows, entity analyzer is natively integrated with Logic Apps, making it easy to enrich entities and automate verdicts within your playbooks. Learn how to build a Logic Apps playbook with Entity Analyzer.

Graph Tools

Microsoft Sentinel graph connects assets, identities, activities, and threat intelligence into a unified security graph, uncovering insights that structured data alone can't provide, such as relationships, blast radius, and attack paths. The graph is now generally available, and these advanced insights can be accessed by AI agents in natural language through a dedicated set of MCP tools. Graph MCP tools are offered in a sign-up preview.

Triage Incidents and Alerts

Sentinel MCP server now extends natural language access to a set of APIs that enable incident and alert triage. AI agents can use these tools to carry out autonomous triage and investigation of Defender XDR and Sentinel alerts and incidents. In the next couple of weeks, it will be available, out of the box, to all customers using Microsoft Defender XDR, Microsoft Sentinel, or Microsoft Defender for Endpoint. Stay tuned.

Smarter Security, Less Effort

With the latest innovations in Sentinel MCP server, security teams can now harness the full power of AI-driven automation with unprecedented simplicity and impact. From seamless integration with leading AI platforms to instant creation of custom KQL tools and out-of-the-box entity analysis, Sentinel MCP server empowers your agents to deliver smarter, faster, and more effective security outcomes. These advancements eliminate manual complexity, accelerate agent development, and ensure your SOC is always equipped with the most relevant context. Currently, features like entity analysis are available at no additional charge; as we continue to evolve the platform, we'll share updates on future pricing well in advance.
Try out the new features today and stay tuned for deep-dive updates as we continue to push the boundaries of AI-powered security automation. Learn how to get started.

Graph RAG for Security: Insights from a Microsoft Intern
As a software engineering intern at Microsoft Security, I had the exciting opportunity to explore how Graph Retrieval-Augmented Generation (Graph RAG) can enhance data security investigations. This blog post shares my learning journey and insights from working with this evolving technology.

Strengthen your data security posture in the era of AI with Microsoft Purview
Organizations face challenges with fragmented data security solutions and the amplified risks due to generative AI. We are now introducing Microsoft Purview Data Security Posture Management (DSPM) in public preview, which provides comprehensive visibility into sensitive data, contextual insights, and continuous risk assessment. DSPM is integrated with Microsoft 365 and Windows devices, leveraging generative AI through Security Copilot for deeper investigations and efficient risk management, and provides several capabilities across centralized visibility, actionable policy recommendations, and continuous risk assessment to enhance data security.

Announcing Microsoft Sentinel Model Context Protocol (MCP) server – Public Preview
Security is being reengineered for the AI era, moving beyond static, rule-bound controls and after-the-fact response toward platform-led, machine-speed defense. The challenge is clear: fragmented tools, sprawling signals, and legacy architectures that can't match the velocity and scale of modern attacks. What's needed is an AI-ready, data-first foundation, one that turns telemetry into a security graph, standardizes access for agents, and coordinates autonomous actions while keeping humans in command of strategy and high-impact investigations.

Security teams already center operations on their SIEM for end-to-end visibility, and we're advancing that foundation by evolving Microsoft Sentinel into both the SIEM and the platform for agentic defense, connecting analytics and context across ecosystems. And now, we're introducing new platform capabilities that build on Sentinel data lake: Sentinel graph for deeper insight and context; an MCP server and tools to make data agent-ready; new developer capabilities; and Security Store for effortless discovery and deployment, so protection accelerates to machine speed while analysts do their best work.

Introducing Sentinel MCP server

We're excited to announce the public preview of Microsoft Sentinel MCP (Model Context Protocol) server, a fully managed cloud service built on an open standard that lets AI agents seamlessly access the rich security context in your Sentinel data lake. Recent advances in large language models have enabled AI agents to perform reasoning, breaking down complex tasks, inferring patterns, and planning multistep actions, making them capable of autonomously performing business processes. To unlock this potential in cybersecurity, agents must operate with your organization's real security context, not just public training data.
Sentinel MCP server solves that by providing standardized, secure access to that context, across graph relationships, tabular telemetry, and vector embeddings, via reusable, natural language tools, enabling security teams to unlock the full potential of AI-driven automation and focus on what matters most.

Why Model Context Protocol (MCP)?

Model Context Protocol (MCP) is a rapidly growing open standard that allows AI models to securely communicate with external applications, services, and data sources through a well-structured interface. Think of MCP as a bridge that lets an AI agent understand and invoke an application's capabilities. These capabilities are exposed as discrete "tools" with natural language inputs and outputs. The AI agent can autonomously choose the right tool (or combination of tools) for the task it needs to accomplish.

In simpler terms, MCP standardizes how an AI talks to systems. Instead of developers writing custom connectors for each application, the MCP server presents a menu of available actions to the AI in a language it understands. This means an AI agent can discover what it can do (search data, run queries, trigger actions, etc.) and then execute those actions safely and intelligently. By adopting an open standard like MCP, Microsoft is ensuring that our AI integrations are interoperable and future-proof. Any AI application that speaks MCP can connect: Security Copilot offers built-in integration, while other MCP-compatible platforms can quickly connect to your Sentinel data and services by simply adding a new MCP server and entering Sentinel's MCP server URL.

How to Get Started

Sentinel MCP server is a fully managed service now available to all Sentinel data lake customers. If you are already onboarded to Sentinel data lake, you are ready to begin using MCP. Not using Sentinel data lake yet? Learn more here. Currently, you can connect to the Sentinel MCP server using Visual Studio Code (VS Code) with the GitHub Copilot add-on.
Here's a step-by-step guide:
1. Open VS Code and authenticate with an account that has at least Security Reader role access (required to query the data lake via the Sentinel MCP server)
2. Open the Command Palette in VS Code (Ctrl + Shift + P)
3. Type or select "MCP: Add Server…"
4. Choose "HTTP" (HTTP or Server-Sent Events)
5. Enter the Sentinel MCP server URL: https://sentinel.microsoft.com/mcp/data-exploration
6. When prompted, allow authentication with the Sentinel MCP server by clicking "Allow"

Once connected, GitHub Copilot will be linked to Sentinel MCP server. Open the agent pane, set it to Agent mode (Ctrl + Shift + I), and you are ready to go. GitHub Copilot will autonomously identify and utilize Sentinel MCP tools as necessary. You can now experience how AI agents powered by the Sentinel MCP server access and utilize your security context using natural language, without knowing KQL, which tables to query, or how to wrangle complex schemas. Try prompts like:
- "Find the top 3 users that are at risk and explain why they are at risk."
- "Find sign-in failures in the last 24 hours and give me a brief summary of key findings."
- "Identify devices that showed an outstanding amount of outgoing network connections."

To learn more about the existing capabilities of Sentinel MCP tools, refer to our documentation. Security Copilot will also feature native integration with Sentinel MCP server; this seamless connectivity will enhance autonomous security agents and the open prompting experience. Check out the Security Copilot blog for additional details.

What's coming next?

The public preview of Sentinel MCP server marks the beginning of a new era in cybersecurity, one where AI agents operate with full context, precision, and autonomy. In the coming months, the MCP toolset will expand to support natural language access across tabular, graph, and embedding-based data, enabling agents to reason over both structured and unstructured signals.
This evolution will dramatically boost agentic performance, allowing agents to handle more complex tasks, surface deeper insights, and accelerate protection to machine speed. As we continue to build out this ecosystem, your feedback will be essential in shaping its future. Be sure to check out Microsoft Secure for a deeper dive and live demo of Sentinel MCP server in action.

Modern, unified data security in the AI era: New capabilities in Microsoft Purview
AI is transforming how organizations work, but it's also changing how data moves, who can access it, and how easily it can be exposed. Sensitive data now appears in AI prompts, Copilot responses, and across a growing ecosystem of SaaS and GenAI tools. To keep up, organizations need data security that's built for how people work with AI today. Microsoft Purview brings together native classification, visibility, protection, and automated workflows across your data estate, all in one integrated platform. Today, we're highlighting some of our new capabilities that help you:
- Uncover data blind spots: Discover hidden risks, improve data security posture, and find sensitive data on endpoints with on-demand classification
- Strengthen protection across data flows: Enhance oversharing controls for Microsoft 365 Copilot, expand protection to more Azure data sources, and extend data security to the network layer
- Respond faster with automation: Automate investigation workflows with alert agents in Data Loss Prevention (DLP) and Insider Risk Management (IRM)

Discover hidden risks and improve data security posture

Many security teams struggle with fragmented tools that silo sensitive-data visibility across apps and clouds. According to recent studies, 21% of decision-makers cite the lack of unified visibility as a top barrier to effective data security. This leads to gaps in protection and inefficient incident response, ultimately weakening the organization's overall data security posture. To help organizations address these challenges, last November at Ignite we launched Microsoft Purview Data Security Posture Management (DSPM), and we're excited to share that this capability is now available. DSPM continuously assesses your data estate, surfaces contextual insights into sensitive data and its usage, and recommends targeted controls to reduce risk and strengthen your data security program.
We're also bringing new signals from email exfiltration and from user activity in the browser and network into DSPM's insights and policy recommendations, making sure organizations can improve their protections and address potential data security gaps. You can now also go deeper in DSPM investigations with 3x more suggested prompts, outcome-based promptbooks, and a new guidance experience that helps interpret unsupported user queries and offers helpful alternatives, increasing usability without hard stops.

New Security Copilot task-based promptbooks in Purview DSPM

Learn more about how DSPM can help your organization strengthen your data security posture.

Find sensitive data on endpoints with on-demand classification

Security teams often struggle to uncover sensitive data that has been sitting for a long time on endpoints, one of the most overlooked and unmanaged surfaces in the data estate. Typically, data gets classified when a file is created, modified, or accessed. As a result, older data at rest that hasn't been touched in a while can remain outside the scope of classification workflows. This lack of visibility increases the risk of exposure, especially for sensitive data that is not actively used or monitored. To tackle this challenge, we are introducing on-demand classification for endpoints. Coming to public preview in July, on-demand classification for endpoints gives security teams a targeted way to scan data at rest on Windows devices, without relying on file activity, to uncover sensitive files that have never been classified or reviewed.
This means you can:

- Discover sensitive data on endpoints, including older, unclassified data that may never have been scanned, giving admins visibility into files that typically fall outside traditional classification workflows
- Support audit and compliance efforts by identifying sensitive data
- Focus scans on specific users, file types, or timelines to get the visibility that really matters
- Get the insights needed to prioritize remediation or protection strategies

Security teams can define where and what to scan by selecting specific users, file types, or last-modified dates. This lets teams prioritize scans for high-priority scenarios, such as users who handle sensitive data. Because on-demand classification scans are manually triggered and scoped, organizations can get targeted visibility into sensitive data on endpoints with minimal performance impact and without complex setup.

Complements just-in-time protection

On-demand classification for endpoints also works hand in hand with existing endpoint DLP capabilities like just-in-time (JIT) protection:

- JIT protection kicks in during file access, blocking or alerting based on real-time content evaluation
- On-demand classification works ahead of time, identifying sensitive data that hasn't been modified or accessed in an extended period

Used together, they form a layered endpoint protection strategy that delivers both visibility and protection.

Choosing the right tool

On-demand classification for endpoints is purpose-built for discovering sensitive data at rest on endpoints, especially files that haven't been accessed or modified for a long time. It gives admins targeted visibility, with no user action required. If you're looking to apply labels, enforce protection policies, or scan files stored on on-premises servers, the Microsoft Purview Information Protection Scanner may be a better fit.
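To make the scoping idea concrete, here is a minimal, illustrative sketch of a scoped scan over data at rest: limit by file type and last-modified cutoff, then check content against sensitive-data patterns. The patterns and parameters are assumptions for illustration only, not Purview's built-in classifiers or implementation.

```python
import re
from pathlib import Path

# Illustrative sensitive-data patterns (assumptions, not Purview's classifiers)
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scoped_scan(root, extensions=frozenset({".txt", ".csv"}), modified_before=None):
    """Scan files at rest under `root`, limited to the given file types and,
    optionally, to files last modified before a cutoff timestamp (stale data)."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in extensions:
            continue
        if modified_before is not None and path.stat().st_mtime >= modified_before:
            continue  # keep only files untouched since the cutoff
        text = path.read_text(errors="ignore")
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), name))
    return findings
```

Because the scan is explicitly scoped up front (extensions, cutoff date), it touches only the files that matter, which is the same principle that keeps the product's targeted scans low-impact.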
It is designed for ongoing policy enforcement and label application across your hybrid environment. Learn more here.

Get started with on-demand classification

On-demand classification is easy to set up, with no agents to install or complex rules to configure. It runs only when you choose, rather than continuously in the background. You stay in control of when and where scans happen, making it a simple and efficient way to extend visibility to endpoints. On-demand classification for endpoints enters public preview in July. Stay tuned for setup guidance and more details as we get closer to launch.

Streamlining technical issue resolution with always-on diagnostics for endpoint devices

Historically, resolving technical support tickets for Purview DLP required admins to manually collect logs and ask end users to reproduce the original issue at the time of the request. This could lead to delays, extended resolution times, and repeated communication cycles, especially for issues that are hard to reproduce. Today, we're introducing a new way to capture and share endpoint diagnostics: always-on diagnostics, available in public preview. When submitting support requests for Purview endpoint DLP, customers can now share rich diagnostic data with Microsoft without needing to recreate the issue at the time of filing a support ticket. This capability can be enabled through your endpoint DLP settings. Learn more about always-on diagnostics here.

Strengthening DLP for Microsoft 365 Copilot

As organizations adopt Microsoft 365 Copilot, DLP plays a critical role in minimizing the risk of sensitive data exposure through AI. New enhancements give security teams greater control, visibility, and flexibility when protecting sensitive content in Copilot scenarios.

Expanded protection to labeled emails

DLP for Microsoft 365 Copilot now supports labeled email, available today, in addition to files in SharePoint and OneDrive.
This helps prevent sensitive emails from being processed by Copilot and used as grounding data. This capability applies to emails sent after January 1, 2025.

Alerts and investigations for Copilot access attempts

Security teams can now configure DLP alerts for Microsoft 365 Copilot activity, surfacing attempts to access emails or files with sensitivity labels that match DLP policies. Alert reports include key details such as user identity, policy match, and file name, enabling admins to quickly assess what happened, determine whether further investigation is needed, and take appropriate follow-up actions. Admins can also choose to notify users directly, reinforcing responsible data use. The rollout starts on June 30 and is expected to be completed by the end of July.

Simulation mode for Copilot DLP policies

As part of the rollout starting June 30, simulation mode lets admins test Copilot-specific DLP policies before enforcement. By previewing matches without impacting users, security teams can fine-tune rules, reduce false positives, and deploy policies with greater confidence. Learn more about DLP for Microsoft 365 Copilot here.

Extended protection to more Azure data sources

AI development is only as secure as the data that feeds it. That's why Microsoft Purview Information Protection is expanding its auto-labeling capabilities to cover more Azure data sources. Now in public preview, security teams can automatically apply sensitivity labels to additional Azure data sources, including Azure Cosmos DB, PostgreSQL, KustoDB, MySQL, Azure Files, Azure Databricks, Azure SQL Managed Instances, and Azure Synapse. These additions build on existing coverage for Azure Blob Storage, Azure Data Lake Storage, and Azure SQL Database. These sources commonly fuel analytics pipelines and AI training workloads. With auto-labeling extended to more high-value data sources, sensitivity labels are applied to the data before it's copied, shared, or integrated into downstream systems.
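As a conceptual sketch of why labeling at the source matters, the following toy example attaches a sensitivity label to records before they flow into a downstream pipeline, so every copy carries the label with it. The rule names, regexes, and field names here are illustrative assumptions, not Purview's sensitive information types or APIs.

```python
import re

# Illustrative labeling rules, ordered from highest to lowest sensitivity
# (assumptions for this sketch; real auto-labeling uses Purview's classifiers)
RULES = [
    ("Highly Confidential", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),      # SSN-like
    ("Confidential", re.compile(r"@example\.com\b", re.IGNORECASE)),    # contact data
]

def auto_label(record: dict) -> dict:
    """Return a copy of the record with a sensitivity label attached,
    chosen by the first matching rule (highest sensitivity wins)."""
    text = " ".join(str(v) for v in record.values())
    label = "General"
    for candidate, pattern in RULES:
        if pattern.search(text):
            label = candidate
            break
    return {**record, "sensitivity_label": label}

def export_for_training(records):
    """Label at the source, then let only non-sensitive records flow downstream."""
    labeled = [auto_label(r) for r in records]
    return [r for r in labeled if r["sensitivity_label"] == "General"]
```

The design point is the ordering: classification happens before export, so downstream consumers (analytics jobs, AI training pipelines) never see unlabeled sensitive data.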
These labels help enforce protection policies and limit unauthorized access, ensuring sensitive data is handled appropriately across apps and AI workflows. Secure your AI training data: learn how to set up auto-labeling here.

Extending data security to the network layer

With more sensitive data moving through unmanaged SaaS apps and personal AI tools, your network is now a critical security surface. Earlier this year, we announced Purview data security controls for the network layer. With inline data discovery for the network, organizations can detect sensitive data moving outside the trusted boundaries of the organization, such as to unmanaged SaaS apps and cloud services. This helps admins understand how sensitive data can be intentionally or inadvertently exfiltrated to personal instances of apps, unsanctioned GenAI apps, cloud storage, and more. This capability is now available in public preview; learn more here.

Visibility into sensitive data sent through the network also includes insights into how users may be sharing data in risky ways. User activities such as file uploads or AI prompt submissions are captured in Insider Risk Management to build richer, more comprehensive profiles of user risk. In turn, these signals will better contextualize future data interactions and enrich policy verdicts. These user risk indicators will become available in the coming weeks.

Automate investigation workflows with Alert Triage Agents in DLP and IRM

Security teams today face a high volume of alerts, often spending hours sorting through false positives and low-priority flags to find the threats that matter. To help security teams focus on what's truly high risk, we're excited to share that the Alert Triage Agents in Microsoft Purview Data Loss Prevention (DLP) and Insider Risk Management (IRM) are now available in public preview. These autonomous, Security Copilot-powered agents prioritize the alerts that pose the greatest risk to your organization.
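The triage concept can be pictured as a scoring pass that weighs an alert's severity, its content sensitivity, and intent signals, then sorts the queue by risk. This is a deliberately simple toy sketch with made-up fields and weights; the actual agents reason over content and intent with Security Copilot, not a fixed formula.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    user: str
    severity: int           # 1 (low) .. 3 (high) — assumed field
    sensitive_matches: int  # count of sensitive-info matches in the content
    exfil_channel: bool     # e.g. upload to an unsanctioned app (intent signal)

def triage_score(alert: Alert) -> int:
    """Toy risk score combining severity, content sensitivity, and intent."""
    score = alert.severity * 10
    score += min(alert.sensitive_matches, 5) * 4  # cap content contribution
    if alert.exfil_channel:
        score += 25
    return score

def prioritize(alerts):
    """Highest-risk alerts first, so analysts see the ones that matter most."""
    return sorted(alerts, key=triage_score, reverse=True)
```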
Whether it's identifying high-impact exfiltration attempts in DLP or surfacing potential insider threats in IRM, the agents analyze both content and intent to deliver transparent, explainable findings. Built to learn from user feedback, these agents not only accelerate investigations but also improve over time, empowering teams to prioritize real threats, reduce time spent on false positives, and adapt to evolving risks. Watch the new Mechanics video, or learn more about how to get started here.

A unified approach to modern data security

Disjointed security tools create gaps and increase operational overhead. Microsoft Purview offers a unified data security platform designed to keep pace with how your organization works with AI today. From endpoint visibility to automated security workflows, Purview unifies data security across your estate, giving you one platform for end-to-end data security. As your data estate grows and AI reshapes the way you work, Purview helps you stay ahead, so you can scale securely, reduce risk, and unlock the full productivity potential of AI with confidence.

Ready to unify your data security into one integrated platform? Try Microsoft Purview free for 90 days.