Security Copilot Skilling Series
Starting this October, Security Copilot joins forces with your favorite Microsoft Security products in a skilling series miles above the rest. The Security Copilot Skilling Series is your opportunity to strengthen your security posture through threat detection, incident response, and leveraging AI for security automation. These technical skilling sessions are delivered live by experts from our product engineering teams. Come ready to learn, engage with your peers, ask questions, and provide feedback. Upcoming sessions are noted below and will be available on-demand on the Microsoft Security Community YouTube channel.

Coming Up

October 30 | What's New in Copilot in Microsoft Intune
Speaker: Amit Ghodke, Principal PM Architect, CxE CAT MEM
Join us to learn about the latest Security Copilot capabilities in Microsoft Intune. We will discuss what's new and how you can supercharge your endpoint management experience with the new AI capabilities in Intune. Save the date; registration opens soon.

November 13 | Microsoft Entra AI: Unlocking Identity Intelligence with Security Copilot Skills and Agents
Speakers: Mamta Kumar, Sr. Product Manager; Rahul Prakash, Principal Product Manager, AI Innovations; Chad Hasbrook, Sr. Product Manager, IDNA
This session will demonstrate how Security Copilot in Microsoft Entra transforms identity security by introducing intelligent, autonomous capabilities that streamline operations and elevate protection. Customers will discover how to leverage AI-driven tools to optimize Conditional Access, automate access reviews, and proactively manage identity and application risks, empowering them to move toward a more secure and efficient digital future.

Many more sessions are coming soon; please stand by for an updated schedule. Click "Follow" in the upper right of this article to be notified of updates.
Now On-Demand

October 16 | What's New in Copilot in Microsoft Purview
Speaker: Patrick David, Principal Product Manager, CxE CAT Compliance
Join us for an insider's look at the latest innovations in Microsoft Purview, where alert triage agents for DLP and IRM are transforming how we respond to sensitive data risks and improving investigation depth and speed. We'll also dive into powerful new capabilities in Data Security Posture Management (DSPM) with Security Copilot, designed to supercharge your security insights and automation. Whether you're driving compliance or defending data, this session will give you the edge.

October 9 | When to Use Logic Apps vs. Security Copilot Agents
Speaker: Shiv Patel, Sr. Product Manager, Security Copilot
Explore how to scale automation in security operations by comparing the use cases and capabilities of Logic Apps and Security Copilot Agents. This webinar highlights when to leverage Logic Apps for orchestrated workflows and when Security Copilot Agents offer more adaptive, AI-driven responses to complex security scenarios.

All sessions will be published to the Microsoft Security Community YouTube channel in the Security Copilot Skilling Series playlist.

Looking for more? Keep up on the latest information on the Security Copilot Blog. Join the Microsoft Security Community mailing list to stay up to date on the latest product news and events. Engage with your peers in one of our Microsoft Security discussion spaces.

Introducing developer solutions for Microsoft Sentinel platform
Security is being reengineered for the AI era, moving beyond static, rule-bound controls and after-the-fact response toward platform-led, machine-speed defense. The challenge is clear: fragmented tools, sprawling signals, and legacy architectures that can’t match the velocity and scale of modern attacks. What’s needed is an AI-ready, data-first foundation: one that turns telemetry into a security graph, standardizes access for agents, and coordinates autonomous actions while keeping humans in command of strategy and high-impact investigations.

Security teams already center operations on their SIEM for end-to-end visibility, and we’re advancing that foundation by evolving Microsoft Sentinel into both the SIEM and the platform for agentic defense—connecting analytics and context across ecosystems. And today, we’re introducing new platform capabilities that build on Sentinel data lake: Sentinel graph for deeper insight and context; Sentinel MCP server and tools to make data agent-ready; new developer capabilities; and Security Store for effortless discovery and deployment—so protection accelerates to machine speed while analysts do their best work.

Today, customers use a breadth of solutions to keep themselves secure. Each solution typically ingests, processes, and stores the security data it needs, which means applications maintain identical copies of the same underlying data. This is painful for both customers and partners: partners don’t want to build and maintain duplicate infrastructure, and the resulting data silos make it difficult to counter sophisticated attacks. With today’s announcement, we’re directly addressing those challenges by giving partners the ability to create solutions that reason over the single copy of the security data in each customer’s Sentinel data lake instance.
Partners can create AI solutions that use Sentinel and Security Copilot and distribute them in Microsoft Security Store to reach new audiences, grow their revenue, and keep their customers safe. Sentinel already has a rich partner ecosystem with hundreds of SIEM solutions that include connectors, playbooks, and other content types. These new platform capabilities extend those solutions, creating opportunities for partners to address new scenarios and bring solutions to market quickly, since they don’t need to build complex data pipelines or store and process new data sets in their own infrastructure.

For example, partners can use Sentinel connectors to bring their own data into the Sentinel data lake. They can create Jupyter notebook jobs in the updated Sentinel Visual Studio Code extension to analyze that data, or take advantage of the new Model Context Protocol (MCP) server, which makes the data understandable and accessible to AI agents in Security Copilot. With Security Copilot’s new vibe-coding capabilities, partners can create their agent in the same Sentinel Visual Studio Code extension or the environment of their choice. The solution can then be packaged and published to the new Microsoft Security Store, which gives partners an opportunity to expand their audience and grow their revenue while protecting more customers across the ecosystem.

These capabilities are being embraced across our ecosystem by mature and emerging partners alike. Services partners such as Accenture and ISVs such as Zscaler and ServiceNow are already creating solutions that leverage the capabilities of the Sentinel platform. Partners have already brought several solutions to market using the integrated capabilities of the Sentinel platform:

Illumio. Illumio for Microsoft Sentinel combines Illumio Insights with Microsoft Sentinel data lake and Security Copilot to revolutionize detection and response to cyber threats.
It fuses data from Illumio and all the other sources feeding into Sentinel to deliver a unified view of threats, giving SOC analysts, incident responders, and threat hunters visibility and AI-driven breach containment capabilities for lateral traffic threats and attack paths across hybrid and multi-cloud environments. To learn more, visit Illumio for Microsoft Sentinel.

OneTrust. OneTrust’s AI-ready governance platform enables 14,000 customers globally, including over half of the Fortune 500, to accelerate innovation while ensuring responsible data use. Privacy and risk teams know that undiscovered personal data in their digital estate puts their business and customers at risk. OneTrust’s Privacy Risk Agent uses Security Copilot, Purview scan logs, Entra ID data, and Jupyter notebook jobs in the Sentinel data lake to automatically discover personal data, assess risk, and take mitigating actions. To learn more, visit here.

Tanium. The Tanium Security Triage Agent accelerates alert triage using real-time endpoint intelligence from Tanium. Tanium intends to expand its agent to ingest contextual identity data from Microsoft Entra using Sentinel data lake. Discover how Tanium’s integrations empower IT and security teams to make faster, more informed decisions.

Simbian. Simbian’s Threat Hunt Agent makes hunters more effective by automating the validation of threat hunt hypotheses with AI. Threat hunters provide a hypothesis in natural language, and the Agent queries and analyzes the full breadth of data available in Sentinel data lake to validate the hypothesis and perform deep investigation. Simbian’s AI SOC Agent investigates and responds to security alerts from Sentinel, Defender, and other alert sources, and also uses Sentinel data lake to enhance the depth of investigations. Learn more here.

Lumen. Lumen’s Defender℠ Threat Feed for Microsoft Sentinel helps customers correlate known-bad artifacts with activity in their environment.
Lumen’s Black Lotus Labs® harnesses unmatched network visibility and machine intelligence to produce high-confidence indicators that can be operationalized at scale for detection and investigation. Currently, Lumen’s Defender℠ Threat Feed for Microsoft Sentinel is available as an invite-only preview. To request an invite, reach out to the Lumen Defender Threat Feed Sales team.

The updated Visual Studio Code extension for Microsoft Sentinel

The Sentinel extension for Visual Studio Code brings new AI and packaging capabilities on top of existing Jupyter notebook jobs to help developers efficiently create new solutions.

Building with AI

Impactful AI security solutions need access to, and an understanding of, the security data relevant to a scenario. The new Microsoft Sentinel Model Context Protocol (MCP) server makes data in Sentinel data lake discoverable and understandable to AI agents so they can reason over it to generate powerful new insights. It integrates with the Sentinel VS Code extension, so developers can use those tools to explore the data in the lake and have agents use the same tools as they do their work. To learn more, read the Microsoft Sentinel MCP server announcement.

Microsoft is also releasing MCP tools to make creating AI agents more straightforward. Developers can use Security Copilot’s MCP tools to create agents within either the Sentinel VS Code extension or the environment of their choice. They can also take advantage of the low-code agent authoring experience right in the Security Copilot portal. To learn more about the Security Copilot pro-code and low-code agent authoring experiences, visit the Security Copilot blog post on building your own Security Copilot agents.

Jupyter Notebook Jobs

Jupyter notebook jobs are an important part of the Sentinel data lake and were launched with our public preview a couple of months ago. See the documentation here for more details on Jupyter notebook jobs and how they can be used in a solution.
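As an illustration of the kind of analysis a notebook job might perform, here is a minimal, self-contained pandas sketch. The data, table shape, and column names are hypothetical stand-ins rather than the actual Sentinel data lake schema, and the step that writes results back to the lake is omitted:

```python
# Conceptual sketch of a notebook-job-style analysis. The DataFrame below is a
# hypothetical stand-in for sign-in telemetry a job might read from the lake.
import pandas as pd

signins = pd.DataFrame({
    "UserPrincipalName": ["alice@contoso.com", "alice@contoso.com",
                          "alice@contoso.com", "bob@contoso.com",
                          "carol@contoso.com"],
    "ResultType": [0, 50126, 50126, 50126, 0],  # 0 = success
})

# Aggregate failures per user: the kind of triage summary a job could write
# back to the data lake for agents to read later.
failures = (signins[signins["ResultType"] != 0]
            .groupby("UserPrincipalName")
            .size()
            .rename("FailureCount")
            .reset_index()
            .sort_values("FailureCount", ascending=False))

print(failures.to_string(index=False))
```

In a real job the input would come from a data lake table and the summary would be persisted, but the shape of the analysis is the same.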
Note that when jobs write to the data lake, agents can use the Sentinel MCP tools to read and act on those results in the same way they’re able to read any data in the data lake.

Packaging and Publishing

Developers can now package solutions containing notebook jobs and Copilot agents so they can be distributed through the new Microsoft Security Store. With just a few clicks in the Sentinel VS Code extension, a developer can create a package which they can then upload to Security Store.

Distribution and revenue opportunities with Security Store

Sentinel platform solutions can be packaged and offered through the new Microsoft Security Store, which gives partners new ways to grow revenue and reach customers. Learn more about the ways Microsoft Security Store can help developers reach customers and grow revenue by visiting securitystore.microsoft.com.

Getting started

Developers can get started today building powerful applications that bring together Sentinel data, Jupyter notebook jobs, and Security Copilot:

- Become a partner to publish solutions to Microsoft Security Store
- Onboard to Sentinel data lake
- Download the Sentinel Visual Studio Code extension
- Learn about Security Copilot news
- Learn about Microsoft Security Store

Announcing Microsoft Sentinel Model Context Protocol (MCP) server – Public Preview
Security is being reengineered for the AI era—moving beyond static, rule-bound controls and after-the-fact response toward platform-led, machine-speed defense. The challenge is clear: fragmented tools, sprawling signals, and legacy architectures that can’t match the velocity and scale of modern attacks. What’s needed is an AI-ready, data-first foundation—one that turns telemetry into a security graph, standardizes access for agents, and coordinates autonomous actions while keeping humans in command of strategy and high-impact investigations.

Security teams already center operations on their SIEM for end-to-end visibility, and we’re advancing that foundation by evolving Microsoft Sentinel into both the SIEM and the platform for agentic defense—connecting analytics and context across ecosystems. And now, we’re introducing new platform capabilities that build on Sentinel data lake: Sentinel graph for deeper insight and context; an MCP server and tools to make data agent-ready; new developer capabilities; and Security Store for effortless discovery and deployment—so protection accelerates to machine speed while analysts do their best work.

Introducing Sentinel MCP server

We’re excited to announce the public preview of Microsoft Sentinel MCP (Model Context Protocol) server, a fully managed cloud service built on an open standard that lets AI agents seamlessly access the rich security context in your Sentinel data lake. Recent advances in large language models have enabled AI agents to perform reasoning—breaking down complex tasks, inferring patterns, and planning multistep actions—making them capable of autonomously performing business processes. To unlock this potential in cybersecurity, agents must operate with your organization’s real security context, not just public training data.
Sentinel MCP server solves that by providing standardized, secure access to that context—across graph relationships, tabular telemetry, and vector embeddings—via reusable, natural language tools, enabling security teams to unlock the full potential of AI-driven automation and focus on what matters most.

Why Model Context Protocol (MCP)?

Model Context Protocol (MCP) is a rapidly growing open standard that allows AI models to securely communicate with external applications, services, and data sources through a well-structured interface. Think of MCP as a bridge that lets AI agents understand and invoke an application’s capabilities. These capabilities are exposed as discrete “tools” with natural language inputs and outputs. The AI agent can autonomously choose the right tool (or combination of tools) for the task it needs to accomplish.

In simpler terms, MCP standardizes how an AI talks to systems. Instead of developers writing custom connectors for each application, the MCP server presents a menu of available actions to the AI in a language it understands. This means an AI agent can discover what it can do (search data, run queries, trigger actions, etc.) and then execute those actions safely and intelligently. By adopting an open standard like MCP, Microsoft is ensuring that our AI integrations are interoperable and future-proof. Any AI application that speaks MCP can connect: Security Copilot offers built-in integration, while other MCP-compatible platforms and services can quickly connect by simply adding a new MCP server and entering Sentinel’s MCP server URL.

How to Get Started

Sentinel MCP server is a fully managed service now available to all Sentinel data lake customers. If you are already onboarded to Sentinel data lake, you are ready to begin using MCP. Not using Sentinel data lake yet? Learn more here. Currently, you can connect to the Sentinel MCP server using Visual Studio Code (VS Code) with the GitHub Copilot add-on.
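The "menu of tools" idea can be sketched in a few lines of plain Python. This is a conceptual illustration of tool discovery and dispatch only; the real protocol uses JSON-RPC messages and typed schemas, and the tool name and result below are invented for the example:

```python
# Conceptual sketch of MCP-style tool discovery and dispatch (not the Sentinel
# MCP SDK, and not the MCP wire protocol).
from typing import Callable, Dict

TOOLS: Dict[str, dict] = {}

def tool(name: str, description: str):
    """Register a function as a discoverable 'tool' with a description."""
    def register(fn: Callable) -> Callable:
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("search_signin_failures", "Find failed sign-ins for a given user.")
def search_signin_failures(user: str) -> str:
    return f"2 failed sign-ins found for {user}"  # stand-in for a real query

# An agent first discovers the menu of tools, then picks one and invokes it.
menu = {name: meta["description"] for name, meta in TOOLS.items()}
result = TOOLS["search_signin_failures"]["fn"]("alice@contoso.com")
print(menu)
print(result)
```

The key property is that the agent never needs a custom connector: it reads the descriptions, chooses a tool, and calls it with natural language-derived arguments.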
Here’s a step-by-step guide:

1. Open VS Code and authenticate with an account that has at least the Security Reader role (required to query the data lake via the Sentinel MCP server).
2. Open the Command Palette in VS Code (Ctrl + Shift + P).
3. Type or select “MCP: Add Server…”.
4. Choose “HTTP” (HTTP or Server-Sent Events).
5. Enter the Sentinel MCP server URL: https://sentinel.microsoft.com/mcp/data-exploration
6. When prompted, allow authentication with the Sentinel MCP server by clicking “Allow”.

Once connected, GitHub Copilot will be linked to Sentinel MCP server. Open the agent pane, set it to Agent mode (Ctrl + Shift + I), and you are ready to go. GitHub Copilot will autonomously identify and utilize Sentinel MCP tools as necessary. You can now experience how AI agents powered by the Sentinel MCP server access and utilize your security context using natural language, without needing to know KQL, which tables to query, or how to wrangle complex schemas. Try prompts like:

- “Find the top 3 users that are at risk and explain why they are at risk.”
- “Find sign-in failures in the last 24 hours and give me a brief summary of key findings.”
- “Identify devices that showed an outstanding amount of outgoing network connections.”

To learn more about the existing capabilities of Sentinel MCP tools, refer to our documentation. Security Copilot will also feature native integration with Sentinel MCP server; this seamless connectivity will enhance autonomous security agents and the open prompting experience. Check out the Security Copilot blog for additional details.

What’s coming next?

The public preview of Sentinel MCP server marks the beginning of a new era in cybersecurity—one where AI agents operate with full context, precision, and autonomy. In the coming months, the MCP toolset will expand to support natural language access across tabular, graph, and embedding-based data, enabling agents to reason over both structured and unstructured signals.
This evolution will dramatically boost the performance of agents, allowing them to handle more complex tasks, surface deeper insights, and accelerate protection to machine speed. As we continue to build out this ecosystem, your feedback will be essential in shaping its future. Be sure to check out Microsoft Secure for a deeper dive and live demo of Sentinel MCP server in action.

Cyber Dial Agent: Protecting Your Custom Copilot Agents
Introducing the Cyber Dial Agent: a browser add-on and agent that streamlines security investigations by providing analysts with a unified, menu-driven interface to quickly access relevant pages in Microsoft Defender, Purview, and Defender for Cloud. This tool eliminates the need for manual searches across multiple portals, reducing investigation time and minimizing context switching for both technical and non-technical users.

Visit the full article “Safeguard & Protect Your Custom Copilot Agents (Cyber Dial Agent)” in the Microsoft Purview Community Blog. You’ll find detailed, visual, step-by-step guides on all of the following:

- Importing the agent built as a Microsoft Copilot Studio solution into another tenant and publishing it afterward
- Adding the browser add-on solution in Microsoft Edge (or any modern browser)
- Using Purview DSPM for AI to secure Copilot Studio agents (the Cyber Dial custom agent)

Read the full article by Hesham_Saad.

No More Guesswork—Copilot Makes Azure Security Crystal Clear
Elevating Azure Security and Compliance

In today’s rapidly evolving digital landscape, security and compliance are more critical than ever. As organizations migrate workloads to Azure, the need for robust security frameworks and proactive compliance strategies grows. Security Copilot, integrated with Azure, is transforming how technical teams approach these challenges, empowering users to build secure, compliant environments with greater efficiency and confidence.

As a security expert, I’d like to provide clear guidance on how to effectively utilize Security Copilot in the ever-evolving landscape of security and compliance. Security Copilot is a premium offering: it includes advanced capabilities that go beyond standard Azure security tools, and these features may require specific licensing or subscription tiers. It provides deeper insights, enhanced automation, and tailored guidance for complex security scenarios. Below, I’ll highlight a range of security topics with sample Copilot prompts that you can use to help create a more secure and compliant environment.

Getting Started with Microsoft Security Copilot

Before leveraging the advanced capabilities of Security Copilot, it's important to understand the foundational requirements and setup steps.

Azure Subscription Requirement

Security Copilot is not automatically available in all Azure subscriptions. To use it, your organization must have an active Azure subscription. This is necessary to provision Security Compute Units (SCUs), the core resources that power Copilot workloads.

Provisioning Security Compute Units (SCUs)

SCUs are billed hourly and can be scaled based on workload needs. At least one SCU must be provisioned to activate Security Copilot. You can manage SCUs via the Azure portal or the Security Copilot portal, adjusting capacity as needed for performance and cost optimization.
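Because SCUs are billed hourly, rough capacity planning is simple arithmetic. The sketch below uses a placeholder hourly rate (an assumption for illustration, not a published price); check current Azure pricing for real figures:

```python
# Back-of-envelope SCU cost estimate. The hourly rate is a placeholder
# assumption, not a published Azure price.
HOURLY_RATE_USD = 4.0   # hypothetical cost per SCU per hour
HOURS_PER_MONTH = 730   # average hours in a month

def monthly_cost(scu_count: int, rate: float = HOURLY_RATE_USD) -> float:
    """Estimated monthly cost of keeping scu_count SCUs provisioned 24/7."""
    return scu_count * rate * HOURS_PER_MONTH

# e.g., keeping 2 SCUs provisioned around the clock at the assumed rate
print(f"2 SCUs: ${monthly_cost(2):,.2f}/month at the assumed rate")
```

Scaling SCUs down during off-hours reduces the multiplier accordingly, which is why the portal lets you adjust capacity over time.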
Role-Based Access Control

To set up and manage Security Copilot, you need to be an Azure Owner or Contributor to provision SCUs, and users must be assigned appropriate Microsoft Entra roles (e.g., Security Administrator) to access and interact with Copilot features.

Embedded Experience

Security Copilot can be used as a standalone tool or embedded within other Microsoft services like Defender for Endpoint, Intune, and Purview, offering a unified security management experience.

Data Privacy and Security: Foundational Best Practices

Why settle for generic security advice when Security Copilot delivers prioritized, actionable guidance backed by Microsoft’s best practices? Copilot doesn’t just recommend security measures; it actively helps you implement them, leveraging advanced features like encryption and granular access controls to safeguard every layer of your Azure environment. While Security Copilot doesn’t directly block threats like a firewall or Web Application Firewall (WAF), it enhances data integrity and confidentiality by analyzing security signals across Azure, identifying vulnerabilities, and guiding teams with prioritized, actionable recommendations. It helps implement encryption, access controls, and compliance-aligned configurations, while integrating with existing security tools to interpret logs and suggest containment strategies. By automating investigations and supporting secure-by-design practices, Copilot empowers organizations to proactively reduce breach risks and maintain a strong security posture.

Secure Coding and Developer Productivity

While Security Copilot supports secure coding by identifying vulnerabilities like SQL injection, Cross-Site Scripting (XSS), and buffer overflows, it is not a direct replacement for traditional code scanning tools; instead, it complements these tools by leveraging telemetry from integrated Microsoft services and applying AI-driven insights to prioritize risks and guide remediation.
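To make the SQL injection example concrete, this short snippet (using Python's built-in sqlite3) contrasts unsafe string interpolation with a parameterized query. It illustrates the vulnerability class itself, not Security Copilot's detection output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"  # a classic injection payload

# UNSAFE: the payload becomes part of the SQL text, so the WHERE clause
# is always true and every row is returned.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'").fetchall()

# SAFE: the driver binds the value, so the payload is treated as a plain
# string and matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print("unsafe:", unsafe)  # both rows leak
print("safe:  ", safe)    # no rows
```

The parameterized form is the standard remediation a scanner (or Copilot) would point you toward for this class of finding.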
Copilot enhances developer productivity by interpreting signals, offering tailored recommendations, and embedding security practices throughout the software lifecycle.

Understanding Security Protocols and Mechanisms

Azure’s security stands on robust protocols and mechanisms, but understanding them shouldn’t require a cryptography degree. Security Copilot demystifies encryption, authentication, and secure communications—making complex concepts accessible and actionable. With Security Copilot as your guide, teams can confidently configure Azure resources and respond to threats with informed, best-practice decisions.

Compliance and Regulatory Alignment

Regulatory requirements such as GDPR, HIPAA, and PCI-DSS don’t have to slow you down. Security Copilot streamlines Azure compliance with ready-to-use templates, clear guidelines, and robust documentation support. From maintaining audit logs to generating compliance reports, Security Copilot keeps every action tracked and organized—reducing non-compliance risk and making audits a breeze.

Incident Response Planning

No security strategy is complete without a solid incident response plan. Security Copilot equips Azure teams with detailed protocols for identifying, containing, and mitigating threats. It enhances Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) solutions through ready-made playbooks tailored to diverse scenarios. With built-in incident simulations, Copilot enables teams to rehearse and refine their responses—minimizing breach impact and accelerating recovery.

Security Best Practices for Azure

Staying ahead of threats means never standing still. Security Copilot builds on Azure’s proven security features—like multi-factor authentication, regular updates, and least-privilege access—by automating their implementation, monitoring usage patterns, and surfacing actionable insights.
It connects with tools like Microsoft Defender and Entra ID to interpret signals, recommend improvements, and guide teams in real time. With Copilot, your defenses don’t just follow best practices; they evolve dynamically to meet emerging threats, keeping your team sharp and your environment secure.

Integrating Copilot into Your Azure Security Strategy

Security Copilot isn’t just a technical tool—it’s your strategic partner for Azure security. By weaving Copilot into your workflows, you unlock advanced security enhancements, optimized code, and robust privacy protection. Its holistic approach ensures security and compliance are seamlessly integrated into every corner of your Azure environment.

Conclusion

Security Copilot is changing the game for Azure security and compliance. By blending secure coding, advanced security expertise, regulatory support, incident response playbooks, and best practices, Copilot empowers technical teams to build resilient, compliant cloud environments. As threats evolve, Copilot keeps your data protected and your organization ahead of the curve. Ready to take your Azure security and compliance to the next level? Start leveraging Security Copilot today to empower your team, streamline operations, and stay ahead of evolving threats. Dive deeper into best practices, hands-on tutorials, and expert guidance to maximize your security posture and unlock the full potential of Copilot in your organization. Explore, learn, and secure your cloud—your journey starts now!

Further Reading & Resources

- Microsoft Security Copilot documentation
- Get started with Microsoft Security Copilot
- Microsoft Copilot in Azure Overview
- Security best practices and patterns - Microsoft Azure
- Azure compliance documentation
- Copilot Learning Hub
- Microsoft Security Copilot Blog

Author: Microsoft Principal Technical Trainer, https://www.linkedin.com/in/eliasestevao/ #MicrosoftLearn #SkilledByMTT

Graph RAG for Security: Insights from a Microsoft Intern
As a software engineering intern at Microsoft Security, I had the exciting opportunity to explore how Graph Retrieval-Augmented Generation (Graph RAG) can enhance data security investigations. This blog post shares my learning journey and insights from working with this evolving technology.

Secure and govern AI apps and agents with Microsoft Purview
The Microsoft Purview family is here to help you secure and govern data across third-party IaaS and SaaS, multi-platform data environments, while helping you meet the compliance requirements you may be subject to. Purview brings simplicity with a comprehensive set of solutions built on a platform of shared capabilities that helps keep your most important asset, data, safe.

With the introduction of AI technology, Purview has also expanded its data coverage to include discovering, protecting, and governing the interactions of AI apps and agents, such as Microsoft Copilots (like Microsoft 365 Copilot and Security Copilot), enterprise-built AI apps (like ChatGPT Enterprise), and other consumer AI apps (like DeepSeek) accessed through the browser. To help you view and investigate interactions with all those AI apps, and to create and manage policies to secure and govern them in one centralized place, we have launched Purview Data Security Posture Management (DSPM) for AI. You can learn more about DSPM for AI here, with short video walkthroughs: Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn

Purview capabilities for AI apps and agents

To understand our current set of capabilities within Purview to discover, protect, and govern various AI apps and agents, please refer to our Learn doc here: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn. Here is a quick reference guide for the capabilities available today. Note that DLP for Copilot and sensitivity label adherence are currently designed to protect content in Microsoft 365. Thus, Security Copilot and Copilot in Fabric, along with Copilot Studio custom agents that do not use Microsoft 365 as a content source, do not have these features available.
Please see the list of AI sites supported by Microsoft Purview DSPM for AI here.

Conclusion

Microsoft Purview can help you discover, protect, and govern the prompts and responses from AI applications in Microsoft Copilot experiences, enterprise AI apps, and other AI apps through its data security and data compliance solutions, while allowing you to view, investigate, and manage interactions in one centralized place in DSPM for AI.

Follow up reading

Check out the deployment guides for DSPM for AI:
- How to deploy DSPM for AI - https://aka.ms/DSPMforAI/deploy
- How to use DSPM for AI data risk assessment to address oversharing - https://aka.ms/dspmforai/oversharing
- Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing

Explore the Purview SDK:
- Microsoft Purview SDK Public Preview | Microsoft Community Hub (blog)
- Microsoft Purview documentation - purview-sdk | Microsoft Learn
- Build secure and compliant AI applications with Microsoft Purview (video)

References for DSPM for AI:
- Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
- Block Users From Sharing Sensitive Information to Unmanaged AI Apps Via Edge on Managed Devices (preview) | Microsoft Learn, as part of Scenario 7 of Create and deploy a data loss prevention policy | Microsoft Learn
- Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
- Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
- Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
- Downloadable whitepaper: Data Security for AI Adoption | Microsoft

Explore the roadmap for DSPM for AI:
- Public roadmap for
DSPM for AI - Microsoft 365 Roadmap | Microsoft 365

Microsoft Purview Powering Data Security and Compliance for Security Copilot
Microsoft Purview provides Security and Compliance teams with extensive visibility into admin actions within Security Copilot. It offers tools for enriched user and data insights to identify, review, and manage Security Copilot interaction data in DSPM for AI. Data security and compliance administrators can also utilize Purview’s capabilities for data lifecycle management, information protection, advanced retention, eDiscovery, and more. These features support detailed investigations into logs to demonstrate compliance within the Copilot tenant.

Prerequisites

Please refer to the prerequisites for Security Copilot and DSPM for AI in the Microsoft Learn docs.

Key Capabilities and Features

Heightened Context and Clarity

As organizations adopt AI, implementing data controls and a Zero Trust approach is essential to mitigate risks like data oversharing, leakage, and non-compliant usage. Microsoft Purview, combined with Data Security Posture Management (DSPM) for AI, empowers security and compliance teams to manage these risks across Security Copilot interactions. With this integration, organizations can:

- Discover data risks by identifying sensitive information in user prompts and responses. Microsoft Purview surfaces these insights in the DSPM for AI dashboard and recommends actions to reduce exposure.
- Identify risky AI usage using Microsoft Purview Insider Risk Management to investigate behaviors such as inadvertent sharing of sensitive data, or to detect suspicious activity within Security Copilot usage.

These capabilities provide heightened visibility into how AI is used across the organization, helping teams proactively address potential risks before they escalate.

Compliance and Governance

Building on this visibility, organizations can take action using Microsoft Purview’s integrated compliance and governance solutions.
Here are some examples of how teams are leveraging these capabilities to govern Security Copilot interactions:

- Audit provides a detailed log of user and admin activity within Security Copilot, enabling organizations to track access, monitor usage patterns, and support forensic investigations.
- eDiscovery enables legal and investigative teams to identify, collect, and review Security Copilot interactions as part of case workflows, supporting defensible investigations.
- Communication Compliance helps detect potential policy violations or risky behavior in administrator interactions, enabling proactive monitoring and remediation.
- Data Lifecycle Management allows teams to automate the retention, deletion, and classification of Security Copilot data, reducing storage costs and minimizing risk from outdated or unnecessary information.

Together, these tools provide a comprehensive governance framework that supports secure, compliant, and responsible AI adoption across the enterprise.

Getting Started

Enable Purview Audit for Security Copilot

Sign into your Copilot tenant at https://securitycopilot.microsoft.com/ and, with Security Administrator permissions, navigate to the Security Copilot owner settings and ensure Audit logging is enabled.

Microsoft Purview

To start using DSPM for AI and the Microsoft Purview capabilities, complete the following steps to get set up, and then feel free to experiment yourself.

1. Navigate to Purview (purview.microsoft.com) and ensure you have adequate permissions to access the different Purview solutions as described here.

DSPM for AI

2. Select the DSPM for AI “Solution” option in the left-most navigation.
3. Go to the policies or recommendations tab and turn on the following:
   a. “DSPM for AI – Capture interactions for Copilot Experiences”: Captures prompts and responses for data security posture and regulatory compliance from Security Copilot and other Copilot experiences.
   b. “Detect Risky AI Usage”: Helps to calculate user risk by detecting risky prompts and responses in Copilot experiences.
   c. “Detect unethical behavior in AI apps”: Detects sensitive info and inappropriate use of AI in prompts and responses in Copilot experiences.
4. To begin reviewing Security Copilot usage within your organization and identifying interactions that contain sensitive information, select Reports from the left navigation panel.
   a. The "Sensitive interactions per AI app" report shows the most common sensitive information types used in Security Copilot interactions and their frequency. For instance, this tenant has a significant amount of IT and IP Address information within these interactions. It is therefore important to ensure that all sensitive information used in Security Copilot interactions serves legitimate workplace purposes and does not involve any malicious or non-compliant use of Security Copilot.
   b. “Top unethical AI interactions” shows an overview of any potentially unsafe or inappropriate interactions with AI apps. In this case, Security Copilot has only seven potentially unsafe interactions, which included unauthorized disclosure and regulatory collusion.
   c. “Insider risk severity per AI app” shows the number of high-risk, medium-risk, low-risk, and no-risk users interacting with Security Copilot. In this tenant, there are about 1.9K Security Copilot users, but very few of them have an insider risk concern.
   d. To check the interaction details of this potentially risky activity, head over to Activity Explorer for more information.
5. In Activity Explorer, filter the App to Security Copilot. You will also have the option to filter based on user risk level and sensitive information type. To identify the highest-risk behaviors, filter for users with a medium to high risk level or those associated with the most sensitive information types.
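The triage logic in step 5 can be sketched in a few lines of code. Note that the record shape below (app, user_risk, sensitive_info_types) is a simplified assumption for illustration only, not the actual Activity Explorer schema:

```python
# Illustrative sketch of the Activity Explorer triage in step 5: keep
# Security Copilot interactions from medium/high-risk users, or those
# touching watched sensitive information types. The record fields here
# are simplified assumptions, not the real Activity Explorer schema.

RISK_ORDER = {"none": 0, "low": 1, "medium": 2, "high": 3}

def high_priority_interactions(activities, min_risk="medium", watch_types=("IP Address",)):
    """Return Security Copilot interactions worth a closer look."""
    flagged = []
    for act in activities:
        if act.get("app") != "Security Copilot":
            continue  # scope to Security Copilot, as with the App filter
        risky_user = RISK_ORDER.get(act.get("user_risk", "none"), 0) >= RISK_ORDER[min_risk]
        sensitive = any(t in watch_types for t in act.get("sensitive_info_types", []))
        if risky_user or sensitive:
            flagged.append(act)
    return flagged

sample = [
    {"app": "Security Copilot", "user": "alice@contoso.com", "user_risk": "high",
     "sensitive_info_types": []},
    {"app": "Security Copilot", "user": "bob@contoso.com", "user_risk": "low",
     "sensitive_info_types": ["IP Address"]},
    {"app": "Word", "user": "carol@contoso.com", "user_risk": "high",
     "sensitive_info_types": ["Credit Card Number"]},
]
print([a["user"] for a in high_priority_interactions(sample)])
# → ['alice@contoso.com', 'bob@contoso.com']
```

The same idea applies whether you triage in the portal or post-process an export: scope to the app first, then widen or narrow by risk level and sensitive information types.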
Once you have filtered, you can start looking through the activity details for more information, such as the user details, the sensitive information types, the prompt and response data, and more. Based on the details shown, you may decide to investigate the activity and the user further. To do so, use the data security investigation and governance tools described below.

Data Security Investigations and Governance

If you find Security Copilot actions in DSPM for AI Activity Explorer to be potentially inappropriate or malicious, you can look for further information in Insider Risk Management (IRM), through an eDiscovery case, in Communication Compliance (CC), or in Data Lifecycle Management (DLM).

Insider Risk Management

By enabling the quick policy in DSPM for AI to monitor risky Copilot usage, alerts will start appearing in IRM. Customize this policy based on your organization's risk tolerance by adjusting triggering events, thresholds, and indicators for detected activity. Examine the alerts associated with the "DSPM for AI – Detect risky AI usage" policy, potentially sorting them by severity from high to low. For these alerts, you will find a User Activity scatter plot that provides insights into the activities preceding and following the user's engagement with a risky prompt in Security Copilot. This assists the Data Security administrator in understanding the necessary triage actions for this user and alert. After thoroughly investigating these details and determining whether the activity was malicious or an inadvertent insider risk, appropriate actions can be taken, including issuing a user warning, resolving the case, sharing the case with an email recipient, or escalating the case to eDiscovery for further investigation.

eDiscovery

To identify, review, and manage your Security Copilot logs to support your investigations, use the eDiscovery tool. Here are the steps to take in eDiscovery:
   a. Create an eDiscovery case.
   b. Create a new search.
   c. In Search, go to the condition builder and select Add conditions -> KeyQL.
   d. Enter the query as: KQL Equal (ItemClass=IPM.SkypeTeams.Message.Copilot.Security.SecurityCopilot)
   e. Run the query.
   f. Once completed, add the search to a review set (button at the top).
   g. In the review set, view details of the Security Copilot conversation.

Communication Compliance

In Communication Compliance, as in IRM, you can investigate details around Security Copilot interactions. Specifically, in CC, you can determine whether these interactions contained non-compliant usage of Security Copilot or inappropriate text. After identifying the sentiment of the Security Copilot communication, you can take action by resolving the alert, sending a warning notice to the user, escalating the alert to a reviewer, or escalating the alert for investigation, which will create a new eDiscovery case.

Data Lifecycle Management

For regulatory compliance or investigation purposes, navigate to Data Lifecycle Management to create a new retention policy for Security Copilot activities:
   a. Provide a friendly name for the retention policy and select Next.
   b. Skip the Policy Scope section for this validation.
   c. Select the "Static" type of retention policy and select Next.
   d. Choose "Microsoft Copilot Experiences" to apply the retention policy to Security Copilot interactions.

Billing Model

Microsoft Purview audit logging of Security Copilot activity remains included at no additional cost as part of Microsoft 365 E5 licensing. However, Microsoft Purview now offers a combination of entitlement-based (per-user-per-month) and pay-as-you-go (PAYG) pricing models. The PAYG model applies to a broader set of Purview capabilities, including Insider Risk Management, Communication Compliance, eDiscovery, and other data security and governance solutions, based on usage volume or complexity. This flexible pricing structure ensures that organizations only pay for what they use as data flows through AI models, networks, and applications.
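As an aside to the eDiscovery steps earlier: if you script case setup, the KeyQL condition can be generated from the Security Copilot item class. A minimal sketch, in which the helper function is hypothetical and only the ItemClass value comes from the documented steps:

```python
# Hypothetical helper that builds the eDiscovery KeyQL condition used in the
# search step above. Only the ItemClass value is taken from the documented
# steps; the function itself is illustrative.

SECURITY_COPILOT_ITEM_CLASS = "IPM.SkypeTeams.Message.Copilot.Security.SecurityCopilot"

def keyql_item_class_condition(item_class: str = SECURITY_COPILOT_ITEM_CLASS) -> str:
    """Return the KeyQL equality condition that scopes a search to Security Copilot logs."""
    return f"ItemClass={item_class}"

print(keyql_item_class_condition())
# ItemClass=IPM.SkypeTeams.Message.Copilot.Security.SecurityCopilot
```

Keeping the item class in one constant avoids typos when the same condition is reused across cases.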
For further details on these pricing options, please refer to this Microsoft Security Community Blog: New Purview pricing options for protecting AI apps and agents | Microsoft Community Hub

Looking Ahead

By following these steps, organizations can leverage the full potential of Microsoft Purview to enhance the security and compliance of their Security Copilot interactions. This integration not only provides peace of mind but also empowers organizations to manage their data more effectively. Please reach out to us if you have any questions or additional requirements.

Additional Resources

- Use Microsoft Purview to manage data security & compliance for Microsoft Security Copilot | Microsoft Learn
- How to deploy Microsoft Purview DSPM for AI to secure your AI apps
- Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview Data Security Posture Management (DSPM) for AI | Microsoft Learn
- Learn about Microsoft Purview billing models | Microsoft Learn

Modern, unified data security in the AI era: New capabilities in Microsoft Purview
AI is transforming how organizations work, but it is also changing how data moves, who can access it, and how easily it can be exposed. Sensitive data now appears in AI prompts, Copilot responses, and across a growing ecosystem of SaaS and GenAI tools. To keep up, organizations need data security that is built for how people work with AI today. Microsoft Purview brings together native classification, visibility, protection, and automated workflows across your data estate, all in one integrated platform. Today, we're highlighting some of our new capabilities that help you:

- Uncover data blind spots: Discover hidden risks, improve data security posture, and find sensitive data on endpoints with on-demand classification
- Strengthen protection across data flows: Enhance oversharing controls for Microsoft 365 Copilot, expand protection to more Azure data sources, and extend data security to the network layer
- Respond faster with automation: Automate investigation workflows with Alert agents in Data Loss Prevention (DLP) and Insider Risk Management (IRM)

Discover hidden risks and improve data security posture

Many security teams struggle with fragmented tools that silo sensitive data visibility across apps and clouds. According to recent studies, 21% of decision-makers cite the lack of unified visibility as a top barrier to effective data security. This leads to gaps in protection and inefficient incident response, ultimately weakening the organization's overall data security posture. To help organizations address these challenges, last November at Ignite we launched Microsoft Purview Data Security Posture Management (DSPM), and we're excited to share that this capability is now available. DSPM continuously assesses your data estate, surfaces contextual insights into sensitive data and its usage, and recommends targeted controls to reduce risk and strengthen your data security program.
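The prioritization idea behind posture recommendations can be illustrated with a toy model: rank data assets by combining label sensitivity and exposure so the riskiest surface first. Everything below (fields, weights, sample data) is invented for illustration; it is not DSPM's actual scoring model.

```python
# Toy illustration of posture prioritization: rank assets by sensitivity and
# exposure so the riskiest get attention first. All fields and weights here
# are invented for illustration; this is not DSPM's actual scoring model.

SENSITIVITY = {"public": 0, "general": 1, "confidential": 2, "highly confidential": 3}

def rank_assets(assets):
    """Sort assets by descending risk score (label sensitivity x exposed users)."""
    def score(asset):
        return SENSITIVITY.get(asset["label"], 0) * asset.get("exposed_users", 0)
    return sorted(assets, key=score, reverse=True)

assets = [
    {"name": "finance-share", "label": "highly confidential", "exposed_users": 40},
    {"name": "wiki", "label": "general", "exposed_users": 50},
    {"name": "hr-folder", "label": "confidential", "exposed_users": 10},
]
print([a["name"] for a in rank_assets(assets)])
# → ['finance-share', 'wiki', 'hr-folder']
```

The point of the sketch is the ordering principle, not the weights: a highly sensitive asset with broad exposure outranks a mildly sensitive one, even when the latter is touched by more users.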
We're also bringing new signals from email exfiltration and from user activity in the browser and network into DSPM's insights and policy recommendations, making sure organizations can improve their protections and address potential data security gaps. You can also experience deeper investigations in DSPM with 3x more suggested prompts, outcome-based promptbooks, and a new guidance experience that helps interpret unsupported user queries and offers helpful alternatives, increasing usability without hard stops.

New Security Copilot task-based promptbooks in Purview DSPM

Learn more about how DSPM can help your organization strengthen your data security posture.

Find sensitive data on endpoints with on-demand classification

Security teams often struggle to uncover sensitive data sitting for a long time on endpoints, one of the most overlooked and unmanaged surfaces in the data estate. Typically, data gets classified when a file is created, modified, or accessed. As a result, older data at rest that hasn't been touched in a while can remain outside the scope of classification workflows. This lack of visibility increases the risk of exposure, especially for sensitive data that is not actively used or monitored. To tackle this challenge, we are introducing on-demand classification for endpoints. Coming to public preview in July, on-demand classification for endpoints gives security teams a targeted way to scan data at rest on Windows devices, without relying on file activity, to uncover sensitive files that have never been classified or reviewed.
This means you can:

- Discover sensitive data on endpoints, including older, unclassified data that may never have been scanned, giving admins visibility into files that typically fall outside traditional classification workflows
- Support audit and compliance efforts by identifying sensitive data
- Focus scans on specific users, file types, or timelines to get the visibility that really matters
- Get the insights needed to prioritize remediation or protection strategies

Security teams can define where or what to focus on by selecting specific users, file types, or last-modified dates. This allows teams to prioritize scans for high-priority scenarios, like users handling sensitive data. Because on-demand classification scans are manually triggered and scoped, organizations can get targeted visibility into sensitive data on endpoints with minimal performance impact and without the need for complex setup.

Complements just-in-time protection

On-demand classification for endpoints also works hand in hand with existing endpoint DLP capabilities like just-in-time (JIT) protection:

- JIT protection kicks in during file access, blocking or alerting based on real-time content evaluation
- On-demand classification works ahead of time, identifying sensitive data that hasn't been modified or accessed in an extended period

Used together, they form a layered endpoint protection strategy, ensuring full visibility and protection.

Choosing the right tool

On-demand classification for endpoints is purpose-built for discovering sensitive data at rest on endpoints, especially files that haven't been accessed or modified for a long time. It gives admins targeted visibility, with no user action required. If you're looking to apply labels, enforce protection policies, or scan files stored on on-premises servers, the Microsoft Purview Information Protection Scanner may be a better fit.
It is designed for ongoing policy enforcement and label application across your hybrid environment. Learn more here.

Get started with on-demand classification

On-demand classification is easy to set up, with no agents to install or complex rules to configure. It runs only when you choose, rather than continuously in the background. You stay in control of when and where scans happen, making it a simple and efficient way to extend visibility to endpoints. On-demand classification for endpoints enters public preview in July. Stay tuned for setup guidance and more details as we get closer to launch.

Streamlining technical issue resolution with always-on diagnostics for endpoint devices

Historically, resolving technical support tickets for Purview DLP required admins to manually collect logs and have end users reproduce the original issue at the time of the request. This could lead to delays, extended resolution times, and repeated communication cycles, especially for non-reproducible issues. Today, we're introducing a new way to capture and share endpoint diagnostics: always-on diagnostics, available in public preview. When submitting support requests for Purview endpoint DLP, customers can now share rich diagnostic data with Microsoft without needing to recreate the issue at the time of submitting an investigation request, such as a support ticket. This capability can be enabled through your endpoint DLP settings. Learn more about always-on diagnostics here.

Strengthening DLP for Microsoft 365 Copilot

As organizations adopt Microsoft 365 Copilot, DLP plays a critical role in minimizing the risk of sensitive data exposure through AI. New enhancements give security teams greater control, visibility, and flexibility when protecting sensitive content in Copilot scenarios.

Expanded protection to labeled emails

DLP for Microsoft 365 Copilot now supports labeled email, available today, in addition to files in SharePoint and OneDrive.
This helps prevent sensitive emails from being processed by Copilot and used as grounding data. This capability applies to emails sent after 1/1/2025.

Alerts and investigations for Copilot access attempts

Security teams can now configure DLP alerts for Microsoft 365 Copilot activity, surfacing attempts to access emails or files with sensitivity labels that match DLP policies. Alert reports include key details like user identity, policy match, and file name, enabling admins to quickly assess what happened, determine if further investigation is needed, and take appropriate follow-up actions. Admins can also choose to notify users directly, reinforcing responsible data use. The rollout will start on June 30 and is expected to be completed by the end of July.

Simulation mode for Copilot DLP policies

As part of the rollout starting on June 30, simulation mode lets admins test Copilot-specific DLP policies before enforcement. By previewing matches without impacting users, security teams can fine-tune rules, reduce false positives, and deploy policies with greater confidence. Learn more about DLP for Microsoft 365 Copilot here.

Extended protection to more Azure data sources

AI development is only as secure as the data that feeds it. That's why Microsoft Purview Information Protection is expanding its auto-labeling capabilities to cover more Azure data sources. Now in public preview, security teams can automatically apply sensitivity labels to additional Azure data sources, including Azure Cosmos DB, PostgreSQL, KustoDB, MySQL, Azure Files, Azure Databricks, Azure SQL Managed Instances, and Azure Synapse. These additions build on existing coverage for Azure Blob Storage, Azure Data Lake Storage, and Azure SQL Database. These sources commonly fuel analytics pipelines and AI training workloads. With auto-labeling extended to more high-value data sources, sensitivity labels are applied to the data before it's copied, shared, or integrated into downstream systems.
These labels help enforce protection policies and limit unauthorized access, ensuring sensitive data is handled appropriately across apps and AI workflows. Secure your AI training data: learn how to set up auto-labeling here.

Extending data security to the network layer

With more sensitive data moving through unmanaged SaaS apps and personal AI tools, your network is now a critical security surface. Earlier this year, we announced the introduction of Purview data security controls for the network layer. With inline data discovery for the network, organizations can detect sensitive data moving outside the trusted boundaries of the organization, such as to unmanaged SaaS apps and cloud services. This helps admins understand how sensitive data can be intentionally or inadvertently exfiltrated to personal instances of apps, unsanctioned GenAI apps, cloud storage boxes, and more. This capability is now available in public preview; learn more here.

Visibility into sensitive data sent through the network also includes insights into how users may be sharing data in risky ways. User activities such as file uploads or AI prompt submissions are captured in Insider Risk Management to build richer, more comprehensive profiles of user risk. In turn, these signals will better contextualize future data interactions and enrich policy verdicts. These user risk indicators will become available in the coming weeks.

Automate investigation workflows with Alert Triage Agents in DLP and IRM

Security teams today face a high volume of alerts, often spending hours sorting through false positives and low-priority flags to find the threats that matter. To help security teams focus on what's truly high risk, we're excited to share that the Alert Triage Agents in Microsoft Purview Data Loss Prevention (DLP) and Insider Risk Management (IRM) are now available in public preview. These autonomous, Security Copilot-powered agents prioritize the alerts that pose the greatest risk to organizations.
Whether it's identifying high-impact exfiltration attempts in DLP or surfacing potential insider threats in IRM, the agents analyze both content and intent to deliver transparent, explainable findings. Built to learn from user feedback, these agents not only accelerate investigations but also improve over time, empowering teams to prioritize real threats, reduce time spent on false positives, and adapt to evolving risks.

Watch the new Mechanics video, or learn more about how to get started here.

A unified approach to modern data security

Disjointed security tools create gaps and increase operational overhead. Microsoft Purview offers a unified data security platform designed to keep pace with how your organization works with AI today. From endpoint visibility to automated security workflows, Purview unifies data security across your estate, giving you one platform for end-to-end data security. As your data estate grows and AI reshapes the way you work, Purview helps you stay ahead, so you can scale securely, reduce risk, and unlock the full productivity potential of AI with confidence.

Ready to unify your data security into one integrated platform? Try Microsoft Purview free for 90 days.