Beyond Visibility: The new Microsoft Purview Data Security Posture Management (DSPM) experience
In today’s AI-powered enterprises, understanding your data estate—and the risks that come with it—is both more complex and more critical than ever. Meanwhile, many organizations still grapple with a fragmented data security landscape, relying on a patchwork of disconnected tools that obscure visibility and hinder effective data security posture management. As AI adoption accelerates, entirely new data risk vectors are emerging, ranging from oversharing and compliance gaps to operational inefficiencies. According to recent research[1], 40% of data security incidents now occur within AI applications, and 78% of AI users are bringing their own AI tools to work. This challenge is further compounded by the rise of AI agents, creating a scenario that demands a unified, context-aware approach to understanding and securing data within trusted workflows.

This is where data security posture management helps: by providing the visibility and control organizations need across sprawling data estates and evolving risk surfaces. By continuously assessing data security posture, organizations can better identify gaps and remediate risks, avoiding fragmented efforts. However, even with these capabilities, many organizations still struggle to stay focused on the ultimate goal: achieving meaningful security outcomes rather than simply managing tools or processes.

To overcome this, organizations must shift their perspective, seeing data security not as a collection of individual solutions but as a holistic program anchored in desired business and security outcomes. Managing data security posture should become the foundation for building a sustainable and healthy data security program, one that continuously improves, drives measurable resilience, strengthens trust, and systematically reduces risk across the enterprise.
At Microsoft Ignite, we’re excited to share the newly enhanced Microsoft Purview Data Security Posture Management (DSPM) experience—an AI-powered, centralized solution that focuses on the goals your organization needs to accomplish, and helps you strengthen data security to confidently embrace AI apps and agents with actionable insights, new third-party signals, and Security Copilot agents.

Enabling AI and agents confidently with enhanced data security posture

The enhanced DSPM experience is designed to simplify data security posture by stitching together the scenarios and goals customers need to achieve when it comes to their data. We are combining the depth of Purview visibility and controls with the breadth of external signals and agentic activities, complemented by Security Copilot agents, to provide a strong, proactive DSPM experience. See what’s new in Purview DSPM:

▪ Outcome-based guided workflows: To avoid the guesswork of interpreting insights and determining the next best actions, customers can now manage their data security posture by selecting which data security outcome to prioritize and the risks related to each—shifting from reactive visibility to actionable, outcome-driven insights. For each outcome, the experience guides customers through the key metrics and risk patterns present in their organization, as well as a recommended action plan, including the expected impact of taking those actions. For example, if an admin chooses to address the risk of “Preventing sensitive data exfiltration to risky destinations,” DSPM will show how many sensitive files are at risk and how many have been exfiltrated to personal domains or external cloud services in the past 30 days, and provide recommended actions to mitigate these risks. These actions may include creating a new DLP policy and an IRM policy to detect and prevent such exfiltration to personal emails, and admins can see the impact each of these actions will have.
After that, they can continuously assess their data security posture through the outcome metrics.

[Figure 1: List of data security objectives, with metrics and remediation plans per objective]

▪ External data source visibility: Organizations trust Microsoft for collaboration and productivity, but their footprint spans external data platforms too. To provide a more complete and comprehensive view of data risks across the digital estate, we’re excited to announce the advancement of the Purview partner ecosystem, with the inclusion of third-party signals in DSPM through collaboration with our partners Varonis, BigID, Cyera, and OneTrust. This partnership, made possible by integration with Microsoft Sentinel Data Lake, is designed to help organizations see and understand more of their data—wherever it resides. Through DSPM, a customer can easily turn on these external data signals and evaluate data asset information (such as permissions, location, and sensitive information types) in these environments. The initially available sources will be Salesforce (provided by Varonis), Databricks (provided by BigID), Snowflake (provided by Cyera), and Google Cloud Platform (provided by OneTrust), with additional external data coverage coming soon. By integrating these external data sources into Purview, data security teams gain extended visibility into sensitive data across third-party platforms alongside their Microsoft data, which also raises teams’ confidence when adopting AI apps and agents by expanding visibility into the external data those tools reference. This collaboration not only eliminates blind spots and strengthens risk posture, but also simplifies data security operations with a single, streamlined experience. These signals will be offered using pay-as-you-go billing through Microsoft Sentinel consumptive meters. Learn more here.
[Figure 2: Asset explorer with external data from Databricks, Snowflake, Google Cloud Platform, and Salesforce]

▪ New out-of-the-box reports for posture insights: DSPM also extends visibility with new out-of-the-box reports that deliver immediate visibility into the top-of-mind metrics organizations care about, such as protection coverage via sensitivity labels, Data Loss Prevention (DLP) policy triggers, and posture trends over time. With advanced filtering options and deep drilldowns, security teams can quickly identify unprotected sensitive data, track label adoption, monitor policy effectiveness, and surface potential risks earlier. These actionable insights streamline monitoring and support precise policy fine-tuning, enabling data security teams to shift from reactive operations to proactive, data-driven strategic decisions.

▪ Expanded coverage and remediation on Data Risk Assessments: DSPM now extends Data Risk Assessments to item-level analysis with new automated remediation actions, such as bulk disabling of overshared SharePoint links and direct activation of protection policies. Starting from an outcome-based remediation plan or the Data Risk Assessment tab, teams can take targeted actions such as removing or tightening sharing links, notifying owners, and applying or updating sensitivity labels—including new support for bulk manual labeling from search—so fixes occur where the risky items reside, and progress is immediately reflected in posture metrics. Beyond Microsoft 365, Data Risk Assessments have also expanded to Microsoft Fabric, surfacing Fabric assets in a new default assessment along with proactive actions to protect Fabric assets with DLP policies or sensitivity labels. These enhancements address key customer challenges around visibility gaps, fragmented remediation workflows, and governance across hybrid environments.
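To make the outcome-based guided workflow concrete, here is a minimal sketch of the idea: choosing a data security outcome yields the key metrics to watch and a recommended action plan. Everything in it (names, structure, the `remediation_plan` helper) is invented for illustration and is not the Purview API.

```python
# Illustrative sketch only, not the Purview API: an outcome-based workflow
# maps a chosen data security outcome to its metrics and recommended actions.

OUTCOMES = {
    "Prevent sensitive data exfiltration to risky destinations": {
        "metrics": ["sensitive_files_at_risk", "exfiltrated_last_30_days"],
        "actions": [
            "Create a DLP policy covering personal email domains",
            "Create an IRM policy to detect exfiltration to external cloud services",
        ],
    },
}

def remediation_plan(outcome: str, signals: dict) -> dict:
    """Look up the metrics and recommended actions for a chosen outcome."""
    spec = OUTCOMES[outcome]
    return {
        "outcome": outcome,
        # Pull each metric's current value from observed signals (default 0).
        "metrics": {m: signals.get(m, 0) for m in spec["metrics"]},
        "actions": spec["actions"],
    }
```

The point of the shape is that posture stays assessable over time: the same metric names that drive the action plan are the ones an admin re-checks after remediation.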
AI agents are growing rapidly in enterprise environments, bringing unique data risks that traditional security can’t address. Their autonomous actions and broad access to sensitive information create complex risk profiles tied to behavior, not just identity. To stay secure, organizations need data protection strategies that treat agents as first-class entities with tailored visibility, risk scoring, and policy controls. DSPM is also adapting to this new scenario:

▪ AI Observability for agents: We’re introducing a dedicated view within DSPM that treats agents—such as those created in Microsoft 365 Copilot, Copilot Studio, and Azure AI Foundry—as first-class entities in your organization when it comes to data security posture. It provides a unified inventory of all agents, including third-party agents, as well as the assigned insider risk level based on each agent’s behavior, posture metrics, and activity trends. Security teams can drill down into individual agents to see contextual insights such as risky behaviors and oversharing patterns, and can take recommended actions, such as creating retention policies. AI Observability gives customers clear visibility across agents and connects insights to guided actions—simplifying governance, facilitating risk prioritization, and enabling secure AI adoption without slowing innovation.

[Figure 3: AI Observability plane with inventory of 1st and 3rd party agents within the organization, as well as assigned risk level per agent]

Learn more about all the innovations we are announcing to help you safely adopt agents.

Redefining data security posture for the AI-powered era

The new DSPM experience marks a pivotal moment in Microsoft Purview’s journey to secure the modern enterprise. By unifying visibility, protection, and investigation across human and agentic data activity, Purview empowers organizations to embrace AI responsibly, reduce risk, and drive continuous improvement in their data security posture.
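The per-agent insider risk level mentioned under AI Observability can be sketched as a simple scoring function over behavior signals. This is a hypothetical illustration of the concept, not Purview's actual scoring model; the signal names and weights are invented.

```python
# Hypothetical sketch of assigning an insider risk level to an AI agent from
# behavior signals, in the spirit of the AI Observability inventory above.
# Signal names and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class AgentActivity:
    name: str
    sensitive_files_accessed: int
    oversharing_events: int
    anomalous_actions: int

def agent_risk_level(agent: AgentActivity) -> str:
    # Weight riskier behaviors more heavily than routine sensitive access.
    score = (agent.sensitive_files_accessed
             + 5 * agent.oversharing_events
             + 10 * agent.anomalous_actions)
    if score >= 50:
        return "High"
    if score >= 15:
        return "Medium"
    return "Low"
```

The useful property this models is that risk is tied to what the agent does, not just who it is: two agents with identical permissions land in different tiers if one is oversharing.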
When it comes to leveraging built-in AI within data security solutions, admins can view proactive or summary insights and launch a Data Security Investigation (DSI) directly from DSPM. This integration lets admins apply the power and scale of DSI analysis to take a closer look at data risks. Applying AI to strengthen data security is just as critical as securing AI itself: AI-powered solutions help organizations anticipate and neutralize risks at scale, and agents have the potential to take data security processes to another level, increasing automation and allowing teams to focus on the most pressing risks.

That’s why we’re thrilled to introduce the Data Security Posture Agent, designed to augment the new Purview DSPM experience even further. This agent leverages LLMs to understand context and intent, going beyond traditional classifiers that can often miss nuance. It analyzes selected file sets and generates precise reports on requested information, such as merger and acquisition details or PO numbers. Armed with these insights, admins can decide on their own next steps, whether that’s applying new labels, updating policies, or initiating investigations, streamlining discovery and risk reduction in one intelligent, outcome-driven experience. This capability tackles the challenges of manual, time-consuming data analysis and limited visibility into sensitive information, helping organizations achieve faster resolution, a stronger compliance posture, and greater operational efficiency.

[Figure 4: Data Security Posture Agent to discover sensitive data and take appropriate actions]

Combined with the Data Security Triage Agent and other Security Copilot capabilities integrated within Purview, the Data Security Posture Agent creates a robust AI-powered foundation for modern data security teams.
To make the agents easily accessible and help teams get started more quickly, we are excited to announce that Security Copilot will be available to all Microsoft 365 E5 customers. Rollout starts today for existing Security Copilot customers with Microsoft 365 E5 and will continue in the upcoming months for all Microsoft 365 E5 customers. Customers will receive advance notice before activation. Learn more: https://aka.ms/SCP-Ignite25

Building the future of data security alongside customers

As organizations navigate this new era of AI-driven innovation, the ability to secure data confidently and proactively is no longer optional—it’s mission-critical. Microsoft Purview DSPM delivers a unified, outcome-based approach that transforms complexity into clarity, guiding teams from insight to action with precision.

The current Purview DSPM and DSPM for AI solutions will remain available until June, when the new Purview DSPM experience becomes the centralized solution. Customers’ top-of-mind capabilities within current workflows, such as Data Risk Assessments and the Security Copilot prompt gallery, will also be available within the new DSPM experience. The new DSPM experience and capabilities will roll out in public preview within the next few weeks and will be available for customers with Microsoft 365 E5 and E5 Compliance licenses.

By extending visibility across external sources, introducing AI observability, and empowering remediation through intelligent agents, Purview enables enterprises to embrace AI and agents without compromise—strengthening trust, reducing risk, and driving continuous improvement in data security posture. The future of secure AI adoption starts here.

Getting connected with Microsoft Purview

Read our blog with the main announcements across the Purview data security solutions at Ignite. Try Microsoft Purview data security. Learn more about Microsoft Purview on our website and Microsoft Learn.
[1] July 2025 multi-national survey of over 1700 data security professionals commissioned by Microsoft from Hypothesis Group

How to deploy Microsoft Purview DSPM for AI to secure your AI apps
Microsoft Purview Data Security Posture Management (DSPM for AI) is designed to enhance data security for the following AI applications:

Microsoft Copilot experiences, including Microsoft 365 Copilot.
Enterprise AI apps, including ChatGPT Enterprise integration.
Other AI apps, including all other AI applications, like ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini, accessed through the browser.

In this blog, we will dive into the different policies and reporting available to discover, protect, and govern these three types of AI applications.

Prerequisites

Please refer to the prerequisites for DSPM for AI in the Microsoft Learn docs.

Log in to the Purview portal

To begin, log into the Microsoft Purview portal with your admin credentials:

In the Microsoft Purview portal, go to the Home page.
Find DSPM for AI under Solutions.

1. Securing Microsoft 365 Copilot

Be sure to check out our blog on How to use the DSPM for AI data assessment report to help you address oversharing concerns when you deploy Microsoft 365 Copilot.

Discover potential data security risks in Microsoft 365 Copilot interactions

In the Overview tab of DSPM for AI, start with the tasks in “Get Started” and activate Purview Audit if you have not yet activated it in your tenant, to get insights into user interactions with Microsoft Copilot experiences.

In the Recommendations tab, review the recommendations that are under “Not Started”. Create the following data discovery policy to discover sensitive information in AI interactions by clicking into it:

Detect risky interactions in AI apps - This public preview Purview Insider Risk Management policy helps calculate user risk by detecting risky prompts and responses in Microsoft 365 Copilot experiences. Click here to learn more about the risky AI usage policy.
With the policies to discover sensitive information in Microsoft Copilot experiences in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter to Microsoft Copilot experiences, and review the following:

Total interactions over time (Microsoft Copilot)
Sensitive interactions per AI app
Top unethical AI interactions
Top sensitivity labels referenced in Microsoft 365 Copilot
Insider risk severity
Insider risk severity per AI app
Potential risky AI usage

Protect sensitive data in Microsoft 365 Copilot interactions

From the Reports tab, click “View details” on each report graph to view detailed activities in the Activity Explorer. Using the available filters, filter the results to view activities from Microsoft Copilot experiences based on Activity type, AI app category, App type, Scope (which supports administrative units for DSPM for AI), and more. Then drill down into each activity to view details, including the ability to view prompts and responses with the right permissions.

To protect the sensitive data in interactions for Microsoft 365 Copilot, review the Not Started policies in the Recommendations tab and create these policies:

Information Protection Policy for Sensitivity Labels - This option creates default sensitivity labels and sensitivity label policies. If you've already configured sensitivity labels and their policies, this configuration is skipped.

Protect sensitive data referenced in Microsoft 365 Copilot - This guides you through the process of creating a Purview Data Loss Prevention (DLP) policy to restrict the processing of content with specific sensitivity labels in Copilot interactions. Click here to learn more about Data Loss Prevention for Microsoft 365 Copilot.

Protect sensitive data referenced in Copilot responses - Sensitivity labels help protect files by controlling user access to data.
Microsoft 365 Copilot honors sensitivity labels on files and only shows users files they already have access to in prompts and responses. Use Data assessments to identify potential oversharing risks, including unlabeled files. Stay tuned for an upcoming blog post on using DSPM for AI data assessments!

Use Copilot to improve your data security posture - Data Security Posture Management combines deep insights with Security Copilot capabilities to help you identify and address security risks in your org.

Once you have created policies from the Recommendations tab, you can go to the Policies tab to review and manage all the policies you have created across your organization to discover and safeguard AI activity in one centralized place, as well as edit the policies or investigate alerts associated with those policies in the solution. Note that additional policies not created from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies that secure and govern all AI apps.

Govern the prompts and responses in Microsoft 365 Copilot interactions

Understand and comply with AI regulations by selecting “Guided assistance to AI regulations” in the Recommendations tab and walking through the “Actions to take”.

From the Recommendations tab, create a Control unethical behavior in AI Purview Communication Compliance policy to detect sensitive information in prompts and responses and address potentially unethical behavior in Microsoft Copilot experiences and ChatGPT for Enterprise. This policy covers all users and groups in your organization.

To retain and/or delete Microsoft 365 Copilot prompts and responses, set up a retention policy by navigating to Microsoft Purview Data Lifecycle Management and finding Retention policies under the Policies header. You can also preserve, collect, analyze, review, and export Microsoft 365 Copilot interactions by creating an eDiscovery case.
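Conceptually, the Activity Explorer filtering used throughout this guide narrows a stream of logged AI activity by whichever filters you set. The toy sketch below shows that idea only; the field names are invented for illustration and are not Purview's event schema.

```python
# Illustrative only: a toy version of Activity Explorer filtering, narrowing
# logged AI activity by activity type and AI app category. Field names are
# invented for this sketch, not Purview's schema.

def filter_events(events, activity=None, app_category=None):
    """Keep events matching every filter that is set; None means 'any'."""
    return [e for e in events
            if (activity is None or e["activity"] == activity)
            and (app_category is None or e["app_category"] == app_category)]

events = [
    {"activity": "AIInteraction", "app_category": "Microsoft Copilot experiences"},
    {"activity": "AIInteraction", "app_category": "Other AI apps"},
    {"activity": "SensitiveInfoPasted", "app_category": "Other AI apps"},
]
```

Unset filters match everything, so drilling down is just a matter of adding filters one at a time, which mirrors how the portal's filter chips compose.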
2. Securing Enterprise AI apps

Please refer to this amazing blog, Unlocking the Power of Microsoft Purview for ChatGPT Enterprise | Microsoft Community Hub, for detailed information on how to integrate with ChatGPT Enterprise and the Purview solutions it currently supports: Purview Communication Compliance, Insider Risk Management, eDiscovery, and Data Lifecycle Management. You can also learn more about the feature through our public documentation.

3. Securing other AI

Microsoft Purview DSPM for AI currently supports the following list of AI sites. Be sure to also check out our blog on the new Microsoft Purview data security controls for the browser and network to secure other AI apps.

Discover potential data security risks in prompts sent to other AI apps

In the Overview tab of DSPM for AI, go through these three steps in “Get Started” to discover potential data security risks in other AI interactions:

Install the Microsoft Purview browser extension

For Windows users: The Purview extension is not necessary for the enforcement of data loss prevention on the Edge browser, but it is required for Chrome to detect sensitive info pasted or uploaded to AI sites. The extension is also required to detect browsing to other AI sites through an Insider Risk Management policy for both the Edge and Chrome browsers. Therefore, the Purview browser extension is required for both Edge and Chrome on Windows.

For macOS users: The Purview extension is not necessary for the enforcement of data loss prevention on macOS devices, and browsing to other AI sites through Purview Insider Risk Management is not currently supported on macOS. Therefore, no Purview browser extension is required for macOS.
Extend your insights for data discovery – this one-click collection policy will set up three separate Purview detection policies for other AI apps:

Detect sensitive info shared in AI prompts in Edge – a Purview collection policy that detects prompts sent to ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini in Microsoft Edge and discovers sensitive information shared in prompt contents. This policy covers all users and groups in your organization in audit mode only.

Detect when users visit AI sites – a Purview Insider Risk Management policy that detects when users use a browser to visit AI sites.

Detect sensitive info pasted or uploaded to AI sites – a Purview Endpoint Data Loss Prevention (eDLP) policy that discovers sensitive content pasted or uploaded in Microsoft Edge, Chrome, and Firefox to AI sites. This policy covers all users and groups in your org in audit mode only.

With the policies to discover sensitive information in other AI apps in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter by Other AI apps, and review the following:

Total interactions over time (other AI apps)
Total visits (other AI apps)
Sensitive interactions per AI app
Insider risk severity
Insider risk severity per AI app

Protect sensitive info shared with other AI apps

From the Reports tab, click “View details” on each report graph to view detailed activities in the Activity Explorer. Using the available filters, filter the results to view activities based on Activity type, AI app category, App type, Scope (which supports administrative units for DSPM for AI), and more.
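To give a feel for what "detect sensitive info shared in AI prompts" means in practice, here is a deliberately tiny sketch: scan prompt text for patterns that resemble sensitive information types. Purview's real classifiers are far richer (trainable classifiers, exact data match, proximity rules); the two regex patterns here are invented for illustration only.

```python
# Toy sketch of sensitive-info detection in prompt text. These patterns are
# illustrative only and are NOT Purview's sensitive information type
# definitions, which include checksums, keywords, and confidence levels.
import re

PATTERNS = {
    "credit_card_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-info patterns found in a prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]
```

In an audit-mode policy, a non-empty result would be logged for the reports above rather than blocked; enforcement comes later, with the "Fortify your data security" policies.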
To protect the sensitive data in interactions for other AI apps, review the Not Started policies in the Recommendations tab and create these policies:

Fortify your data security – This will create three policies to manage your data security risks with other AI apps:

1) Block elevated risk users from pasting or uploading sensitive info on AI sites – creates a Microsoft Purview Endpoint Data Loss Prevention (eDLP) policy that uses adaptive protection to give a warn-with-override to elevated-risk users attempting to paste or upload sensitive information to other AI apps in Edge, Chrome, and Firefox. This policy covers all users and groups in your org in test mode. Learn more about adaptive protection in data loss prevention.

2) Block elevated risk users from submitting prompts to AI apps in Microsoft Edge – creates a Microsoft Purview browser data loss prevention (DLP) policy that uses adaptive protection to block elevated-, moderate-, and minor-risk users attempting to put information into other AI apps using Microsoft Edge. This integration is built into Microsoft Edge. Learn more about adaptive protection in data loss prevention.

3) Block sensitive info from being sent to AI apps in Microsoft Edge – creates a Microsoft Purview browser data loss prevention (DLP) policy that detects a selection of common sensitive information types inline and blocks prompts containing them from being sent to AI apps while using Microsoft Edge. This integration is built into Microsoft Edge.

Once you have created policies from the Recommendations tab, you can go to the Policies tab to review and manage all the policies you have created across your organization to discover and safeguard AI activity in one centralized place, as well as edit the policies or investigate alerts associated with those policies in the solution.
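The three "Fortify your data security" policies can be read as one decision table: the enforcement applied depends on the user's adaptive protection risk level, the channel, and whether sensitive info is present. The sketch below is a hypothetical simplification of that table; the tier names, channel names, and the `enforcement` function are invented for illustration, not Purview's implementation.

```python
# Hypothetical decision table mirroring the three policies above. Channel and
# tier names are invented for this sketch; Purview's actual evaluation is
# richer (test mode, per-policy scoping, overrides, etc.).

def enforcement(user_risk: str, channel: str, contains_sensitive: bool) -> str:
    # Browser DLP built into Edge blocks risky users' prompts and blocks
    # prompts containing common sensitive info types (policies 2 and 3).
    if channel == "edge_inline" and (
        contains_sensitive or user_risk in {"elevated", "moderate", "minor"}
    ):
        return "block"
    # Endpoint DLP warns elevated-risk users pasting or uploading sensitive
    # info in Edge, Chrome, and Firefox, with override allowed (policy 1).
    if channel == "endpoint_paste" and user_risk == "elevated" and contains_sensitive:
        return "warn_with_override"
    # Everything else is audited only.
    return "audit"
```

Seen this way, the design choice is clear: the strictest control lives where the integration is deepest (inline in Edge), while the cross-browser endpoint control is softer (warn-with-override) and scoped to the riskiest users.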
Note that additional policies not created from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies that secure and govern all AI apps.

Conclusion

Microsoft Purview DSPM for AI can help you discover, protect, and govern the interactions from AI applications across Microsoft Copilot experiences, Enterprise AI apps, and other AI apps. We recommend you review the Reports in DSPM for AI routinely to discover any new interactions that may be of concern, and create policies to secure and govern those interactions as necessary. We also recommend you use the Activity Explorer in DSPM for AI to review Activity Explorer events while users interact with AI, including the ability to view prompts and responses with the right permissions. We will continue to update this blog with new features that become available in DSPM for AI, so be sure to bookmark this page!

Follow-up Reading

Check out this blog on the details of each recommended policy in DSPM for AI: Microsoft Purview – Data Security Posture Management (DSPM) for AI | Microsoft Community Hub
Address oversharing concerns with the Microsoft 365 blueprint - aka.ms/Copilot/Oversharing
Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
Downloadable whitepaper: Data Security for AI Adoption | Microsoft
Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft
365

Introducing Security Dashboard for AI (Now in Public Preview)
AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions, across their entire AI estate, spanning AI platforms, apps, and agents. 53% of security professionals say their current AI risk management needs improvement, presenting an opportunity to better identify, assess, and manage risk effectively.[1] At the same time, 86% of leaders prefer integrated platforms over fragmented tools, citing better visibility, fewer alerts, and improved efficiency.[2]

To address these needs, we are excited to announce that the Security Dashboard for AI, previously announced at Microsoft Ignite, is available in public preview. This unified dashboard aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview, enabling users to see left-to-right across purpose-built security tools from a single pane of glass. The dashboard equips CISOs and AI risk leaders with a governance tool to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. Security teams can continue using the tools they trust while security leaders govern and collaborate effectively.

Gain Unified AI Risk Visibility

Consolidating risk signals from across purpose-built tools can simplify AI asset visibility and oversight, increase security teams’ efficiency, and reduce the opportunity for human error. The Security Dashboard for AI provides leaders with unified AI risk visibility by aggregating security, identity, and data risk across Defender, Entra, and Purview into a single interactive dashboard experience.
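A conceptual way to picture this aggregation is rolling per-tool findings into one scorecard with per-source subtotals. The structure below is invented for illustration (it is not the dashboard's API or data model); the signal names merely echo the risk categories mentioned above.

```python
# Conceptual sketch only: aggregating per-tool risk signals into a single
# scorecard. The sources, signal names, and structure are invented; this is
# not the Security Dashboard for AI's actual data model.

def risk_scorecard(signals: dict) -> dict:
    """Sum each source's finding counts and compute an overall total."""
    per_source = {src: sum(counts.values()) for src, counts in signals.items()}
    return {"per_source": per_source, "total": sum(per_source.values())}

signals = {
    "defender": {"model_vulnerabilities": 2, "misconfigurations": 3},
    "entra":    {"overprivileged_agent_identities": 1},
    "purview":  {"sensitive_data_in_ai_interactions": 4},
}
```

The per-source subtotals matter as much as the total: they tell a risk leader which team's tooling to drill into next, which is the "left-to-right" view the dashboard is after.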
The Overview tab of the dashboard provides an AI risk scorecard, giving immediate visibility into where there may be risks for security teams to address. It also assesses an organization's implementation of Microsoft security for AI capabilities and provides recommendations for improving AI security posture. The dashboard also features an AI inventory with comprehensive views to support AI asset discovery, risk assessment, and remediation actions for broad coverage of AI agents, models, MCP servers, and applications.

The dashboard provides coverage for all Microsoft AI solutions supported by Entra, Defender, and Purview—including Microsoft 365 Copilot, Microsoft Copilot Studio agents, and Microsoft Foundry applications and agents—as well as third-party AI models, applications, and agents, such as Google Gemini, OpenAI ChatGPT, and MCP servers. This supports comprehensive visibility and control, regardless of where applications and agents are built.

Prioritize Critical Risk with Security Copilot’s AI-Powered Insights

Risk leaders must do more than recognize existing risks—they also need to determine which ones pose the greatest threat to their business. The dashboard provides a consolidated view of AI-related security risks and leverages Security Copilot’s AI-powered insights to help find the most critical risks in an environment. For example, Security Copilot’s natural language interaction improves agent discovery and categorization, helping leaders identify unmanaged and shadow AI agents to improve security posture. Security Copilot also lets leaders investigate AI risks and agent activities through prompt-based exploration, putting them in the driver’s seat for additional risk investigation.

Drive Risk Mitigation

By streamlining risk mitigation recommendations and automating task delegation, organizations can significantly improve the efficiency of their AI risk management processes.
This approach can reduce hidden AI risk and accelerate compliance efforts, helping to ensure that risk mitigation is timely and accurate. To address this, the Security Dashboard for AI evaluates how organizations put Microsoft’s AI security features into practice and offers tailored suggestions to strengthen AI security posture. It leverages Microsoft’s productivity tools for immediate action within the practitioner portal, making it easy for administrators to delegate recommended tasks to designated users.

With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, the Security Dashboard for AI is included with eligible Microsoft security products customers already use. If an organization is already using Microsoft security products to secure AI, it is already a Security Dashboard for AI customer.

Getting Started

Existing Microsoft Security customers can start using the Security Dashboard for AI today. It is included when a customer has the Microsoft Security products—Defender, Entra, and Purview—with no additional licensing required. To begin, visit http://ai.security.microsoft.com or access the dashboard from the Defender, Entra, or Purview portals. Learn more about the Security Dashboard for AI on Microsoft Learn.

[1] AuditBoard & Ascend2 Research. The Connected Risk Report: Uniting Teams and Insights to Drive Organizational Resilience. AuditBoard, October 2024.
[2] Microsoft. 2026 Data Security Index: Unifying Data Protection and AI Innovation. Microsoft Security, 2026.

New Microsoft Purview Deployment Blueprint | Lightweight guide to mitigate data leakage
We’re excited to share our latest Data Security deployment blueprint: “Lightweight guide to mitigate data leakage”—a practical resource designed to help organizations quickly enable core data security features across their Microsoft 365 estate with minimal setup. The blueprint follows a good / better / best model that maps protections to your licensing. “Good” highlights foundational features included in Business Premium SKUs, while “Better” and “Best” layer in advanced E5 Compliance capabilities, such as auto-labeling, Endpoint DLP, insider risk signals, and much more. With the new E5 Compliance Add-On for Business Premium, this guide shows how organizations can capture quick wins today while building toward stronger, long-term security practices. This blueprint is designed for IT administrators, security teams, and compliance stakeholders tasked with protecting sensitive data – and it’s equally valuable for Microsoft partners and consultants supporting customers on their data security journey. Whether you’re enabling basic safeguards or advancing toward automated protection, this guide provides clear, actionable steps to strengthen your data security posture. Ready to get started? Visit our Purview deployment blueprint page or jump straight to the direct PPT link for a step-by-step walkthrough. Securing your data doesn’t have to be complex – this lightweight blueprint makes it achievable for organizations of any size.
Introducing a faster, more intelligent, end-to-end insider risk investigation experience
Modern insider risk investigations succeed or fail based on how quickly teams can move from signal to clarity. That’s why the latest Microsoft Purview Insider Risk Management (IRM) investigation enhancements are designed as a progressive three-step acceleration model: AI‑driven prioritization, followed by faster validation and easier escalation. Step 1: Accelerate triage with the newly enhanced Data Security Triage Agent The first and most impactful speed improvement happens before analysts even begin reviewing individual activities. The newly enhanced Data Security Triage Agent acts as the front door to investigations, helping teams immediately understand who and what matters most. Instead of manually reviewing raw alerts, the Data Security Triage Agent provides analysts with: Prioritized alerts based on meaningful user and activity risk Behavioral risk patterns that summarize activity into clear investigative themes Critical user context, such as role, employment status (including last working date), and prior alert history, surfaced upfront to inform urgency and scope To make these insights even more actionable, we’re now adding an advanced AI reasoning layer that enables deeper, multi-step analysis across these data signals. This new reasoning layer analyzes the massive quantity of logs received each day to better identify risk patterns within IRM alerts. This further improves analysts’ ability to focus attention where risk is most likely, rather than spending time assembling context across multiple views. Launch details: Public Preview: March 2026 Roadmap ID: 557683 Microsoft Purview Insider Risk Management Alerts page showing a prioritized list of insider risk alerts. The view highlights AI‑generated risk summaries, behavioral patterns, and user context such as role, employment status, and prior alerts to help analysts focus on high‑risk activity. Don't have the Data Security Triage Agent deployed yet?
Navigate to Purview's Agent tab and turn on the Data Security Triage Agent in Insider Risk Management. Analysts and investigators can also access the Data Security Triage Agent from the Triage Agent toggle in the Alerts tab of Insider Risk Management. To help teams get started more quickly, Security Copilot is being made available to all Microsoft 365 E5 customers. Rollout has already begun and will continue over the coming months for all Microsoft 365 E5 customers. Customers will receive advance notice before activation. Learn more. Step 2: Validate risk instantly with content preview in Activity Explorer Once activity is identified, the next question is immediate and practical: “Is this activity actually risky for our business?” With content preview in Activity Explorer, investigators can validate risk the moment suspicious activity appears, without creating a case or waiting for content to be downloaded. Supported file and message content can be previewed inline, allowing analysts to: Confirm whether sensitive data is present Identify false positives early Decide whether escalation is warranted This turns Activity Explorer into a true triage surface, enabling fast, informed decisions before committing to a full investigation workflow. Launch details: Public preview: April 2026 Roadmap ID: 557189 Activity Explorer displaying a table of file activities with inline content preview. A document preview pane shows file contents and metadata, allowing investigators to quickly assess whether accessed content contains sensitive business information. Step 3: Escalate immediately by creating cases without content download When risk is confirmed, speed matters. With the ability to create cases without content download enabled, teams no longer have to wait for content collection before taking action.
Analysts can immediately: Create and escalate a case Assign ownership and measure progress Begin coordination with investigators, legal, or HR Content download can be enabled later while the case remains active, allowing teams to maintain momentum while deciding whether deeper evidence review is required. This also enables scale—supporting up to 2,000 active cases, while prioritizing content access for the subset of investigations that truly need it. Launch details: Public preview: March 2026 Roadmap ID: 554940 Microsoft Purview Insider Risk Management dashboard showing alerts for a single user with options to create and manage investigation cases. The interface illustrates case creation and escalation without requiring content download. The enhanced Insider Risk Management investigation experience isn’t about doing more—it’s about moving faster with confidence. By combining AI‑driven prioritization, early risk validation, and easier escalations with these new enhancements, teams can move from signal to action without losing momentum. Get started with Insider Risk investigations Click this link to get started today! Privacy Statement: Microsoft Purview Insider Risk Management correlates various signals to identify potential malicious or inadvertent insider risks, such as IP theft, data leakage, and security violations. Insider Risk Management enables customers to create policies to manage security and compliance. Built with privacy by design, users are pseudonymized by default, and role-based access controls and audit logs are in place to help ensure user-level privacy.
Building Secure, Enterprise Ready AI Agents with Purview SDK and Agent Framework
At Microsoft Ignite, we announced the public preview of Purview integration with the Agent Framework SDK—making it easier to build AI agents that are secure, compliant, and enterprise‑ready from day one. AI agents are quickly moving from demos to production. They reason over enterprise data, collaborate with other agents, and take real actions. As that happens, one thing becomes non‑negotiable: Governance has to be built in. That’s where Purview SDK comes in. Agentic AI Changes the Security Model Traditional apps expose risks at the UI or API layer. AI agents are different. Agents can: Process sensitive enterprise data in prompts and responses Collaborate with other agents across workflows Act autonomously on behalf of users Without built‑in controls, even a well‑designed agent can create compliance gaps. Purview SDK brings Microsoft’s enterprise data security and compliance directly into the agent runtime, so governance travels with the agent—not after it. What You Get with Purview SDK + Agent Framework This integration delivers a few key things developers and enterprises care about most: Inline Data Protection Evaluate prompts and responses against Data Loss Prevention (DLP) policies in real time. Content can be allowed or blocked automatically. Built‑In Governance Send AI interactions to Purview for audit, eDiscovery, communication compliance, and lifecycle management—without custom plumbing. Enterprise‑Ready by Design Ship agents that meet enterprise security expectations from the start, not as a follow‑up project. All of this is done natively through Agent Framework middleware, so governance feels like part of the platform—not an add‑on. 
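Conceptually, the middleware hook works like the sketch below. This is an illustrative toy only, not the actual Purview SDK or Agent Framework API: the class names, the regex-based policy check, and the echo "model" are all assumptions standing in for the real middleware types and the Purview policy service.

```python
import re
from dataclasses import dataclass


@dataclass
class Message:
    role: str
    content: str


class PurviewGuardMiddleware:
    """Toy stand-in for a Purview DLP check: block any message that
    matches a 'sensitive data' pattern (here, a credit card number)."""
    CARD_RE = re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b")

    def evaluate(self, message: Message) -> str:
        # A real policy engine would return allow/redact/block per tenant DLP rules.
        return "block" if self.CARD_RE.search(message.content) else "allow"


class Agent:
    """Minimal agent pipeline: every prompt and response passes
    through the middleware before it is processed or returned."""

    def __init__(self, middleware: PurviewGuardMiddleware):
        self.middleware = middleware

    def run(self, prompt: str) -> str:
        if self.middleware.evaluate(Message("user", prompt)) == "block":
            return "[blocked by policy]"
        response = Message("assistant", f"Echo: {prompt}")  # model call would go here
        if self.middleware.evaluate(response) == "block":
            return "[blocked by policy]"
        return response.content


agent = Agent(PurviewGuardMiddleware())
print(agent.run("Summarize our Q3 plan"))            # allowed through
print(agent.run("Card 4111-1111-1111-1111 leaked"))  # blocked inline
```

In the real integration, the middleware call would reach the Purview policy service and return an allow, redact, or block decision based on the DLP policies configured in the enterprise tenant, while also emitting the governance signals described below.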
How Enforcement Works (Quickly) When an agent runs: Prompts and responses flow through the Agent Framework pipeline Purview SDK evaluates content against configured policies A decision is returned: allow, redact, or block Governance signals are logged for audit and compliance This same model works for: User‑to‑agent interactions Agent‑to‑agent communication Multi‑agent workflows Try It: Add Purview SDK in Minutes Wiring Purview SDK into an agent takes only a few lines of Agent Framework middleware code. Once configured: Prompts and responses are evaluated against Purview policies set up within the enterprise tenant Sensitive data can be automatically blocked Interactions are logged for governance and audit Designed for Real Agent Systems Most production AI apps aren’t single‑agent systems. Purview SDK supports: Agent‑level enforcement for fine‑grained control Workflow‑level enforcement across orchestration steps Agent‑to‑agent governance to protect data as agents collaborate This makes it a natural fit for enterprise‑scale, multi‑agent architectures. Get Started Today You can start experimenting right away: Try the Purview SDK with Agent Framework Follow the Microsoft Learn docs to configure Purview SDK with Agent Framework. Explore the GitHub samples See examples of policy‑enforced agents in Python and .NET. Secure AI, Without Slowing It Down AI agents are quickly becoming production systems—not experiments. By integrating Purview SDK directly into the Agent Framework, Microsoft is making governance a default capability, not a deployment blocker. Build intelligent agents. Protect sensitive data. Scale with confidence.
Microsoft Purview Data Security Investigations is now generally available
Every data security investigation starts with the same question: What data security risks are buried in this data? Exposed credentials in thousands of files across a data estate. Evidence of fraud hidden in vendor communications. Sensitive documents accidentally shared to a large group. Finding these risks manually — reviewing content file by file, message by message — is no longer viable when organizations are managing 220 zettabytes of data[1] and facing over 12,000 confirmed breaches annually[2]. That's why we built Microsoft Purview Data Security Investigations, now generally available. Microsoft Purview Data Security Investigations enables data security teams to identify investigation-relevant data, investigate that data with AI-powered deep content analysis, and mitigate risk — all within one unified solution. Teams can quickly analyze data at scale to surface sensitive data and security risks, then collaborate securely to address them. By streamlining complex, time‑consuming investigative workflows, admins can resolve investigations in hours instead of weeks or months. Proactive and reactive investigation scenarios Organizations are using Data Security Investigations to tackle diverse data security challenges — from reactive incident response to proactive risk assessment. Some of our top use cases from preview include: Data breach and leak: Understand the severity, sensitivity, and significance of data that has been leaked or breached, including risks buried in impacted data, to take action and mitigate its impact to the organization. Credentials exposure: Proactively scan thousands of SharePoint sites to identify files containing credentials like passwords, which can give a threat actor prolonged access to an organization's environment. Internal fraud and bribery: Uncover suspicious communications tied to vendor payments or client interactions, surfacing hard-to-find patterns in large volumes of emails and messages.
Sensitive data exposure in Teams: Determine who accessed classified documents after accidental sharing — and whether that data was further distributed. Inappropriate content investigations: Quickly find what was posted, where, and by whom, even when teams only know a timeframe or channel name. Investigations that once took weeks — or weren’t possible at all — can now be completed in hours. By eliminating manual effort and surfacing hidden risks across sprawling data estates, Data Security Investigations empowers teams to investigate more efficiently and confidently, making deep, scalable investigations a reality. What Microsoft Purview Data Security Investigations does – and what’s new Since launching public preview, we've listened closely to customer feedback and made significant enhancements to help teams investigate faster, mitigate more effectively, and manage costs with confidence. Data Security Investigations addresses three critical stages of an investigation: Identify impacted data Data Security admins can efficiently identify relevant data by searching their Microsoft 365 data estate to locate emails, Teams messages, Copilot prompts and responses, and documents. Investigators can also launch pre-scoped investigations from a Microsoft Defender XDR incident or a Microsoft Purview Insider Risk Management case. We’ve recently added a new integration that allows admins to launch a Data Security Investigation from Microsoft Purview Data Security Posture Management as well. This capability can help a data security admin investigate an objective, such as preventing data exfiltration. Investigate using deep content analysis Once the investigation is scoped, the solution's generative AI capabilities allow admins to gain deeper insights into the data, analyzing across 95+ languages to uncover critical sensitive data and security risks. Teams can quickly answer three questions: What data security risks exist within the data? Why do they matter? 
And what actions can be taken to mitigate them? To help answer these questions, two new investigative capabilities, AI search and AI context input, as well as enhancements to existing features, were added in November. Data Security Investigations helps admins scale their impact and accelerate investigations with the following features: AI search: Using a new AI-powered natural language search experience, admins can find key risks using keywords, metadata, and semantic embeddings — making it easier to locate investigation-relevant content across large data estates. Categorization: By automatically classifying investigation data into meaningful categories, admins can quickly understand incident severity, what types of content are at risk, and trends within an investigation. Vector search: Using semantic search, admins can find contextually related content — even when exact keywords don't match. Risk examination: Using deep content analysis, admins can examine content for sensitive data and security risks, providing a risk score, recommended mitigation steps, and AI-generated rationale for each analyzed asset. AI context input: Admins can now add investigation-specific context before analysis, resulting in more efficient, higher-quality insights tailored to the specific incident. AI search in action, finding credentials present in the dataset being investigated. Mitigate identified risks Investigators can use Data Security Investigations to securely collaborate with partner teams to mitigate identified risks, simplifying tasks that have traditionally been time consuming and complex. In September, we launched an integration with the Microsoft Sentinel graph, the data risk graph, allowing admins to visualize correlations between investigation data, users, and their activities.
This automatically combines unified audit logs, Entra audit logs, and threat intelligence, which would otherwise need to be manually correlated, saving time, providing critical context, and allowing investigators to understand all nodes in their investigation. At the start of January 2026, we launched a new mitigation action, purge, that helps admins quickly and efficiently delete sensitive or overshared content directly within the investigation workflow in the product interface. This reduces exposure immediately and keeps incidents from escalating or recurring. Built-in cost management tools To help customers predict and manage costs associated with using Data Security Investigations, we recently released a lightweight cost estimator and usage dashboard. The in-product cost estimator is now available to help analysts model and forecast both storage and compute unit costs based on specific use cases, enabling more accurate budget planning. Additionally, the usage dashboard provides granular breakdowns of billed storage and compute unit usage, empowering data security admins to identify cost-saving opportunities and optimize resource allocation. For detailed guidance on managing costs, see https://aka.ms/DSIcostmanagementtips. Refined business model for general availability These cost management tools are designed to support our updated business model, which offers greater flexibility and transparency. Customers need the freedom to scale investigations without overcommitting resources. To better align with how customers investigate data risk at scale, we refined the Data Security Investigations business model as part of general availability. 
The product now uses two consumptive meters: Data Security Investigations Storage Meter – For storing investigation-related data, charged by GB Data Security Investigations Compute Meter – For the computational capacity required to complete AI-powered data analysis and actions, charged by Compute Units (CUs) Monthly charges are determined by the amount of data stored and the number of CUs consumed per hour. This pay-as-you-go model ensures customers only pay for what they need, when they need it, providing the flexibility, scalability, and cost efficiency needed for both urgent incident response and proactive data security hygiene assessments. Find more information on pricing at aka.ms/purviewpricing. Get started today As data security threats evolve, so must the way we investigate them. Microsoft Purview Data Security Investigations is now generally available, giving organizations a modern, AI-powered approach to uncovering and mitigating risk — without the complexity of disconnected tools or manual workflows. Whether investigating an active breach or proactively hunting for hidden risks, Data Security Investigations gives data security teams the speed and precision needed to act decisively in today's threat landscape. Join us for a live Ask Me Anything with the people behind the product on Thursday, February 5th at 10am PST; more details here: aka.ms/PurviewDSIAMA2 Learn more about Data Security Investigations at aka.ms/DSIdocs View pricing details at aka.ms/purviewpricing Try Data Security Investigations today! Visit the product at https://purview.microsoft.com/dsi and find setup instructions at aka.ms/DSIsetup
[1] Worldwide IDC Global DataSphere Forecast, 2025–2029
[2] 2025-dbir-data-breach-investigations-report.pdf
Unlocking the Power of Microsoft Purview for ChatGPT Enterprise
In today's rapidly evolving technology landscape, data security and compliance are key. Microsoft Purview offers a robust solution for managing and securing interactions with AI-based solutions. This integration not only enhances data governance but also ensures that sensitive information is handled with the appropriate controls. Let's dive into the benefits of this integration and outline the steps to integrate with ChatGPT Enterprise specifically. The integration works for Entra-connected users on the ChatGPT workspace; if you have needs that go beyond this, please tell us why and how it impacts you. Important update 1: Effective May 1, these capabilities require you to enable pay-as-you-go billing in your organization. Important update 2: From May 19, you are required to create a collection policy to ingest ChatGPT Enterprise information. In DSPM for AI you will find this one-click process. Benefits of Integrating ChatGPT Enterprise with Microsoft Purview Enhanced Data Security: By integrating ChatGPT Enterprise with Microsoft Purview, organizations can ensure that interactions are securely captured and stored within their Microsoft 365 tenant. This includes user text prompts and AI app text responses, providing a comprehensive record of communications. Compliance and Governance: Microsoft Purview offers a range of compliance solutions, including Insider Risk Management, eDiscovery, Communication Compliance, and Data Lifecycle & Records Management. These tools help organizations meet regulatory requirements and manage data effectively. Customizable Detection: The integration allows for the detection of sensitive information using built-in and custom classifiers, which can be tailored to meet the specific needs of the organization. This helps ensure that sensitive data is identified and protected. The audit data streams into Advanced Hunting and the Unified Audit events, which can be used to generate visualisations of trends and other insights.
Seamless Integration: The ChatGPT Enterprise integration uses the Purview API to push data into Compliant Storage, ensuring that external data sources cannot access and push data directly. This provides an additional layer of security and control. Step-by-Step Guide to Setting Up the Integration 1. Get the Object ID for the Purview Account in Your Tenant:
Go to portal.azure.com and search for "Microsoft Purview" in the search bar.
Click on "Microsoft Purview accounts" from the search results.
Select the Purview account you are using and copy the account name.
Go to portal.azure.com and search for "Enterprise" in the search bar.
Click on Enterprise applications.
Remove the filter for Enterprise Applications.
Select All applications under Manage, search for the account name, and copy the Object ID.
2. Assign Graph API Roles to Your Managed Identity Application: Assign Purview API roles to your managed identity application by connecting to Microsoft Graph using Cloud Shell in the Azure portal.
Open a PowerShell window in portal.azure.com and run the command Connect-MgGraph. Authenticate and sign in to your account.
Run the following cmdlet to get the ServicePrincipal ID for your organization for the Purview API app:
(Get-MgServicePrincipal -Filter "AppId eq '9ec59623-ce40-4dc8-a635-ed0275b5d58a'").id
The role assignment below grants the permission Purview.ProcessConversationMessages.All to the Microsoft Purview account, allowing classification processing. Update the ObjectId in both the command and the body parameter to the one retrieved in step 1, and update the ResourceId to the ServicePrincipal ID retrieved in the last step.
$bodyParam = @{
  "PrincipalId" = "{ObjectID}"
  "ResourceId" = "{ResourceId}"
  "AppRoleId" = "a4543e1f-6e5d-4ec9-a54a-f3b8c156163f"
}
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam
We also need to add permission for the application to read the user accounts, so that ChatGPT Enterprise users are correctly mapped to Entra accounts. First run the following command to get the ServicePrincipal ID for your organization for the Graph app:
(Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'").id
The following step adds the permission User.Read.All to the Purview application. Update the ObjectId with the one retrieved in step 1, and the ResourceId with the ServicePrincipal ID retrieved in the last step.
$bodyParam = @{
  "PrincipalId" = "{ObjectID}"
  "ResourceId" = "{ResourceId}"
  "AppRoleId" = "df021288-bdef-4463-88db-98f22de89214"
}
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam
3. Store the ChatGPT Enterprise API Key in Key Vault The steps for setting up Key Vault integration for Data Map can be found here: Create and manage credentials for scans in the Microsoft Purview Data Map | Microsoft Learn. Once set up, the credential appears in Key Vault. 4. Integrate the ChatGPT Enterprise Workspace with Purview: Create a new data source in Purview Data Map that connects to the ChatGPT Enterprise workspace.
Go to purview.microsoft.com and select Data Map (use search if you do not see it on the first screen).
Select Data sources.
Select Register.
Search for ChatGPT Enterprise and select it.
Provide your ChatGPT Enterprise ID.
Create the first scan by selecting Table view and filtering on ChatGPT.
Add your Key Vault credentials to the scan.
Test the connection and, once complete, click Continue.
Review the summary screen and, if everything looks correct, click Save and run.
Validate the progress by clicking on the scan name; the first full scan may take an extended period of time, and depending on size it may take more than 24 hours to complete. If you click on the scan name, you can expand all the runs for that scan. When the scan completes you can start to make use of the DSPM for AI experience to review interactions with ChatGPT Enterprise. The mapping to users is based on the ChatGPT Enterprise connection to Entra, with prompts and responses stored in the user's mailbox. 5. Review and Monitor Data: Please see this article for required permissions and guidance around Microsoft Purview Data Security Posture Management (DSPM) for AI: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn. Use Purview DSPM for AI analytics and Activity Explorer to review interactions and classifications. You can expand prompts and responses in ChatGPT Enterprise. 6. Microsoft Purview Communication Compliance Communication Compliance (hereafter CC) is a feature of Microsoft Purview that allows you to monitor and detect inappropriate or risky interactions with ChatGPT Enterprise. You can monitor and detect requests and responses that are inappropriate based on ML models, regular Sensitive Information Types, and other classifiers in Purview. This can help you identify jailbreak and prompt injection attacks and flag them to IRM and for case management. Detailed steps to configure CC policies and supported configurations can be found here. 7. Microsoft Purview Insider Risk Management We believe that Microsoft Purview Insider Risk Management (hereafter IRM) can serve a key role in protecting your AI workloads long term. With its adaptive protection capabilities, IRM dynamically adjusts user access based on evolving risk levels.
In the event of heightened risk, IRM can enforce Data Loss Prevention (DLP) policies on sensitive content, apply tailored Entra Conditional Access policies, and initiate other necessary actions to effectively mitigate potential risks. This strategic approach helps you apply more stringent policies where they matter most, avoiding a boil-the-ocean approach and allowing your team to get started using AI. To get started, use the signals available to you, including CC signals, to raise IRM alerts and enforce adaptive protection. You should create your own custom IRM policy for this, and include Defender signals as well. Based on elevated risk, you may choose to block users from accessing certain assets such as ChatGPT Enterprise. Please see this article for more detail: Block access for users with elevated insider risk - Microsoft Entra ID | Microsoft Learn. 8. eDiscovery eDiscovery of AI interactions is crucial for legal compliance, transparency, accountability, risk management, and data privacy protection. Many industries must preserve and discover electronic communications and interactions to meet regulatory requirements. Including AI interactions in eDiscovery ensures organizations comply with these obligations and preserves relevant evidence for litigation. This process also helps maintain trust by enabling the review of AI decisions and actions, demonstrating due diligence to regulators. Microsoft Purview eDiscovery solutions | Microsoft Learn 9. Data Lifecycle Management Microsoft Purview offers robust solutions to manage AI data from creation to deletion, including classification, retention, and secure disposal. This ensures that AI interactions are preserved and retrievable for audits, litigation, and compliance purposes. Please see this article for more information: Automatically retain or delete content by using retention policies | Microsoft Learn.
Closing By following these steps, organizations can leverage the full potential of Microsoft Purview to enhance the security and compliance of their ChatGPT Enterprise interactions. This integration not only provides peace of mind but also empowers organizations to manage their data more effectively. Some of the features listed are still in preview and not yet fully integrated; please reach out to us if you have any questions or if you have additional requirements.
Artificial Intelligence & Security
Understanding Artificial Intelligence Artificial intelligence (AI) refers to computational systems that perform human‑intelligence tasks such as learning, reasoning, problem‑solving, perception, and language understanding by leveraging algorithmic and statistical methods to analyse data and make informed decisions. AI can also be described as the simulation of human intelligence through machines programmed to learn, reason, and act. It blends statistics, machine learning, and robotics to deliver the following outcomes: Prediction: The application of statistical modelling and machine learning techniques to anticipate future outcomes, such as detecting fraudulent transactions. Automation: The utilisation of robotics and artificial intelligence to streamline and execute routine processes, exemplified by automated invoice processing. Augmentation: The enhancement of human decision-making and operational capabilities through AI-driven tools, for instance, AI-assisted sales enablement. Artificial Intelligence: Core Capabilities and Market Outlook Key capabilities of AI include: Data-driven decision-making: Analysing large datasets to generate actionable insights and optimise outcomes. Anomaly detection: Identifying irregular patterns or deviations in data for risk mitigation and quality assurance. Visual interpretation: Processing and understanding visual inputs such as images and videos for applications like computer vision. Natural language understanding: Comprehending and interpreting human language to enable accurate information extraction and contextual responses. Conversational engagement: Facilitating human-like interactions through chatbots, virtual assistants, and dialogue systems. AI benefits from the exponential growth of data, machine learning models, and computing power.
AI is advancing rapidly: according to industry analyst reports, breakthroughs in deep learning and neural network architectures have enabled highly sophisticated applications across diverse sectors, including healthcare, finance, manufacturing, and retail. The global AI market is on a trajectory of significant expansion, projected to increase nearly 5X by 2030, from $391 billion in 2025 to $1.81 trillion. This growth corresponds to a compound annual growth rate (CAGR) of 35.9% during the forecast period. These projections are estimates and subject to change given the rapid growth and advancement of the AI era. AI and Cloud Synergy AI and cloud computing form a powerful technological combination, with digital assistants offering scalable, cloud-powered intelligence. Cloud platforms such as Azure provide pre-trained models and services, enabling businesses to deploy AI solutions efficiently. Core AI Workload Capabilities Machine Learning Machine learning (ML) underpins most AI systems by enabling models to learn from historical and real-time data to make predictions, classifications, and recommendations. These models adapt over time as they are exposed to new data, improving accuracy and robustness. Example use cases: Credit risk scoring in banking, demand forecasting in retail, and predictive maintenance in manufacturing. Anomaly Detection Anomaly detection techniques identify deviations from expected patterns in data, systems, or processes. This capability is critical for risk management and operational resilience, as it enables early detection of fraud, security breaches, or equipment failures. Example use cases: Fraud detection in financial transactions, network intrusion monitoring in cybersecurity, and quality control in industrial production. Natural Language Processing (NLP) NLP focuses on enabling machines to understand, interpret, and generate human language in both text and speech formats.
This capability powers a wide range of applications that require contextual comprehension and semantic accuracy.

Example use cases: sentiment analysis for customer feedback, document summarisation for legal and compliance teams, and multilingual translation for global operations.

Principles of Responsible AI

To ensure ethical and trustworthy AI, organisations must embrace:

- Reliability and safety
- Privacy and security
- Inclusiveness
- Fairness
- Transparency
- Accountability

These principles are embedded in frameworks such as the Responsible-AI-Standard and reinforced by governance models such as the Microsoft AI Governance Framework. See Responsible AI Principles and Approach | Microsoft AI.

AI and Security

AI introduces both opportunities and risks. A responsible approach to AI security involves three dimensions:

- Risk mitigation: addressing threats posed by immature or malicious AI applications.
- Security applications: using AI to enhance security and public safety.
- Governance systems: establishing frameworks to manage AI risks and ensure safe development.

Security Risks and Opportunities in the AI Transformation

AI's transformative nature brings new challenges:

- Cybersecurity: AI brings new opportunities to track, detect, and act on vulnerabilities in infrastructure and in learning models themselves.
- Data security: tools such as Microsoft Purview help protect data by performing assessments, creating data loss prevention (DLP) policies, and applying sensitivity labels.
- Information security: securing information remains the biggest risk in the AI era, and organisations increasingly rely on AI security frameworks to address it.

These concerns are echoed in The Crucial Role of Data Security Posture Management in the AI Era, which highlights insider threats, generative AI risks, and the need for robust data governance.
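The anomaly detection capability described earlier is the engine behind many security uses such as network intrusion monitoring. As a minimal sketch, the snippet below flags values whose z-score (distance from the mean in standard deviations) exceeds a threshold; the sample traffic data and the threshold are invented purely for illustration.

```python
# Toy anomaly detection via z-scores: flag values that deviate from the
# mean by more than `threshold` standard deviations.
# Sample data and threshold are hypothetical, for illustration only.
from statistics import mean, stdev

def find_anomalies(values, threshold=3.0):
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # no variation, nothing can stand out
    return [v for v in values if abs(v - mu) / sigma > threshold]

# e.g. login attempts per minute on a server; 97 is the spike
traffic = [12, 14, 11, 13, 15, 12, 97, 13, 14, 12]
print(find_anomalies(traffic, threshold=2.0))  # → [97]
```

Real intrusion-detection systems use far more sophisticated models, but the underlying idea is the same: establish a baseline of normal behaviour and surface deviations early enough to act on them.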
AI in Security Applications

AI's capabilities in data analysis and decision-making enable innovative security solutions:

- Network protection: AI algorithms for intrusion detection, malware detection, security situational awareness, and threat early warning.
- Data management: AI technologies that achieve data protection objectives such as hierarchical classification, leak prevention, and leak traceability.
- Intelligent security: AI that moves the security field from passive defence toward active judgment and timely early warning.
- Financial risk control: AI that improves the efficiency and accuracy of credit assessment and risk management, and assists governments in regulating financial transactions.

AI Security Management

Effective AI security requires:

- Regulations and policies: safety management laws designed for enforcement by regulatory authorities, together with management policies for key AI application domains and prominent security risks.
- Standards and specifications: industry-wide benchmarks, along with international and domestic standards, to support AI safety.
- Technological methods: modern tooling, such as Defender for AI, to detect, mitigate, and remediate AI threats early.
- Security assessments: regular evaluation of AI risks using appropriate tools, platforms, and automated approaches.

Conclusion

AI is transforming how organizations operate, innovate, and secure their environments. As AI capabilities evolve, integrating security and governance considerations from the outset remains critical.
By combining responsible AI principles, effective governance, and appropriate security measures, organizations can work toward deploying AI technologies in a manner that supports both innovation and trust. Industry projections suggest continued growth in AI-related security investments over the coming years, reflecting an increased focus on managing AI risks alongside its benefits. These estimates are subject to change and should be interpreted in the context of evolving technologies and regulatory developments.

Disclaimer

References to Microsoft products and frameworks are for informational purposes only and do not imply endorsement, guarantee, or contractual commitment. Market projections referenced are based on publicly available industry analyses and are subject to change.