Generative AI adoption is accelerating, with AI transformation happening in real time across industries. This rapid adoption is reshaping how organizations operate and innovate, but it also introduces new challenges that require careful attention.
At Ignite last fall, we announced several new capabilities to help organizations secure their AI transformation. These capabilities were designed to address top customer priorities such as preventing data oversharing, safeguarding custom AI, and preparing for emerging AI regulations. Organizations like Cummins, KPMG, and Mia Labs have leveraged these capabilities to confidently strengthen their AI security and governance efforts.
However, despite these advancements, challenges persist. One major concern is the rise of shadow AI—applications used without IT or security oversight. In fact, 78% of AI users report bringing their own AI tools, such as ChatGPT and DeepSeek, into the workplace.[1] Additionally, new threats, like indirect prompt injection attacks, are emerging, with 77% of organizations expressing concerns and 11% of organizations identifying them as a critical risk.[2]
To address these challenges, we are excited to announce new features and capabilities that help customers do the following:
- Prevent risky access and data leakage in shadow AI with granular access controls and inline data security capabilities
- Manage AI security posture across multi-cloud and multi-model environments
- Detect and respond to new AI threats, such as indirect prompt injections and wallet abuse
- Secure and govern data in Microsoft 365 Copilot and beyond
In this blog, we’ll explore these announcements and demonstrate how they help organizations navigate AI adoption with confidence, mitigate risks, and unlock AI’s full potential on their transformation journey.
Prevent risky access and data leakage in shadow AI
With the rapid rise of generative AI, organizations are increasingly encountering unauthorized employee use of AI applications without IT or security team approval. This unsanctioned and unprotected usage has given rise to “shadow AI,” significantly heightening the risk of sensitive data exposure. Today, we are introducing a set of access and data security controls designed to support a defense-in-depth strategy, helping you mitigate risks and prevent data leakage in third-party AI applications.
Real-time access controls to shadow AI
The first line of defense against security risks in AI applications is controlling access. While security teams can use endpoint controls to block access for all users across the organization, this approach is often too restrictive and impractical. Instead, they need more granular controls at the user level to manage access to SaaS-based AI applications.
Today we are announcing the general availability of the AI web category filter in Microsoft Entra Internet Access to help enforce access controls that govern which users and groups have access to different AI applications. Internet Access’s deep integration with Microsoft Entra ID extends Conditional Access to any AI application, enabling organizations to apply AI access policies with granularity. By using Conditional Access as the policy control engine, organizations can enforce policies based on user roles, locations, device compliance, user risk levels, and other conditions, ensuring secure and adaptive access to AI applications. For example, with Internet Access, organizations can allow their strategy teams to experiment with all or most consumer AI apps while blocking those apps for highly privileged roles, such as accounts payable or IT infrastructure admins. For even greater security, organizations can further restrict access to all AI applications if Microsoft Entra detects elevated identity risk.
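To make the policy mechanics concrete, here is a minimal sketch of creating a Conditional Access policy through the Microsoft Graph API that blocks a privileged group’s traffic forwarded by Internet Access. The group ID and traffic-profile application ID below are placeholders, and the AI web category itself is selected in your Global Secure Access web content filtering policy rather than in this payload; treat this as an illustration of the API shape, not a drop-in configuration.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-Policy.ReadWrite.ConditionalAccess>"  # acquired via MSAL, not shown

# Hypothetical IDs -- replace with values from your tenant.
PRIVILEGED_GROUP_ID = "00000000-0000-0000-0000-000000000001"
GSA_INTERNET_TRAFFIC_APP = "<global-secure-access-traffic-profile-app-id>"

policy = {
    "displayName": "Block AI web category for privileged roles",
    "state": "enabledForReportingButNotEnforced",  # start in report-only mode
    "conditions": {
        "users": {"includeGroups": [PRIVILEGED_GROUP_ID]},
        # Traffic forwarded by Microsoft Entra Internet Access; the AI web
        # category itself is chosen in the web content filtering policy.
        "applications": {"includeApplications": [GSA_INTERNET_TRAFFIC_APP]},
        "signInRiskLevels": [],  # e.g. ["medium", "high"] to tighten on identity risk
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```

Starting in report-only mode lets admins observe which sign-ins the policy would have blocked before enforcing it.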
Inline discovery and protection of sensitive data
Once users gain access to sanctioned AI applications, security teams still need to ensure that sensitive data isn’t shared with those applications. Microsoft Purview provides data security capabilities to prevent users from sending sensitive data to AI applications.
Today, we are announcing enhanced Purview data security capabilities for the browser, available in preview in the coming weeks. The new inline discovery & protection controls within Microsoft Edge for Business detect and block sensitive data from being sent to AI apps in real time, even when typed directly into the browser. This prevents sensitive data leaks as users interact with consumer AI applications, starting with ChatGPT, Google Gemini, and DeepSeek. For example, if an employee attempts to type sensitive details about an upcoming merger or acquisition into Google Gemini to generate a written summary, the new inline protection controls in Microsoft Purview will block the prompt from being submitted, preventing the leak of confidential data to an unsanctioned AI app.
This augments existing DLP controls for Edge for Business, including protections that prevent file uploads and the pasting of sensitive content into AI applications. Since inline protection is built natively into Edge for Business, newly deployed policies automatically take effect in the browser even if endpoint DLP is not deployed to the device.
Figure 1: Inline DLP in Edge for Business prevents sensitive data from being submitted to consumer AI applications like Google Gemini by blocking the action.
The new inline protection controls are integrated with Adaptive Protection to dynamically enforce different levels of DLP policies based on the risk level of the user interacting with the AI application. For example, admins can block low-risk users from submitting prompts containing the highest-sensitivity classifiers for their organization, such as M&A-related data or intellectual property, while blocking prompts containing any sensitive information type (SIT) for elevated-risk users. Learn more about inline discovery & protection in the Edge for Business browser in this blog.
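Conceptually, the risk-tiered decision described above can be modeled as follows. This is a simplified illustration, not the actual Purview implementation: the patterns and keyword classifiers below are hypothetical stand-ins for Purview’s sensitive information types and trainable classifiers, and user risk levels would come from Adaptive Protection rather than a function argument.

```python
import re

# Illustrative sensitive information type (SIT) patterns -- real Purview SITs
# are far more sophisticated (keyword dictionaries, checksums, confidence levels).
SIT_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
HIGH_SENSITIVITY_KEYWORDS = {"merger", "acquisition", "term sheet"}  # hypothetical classifiers

def should_block(prompt: str, user_risk: str) -> bool:
    """Risk-tiered decision: low-risk users are blocked only on the
    highest-sensitivity content; elevated-risk users on any SIT match."""
    text = prompt.lower()
    has_any_sit = any(p.search(prompt) for p in SIT_PATTERNS.values())
    has_high_sensitivity = any(k in text for k in HIGH_SENSITIVITY_KEYWORDS)
    if user_risk == "elevated":
        return has_any_sit or has_high_sensitivity
    return has_high_sensitivity

print(should_block("Summarize the merger term sheet", user_risk="low"))      # True
print(should_block("My card is 4111 1111 1111 1111", user_risk="low"))       # False
print(should_block("My card is 4111 1111 1111 1111", user_risk="elevated"))  # True
```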
In addition to the new capabilities within Edge for Business, today we are also introducing Purview data security capabilities for the network layer, available in preview starting in early May. Enabled initially through integrations with Netskope and iboss, these capabilities let organizations extend inline discovery of sensitive data to interactions between managed devices and untrusted AI sites. By integrating Purview DLP with their SASE solution (e.g., Netskope or iboss), data security admins can gain visibility into the use of sensitive data on the network as users interact with AI applications. These interactions can originate from desktop applications such as the ChatGPT desktop app or Microsoft Word with a ChatGPT plugin installed, or from non-Microsoft browsers such as Opera and Brave accessing AI sites. Using Purview Data Security Posture Management (DSPM) for AI, admins will also have visibility into how these interactions contribute to organizational risk and can take action through DSPM for AI policy recommendations. For example, if there is a high volume of prompts containing sensitive data sent to ChatGPT, DSPM for AI will detect this and recommend a new DLP policy to help mitigate the risk. Learn more about inline discovery for the network, including Purview integrations with Netskope and iboss, in this blog.
Manage AI security posture across multi-cloud and multi-model environments
In today’s rapidly evolving AI landscape, developers frequently leverage multiple cloud providers to optimize cost, performance, and availability. Different AI models excel at different tasks, leading developers to deploy models from multiple providers for different use cases. Consequently, managing security posture across multi-cloud and multi-model environments has become essential.
Today, Microsoft Defender for Cloud supports deployed AI workloads across Azure OpenAI Service, Azure Machine Learning, and Amazon Bedrock. To further enhance our security coverage, we are expanding AI Security Posture Management (AI-SPM) in Defender for Cloud to improve compatibility with additional cloud service providers and models. This includes:
- Support for Google Vertex AI models
- Enhanced support for Azure AI Foundry model catalog and custom models
With this expansion, AI-SPM in Defender for Cloud will now offer discovery of the AI inventory and its vulnerabilities, attack path analysis, and recommended actions to address risks in Google Vertex AI workloads. Additionally, it will support all models in the Azure AI Foundry model catalog, including Meta Llama, Mistral, and DeepSeek, as well as custom models. This expansion ensures a consistent and unified approach to managing AI security risks across multi-model and multi-cloud environments. Support for Google Vertex AI models will be available in public preview starting May 1, while support for the Azure AI Foundry model catalog and custom models is generally available today. Learn more.
Figure 2: Microsoft Defender for Cloud detects an attack path to a DeepSeek R1 workload.
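Because Defender for Cloud posture findings surface as standard security assessments, teams can already query them at scale with Azure Resource Graph. The sketch below assumes the azure-identity and azure-mgmt-resourcegraph packages and uses an illustrative filter on Azure OpenAI (Microsoft.CognitiveServices) resources; adjust the filter to match the AI services deployed in your environment.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

# Kusto query over Defender for Cloud assessments; the resource-type filter
# below is illustrative, not an exhaustive list of AI workload types.
QUERY = """
securityresources
| where type == 'microsoft.security/assessments'
| extend resourceId = tostring(properties.resourceDetails.Id)
| where resourceId has 'Microsoft.CognitiveServices'
| project assessment = properties.displayName,
          status = properties.status.code,
          resourceId
"""

credential = DefaultAzureCredential()
client = ResourceGraphClient(credential)

result = client.resources(
    QueryRequest(subscriptions=["<subscription-id>"], query=QUERY)
)
for row in result.data:
    print(row["status"], "-", row["assessment"])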
In addition, Defender for Cloud will offer a new data and AI security dashboard. Security teams will have access to an intuitive overview of their datastores and AI services across their multi-cloud environment, along with top recommendations and critical attack paths, so they can prioritize and accelerate remediation. The dashboard will be generally available on May 1.
Figure 3: The new data & AI security dashboard in Microsoft Defender for Cloud provides a comprehensive overview of your data and AI security posture.
These new capabilities reflect Microsoft’s commitment to helping organizations address the most critical security challenges in managing AI security posture in their heterogeneous environments.
Detect and respond to new AI threats
Organizations are integrating generative AI into their workflows and facing new security risks unique to AI. Detecting and responding to these evolving threats is critical to maintaining a secure AI environment. The Open Web Application Security Project (OWASP) provides a trusted framework for identifying and mitigating vulnerabilities such as prompt injection and sensitive information disclosure.
Today, we are announcing Threat protection for AI services, a new capability that enhances threat protection in Defender for Cloud, enabling organizations to secure custom AI applications by detecting and responding to emerging AI threats more effectively. Building on the OWASP Top 10 risks for LLM applications, this capability addresses critical vulnerabilities from that list, such as prompt injection and sensitive information disclosure.
Threat protection for AI services helps organizations identify and mitigate threats to their custom AI applications using anomaly detection and AI-powered insights. With this announcement, Defender for Cloud will now extend its threat protection for AI workloads, providing a rich suite of new and enriched detections for Azure OpenAI Service and models in the Azure AI Foundry model catalog. New detections include direct and indirect prompt injections, novel attack techniques like ASCII smuggling, malicious URLs in user prompts and AI responses, wallet abuse, suspicious access to AI resources, and more. Security teams can leverage evidence-based security alerts to enhance investigation and response actions through integration with Microsoft Defender XDR.
For example, in Microsoft Defender XDR, a SOC analyst can detect and respond to a wallet abuse attack, where an attacker exploits an AI system to overload resources and increase costs. The analyst gains detailed visibility into the attack, including the affected application, user-entered prompts, IP address, and other suspicious activities performed by the bad actor. With this information, the SOC analyst can take action and block the attacker from accessing the AI application, preventing further risks. This capability will be generally available on May 1. Learn more.
Figure 4: Security teams can investigate new detections of AI threats in Defender XDR.
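Since these detections surface as alerts in Microsoft Defender XDR, they can also be retrieved programmatically through the Microsoft Graph security API. Below is a minimal sketch that lists recent alerts and triages them by keyword; the keyword list is illustrative, as the exact alert titles for the new AI detections may vary by tenant.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-SecurityAlert.Read.All>"  # acquired via MSAL, not shown

# Pull a recent page of Defender XDR alerts.
resp = requests.get(
    f"{GRAPH}/security/alerts_v2",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"$top": "50"},
    timeout=30,
)
resp.raise_for_status()

# Keyword triage is illustrative -- titles for the new AI detections
# may differ in your tenant.
AI_KEYWORDS = ("prompt injection", "wallet abuse", "ascii smuggling")
for alert in resp.json().get("value", []):
    title = alert.get("title", "")
    if any(k in title.lower() for k in AI_KEYWORDS):
        print(alert["severity"], "|", title, "|", alert.get("createdDateTime"))
```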
Secure and govern data in Microsoft 365 Copilot and beyond
Data oversharing and non-compliant AI use are significant concerns when securing and governing data in Microsoft Copilot experiences. Today, we are announcing new data security and compliance capabilities.
New data oversharing insights for unclassified data available in Microsoft Purview DSPM for AI: Today, we are announcing the public preview of on-demand classification for SharePoint and OneDrive. This new capability gives data security admins visibility into unclassified data stored in SharePoint and OneDrive and enables them to classify that data on demand. This helps ensure that the files Microsoft 365 Copilot indexes and references in its responses have been properly classified. Previously, unclassified and unscanned files did not appear in DSPM for AI oversharing assessments. Now admins can initiate an on-demand data classification scan directly from the oversharing assessment, ensuring that older or previously unscanned files are identified, classified, and incorporated into the reports. This allows organizations to detect and address potential risks more comprehensively.

For example, an admin can initiate a scan of legacy customer contracts stored in a specified SharePoint library to detect and classify sensitive information such as account numbers or contact information. If these newly classified documents match the classifiers included in any existing auto-labeling policies, they will be automatically labeled. This helps ensure that documents containing sensitive information remain protected when they are referenced in Microsoft 365 Copilot interactions. Learn more.
Figure 5: Security teams can trigger on-demand classification scan results in the oversharing assessment in Purview DSPM for AI.
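To illustrate what an on-demand classification pass does conceptually, the sketch below walks a folder of documents and records which classifiers match each file. The patterns are hypothetical stand-ins; the real service evaluates Purview’s sensitive information types and trainable classifiers against SharePoint and OneDrive content, not local regexes.

```python
import re
from pathlib import Path

# Illustrative patterns standing in for Purview classifiers -- the real
# service uses sensitive information types and trainable classifiers.
CLASSIFIERS = {
    "account_number": re.compile(r"\bACCT-\d{8}\b"),  # hypothetical account format
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_library(library_path: str) -> dict[str, list[str]]:
    """Scan previously unscanned documents and report which classifiers
    match each file, mimicking an on-demand classification pass."""
    findings: dict[str, list[str]] = {}
    for doc in Path(library_path).rglob("*.txt"):
        text = doc.read_text(errors="ignore")
        matched = [name for name, pattern in CLASSIFIERS.items() if pattern.search(text)]
        if matched:
            findings[str(doc)] = matched  # candidates for auto-labeling policies
    return findings

for path, labels in classify_library("./legacy_contracts").items():
    print(path, "->", labels)
```

Files whose matches line up with an existing auto-labeling policy would then be labeled automatically, which is the behavior the paragraph above describes.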
Secure and govern data in Security Copilot and Copilot in Fabric: We are excited to announce the public preview of Purview for Security Copilot and Copilot in Fabric, starting with Copilot in Power BI, offering DSPM for AI, Insider Risk Management, and data compliance controls, including eDiscovery, Audit, Data Lifecycle Management, and Communication Compliance. These capabilities will help organizations enhance data security posture, manage compliance, and mitigate risks more effectively. For example, admins can now use DSPM for AI to discover sensitive data in user prompts and responses and detect unethical or risky AI usage.
Figure 6: Purview’s DSPM for AI provides admins with comprehensive reports on user activities and data interactions in Copilot for Power BI, as part of the Copilot in Fabric experience, and Security Copilot.
DSPM Discoverability for Communication Compliance: This new feature in Communication Compliance, which will be available in public preview starting May 1, enables organizations to quickly create policies that detect inappropriate messages that could lead to data compliance risks. The new recommendation card on the DSPM for AI page offers one-click policy creation in Microsoft Purview Communication Compliance, simplifying the detection and mitigation of potential threats, such as regulatory violations or improperly shared sensitive information.
With these enhanced capabilities for securing and governing data in Microsoft 365 Copilot and beyond, organizations can confidently embrace AI innovation while maintaining strict security and compliance standards.
Explore additional resources
As organizations embrace AI, securing and governing its use is more important than ever. Staying informed and equipped with the right tools is key to navigating its challenges. Explore these resources to see how Microsoft Security can help you confidently adopt AI in your organization.
- Learn more about Security for AI solutions on our webpage
- Get started with Microsoft Purview
- Get started with Microsoft Defender for Cloud
- Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial
Learn more about the innovations designed to help your organization protect data, defend against cyber threats, and stay compliant. Join Microsoft leaders online at Microsoft Secure on April 9.
[1] 2024 Work Trend Index Annual Report, Microsoft and LinkedIn, May 2024, N=31,000.
[2] Gartner®, Gartner Peer Community Poll – If your org’s using any virtual assistants with AI capabilities, are you concerned about indirect prompt injection attacks? GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.