Blog Post

Security, Compliance, and Identity Blog
14 MIN READ

Accelerate AI adoption with next-gen security and governance capabilities

LouAdesida's avatar
LouAdesida
Icon for Microsoft rankMicrosoft
Nov 19, 2024

Today, we are excited to announce new capabilities that help you further accelerate AI transformation with strong security and governance.

As generative AI (GenAI) adoption continues to accelerate, it is transforming businesses and reshaping how they operate. Based on Microsoft-conducted research, 95% of organizationsi are planning to or already using and developing AI applications.  

However, this rapid adoption comes with its own set of challenges. According to Gartner®, “decentralized business units are eagerly experimenting with AI and GenAI use cases without the guidance of an enterprise framework to manage AI trust, risk and securityii.” Security and risk leaders are concerned about emerging risks such as data oversharing and leakage, supply chain vulnerabilities, prompt injection attacks, and compliance with new regulations. 

A successful AI transformation starts with a security-first approach. 80% of security and risk leaders currently have or plan to have a dedicated team to address security for AI, and 78% believe their IT security budget will increase with the adoption of GenAI. 

Acknowledging the priority organizations place on secure AI adoption, Microsoft is committed to providing the necessary support and solutions. As we announced earlier in the Spring, our current product portfolio, including Microsoft Defender, Entra, Intune, and Purview, provides comprehensive security and governance solutions to protect your data and AI, whether you're using Microsoft 365 Copilot, third-party AI apps, or building and customizing your own AI apps. 

Organizations are actively using Microsoft solutions to secure AI. Customers have used Defender for Cloud Apps to discover and secure more than 750,000 GenAI app instances to address AI risks and used Microsoft Purview to audit over 1 billion Copilot interactions to meet their compliance obligations. One notable example is Cummins, which used Microsoft Purview to automate data classification, manage document lifecycles, and secure collaboration efforts, enhancing their audit efficiency and compliance. Additionally, Mia Labs used Microsoft Defender for Cloud to protect its AI agent, Mia, from GenAI risks and threats, which resulted in them successfully resisting multiple jailbreak attempts.   

Today, we are excited to announce new capabilities that help you further accelerate AI transformation with strong security and governance: 

  • Prevent data oversharing and detect insider risks in Microsoft 365 Copilot
    • Data Security Posture Management (DSPM) for AI, formerly known as Microsoft Purview AI Hub, is now generally available, helping IT and security teams proactively discover and manage data risks, including new oversharing assessments in public preview. 
    • Microsoft Purview DLP for Microsoft 365 Copilot, now in public preview, enables admins to configure DLP policies to restrict Microsoft 365 Copilot from processing files based on their sensitivity label. 
    • Microsoft Purview Insider Risk Management and Communication Compliance have new capabilities in public preview to help detect and investigate risky AI usage, such as deliberately or inadvertently accessing sensitive data including protected content, and attempting prompt injection attacks. 
  • Secure and govern custom AI with new capabilities in Purview, Defender, and Azure AI
    • Microsoft Purview DSPM for AI, Insider Risk Management, and data compliance controls for agents built in Copilot Studio are in public preview.
    • AI-Security Posture Management (AI-SPM) in Defender for Cloud is now generally available, helping admins discover and reduce risk to their AI asset inventory in Azure Open AI, Azure ML, and AWS Bedrock, including sensitive data used in grounding LLMs.  
    • Azure AI Foundry introduces a new management center in public preview to facilitate easier collaboration in resource management, security and compliance workflows between developers and security teams. 
  • Prepare for AI regulations, such as the EU AI Act with new capabilities in Purview and Azure AI 
    • Four new AI compliance assessments, including the EU AI Act, NIST AI RMF, ISO 42001, and ISO 23894, in Microsoft Purview Compliance Manager are now generally available to help you assess, implement, and strengthen your controls. 
    • AI reports in Azure AI in private preview enables developers to record models used in AI and the security and safety configurations to help meet compliance and audit requirements. 

In this blog, we will dive into each of these announcements and how they address the main challenges organizations face when using and developing AI.  

Prevent data oversharing and detect insider risks in Microsoft 365 Copilot with new Microsoft Purview capabilities 

As organizations adopt AI, the risk of data oversharing, where sensitive information is shared beyond its intended audience, and data leakage has become the number one concern for security leadersiii. To help alleviate these concerns, today we are introducing new controls to help you address oversharing risks in Microsoft 365 Copilot.  

The first step to addressing data oversharing in AI is having clear visibility into where and how it occurs. With that, we are excited to announce the general availability of Data Security Posture Management (DSPM) for AI, formerly known as Microsoft Purview AI Hub. DSPM for AI enables IT and security teams to proactively discover data risks, such as sensitive data in user prompts, and receive recommended actions and insights for quick responses during incidents, proactively fostering a more secure and compliant AI environment.  

Figure 1: In this figure, the Data Security Posture Management for AI dashboard shows capabilities for admins to discover data risks in Microsoft 365 Copilot and other AI applications, view recommendations to prevent oversharing, and track Microsoft 365 Copilot interactions over time

Along with the GA announcement, we are also introducing new Data oversharing assessments in public preview. This new capability helps organizations discover data at risk of oversharing by scanning files containing sensitive data and identifying data repositories such as SharePoint sites with overly permissive user access.  

Organizations can run assessments both pre- and post-deployment of Copilot. Pre-deployment, they can use the assessment to identify unlabeled files being accessed by specific users, highlighting potential risks before Copilot is rolled out broadly. Post-deployment, the assessment helps pinpoint sensitive data referenced in Copilot responses, allowing organizations to prioritize permission clean up with recommendations on how to fix permissions and to enhance data protection efforts with labeling policies for data at highest risk of oversharing. 

Figure 2: Data oversharing assessment, where you can discover data at risk of oversharing and receive recommendations for fixing permissions and protecting sensitive data

Each assessment report provides recommendations on how to fix permissions using controls like Restricted Content Discoverability (RCD) to prevent specific content from appearing in Copilot queries, Entra ID Governance user access review to conduct periodic reviews and ensure users have appropriate access, and SharePoint access review to allow owners to review and limit access to only necessary users. Additionally, it offers recommendations on how to protect sensitive data by configuring auto-labeling policies or default labels for files within the over-permissioned site. Learn More. 

Today, Microsoft 365 Copilot honors permission based on sensitivity labels assigned by Microsoft Purview Information Protection when referencing sensitive documents. To help further reduce the risk of AI-related oversharing at scale, we are thrilled to introduce Microsoft Purview Data Loss Prevention for Microsoft 365 Copilot. With this new capability, data security admins can now create DLP policies to exclude documents with specified sensitivity labels from being summarized or used in responses in Microsoft 365 Copilot Business Chat. This capability, which currently works with Office files and PDFs in SharePoint, helps ensure that potentially sensitive content within a labeled document is not readily available to users to copy and paste into other applications. Additionally, this capability ensures that content within a labeled document is not processed by Microsoft 365 Copilot for grounding data. In other words, it prevents Copilot from using the labeled content to generate or inform its responses, thereby protecting sensitive data from being inadvertently overshared. An example of such content includes confidential legal documents with highly specific semantics that could lead to improper guidance if summarized by AI or modified by end users. This can also apply to "Internal only” documents with data that shouldn’t be copy & pasted into emails sent outside of the organization. 

These restrictions can be configured at a file, group, site, and/or user level. For example, if you have users who are privy to a Merger and Acquisition (M&A) and scoped into an M&A group, you can configure your DLP for Microsoft 365 Copilot policy to prevent Copilot from summarizing M&A-labeled documents for everyone except those in the M&A group. Learn More. 

Figure 3: DLP admins can now restrict Microsoft 365 Copilot from processing files with a specified sensitivity label as a DLP policy action.

Data doesn’t move itself. People move and access data. Insider risks are the leading cause of data breaches, as 63% of these incidents stem from inadvertent, negligent or malicious insiders1. In addition to data and permission controls that help address data oversharing or leakage, security teams also need ways to detect users' risky activities in AI applications that could potentially lead to data security incidents. We are excited to introduce risky AI usage indicators, policy template, and analytics report in Microsoft Purview Insider Risk Management. These new capabilities can help security teams with appropriate permissions detect risky activities, such as a departing employee entering an unusual number of Copilot prompts containing sensitive data, which deviates from their past activity patterns. Security teams can then effectively detect and respond to these potential incidents to minimize the negative impact. For example, security teams can detect a malicious user attempting to access sensitive data from an over-permissioned SharePoint site, efficiently investigate the incident with correlated insights, and configure Adaptive Protection to assess insider risk levels based on these detections.  High-risk users can then be automatically restricted from accessing that same SharePoint site, based on a Conditional Access policy.

Figure 4: An IRM alert from a Risky AI usage policy, showing risky activity and sequenced activities in GenAI appsFigure 5: IRM analytics provide an overview of top risks related to risky AI usage within an organization such as entering risky prompts or receiving sensitive responses in Microsoft 365 Copilot, along with policy recommendations to protect sensitive data from those risks

These insights into risky AI usage will also be integrated into Microsoft Defender XDR, allowing the SOC team to investigate new AI-related risks holistically. Alerts can be correlated across Microsoft Entra, Insider Risk Management, and Data Loss Prevention, providing a comprehensive view of potential threats. For example, the SOC can now investigate incidents involving credential access in Entra, attempts at using Microsoft 365 Copilot to find sensitive data and associated data exfiltration in Purview, and remediation actions by forcing a password reset with Adaptive Protection and Conditional Access across Purview and Entra. This integration ensures a more robust and holistic approach to investigate and respond to AI-related risks. Learn More. 

Figure 6: Defender XDR identifies a potentially compromised user performing a multi-stage attack leveraging GenAI apps through IRM alerts

GenAI introduces new security and safety risks that require new controls to address. For instance, malicious users can perform prompt injection attacks to elicit unauthorized behaviors from GenAI, and users can create content that may violate copyrights. Gartner suggests that: “Build custom GenAI applications securely with LLM built-in security and safety guardrails, and with third-party GenAI TRiSM products to mitigate adversarial prompting and output risksiv.” and compliant use of Microsoft 365 Copilot is critical. To support this, we are introducing two new GenAI risk detections in Microsoft Purview Communication Compliance - prompt injection attacks and protected materials. These new risk detections will flow into Insider Risk Management as indicators and will be correlated with other user activities that may lead to potential security incidents. 

 

Prompt Injection detection spots both direct and indirect prompt injection attacks.  For example, Communication Compliance can detect and flag for admins when a user is deliberately trying to exploit system vulnerabilities to elicit unauthorized behavior from Microsoft 365 Copilot. Admins with appropriate permissions can then conduct a more holistic investigation in Insider Risk Management.  

Figure 7: An IRM alert from a Risky AI usage policy showcases a potential prompt injection attack identified by Communication ComplianceFigure 8: Communication Compliance detects a potential prompt injection attack

Protected materials detection identifies sensitive content in Copilot responses, such as news articles, lyrics, code content from known GitHub repositories, and software libraries within Copilot responses. Admins can similarly investigate potential incidents which could put the organization out of compliance with intellectual property laws. Learn More. 

Secure and govern your low-code/no-code and pro-code custom AI with new capabilities in Purview, Defender for Cloud, and Azure AI 

66% of organizations are either planning or already developing custom AI applications to drive business innovation, gain more control over their data, and seamlessly integrate with existing systems. Low-code/no-code AI platforms, such as Copilot Studio, further encourage AI app customization1. Among companies developing or having developed AI apps, the average number of applications being worked on is more than 131, underscoring the complexity of managing risks and vulnerabilities across multiple AI apps simultaneously. To help secure and govern AI innovation, we’re introducing new capabilities in Microsoft Purview, Microsoft Defender for Cloud, and Azure to provide data controls and manage vulnerabilities. 

When building low-code/no-code AI, citizen developers often lack security expertise and tools to enable security and compliance controls. To help with this, we are introducing new Microsoft Purview data controls for agents built in Copilot Studio, providing data security and compliance that enable low-code/no-code developers to build more secure agents. These agents could vary in levels of complexity and be tailored to various business processes, such as a customer support or sales automation agent. By incorporating SharePoint sites as a knowledge base, you can build agents that provide Microsoft Purview data security and compliance controls on the data within those sites. For instance, when a user interacts with the agent, it accesses sensitive data by recognizing and honoring sensitivity labels. Microsoft Purview also protects the sensitive data generated by the agent through label inheritance and enforces label permissions, ensuring that only authorized users have access. 

For data security admins, Microsoft Purview offers the ability to discover the sensitivity of data in user prompts and agent responses within DSPM for AI. Additionally, it can detect user activity anomalies and risky AI usage, and govern user prompts and agent responses with audit, eDiscovery, retention policies, and non-compliant usage detection.

Figure 9a: DSPM for AI highlights activities like total interactions over time in Copilot StudioFigure 9b: DSPM for AI highlights insights like sensitive interactions and user information like insider risk severity for Copilot Studio

Next, as organizations increasingly develop and deploy AI applications with complex configurations across models, orchestrators, SDKs, and connected datastores, visibility into their inventory and associated risks is more important than ever.  To help customers better understand their deployed AI applications and anticipate potential threats, we’re announcing the general availability of AI security posture management (AI-SPM) in Microsoft Defender for Cloud, introducing new capabilities within AI-SPM like expanded support for Amazon Bedrock and AI grounding data insights. AI-SPM enables organizations to discover and inventory their entire AI stack across multicloud environments, including Azure and AWS, continuously assess vulnerabilities, and provide remediation workflows to defend against potential attack paths. New enhancements in GA include:  

  • Expanded support of Amazon Bedrock: Deeper discovery of AWS AI technologies, new recommendations, and attack paths. Additional support for AWS such as Amazon OpenSearch (service domains and service collections), Amazon Bedrock Agents, and Amazon Bedrock Knowledge Bases. 
  • New AI grounding data insights: Grounding is the hidden link between organizational data and AI applications. Enriched data insights provide resource context to its use as a grounding source within an AI application. Ensuring the right data is correctly configured for grounding can validate outputs for accuracy – preventing hallucinations and sensitive data exposure. Defender for Cloud identifies the datasets used for grounding and uses the findings to help customers prioritize recommendations and attack paths to reduce the risk of grounding data poisoning and sensitive data exposure.    

With this announcement, organizations can start enabling posture management to cover all your cloud security risks, including new AI posture features. 

Figure 10: Query your multicloud environment to identify data sources used for AI grounding and gain insights into each specific AI resource

Lastly, securing and governing AI applications requires teamwork between developers, security, and risk teams. In the era of AI, the security shift-left motion urges developers to gradually take on more responsibilities to enable and validate the security, compliance, and safety of AI applications that they develop. Last month, Azure AI released built-in policies that enable AI administrators to specify which models from the Azure AI model catalog are approved for deployment within their organization. This simplifies model selection for developers and enables security teams to manage risks around model vulnerabilities in a more proactive way. Today, we are excited to announce the public preview of management center in Azure AI Foundry, which provides cross-functional teams with a simplified and centralized management and governance experience. Now, AI development and compliance teams can easily create, manage, and audit their organization’s hub, projects, and resources from within Azure AI Foundry, saving developers significant time, facilitating easier resource management, and reducing the risk of regulatory non-compliance.   

Start preparing for AI regulations, such as the EU AI Act with new capabilities in Purview and Azure AI 

With new AI regulations like the EU AI Act and NIST AI RMF emerging, 55% of leaders lack understanding of how AI is and will be regulated and are seeking guidance on how to adhere to these requirements, reduce risks, and avoid penalties when deploying AI solutions. To support this need, we are announcing the general availability of Microsoft Purview Compliance Manager assessment templates for the EU AI Act, NIST AI RMF, ISO 23894 and ISO 42001 to help assess and strengthen compliance controls. For compliance admins preparing for the EU AI Act, the out-of-the-box assessment template helps identify the necessary technology, people, and process control requirements. It provides step-by-step guidance on recommended actions to implement, test and verify the controls. Each control includes specific instructions that allow admins to assess and record the control status, providing a structured approach to compliance. Additionally, it offers visibility into Microsoft managed controls, highlighting those that Microsoft is responsible for and has already satisfied. Admins can view their compliance score, which measures their current compliance posture and progress against the regulation. While these templates provide valuable support, using them does not guarantee compliance; organizations should consult legal professionals for specific guidance. Learn more. 

Figure 11: Assess compliance and learn how to mitigate risk with the EU AI Act Compliance Score overview in Microsoft Purview Compliance Manager

As organizations develop and build AI applications and systems, it’s critical to maintain auditability to respond to AI regulations like the EU AI Act. AI reports in Azure AI will be available soon in private preview, helping developers assemble key AI project details, such as model cards, model versions, content safety filter configurations and evaluation metrics into a unified report from within their coding environment. This report can be exported in PDF or SPDX formats, helping development teams demonstrate production readiness within governance, risk, and compliance (GRC) workflows and facilitate easier, ongoing audits of applications in production.  

Other capabilities that help you secure and govern AI 

Microsoft Purview's integration with ChatGPT Enterprise Compliance API (public preview) 

With this integration in public preview, security teams can discover usage and actionable insights to proactively strengthen data posture with Purview DSPM for AI, detect risky AI, and implement essential compliance controls, such as retention or deletion policies, eDiscovery, audit, and communication compliance, for user prompts and AI-generated outputs in ChatGPT Enterprise. For example, organizations can identify potential data exposure risks by detecting instances of sensitive information that's shared with ChatGPT Enterprise. If any data leakage or data exfiltration occurs, customers can use eDiscovery to collect prompts and responses for legal investigations  

Retention and deletion policies for Microsoft 365 Copilot (public preview) 

With the changing legal and compliance landscape organizations need flexibility in managing prompt and response data. For example, they may want to keep an executive’s Microsoft 365 Copilot activity for several years but delete the activity of a non-executive user after one year. Last year, we announced retention and deletion policies for Copilot and Teams chat locations as a single policy configuration. Today, we are separating Copilot interactions from Teams chats to give customers more flexibility in defining retention and deletion policies for each location.  

Explore additional resources for securing and governing AI 

With the new capabilities we've announced to support your secure AI transformation, Microsoft stands as the only security provider offering comprehensive AI security solutions, including data security and compliance, security posture management, threat protection, safety system and governance for AI workloads. Below are additional resources to help deepen your understanding and get started with these new features: 

  • Read our new whitepaper – Accelerating AI transformation by prioritizing security, a path to implementing effective security for AI that enables innovation 

 

 

1Protect your data and recover from insider data sabotage, Microsoft Tech Community Blog May 2024

iSecurity for AI Thought Leadership Whitepaper, November 2024 

iiGartner®, Use TRiSM to Manage AI Governance, Trust, Risk and Security, Avivah Litan, September 25 2024. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. 

iiiAccelerating AI transformation by prioritizing security, Microsoft Oct 2024. 

ivGartner, Generative AI Adoption: Top Security Threats, Risks and Mitigations, Dennis Xu, 17 January 2024. 
Updated Nov 19, 2024
Version 2.0
No CommentsBe the first to comment