Secure your data to confidently take advantage of Generative AI with Microsoft Purview
Published May 06 2024 09:00 AM 9,472 Views
Microsoft

Security teams often find themselves in the dark when it comes to data security risks associated with AI usage. An alarming 80% of leaders cite the leakage of sensitive data as their primary concern [1], and more than 30% of decision makers say they don’t know where or what their sensitive, business-critical data is [2]. With generative AI producing ever more data, visibility into how sensitive data flows through AI and how users interact with generative AI applications is essential. Without that visibility, organizations struggle to safeguard their assets effectively. Organizations want to get ahead of and minimize the inherent risks of sharing data with generative AI applications, such as data oversharing, data leakage and non-compliant use of GenAI apps. Instead of restricting AI use to avoid these outcomes, security teams can manage risks more effectively by proactively gaining visibility into AI usage within the organization and implementing corresponding protection and governance controls. 

 

On top of that, evolving regulatory environments and rapid technological advancements create complex challenges for customers. Adhering to new regulations, particularly those pertaining to cutting-edge technologies like GenAI, is essential for devising an effective security and compliance strategy. In this era where AI regulations and standards, such as the EU AI Act and NIST AI RMF, are taking shape, it is imperative for organizations to develop and use AI applications in a manner that is safe, transparent and responsible. According to Gartner®, “by 2027 at least one global company will see its AI deployment banned by a regulator for noncompliance with data protection or AI governance legislation.” [3] AI will be a catalyst for regulatory changes, and having secure and compliant AI will become fundamental. 

 

At Microsoft Ignite 2023 and Microsoft Secure 2024, we introduced new capabilities to help organizations discover, protect and govern data in an AI-first world with Microsoft Purview.  

 

Today, we are excited to announce new innovations from Microsoft Purview to help you secure and govern AI: 

  • Microsoft Purview AI Hub, which helps organizations discover how AI applications such as Copilot for M365 and third-party AI apps are being used in their organization and provides ready-to-use policies to protect data, is now available in public preview. 
  • New insights into unlabeled files and SharePoint sites referenced by Microsoft Copilot for Microsoft 365 will also be included in the public preview release of the AI Hub. This helps organizations prioritize the most critical data risks and put in place protection policies to prevent potential oversharing of sensitive data. 
  • New insights into non-compliant and unethical use of AI interactions will also be included in the public preview release of the AI Hub. This helps organizations quickly gain insight into unethical use such as regulatory collusion, money laundering, targeted harassment and more. 
  • New Compliance Manager assessment templates for the EU AI Act, NIST AI RMF, ISO/IEC 23894:2023 and ISO/IEC 42001 to help assess, implement and strengthen compliance controls to meet AI regulatory requirements and standards. These insights will also be surfaced in the AI Hub and available in public preview. 

 

Discover how AI applications are being used 

To help customers gain a better understanding of which AI applications are being used and how, we are announcing the public preview of Microsoft Purview AI Hub. It includes insights such as the sensitive data shared with AI apps (whether Copilot for M365 or third-party AI apps), the total number of users interacting with AI apps, their associated risk levels pulled from Microsoft Purview Insider Risk Management, and more.  

 

As organizations adopt Copilot for Microsoft 365, data security controls become paramount to avoid potential overexposure of sensitive data or SharePoint sites. And we know that it is challenging to manage and label vast amounts of information, often leaving sensitive data vulnerable to data oversharing. Microsoft Purview AI Hub addresses this challenge by surfacing unlabeled files and SharePoint sites referenced by Copilot, helping you prioritize your most critical data risks and prevent potential oversharing of sensitive data. 

 

Figure 1: Gain insights into unlabeled files and SharePoint sites referenced in Copilot responses in Microsoft Purview AI Hub

 

Additionally, we are excited to announce the public preview of non-compliant usage insights in the AI Hub to discover unethical use in AI interactions, such as regulatory collusion, money laundering, targeted harassment and more. 

 

Figure 2: Top unethical use in AI in Microsoft Purview AI Hub

 

Protect sensitive data throughout its AI journey 

Organizations have expressed concerns about implementing an AI strategy due to the lack of controls to detect and mitigate risks, particularly around the leakage of intellectual property through AI tools. For example, confidential details of a highly sensitive project could be inadvertently disclosed to unauthorized users. Unsurprisingly, organizations want to ensure that sensitive data does not get into the hands of people who should not have access to it. 

 

Microsoft Purview provides data security controls to ensure that Copilot for M365 responses are based on a user’s permissions, providing content only to those with appropriate access rights. Additionally, Microsoft Purview Information Protection provides controls on prompts and responses such as encryption, watermarking, auto labeling and label inheritance to prevent sensitive data from being overshared. 
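Label inheritance, for instance, means a Copilot response grounded in labeled documents should carry a label at least as restrictive as its sources. The following is a toy sketch of that rule only; the label names and priority ordering here are illustrative (real sensitivity labels are defined per tenant in the Purview portal, not in code):

```python
# Toy model of sensitivity-label inheritance: the output label is the
# highest-priority (most restrictive) label among the source documents.
# The label taxonomy below is hypothetical, not a built-in Purview list.

LABEL_PRIORITY = ["Public", "General", "Confidential", "Highly Confidential"]

def inherited_label(source_labels: list[str]) -> str:
    """Return the most restrictive label among the sources.

    Unlabeled content (an empty source list) falls back to the
    lowest-priority label rather than failing.
    """
    if not source_labels:
        return LABEL_PRIORITY[0]
    # max() by position in LABEL_PRIORITY picks the most restrictive label.
    return max(source_labels, key=LABEL_PRIORITY.index)
```

So a response drawing on both a "General" memo and a "Highly Confidential" project file would inherit "Highly Confidential".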

 

For third-party GenAI apps, Microsoft Purview Data Loss Prevention provides capabilities to prevent users from pasting sensitive data into GenAI prompts. Adaptive Protection in Microsoft Purview enables you to take a dynamic approach to security by proactively blocking high-risk users from pasting sensitive data into a third-party GenAI app while allowing low-risk users to do so. By leveraging these capabilities, organizations can confidently safeguard their sensitive data from potential risks posed by AI applications.  
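Conceptually, Adaptive Protection maps a user's insider-risk level to a DLP enforcement action. A minimal sketch of that decision logic, assuming hypothetical tier names and actions (the real policy is configured in the Purview portal, not written as code):

```python
# Illustrative sketch of risk-tiered DLP enforcement, in the spirit of
# Adaptive Protection. The tier names and action strings are assumptions
# for illustration, not Purview's actual API.

RISK_ACTIONS = {
    "elevated": "block",                # high-risk users: block paste into GenAI apps
    "moderate": "block_with_override",  # warn, but allow a justified override
    "minor": "audit",                   # low-risk users: allow and log only
}

def enforcement_action(risk_level: str, content_is_sensitive: bool) -> str:
    """Return the DLP action for a paste attempt into a third-party GenAI app."""
    if not content_is_sensitive:
        return "allow"  # non-sensitive content is never blocked
    # Unknown risk levels default to audit-only rather than blocking.
    return RISK_ACTIONS.get(risk_level, "audit")
```

The design point this illustrates is that the same sensitive content gets different enforcement depending on the user's current risk level, so low-risk users keep working while high-risk activity is stopped.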

 

Govern the use of Copilot for M365 

In today's regulatory landscape, compliance and risk managers are increasingly concerned about non-compliant AI usage, which can lead to severe regulatory violations and hefty fines. Microsoft Purview offers integrated compliance tools specifically designed to address these challenges. 

Organizations can use Microsoft Purview Audit to capture Copilot interactions and configure Microsoft Purview Data Lifecycle Management retention or deletion policies for Copilot prompts and responses. 
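Copilot interactions captured by Purview Audit can be exported and analyzed like any other unified audit log records. As a sketch, assuming a JSON-array export where each record carries a RecordType field (Copilot events surface as "CopilotInteraction" in the unified audit log; verify the field names against your own export), filtering them might look like:

```python
import json

def copilot_events(audit_export: str) -> list[dict]:
    """Filter unified audit log records down to Copilot interactions.

    Assumes a JSON-array export where each record has "RecordType" and
    "UserId" fields, as in Purview Audit search results; adjust the
    field names if your export format differs.
    """
    records = json.loads(audit_export)
    return [r for r in records if r.get("RecordType") == "CopilotInteraction"]

# Toy two-record export for illustration (hypothetical users/tenant):
sample = json.dumps([
    {"RecordType": "CopilotInteraction", "UserId": "adele@contoso.com"},
    {"RecordType": "SharePointFileOperation", "UserId": "alex@contoso.com"},
])
```

Running `copilot_events(sample)` on the toy export above keeps only the first record, the Copilot interaction.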

 

With Microsoft Purview Communication Compliance, organizations can proactively detect and mitigate risks associated with Copilot interactions. By utilizing advanced machine learning classifiers, Communication Compliance can detect risky Copilot prompts and responses, such as those involving gift giving or unauthorized disclosure of sensitive information. These insights are also surfaced in the AI Hub so that you can proactively gain visibility into unethical and non-compliant use of AI. This level of insight enables organizations to enforce compliance policies effectively and prevent potential regulatory breaches. 

 

Furthermore, Microsoft Purview eDiscovery enhances legal response capabilities by streamlining the preservation and collection of relevant Copilot data. This ensures that organizations can efficiently respond to legal challenges and investigations, minimizing potential legal exposure and safeguarding their reputation. 

 

Address emerging AI regulatory compliance requirements 

As new AI regulations and standards continue to emerge, so do the regulatory requirements that organizations need to adhere to. When onboarding and deploying AI solutions, customers need to comply with these regulations to not only avoid penalties but also to reduce their data security, data compliance and data governance risks. In recent times, notable legislation and frameworks, including the EU AI Act, NIST AI RMF, ISO/IEC 23894:2023 and ISO/IEC 42001, have been introduced. These regulations align with compliance requirements such as monitoring AI interactions and preventing data loss in AI applications.  

 

Today, we are excited to announce four new Microsoft Purview Compliance Manager assessment templates to help your organization assess, implement and strengthen its compliance against AI regulations, including the EU AI Act, NIST AI RMF, ISO/IEC 23894:2023 and ISO/IEC 42001. These details will be surfaced within the Microsoft Purview AI Hub. 

Figure 3: Get guided assistance to AI regulations in the Microsoft Purview AI Hub

 

Get started 

 

[3] Gartner, What Is Artificial Intelligence, 2024. https://www.gartner.com/en/topics/artificial-intelligence#:~:text=By%202027%2C%20at%20least%20two,pr.... GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. 

Last update:
‎May 06 2024 11:39 AM