Securing data in an AI-first world with Microsoft Purview
Published Nov 15 2023

We are living in an artificial intelligence (AI)-first world in which organizations have a once-in-a-lifetime opportunity to leverage AI to transform their business and drive innovation at an accelerated rate. AI is a powerful tool that can enable organizations to leverage data and insights in new ways, generate novel solutions to complex problems, and enhance human capabilities and experiences. It is no surprise that organizations are racing to adopt AI: recent Microsoft research showed that 97% of organizations have implemented, developed, or are developing an AI strategy[1]. But alongside these benefits, AI also poses data security, compliance, and privacy challenges for organizations that, if not addressed properly, can slow down adoption of the technology. Lacking visibility into and controls over data in AI, some organizations are pausing, or in some instances even banning, the use of AI out of an abundance of caution. To prevent business-critical data from being compromised, and to safeguard their competitive edge, reputation, and customer loyalty, organizations need integrated data security and compliance solutions so they can adopt AI technologies safely and confidently while keeping their most important asset, their data, safe.

 

As the industry-leading solution, Microsoft Purview enables organizations to comprehensively govern, protect, and manage their entire data estate. By combining these capabilities with Microsoft Defender, organizations are well equipped to protect both their data and their security workloads. Today, we are thrilled to announce a set of capabilities in Microsoft Purview and Microsoft Defender to help you secure your data and apps as you leverage generative AI. At Microsoft, we are committed to helping you protect and govern your data, no matter where it lives or travels.

 

Building on this vision, today we are advancing our journey to protect your data and apps across all generative AI applications: Microsoft Copilots, custom AI apps built by your organization, and consumer AI apps such as ChatGPT, Bard, Bing Chat, and more. These capabilities from Microsoft Purview and Microsoft Defender provide you with:

  • Comprehensive visibility into the usage of generative AI apps, including sensitive data usage in AI prompts and the total number of users interacting with AI.
  • Extensive protection, with the ability to block risky generative AI apps and ready-to-use, customizable policies to prevent data loss in AI prompts and protect AI responses.
  • Compliance controls to help detect business or code-of-conduct violations and easily meet regulatory requirements.

Comprehensive visibility to understand the risks associated with AI use

We have heard from security practitioners that visibility into sensitive data is the biggest challenge in developing smart plans and actionable strategies for data security. More than 30% of decision makers say they don’t know where their sensitive business-critical data is or what it is[2], and with generative AI producing even more data, visibility into how sensitive data flows through AI, and how your users interact with generative AI applications, is essential.

 

To help customers gain a better understanding of which AI applications are being used, and how, we are announcing the private preview of the AI hub in Microsoft Purview. Microsoft Purview can automatically and continuously discover data security risks for Microsoft Copilot for Microsoft 365 and provide organizations with an aggregated view of the total prompts being sent to Copilot and the sensitive information included in those prompts. Organizations can also see an aggregated view of the number of users interacting with Copilot and their associated risk level: high, medium, or low. And we are extending these capabilities to provide insights for more than 100 of the most commonly used consumer generative AI applications, such as ChatGPT, Bard, DALL-E, and more.
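To make the idea concrete, here is a minimal, purely illustrative Python sketch of the kind of per-user aggregation the AI hub surfaces. The event shape, field names, and risk thresholds below are our own assumptions for illustration; they are not the actual Microsoft Purview schema or risk-scoring logic.

```python
# Illustrative only: aggregate hypothetical AI prompt events per user and
# assign a coarse risk level, mirroring the AI hub's aggregated views.
from collections import defaultdict

def summarize_ai_activity(events):
    """Count total and sensitive prompts per user, then bucket into risk levels."""
    per_user = defaultdict(lambda: {"prompts": 0, "sensitive": 0})
    for event in events:
        stats = per_user[event["user"]]
        stats["prompts"] += 1
        if event.get("has_sensitive_info"):
            stats["sensitive"] += 1

    for stats in per_user.values():
        # Made-up thresholds; Purview computes user risk levels internally.
        if stats["sensitive"] >= 10:
            stats["risk"] = "high"
        elif stats["sensitive"] >= 3:
            stats["risk"] = "medium"
        else:
            stats["risk"] = "low"
    return dict(per_user)

events = [
    {"user": "adele@contoso.com", "app": "Copilot for Microsoft 365", "has_sensitive_info": True},
    {"user": "adele@contoso.com", "app": "ChatGPT", "has_sensitive_info": False},
]
print(summarize_ai_activity(events))
```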

 

Figure 1: Visibility into AI activity over time

From the AI hub in Purview, admins with the right permissions can drill down to understand the activity and see details such as the time of the activity, the policy name, and the sensitive information included in the AI prompt, using the familiar Activity explorer experience in Microsoft Purview.

 

Figure 2: Detailed view of an activity related to AI

The AI hub is built privacy-first, with role-based access controls in place. The AI hub is in private preview; to get access, you can join the Microsoft Purview Customer Connection Program. Sign up here; an active NDA is required. Licensing and packaging details will be announced at a later date.

 

Within Microsoft Defender for Cloud Apps, we are excited to announce the public preview of more than 400 generative AI apps added to the cloud app catalog. Organizations can now benefit from the rich discovery capabilities that provide visibility into the apps in use and their associated risk, and apply controls to approve or block users from accessing these applications. The combined visibility of Microsoft Defender and Microsoft Purview gives customers full transparency into, and control over, AI app usage and risk across their entire digital estate.
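As a rough illustration of how a team might reason over such a catalog, here is a small Python sketch. The app records below are hypothetical stand-ins, not the Defender for Cloud Apps API; in the product, risk scores (0 to 10, higher meaning more trustworthy) and sanction/unsanction controls are applied in the portal.

```python
# Illustrative only: triage a hypothetical export of generative AI apps by
# risk score into candidates to approve vs. block.
GENAI_APPS = [
    {"name": "ChatGPT", "risk_score": 7},
    {"name": "Bard", "risk_score": 6},
    {"name": "UnknownAIApp", "risk_score": 2},
]

def triage(apps, min_score=5):
    """Split apps by a minimum acceptable risk score (made-up threshold)."""
    approve = [a["name"] for a in apps if a["risk_score"] >= min_score]
    block = [a["name"] for a in apps if a["risk_score"] < min_score]
    return approve, block

approve, block = triage(GENAI_APPS)
print("Candidates to approve:", approve)
print("Candidates to block:", block)
```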

 

Extensive protection for sensitive information in AI prompts and responses

Recent Microsoft research found that, despite initial excitement, 97% of the organizations surveyed had concerns about implementing an AI strategy, with the lack of controls to detect and mitigate risks, and the leak of intellectual property through AI tools, rising to the top of their concerns[3]. With Microsoft Defender and Microsoft Purview together, organizations can block apps that pose a risk to their employees and protect sensitive data as users interact with those applications, in both AI prompts and responses. This ensures that sensitive data does not get into the hands of people who should not have access to it.

 

Microsoft Purview is the only solution with information protection capabilities built into Microsoft Copilot for Microsoft 365, helping strengthen data security for Copilot. Microsoft Copilot for Microsoft 365 is built on Microsoft’s comprehensive approach to security, compliance, privacy, and responsible AI, so it is enterprise ready. With Microsoft Purview, customers get additional data security capabilities such as sensitivity label citation and inheritance.

 

Microsoft Copilot for Microsoft 365 understands and honors sensitivity labels from Microsoft Purview, and the permissions that come with those labels, regardless of whether the documents were labeled manually or automatically. With this integration, Copilot conversations and responses automatically inherit the label from referenced files, ensuring it is applied to the AI-generated outputs. Because these are the same sensitivity labels that other Microsoft Purview solutions recognize, organizations instantly gain the benefits of Purview Data Loss Prevention, Insider Risk Management, and Adaptive Protection on these labeled documents. Some example scenarios:

 

  • When users reference a labeled file in a Copilot prompt or conversation, they can clearly see the sensitivity label of the document. This visual cue informs the user that Copilot is interacting with a sensitive document and that they should adhere to their organization’s data security policies.

Figure 3: Sensitivity label is shown in the Copilot prompt when the referenced file is labeled

Figure 4: Sensitivity label is visible in the files referenced in Copilot conversation

  • When users reference a labeled document in a Copilot conversation, the Copilot responses in that conversation inherit the sensitivity label from the referenced document. Similarly, if a user asks Copilot to create new content based on a labeled document, the Copilot-created content automatically inherits the sensitivity label, along with all its protection, from the referenced file. When a user references multiple documents with different sensitivity labels, the Copilot conversation or the generated content inherits the most protective sensitivity label (see the sketch after the figures below).

Figure 5: Copilot conversation inherits the sensitivity label from the referenced file

Figure 6: Copilot conversation inherits sensitivity label

Copilot will only summarize content for users who have the right usage permissions to that content (for example, copy, edit, or take a screenshot). For a user who has only VIEW permission, Copilot will not summarize the content. This ensures that Copilot does not expose content that users do not have the relevant permissions for.
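The two behaviors above, most-protective-label inheritance and permission-gated summarization, can be pictured with a short Python sketch. The label ordering and permission model here are made-up simplifications for illustration, not Purview’s actual implementation or API.

```python
# Illustrative only: a toy model of the two rules described above.
# Rule 1: generated content inherits the most protective referenced label.
# Rule 2: Copilot summarizes only when the user holds rights beyond VIEW.

# Hypothetical ordering, least to most protective.
LABEL_ORDER = ["Public", "General", "Confidential", "Highly Confidential"]

def inherited_label(referenced_labels):
    """Return the most protective label among the referenced documents."""
    return max(referenced_labels, key=LABEL_ORDER.index)

def can_summarize(user_permissions):
    """Allow summarization only with usage rights beyond VIEW (e.g. COPY, EDIT)."""
    return bool(user_permissions - {"VIEW"})

print(inherited_label(["General", "Highly Confidential"]))  # Highly Confidential
print(can_summarize({"VIEW"}))          # False: view-only users get no summary
print(can_summarize({"VIEW", "EDIT"}))  # True
```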

 

And if you already have an Information Protection auto-labeling policy defined that labels your documents based on certain sensitive information, Copilot-generated content is included in the scope of the auto-labeling policy.

 

The sensitivity label integration with Microsoft Copilot for Microsoft 365 is generally available. Learn more here.

 

Extending protection beyond Copilot for Microsoft 365

We realize there is a broad spectrum of generative AI applications that your users use every day, and these applications can pose varying levels of risk to your organization and data. And with how quickly users want to adopt AI applications, training them to better manage sensitive data can slow adoption and productivity. Research shows that 11% of all data in ChatGPT is confidential[5], making it critical that organizations have controls to prevent users from sending sensitive data to AI applications. We are excited to share that Microsoft Purview extends protection beyond Copilot for Microsoft 365, to more than 100 commonly used consumer AI applications such as ChatGPT, Bard, Bing Chat, and more.

 

We are introducing a new indicator in Insider Risk Management for browsing generative AI sites, in public preview. Security teams can use this indicator to gain visibility into generative AI site usage, including the types of generative AI sites visited, how frequently these sites are used, and the types of users visiting them. With this new capability, organizations can proactively detect potential risks associated with AI usage and take action to mitigate them. The list of generative AI sites, powered by Netstar, is automatically kept up to date as new sites are added or become more popular. User information is pseudonymized by default, with strong privacy controls in place to protect end-user trust. Learn more about our Insider Risk announcements in this blog.
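Conceptually, the indicator boils down to counting visits to a curated list of generative AI sites per pseudonymized user. The sketch below is purely illustrative: the domain list, log shape, and hashing-based pseudonymization are our assumptions, not how Insider Risk Management is implemented.

```python
# Illustrative only: count visits to known generative AI sites per
# pseudonymized user from a hypothetical browsing log.
import hashlib
from collections import Counter
from urllib.parse import urlparse

# Stand-in for the curated, Netstar-powered site list.
GENAI_DOMAINS = {"chat.openai.com", "bard.google.com"}

def pseudonymize(user):
    """Replace the user identity with a stable, non-identifying token."""
    return hashlib.sha256(user.encode()).hexdigest()[:12]

def genai_visit_counts(browse_log):
    """Count generative AI site visits per pseudonymized user."""
    counts = Counter()
    for user, url in browse_log:
        if urlparse(url).hostname in GENAI_DOMAINS:
            counts[pseudonymize(user)] += 1
    return counts

log = [
    ("adele@contoso.com", "https://chat.openai.com/"),
    ("adele@contoso.com", "https://example.com/"),
]
print(genai_visit_counts(log))
```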

 

A few months ago, we announced the public preview of Microsoft Purview Data Loss Prevention’s ability to prevent users from pasting sensitive data into generative AI prompts when these are accessed through supported web browsers. Today we are announcing that you can also use Adaptive Protection to make these policies dynamic, so that elevated-risk users are prevented from interacting with sensitive data in AI prompts while low-risk users can maintain productivity.
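The risk-tiered idea behind Adaptive Protection can be summarized in a few lines of illustrative Python. The risk levels and actions below are simplified stand-ins; actual policies are configured in Microsoft Purview, not in code.

```python
# Illustrative only: a toy decision function for risk-tiered DLP enforcement
# on pastes into generative AI prompts.
def dlp_action(user_risk, prompt_has_sensitive_data):
    """Decide whether to block, warn, or allow a paste into an AI prompt."""
    if not prompt_has_sensitive_data:
        return "allow"
    if user_risk == "elevated":
        return "block"  # elevated-risk users cannot paste sensitive data
    if user_risk == "moderate":
        return "warn"   # e.g., show a policy tip before permitting
    return "allow"      # low-risk users keep their productivity

print(dlp_action("elevated", True))  # block
print(dlp_action("low", True))       # allow
```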

 

Figure 7: Preventing sensitive information from being pasted in generative AI sites

 

Compliance controls to easily meet business and regulatory requirements

In addition to the security concerns highlighted above, there are growing concerns about data compliance, privacy, and potential biases in generative AI applications that might lead to unfair outcomes. According to Gartner, by 2027, at least one global company will see its AI deployment banned by a regulator for noncompliance with data protection or AI governance legislation[4]. It is essential that organizations using AI begin to prepare for the upcoming regulations and standards.

 

We are excited to share that Microsoft Purview can help with compliance management for Microsoft Copilot for Microsoft 365. The following capabilities are generally available.

  • Capture events and detect user interactions with Copilot using Microsoft Purview Audit. It is essential to be able to audit and understand when a user requests assistance from Copilot, and which assets are affected by the response. As an example, consider a Teams meeting in which confidential information and content were discussed and shared, and Copilot was used to recap the meeting. The audit logs can tell you precisely when the user was in the Teams meeting, the ID of the meeting, and the files and sensitivity labels assigned to the documents that Copilot accessed (see the sketch after the figure below).

Figure 8: Searching Audit logs for Copilot interactions
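If you export audit records for analysis, filtering for Copilot events can look something like the sketch below. The operation name "CopilotInteraction" reflects current Copilot audit records, but treat the field names and the JSON-lines export format here as assumptions to verify against your own tenant’s export.

```python
# Illustrative only: scan a hypothetical JSON-lines audit export for records
# whose operation indicates a Copilot interaction.
import json

def copilot_events(path):
    """Yield audit records for Copilot interactions from a JSONL export."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            # Field names are assumptions; check your export's schema.
            if record.get("Operation") == "CopilotInteraction":
                yield record

# "audit_export.jsonl" is a hypothetical file name for a local export.
for event in copilot_events("audit_export.jsonl"):
    print(event.get("CreationTime"), event.get("UserId"))
```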

  • Identify, preserve, and collect relevant data for litigation, investigations, audits, or inquiries with Microsoft Purview eDiscovery. Copilot prompts and responses may contain sensitive or confidential information, or evidence of intellectual property creation or infringement, and need to be discoverable during investigations or litigation. For example, if Copilot is used within Word, and that document is shared in a Teams chat, the Copilot interactions will be preserved and included as part of that Teams chat content during collection and review. In addition, for regulatory reasons, organizations may need visibility into and control over the data that Copilot interacts with, and need to be able to identify, preserve, collect, and review the interactions and export them for legal or regulatory purposes. Learn more here. For additional information about the new capabilities coming to Microsoft Purview eDiscovery, read the blog.

Figure 9: eDiscovery can preserve and collect Copilot data

  • Manage retention and deletion policies for Copilot using Microsoft Purview Data Lifecycle Management. With the changing legal and compliance landscape, it is important to give organizations the flexibility to decide for themselves how to manage prompt and response data. As an example, an organization may want to keep an executive’s Copilot for Microsoft 365 activity for several years but delete the activity of a non-executive user after one year (see the sketch after the figure below). To enable these scenarios, Copilot for Microsoft 365 interactions are now included in the Microsoft Teams chats location, so any previously configured retention policies for Teams chats automatically include user prompts and responses to and from Copilot for Microsoft 365. Learn more here.

Figure 10: Retention and deletion policy for Copilot for Microsoft 365
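As a toy illustration of the per-group retention idea (keep an executive’s activity longer than a non-executive’s), consider the Python sketch below. The group names and durations are hypothetical; real retention is configured declaratively in Data Lifecycle Management, not in code.

```python
# Illustrative only: evaluate whether a Copilot interaction has outlived a
# hypothetical per-group retention period.
from datetime import datetime, timedelta, timezone

RETENTION = {
    "executives": timedelta(days=365 * 7),  # keep for 7 years
    "default": timedelta(days=365),         # keep for 1 year
}

def should_delete(user_group, interaction_time, now=None):
    """Return True once an interaction is older than its retention period."""
    now = now or datetime.now(timezone.utc)
    keep_for = RETENTION.get(user_group, RETENTION["default"])
    return now - interaction_time > keep_for

two_years_ago = datetime.now(timezone.utc) - timedelta(days=730)
print(should_delete("executives", two_years_ago))  # False: still retained
print(should_delete("default", two_years_ago))     # True: past 1 year
```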

  • Identify potential risks and business or regulatory compliance violations with Microsoft Purview Communication Compliance. We are excited to announce that we are extending the detection analysis in Communication Compliance to help identify risky communication within Copilot prompts and responses. This capability allows an investigator with the relevant permissions to examine Copilot interactions that have been flagged as potentially containing inappropriate content or confidential data leaks. To detect such violations, admins can select Copilot as a location in the policy creation wizard. Additionally, we have introduced a template for creating policies dedicated to checking all Copilot chats, empowering admins to fine-tune their management strategy to their organization’s needs, with a focus on user privacy protection, ensuring the organization’s communication remains secure and compliant.

Figure 11: Communication Compliance detecting business conduct violation

 

See below for a summary of the capabilities covered in this blog.

Figure: Summary of the capabilities covered in this blog

Get Started

The AI hub in Microsoft Purview is in private preview. To get access to the AI hub and other private preview capabilities, we recommend joining the Microsoft Purview Customer Connection Program; an active NDA is required. Sign up here.

 

You can get started with the Microsoft Purview capabilities for Copilot today, as they are generally available. All you need is a Microsoft 365 E3 or E5 subscription, depending on the capability you want to use. If you do not have a Microsoft 365 E5 subscription, you can sign up for a free trial.

 

Join us to learn more about these capabilities and see them in action at Ignite breakout sessions!

Resources

  • Learn more about Microsoft Purview here
  • Learn more about Microsoft Copilot for Microsoft 365 here
  • Learn more about Microsoft Copilot for Microsoft 365 and Microsoft Purview here
  • Learn more about Microsoft Defender for Cloud Apps here

[1], [3] Data security market research, n = 638, commissioned by Microsoft

[2] CISO tracker, Microsoft

[4] Gartner Security Leader’s Guide to Data Security, Andrew Bales, September 7, 2023

[5] Verizon
