Discover, protect, and govern AI usage with Microsoft Security
Published Mar 13, 2024

Generative AI (GenAI) is being adopted at an unprecedented rate as organizations actively deploy or experiment with it in various capacities. Excitement and anxiety coexist as businesses embrace this transformative technology: the same advancements that enable innovation and new business opportunities also introduce additional security and governance risks. Recent research indicates that 93 percent of businesses are either implementing or developing an AI strategy [1]. Meanwhile, leaders are feeling the generative AI-nxiety [2] that comes with GenAI adoption. A recent survey highlights leaders' primary concerns about adopting AI, including:

  • Potential leakage of sensitive data
  • Generation of harmful or biased outputs
  • Lack of understanding regarding upcoming regulations and strategies to address them

Consequently, 48 percent of security leaders surveyed anticipate continuing to prohibit AI use in the workplace [3]. However, such restrictions hamper innovation and employee productivity, and they risk missed opportunities to realize the benefits of AI.


Instead of restricting AI, security teams can mitigate and manage risks more effectively by proactively gaining visibility into AI usage within the organization and implementing corresponding controls. Microsoft Security offers a comprehensive suite of solutions to enable the secure and responsible adoption of AI. Our portfolio of products, including Microsoft Defender, Microsoft Entra, Microsoft Purview, and Microsoft Intune, works together to help you secure your data and interactions in AI applications, whether they are Copilot for Microsoft 365 or third-party AI applications.


With the breadth of capabilities across the Microsoft Security portfolio, your teams can:

  • Discover potential risks associated with AI usage, such as sensitive data leaks and users accessing high-risk applications.
  • Protect the AI applications in use and the sensitive data being reasoned over or generated by them, including the prompts and responses.
  • Govern AI use by retaining and logging interactions, detecting regulatory or organizational policy violations, and investigating incidents when they arise.

Figure 1: Microsoft Security portfolio working together to secure and govern AI usage.

As more generative AI services come to market, security concerns are growing over limited visibility into unsanctioned GenAI use and the need to protect the data exchanged between users and apps. Microsoft Security is dedicated to helping organizations secure their use of generative AI apps. Last November, we announced several Microsoft Purview and Defender for Cloud Apps capabilities to address these concerns. Today, we are excited to share that we have expanded our out-of-the-box detections in Microsoft Defender for Cloud Apps with new dedicated detections for Copilot for Microsoft 365 that alert you to risky activity associated with its use. For instance, if Copilot is used from a risky IP address to access confidential files in SharePoint, an alert will be generated for your attention.


In this blog, we will explore how Microsoft Security helps you discover, protect, and govern the use of both Copilot for Microsoft 365 and other third-party AI applications.


Discover, protect, and govern the use of Copilot for Microsoft 365

Copilot for Microsoft 365 is designed with security, compliance, privacy, and responsible AI built into the application. For example, it harnesses Azure AI content safety filters to help detect and block potential jailbreak or prompt-injection attacks. Copilot for Microsoft 365 adheres to our Responsible AI Standard, emphasizing fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Additional details about the built-in controls that prepare Copilot for Microsoft 365 for enterprise use are available in our documentation. Beyond these built-in controls, Microsoft Security provides additional capabilities to help you further strengthen your security and governance when using Copilot, ensuring secure productivity similar to how we secure and govern other Microsoft 365 applications.
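Copilot applies these filters automatically, but the underlying Azure AI Content Safety service is also available as a standalone REST API if you want to screen text in your own workflows. Below is a minimal sketch of analyzing a prompt for harmful-content categories, assuming you have provisioned a Content Safety resource (the endpoint and key are placeholders; jailbreak detection is a separate capability of the same service):

```python
import requests

# Placeholders: substitute your own Content Safety resource endpoint and key.
ENDPOINT = "https://my-contentsafety.cognitiveservices.azure.com"
API_KEY = "<subscription-key>"

def analyze_text(text: str) -> dict:
    """Send text to the Azure AI Content Safety text-analysis endpoint
    and return the category severity scores."""
    response = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

result = analyze_text("Summarize the Project Obsidian launch plan.")
for item in result.get("categoriesAnalysis", []):
    print(item["category"], item["severity"])
```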


Discover data security risks with the AI hub in Microsoft Purview
To secure data within AI applications such as Copilot, it is essential to gain visibility into potential risks. This involves identifying and understanding the sensitive data involved in AI interactions and how users engage with these AI applications. By taking this initial step, organizations can enhance their data protection strategies and mitigate potential risks. Within the Microsoft Purview portal, the AI hub dashboard provides consolidated insights into AI activities, such as the total number of users who have created prompts in Copilot, the number of prompts that contain sensitive information, and much more. By quickly discovering how AI is being used within the organization, you can discern patterns, pinpoint potential risks, and make informed decisions to best protect your organization.


Figure 2: The AI hub dashboard provides security teams with the visibility to understand data security posture for AI applications.
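The dashboard computes these aggregates for you; purely to illustrate the kind of rollup it surfaces, here is a small sketch over exported interaction records (the record shape and field names are assumptions for the example, not the actual audit schema):

```python
from collections import Counter

# Illustrative records shaped like exported Copilot interaction events;
# the field names here are assumptions for the sketch.
interactions = [
    {"user": "alice@contoso.com", "sensitive_info_types": ["Credit Card Number"]},
    {"user": "bob@contoso.com", "sensitive_info_types": []},
    {"user": "alice@contoso.com", "sensitive_info_types": []},
]

# The two headline metrics the dashboard reports: distinct prompting users
# and prompts that matched at least one sensitive information type.
users_with_prompts = {rec["user"] for rec in interactions}
sensitive_prompts = [rec for rec in interactions if rec["sensitive_info_types"]]

print(f"Users who created prompts: {len(users_with_prompts)}")
print(f"Prompts containing sensitive information: {len(sensitive_prompts)}")
print("Top sensitive info types:",
      Counter(t for rec in sensitive_prompts for t in rec["sensitive_info_types"]))
```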

Protect sensitive data throughout its Copilot journey

Microsoft Purview is the only solution that provides data security controls natively integrated with Copilot for Microsoft 365. This native integration lets organizations leverage the power of GenAI when working with sensitive data, because Copilot understands and honors controls such as encryption while providing comprehensive visibility into usage.


Organizations can configure appropriate controls based on the insights uncovered in the AI hub, such as unlabeled and unprotected sensitive data. For example, upon identifying Copilot interactions containing the sensitive term “Project Obsidian”, the code name of a confidential internal project, security teams can configure auto-labeling policies with associated encryption and watermarking controls to prevent files containing this term from being overshared.
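The actual policy is configured in the Microsoft Purview portal; purely to illustrate the matching step, here is a toy sketch of flagging content that contains the sensitive term (the label name and controls are illustrative):

```python
import re

# Toy illustration of the auto-labeling idea: the real policy lives in
# Microsoft Purview; this local matcher only mimics the matching step.
SENSITIVE_TERMS = {
    "Project Obsidian": {"label": "Highly Confidential", "encrypt": True, "watermark": True},
}

def recommend_label(text: str):
    """Return the controls for the first sensitive term found, else None."""
    for term, controls in SENSITIVE_TERMS.items():
        if re.search(re.escape(term), text, flags=re.IGNORECASE):
            return controls
    return None

print(recommend_label("Draft summary of the Project Obsidian budget"))
# -> {'label': 'Highly Confidential', 'encrypt': True, 'watermark': True}
```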


This means that Copilot’s responses are contingent on a user’s permissions, providing summaries or content only to those with appropriate access rights. The permissions can be managed by Microsoft Purview’s labeling policies, which leverage over 300 ready-to-use classifiers to identify sensitive data and automatically apply associated controls, such as Rights Management Service (RMS) encryption, watermarking, and data loss prevention (DLP). As a result, sensitive information remains safeguarded throughout Copilot’s interactions, with responses seamlessly inheriting labels from referenced files. For example, when referencing the Project Obsidian file, which was labeled as confidential, the Copilot-generated response inherits the confidential label.
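Conceptually, the inheritance rule picks the most restrictive label among the referenced files. A minimal sketch of that rule, with illustrative label names and priorities:

```python
# Sketch of the "most restrictive label wins" rule that Copilot responses
# follow: each label carries a priority, and the response inherits the
# highest-priority label among the referenced files. Priorities are illustrative.
LABEL_PRIORITY = {"Public": 0, "General": 1, "Confidential": 2, "Highly Confidential": 3}

def inherited_label(reference_labels):
    """Return the most restrictive (highest-priority) known label, or None."""
    labeled = [l for l in reference_labels if l in LABEL_PRIORITY]
    return max(labeled, key=LABEL_PRIORITY.get, default=None)

print(inherited_label(["General", "Confidential"]))  # -> Confidential
```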


Because organizations often scope RMS encryption to Microsoft Entra ID groups, it is important to manage and monitor group membership regularly to prevent unauthorized access. With Microsoft Entra ID Governance, admins can create periodic access reviews so that group owners consistently update memberships and remove idle or outdated access. By doing so, data security controls are applied to the appropriate scope of users.
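Access reviews can also be created programmatically through Microsoft Graph. Below is a sketch of a quarterly review of a group, reviewed by its owners, assuming an app granted the AccessReview.ReadWrite.All permission (token acquisition via MSAL is omitted, and the group ID is a placeholder):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<bearer-token-with-AccessReview.ReadWrite.All>"  # acquired via MSAL, omitted here
GROUP_ID = "<group-object-id>"  # placeholder: the group that scopes RMS-protected content

# Quarterly access review of the group's members, reviewed by its owners.
review = {
    "displayName": "Quarterly review - Project Obsidian access group",
    "scope": {
        "@odata.type": "#microsoft.graph.accessReviewQueryScope",
        "query": f"/groups/{GROUP_ID}/transitiveMembers",
        "queryType": "MicrosoftGraph",
    },
    "reviewers": [
        {"query": f"/groups/{GROUP_ID}/owners", "queryType": "MicrosoftGraph"}
    ],
    "settings": {
        "recurrence": {
            "pattern": {"type": "absoluteMonthly", "interval": 3},
            "range": {"type": "noEnd", "startDate": "2024-04-01"},
        },
        "instanceDurationInDays": 14,
        "autoApplyDecisionsEnabled": True,
        "defaultDecisionEnabled": True,
        "defaultDecision": "Deny",  # remove access when reviewers do not respond
    },
}

resp = requests.post(
    f"{GRAPH}/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=review,
    timeout=30,
)
resp.raise_for_status()
print("Created access review:", resp.json()["id"])
```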


Figure 3: Copilot for Microsoft 365 honors the sensitivity label of referenced files, makes file sensitivity visible in responses, and automatically inherits the most restrictive label policy for its responses.

Another challenge for IT and security teams is protecting corporate data across devices, as users may access Copilot from both personal and work devices. Microsoft Intune plays a critical role in protecting corporate data generated by Copilot for Microsoft 365 within designated apps across managed and unmanaged devices. Whether on a mobile device or a personal Windows desktop, Microsoft Intune’s app protection policies are rules that keep an organization’s data safe and contained within the application. These policies protect corporate data without requiring device enrollment and control how Copilot-generated data is accessed and shared by apps.


Figure 4: Intune app protection policies control how Copilot for Microsoft 365-generated data is accessed and shared between apps on personal unmanaged and managed devices.
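App protection policies can also be created through Microsoft Graph. Below is a sketch of an illustrative iOS policy that keeps data within managed apps without device enrollment, assuming the DeviceManagementApps.ReadWrite.All permission (token acquisition omitted; the property values are a plausible starting point, not a prescription, and app assignments are a separate step):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<bearer-token-with-DeviceManagementApps.ReadWrite.All>"  # via MSAL, omitted

# Illustrative iOS app protection policy: confine corporate data to managed
# apps and block backup/save-out paths. Assign target apps afterward (omitted).
policy = {
    "@odata.type": "#microsoft.graph.iosManagedAppProtection",
    "displayName": "Copilot data containment - iOS",
    "allowedInboundDataTransferSources": "managedApps",
    "allowedOutboundDataTransferDestinations": "managedApps",
    "allowedOutboundClipboardSharingLevel": "managedAppsWithPasteIn",
    "dataBackupBlocked": True,
    "saveAsBlocked": True,
    "pinRequired": True,
}

resp = requests.post(
    f"{GRAPH}/deviceAppManagement/iosManagedAppProtections",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created app protection policy:", resp.json()["id"])
```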

Beyond the preventative controls discussed earlier, security teams need to detect and respond quickly to potential threats as they arise. With the new out-of-the-box threat detections in Microsoft Defender for Cloud Apps, security teams can detect a user’s suspicious interactions with Copilot and respond to and mitigate the threat. For instance, when a user accesses sensitive data via Copilot from a risky IP address, Defender for Cloud Apps triggers an alert to flag this suspicious activity, with important details including the MITRE ATT&CK technique, the IP address, and other fields that security teams can use to investigate further.


Moreover, to help security teams understand the full scope of the attack, Defender for Cloud Apps is part of Microsoft’s industry-leading XDR platform, Microsoft Defender XDR, that delivers a unified threat investigation and response experience with a comprehensive view of the entire attack chain. Security teams can also use the hunting capability in Defender XDR to dive deeper into the suspicious user’s Copilot activities and create custom rules to detect and respond to these types of activities more efficiently.


Figure 5: An alert in Microsoft Defender triggered by Defender for Cloud Apps to flag suspicious interactions with Copilot for Microsoft 365.
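As a sketch of the hunting capability, the example below runs an advanced hunting query through the Microsoft Graph runHuntingQuery endpoint, assuming the ThreatHunting.Read.All permission. The application filter and the risky IP list are assumptions to adapt to your environment:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<bearer-token-with-ThreatHunting.Read.All>"  # via MSAL, omitted

# Hunt for Copilot activity from IP addresses you consider risky.
# CloudAppEvents is the advanced hunting table for cloud app activity;
# the Application filter and IP list below are illustrative.
query = """
CloudAppEvents
| where Timestamp > ago(7d)
| where Application contains "Copilot"
| where IPAddress in ("203.0.113.7", "198.51.100.12")
| project Timestamp, AccountDisplayName, ActionType, IPAddress
| take 100
"""

resp = requests.post(
    f"{GRAPH}/security/runHuntingQuery",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"Query": query},
    timeout=60,
)
resp.raise_for_status()
for row in resp.json().get("results", []):
    print(row)
```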

Govern the use of Copilot for Microsoft 365

Compliance and risk managers are equally concerned about non-compliant AI use, which may result in violations of regulatory requirements. Microsoft Purview offers integrated compliance controls for governing AI usage. The Audit functionality ensures organizations can capture events and track user interactions with Copilot, providing a detailed record of when an interaction happened, which files were referenced, and their sensitivity labels. This feature is essential for maintaining transparency and accountability in AI usage. Additionally, Microsoft Purview Data Lifecycle Management enables organizations to manage retention and deletion policies for Copilot interactions, aligning with specific organizational needs. These features empower organizations to proactively govern their AI usage and adhere to evolving regulatory requirements.
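Copilot interaction events flow into the unified audit log, which can be pulled programmatically. Below is a sketch using the Office 365 Management Activity API, assuming you have already started an Audit.General subscription for the tenant; filtering on the "CopilotInteraction" operation name is an assumption to verify against the events in your tenant:

```python
import requests

TENANT_ID = "<tenant-guid>"
TOKEN = "<bearer-token-for-manage.office.com>"  # via MSAL client credentials, omitted
BASE = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"
headers = {"Authorization": f"Bearer {TOKEN}"}

# List available Audit.General content blobs (a subscription must already be
# started via POST /subscriptions/start?contentType=Audit.General).
blobs = requests.get(
    f"{BASE}/subscriptions/content",
    params={"contentType": "Audit.General"},
    headers=headers,
    timeout=30,
)
blobs.raise_for_status()

# Pull each blob and keep Copilot interaction events. The operation name
# "CopilotInteraction" is an assumption for this sketch.
for blob in blobs.json():
    events = requests.get(blob["contentUri"], headers=headers, timeout=30).json()
    for event in events:
        if event.get("Operation") == "CopilotInteraction":
            print(event.get("CreationTime"), event.get("UserId"), event.get("Operation"))
```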


With the ease of content creation using GenAI, users can effortlessly generate new content, including unethical or high-risk content such as fake news, fraudulent material, or stock manipulation. To detect non-compliant use, Microsoft Purview Communication Compliance provides machine learning-powered classifiers to detect risky Copilot prompts and responses, covering scenarios such as gift giving, unauthorized disclosure, regulatory collusion, and much more. Microsoft Purview eDiscovery then identifies, preserves, and collects relevant Copilot data for litigation, investigations, audits, or inquiries. Whether Copilot is used within applications like Word or shared in Teams chats, the interactions are preserved and can be exported for legal or regulatory purposes. This capability enhances your organization’s ability to respond to legal challenges and investigations efficiently.


Figure 6: Microsoft Purview Communication Compliance detects non-compliant Copilot prompts and responses, such as gift giving.
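eDiscovery cases and searches can also be driven through Microsoft Graph. Below is a sketch that creates a case and a search scoped to Copilot-related content, assuming the eDiscovery.ReadWrite.All permission (the display names and content query are illustrative):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<bearer-token-with-eDiscovery.ReadWrite.All>"  # via MSAL, omitted
headers = {"Authorization": f"Bearer {TOKEN}"}

# Create an eDiscovery case for the inquiry.
case = requests.post(
    f"{GRAPH}/security/cases/ediscoveryCases",
    headers=headers,
    json={"displayName": "Copilot interactions - Project Obsidian inquiry"},
    timeout=30,
)
case.raise_for_status()
case_id = case.json()["id"]

# Add a search to the case; the KQL content query is illustrative.
search = requests.post(
    f"{GRAPH}/security/cases/ediscoveryCases/{case_id}/searches",
    headers=headers,
    json={
        "displayName": "Copilot prompts and responses",
        "contentQuery": '"Project Obsidian"',
    },
    timeout=30,
)
search.raise_for_status()
print("Search created:", search.json()["id"])
```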

Microsoft Security provides an extensive set of security and compliance controls that let organizations adopt and use Copilot for Microsoft 365 just as they use other Microsoft 365 apps, making GenAI part of a secure productivity toolset for the workplace.


Discover, protect, and govern the use of third-party AI apps

Copilot for Microsoft 365 is not the only GenAI app in use today. Employees engage with a variety of AI applications in the workplace. Microsoft Security offers capabilities to help you discover, protect, and manage third-party AI applications. These capabilities enable you to block high-risk applications, restrict high-risk access, and prevent sensitive data or high-risk content from being shared with third-party AI applications. By implementing multi-layered controls at the app, access, and data levels, Microsoft Security can help lower overall risk and provide end-to-end security for AI solutions.


Discover third-party AI application usage and protect organizations from risky apps
To effectively secure and govern third-party AI applications, you need to start by understanding the types of applications being used and the risks associated with them. Microsoft Defender for Cloud Apps plays a crucial role in this process. Within Defender for Cloud Apps, we have expanded our extensive discovery capabilities to cover more than 400 GenAI applications, and we continuously update the list as new ones gain popularity. These discovery capabilities give you comprehensive visibility into the usage of AI applications, let you assess their risks, and let you apply controls accordingly, such as approving or blocking an app through the Defender for Endpoint integration.


Moreover, ready-to-use risk assessments for more than 400 GenAI applications spare you the time-consuming task of assessing each app individually. Security teams can easily identify high-risk applications in use and tag them as unsanctioned to prevent users from accessing them until thorough due diligence is conducted. To scale the blocking of high-risk applications, you can automate tagging by creating policies tailored to your organization. For example, a healthcare organization could automatically tag applications as unsanctioned if they have a risk score of 5 or lower and do not comply with HIPAA requirements, effectively preventing users from accessing them.


Figure 7: The discovered GenAI applications sorted by risk score in Microsoft Defender for Cloud Apps.
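The tagging policy itself is configured in Defender for Cloud Apps; to make the healthcare example above concrete, here is a local sketch of the rule's logic (the app records and field names are illustrative stand-ins for cloud app catalog attributes):

```python
# Sketch of the tagging rule described above: tag GenAI apps as unsanctioned
# when the risk score is 5 or lower AND the app is not HIPAA compliant.
# Actual enforcement happens in Defender for Cloud Apps policies.
discovered_apps = [
    {"name": "ExampleChatAI", "risk_score": 3, "hipaa_compliant": False},
    {"name": "SecureSummarizer", "risk_score": 8, "hipaa_compliant": True},
]

def should_unsanction(app: dict) -> bool:
    return app["risk_score"] <= 5 and not app["hipaa_compliant"]

for app in discovered_apps:
    tag = "unsanctioned" if should_unsanction(app) else "monitored"
    print(f'{app["name"]}: {tag}')
```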

Secure access to AI applications and enable least privileged access

Even though organizations can leverage Microsoft Defender to block access to high-risk applications, granting unrestricted access to low-risk applications can still lead to security risks. Implementing identity and access management controls is essential to address these risks. Microsoft Entra Conditional Access enables you to create policies for a specific GenAI application, such as ChatGPT, or for a set of GenAI applications. These policies can specify conditions under which access is permitted only to users on compliant devices, and only after they have accepted the Terms of Use.
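Conditional Access policies can be created through Microsoft Graph. Below is a sketch of a policy requiring a compliant device and accepted Terms of Use for a GenAI app, assuming the Policy.ReadWrite.ConditionalAccess permission (the app and agreement IDs are placeholders, and the policy starts in report-only mode):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<bearer-token-with-Policy.ReadWrite.ConditionalAccess>"  # via MSAL, omitted

# Require a compliant device plus Terms of Use acceptance for a GenAI app.
# <genai-app-id> is a placeholder for the application's ID in your tenant;
# <terms-of-use-agreement-id> references an existing Terms of Use agreement.
policy = {
    "displayName": "GenAI apps - require compliant device",
    "state": "enabledForReportingButNotEnforced",  # start in report-only mode
    "conditions": {
        "applications": {"includeApplications": ["<genai-app-id>"]},
        "users": {"includeUsers": ["All"]},
    },
    "grantControls": {
        "operator": "AND",
        "builtInControls": ["compliantDevice"],
        "termsOfUse": ["<terms-of-use-agreement-id>"],
    },
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```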


Furthermore, with Microsoft Entra ID Governance, you can enforce manager or admin approval before access is granted. Admins can further mitigate risk by setting time limits on how long a user can access a GenAI application, reducing the likelihood that attackers or malicious insiders exploit standing privileges. Lastly, regular access reviews ensure that only those who need access retain it, enhancing security posture and minimizing unnecessary exposure.
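Entra ID Governance automates time-limited access through access packages; as a conceptual sketch of the time-limit idea only (not the entitlement management API), here is the revoke-on-expiry logic:

```python
from datetime import datetime, timedelta, timezone

# Conceptual sketch of time-limited access: each grant records an expiry, and
# a periodic job revokes anything past it. In practice, Entra ID Governance
# entitlement management automates this with access package policies.
grants = [
    {"user": "alice@contoso.com", "app": "ChatGPT",
     "expires": datetime.now(timezone.utc) - timedelta(days=1)},
    {"user": "bob@contoso.com", "app": "ChatGPT",
     "expires": datetime.now(timezone.utc) + timedelta(days=29)},
]

def expired(grant: dict) -> bool:
    return grant["expires"] <= datetime.now(timezone.utc)

for g in grants:
    action = "revoke" if expired(g) else "keep"
    print(f'{g["user"]} -> {g["app"]}: {action}')
```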


Protect sensitive data from loss and enable risk-adaptive controls

The leakage of sensitive data is among security leaders’ top concerns regarding AI usage, highlighting the importance of implementing controls that prevent users from sharing sensitive information with third-party AI applications. The AI hub in Microsoft Purview provides visibility into the types of sensitive data being shared with third-party AI applications. With these insights, organizations can identify and mitigate the potential data security risks associated with AI usage.


Microsoft Purview Data Loss Prevention enables you to prevent users from pasting sensitive data into GenAI prompts within supported browsers. You can also leverage Adaptive Protection in Microsoft Purview to make the DLP policy adaptive to insider risk levels, applying restrictions primarily to users deemed at elevated risk. For example, elevated-risk users are restricted from pasting sensitive data into AI applications, while low-risk users can continue their productivity uninterrupted. By leveraging these Microsoft Purview capabilities, organizations can confidently safeguard their sensitive data from potential risks posed by third-party AI applications.


Figure 8: A Data Loss Prevention policy can block sensitive data from being pasted into third-party AI applications in supported browsers.
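Conceptually, Adaptive Protection maps a user's insider risk level to the DLP action applied. A toy sketch of that mapping (the level names and actions are illustrative; the real mapping is configured in Microsoft Purview):

```python
# Sketch of the risk-adaptive idea behind Adaptive Protection: the DLP action
# applied to a paste-into-GenAI event scales with the user's insider risk
# level. Level names and actions here are illustrative.
POLICY_BY_RISK = {
    "elevated": "block",
    "moderate": "block_with_override",
    "minor": "audit",
    "none": "allow",
}

def dlp_action(insider_risk_level: str, contains_sensitive_data: bool) -> str:
    """Return the DLP action for a paste event given the user's risk level."""
    if not contains_sensitive_data:
        return "allow"
    return POLICY_BY_RISK.get(insider_risk_level, "audit")

print(dlp_action("elevated", True))  # -> block
print(dlp_action("minor", True))     # -> audit
```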

Microsoft Security enables you to effectively discover, protect, and govern the usage of Copilot for Microsoft 365 as well as third-party AI applications including Google Gemini, ChatGPT, and more. Our comprehensive security controls for applications, access, and data allow AI to become your everyday companion at work in a secure and safe way.



1. Microsoft internal research, May 2023, N=638.

2. Reid Blackman, “Generative AI-nxiety,” Harvard Business Review, August 14, 2023, https://hbr.org/2023/08/generative-ai-nxiety.

3. ISMG, First Annual Generative AI Study: Business Rewards vs. Security Risks.
