
Security, Compliance, and Identity Blog

Insider Risk Management empowering risky AI usage visibility and security investigations

Nathalia_Borges
Nov 19, 2024

Discover how Microsoft Purview Insider Risk Management helps you safeguard your data in the AI era and empowers security operations centers to enhance incident investigations with comprehensive data security context

As businesses grow more agile and connected, employees are accessing sensitive information from all corners of the globe. While this flexibility may boost productivity, it also opens the door to unprecedented risks. With 63% of data breach incidents stemming from insiders[1], organizations are quickly realizing that a path to strong data security lies in understanding and mitigating insider risks, whether it’s a well-meaning employee accidentally sharing confidential data or a malicious actor exploiting their access for personal gain. In addition, the rapid adoption of generative AI (GenAI) is further intensifying the need for stronger data security controls, requiring organizations to rethink their strategies to properly protect data across these new GenAI interactions.

We are excited to share the most recent developments on Microsoft Purview Insider Risk Management (IRM) to help customers identify and respond to risky use of GenAI applications, to better integrate insider risk context into security operations, and to improve their overall data security and insider risk management programs.

IRM correlates various signals to identify potential malicious or inadvertent insider risks, such as IP theft, data leakage, and security violations. Insider Risk Management enables customers to create policies based on their own internal policies, governance, and organizational requirements. Built with privacy by design, users are pseudonymized by default, and role-based access controls and audit logs are in place to help ensure user-level privacy.

Expanding visibility on insider risks to protect data in the era of AI

Understanding how users interact with GenAI is crucial: 84% of organizations believe they need to do more to protect against risky employee use of AI tools[2]. Activities such as sharing sensitive information in GenAI apps can significantly impact data security, potentially leading to data leaks or unauthorized access. To address data oversharing in GenAI app responses, or users trying to bypass system-imposed restrictions, customers need to understand how their insiders are using GenAI apps, identify the users who pose the most risk, and dynamically enforce controls over potentially risky behavior without constraining productivity.

Today we’re excited to announce the public preview of risky AI usage detections in IRM, focused on identifying risky activities in GenAI apps, such as prompts containing sensitive information, as well as responses generated from sensitive files or containing sensitive information. These new detections cover user interactions with M365 Copilot, Copilot Studio, and ChatGPT Enterprise, helping prevent misuse of these technologies by inadvertent, negligent, or malicious insiders. This capability classifies users’ risk based on their activities, allowing admins to identify risky users and apply data security policies to the users who pose the most risk.

In IRM analytics, where customers can explore the top user risks to their data environment as well as policy recommendations, they will now also be able to identify top risks related to risky AI usage, such as entering risky prompts in M365 Copilot, and set up a recommended policy to better control these potential new risks.

Figure 1: IRM analytics showing a view of top risks around risky AI usage in the organization, as well as a policy recommendation to protect sensitive data from those risks

When setting up policies, IRM will provide a policy template dedicated to making it easier to identify data risks associated with the use of GenAI, with new indicators such as risky Copilot prompts, sensitive Copilot responses, and risky prompts in other GenAI apps. These new indicators can also be leveraged by Adaptive Protection to assign insider risk levels based on the organization’s risk appetite and priorities, and to apply dynamic controls with Data Loss Prevention (DLP), Data Lifecycle Management (DLM), or Conditional Access (CA), based on the user’s risk level.

Figure 2: An IRM alert from a Risky AI usage policy, showing risky activity and sequenced behavior on GenAI apps

For example, if organizations detect an elevated-risk user who is leaving the organization attempting to access sensitive data from an over-permissioned SharePoint site, admins can quickly identify the issue, lock down the data, and set up an Adaptive Protection policy that automatically blocks elevated-risk users from accessing those confidential SharePoint sites. Organizations can also set up an Entra Conditional Access policy to automatically block high-risk users of GenAI apps from accessing confidential SharePoint sites or from using M365 Copilot.
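For teams that manage Conditional Access as code, a minimal sketch of what such a policy could look like through Microsoft Graph is shown below. It assumes the beta conditionalAccessPolicies endpoint and its insiderRiskLevels condition, and uses placeholder values for the access token and the SharePoint application ID; verify the schema and required permissions against current Graph documentation before relying on it.

```python
# Minimal sketch: create an Entra Conditional Access policy that blocks
# elevated-risk users (as assigned by Adaptive Protection) from a confidential
# SharePoint app. Endpoint, permissions, and property support are assumptions
# to confirm against current Microsoft Graph documentation.
import requests

GRAPH_BETA = "https://graph.microsoft.com/beta"
ACCESS_TOKEN = "<token-with-Policy.ReadWrite.ConditionalAccess>"  # placeholder
SHAREPOINT_APP_ID = "<office-365-sharepoint-online-app-id>"       # placeholder

policy = {
    "displayName": "Block elevated insider risk users from confidential SharePoint",
    "state": "enabledForReportingButNotEnforced",  # start in report-only mode
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": [SHAREPOINT_APP_ID]},
        # Adaptive Protection risk level surfaced to Conditional Access;
        # property name and value assumed from the beta schema.
        "insiderRiskLevels": "elevated",
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(
    f"{GRAPH_BETA}/identity/conditionalAccessPolicies",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Content-Type": "application/json"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```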

Customers can also detect additional risky activities from Communication Compliance as IRM indicators, such as attempts to access unauthorized sensitive information with prompt injections, adding another layer of protection against misuse of GenAI apps.

Increasing data security and SOC integration

Understanding data and user context has become increasingly important not only for data security teams, but also for other parts of the organization, such as the security operations center (SOC), to better prioritize tasks and investigations based on the sensitivity of the data involved. By integrating data security insights, such as data classification, access controls, and user activities, into the SOC experience, organizations gain a comprehensive understanding of the potential impact of security incidents on their sensitive data. This integration can help uncover indicators of potential user compromise, reduce false alerts, and improve incident containment by ensuring that protective actions are effective.

To expand data security context within the SOC experience, today we are announcing the most recent integration of Microsoft Purview and Microsoft Defender, with the public preview of IRM alerts in Microsoft Defender XDR. Previously, SOC teams could only access DLP alerts as part of an incident investigation. With this update, they can now also access IRM alerts as part of an incident investigation, providing important context that can influence incident severity and help identify additional aspects of that investigation, especially for differentiating an internal incident from an external attack via a compromised user. Because IRM alerts can now also affect incident severity in Defender XDR, this capability provides a better understanding of data interactions related to specific incidents, streamlining the investigation process.
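As a rough illustration of how this context can be consumed programmatically, the sketch below lists recent Defender XDR incidents through the Microsoft Graph security API and flags those containing insider-risk alerts. The incidents endpoint and the $expand=alerts parameter are documented Graph features; the serviceSource value used to recognize IRM alerts is an assumption and should be confirmed in your tenant.

```python
# Sketch: list recent Defender XDR incidents via Microsoft Graph and flag
# those that contain insider-risk alerts. The serviceSource match below is an
# assumption; check the actual value emitted for IRM alerts in your tenant.
import requests

ACCESS_TOKEN = "<token-with-SecurityIncident.Read.All>"  # placeholder
url = "https://graph.microsoft.com/v1.0/security/incidents?$expand=alerts&$top=25"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

incidents = requests.get(url, headers=headers, timeout=30).json().get("value", [])
for incident in incidents:
    irm_alerts = [
        a for a in incident.get("alerts", [])
        if "insiderrisk" in str(a.get("serviceSource", "")).lower()  # assumed value
    ]
    if irm_alerts:
        print(incident["id"], incident.get("severity"),
              f"{len(irm_alerts)} insider risk alert(s)")
```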

Figure 3: Full IRM alert as part of an incident investigation in Microsoft Defender XDR

For cases requiring deeper investigation, SOC analysts can now also access IRM data through Advanced Hunting, a capability in Defender XDR that enables analysts to perform more complex analysis and custom queries for specific inquiries. This integration brings a depth of IRM insight that was previously not available across the solutions. It can help teams detect attack patterns and find opportunities for data security policy tuning, fostering stronger collaboration between security operations and data security teams and a more proactive security posture for the organization.
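Advanced hunting queries can also be run from scripts through the Microsoft Graph runHuntingQuery endpoint. The sketch below issues a simple query against DataSecurityEvents, one of the data security tables shown in Figure 4; the table name and the ThreatHunting.Read.All permission are assumptions to verify against your tenant's schema.

```python
# Sketch: run an advanced hunting query from a script via Microsoft Graph.
# "DataSecurityEvents" is assumed from the preview data security tables shown
# in Figure 4; confirm table and column names in your tenant before use.
import requests

ACCESS_TOKEN = "<token-with-ThreatHunting.Read.All>"  # placeholder
query = """
DataSecurityEvents
| where Timestamp > ago(7d)
| take 100
"""

resp = requests.post(
    "https://graph.microsoft.com/v1.0/security/runHuntingQuery",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Content-Type": "application/json"},
    json={"Query": query},
    timeout=60,
)
resp.raise_for_status()
for row in resp.json().get("results", []):
    print(row)
```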

Figure 4: Data Security tables in Defender XDR's advanced hunting capability

For data security teams, besides gathering investigation insights and tuning recommendations from the SOC, they can now also visualize Microsoft Entra compromised-user indicators in IRM. With this feature, admins can identify whether an insider has any compromised-user alerts in Microsoft Entra, helping them formulate the right response action, such as escalating the incident to SOC teams for quick remediation.
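The compromised-user view itself is surfaced in the IRM console, but teams that want to cross-reference the underlying Entra signal programmatically could pull it from the Graph identityProtection/riskyUsers endpoint, as in the sketch below; token acquisition is omitted and the $filter values are illustrative.

```python
# Sketch: pull Entra ID Protection risky users to cross-reference with IRM
# investigations. Requires a token with IdentityRiskyUser.Read.All; the
# filter values shown are illustrative.
import requests

ACCESS_TOKEN = "<token-with-IdentityRiskyUser.Read.All>"  # placeholder
url = ("https://graph.microsoft.com/v1.0/identityProtection/riskyUsers"
       "?$filter=riskLevel eq 'high' and riskState eq 'atRisk'")

users = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
                     timeout=30).json().get("value", [])
for user in users:
    print(user.get("userPrincipalName"), user.get("riskLevel"),
          user.get("riskLastUpdatedDateTime"))
```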

Improving breadth and visibility with new features on Insider Risk Management

We are also excited to announce several additional new capabilities in public preview to enhance customers’ experience with IRM:

•   Visibility across alerts and cases in IRM is important for organizations to properly measure the success of their IRM policies and strategy. IRM now offers operational reporting enhancements, including alert and case trends over time, breakdowns by region and department, top triggering events, and average time to remediate a case. These new reports will help customers better understand their risk landscape and plan for resource allocation.

•   To help triage investigations and minimize fatigue from the high volume of alerts, we are introducing alert spotlighting in IRM. With this capability, high-priority alerts will be spotlighted to help analysts prioritize the most important items based on risk score, activity type, priority content, and potential user impact.

•   To facilitate the setup of new data security policies, we are also adding two additional pre-configured quick policy templates, one for the protection of crucial data sets and one for sensitive data exfiltration to personal email. These templates are designed for admins who want to deploy scenario-specific policies with minimal configuration, enabling a faster and more efficient start.

•   Customers will now also be able to transfer IRM data to other solutions through Graph APIs for large-scale data export involving IRM incidents, alerts, indicators, and activities, enabling richer extensibility so organizations can manage their teams’ cases and tasks in the tool of their preference (see the export sketch below).
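The announcement does not spell out the export endpoints, so the sketch below only illustrates the standard Graph pattern for large-scale export, following @odata.nextLink pages; the IRM export path and permission are hypothetical placeholders to be replaced with the documented values.

```python
# Sketch of large-scale export using the standard Graph pagination pattern
# (@odata.nextLink). The IRM export path below is a HYPOTHETICAL placeholder;
# substitute the endpoint documented for the IRM export Graph APIs.
import requests

ACCESS_TOKEN = "<app-token-with-the-required-IRM-export-permission>"  # placeholder
START_URL = "https://graph.microsoft.com/beta/<irm-export-endpoint>"  # hypothetical

def export_all(start_url: str):
    """Yield every record, following @odata.nextLink pages until exhausted."""
    url = start_url
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    while url:
        page = requests.get(url, headers=headers, timeout=60).json()
        yield from page.get("value", [])
        url = page.get("@odata.nextLink")  # None once the last page is reached

for record in export_all(START_URL):
    print(record)
```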

These capabilities will start rolling out to customers’ tenants within the coming weeks.

Get started

 

[1] Rethinking Security from the Inside Out – Microsoft Security (March 2024)

[2] Data security as a foundation for secure AI adoption – Microsoft Security (August 2024)