Microsoft Security Community Blog

Mitigating insider risks in the age of AI with Microsoft Purview Insider Risk Management

Nathalia_Borges
Mar 24, 2025

New GenAI signals, enhanced capabilities, and agentic workflows in Insider Risk Management to help you protect your data

The rapid rise of generative AI presents both transformative opportunities and critical security challenges for organizations handling sensitive data. As data security teams grapple with an increasingly fragmented tooling landscape and a relentless stream of alerts, the use of AI within organizations can also introduce new risks, such as data leakage and exposure of sensitive information through third-party generative AI apps. AI has the potential to both reinforce security protocols and automate defenses, enhancing resilience against evolving data risks. However, securing AI itself is just as vital, ensuring the very tools organizations rely on remain protected. By adopting integrated and intelligent data security solutions, businesses can not only safeguard sensitive data but also empower teams to operate more efficiently, shifting focus from reactive to proactive defense.

Microsoft Purview Insider Risk Management (IRM) addresses these pressing needs by offering comprehensive visibility into how users interact with data within your organization. It integrates machine learning-based detection controls, dynamic protections, and advanced privacy controls to help organizations effectively manage and mitigate insider risks. IRM correlates various signals to identify potentially malicious or inadvertent insider risks, such as IP theft, data leakage, and security violations, and enables customers to create policies tailored to their internal policies, governance, and organizational requirements. Built with privacy by design, IRM pseudonymizes users by default and applies role-based access controls and audit logs to help ensure user-level privacy.

 

Expanding visibility into risky AI usage across more AI workloads

Despite the high interest in AI adoption, the 2024 Microsoft Data Security Index[1] reveals that 84% of surveyed organizations want greater confidence in managing and discovering data input into AI applications. Data leakage remains a top concern for 80% of business leaders, while the rise of shadow AI adds to the complexity, with 78% of users bringing their own AI tools like ChatGPT. Emerging threats such as indirect prompt injection attacks are also on the radar, with 11% identifying them as critical risks. Therefore, it is crucial for organizations to understand how their employees interact with generative AI tools. At Ignite last fall, we announced several new capabilities to help identify risky generative AI usage by insiders on Microsoft 365 Copilot, ChatGPT Enterprise, and Copilot Studio, enabling organizations to accelerate their AI adoption while ensuring robust data security and governance.

To continue addressing the expansion of GenAI tools and scenarios, today we’re excited to announce new risky GenAI usage detections in IRM for the enterprise-built apps Copilot for Fabric and Security Copilot, as well as for third-party apps such as Gemini, ChatGPT consumer, Copilot Chat, and DeepSeek.

The detections will cover a wide range of activities, including risky prompts that contain sensitive information or exhibit risky intent, as well as sensitive responses that either contain sensitive information or are generated from sensitive files or sites, enabling admins to identify and mitigate risky AI usage.
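The detection logic described above can be sketched in miniature. This is a hypothetical illustration only: real IRM detections rely on Microsoft Purview's built-in sensitive information type (SIT) classifiers and intent analysis, not the hand-rolled patterns and phrase lists shown here.

```python
import re

# Hypothetical SIT patterns for illustration; actual detection uses
# Microsoft Purview's built-in classifiers, not hand-written regexes.
SIT_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical phrases suggesting risky intent in a prompt.
RISKY_INTENT_PHRASES = ["bypass dlp", "exfiltrate", "without being detected"]

def classify_prompt(prompt: str) -> dict:
    """Flag a GenAI prompt that contains sensitive info or exhibits risky intent."""
    text = prompt.lower()
    matched_sits = [name for name, pat in SIT_PATTERNS.items() if pat.search(prompt)]
    risky_intent = [p for p in RISKY_INTENT_PHRASES if p in text]
    return {
        "sensitive_info": matched_sits,
        "risky_intent": risky_intent,
        "is_risky": bool(matched_sits or risky_intent),
    }
```

The same two-sided check (sensitive content in the prompt, risky phrasing around it) would apply symmetrically to responses generated from sensitive files or sites.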

 

(Figure 1: IRM risky AI usage activity detected in Copilot for Fabric)

 

Additionally, these signals will contribute to Adaptive Protection insider risk levels, further enhancing the data security posture of the organization, and facilitating the balance between protection and productivity. Adaptive Protection will also be leveraged by the new data security capabilities native within the Microsoft Edge for Business browser to dynamically enforce different levels of protection based on the risk level of the user interacting with the AI application. For example, Adaptive Protection can enable admins to block low-risk users from submitting prompts containing the highest-sensitivity classifiers for their organization, such as M&A-related data or intellectual property, while blocking prompts containing any sensitive information type (SIT) for an elevated-risk user.
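The Adaptive Protection example above is essentially a tiered decision rule. The sketch below illustrates that rule only; the classifier names, risk levels, and tier membership are hypothetical placeholders, since in practice these are configured per organization in Microsoft Purview.

```python
# Hypothetical classifier tiers; real tiers and risk levels are configured
# per-organization in Microsoft Purview Adaptive Protection.
HIGHEST_SENSITIVITY = {"mna_data", "intellectual_property"}
ANY_SIT = HIGHEST_SENSITIVITY | {"credit_card", "us_ssn", "customer_pii"}

def should_block_prompt(user_risk_level: str, detected_classifiers: set) -> bool:
    """Mirror the example in the text: elevated-risk users are blocked on any
    sensitive information type, while low-risk users are blocked only on the
    organization's highest-sensitivity classifiers."""
    if user_risk_level == "elevated":
        return bool(detected_classifiers & ANY_SIT)
    # Low-risk users: block only the most sensitive categories.
    return bool(detected_classifiers & HIGHEST_SENSITIVITY)
```

The design point is that the same prompt can be allowed or blocked depending on the user's current insider risk level, which is what lets Adaptive Protection balance protection and productivity.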

These updates will empower organizations to better manage and secure their AI usage and safeguard valuable data, increasing their confidence level in their AI adoption. Check out all the new capabilities we're announcing today across Microsoft Security to secure data in the era of AI.

 

Introducing Alert Triage Agents in Insider Risk Management

There are also significant opportunities to leverage generative AI to enhance data security teams' efficiency and enable them to prioritize critical tasks and risks. Organizations face an average of 66 data security alerts per day, but teams only have time to review 63% of them[2]. The large volume of alerts, combined with an ongoing shortage of security professionals, makes it increasingly challenging for organizations to stay ahead of potential data security risks and avoid blind spots in their data security programs.

To support customers in addressing these challenges, we are thrilled to announce the Alert Triage Agent in IRM. This new autonomous Security Copilot capability, integrated into IRM, offers an agent-managed alert queue that highlights the IRM alerts posing the greatest risk to your organization so they can be tackled first. The agent analyzes the content and potential intent behind an alert, based on the organization’s chosen parameters, to identify which alerts might signal bigger impacts on sensitive data and need to be prioritized, and it also provides an explanation of its categorization logic.

Today, most teams still rely on manual triage, static rule-based filtering, and siloed security tools, which are often ineffective and create blind spots in data security programs. Now, admins can choose which IRM policies the agent should triage alerts from and which information it should focus on, as well as provide the agent with inputs to calibrate results to better match the organization’s priorities.

 

(Figure 2: Alert Triage Agent in IRM queue, with prioritization rationale)


Customers will be able to leverage the following benefits:

  • Enhanced alert management: Improves alert prioritization, addressing critical risks first and leading to faster response times.
  • Increased team efficiency: Teams with varying degrees of expertise will be able to efficiently handle more alerts, improving the overall percentage of risks addressed.
  • Dynamic response: The agent will autonomously identify important alerts based on the selected parameters and will learn from feedback in natural language, dynamically fine-tuning alert prioritization.
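The prioritization idea behind the benefits above can be sketched as a simple scoring-and-ranking loop. This is only an illustration of the concept: the actual Alert Triage Agent reasons over admin-chosen parameters with Security Copilot rather than using fixed weights, and every field name and weight below is a hypothetical placeholder.

```python
# Hypothetical alert-priority scoring; the real agent uses Security Copilot
# reasoning, not fixed weights like these.
def score_alert(alert: dict, focus_policies: set) -> float:
    score = 0.0
    if alert["policy"] in focus_policies:            # admin-selected policies first
        score += 2.0
    score += alert.get("sensitive_files", 0) * 0.5   # more sensitive data, higher rank
    score += {"low": 0, "moderate": 1, "elevated": 2}[alert.get("user_risk", "low")]
    return score

def triage(alerts: list, focus_policies: set) -> list:
    """Return alerts ordered highest-priority first."""
    return sorted(alerts, key=lambda a: score_alert(a, focus_policies), reverse=True)
```

Feedback in natural language would, in the real agent, adjust how these factors are weighed over time rather than requiring admins to hand-tune a formula.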

The Alert Triage Agent is seamlessly integrated within IRM to enhance workflow efficiency through Security Copilot, a trusted platform that dynamically learns and adapts to emerging threats with a proven track record.[3]

Alert Triage Agents in Purview public preview starts rolling out on April 27. To get started, visit the Security Copilot product page for more information. Already using Security Copilot? Make sure you’re signed up for the Security Copilot Customer Connection Program (CCP) to receive the latest updates and try the new features — join today at aka.ms/JoinCCP.

 

New insider risk scenarios and continuous product experience improvement

We are also continually expanding IRM scenarios and improving admin experiences to better address the most pressing challenges customers face.

When facing data breaches, organizations struggle to understand the sensitivity and importance of the data involved due to fractured workflows and multiple tools. Breaches involving stolen credentials take nearly 10 months to identify and contain[4], and customers have expressed the need for a unified product to reduce incident resolution time and safeguard their data. Today we’re excited to announce the integration of IRM with the new Microsoft Purview Data Security Investigations (DSI). DSI accelerates data risk mitigation using generative AI-powered deep content analysis enriched with activity insights to dive deep into organizations’ emails, instant messages, and documents. When evaluating a risky user in IRM, you can now escalate the case to DSI instead of reviewing files individually. The integration allows a data security admin to identify when a risky user warrants deeper investigation and to launch a pre-scoped investigation directly from the user activity pane, where they can view content analysis related to that user and better assess post-incident data impact.

 

(Figure 3: DSI case being launched from Insider Risk Management)

 

Data security context is also vital for SOC teams to better understand user intent and the sensitivity of the data involved in a possible attack. To strengthen the integration of data security into the SOC experience, we are bringing insider risk user analytics to Microsoft Defender XDR on the user entity page, for all users. Now, any potentially risky behavior related to a user involved in an XDR incident will be surfaced, regardless of whether the user has triggered an IRM policy, enabling SOC analysts to evaluate behavior patterns that could have influenced the incident. User analytics will also be available for DLP and Communication Compliance investigations, and on Defender XDR Advanced Hunting tables in a few months.

Deepening the connection between IRM and the broader Microsoft Purview stack, we’re now adding DLP alerts as IRM indicators to detect when a user activity triggers a DLP policy. This capability gives admins greater visibility and efficiency by consolidating a user’s risky activity that triggers DLP and/or IRM policies, eliminating the need to switch between solutions to evaluate data risks. We are also introducing a new ‘Email to personal email accounts’ indicator to alert when business-sensitive data is potentially leaked via email attachments sent to free public domains or personal email accounts. Now, admins will be able to better understand the intent behind emails with sensitive attachments being sent to a personal account for non-business reasons.
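The new email indicator boils down to a check on recipient domains plus attachment sensitivity. The sketch below is a hypothetical simplification: the real indicator is evaluated inside Microsoft Purview, and the domain list and function here are illustrative placeholders.

```python
# Hypothetical set of free/public email domains; the real IRM indicator
# maintains and evaluates this logic inside Microsoft Purview.
FREE_PUBLIC_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com", "proton.me"}

def is_personal_email_leak(recipients: list, has_sensitive_attachment: bool) -> bool:
    """Flag an email with a sensitive attachment addressed to a free public
    email provider (a likely personal account)."""
    if not has_sensitive_attachment:
        return False
    return any(
        r.split("@")[-1].lower() in FREE_PUBLIC_DOMAINS
        for r in recipients
    )
```

In practice this signal would feed the user's risk score alongside other IRM indicators rather than acting as a standalone block.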

To enhance the end-user experience, we have made several improvements that enable teams to refine their data security strategy and facilitate insider risk investigations. Enhancements include:

  • Increased IRM policy template units: Policy creation limits increase from 20 to 100 policies per template, enabling organizations to create a more granular policy strategy to better fit their needs, such as different data security needs across groups of the organization or regulatory requirements.
  • Endpoint collection policy update: Admins can now leverage collection policies to more granularly scope what is collected from the endpoint and used in IRM policies.
  • Email signature exclusion enhancement: Updated keyword exclusion logic filters out noisy signals when email signature images are treated as attachments by a policy.

These capabilities will start rolling out to customers’ tenants within the coming weeks.

Learn more about the innovations designed to help your organization protect data, defend against cyber threats, and stay compliant. Join Microsoft leaders online at Microsoft Secure on April 9.

Get started

 

[1] Microsoft Data Security Index annual report highlights evolving generative AI security needs | Microsoft Security Blog

[2]  Microsoft Data Security Index annual report highlights evolving generative AI security needs | Microsoft Security Blog

[3] Randomized Controlled Trial for Copilot for Security

[4] Cost of a data breach 2024 | IBM

Updated Mar 26, 2025
Version 3.0