Microsoft Purview

Purview DLP Behaviours in Outlook Desktop
We are currently testing Microsoft Purview DLP policies for user awareness: sensitive information shared externally triggers a policy tip, with override allowed (justification options enabled) and no blocking action configured. We are observing the following behaviours in Outlook Desktop:

1. Inconsistent policy tip display across Outlook Desktop for Windows clients. For some users the policy tip renders correctly, while for others it appears with duplicated/stacked lines of text. This occurs across users with similar configurations.
2. Override without justification. Users are able to click “Send Anyway/Confirm and send” without selecting any justification option (e.g. business justification, manager approval), which bypasses the intended control.

(Screenshots: new Outlook and classic Outlook.)

This has been observed on Outlook Desktop (Microsoft 365 Apps), including:

- Version 2602 (Build 19725.20170 Click-to-Run)
- Version 2602 (Build 16.0.19725.20126 MSO)

Has anyone experienced similar behaviour with DLP policy tips or override enforcement in Outlook Desktop? Keen to understand whether this is a known issue, or whether there are any recommended fixes or workarounds.

How to identify users handling SITs before purchasing Microsoft Purview licenses?
Posting this on behalf of a customer we are currently advising as a Microsoft Partner. The customer is in the evaluation stage of Microsoft Purview and has raised a licensing concern that we would like the community's guidance on.

CUSTOMER'S CONCERN

Purview licenses are user-based, meaning every user who directly or indirectly benefits from the service needs to be licensed. However, to determine which users actually handle sensitive data (and therefore require a license), tools like Content Explorer and Activity Explorer are needed — both of which require an E5 or equivalent license to access in the first place. This creates a chicken-and-egg problem for the customer: they need Purview to identify who handles sensitive data, but they need to know who handles sensitive data to decide how many Purview licenses to buy.

QUESTIONS ON BEHALF OF THE CUSTOMER

1. Is there an official Microsoft-supported mechanism or tool that allows customers to assess their SIT exposure and identify affected users before committing to a full Purview license purchase?
2. Is it viable for the customer to purchase a single license (1 qty) assigned to an admin account to perform a tenant-wide scoping and discovery exercise — and would that single license provide sufficient access to identify all users handling SITs across the tenant?
3. If the 90-day Purview E5 trial is the recommended path, does Content Explorer automatically scan and surface SIT matches across all users in the tenant without requiring any pre-configured DLP policies or sensitivity labels to be set up first?

As a partner, we want to ensure we are guiding our customer toward the correct pre-purchase assessment approach before recommending a licensing SKU and quantity. Any guidance from the community or Microsoft would be greatly appreciated.

Detecting Plain‑Text Password Exposure Using Custom Regex in Microsoft Purview
Strong authentication controls like MFA significantly reduce account compromise — but they don’t eliminate the risk of password exposure. In many organizations, users still interact with legacy systems, third‑party tools, or service accounts that rely on password‑only authentication. When those credentials are shared or stored in plain text — whether accidentally or out of convenience — they introduce a serious security risk.

Microsoft Purview helps organizations identify and protect sensitive information using Sensitive Information Types (SITs). While built‑in detections provide a solid foundation, certain scenarios benefit from organization‑specific context and policy‑driven patterns. This post walks through how to extend password detection using a custom regex pattern — allowing you to identify strong passwords stored in plain text and respond before exposure turns into an incident.

The Challenge: Passwords Still Appear in Everyday Content

Despite user awareness training and improved security posture, passwords still surface in places like:

- Emails shared for “quick access”
- Documents stored in collaboration sites
- Notes created during troubleshooting
- Spreadsheets used for credential tracking

Even a single exposed password — especially for non‑MFA‑protected systems — can lead to unauthorized access or data leakage.

Extending Password Detection to Align with Organizational Policies

Microsoft Purview includes built‑in patterns to detect generic password formats. These offer a strong baseline and are effective for broad protection scenarios. However, many organizations define specific password standards and want detection logic that reflects how passwords are referenced according to their organizational policy.
For example:

- Enforcing minimum and maximum password length
- Requiring complexity (letters, digits, special characters)
- Detecting passwords only when explicitly referenced, such as near the word password
- Reducing false positives from random strong strings (API keys, hashes, tokens)

In these cases, custom regex‑based Sensitive Information Types allow organizations to build on existing protection and apply targeted, high‑confidence detection.

Detection Requirements for This Scenario

In this example, we want to identify passwords that meet all of the following criteria:

✔ Minimum length: 10 characters
✔ Maximum length: 20 characters
✔ Must contain:
  - At least one alphabet character
  - At least one digit
  - At least one special character
✔ Must appear in close proximity (within 30 characters) of a keyword such as: password, pwd, passcode

This ensures we’re detecting intentional password disclosures, not unrelated strong strings.

In this scenario, the detection logic is intentionally split across three components:

- Primary element – Detects password length and structure
- First supporting element – Validates password complexity rules
- Second supporting element (keywords) – Adds human context using proximity

This structured design ensures that detection aligns closely with real‑world password disclosure patterns.

Detection Architecture Overview

Component                        Purpose
Primary Element                  Identifies candidate password strings
Supporting Element (Complexity)  Confirms password strength
Supporting Element (Keywords)    Confirms contextual intent

Primary Element: Password Length Identification

The primary element focuses purely on identifying potential password strings based on length.
Regex Pattern

\S{10,20}

What this enforces:

- No whitespace characters
- Minimum length: 10 characters
- Maximum length: 20 characters

Proximity Configuration

Distance between primary and supporting element: 1 character. This ensures that the supporting complexity patterns evaluate directly against the same string, rather than unrelated values nearby.

First Supporting Element: Password Complexity Validation

The first supporting element ensures that the detected string meets organizational password complexity requirements. All of the following patterns are grouped within the same supporting element, and no internal proximity is configured (as they evaluate the same primary value).

Complexity Patterns Included

Requirement                      Regex Pattern
At least one uppercase letter    [A-Z]
At least one lowercase letter    [a-z]
At least one digit               [0-9]
Allowed character set            [A-Za-z0-9!@#$%^&*()_+\-=]{10,}
At least one special character   [!@#$%&*+=]

This approach avoids relying on a single large regex, making the detection more readable, maintainable, and auditable.

Second Supporting Element: Keyword Context (Human Intent)

To further improve accuracy, a second supporting element is used to ensure the password appears in a meaningful, human context.

Keyword List (Case‑Insensitive)

- credential
- password
- pwd
- pswd

Keywords are configured in case‑insensitive mode to match variations such as Password, PWD, or Pswd. (You can change the keywords and proximity characters as needed.)

Proximity Configuration

Proximity value: 30 characters

Why 30 characters? This value accounts for:

- Maximum keyword length: 10 characters
- Maximum password length: 20 characters

This ensures the keyword and password must appear within the same meaningful sentence or fragment, for example:

- Password: P@ssW0rd123!
- credential=Adm1n#Secure
- pwd -> Qwerty@2024!

It avoids triggering on strings with no credential context, such as:

- RandomStrongString123!
- API_KEY = A9$kLmZpQw
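Before configuring these patterns as a custom SIT, it can help to prototype the same three-part logic locally. The sketch below is only an illustration of the matching behaviour described above — the function name and structure are invented for the example, and Purview's own SIT engine handles proximity and confidence evaluation for you:

```python
import re

# Primary element: candidate strings of 10-20 non-whitespace characters.
PRIMARY = re.compile(r"\S{10,20}")

# First supporting element: complexity rules, all of which must match.
COMPLEXITY = [
    re.compile(r"[A-Z]"),                              # at least one uppercase letter
    re.compile(r"[a-z]"),                              # at least one lowercase letter
    re.compile(r"[0-9]"),                              # at least one digit
    re.compile(r"[A-Za-z0-9!@#$%^&*()_+\-=]{10,}"),    # allowed character set
    re.compile(r"[!@#$%&*+=]"),                        # at least one special character
]

# Second supporting element: a keyword within 30 characters of the candidate.
KEYWORDS = re.compile(r"credential|password|pwd|pswd", re.IGNORECASE)
PROXIMITY = 30

def find_password_disclosures(text: str) -> list[str]:
    """Return candidate passwords that pass complexity and keyword proximity."""
    hits = []
    for m in PRIMARY.finditer(text):
        candidate = m.group()
        if not all(p.search(candidate) for p in COMPLEXITY):
            continue
        window = text[max(0, m.start() - PROXIMITY):m.end() + PROXIMITY]
        if KEYWORDS.search(window):
            hits.append(candidate)
    return hits
```

Running this over the sample strings above flags `Password: P@ssW0rd123!` and `pwd -> Qwerty@2024!` while ignoring `RandomStrongString123!` and `API_KEY = A9$kLmZpQw`, mirroring the intended SIT behaviour.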
How This Comes Together in Microsoft Purview

When implemented as a custom Sensitive Information Type:

- The primary element detects candidate passwords
- The first supporting element confirms password strength
- The second supporting element confirms user intent via keywords
- Proximity rules ensure all components relate to the same disclosure

This SIT can then be used across:

- Data Loss Prevention (DLP)
- Endpoint DLP
- Auto‑labelling
- Email and collaboration workload protection

Why This Design Is Effective

This structured approach allows organizations to:

- Detect real password disclosures with high confidence
- Align detection with internal password policy
- Reduce false positives from random strong strings
- Apply protection consistently across Microsoft 365 workloads
- Maintain a clean, auditable detection design

Most importantly, it extends Microsoft Purview’s native capabilities without changing the underlying security model.

Final Takeaway

Even in environments with strong authentication controls, password exposure remains a real risk — especially for legacy and third‑party systems. By combining length validation, complexity enforcement, and contextual keyword proximity, Microsoft Purview enables precise and scalable password detection, helping organizations identify and protect sensitive credentials before they are misused.

Why UK Enterprise Cybersecurity Is Failing in 2026 (And What Leaders Must Change)
Enterprise cybersecurity in large organisations has always been an asymmetric game. But with the rise of AI‑enabled cyber attacks, that imbalance has widened dramatically - particularly for UK and EMEA enterprises operating complex cloud, SaaS, and identity‑driven environments. Microsoft Threat Intelligence and Microsoft Defender Security Research have publicly reported a clear shift in how attackers operate: AI is now embedded across the entire attack lifecycle. Threat actors use AI to accelerate reconnaissance, generate highly targeted phishing at scale, automate infrastructure, and adapt tactics in real time - dramatically reducing the time required to move from initial access to business impact.

In recent months, Microsoft has documented AI‑enabled phishing campaigns abusing legitimate authentication mechanisms, including OAuth and device‑code flows, to compromise enterprise accounts at scale. These attacks rely on automation, dynamic code generation, and highly personalised lures - not on exploiting traditional vulnerabilities or stealing passwords.

The Reality Gap: Adaptive Attackers vs. Static Enterprise Defences

Meanwhile, many UK enterprises still rely on legacy cybersecurity controls designed for a very different threat model - one rooted in a far more predictable world. This creates a dangerous "Resilience Gap." Here is why your current stack is failing - and the C-suite strategy required to fix it.

1. The Failure of Traditional Antivirus in the AI Era

Traditional antivirus (AV) relies on static signatures and hashes. It assumes malicious code remains identical across different targets. AI has rendered this assumption obsolete. Modern malware now uses automated mutation to generate unique code variants at execution time, and adapts behaviour based on its environment. Microsoft Threat Intelligence has observed threat actors using AI‑assisted tooling to rapidly rewrite payload components, ensuring that every deployment looks subtly different.
In this model, there is no reliable signature to detect. By the time a pattern exists, the attacker has already moved on. Signature‑based detection is not just slow - it is structurally misaligned with AI‑driven attacks.

The Risk: If your security relies on "recognising" a threat, you are already breached. By the time a signature exists, the attacker has evolved.

The C-Suite Pivot: Shift investment from artifact detection to EDR/XDR (Endpoint and Extended Detection and Response). We must prioritise behavioural analytics and machine learning models that identify intent rather than file names.

2. Why Perimeter Firewalls Fail in a Cloud-First World

Many UK enterprises still rely on firewalls enforcing static allow/deny rules based on IP addresses and ports. This model worked when applications were predictable and networks clearly segmented. Today, enterprise traffic is encrypted, cloud‑hosted, API‑driven, and deeply integrated with SaaS and identity services. AI‑assisted phishing campaigns abusing OAuth and device‑code flows demonstrate this clearly. From a network perspective, everything looks legitimate: HTTPS traffic to trusted identity providers. No suspicious port. No malicious domain. Yet the attacker successfully compromises identity.

The Risk: Traditional firewalls are "blind" to identity-based breaches in cloud environments.

The C-Suite Pivot: Move to Identity-First Security. Treat identity as the new control plane, integrating signals like user risk, device health, and geolocation into every access decision.

3. The Critical Weakness of Single-Factor Authentication

Despite clear NCSC guidance, single-factor passwords remain a common vulnerability in legacy applications and VPNs. AI-driven credential abuse has changed the economics of these attacks. Threat actors now deploy adaptive phishing campaigns that evolve in real time. Microsoft has observed attackers using AI to hyper-target high-value UK identities - specifically CEOs, Finance Directors, and Procurement leads.
The Risk: Static passwords are now the primary weak link in UK supply chain security.

The C-Suite Pivot: Mandate phishing‑resistant MFA (passkeys or hardware security keys). Implement Conditional Access policies that evaluate risk dynamically at the moment of access, not just at login.

(Figure: Legacy Security vs. AI‑Era Reality)

4. The Inherent Risk of VPN-Centric Security

VPNs were built on a flawed assumption: that anyone "inside" the network is trustworthy. In 2026, this logic is a liability. AI-assisted attackers now use automation to map internal networks and identify escalation paths the moment they gain VPN access. Furthermore, Microsoft has tracked nation-state actors using AI to create synthetic employee identities - complete with fake resumes and deepfake communication. In these scenarios, VPN access isn't "hacked"; it is legally granted to a fraudster.

The Risk: A compromised VPN gives an attacker the "keys to the kingdom."

The C-Suite Pivot: Transition to Zero Trust Architecture (ZTA). Access must be explicit, scoped to the specific application, and continuously re‑evaluated using behavioural signals.

5. Data: The High-Velocity Target

Sensitive data sitting unencrypted in legacy databases or backups is a ticking time bomb. In the AI era, data discovery is no longer a slow, manual process for a hacker. Attackers now use AI to instantly analyse your directory structures, classify your files, and prioritise high-value data for theft. Unencrypted data significantly increases your "blast radius," turning a containable incident into a catastrophic board-level crisis.

The Risk: Beyond the technical breach, unencrypted data leads to massive UK GDPR fines and irreparable brand damage.

The C-Suite Pivot: Adopt Data-Centric Security. Implement encryption by default, classify data and apply sensitivity labels, and start board-level discussions on post‑quantum cryptography (PQC) to future-proof your most sensitive assets.

6. The Failure of Static IDS

Traditional Intrusion Detection Systems (IDS) rely on known indicators of compromise - assuming attackers reuse the same tools and techniques. AI‑driven attacks deliberately avoid that assumption. Threat actors are now using Large Language Models (LLMs) to weaponize newly disclosed vulnerabilities within hours. While your team waits for a "known pattern" to be updated in your system, the attacker is already using a custom, AI-generated exploit.

The Risk: Your team is defending against yesterday's news while the attacker is moving at machine speed.

The C-Suite Pivot: Invest in Adaptive Threat Detection. Move toward graph‑based XDR platforms that correlate signals across email, endpoint, and cloud to automate investigation and response before the damage spreads.

(Figure: From Static Security to Continuous Security)

Closing Thought: Security Is a Journey, Not a Destination

For UK enterprises, the shift toward adaptive cybersecurity is no longer optional - it is increasingly driven by regulatory expectation, board oversight, and accountability for operational resilience. Recent UK cyber resilience reforms and evolving regulatory frameworks signal a clear direction of travel: cybersecurity is now a board‑level responsibility, not a back‑office technical concern. Directors and executive leaders are expected to demonstrate effective governance, risk ownership, and preparedness for cyber disruption - particularly as AI reshapes the threat landscape.

AI is not a future cybersecurity problem. It is a current force multiplier for attackers, exposing the limits of legacy enterprise security architectures faster than many organisations are willing to admit. The uncomfortable truth for boards in 2026 is that no enterprise is 100% secure. Intrusions are inevitable. Credentials will be compromised. Controls will be tested. The difference between a resilient enterprise and a vulnerable one is not the absence of incidents, but how risk is managed when they occur.
In mature organisations, this means assuming breach and designing for containment:

- Access controls that limit blast radius
- Least privilege and conditional access, restricting attackers to the smallest possible scope if an identity is compromised
- Data‑centric security using automated classification and encryption, ensuring that even when access is misused, sensitive data cannot be freely exfiltrated

As a Senior Enterprise Cybersecurity Architect, I see this moment as a unique opportunity. AI adoption does not have to repeat the mistakes of earlier technology waves, where innovation moved fast and security followed years later. We now have a rare chance to embed security from day one - designing identity controls, data boundaries, automated monitoring, and governance before AI systems become business‑critical. When security is built in upfront, enterprises don’t just reduce risk - they gain the confidence to move faster and unlock AI’s value safely.

Security is no longer a “department”. In the age of AI, it is a continuous business function - essential to preserving trust and maintaining operational continuity as attackers move at machine speed.

References:

- Inside an AI‑enabled device code phishing campaign | Microsoft Security Blog
- AI as tradecraft: How threat actors operationalize AI | Microsoft Security Blog
- Detecting and analyzing prompt abuse in AI tools | Microsoft Security Blog
- Post-Quantum Cryptography | CSRC
- Microsoft Digital Defense Report 2025 | Microsoft
- https://www.ncsc.gov.uk/news/government-adopt-passkey-technology-digital-services

Feature Request: Extend Security Copilot inclusion (M365 E5) to M365 A5 Education tenants
Background

At Ignite 2025, Microsoft announced that Security Copilot is included for all Microsoft 365 E5 customers, with a phased rollout starting November 18, 2025. This is a significant step forward for security operations.

The gap

Microsoft 365 A5 for Education is the academic equivalent of E5 — it includes the same core security stack: Microsoft Defender, Entra, Intune, and Purview. However, the Security Copilot inclusion explicitly covers only commercial E5 customers. There is no public roadmap or timeline for extending this benefit to A5 education tenants.

Why this matters

Education institutions face the same cybersecurity threats as commercial organizations — often with fewer dedicated security resources. The A5 license was positioned as the premium security offering for education. Excluding it from Security Copilot inclusion creates an inequity between commercial and education customers holding functionally equivalent license tiers.

Request

We would like Microsoft to:

- Confirm whether Security Copilot inclusion will be extended to M365 A5 Education tenants
- If yes, provide an indicative timeline
- If no, clarify the rationale and what alternative paths exist for education customers

Are other EDU admins in the same situation? Would appreciate any upvotes or comments to help raise visibility with the product team.

Security Copilot Skilling Series
Security Copilot joins forces with your favorite Microsoft Security products in a skilling series miles above the rest. The Security Copilot Skilling Series is your opportunity to strengthen your security posture through threat detection, incident response, and leveraging AI for security automation. These technical skilling sessions are delivered live by experts from our product engineering teams. Come ready to learn, engage with your peers, ask questions, and provide feedback. Upcoming sessions are noted below and will be available on-demand on the Microsoft Security Community YouTube channel.

Coming Up

Apr. 23 | Getting started with Security Copilot
New to Security Copilot? This session walks through what you actually need to get started, including E5 inclusion requirements and a practical overview of the core experiences and agents you will use on day one.

Apr. 28 | Security Copilot Agents, DSPM AI Observability, and IRM for Agents
This session gives an overview of how Microsoft Purview supports AI risk visibility and investigation through Data Security Posture Management (DSPM) and Insider Risk Management (IRM), alongside Security Copilot–powered agents. It covers what AI Observability in DSPM is, as well as IRM for Agents in Copilot Studio and Azure AI Foundry. Attendees will learn about the IRM Triage Agent and DSPM Posture Agent and their deployment, and will gain an understanding of how DSPM and IRM capabilities can be leveraged to improve visibility, context, and response for AI-related data risks in Microsoft Purview.

Now On-Demand

Apr. 2 | Current capabilities of Copilot in Intune
Speakers: Amit Ghodke and Carlos Brito
This session on Copilot in Intune & Agents explores the current embedded Copilot experiences and AI‑powered agents available through Security Copilot in Microsoft Intune.
Attendees will learn how these capabilities streamline administrative workflows, reduce manual effort, and accelerate everyday endpoint management tasks, helping organizations modernize how they operate and manage devices at scale.

March 5 | Conditional Access Optimization Agent: What It Is & Why It Matters
Speaker: Jordan Dahl
Get a clear, practical look at the Conditional Access Optimization Agent—how it automates policy upkeep, simplifies operations, and uses new post‑Ignite updates like Agent Identity and dashboards to deliver smarter, standards‑aligned recommendations.

February 19 | Agents That Actually Work: From an MVP
Speaker: Ugur Koc, Microsoft MVP
Microsoft MVP Ugur Koc will share a real-world workflow for building agents in Security Copilot, showing how to move from an initial idea to a consistently performing agent. The session highlights how to iterate on objectives, tighten instructions, select the right tools, and diagnose where agents break or drift from expected behavior. Attendees will see practical testing and validation techniques, including how to review agent decisions and fine-tune based on evidence rather than intuition to help determine whether an agent is production ready.

February 5 | Identity Risk Management in Microsoft Entra
Speaker: Marilee Turscak
Identity teams face a constant stream of risky user signals, and determining which threats require action can be time‑consuming. This webinar explores the Identity Risk Management Agent in Microsoft Entra, powered by Security Copilot, and how it continuously monitors risky identities, analyzes correlated sign‑in and behavior signals, and explains why a user is considered risky. Attendees will see how the agent provides guided remediation recommendations—such as password resets or risk dismissal—at scale and supports natural‑language interaction for faster investigations.
The session also covers how the agent learns from administrator instructions to apply consistent, policy‑aligned responses over time.

January 28 | Security Copilot in Purview Technical Deep Dive
Speakers: Patrick David, Thao Phan, Alexandra Roland
Discover how AI-powered alert triage agents for Data Loss Prevention (DLP) and Insider Risk Management (IRM) are transforming incident response and compliance workflows. Explore new Data Security Posture Management (DSPM) capabilities that deliver deeper insights and automation to strengthen your security posture. This session will showcase real-world scenarios and actionable strategies to help you protect sensitive data and simplify compliance.

January 22 | Building Custom Agents: Unlocking Context, Automation, and Scale
Speakers: Innocent Wafula, Sean Wesonga, and Sebuh Haileleul
Microsoft Security Copilot already features a robust ecosystem of first-party and partner-built agents, but some scenarios require solutions tailored to your organization’s specific needs and context. In this session, you'll learn how the Security Copilot agent builder platform and MCP servers empower you to create tailored agents that provide context-aware reasoning and enterprise-scale solutions for your unique scenarios.

December 18 | What's New in Security Copilot for Defender
Speaker: Doug Helton
Discover the latest innovations in Microsoft Security Copilot embedded in Defender that are transforming how organizations detect, investigate, and respond to threats. This session will showcase powerful new capabilities—like AI-driven incident response, contextual insights, and automated workflows—that help security teams stop attacks faster and simplify operations.

Why Attend:
- Stay Ahead of Threats: Learn how cutting-edge AI features accelerate detection and remediation.
- Boost Efficiency: See how automation reduces manual effort and improves SOC productivity.
- Get Expert Insights: Hear directly from product leaders and explore real-world use cases.

Don’t miss this opportunity to future-proof your security strategy and unlock the full potential of Security Copilot in Defender!

December 4 | Discussion of Ignite Announcements
Speakers: Zineb Takafi, Mike Danoski, Oluchi Chukwunwere, Priyanka Tyagi, Diana Vicezar, Thao Phan, Alex Roland, and Doug Helton
Ignite 2025 is all about driving impact in the era of AI—and security is at the center of it. In this session, we’ll unpack the biggest Security Copilot announcements from Ignite on agents and discuss how Copilot capabilities across Intune, Entra, Purview, and Defender deliver end-to-end protection.

November 13 | Microsoft Entra AI: Unlocking Identity Intelligence with Security Copilot Skills and Agents
Speakers: Mamta Kumar, Sr. Product Manager; Margaret Garcia Fani, Sr. Product Manager
This session will demonstrate how Security Copilot in Microsoft Entra transforms identity security by introducing intelligent, autonomous capabilities that streamline operations and elevate protection. Customers will discover how to leverage AI-driven tools to optimize conditional access, automate access reviews, and proactively manage identity and application risks - empowering them toward a more secure and efficient digital future.

October 30 | What's New in Copilot in Microsoft Intune
Speaker: Amit Ghodke, Principal PM Architect, CxE CAT MEM
Join us to learn about the latest Security Copilot capabilities in Microsoft Intune. We will discuss what's new and how you can supercharge your endpoint management experience with the new AI capabilities in Intune.
October 16 | What’s New in Copilot in Microsoft Purview
Speaker: Patrick David, Principal Product Manager, CxE CAT Compliance
Join us for an insider’s look at the latest innovations in Microsoft Purview—where alert triage agents for DLP and IRM are transforming how we respond to sensitive data risks and improve investigation depth and speed. We’ll also dive into powerful new capabilities in Data Security Posture Management (DSPM) with Security Copilot, designed to supercharge your security insights and automation. Whether you're driving compliance or defending data, this session will give you the edge.

October 9 | When to Use Logic Apps vs. Security Copilot Agents
Speaker: Shiv Patel, Sr. Product Manager, Security Copilot
Explore how to scale automation in security operations by comparing the use cases and capabilities of Logic Apps and Security Copilot Agents. This webinar highlights when to leverage Logic Apps for orchestrated workflows and when Security Copilot Agents offer more adaptive, AI-driven responses to complex security scenarios.

All sessions will be published to the Microsoft Security Community YouTube channel - Security Copilot Skilling Series Playlist

Looking for more?

- Keep up on the latest information on the Security Copilot Blog.
- Join the Microsoft Security Community mailing list to stay up to date on the latest product news and events.
- Engage with your peers in one of our Microsoft Security discussion spaces.

Making AI Apps Enterprise-Ready with Microsoft Purview and Microsoft Foundry
Building AI apps is easy. Shipping them to production is not. Microsoft Foundry lets developers bring powerful AI apps and agents to production in days. But managing safety, security, and compliance for each one quickly becomes the real bottleneck. Every enterprise AI project hits the same wall: security reviews, data classification, audit trails, DLP policies, retention requirements. Teams spend months building custom logging pipelines and governance systems that never quite keep up with the app itself. There is a faster way. Enable Purview & Ship Faster! Microsoft Foundry now includes native integration with Microsoft Purview. When you enable it, every AI interaction in your subscription flows into the same enterprise data governance infrastructure that already protects your Microsoft 365 and Azure data estate. No SDK changes. No custom middleware. No separate audit system to maintain. Here is what you get: Visibility within 24 hours. Data Security Posture Management (DSPM) shows you total interactions, sensitive data detected in prompts and responses, user activity across AI apps, and insider risk scoring. This dashboard exists the moment you flip the toggle. Automatic data classification. The same classification engine that scans your Microsoft 365 tenant now scans AI interactions. Credit card numbers, health information, SSNs, and your custom sensitive information types are all detected automatically. Audit logs you do not have to build. Every AI interaction is logged in the Purview unified audit log. Timestamps, user identity, the AI app involved, files accessed, sensitivity labels applied. When legal needs six months of AI interactions for an investigation, the data is already there. DLP policy enforcement. Configure policies that block prompts containing sensitive information before they reach the model. This uses the same DLP framework you already know. eDiscovery, retention, and communication compliance. 
Search AI interactions alongside email and Teams messages. Set retention policies by selecting "Enterprise AI apps" as the location. Detect harmful or unauthorized content in prompts. How to Enable Prerequisite: You need the “Azure AI Account Owner” role assigned by your Subscription Owner. Open the Microsoft Foundry portal (make sure you are in the new portal) Select Operate from the top navigation Select Compliance in the left pane Select the Security posture tab Select the Azure Subscription Enable the toggle next to Microsoft Purview Repeat the above steps for other subscriptions By enabling this toggle, data exchanged within Foundry apps and agents' starts flowing to Purview immediately. Purview reports populate within 24 hours. What shows up in Purview? Purview Data Security Admins: Go to the Microsoft Purview portal, open DSPM, and follow the recommendation to setup “Secure interactions from enterprise AI apps” . Navigate to DSPM > Discover > Apps and Agents to review and monitor the Foundry apps built in your organization Navigate to DSPM > Activity Explorer to review the activity on a given agent/application What About Cost? Enabling the integration is free. Audit Standard is included for Foundry apps. You will only be charged for data security policies you setup for governing Foundry data. A Real-World Scenario: The Internal HR Assistant Consider a healthcare company building an internal AI agent for HR questions. The Old Way: The developer team spends six weeks building a custom logging solution to strip PII/PHI from prompts to meet HIPAA requirements. They have to manually demonstrate these logs to compliance before launch. The Foundry Way: The team enables the Purview toggle. Detection: Purview automatically flags if an employee pastes a patient ID into the chat. Retention: The team selects "Enterprise AI Apps" in their retention policy, ensuring all chats are kept for the required legal period. 
- Outcome: The app ships on schedule because Compliance trusts that the controls are inherited, not bolted on.

Takeaway

Microsoft Purview DSPM is a game changer for organizations looking to adopt AI responsibly. By integrating with Microsoft Foundry, it provides a comprehensive framework to discover, protect, and govern AI interactions, ensuring compliance, reducing risk, and enabling secure innovation.

We built this integration because teams kept spending months on compliance controls that already exist in Microsoft's stack. The toggle is there. The capabilities are real. Your security team already trusts Purview. Your compliance team already knows the tools. Enable it. Ship your agent. Let the infrastructure do what infrastructure does best: work in the background while you focus on what your application does.

Additional Resources

Documentation: Use Microsoft Purview to manage data security & compliance for Microsoft Foundry | Microsoft Learn

Credential Exposure Risk & Response Workbook
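The workbook described below builds on DLP rules that detect credentials, often via custom regex patterns registered as custom sensitive information types (SITs). Before creating such a SIT, it can help to prototype the regex locally against sample data. The patterns below are illustrative assumptions, not patterns Purview ships with; a minimal sketch:

```python
import re

# Illustrative credential patterns -- NOT the patterns Purview ships with.
# Tune and validate against your own data before turning them into custom SITs.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_password_assignment": re.compile(
        r"(?i)\b(?:password|passwd|pwd)\s*[:=]\s*\S{8,}"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9\-._~+/]{20,}=*"),
}

def find_credentials(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in `text`."""
    hits = []
    for name, rx in PATTERNS.items():
        for match in rx.finditer(text):
            hits.append((name, match.group(0)))
    return hits

sample = "db password = Sup3rS3cret!23 and key AKIAABCDEFGHIJKLMNOP"
for name, matched in find_credentials(sample):
    print(name, "->", matched)
```

Patterns like the generic password assignment tend to produce false positives, so test them against representative corpora and tighten them before enforcing DLP actions on the matches.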
How to set up the Workbook

Use the steps outlined in the Identify and Remediate Credentials article to put the right rules in place to start capturing credential data. You may choose to use custom regex patterns or more specific SITs that align with your scenario. Once that is done, this workbook transforms credential leakage detection into a measurable, executive-ready capability:

- End‑to‑end situational awareness: Correlates alerts across workloads, departments, credential types, and users to surface material exposure quickly.
- Actionable triage & forensics: Drill from trends to the artifact (message/file/URL), accelerating containment and root‑cause analysis.
- Risk‑aligned decisions: Quantifies exposure and response performance (creation vs. resolution trends) to guide investment and policy changes.
- Audit‑ready governance: Captures decisions, timelines, and outcomes for PCI/PII controls, identity hygiene, and secrets management.

Prerequisites

License requirements for Microsoft Purview Information Protection depend on the scenarios and features you use. To understand your licensing requirements and options, see the Information Protection sections in Microsoft 365 guidance for security & compliance and the related PDF download for feature-level licensing requirements.

- Endpoint DLP must be enabled so that endpoint interactions with sensitive content are included in audit logging.
- For SharePoint, OneDrive, Exchange, and Teams in Microsoft 365, you can enable policies that generate events, but not incidents, for important sensitive information types.
- Install Power BI Desktop to make use of the templates: Downloads - Microsoft Power BI

Step-by-step guided walkthrough

In this guide, we provide the high-level steps to get started with the new tooling.

1. Get the latest version of the report that you are interested in.
In this case, we will show the Board report.

2. Open the report in Power BI Desktop.
3. Authenticate with https://api.security.microsoft.com: select Organizational account, sign in, and then click Connect.
4. Authenticate with https://api.security.microsoft.com/api/advancedhunting: select Organizational account, sign in, and then click Connect.

What the Workbook Delivers

The workbook moves credential-protection programs from anecdotal to measurable. Combined with customers' outcome‑based metrics (operational risk, control risk, end‑user impact), it enables an executive‑level, data‑driven narrative for investment and policy decisions.

Troubleshooting tips:

- If you receive a (400): Bad request error, you likely do not have the necessary endpoint tables in Advanced Hunting.
- These errors may also appear when empty values are passed from the left-hand side of the KQL queries.

Detection trend

Apply filtering to this view based on the DLP policies that monitor credentials.

- Trend Analysis Over Time: Displays daily detection counts, helping identify spikes in credential leakage activity and enabling proactive investigation.
- Workload and Credential Type Breakdown: Shows which workloads (e.g., Endpoint, Exchange, OneDrive) and credential types are most affected, guiding targeted security measures.
- Detection Source Visibility: Highlights which security tools (Sentinel, Cloud App Security, Defender) are catching leaks, ensuring monitoring coverage and identifying gaps.
- Detailed Credential Exposure: Lists exposed credentials for quick validation and remediation, reducing the risk of misuse or compromise. (This part depends on the AI component.)
- Supports Incident Response: Enables rapid triage by correlating detection trends with specific credentials and sources, improving response times.
- Compliance and Audit Readiness: Provides clear evidence of credential monitoring and leakage detection for regulatory and governance reporting.

Credential incident trends

- Lifecycle Tracking of Credential Alerts: Visualizes creation and resolution trends over time, helping teams measure response efficiency and identify periods of heightened risk.
- Workload and Credential Type Breakdown: Shows which workloads (Endpoint, Exchange, OneDrive) and credential types are most impacted, enabling targeted mitigation strategies.
- Incident Type Analysis: Highlights the distribution of alerts by category (e.g., CredRisk, Agent), supporting prioritization of critical incidents.
- Detailed Alert Context: Provides message IDs and associated credentials for precise investigation and remediation, reducing time to contain threats.
- Performance and SLA Monitoring: Tracks resolution timelines to ensure compliance with internal security SLAs and regulatory requirements.
- Audit and Governance Support: Offers clear evidence of alert handling and closure, strengthening accountability and reporting.

Content view

- Workload-Level Risk Visibility: Highlights which workloads (e.g., SharePoint, Endpoint) have the highest credential exposure, enabling targeted security hardening.
- Departmental Risk Breakdown: Shows which departments (Security, Logistics, Sales) are most impacted, helping prioritise remediation for critical business areas.
- Credential Type Analysis: Identifies exposed credential types such as API keys, shared access keys, and tokens, guiding policy enforcement and rotation strategies.
- User and Document Correlation: Links exposed credentials to specific users and documents, supporting rapid investigation and containment of leaks.
- Comprehensive Drill-Down: Enables navigation from department → credential type → user → document for precise root cause analysis.
- Governance and Compliance Support: Provides auditable evidence of credential exposure across workloads and departments, strengthening regulatory reporting.

For endpoint, this view is an excellent way to catch applications that do not handle secrets safely and expose them in temporary files.

Force-directed graph

- Visual Alert Correlation: Displays a force-directed graph linking users to alert categories, making it easy to identify patterns and clusters of credential-related risks.
- High-Risk User Identification: Highlights users with multiple or severe alerts, enabling prioritisation for investigation and remediation.
- Credential Type and Department Context: Shows which credential types and departments are most associated with alerts, supporting targeted security measures.
- Alert Severity and Details: Provides a detailed table of alerts with severity and category, helping analysts quickly assess impact and urgency.
- Improved Threat Hunting: Enables analysts to trace relationships between users, alert types, and credential exposure for deeper root cause analysis.
- Compliance and Reporting: Offers clear evidence of monitoring and categorisation of credential-related alerts for governance and audit purposes.

Security incidents correlated to credential leakage

- Focused on Credential Leakage: Provides a dedicated view of alerts related to exposed credentials, enabling quick detection and response.
- Role-Based Risk Analysis: Breaks down incidents by department and role, helping prioritise remediation for high-risk groups such as developers and security teams.
- User-Level Investigation: Allows drill-down to individual users involved in credential-related alerts for rapid containment and corrective action.
- Credential Type Insights: Highlights which types of credentials (e.g., API keys, passwords) are most vulnerable, guiding policy improvements and rotation strategies.
- Alert Source Correlation: Displays which security tools (Sentinel, MCAS, Defender) are detecting leaks, ensuring coverage and identifying monitoring gaps.
- Compliance and Governance Support: Offers auditable evidence of credential monitoring, supporting regulatory and internal security requirements.

App and Network correlated to credential leakage

For network detection, adjust the query in production to exclude standard applications if they are too noisy. We have seen cases where Word and other commonly used applications make calls using FTP services, which can add considerable noise.

- Token Detection Event Traceability: Shows detected token credential events linked directly to individual User IDs and Device IDs for investigation.
- Application Usage Context: Identifies the application associated with the detected activity (for example, ms‑teams.exe).
- External URL Association: Displays the Remote URL connected to the token detection event.
- Remote IP Visibility: Lists the Remote IP addresses associated with the activity.
- Entity-Level Correlation: Links UserId, DeviceId, Application, Remote URL, and Remote IP within a single event flow. You can also pivot on the port used or on how apps are linked.
- Detection Count Aggregation: Summarises the number of credential events tied to each correlated entity path.

Turn detection into decisions. Deploy the workbook today to get measurable insights, accelerate triage, and deliver audit-ready governance.
Start driving risk-aligned investment and policy changes with confidence. The PBI report is located here. Based on what you identify, you may want to use tools such as Data Security Investigations to go deeper. We are also working on surfacing AI triage in a context that will enrich the DLP analyst experience.

Why External Users Can’t Open Encrypted Attachments in Certain Conditions & How to Fix It Securely
When Conditional Access policies enforce MFA across all cloud apps and include external users, encrypted attachments may require additional considerations. This post explains why.

This behavior applies only in environments where all of the following are true:

- Microsoft Purview encryption is used for emails and attachments
- A Conditional Access (CA) policy is configured to:
  - Require MFA
  - Apply to all cloud applications
  - Include guest or external users

The Situation: Email Opens, Attachment Doesn’t

When an email is encrypted using Microsoft Purview Sensitivity Labels or Information Rights Management (IRM), any attached Office document automatically inherits the encryption. This inheritance is intentional and enforced by the service to ensure consistent protection of sensitive content; it is mandatory and cannot be disabled.

So far, so good. But here’s where things break for external recipients.

The Hidden Dependency: Identity & Conditional Access

Reading an encrypted email and opening an encrypted attachment are two different flows. External users can usually read encrypted emails by authenticating through:

- One-Time Passcode (OTP)
- Microsoft personal accounts
- Their own organization’s identity

However, encrypted attachments use Microsoft Rights Management Services (RMS), and RMS expects an identity the sender’s tenant can evaluate. If your organization has a global Conditional Access policy enforcing MFA for all users across all cloud apps, external users can get blocked even after successfully decrypting the email. This commonly results in errors like:

“This account does not exist in the sender’s tenant…”

AADSTS90072: The external user account does not exist in our tenant and cannot access the Microsoft Office application. The account needs to be added as an external user in the tenant or use an alternative authentication method.
When It Works (and Why It Often Doesn’t)

External access to encrypted attachments works only when one of these conditions is met:

- The sender’s tenant trusts MFA from the recipient’s tenant via cross‑tenant access settings (MFA trust)
- The recipient already exists as a guest account in the sender’s tenant

In real-world scenarios, these conditions often fail:

- External recipients use consumer or non‑Entra identities
- Recipient domains are not predictable
- Guest onboarding does not scale
- Cross‑tenant trust is intentionally restricted

In such cases, Conditional Access policies designed for internal users can affect RMS evaluation for external users. So what’s the alternative?

The Practical, Secure Alternative

When the two standard access conditions (cross‑tenant trust or guest presence) cannot be met, you can refine Conditional Access evaluation without weakening encryption. The goal is not to remove MFA, but to ensure it is applied appropriately based on identity type and access path. In this scenario:

- MFA remains enforced for all internal users, including access to Microsoft Rights Management Services (RMS)
- MFA remains enforced for external users across cloud applications other than RMS

The Key Idea

Let encryption stay strong, but stop blocking external RMS authentication.
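As a sketch of what the separate external-user policy could look like when managed through the Microsoft Graph conditional access API, the payload below enforces MFA for guests while excluding RMS from evaluation. The display name is illustrative, and this is an assumption-laden outline, not a deployment-ready policy; validate the schema and impact in a report-only rollout first. The snippet only builds and prints the payload rather than calling Graph:

```python
import json

# RMS first-party application (client) ID, as given in this article.
RMS_APP_ID = "00000012-0000-0000-c000-000000000000"

# Sketch of a Microsoft Graph conditionalAccessPolicy payload for the
# guest/external-user policy. The display name is a hypothetical example.
external_mfa_policy = {
    "displayName": "MFA for guests - exclude RMS",
    "state": "enabled",
    "conditions": {
        "users": {
            # Special value targeting guest and external users.
            "includeUsers": ["GuestsOrExternalUsers"],
        },
        "applications": {
            "includeApplications": ["All"],
            # RMS is excluded so external RMS authentication is not blocked.
            "excludeApplications": [RMS_APP_ID],
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

# Creating the policy would mean POSTing this JSON to
# https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
print(json.dumps(external_mfa_policy, indent=2))
```

Keeping the exclusion in a dedicated guest-scoped policy, rather than editing the internal all-users policy, mirrors the separation of concerns the steps below describe.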
This is achieved by:

- Keeping the existing Conditional Access policy that enforces MFA for all internal users across all cloud applications, including RMS
- Excluding guest and external users from that internal‑only policy
- Deploying a separate Conditional Access policy scoped to guest and external users that:
  - Continues enforcing MFA for external users where supported
  - Explicitly excludes Microsoft Rights Management Services (RMS) from evaluation

RMS can be excluded from the external‑user policy by specifying the following application (client) ID:

RMS App ID: 00000012-0000-0000-c000-000000000000

Why This Is Still Secure

This approach:

✅ Keeps email and attachment encryption fully intact
✅ Leaves the internal security posture unchanged
✅ Keeps external users protected by MFA where applicable
✅ Allows external users to authenticate using supported methods
✅ Avoids over-trusting external tenants
✅ Scales to large, unpredictable recipient sets

Final Takeaway

Encrypted attachment access is governed by identity recognition and policy design, not by email encryption alone. By aligning Conditional Access with how encrypted content is evaluated, organizations can enable secure external collaboration while maintaining strong protection standards.

Retirement notification for the Azure Information Protection mobile viewer and RMS Sharing App
Over a decade ago, we launched the Azure Information Protection (AIP) mobile app for iOS and Android and the Rights Management Service (RMS) Sharing app for Mac to fill an important niche in our non-Office file ecosystem: enabling users to securely view protected file types like (P)PDF, RPMSG, and PFILE outside of Windows. These viewing applications are integrated with sensitivity labels from Microsoft Purview and encryption from the Rights Management Service to view protected non-Office files and enforce protection rights.

Today, usage of these apps is very low, especially for file types other than PDFs. Most PDF use cases have already shifted to native Office apps and modern Microsoft 365 experiences. As part of our ongoing modernization efforts, we have decided to retire these legacy apps. We are officially announcing the retirement of AIP Mobile and RMS Sharing and starting the 12-month clock, after which they will reach retirement on May 30, 2026. All customers with Azure Information Protection P1 service plans will also receive a Message Center post with this announcement.

In this blog post, we will cover what you need to know about the retirement, share key resources to support your transition, and explain how to get help if you have questions.

Q. How do I view protected non-Office files on iOS and Android?

Instead of one application for all non-Office file types, view these files in the apps where you would most commonly see them. For example, use the OneDrive app or the Microsoft 365 Copilot app to open protected PDFs. Here is a summary of which applications support each file type:

1) PDF and PPDF: Open protected PDF files with Microsoft 365 Copilot or OneDrive. These applications have native support to view labels and enforce protection rights. Legacy PPDF files must be opened with the Microsoft Information Protection File Labeler on Windows and saved as PDF before they can be viewed.

2) PFILE: These files are no longer viewable on iOS and Android.
PFILE refers to the file types supported for classification and protection, including extensions like PTXT, PPNG, PJPG, and PXML. To view these files, use the Microsoft Purview Information Protection Viewer on Windows.

3) RPMSG: These files are also no longer viewable on iOS and Android. To view these files, use Classic Outlook on Windows.

Q. Where can I download the required apps for iOS, Android, or Windows?

These apps are available for download on the Apple App Store, Google Play Store, Microsoft Download Center, or Microsoft Store.

- Microsoft 365 Copilot: Android / iOS
- Microsoft OneDrive: Android / iOS
- Microsoft Purview Information Protection Client: Windows
- Classic Outlook for Windows: Windows

Q. Is there an alternative app to view non-Office files on Mac?

Before May 30, 2026, we will release the Microsoft Purview Information Protection (MPIP) File Labeler and Viewer for Mac devices. This will make the protected non-Office file experience on Mac significantly better, with the ability not only to view labels but also to modify them. In the meantime, continue using the RMS Sharing App.

Q. Is the Microsoft Purview Information Protection Client Viewer going away too?

No. The Microsoft Purview Information Protection Client, previously known as the Azure Information Protection Client, continues to be supported on Windows and is not being retired. We are actively improving this client and plan to bring its viewing and labeling capabilities to Mac as well.

Q. What happens if I already have the RMS Sharing App or AIP Mobile on my device?

You can continue using these apps to view protected files, and you can download them onto new devices, until retirement on May 30, 2026. At that time, the apps will be removed from app stores and will no longer be supported. While existing versions may continue to function, they will not receive any further updates or security patches.

Q. I need more help. Who can I reach out to?

If you have additional questions, you have a few options:

- Reach out to your Microsoft account team.
- Reach out to Microsoft Support with specific questions.
- Reach out to Microsoft MVPs who specialize in Information Protection.