Build Sensitivity Label-Aware, Secure RAG with Azure AI Search and Purview
Introduction: Why This Matters Now

Most developers building solutions with Azure AI Search haven't had to think about Microsoft Purview sensitivity labels before. Sensitivity labels are applied at the source—SharePoint, OneLake, OneDrive—and they classify and protect documents through encryption, access rules, and usage rights. As a result, developers often don't see these labels directly, and many are unaware that labeled or encrypted documents behave differently when used in AI and search workloads.

This matters because RAG and Copilot-style applications rely on complete, context-rich data to return accurate answers. If labeled content isn't accessible to the indexing pipeline, or if Azure AI Search isn't configured to interpret label metadata, your retrieval layer may unintentionally miss protected documents, leading to incomplete grounding, reduced answer quality, or inconsistent user experiences. For context, Copilot-style apps are context-aware AI applications that combine a large language model (LLM) with enterprise data to help users ask questions, generate content, and complete tasks inside an existing workflow.

Historically, search experiences haven't fully honored Purview label protections. While Azure AI Search can enforce document-level permissions in sources such as SharePoint in Microsoft 365, ADLS Gen2, and Azure Blob Storage (when configured), ACLs only answer who can see a document, whereas sensitivity labels define how the content must be handled once accessed. Enterprise security and compliance teams also expect label-based access enforcement when it is configured. If Purview integration is not enabled, documents with certain label protections—especially encrypted ones—may simply not be indexable, which reduces the corpus available to AI Search.
This blog explains how Azure AI Search now integrates with Purview sensitivity labels, why this configuration is increasingly important for secure and complete enterprise RAG, and how to enable it in your environment.

What Are Sensitivity Labels & Why They Impact AI Search

Microsoft Purview sensitivity labels classify and protect organizational data by applying encryption, access controls, and visual markings across documents, emails, and collaboration spaces. When labels are applied, Microsoft Purview governs, among other functionality:

- Who can read a document
- Whether it's encrypted
- What usage rights apply
- How the data must be treated

Purview sensitivity labels and Azure AI Search

Developers often assume these label-based enforcements "just work." But unless Azure AI Search is configured to extract and evaluate label metadata, AI systems cannot retrieve protected content or enforce the behavior expected of data carrying those labels, leading to incomplete and sometimes insecure RAG answers.
Azure AI Search now supports the following actions as part of sensitivity label support in preview:

- Sensitivity label ingestion at indexing time
- Label-based document-level access control at query time

What the Integration Enables (And What Happens If You Don't Turn It On)

When Purview labels are integrated with AI Search:

- Labeled documents are successfully indexed
- Label metadata is stored alongside the document
- Query-time filters enforce Purview EXTRACT rights
- RAG apps, copilots, and agents return only what a user can access
- No risk of silently missing labeled context in retrieval
- Unified Purview governance across Microsoft 365 documents and AI Search

If you don't enable it:

- Documents with labels that carry configured protections won't index, leaving incomplete data available to AI Search and reducing answer quality
- Search results won't enforce label-based protections, impacting the user experience
- End users won't have visibility into the labels applied to their documents as required by compliance, also impacting the user experience

Sources Supported

These are the data sources where Purview labels are supported in AI Search today:

- Azure Blob Storage
- ADLS Gen2
- SharePoint (Preview)
- OneLake

End-to-end flow

Next steps

Follow the documentation and resources below to enable your Azure AI Search indexes with Purview sensitivity labels:

- Documentation: Indexing sensitivity labels in Azure AI Search; Query-Time Microsoft Purview Sensitivity Label Enforcement in Azure AI Search
- Demo app repo: https://aka.ms/Ignite25/aisearch-purview-sensitivity-labels-repo
- Demo video: Sensitivity labels in Azure AI Search demo

Accelerate Your Security Copilot Readiness with Our Global Technical Workshop Series
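To make the query-time enforcement idea concrete, here is a minimal sketch of how a client might shape a permission-aware search request that forwards the signed-in user's token so the service can trim results to what that user can access. This follows the pattern of Azure AI Search's document-level access control preview; the index name, API version, and the `x-ms-query-source-authorization` header are assumptions to verify against the linked documentation for the sensitivity label preview.

```python
import json

def build_label_aware_query(search_endpoint: str, index_name: str,
                            user_token: str, query: str) -> dict:
    """Sketch of a query-time, permission-aware search request.

    Assumption: the preview enforces per-user access when the caller
    forwards the user's token in the x-ms-query-source-authorization
    header, as in the document-level access control preview.
    """
    return {
        "method": "POST",
        # The API version below is a placeholder; check the preview docs.
        "url": f"{search_endpoint}/indexes/{index_name}/docs/search"
               f"?api-version=2025-05-01-preview",
        "headers": {
            "Content-Type": "application/json",
            "x-ms-query-source-authorization": f"Bearer {user_token}",
        },
        "body": json.dumps({"search": query, "top": 5}),
    }

req = build_label_aware_query("https://contoso.search.windows.net",
                              "labeled-docs", "<user-token>", "quarterly forecast")
```

The key point is that the user's identity travels with every query, so label-protected documents a user cannot open are filtered out at retrieval time rather than leaking into RAG grounding.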
The Security Copilot team is delivering virtual hands-on technical workshops designed for technical practitioners who want to deepen their AI for Security expertise with Microsoft Entra, Intune, Microsoft Purview, and Microsoft Threat Protection. These workshops will help you onboard and configure Security Copilot and deepen your knowledge of agents. These free workshops are delivered year-round and are available in multiple time zones.

What You'll Learn

Our workshop series combines scenario-based instruction, live demos, hands-on exercises, and expert Q&A to help you operationalize Security Copilot across your security stack. These sessions are all moderated by experts from Microsoft's engineering teams and are aligned with the latest Security Copilot capabilities. Every session delivers 100% technical content, designed to accelerate real-world Security Copilot adoption.

Who Should Attend

These workshops are ideal for:

- Security Architects & Engineers
- SOC Analysts
- Identity & Access Management Engineers
- Endpoint & Device Admins
- Compliance & Risk Practitioners
- Partner Technical Consultants
- Customer technical teams adopting AI-powered defense

Register now for these upcoming Security Copilot Virtual Workshops

Start building Security Copilot skills—choose the product area and time zone that works best for you.
Please take note of the prerequisites for each workshop on the registration page.

Security Copilot Virtual Workshop: Copilot in Defender

- March 4, 2026 at 8:00-9:00 AM (PST) - register here
- Asia Pacific optimized delivery (time conversion: 4:00-5:30 PM NZDT; 11:00-12:30 AM GMT +8; 8:30-10:00 AM IST; 7:00-8:30 PM PST): March 5, 2026 at 2:00-3:30 PM (AEDT) - register here

Security Copilot Virtual Workshop: Copilot in Entra

- February 25, 2026 at 8:00-9:30 AM (PST) - register here
- Asia Pacific optimized delivery (time conversion: 4:00-5:30 PM NZDT; 11:00-12:30 AM GMT +8; 8:30-10:00 AM IST; 7:00-8:30 PM PST): February 26, 2026 at 2:00-3:30 PM (AEDT) - register here; March 26, 2026 at 2:00-3:30 PM (AEDT) - register here

Security Copilot Virtual Workshop: Copilot in Intune

- March 11, 2026 at 8:00-9:30 AM (PST) - register here
- Asia Pacific optimized delivery (time conversion: 4:00-5:30 PM NZDT; 11:00-12:30 AM GMT +8; 8:30-10:00 AM IST; 7:00-8:30 PM PST): March 12, 2026 at 2:00-3:30 PM (AEDT) - register here; April 9, 2026 at 2:00-3:30 PM (AEDT)

Security Copilot Virtual Workshop: Copilot in Purview

- March 18, 2026 at 8:00-9:30 AM (PST) - register here
- Asia Pacific optimized delivery (time conversion: 4:00-5:30 PM NZDT; 11:00-12:30 AM GMT +8; 8:30-10:00 AM IST; 7:00-8:30 PM PST): March 19, 2026 at 2:00-3:30 PM (AEDT) - register here

Learn and Engage with the Microsoft Security Community

- Log in and follow this Microsoft Security Community Blog and post/interact in the Microsoft Security Community discussion spaces. Follow = click the heart in the upper right when you're logged in 🤍
- Join the Microsoft Security Community and be notified of upcoming events, product feedback surveys, and more.
- Get early access to Microsoft Security products and provide feedback to engineers by joining the Microsoft Security Advisors.
- Learn about the Microsoft MVP Program.
Join the Microsoft Security Community LinkedIn and the Microsoft Entra Community LinkedIn.

Data Quality (DQ) for Standalone Data Assets at Microsoft Purview
Why is Data Quality for your Data Asset critical today?

Many companies today cannot activate their data estate. Our research shows that 75% of companies do not even have a data quality program. This is a major problem because data quality is becoming a central focus of AI: your AI is only as good as your data. Previously, when systems weren't content-aware, this was tolerable because a human was always in the loop and corrected data as needed. In the new world, however, AI looks at the content of the data, not just its structure, and if that content is fundamentally off, whether it is misrepresented, inconsistent, illegible, inaccurate, insecure, or noncompliant, it will make your AI wrong. It will make your AI ugly. And it will impact your company and you as an employee building AI and BI/insights. There is nothing worse than building something that doesn't get used.

What Is Data Quality for a Standalone Data Asset?

A standalone data asset is a dataset that is not linked to any data product. Data Quality (DQ) for standalone assets enables organizations to measure, monitor, and improve the quality of these datasets independently—without requiring them to be part of a data product.

Why This Matters

Improve Data Before Linking to a Data Product: Users can profile, assess, clean, and standardize data before associating it with a data product. This ensures higher quality at the time of onboarding.

Make Better Curation Decisions: Understanding the quality of standalone assets helps organizations decide which datasets are suitable for inclusion in governed data products.

Support Broader Use Cases: Not all datasets are used for analytics.
Standalone assets may support:

- Data monetization
- AI grounding or training data
- Operational workloads

Unified Data Quality Management: Organizations can use a single Purview Data Quality solution to manage both standalone assets and data product–associated assets, including issue remediation.

Optimize Storage and Reduce Costs: Low-quality or unused datasets can be archived to lower-cost storage or removed using data minimization principles, reducing storage costs and improving ROI.

Accelerate Governance Adoption: Organizations can start measuring and improving data quality immediately—without waiting for formal data product definitions—helping mature governance practices faster.

Measure and Monitor Data Quality for Standalone Assets using Microsoft Purview

To start measuring and monitoring the data quality of a standalone data asset, first add the asset from the data map to a governance domain. If a connection to the data source has not already been configured, create one for the selected source system.

Once the asset is added to the domain and the data source connection is established, select the asset and run data profiling. You can accept the recommended columns or customize the selection by adding or removing columns. After profiling completes, review the results to understand the structure and quality of the data.

Based on profiling insights, you can create custom rules, apply out-of-the-box rules, or use AI-enabled rule suggestions. After adding rules to the selected columns, run a data quality scan to generate column-level data quality scores and assess the overall health of the dataset. To continuously measure and improve data quality, configure data quality error record storage and schedule recurring data quality scans.

You can also associate a standalone data asset—along with its data quality scores and rules—to a data product. This allows data product owners to reuse rules created for standalone assets.
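To build intuition for what a column-level rule produces, here is a small sketch (illustrative only, not Purview's implementation) of how a completeness rule can yield a 0-100 column score that rolls up into an asset-level health signal like the scan results described above:

```python
# Illustrative sketch only: not Purview's implementation. Shows the idea
# of a column-level completeness score rolling up into an overall health
# signal for a standalone data asset.
def completeness_score(values: list) -> float:
    """Percentage of non-null, non-empty values in a column."""
    if not values:
        return 0.0
    filled = sum(1 for v in values if v not in (None, ""))
    return round(100 * filled / len(values), 1)

def asset_health(column_scores: dict, threshold: float = 80.0) -> dict:
    """Average the column scores and flag the asset if below threshold."""
    overall = round(sum(column_scores.values()) / len(column_scores), 1)
    return {"overall": overall, "needs_attention": overall < threshold}

scores = {
    "customer_id": completeness_score(["c1", "c2", "c3", "c4"]),    # 100.0
    "email": completeness_score(["a@x.com", None, "", "b@x.com"]),  # 50.0
}
print(asset_health(scores))  # {'overall': 75.0, 'needs_attention': True}
```

A real scan combines many rule types (uniqueness, format, freshness, custom expressions), but the shape is the same: per-column scores that an alerting threshold can act on.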
The same data asset can be cloned and associated with multiple data products to support different use cases. Note that the data product must be in Draft state before a standalone asset with a data quality score can be associated with it. Additionally, you can configure alerts to notify your team when a data quality score falls below a defined threshold or when a data quality scan fails.

Because Purview uses role-based access control (RBAC), only users with the Data Product Owner role can associate a standalone data asset with a data product. Users must also have access to the relevant governance domain. A Data Product Owner must have a Domain Reader (local or global) or Data Quality Reader role to browse standalone asset DQ pages and associated assets.

Summary

Data Quality for standalone data assets allows organizations to independently assess, improve, and monitor datasets before or without linking them to data products. This approach increases governance agility, improves decision-making, supports diverse use cases beyond analytics, reduces storage costs, and accelerates enterprise data maturity.

References

Data Quality Scan for a Data Asset in Unified Catalog (preview) | Microsoft Learn

Introducing Security Dashboard for AI (Now in Public Preview)
AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions, across their entire AI estate, spanning AI platforms, apps, and agents. 53% of security professionals say their current AI risk management needs improvement, presenting an opportunity to better identify, assess, and manage risk effectively.¹ At the same time, 86% of leaders prefer integrated platforms over fragmented tools, citing better visibility, fewer alerts, and improved efficiency.²

To address these needs, we are excited to announce that the Security Dashboard for AI, previously announced at Microsoft Ignite, is available in public preview. This unified dashboard aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview, enabling users to see left-to-right across purpose-built security tools from within a single pane of glass. The dashboard equips CISOs and AI risk leaders with a governance tool to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. Security teams can continue using the tools they trust while security leaders are empowered to govern and collaborate effectively.

Gain Unified AI Risk Visibility

Consolidating risk signals from across purpose-built tools can simplify AI asset visibility and oversight, increase security teams' efficiency, and reduce the opportunity for human error. The Security Dashboard for AI provides leaders with unified AI risk visibility by aggregating security, identity, and data risk across Defender, Entra, and Purview into a single interactive dashboard experience.
The Overview tab of the dashboard provides an AI risk scorecard, giving immediate visibility into where there may be risks for security teams to address. It also assesses an organization's implementation of Microsoft security for AI capabilities and provides recommendations for improving AI security posture. The dashboard also features an AI inventory with comprehensive views to support AI asset discovery, risk assessment, and remediation actions for broad coverage of AI agents, models, MCP servers, and applications.

The dashboard provides coverage for all Microsoft AI solutions supported by Entra, Defender, and Purview—including Microsoft 365 Copilot, Microsoft Copilot Studio agents, and Microsoft Foundry applications and agents—as well as third-party AI models, applications, and agents, such as Google Gemini, OpenAI ChatGPT, and MCP servers. This supports comprehensive visibility and control, regardless of where applications and agents are built.

Prioritize Critical Risk with Security Copilot's AI-Powered Insights

Risk leaders must do more than just recognize existing risks—they also need to determine which ones pose the greatest threat to their business. The dashboard provides a consolidated view of AI-related security risks and leverages Security Copilot's AI-powered insights to help find the most critical risks within an environment. For example, Security Copilot's natural language interaction improves agent discovery and categorization, helping leaders identify unmanaged and shadow AI agents to enhance security posture. Furthermore, Security Copilot allows leaders to investigate AI risks and agent activities through prompt-based exploration, putting them in the driver's seat for additional risk investigation.

Drive Risk Mitigation

By streamlining risk mitigation recommendations and automating task delegation, organizations can significantly improve the efficiency of their AI risk management processes.
This approach can reduce hidden AI risk and accelerate compliance efforts, helping to ensure that risk mitigation is timely and accurate. To address this, the Security Dashboard for AI evaluates how organizations put Microsoft's AI security features into practice and offers tailored suggestions to strengthen AI security posture. It leverages Microsoft's productivity tools for immediate action within the practitioner portal, making it easy for administrators to delegate recommendation tasks to designated users.

With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, the Security Dashboard for AI is included with eligible Microsoft security products customers already use. If an organization is already using Microsoft security products to secure AI, it is already a Security Dashboard for AI customer.

Getting Started

Existing Microsoft Security customers can start using the Security Dashboard for AI today. It is included when a customer has the Microsoft Security products—Defender, Entra and Purview—with no additional licensing required. To begin using the Security Dashboard for AI, visit http://ai.security.microsoft.com or access the dashboard from the Defender, Entra or Purview portals. Learn more about the Security Dashboard for AI at Microsoft Security MS Learn.

¹ AuditBoard & Ascend2 Research. The Connected Risk Report: Uniting Teams and Insights to Drive Organizational Resilience. AuditBoard, October 2024.
² Microsoft. 2026 Data Security Index: Unifying Data Protection and AI Innovation. Microsoft Security, 2026.

Introducing eDiscovery Graph API Standard and Enhancements to Premium APIs
We have been busy working to enable organisations that leverage the Microsoft Purview eDiscovery Graph APIs to benefit from the enhancements in the new modern experience for eDiscovery. I am pleased to share that the APIs have now been updated with additional parameters, enabling organisations to benefit from the following features already present in the modern experience within the Purview Portal:

- Ability to control the export package structure and item naming convention
- Trigger advanced indexing as part of the Statistics, Add to Review Set and Export jobs
- For the first time, the ability to trigger HTML transcription of Teams, Viva and Copilot interactions when adding to a review set
- Benefit from the new statistics options such as Include Categories and Include Keyword Report
- More granular control of the number of versions collected of modern attachments and of documents collected directly from OneDrive and SharePoint

These changes were communicated as part of the M365 Message Center post MC1115305. This change involved the beta version of the API calls being promoted into the v1.0 endpoint of the Graph API. The following v1.0 API calls were updated as part of this work:

- Search Estimate Statistics – ediscoverySearch: estimateStatistics
- Search Export Report – ediscoverySearch: exportReport
- Search Export Result – ediscoverySearch: exportResult
- Search Add to Review Set – ediscoveryReviewSet: addToReviewSet
- Review Set Export – ediscoveryReviewSet: export

The majority of this blog post walks through the updates to each of these APIs and explains how to update your calls to maintain a consistent outcome (and benefit from the new functionality). If you are new to the Microsoft Purview eDiscovery APIs, you can refer to my previous blog post on how to get started with them.
Getting started with the eDiscovery APIs | Microsoft Community Hub

First up though, availability of the Graph API for E3 customers

We are excited to announce that starting September 9, 2025, Microsoft will launch the eDiscovery Graph API Standard, a new offering designed to empower Microsoft 365 E3 customers with secure, automated data export capabilities. The new eDiscovery Graph API offers scalable, automated exports with secure credential management and improved performance and reliability for Microsoft 365 E3 customers. The new API enables automation of the search, collect, hold, and export flow from Microsoft Purview eDiscovery. While it doesn't include premium features like Teams/Yammer conversations or advanced indexing (available only with the Premium Graph APIs), it delivers meaningful value for Microsoft 365 E3 customers needing to automate structured legal exports.

Key capabilities:

- Export from Exchange, SharePoint, Teams, Viva Engage and OneDrive for Business
- Case, search, hold and export management
- Integration with partner/vendor workflows
- Support for automation that takes advantage of new features within the modern user experience

Pricing & Access

Microsoft will offer 50 GB of included export volume per tenant per month, with additional usage billed at $10/GB—a price point that balances customer value, sustainability, and market competitiveness. The Graph API Standard will be available in public preview starting September 9. For more details on pay-as-you-go features in eDiscovery and Purview, refer to the following links:

- Billing in eDiscovery | Microsoft Learn
- Enable Microsoft Purview pay-as-you-go features via subscription | Microsoft Learn

Wait, but what about the custodian and noncustodial locations workflow in eDiscovery Classic (Premium)?

As you are probably aware, in the modern user experience for eDiscovery there have been some changes to the Data Sources tab and how it is used in the workflow.
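As a quick sanity check on the metering model above (50 GB included per tenant per month, billed at $10/GB thereafter), the monthly overage charge can be estimated as follows; the figures come from this post, and the helper itself is just illustrative:

```python
# Illustrative only: estimates the monthly eDiscovery Graph API Standard
# export charge from the pricing quoted in this post.
INCLUDED_GB_PER_MONTH = 50   # included export volume per tenant per month
OVERAGE_USD_PER_GB = 10      # pay-as-you-go rate beyond the allowance

def estimated_monthly_charge(exported_gb: float) -> float:
    """Return the estimated charge in USD for a month's exports."""
    overage = max(0.0, exported_gb - INCLUDED_GB_PER_MONTH)
    return overage * OVERAGE_USD_PER_GB

# A tenant exporting 120 GB in a month pays for 70 GB of overage.
print(estimated_monthly_charge(120))  # 700.0
print(estimated_monthly_charge(35))   # 0.0 (within the included 50 GB)
```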
Typically, organisations leveraging the Microsoft Purview eDiscovery APIs would previously have used the custodian and noncustodial data source APIs to add the relevant data sources to the case:

- ediscoveryCustodian resource type - Microsoft Graph v1.0 | Microsoft Learn
- ediscoveryNoncustodialDataSource resource type - Microsoft Graph v1.0 | Microsoft Learn

Once added via these API calls, the locations would be bound to a search when the search was created. This workflow remains supported in the API for backwards compatibility, including the creation of system-generated case hold policies when applying holds to the locations via these APIs. Organisations can continue to use this approach. However, to simplify your code and workflow, consider using the following API call to add additional sources directly to the search:

Add additional sources - Microsoft Graph v1.0 | Microsoft Learn

Some key things to note if you continue to use the custodian and noncustodial data source APIs in your automation workflow:

- They will not populate the new Data Sources tab in the modern experience for eDiscovery
- They can continue to be queried via the API calls
- Advanced indexing triggered via these APIs has no influence on whether advanced indexing is used in jobs triggered from a search
- Make sure you use the new parameters to trigger advanced indexing on the job when running the Statistics, Add to Review Set and Direct Export jobs

Generating Search Statistics

ediscoverySearch: estimateStatistics

In eDiscovery Premium (Classic) and the previous version of the APIs, generating statistics was a mandatory step before you could progress to either adding the search to a review set or triggering a direct export. With the new modern experience for eDiscovery, this step is completely optional.
Organizations that previously generated search statistics but never checked or used the results before adding the search to a review set or triggering a direct export can now skip this step. Organizations that do want to continue generating statistics can call the updated API with the same parameters and it will continue to generate statistics for the search. An example of a previous call would look as follows:

POST /security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/estimateStatistics

Historically this API didn't require a request body. With the APIs now natively working with the modern experience for eDiscovery, the call now supports a request body, enabling you to benefit from the new statistics options. Details on these new options can be found in the links below:

- Create a search for a case in eDiscovery | Microsoft Learn
- Evaluate and refine search results in eDiscovery | Microsoft Learn

If a search is run without a request body, it will still generate the following information:

- Total matches and volume
- Number of locations searched and the number of locations with hits
- Number of data sources searched and the number of data sources with hits
- The top five data sources that make up the most search hits matching your query
- Hit count by location type (mailbox versus site)

As the API now works natively with the modern experience for eDiscovery, you can optionally include a request body to pass the statisticsOptions parameter in the POST call. With the changes to how advanced indexing works within the new UX, and the additional reporting categories available, you can use the statisticsOptions parameter to trigger the generate-statistics job with the additional options from the modern UX. The values you can include are detailed in the table below.
- includeRefiners ("Include categories" in the portal): Refine your view to include people, sensitive information types, item types, and errors.
- includeQueryStats ("Include query keywords report"): Assess keyword relevance for different parts of your search query.
- includeUnindexedStats ("Include partially indexed items"): We'll provide details about items that weren't fully indexed. These partially indexed items might be unsearchable or only partially searchable.
- advancedIndexing ("Perform advanced indexing on partially indexed items"): We'll try to reindex a sample of partially indexed items to determine whether they match your query. After running the query, check the Statistics page to review information about partially indexed items. Note: can only be used if includeUnindexedStats is also included.
- locationsWithoutHits ("Exclude partially indexed items in locations without search hits"): Ignore partially indexed items in locations with no matches to the search query. Checking this setting will only return partially indexed items in locations where there is already at least one hit. Note: can only be used if includeUnindexedStats is also included.

In eDiscovery Premium (Classic), advanced indexing took place when a custodian or non-custodial data location was added to the Data Sources tab. This meant that when you triggered the estimate statistics call on the search, it would include results from both the native Exchange and SharePoint indexes and the advanced index. In the modern experience for eDiscovery, advanced indexing runs as part of the job; however, it must be selected as an option on the job. Note that not all searches will benefit from advanced indexing. One example is a simple date-range search on a mailbox or SPO site, as this will still hit the partially indexed items (even partially indexed email and SPO file items have date metadata in the native indexes).
The following example uses the Microsoft Graph PowerShell module and passes the new statisticsOptions parameter to the POST call, selecting all available options.

```powershell
# Generate estimates for the newly created search
$statParams = @{
    statisticsOptions = "includeRefiners,includeQueryStats,includeUnindexedStats,advancedIndexing,locationsWithoutHits"
}
$params = $statParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/estimateStatistics"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
Write-Host "Estimate statistics generation triggered for search ID: $searchID"
```

Once run, it will create a generate-statistics job with the additional options selected.

Direct Export - Report

ediscoverySearch: exportReport

This API enables you to generate an item report directly from a search, without taking the data into a review set or exporting the items that match the search. With the APIs now natively working with the modern experience for eDiscovery, new parameters have been added to the request body, as well as new values for existing parameters. The new parameters are as follows:

- cloudAttachmentVersion: The versions of cloud attachments to include in messages (e.g. latest, latest 10, latest 100 or all). This controls how many versions of a file are collected when a cloud attachment is contained within an email, Teams or Viva Engage message. If a specific version was shared, that version is also always returned.
- documentVersion: The versions of files in SharePoint to include (e.g. latest, latest 10, latest 100 or all). This controls how many versions of a file are collected when targeting a SharePoint or OneDrive site directly in the search.
These new parameters reflect changes in the modern experience for eDiscovery that give eDiscovery managers more granular control, applying different collection options based on where the SPO item was collected from (e.g. directly from a SPO site vs. a cloud attachment link included in an email). Within eDiscovery Premium (Classic), the All Document Versions option applied both to SharePoint and OneDrive files collected directly from SharePoint and to any cloud attachments contained within email, Teams and Viva Engage messages.

Historically for this API, you could include the allDocumentVersions value in the additionalOptions parameter to trigger the collection of all versions of any file stored in SharePoint and OneDrive. With the APIs now natively working with the modern experience for eDiscovery, the allDocumentVersions value can still be included in the additionalOptions parameter, but it will only apply to files collected directly from a SharePoint or OneDrive site. It will not influence cloud attachments included in email, Teams and Viva Engage messages. To collect additional versions of cloud attachments, use the cloudAttachmentVersion parameter to control the number of versions included. Also consider moving away from the allDocumentVersions value in the additionalOptions parameter and switching to the new documentVersion parameter.

As described earlier, to benefit from advanced indexing in the modern experience for eDiscovery, you must trigger advanced indexing as part of the direct export job. Within the portal, to include partially indexed items and run advanced indexing you would make the following selections. To achieve this via the API call, we need to include the following parameters and values in the request body.
- additionalOptions = advancedIndexing ("Perform advanced indexing on partially indexed items" in the portal)
- exportCriteria = searchHits, partiallyIndexed ("Indexed items that match your search query and partially indexed items")
- exportLocation = responsiveLocations, nonresponsiveLocations ("Exclude partially indexed items in locations without search hits")

Finally, the new modern experience for eDiscovery introduces more granular control, enabling organisations to independently choose to convert Teams, Viva Engage and Copilot interactions into HTML transcripts, and to collect up to 12 hours of related conversations when a message matches a search. This is reflected in the job settings by the following options:

- Organize conversations into HTML transcripts
- Include Teams and Viva Engage conversations

In the classic experience, this was a single option titled Teams and Yammer Conversations that did both actions, controlled by including the teamsAndYammerConversations value in the additionalOptions parameter. With the APIs now natively working with the modern experience for eDiscovery, the teamsAndYammerConversations value can still be included in the additionalOptions parameter, but it will only trigger the collection of up to 12 hours of related conversations when a message matches a search, without converting the items into HTML transcripts. To convert items into HTML transcripts, include the new htmlTranscripts value in the additionalOptions parameter.

As an example, let's look at the following direct export report job from the portal and use the Microsoft Graph PowerShell module to call the exportReport API with the updated request body.
$exportName = "New UX - Direct Export Report"
$exportParams = @{
    displayName = $exportName
    description = "Direct export report from the search"
    additionalOptions = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing"
    exportCriteria = "searchHits,partiallyIndexed"
    documentVersion = "recent10"
    cloudAttachmentVersion = "recent10"
    exportLocation = "responsiveLocations"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportReport"
$exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

Direct Export - Results

ediscoverySearch: exportResult - Microsoft Graph v1.0 | Microsoft Learn

This API call enables you to export the items from a search without taking the data into a review set. All the information from the above section on the changes to the exportReport API also applies to this API call. However, with this API call we will actually be exporting the items from the search and not just the report. As such, we need to pass in the request body information on how we want the export package to look.

Previously, with direct export for eDiscovery Premium (Classic), you had three options in the UX and in the API to define the export format.

| Option | Exchange Export Structure | SharePoint / OneDrive Export Structure |
| --- | --- | --- |
| Individual PST files for each mailbox | PST created for each mailbox. The structure of each PST is reflective of the folders within the mailbox, with emails stored based on their original location in the mailbox. Emails named based on their subject. | Folder created for each site. Within each folder, the structure is reflective of the SharePoint/OneDrive site, with documents stored based on their original location in the site. Documents are named based on their document name. |
| Individual .msg files for each message | Folder created for each mailbox. Within each folder, the file structure is reflective of the folders within the mailbox, with emails stored as .msg files based on their original location in the mailbox. Emails named based on their subject. | As above. |
| Individual .eml files for each message | Folder created for each mailbox. Within each folder, the file structure is reflective of the folders within the mailbox, with emails stored as .eml files based on their original location in the mailbox. Emails named based on their subject. | As above. |

Historically with this API, the exportFormat parameter was used to control the desired export format. Three values could be used: pst, msg, and eml. This parameter is still relevant but only controls how email items will be saved: in a PST file, as individual .msg files, or as individual .eml files.

Note: The eml export format option is deprecated in the new UX. Going forward you should use either pst or msg.

With the APIs now natively working with the modern experience for eDiscovery, we need to account for the additional flexibility customers have to control the structure of their export package. An example of the options available in the direct export job can be seen below. More information on the export package options and what they control can be found in the following link.

https://learn.microsoft.com/en-gb/purview/edisc-search-export#export-package-options

To support this, new values have been added to the additionalOptions parameter for this API call. These must be included in the request body, otherwise the export structure will be as follows.

| exportFormat value | Exchange Export Structure | SharePoint / OneDrive Export Structure |
| --- | --- | --- |
| pst | PST files created that contain data from multiple mailboxes. All emails contained within a single folder within the PST. Emails named based on an assigned unique identifier (GUID). | One folder for all documents. All documents contained within a single folder. Documents are named based on an assigned unique identifier (GUID). |
| msg | Folder created containing data from all mailboxes. All emails contained within a single folder, stored as .msg files. Emails named based on an assigned unique identifier (GUID). | As above. |

The new values added to the additionalOptions parameter are as follows. They control the export package structure for both Exchange and SharePoint/OneDrive items.

| Property | Option from Portal |
| --- | --- |
| splitSource | Organize data from different locations into separate folders or PSTs |
| includeFolderAndPath | Include folder and path of the source |
| condensePaths | Condense paths to fit within 259 characters limit |
| friendlyName | Give each item a friendly name |

Organizations are free to mix and match which export options they include in the request body to meet their own organizational requirements. To receive a similar output structure to previously using the pst or msg values in the exportFormat parameter, include all of the above values in the additionalOptions parameter.

For example, to generate a direct export where the email items are stored in separate PSTs per mailbox, the structure of the PST files reflects the mailbox, and each item is named per the subject of the email, I would use the Microsoft Graph PowerShell module to call the exportResult API with the updated request body.
$exportName = "New UX - DirectExportJob - PST"
$exportParams = @{
    displayName = $exportName
    description = "Direct export of items from the search"
    additionalOptions = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing,includeFolderAndPath,splitSource,condensePaths,friendlyName"
    exportCriteria = "searchHits,partiallyIndexed"
    documentVersion = "recent10"
    cloudAttachmentVersion = "recent10"
    exportLocation = "responsiveLocations"
    exportFormat = "pst"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportResult"
$exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

If I want to export the email items as individual .msg files instead of storing them in PST files, I would use the Microsoft Graph PowerShell module to call the exportResult API with the updated request body.

$exportName = "New UX - DirectExportJob - MSG"
$exportParams = @{
    displayName = $exportName
    description = "Direct export of items from the search"
    additionalOptions = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing,includeFolderAndPath,splitSource,condensePaths,friendlyName"
    exportCriteria = "searchHits,partiallyIndexed"
    documentVersion = "recent10"
    cloudAttachmentVersion = "recent10"
    exportLocation = "responsiveLocations"
    exportFormat = "msg"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportResult"
$exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

Add to Review Set

ediscoveryReviewSet: addToReviewSet

This API call enables you to commit the items that match the search to a review set within an eDiscovery case. This enables you to review, tag, redact, and filter the items that match the search without exporting the data from the M365 service boundary.
Historically, this API call was more limited compared to triggering the job via the eDiscovery Premium (Classic) UI. With the APIs now natively working with the modern experience for eDiscovery, organizations can make use of the enhancements made within the modern UX and have greater flexibility in selecting the options that are relevant for their requirements. There is a lot of overlap with previous sections, specifically the "Direct Export – Report" section, on what updates are required to benefit from the updated API. They are as follows:

- Controlling the number of versions of SPO and OneDrive documents added to the review set via the new cloudAttachmentVersion and documentVersion parameters
- Enabling organizations to trigger the advanced indexing of partially indexed items during the add to review set job via new values added to existing parameters

However, there are some nuances to the parameter names and values for this specific API call compared to the exportReport API call. For example, with this API call we use the additionalDataOptions parameter as opposed to the additionalOptions parameter. As with the exportReport and exportResult APIs, the new parameters to control the number of versions of SPO and OneDrive documents added to the review set are as follows:

- cloudAttachmentVersion: The versions of cloud attachments to include in messages (e.g. latest, latest 10, latest 100, or all). This controls how many versions of a file are collected when a cloud attachment is contained within an email, Teams, or Viva Engage message. If version shared is configured, this is also always returned.
- documentVersion: The versions of files in SharePoint to include (e.g. latest, latest 10, latest 100, or all). This controls how many versions of a file are collected when targeting a SharePoint or OneDrive site directly in the search.
Historically for this API call, you could include the allVersions value in the additionalDataOptions parameter to trigger the collection of all versions of any file stored in SharePoint and OneDrive. With the APIs now natively working with the modern experience for eDiscovery, the allVersions value can still be included in the additionalDataOptions parameter, but it will only apply to files collected directly from a SharePoint or OneDrive site. It will not influence any cloud attachments included in email, Teams, and Viva Engage messages. To collect additional versions of cloud attachments, use the cloudAttachmentVersion parameter to control the number of versions that are included. Also consider moving away from the allVersions value in the additionalDataOptions parameter and switching to the new documentVersion parameter.

To benefit from advanced indexing in the modern experience for eDiscovery, you must trigger advanced indexing as part of the add to review set job. Within the portal, to include partially indexed items and run advanced indexing you would make the following selections. To achieve this via the API, ensure you include the following parameters and values in the request body of the API call.

| Parameter | Value | Option from the portal |
| --- | --- | --- |
| additionalDataOptions | advancedIndexing | Perform advanced indexing on partially indexed items |
| itemsToInclude | searchHits, partiallyIndexed | Indexed items that match your search query and partially indexed items |
| additionalDataOptions | locationsWithoutHits | Exclude partially indexed items in locations without search hits |

Historically, the API call didn't support the add to review set job options to convert Teams, Viva Engage, and Copilot interactions into HTML transcripts and to collect up to 12 hours of related conversations when a message matches a search.
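Expressed as a JSON request body, the advanced-indexing portal selections described above correspond to the following fragment (a minimal sketch using only the parameters and values from this section):

```json
{
  "additionalDataOptions": "advancedIndexing,locationsWithoutHits",
  "itemsToInclude": "searchHits,partiallyIndexed"
}
```

In a real call these fields would sit alongside the search reference and version parameters shown in the full example for this API.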
With the APIs now natively working with the modern experience for eDiscovery, this is now possible through support for the htmlTranscripts and messageConversationExpansion values in the additionalDataOptions parameter. As an example, let's look at the following add to review set job from the portal and use the Microsoft Graph PowerShell module to invoke the addToReviewSet API call with the updated request body.

$commitParams = @{
    search = @{
        id = $searchID
    }
    additionalDataOptions = "linkedFiles,advancedIndexing,htmlTranscripts,messageConversationExpansion,locationsWithoutHits"
    cloudAttachmentVersion = "latest"
    documentVersion = "latest"
    itemsToInclude = "searchHits,partiallyIndexed"
}
$params = $commitParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/addToReviewSet"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

Export from Review Set

ediscoveryReviewSet: export

This API call enables you to export items from a review set within an eDiscovery case. Historically with this API, the exportStructure parameter was used to control the desired export format. Two values could be used: directory and pst. This parameter has been updated to include a new value of msg.

Note: The directory value is deprecated in the new UX but remains available in v1.0 of the API call for backwards compatibility. Going forward you should use msg alongside the new exportOptions values.

The exportStructure parameter will only control how email items are saved: within PST files or as individual .msg files. With the APIs now natively working with the modern experience for eDiscovery, we need to account for the additional flexibility customers have to control the structure of their export package. An example of the options available in the direct export job can be seen below.
As with the exportResult API call for direct export, new values have been added to the exportOptions parameter for this API call. They control the export package structure for both Exchange and SharePoint/OneDrive items.

| Property | Option from Portal |
| --- | --- |
| splitSource | Organize data from different locations into separate folders or PSTs |
| includeFolderAndPath | Include folder and path of the source |
| condensePaths | Condense paths to fit within 259 characters limit |
| friendlyName | Give each item a friendly name |

Organizations are free to mix and match which export options they include in the request body to meet their own organizational requirements. To receive an equivalent output structure to previously using the pst value in the exportStructure parameter, include all of the above values in the exportOptions parameter within the request body. An example using the Microsoft Graph PowerShell module can be found below.

$exportName = "ReviewSetExport - PST"
$exportParams = @{
    outputName = $exportName
    description = "Exporting all items from the review set"
    exportOptions = "originalFiles,includeFolderAndPath,splitSource,condensePaths,friendlyName"
    exportStructure = "pst"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/export"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

To receive an equivalent output structure to previously using the directory value in the exportStructure parameter, I would instead use the msg value within the request body. As the condensed directory structure format exports all items into a single folder, all named based on a uniquely assigned identifier, I do not need to include the new values added to the exportOptions parameter.
An example using the Microsoft Graph PowerShell module can be found below.

$exportName = "ReviewSetExport - MSG"
$exportParams = @{
    outputName = $exportName
    description = "Exporting all items from the review set"
    exportOptions = "originalFiles"
    exportStructure = "msg"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/export"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

Continuing to use the directory value in exportStructure will produce the same output as if msg was used.

Wrap Up

Thank you for your time reading through this post. Hopefully you are now equipped with the information needed to make the most of the new modern experience for eDiscovery when making your Graph API calls.

General Availability of Microsoft Purview eDiscovery Graph API for E3 Customers
As of December 1, 2025, the Microsoft Purview eDiscovery Graph API Standard hit General Availability (GA). It provides a programmatic way to manage eDiscovery cases, searches, holds, and exports for organizations that have only M365 E3 licenses. It extends capabilities that were previously exclusive to eDiscovery Premium customers with M365 E5 (or equivalent add-on SKU) licenses.

With the eDiscovery Graph APIs, organizations can realize efficiency savings by interacting programmatically with the following elements of their eDiscovery workflow:

- Case management – create new cases, list or get case details, update cases, and close or reopen cases.
- Hold management – place content locations on hold (or remove holds) within a case and list hold policies.
- Search management – create new searches, list existing searches, update search parameters, delete searches, and run a statistics job on a search.
- View progress of eDiscovery jobs – view lists of all jobs run in the case, including their status and run times.
- Export operations – trigger an export job from a search, generate an export report, and download the export packages programmatically.
- Execute search and purge operations – purge email messages from mailboxes identified in a modern UX search.

For existing customers utilizing the legacy eDiscovery PowerShell cmdlets, this presents an opportunity to enrich and enhance any existing script-based processes. eDiscovery investigators, and the teams supporting them, can benefit from the Microsoft Graph-based PowerShell cmdlets to realize efficiency savings without implementing full end-to-end automation.

Why use Graph APIs instead of eDiscovery PowerShell cmdlets?

Moving from the old Exchange/Compliance PowerShell cmdlets to the Graph API might seem like a significant change. The Graph API is aligned with the latest Purview eDiscovery (modern) experience, enabling organizations to scale their workflows and benefit fully from its new features and improvements.
Searches created and run through the APIs can be viewed and managed from the Purview portal, and Microsoft continues to update and enhance the Graph APIs as Microsoft Purview eDiscovery evolves. The legacy cmdlets, by contrast, are tied to the older eDiscovery (Classic) model, with limitations in the eDiscovery activities you can perform, and no new capabilities or enhancements to the legacy cmdlets are planned.

What do I need to do to start using the APIs?

For customers with only E3 licenses in their tenant, before you can start using the APIs you must enable Microsoft Purview pay-as-you-go features, as per the following article: Enable Microsoft Purview pay-as-you-go features via subscription | Microsoft Learn

If, today, your eDiscovery investigators and supporting operational teams connect to Security & Compliance PowerShell using the Exchange Online PowerShell module and authenticate as themselves, then the delegated permission model is the equivalent model with the APIs. Investigators using the Microsoft Graph PowerShell module eDiscovery cmdlets:

- Can only work with cases they have created or have been granted access to
- Are limited in their actions based on the eDiscovery Purview roles assigned to them

To enable investigators to authenticate and use the eDiscovery cmdlets from the Microsoft Graph PowerShell module, the Microsoft Graph Command Line Tools enterprise app must be added to Entra ID and the appropriate Graph API permissions consented to. There are two eDiscovery-related Microsoft Graph permissions, and organizations should select the appropriate permissions based on their requirements.
They are as follows:

- eDiscovery.Read.All: Allows the app to read eDiscovery objects such as cases, searches, holds, and other related objects (link)
- eDiscovery.ReadWrite.All: Allows the app to read and write eDiscovery objects such as cases, searches, holds, and other related objects (link)

You can check whether these permissions are already granted to the Microsoft Graph Command Line Tools by searching for and reviewing the enterprise app in Microsoft Entra. If they have not been granted, an administrator with the Application Administrator, Cloud Application Administrator, or Privileged Role Administrator Entra role assigned can use the following steps to grant the permissions and provide admin consent for all users in the tenant. Because delegated permissions are being used in this scenario, even though the consent applies to all users, the signed-in user must also be assigned the relevant eDiscovery Purview roles to make use of the APIs.

1. Install the Microsoft Graph PowerShell module (link)
2. Connect to Microsoft Graph using the following PowerShell cmdlet:
   Connect-MgGraph -Scopes "eDiscovery.ReadWrite.All" -TenantId "<tenant id>"
3. On the permissions screen, select "Consent on behalf of your organization" and select Accept

Now that this has been configured, eDiscovery investigators can use the Microsoft Graph PowerShell module to connect and make use of its eDiscovery cmdlets.

Frequently asked questions

Where can I find details on what APIs are available and what they can do?

Documentation for the Microsoft Purview eDiscovery Graph APIs can be found here: Use the Microsoft Purview eDiscovery API in Microsoft Graph - Microsoft Graph v1.0 | Microsoft Learn

The documentation provides information on what each API call does, what properties are supported, and examples of the requests and responses.

Are there costs associated with using the Microsoft Purview eDiscovery Graph API Standard?
Currently, usage of all APIs except the Export API (ediscoverySearch: exportResult - Microsoft Graph v1.0 | Microsoft Learn) does not contribute towards billing. Usage of the Export API is billed based on the volume of data exported. Each organization receives an included amount of free storage per month (50 GB), with additional usage billed at a set price per GB ($10). More information can be found here: Billing in eDiscovery | Microsoft Learn

What happens if I have a mix of E3 and E5 licenses?

When triggering an export using the Export API for a Microsoft Purview eDiscovery case with the premium features enabled, no costs will be incurred. When triggering an export using the Export API for a Microsoft Purview eDiscovery case with the premium features disabled, it will contribute towards billing.

Can I still use PowerShell to work with eDiscovery cases?

Yes, you can use PowerShell cmdlets to work with Microsoft Purview eDiscovery via the APIs. Within the Microsoft Graph PowerShell module there are eDiscovery cmdlets available. When reviewing the API documentation, the appropriate PowerShell cmdlets are listed within the examples section. Take, for instance, creating a new case: when reviewing the documentation (link), we can see that the New-MgSecurityCaseEdiscoveryCase cmdlet is available.

Can I integrate the APIs into a custom-developed application?

Yes. You can install the Microsoft Graph SDK, which is available for the following languages:

- .NET
- Go
- Java
- JavaScript
- PHP
- PowerShell
- Python

More information on installing the Microsoft Graph SDK can be found here: Install a Microsoft Graph SDK - Microsoft Graph | Microsoft Learn

Can I use Microsoft Graph application permissions if I am an E3 customer?

For E3 customers, you can only make use of Microsoft Graph delegated permissions when working with eDiscovery cases without premium features. The use of application permissions requires E5 licenses and eDiscovery Premium cases.
Do third-party applications integrate with the Microsoft Purview eDiscovery APIs?

Please check with your third-party vendor for their support for the Microsoft Purview eDiscovery APIs.

Is this available in GCC, GCC High, and DoD tenants?

Yes, it is available across these environments.

Is there any scenario-based guidance with examples on how to use the Graph API?

There will be soon! We will be releasing further scenario-based guidance to help both Microsoft 365 E3 and E5 organizations adopt and benefit from the Microsoft Purview eDiscovery Graph APIs.

Building Secure, Enterprise Ready AI Agents with Purview SDK and Agent Framework
At Microsoft Ignite, we announced the public preview of Purview integration with the Agent Framework SDK—making it easier to build AI agents that are secure, compliant, and enterprise-ready from day one.

AI agents are quickly moving from demos to production. They reason over enterprise data, collaborate with other agents, and take real actions. As that happens, one thing becomes non-negotiable: governance has to be built in. That's where the Purview SDK comes in.

Agentic AI Changes the Security Model

Traditional apps expose risks at the UI or API layer. AI agents are different. Agents can:

- Process sensitive enterprise data in prompts and responses
- Collaborate with other agents across workflows
- Act autonomously on behalf of users

Without built-in controls, even a well-designed agent can create compliance gaps. The Purview SDK brings Microsoft's enterprise data security and compliance directly into the agent runtime, so governance travels with the agent—not after it.

What You Get with Purview SDK + Agent Framework

This integration delivers a few key things developers and enterprises care about most:

- Inline data protection: Evaluate prompts and responses against Data Loss Prevention (DLP) policies in real time. Content can be allowed or blocked automatically.
- Built-in governance: Send AI interactions to Purview for audit, eDiscovery, communication compliance, and lifecycle management—without custom plumbing.
- Enterprise-ready by design: Ship agents that meet enterprise security expectations from the start, not as a follow-up project.

All of this is done natively through Agent Framework middleware, so governance feels like part of the platform—not an add-on.
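The middleware model described above can be illustrated with a small, self-contained Python sketch. To be clear, this is a mock, not the Purview SDK API: `Decision`, `evaluate`, and `with_dlp` are hypothetical stand-ins, and the regex "DLP check" only mimics what a tenant policy evaluation would decide.

```python
import re
from dataclasses import dataclass

# Hypothetical stand-in for a policy verdict; the real SDK returns
# allow/redact/block decisions from the tenant's configured DLP policies.
@dataclass
class Decision:
    action: str  # "allow", "redact", or "block"
    text: str    # content after enforcement

# Toy DLP check: flags US-SSN-shaped strings. A real deployment would
# call the Purview service rather than match a local regex.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def evaluate(content: str, mode: str = "redact") -> Decision:
    if not SSN.search(content):
        return Decision("allow", content)
    if mode == "block":
        return Decision("block", "")
    return Decision("redact", SSN.sub("[REDACTED]", content))

# Middleware wrapper in the spirit of Agent Framework middleware:
# it screens the prompt before the agent runs and the response after.
def with_dlp(agent_fn):
    def wrapped(prompt: str) -> str:
        pre = evaluate(prompt)
        if pre.action == "block":
            return "Request blocked by policy."
        post = evaluate(agent_fn(pre.text))
        return post.text if post.action != "block" else "Response blocked by policy."
    return wrapped

echo_agent = with_dlp(lambda p: f"Echo: {p}")
print(echo_agent("My SSN is 123-45-6789"))  # Echo: My SSN is [REDACTED]
```

The design point is that the agent function itself never sees or emits unscreened content; the same wrapper pattern extends naturally to agent-to-agent calls.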
How Enforcement Works (Quickly)

When an agent runs:

1. Prompts and responses flow through the Agent Framework pipeline
2. The Purview SDK evaluates content against configured policies
3. A decision is returned: allow, redact, or block
4. Governance signals are logged for audit and compliance

This same model works for:

- User-to-agent interactions
- Agent-to-agent communication
- Multi-agent workflows

Try It: Add Purview SDK in Minutes

Registering the Purview middleware with Agent Framework takes only a few lines of Python. From that point on:

- Prompts and responses are evaluated against the Purview policies set up within the enterprise tenant
- Sensitive data can be automatically blocked
- Interactions are logged for governance and audit

Designed for Real Agent Systems

Most production AI apps aren't single-agent systems. The Purview SDK supports:

- Agent-level enforcement for fine-grained control
- Workflow-level enforcement across orchestration steps
- Agent-to-agent governance to protect data as agents collaborate

This makes it a natural fit for enterprise-scale, multi-agent architectures.

Get Started Today

You can start experimenting right away:

- Try the Purview SDK with Agent Framework: follow the Microsoft Learn docs to configure the Purview SDK with Agent Framework.
- Explore the GitHub samples: see examples of policy-enforced agents in Python and .NET.

Secure AI, Without Slowing It Down

AI agents are quickly becoming production systems—not experiments. By integrating the Purview SDK directly into the Agent Framework, Microsoft is making governance a default capability, not a deployment blocker. Build intelligent agents. Protect sensitive data. Scale with confidence.

Comprehensive Guide to DLP Policy Tips
Feature Support and Compatibility

Q: Which Outlook clients support DLP policy tips?

A: DLP policy tips are supported across several Outlook clients, but the experience and capabilities vary depending on the end user's client version and the Microsoft 365 license (E3 vs. E5). For detailed guidance on policy tip support across Microsoft apps, read more here. Below is a breakdown of policy tip support across Outlook clients:

Glossary:

- Basic policy tip support: Display of simple warnings or notifications based on DLP rules.
- Top 10 predicates: The most commonly used conditions in DLP rules:
  - Content is shared from M365
  - Content contains SITs
  - Content contains sensitivity label
  - Subject or Body contains words or phrases
  - Sender is
  - Sender is a member of
  - Sender domain is
  - Recipient is
  - Recipient domain is
  - Recipient is a member of
- Default oversharing dialog: A built-in popup warning users about potential data oversharing.
- Custom oversharing dialog: A tailored version of the oversharing warning.
- Wait on send: A delay mechanism that gives users time to review sensitive content before sending.
- Out-of-box SITs: Out-of-box sensitive information types (SITs), like SSNs or credit card numbers.
- Custom SITs: User-defined sensitive data patterns.
- Exact Data Match: Used for precise detection of structured sensitive data.

Important considerations:

- Client version matters: Even within the same client (e.g. Outlook Win32), the version must be recent enough to support the latest DLP features. Older builds may lack support for newer DLP features.
- Policy tip visibility: Policy tips may not appear if the DLP rule uses unsupported predicates or if the client is offline.
- Licensing: E5 licenses unlock advanced features like oversharing dialogs and support for custom sensitive information types (SITs).

Q: Why don't policy tips appear for some users or rules?
A: While the underlying DLP rules are always enforced, policy tips may not appear for some users due to several factors:

- Outlook client version: Policy tips are only supported in specific versions of Outlook. For example, older builds of Outlook Win32 may not support the latest DLP capabilities. To ensure the Outlook client version you're using supports the latest capabilities, read more.
- Licensing: Users with E3 licenses may only see basic policy tips, and some features may not be available at all, while E5 licenses unlock advanced DLP capabilities such as the custom oversharing dialog. For more information on licensing, read more.
- Unsupported conditions or predicates: If a DLP rule uses unsupported predicates, the policy tip will not be displayed even though the rule is enforced. To ensure compatibility, refer to our documentation for a list of supported conditions by client version.
- Offline mode: Policy tips rely on real-time evaluation of message content against Data Loss Prevention (DLP) rules by Microsoft 365 services. When a user is offline, their Outlook client cannot communicate with these services, which affects the visibility of policy tips.

What about offline E5 users? Even if a user has an E5 license, which includes advanced DLP features, the client must be online to evaluate and display these advanced policy tips. While the message may still be blocked or logged according to the DLP rule, the user won't see any tip or warning until they reconnect.

Q: Are trainable classifiers supported in policy tips?

A: Yes, but with specific limitations. Trainable classifiers are supported in DLP policy tips, but only under specific conditions related to licensing, client version, and connectivity:

- Licensing: The user must have a Microsoft 365 E5 license. Trainable classifiers are part of Microsoft Purview's advanced classification capabilities, which are only available with E5 or equivalent add-ons.
- Client support: Only certain Outlook clients support policy tips triggered by trainable classifiers. These include Outlook Classic (Win32) and New Outlook for Windows (Monarch). Other clients (such as Outlook Web App (OWA), Outlook for Mac, and Outlook Mobile) do not currently support this feature.
- Connectivity: The Outlook client must be online. Trainable classifiers rely on the Microsoft 365 Data Classification Service (DCS), which performs real-time content evaluation in the cloud. If the client is offline, policy tips based on trainable classifiers will not appear, even though the DLP rule may still be enforced when the message is sent.

Q: Is OCR supported in policy tips?

A: No, there is currently no support for OCR in policy tips. However, our goal is to support OCR in policy tips in the future.

Setup & Configuration

Q: What are the prerequisites for enabling DLP policy tips?

A: DLP policy tips notify users in real time when their actions may violate data protection policies. To enable and use them effectively, the following prerequisites must be met:

Licensing considerations

- Microsoft 365 E5 is required for full feature access, including real-time policy tips, trainable classifiers, and connected experiences.
- Connected experiences must be enabled in the tenant for real-time tips to appear.

| License | Requirement |
| --- | --- |
| Microsoft 365 E5 | Required for full feature support including trainable classifiers, advanced predicates, and connected experiences. |
| Microsoft 365 E3 | Limited support; some advanced features may not be available. |

Client compatibility

DLP policy tips are supported across several Outlook clients, but the experience and capabilities vary depending on the client version, licensing, and configuration. Refer to the comprehensive compatibility matrix (provided at the beginning of this guide) to learn about policy tip support across Outlook clients.

Permissions

To configure and manage DLP policy tips in Microsoft Purview, specific roles and permissions are required.
These permissions ensure that only authorized personnel can create, deploy, and monitor DLP policies and their associated tips.

Required Roles:
- Compliance Administrator: Full access to create, configure, and deploy DLP policies and tips.
- Compliance Data Administrator: Manage DLP policies and view alerts.
- Information Protection Admin: Configure sensitivity labels and integrate them with DLP.
- Security Administrator: View and investigate DLP alerts and incidents.

Q: How do I configure a custom policy tip message using JSON?
A: You can configure a custom policy tip dialog in DLP policies using a JSON file. This allows you to tailor the message shown to users when a policy is triggered, such as for oversharing or sensitive content detection. The JSON must follow the schema outlined in Microsoft's documentation and internal engineering guidance.
Applies to: Microsoft 365 E5 users online with Connected Experiences enabled. This feature is supported in Outlook Classic (Win32) and New Outlook for Windows (Monarch). JSON-based dialogs are not supported in Outlook on the Web (OWA), Mac, or Mobile clients.

Q: Can I localize policy tips for different languages?
A: Localization of DLP policy tips allows users to see messages in their preferred language, improving clarity and compliance across global teams. Microsoft Purview supports localization through JSON-based configuration, but support varies by client.
Supported clients: Outlook Classic (Win32)
How to configure: Use the LocalizationData block in your custom policy tip JSON, then upload the JSON using PowerShell with the NotifyPolicyTipCustomDialog parameter.

Q: What roles and permissions are required to manage DLP policy tips?
A: To manage Data Loss Prevention (DLP) policies and policy tips in Microsoft Purview, you must be assigned one of the following roles. Each role provides a different level of access depending on your responsibilities.
- Compliance Administrator: Full access to create, configure, and deploy DLP policies and policy tips.
- Compliance Data Administrator: Manage DLP policies and access compliance data.
- Information Protection Admin: Configure sensitivity labels and integrate them with DLP policies.
- Security Administrator: View and investigate DLP alerts and incidents.

Note: Microsoft recommends assigning the least privileged role necessary to perform the required tasks. These roles are assigned in the Microsoft Purview portal under Roles and Scopes. Administrative unit-scoped roles are also supported for organizations that segment access by department or geography.

Troubleshooting & Known Issues

Q: Why are policy tips delayed or not appearing at all?
A: If you're not seeing policy tips, work through this checklist:
1. Check Outlook client compatibility. Policy tips are not supported on all Outlook clients. Refer to "Which Outlook clients support DLP policy tips?"
2. Confirm your license. Advanced policy tips (for example, those using trainable classifiers or oversharing dialogs) require a Microsoft 365 E5 license. Refer to "What are the prerequisites for enabling DLP policy tips?"
3. Review your DLP policy configuration and check for unsupported conditions. Refer to "What predicates are supported across different Outlook clients?"
4. Watch for message size limits. Only the first 4 MB of the email body and subject, and 2 MB per attachment, are scanned for real-time tips.
5. Run Microsoft's built-in diagnostic tool to test your DLP policy setup.

Q: What logs or data should I collect for support escalation?
A: To ensure a smooth and complete escalation to Microsoft support or engineering, collect the following logs and metadata, depending on the client type. This helps accelerate triage and resolution.
- Fiddler trace, which must include:
  - Timestamp of the issue
  - Correlation ID (found as updateGuid in the DLP response)
  - Tenant ID
  - User ID / SMTP address
  - Tenant DLP policies and rules
  - Expected rule match conditions and rule IDs
  - (Optional) Draft email or data input (sender, recipient, subject, message body)
- ETL logs from %temp%\Outlook Logging
- PSR logs (Problem Steps Recorder) or screenshots
- Tenant ID
- Tenant DLP policies and rules
- Expected rule match conditions and rule IDs

Q: Are there known limitations with policy tips?
A: Yes:
- Sensitivity labels cannot be detected in compressed files.
- CCSI (SITs/trainable SITs) cannot be detected in encrypted files.

Q: What are the limitations of the custom dialog?
A:
- The title, body, and override justification options can be customized using the JSON file.
- Basic text formatting is allowed: bold, underline, italic, and line break.
- Up to three justification options can be defined, plus one option for free-text input.
- The text for the false positive and acknowledgment options is not customizable.

Below is the required structure of the JSON file that admins create to customize the dialog for matched rules. The keys are all case-sensitive. Formatting and dynamic tokens for matched conditions can only be used in the Body key.

- {} (mandatory): Container.
- LocalizationData (mandatory): Array that contains all the language options; at least one language must be included.
- Language (mandatory): Language code, such as "en", "es", "fr", or "de".
- Title (mandatory): The dialog title; limited to 80 characters.
- Body (mandatory): The dialog body; limited to 1000 characters. Dynamic tokens for matched conditions can be added here.
- Options (optional): Up to three justification options. One more can be added by setting HasFreeTextOption = true.
- HasFreeTextOption (optional): true or false; when true, a text box is displayed below the last option defined in the JSON file.
- DefaultLanguage (mandatory): Must be one of the languages defined within the LocalizationData key.
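Putting the schema above together, here is a minimal illustrative custom-dialog JSON file. The title, body, and justification option text are hypothetical, and the exact nesting of the Options and HasFreeTextOption keys should be verified against Microsoft's published schema before use:

```json
{
  "LocalizationData": [
    {
      "Language": "en",
      "Title": "Sensitive content detected",
      "Body": "This message appears to contain sensitive information. Review it before sending.",
      "Options": [
        "I have a business justification",
        "The recipient is approved to receive this data",
        "I will remove the sensitive content before sending"
      ]
    }
  ],
  "HasFreeTextOption": true,
  "DefaultLanguage": "en"
}
```

Once saved, the file would be uploaded with PowerShell using the NotifyPolicyTipCustomDialog parameter, as described in the localization question earlier in this guide.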
__________________________________________________________________________________

Security Copilot Skilling Series

Security Copilot joins forces with your favorite Microsoft Security products in a skilling series miles above the rest. The Security Copilot Skilling Series is your opportunity to strengthen your security posture through threat detection, incident response, and leveraging AI for security automation. These technical skilling sessions are delivered live by experts from our product engineering teams. Come ready to learn, engage with your peers, ask questions, and provide feedback. Upcoming sessions are noted below and will be available on-demand on the Microsoft Security Community YouTube channel.

Coming Up

February 5 | Identity Risk Management in Microsoft Entra
Speaker: Marilee Turscak
Identity teams face a constant stream of risky user signals, and determining which threats require action can be time-consuming. This webinar explores the Identity Risk Management Agent in Microsoft Entra, powered by Security Copilot, and how it continuously monitors risky identities, analyzes correlated sign-in and behavior signals, and explains why a user is considered risky. Attendees will see how the agent provides guided remediation recommendations, such as password resets or risk dismissal, at scale and supports natural-language interaction for faster investigations. The session also covers how the agent learns from administrator instructions to apply consistent, policy-aligned responses over time.

February 19 | Agents That Actually Work: From an MVP
Speaker: Ugur Koc, Microsoft MVP
Microsoft MVP Ugur Koc will share a real-world workflow for building agents in Security Copilot, showing how to move from an initial idea to a consistently performing agent. The session highlights how to iterate on objectives, tighten instructions, select the right tools, and diagnose where agents break or drift from expected behavior.
Attendees will see practical testing and validation techniques, including how to review agent decisions and fine-tune based on evidence rather than intuition, to help determine whether an agent is production-ready.

March 5 | Conditional Access Optimization Agent: What It Is & Why It Matters
Get a clear, practical look at the Conditional Access Optimization Agent: how it automates policy upkeep, simplifies operations, and uses new post-Ignite updates like Agent Identity and dashboards to deliver smarter, standards-aligned recommendations.

Now On-Demand

January 28 | Security Copilot in Purview Technical Deep Dive
Speakers: Patrick David, Thao Phan, Alexandra Roland
Discover how AI-powered alert triage agents for Data Loss Prevention (DLP) and Insider Risk Management (IRM) are transforming incident response and compliance workflows. Explore new Data Security Posture Management (DSPM) capabilities that deliver deeper insights and automation to strengthen your security posture. This session will showcase real-world scenarios and actionable strategies to help you protect sensitive data and simplify compliance.

January 22 | Building Custom Agents: Unlocking Context, Automation, and Scale
Speakers: Innocent Wafula, Sean Wesonga, and Sebuh Haileleul
Microsoft Security Copilot already features a robust ecosystem of first-party and partner-built agents, but some scenarios require solutions tailored to your organization's specific needs and context. In this session, you'll learn how the Security Copilot agent builder platform and MCP servers empower you to create tailored agents that provide context-aware reasoning and enterprise-scale solutions for your unique scenarios.

December 18 | What's New in Security Copilot for Defender
Speaker: Doug Helton
Discover the latest innovations in Microsoft Security Copilot embedded in Defender that are transforming how organizations detect, investigate, and respond to threats.
This session will showcase powerful new capabilities, like AI-driven incident response, contextual insights, and automated workflows, that help security teams stop attacks faster and simplify operations.
Why Attend:
- Stay Ahead of Threats: Learn how cutting-edge AI features accelerate detection and remediation.
- Boost Efficiency: See how automation reduces manual effort and improves SOC productivity.
- Get Expert Insights: Hear directly from product leaders and explore real-world use cases.
Don't miss this opportunity to future-proof your security strategy and unlock the full potential of Security Copilot in Defender!

December 4 | Discussion of Ignite Announcements
Speakers: Zineb Takafi, Mike Danoski, Oluchi Chukwunwere, Priyanka Tyagi, Diana Vicezar, Thao Phan, Alex Roland, and Doug Helton
Ignite 2025 is all about driving impact in the era of AI, and security is at the center of it. In this session, we unpack the biggest Security Copilot announcements from Ignite on agents and discuss how Copilot capabilities across Intune, Entra, Purview, and Defender deliver end-to-end protection.

November 13 | Microsoft Entra AI: Unlocking Identity Intelligence with Security Copilot Skills and Agents
Speakers: Mamta Kumar, Sr. Product Manager; Margaret Garcia Fani, Sr. Product Manager
This session demonstrates how Security Copilot in Microsoft Entra transforms identity security by introducing intelligent, autonomous capabilities that streamline operations and elevate protection. Customers will discover how to leverage AI-driven tools to optimize conditional access, automate access reviews, and proactively manage identity and application risks, empowering them toward a more secure and efficient digital future.

October 30 | What's New in Copilot in Microsoft Intune
Speaker: Amit Ghodke, Principal PM Architect, CxE CAT MEM
Join us to learn about the latest Security Copilot capabilities in Microsoft Intune.
We will discuss what's new and how you can supercharge your endpoint management experience with the new AI capabilities in Intune.

October 16 | What's New in Copilot in Microsoft Purview
Speaker: Patrick David, Principal Product Manager, CxE CAT Compliance
Join us for an insider's look at the latest innovations in Microsoft Purview, where alert triage agents for DLP and IRM are transforming how we respond to sensitive data risks and improve investigation depth and speed. We'll also dive into powerful new capabilities in Data Security Posture Management (DSPM) with Security Copilot, designed to supercharge your security insights and automation. Whether you're driving compliance or defending data, this session will give you the edge.

October 9 | When to Use Logic Apps vs. Security Copilot Agents
Speaker: Shiv Patel, Sr. Product Manager, Security Copilot
Explore how to scale automation in security operations by comparing the use cases and capabilities of Logic Apps and Security Copilot Agents. This webinar highlights when to leverage Logic Apps for orchestrated workflows and when Security Copilot Agents offer more adaptive, AI-driven responses to complex security scenarios.

All sessions are published to the Microsoft Security Community YouTube channel: Security Copilot Skilling Series Playlist.

__________________________________________________________________________________

Looking for more? Keep up on the latest information on the Security Copilot Blog. Join the Microsoft Security Community mailing list to stay up to date on the latest product news and events. Engage with your peers in one of our Microsoft Security discussion spaces.