Microsoft Purview

Data Quality (DQ) for Standalone Data Assets at Microsoft Purview
Why is Data Quality for your Data Asset critical today?

Many companies today cannot activate their data estate. Our research shows that 75% of companies do not even have a data quality program. This is a major problem because data quality is becoming a central focus of AI: your AI is only as good as your data. In the old days, when systems weren't necessarily content aware, this was tolerable because a human was always in the loop and corrected data as needed. In the new world, AI looks at the content of the data, not just the structure, and if that content is fundamentally off, whether misrepresented, inconsistent, illegible, inaccurate, insecure, or noncompliant, it will make your AI wrong. It will make your AI ugly. And it will impact your company and impact you as an employee building AI and BI/Insights. There is nothing worse than building something that doesn't get used.

What Is Data Quality for a Standalone Data Asset?

A standalone data asset is a dataset that is not linked to any data product. Data Quality (DQ) for standalone assets enables organizations to measure, monitor, and improve the quality of these datasets independently, without requiring them to be part of a data product.

Why This Matters

- Improve Data Before Linking to a Data Product: Users can profile, assess, clean, and standardize data before associating it with a data product. This ensures higher quality at the time of onboarding.
- Make Better Curation Decisions: Understanding the quality of standalone assets helps organizations decide which datasets are suitable for inclusion in governed data products.
- Support Broader Use Cases: Not all datasets are used for analytics. Standalone assets may also support data monetization, AI grounding or training data, and operational workloads.
- Unified Data Quality Management: Organizations can use a single Purview Data Quality solution to manage both standalone assets and data product–associated assets, including issue remediation.
- Optimize Storage and Reduce Costs: Low-quality or unused datasets can be archived to lower-cost storage or removed using data minimization principles, reducing storage costs and improving ROI.
- Accelerate Governance Adoption: Organizations can start measuring and improving data quality immediately, without waiting for formal data product definitions, helping mature governance practices faster.

Measure and Monitor Data Quality for Standalone Assets using Microsoft Purview

To start measuring and monitoring the data quality of a standalone data asset, first add the asset from the data map to a governance domain. If a connection to the data source has not already been configured, you must create one for the selected source system. Once the asset is added to the domain and the data source connection is established, select the asset and run data profiling. You can accept the recommended columns or customize the selection by adding or removing columns. After profiling completes, review the results to understand the structure and quality of the data. Based on profiling insights, you can create custom rules, apply out-of-the-box rules, or use AI-enabled rule suggestions. After adding rules to the selected columns, run a data quality scan to generate column-level data quality scores and assess the overall health of the dataset. To continuously measure and improve data quality, configure data quality error record storage and schedule recurring data quality scans.
You can also associate a standalone data asset—along with its data quality scores and rules—to a data product. This allows data product owners to reuse rules created for standalone assets. The same data asset can be cloned and associated with multiple data products to support different use cases. Note that the data product must be in Draft state before a standalone asset with a data quality score can be associated with it. Additionally, you can configure alerts to notify your team when a data quality score falls below a defined threshold or when a data quality scan fails.

Because Purview uses role-based access control (RBAC), only users with the Data Product Owner role can associate a standalone data asset with a data product. Users must also have access to the relevant governance domain. A Data Product Owner must have a Domain Reader (local or global) or Data Quality Reader role to browse standalone asset DQ pages and associated assets.

Summary

Data Quality for standalone data assets allows organizations to independently assess, improve, and monitor datasets before or without linking them to data products. This approach increases governance agility, improves decision-making, supports diverse use cases beyond analytics, reduces storage costs, and accelerates enterprise data maturity.

References

Data Quality Scan for a Data Asset in Unified Catalog (preview) | Microsoft Learn
Ask Microsoft Anything: Purview Data Security Investigations Part 2

Microsoft Purview Data Security Investigations is now generally available! Data Security Investigations enables customers to quickly uncover and mitigate data security and sensitive data risks buried in their data using AI-powered deep content analysis, both proactively and reactively. With Data Security Investigations, security teams can identify investigation-relevant data, analyze it at scale with AI, and mitigate uncovered risks in a single unified solution. By streamlining complex, time-consuming investigative workflows, organizations can move from signal to insight in hours rather than weeks or months. Whether you're responding to an active data security incident or proactively assessing data exposure, DSI gives data security teams the clarity, speed, and confidence to investigate data risk in today's threat landscape.

Join us for an AMA with the team that developed Microsoft Purview's newest solution to go over new features, our refined business model and more!

What is an AMA?

An 'Ask Microsoft Anything' (AMA) session is an opportunity for you to engage directly with Microsoft employees! This AMA will consist of a short presentation followed by taking questions on-camera from the comment section down below! Ask your questions and give your feedback, and our awesome Microsoft subject matter experts will engage and respond directly in the video feed. We know this timeslot might not work for everyone, so feel free to ask your questions at any time leading up to the event and the experts will do their best to answer during the live hour. This page will stay up, so come back and use it as a resource anytime. We hope you enjoy!
Purview Data Map scanning Microsoft Fabric and no classifications applied or scan rule sets

Microsoft Purview cannot currently apply built-in or custom classifications (including sensitive information types) to metadata discovered from Microsoft Fabric workspace scans. While Purview can register Fabric workspaces and extract structural metadata (workspaces, Lakehouses, Warehouses, tables, columns, and limited lineage), classification rules are not executed against Fabric assets in the same way they are for supported sources such as Azure SQL, ADLS Gen2, or on-premises databases. This results in classification gaps across a core enterprise analytics platform.

Why This Is a Significant Service Omission

1. Breaks the Core Value Proposition of Purview
2. Undermines Regulatory and Risk Management Controls
3. Creates an Inconsistent Governance Experience
4. Blocks Downstream Purview Capabilities
5. Forces Anti-Patterns and Workarounds

The lack of automated classification support for Microsoft Fabric workspace data represents a material service omission in Microsoft Purview, significantly limiting its effectiveness as a unified data governance platform and introducing avoidable compliance, operational, and assurance risks, particularly in regulated environments.

Are there plans to improve this, and if so, what are the timescales?
Introducing Security Dashboard for AI (Now in Public Preview)

AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions across their entire AI estate, spanning AI platforms, apps, and agents. 53% of security professionals say their current AI risk management needs improvement, presenting an opportunity to better identify, assess and manage risk effectively. [1] At the same time, 86% of leaders prefer integrated platforms over fragmented tools, citing better visibility, fewer alerts and improved efficiency. [2]

To address these needs, we are excited to announce that the Security Dashboard for AI, previously announced at Microsoft Ignite, is available in public preview. This unified dashboard aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview, enabling users to see left-to-right across purpose-built security tools from within a single pane of glass. The dashboard equips CISOs and AI risk leaders with a governance tool to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. Security teams can continue using the tools they trust while empowering security leaders to govern and collaborate effectively.

Gain Unified AI Risk Visibility

Consolidating risk signals from across purpose-built tools can simplify AI asset visibility and oversight, increase security teams' efficiency, and reduce the opportunity for human error. The Security Dashboard for AI provides leaders with unified AI risk visibility by aggregating security, identity, and data risk across Defender, Entra, and Purview into a single interactive dashboard experience. The Overview tab of the dashboard provides users with an AI risk scorecard, giving immediate visibility into where there may be risks for security teams to address. It also assesses an organization's implementation of Microsoft security for AI capabilities and provides recommendations for improving AI security posture. The dashboard also features an AI inventory with comprehensive views to support AI asset discovery, risk assessments, and remediation actions for broad coverage of AI agents, models, MCP servers, and applications. The dashboard provides coverage for all Microsoft AI solutions supported by Entra, Defender and Purview, including Microsoft 365 Copilot, Microsoft Copilot Studio agents, and Microsoft Foundry applications and agents, as well as third-party AI models, applications, and agents, such as Google Gemini, OpenAI ChatGPT, and MCP servers. This supports comprehensive visibility and control, regardless of where applications and agents are built.

Prioritize Critical Risk with Security Copilot's AI-Powered Insights

Risk leaders must do more than just recognize existing risks; they also need to determine which ones pose the greatest threat to their business. The dashboard provides a consolidated view of AI-related security risks and leverages Security Copilot's AI-powered insights to help find the most critical risks within an environment. For example, Security Copilot natural language interaction improves agent discovery and categorization, helping leaders identify unmanaged and shadow AI agents to enhance security posture.
Furthermore, Security Copilot allows leaders to investigate AI risks and agent activities through prompt-based exploration, putting them in the driver's seat for additional risk investigation.

Drive Risk Mitigation

By streamlining risk mitigation recommendations and automating task delegation, organizations can significantly improve the efficiency of their AI risk management processes. This approach can reduce potential hidden AI risk and accelerate compliance efforts, helping to ensure that risk mitigation is timely and accurate. To address this, the Security Dashboard for AI evaluates how organizations put Microsoft's AI security features into practice and offers tailored suggestions to strengthen AI security posture. It leverages Microsoft's productivity tools for immediate action within the practitioner portal, making it easy for administrators to delegate recommendation tasks to designated users.

With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms, eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, the Security Dashboard for AI is included with eligible Microsoft security products customers already use. If an organization is already using Microsoft security products to secure AI, it is already a Security Dashboard for AI customer.

Getting Started

Existing Microsoft Security customers can start using the Security Dashboard for AI today. It is included when a customer has the Microsoft Security products (Defender, Entra and Purview), with no additional licensing required. To begin using the Security Dashboard for AI, visit http://ai.security.microsoft.com or access the dashboard from the Defender, Entra or Purview portals. Learn more about the Security Dashboard for AI on Microsoft Learn.

[1] AuditBoard & Ascend2 Research. The Connected Risk Report: Uniting Teams and Insights to Drive Organizational Resilience. AuditBoard, October 2024.
[2] Microsoft. 2026 Data Security Index: Unifying Data Protection and AI Innovation. Microsoft Security, 2026.
Accelerate Your Security Copilot Readiness with Our Global Technical Workshop Series

The Security Copilot team is delivering virtual hands-on technical workshops designed for technical practitioners who want to deepen their AI for Security expertise with Microsoft Entra, Intune, Microsoft Purview, and Microsoft Threat Protection. These workshops will help you onboard and configure Security Copilot and deepen your knowledge of agents. These free workshops are delivered year-round and available in multiple time zones.

What You'll Learn

Our workshop series combines scenario-based instruction, live demos, hands-on exercises, and expert Q&A to help you operationalize Security Copilot across your security stack. These sessions are all moderated by experts from Microsoft's engineering teams and are aligned with the latest Security Copilot capabilities. Every session delivers 100% technical content, designed to accelerate real-world Security Copilot adoption.

Who Should Attend

These workshops are ideal for:
- Security Architects & Engineers
- SOC Analysts
- Identity & Access Management Engineers
- Endpoint & Device Admins
- Compliance & Risk Practitioners
- Partner Technical Consultants
- Customer technical teams adopting AI-powered defense

Register now for these upcoming Security Copilot Virtual Workshops

Start building Security Copilot skills—choose the product area and time zone that works best for you. Please take note of the prerequisites for each workshop on the registration page.

Security Copilot Virtual Workshop: Copilot in Defender
- February 4, 2026 at 8:00-9:00 AM (PST) - register here
- March 4, 2026 at 8:00-9:00 AM (PST) - register here
Asia Pacific optimized delivery schedule (time conversion: 4:00-5:30 PM NZDT; 11:00-12:30 AM GMT+8; 8:30-10:00 AM IST; 7:00-8:30 PM PST):
- February 5, 2026 at 2:00-3:30 PM (AEDT) - register here
- March 5, 2026 at 2:00-3:30 PM (AEDT) - register here

Security Copilot Virtual Workshop: Copilot in Entra
- February 25, 2026 at 8:00-9:30 AM (PST) - register here
Asia Pacific optimized delivery schedule (time conversion: 4:00-5:30 PM NZDT; 11:00-12:30 AM GMT+8; 8:30-10:00 AM IST; 7:00-8:30 PM PST):
- February 26, 2026 at 2:00-3:30 PM (AEDT) - register here
- March 26, 2026 at 2:00-3:30 PM (AEDT) - register here

Security Copilot Virtual Workshop: Copilot in Intune
- February 11, 2026 at 8:00-9:30 AM (PST) - register here
- March 11, 2026 at 8:00-9:30 AM (PST) - register here
Asia Pacific optimized delivery schedule (time conversion: 4:00-5:30 PM NZDT; 11:00-12:30 AM GMT+8; 8:30-10:00 AM IST; 7:00-8:30 PM PST):
- February 12, 2026 at 2:00-3:30 PM (AEDT) - register here
- March 12, 2026 at 2:00-3:30 PM (AEDT) - register here

Security Copilot Virtual Workshop: Copilot in Purview
- February 18, 2026 at 8:00-9:30 AM (PST) - register here
- March 18, 2026 at 8:00-9:30 AM (PST) - register here
Asia Pacific optimized delivery schedule (time conversion: 4:00-5:30 PM NZDT; 11:00-12:30 AM GMT+8; 8:30-10:00 AM IST; 7:00-8:30 PM PST):
- February 19, 2026 at 2:00-3:30 PM (AEDT) - register here
- March 19, 2026 at 2:00-3:30 PM (AEDT) - register here

Learn and Engage with the Microsoft Security Community

Log in and follow this Microsoft Security Community Blog and post/interact in the Microsoft Security Community discussion spaces. Follow = click the heart in the upper right when you're logged in 🤍
Join the Microsoft Security Community and be notified of upcoming events, product feedback surveys, and more.
Get early access to Microsoft Security products and provide feedback to engineers by joining the Microsoft Security Advisors.
Learn about the Microsoft MVP Program.
Join the Microsoft Security Community LinkedIn and the Microsoft Entra Community LinkedIn.
Can't Sign confidential documents

Hello, I have a problem. I want to send confidential contracts to customers for signing with Adobe DocuSign. These contracts have a "confidential" label from Purview and are encrypted. But the customer can't sign the contract with DocuSign because of the encryption. Is there a way for them to sign the document? We must encrypt the documents for compliance reasons and our ISMS. Thank you.
Live AMA: Defining AI boundaries with data sensitivity

As AI becomes embedded in everyday work, traditional data security models break down. Copilots and agents can search, summarize, and recombine information at machine speed, creating new exposure paths for sensitive data — even when nothing is formally shared or exfiltrated. In this session, we'll explain why data sensitivity, not data location, is now the true security boundary, and what that shift means for protecting information in the age of AI.

We'll walk through how organizations can establish a shared understanding of what data is sensitive, use sensitivity labels to consistently define how that data should be handled, and automatically enforce protections wherever data is created or used, including in AI experiences. We'll close with a live Ask Me Anything (AMA), where you can bring real-world questions about securing Copilot and agents, scaling classification and labeling, and turning sensitivity into consistent, enforceable controls with Microsoft Purview.
Lifecycle using Custom Protection with Purview Sensitivity Labels

Organizations using Purview Sensitivity Labels with custom protection face a fundamental governance challenge: there is no lifecycle-ready way to maintain, audit, or update per-document user rights as teams evolve. This affects compliance, need-to-know enforcement, and operational security.

Document lifecycle challenges
- Team growth: new members do not inherit document-specific rights.
- Team shrinkage: departing members retain access unless manually removed.
- Employee offboarding: accounts are disabled, but compliance may require explicit removal from protected documents.
- Audit requirements: organizations need to answer "Who has what rights on document X?" — and today, no native tool provides this for custom-protected files.

Existing method | Limitation
Purview PowerShell | Overwrites all existing assignments; no granular updates
MIP Client | Not yet capable of bulk lifecycle operations
OlaProeis/FileLabeler | Great tool, but limited by the same PowerShell constraints

What the tool enables
- Rights audit trail per document
- Controlled lifecycle updates (add/remove/transfer rights)
- Preservation of original files for rollback
- Multi-action batch processing
- Admin-only delegated workflow with MIP superuser role
- Full logging for compliance

Supported operations
- ListRightAssignments – extract all rights from each document under a given label GUID
- SetOwner / AddOwner – assign or add owners
- AddEditor / AddRestrictedEditor / AddViewer – role-based additions
- RemoveAccess – remove any user from all roles
- AddAccessAs – map one user's role to one or more new users
- Multi-action execution – combine operations in a single run
- Safe mode – original files preserved; updated copies created with a trailer

Because this tool can modify access to highly sensitive content, it must be embedded in a controlled workflow: ticket-based approval, delegated admin, MIP superuser assignment, and retention of all logs as part of the audit trail. This ensures compliance with need-to-know, separation of duties, and legal requirements.

I would appreciate feedback from the community and Microsoft product teams on:
- whether similar lifecycle capabilities are planned for Purview
- whether the MIP SDK is the right long-term approach
- how others handle custom-protected document lifecycle today
- interest in collaborating on a more robust open-source version

Max
Introducing eDiscovery Graph API Standard and Enhancements to Premium APIs

We have been busy working to enable organisations that leverage the Microsoft Purview eDiscovery Graph APIs to benefit from the enhancements in the new modern experience for eDiscovery. I am pleased to share that the APIs have now been updated with additional parameters, enabling organisations to benefit from the following features already present in the modern experience within the Purview Portal:

- Ability to control the export package structure and item naming convention
- Trigger advanced indexing as part of the Statistics, Add to Review and Export jobs
- For the first time, the ability to trigger HTML transcription of Teams, Viva and Copilot interactions when adding to a review set
- Benefit from the new statistic options such as Include Categories and Include Keyword Report
- More granular control of the number of versions collected of modern attachments and documents collected directly from OneDrive and SharePoint

These changes were communicated as part of the M365 Message Center post MC1115305. This change involved the beta version of the API calls being promoted into the v1.0 endpoint of the Graph API. The following v1.0 API calls were updated as part of this work:

- Search Estimate Statistics – ediscoverySearch: estimateStatistics
- Search Export Report – ediscoverySearch: exportReport
- Search Export Result – ediscoverySearch: exportResult
- Search Add to ReviewSet – ediscoveryReviewSet: addToReviewSet
- ReviewSet Export – ediscoveryReviewSet: export

The majority of this blog post walks through the updates to each of these APIs and explains how to update your calls to maintain a consistent outcome (and benefit from the new functionality). If you are new to the Microsoft Purview eDiscovery APIs, you can refer to my previous blog post on how to get started with them: Getting started with the eDiscovery APIs | Microsoft Community Hub

First up though, availability of the Graph API for E3 customers

We are excited to announce that starting September 9, 2025, Microsoft will launch the eDiscovery Graph API Standard, a new offering designed to empower Microsoft 365 E3 customers with secure, automated data export capabilities. The new eDiscovery Graph API offers scalable, automated exports with secure credential management, and improved performance and reliability for Microsoft 365 E3 customers. The new API enables automation of the search, collect, hold, and export flow from Microsoft Purview eDiscovery. While it doesn't include premium features like Teams/Yammer conversations or advanced indexing (available only with the Premium Graph APIs), it delivers meaningful value for Microsoft 365 E3 customers needing to automate structured legal exports.

Key capabilities:
- Export from Exchange, SharePoint, Teams, Viva Engage and OneDrive for Business
- Case, search, hold and export management
- Integration with partner/vendor workflows
- Support for automation that takes advantage of new features within the modern user experience

Pricing & Access

Microsoft will offer 50 GB of included export volume per tenant per month, with additional usage billed at $10/GB—a price point that balances customer value, sustainability, and market competitiveness. The Graph API Standard will be available in public preview starting September 9. For more details on pay-as-you-go features in eDiscovery and Purview, refer to the following links.

Billing in eDiscovery | Microsoft Learn
Enable Microsoft Purview pay-as-you-go features via subscription | Microsoft Learn
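Before looking at the individual calls: the PowerShell examples later in this post assume an authenticated Microsoft Graph session and existing $caseID and $searchID variables. The following is a minimal sketch of producing them with the Microsoft Graph PowerShell module; the display names and KQL query are hypothetical, and the delegated eDiscovery.ReadWrite.All scope is assumed.

# Sign in with an account that holds the required eDiscovery permissions
Connect-MgGraph -Scopes "eDiscovery.ReadWrite.All"

# Create a case, then a search in that case scoped by a KQL content query
$case = Invoke-MgGraphRequest -Method Post `
    -Uri "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases" `
    -Body (@{ displayName = "Contoso Investigation" } | ConvertTo-Json)
$caseID = $case.id

$search = Invoke-MgGraphRequest -Method Post `
    -Uri "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches" `
    -Body (@{ displayName = "Search 1"; contentQuery = '"Project Falcon"' } | ConvertTo-Json)
$searchID = $search.id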
Wait, but what about the custodian and noncustodial locations workflow in eDiscovery Classic (Premium)?

As you are probably aware, in the modern user experience for eDiscovery there have been some changes to the Data Sources tab and how it is used in the workflow. Typically, organisations leveraging the Microsoft Purview eDiscovery APIs previously would have used the custodian and noncustodial data sources APIs to add the relevant data sources to the case:

ediscoveryCustodian resource type - Microsoft Graph v1.0 | Microsoft Learn
ediscoveryNoncustodialDataSource resource type - Microsoft Graph v1.0 | Microsoft Learn

Once added via these API calls, the locations would be bound to a search when creating it. This workflow remains supported in the API for backwards compatibility. This includes the creation of system-generated case hold policies when applying holds to the locations via these APIs. Organisations can continue to use this approach with the APIs. However, to simplify your code and workflow, consider using the following API call to add additional sources directly to the search (a minimal sketch follows the notes below):

Add additional sources - Microsoft Graph v1.0 | Microsoft Learn

Some key things to note if you continue to use the custodian and noncustodial data sources APIs in your automation workflow:
- This will not populate the new Data Sources tab in the modern experience for eDiscovery
- They can continue to be queried via the API calls
- Advanced indexing triggered via these APIs will have no influence on whether advanced indexing is used in jobs triggered from a search
- Make sure you use the new parameters to trigger advanced indexing on the job when running the Statistics, Add to Review Set and Direct Export jobs
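Below is a minimal sketch of that call, assuming the $caseID and $searchID variables from the getting-started sketch above and a placeholder site URL. The request body follows the documented source shapes (microsoft.graph.security.siteSource shown here; a userSource with an email property works the same way).

# Add a SharePoint site directly to the existing search as an additional source
$sourceParams = @{
    "@odata.type" = "microsoft.graph.security.siteSource"
    site = @{
        webUrl = "https://contoso.sharepoint.com/sites/Research"  # placeholder URL
    }
}
$params = $sourceParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/additionalSources"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params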
Generating Search Statistics

ediscoverySearch: estimateStatistics

In eDiscovery Premium (Classic) and the previous version of the APIs, generating statistics was a mandatory step before you could progress to either adding the search to a review set or triggering a direct export. In the new modern experience for eDiscovery, this step is optional. Organizations that previously generated search statistics but never checked or used the results before moving on to a review set or direct export can now skip it.

If organizations do want to continue to generate statistics, calling the updated API with the same parameters will continue to generate statistics for the search. An example of a previous call would look as follows:

POST /security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/estimateStatistics

Historically this API didn't require a request body. With the APIs now natively working with the modern experience for eDiscovery, the API call now supports a request body, enabling you to benefit from the new statistic options. Details on these new options can be found in the links below.

Create a search for a case in eDiscovery | Microsoft Learn
Evaluate and refine search results in eDiscovery | Microsoft Learn

If a search is run without a request body, it will still generate the following information:
- Total matches and volume
- Number of locations searched and the number of locations with hits
- Number of data sources searched and the number of data sources with hits
- The top five data sources that make up the most search hits matching your query
- Hit count by location type (mailbox versus site)

As the API now natively works with the modern experience for eDiscovery, you can optionally include a request body to pass the statisticsOptions parameter in the POST API call. With the changes to how advanced indexing works in the new UX and the additional reporting categories available, you can use the statisticsOptions parameter to trigger the generate statistics job with the additional options available in the modern UX. The values you can include are detailed in the table below.

Property | Option from the portal
includeRefiners | Include categories: Refine your view to include people, sensitive information types, item types, and errors.
includeQueryStats | Include query keywords report: Assess keyword relevance for different parts of your search query.
includeUnindexedStats | Include partially indexed items: We'll provide details about items that weren't fully indexed. These partially indexed items might be unsearchable or partially searchable.
advancedIndexing | Perform advanced indexing on partially indexed items: We'll try to reindex a sample of partially indexed items to determine whether they match your query. After running the query, check the Statistics page to review information about partially indexed items. Note: can only be used if includeUnindexedStats is also included.
locationsWithoutHits | Exclude partially indexed items in locations without search hits: Ignore partially indexed items in locations with no matches to the search query. Checking this setting will only return partially indexed items in locations where there is already at least one hit. Note: can only be used if includeUnindexedStats is also included.

In eDiscovery Premium (Classic), advanced indexing took place when a custodian or noncustodial data location was added to the Data Sources tab. This meant that when you triggered the estimate statistics call on the search, it would include results from both the native Exchange and SharePoint indexes and the advanced index. In the modern experience for eDiscovery, advanced indexing runs as part of the job and must be selected as an option on the job. Note that not all searches will benefit from advanced indexing; for example, a simple date range search on a mailbox or SharePoint site will still hit partially indexed items, because even partially indexed email and SharePoint file items have date metadata in the native indexes.

The following example uses PowerShell and the Microsoft Graph PowerShell module, passing the new statisticsOptions parameter to the POST call and selecting all available options.
# Generate estimates for the newly created search
$statParams = @{
    statisticsOptions = "includeRefiners,includeQueryStats,includeUnindexedStats,advancedIndexing,locationsWithoutHits"
}
$params = $statParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/estimateStatistics"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
Write-Host "Estimate statistics generation triggered for search ID: $searchID"

Once run, it will create a generated statistics job with the additional options selected.
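The estimate job runs asynchronously. The following is a minimal polling sketch using the search's lastEstimateStatisticsOperation navigation property; the status, indexedItemCount, indexedItemsSize, mailboxCount and siteCount properties are taken from the ediscoveryEstimateOperation documentation, and failure states are ignored for brevity.

# Poll the last estimate operation on the search until the job completes
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/lastEstimateStatisticsOperation"
do {
    Start-Sleep -Seconds 60
    $estimate = Invoke-MgGraphRequest -Method Get -Uri $uri
    Write-Host "Estimate status: $($estimate.status)"
} while ($estimate.status -in @("notStarted", "running"))

Write-Host "Indexed items: $($estimate.indexedItemCount) ($($estimate.indexedItemsSize) bytes)"
Write-Host "Locations: $($estimate.mailboxCount) mailboxes, $($estimate.siteCount) sites"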
Direct Export - Report

ediscoverySearch: exportReport

This API enables you to generate an item report directly from a search, without taking the data into a review set or exporting the items that match the search. With the APIs now natively working with the modern experience for eDiscovery, new parameters have been added to the request body, as well as new values for existing parameters. The new parameters are as follows:

- cloudAttachmentVersion: The versions of cloud attachments to include in messages (e.g. latest, latest 10, latest 100 or all). This controls how many versions of a file are collected when a cloud attachment is contained within an email, Teams or Viva Engage message. If a specific shared version is configured, that version is also always returned.
- documentVersion: The versions of files in SharePoint to include (e.g. latest, latest 10, latest 100 or all). This controls how many versions of a file are collected when targeting a SharePoint or OneDrive site directly in the search.

These new parameters reflect the changes made in the modern experience for eDiscovery, which gives eDiscovery managers more granular control to apply different collection options based on where the SharePoint item was collected from (e.g. directly from a SharePoint site versus a cloud attachment link included in an email). Within eDiscovery Premium (Classic), the All Document Versions option applied both to SharePoint and OneDrive files collected directly from SharePoint and to any cloud attachments contained within email, Teams and Viva Engage messages.

Historically for this API, you could include the allDocumentVersions value in the additionalOptions parameter to trigger the collection of all versions of any file stored in SharePoint and OneDrive. The allDocumentVersions value can still be included in the additionalOptions parameter, but it will only apply to files collected directly from a SharePoint or OneDrive site; it will not influence any cloud attachments included in email, Teams and Viva Engage messages. To collect additional versions of cloud attachments, use the cloudAttachmentVersion parameter to control the number of versions included. Also consider moving from the allDocumentVersions value in the additionalOptions parameter to the new documentVersion parameter.

As described earlier, to benefit from advanced indexing in the modern experience for eDiscovery, you must trigger advanced indexing as part of the direct export job. To include partially indexed items and run advanced indexing, as you would by making the corresponding selections in the portal, include the following parameters and values in the request body of the API call:

Parameter | Value | Option from the portal
additionalOptions | advancedIndexing | Perform advanced indexing on partially indexed items
exportCriteria | searchHits, partiallyIndexed | Indexed items that match your search query and partially indexed items
exportLocation | responsiveLocations, nonresponsiveLocations | Exclude partially indexed items in locations without search hits

Finally, in the new modern experience for eDiscovery, more granular control has been introduced, enabling organisations to independently choose to convert Teams, Viva Engage and Copilot interactions into HTML transcripts and to collect up to 12 hours of related conversations when a message matches a search. This is reflected in the job settings by the following options:

- Organize conversations into HTML transcripts
- Include Teams and Viva Engage conversations

In the classic experience, this was a single option titled Teams and Yammer Conversations that did both, controlled by including the teamsAndYammerConversations value in the additionalOptions parameter. With the APIs now natively working with the modern experience for eDiscovery, the teamsAndYammerConversations value can still be included in the additionalOptions parameter, but it will only trigger the collection of up to 12 hours of related conversations when a message matches a search, without converting the items into HTML transcripts. To convert them, include the new htmlTranscripts value in the additionalOptions parameter.

As an example, let's take the following direct export report job and use the Microsoft Graph PowerShell module to call the exportReport API with the updated request body.

$exportName = "New UX - Direct Export Report"
$exportParams = @{
    displayName = $exportName
    description = "Direct export report from the search"
    additionalOptions = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing"
    exportCriteria = "searchHits,partiallyIndexed"
    documentVersion = "recent10"
    cloudAttachmentVersion = "recent10"
    exportLocation = "responsiveLocations"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportReport"
$exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
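The export job is also asynchronous. The sketch below is a variant of the POST above that captures the Location response header (which points at the export operation) and polls it until completion; the -ResponseHeadersVariable parameter of Invoke-MgGraphRequest is used for the capture, and the exportFileMetadata property with its downloadUrl is an assumption based on the ediscoveryExportOperation documentation.

# Variant of the POST above: capture headers so the export operation can be polled
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params -ResponseHeadersVariable headers
$operationUri = $headers.Location | Select-Object -First 1
do {
    Start-Sleep -Seconds 60
    $operation = Invoke-MgGraphRequest -Method Get -Uri $operationUri
    Write-Host "Export status: $($operation.status)"
} while ($operation.status -in @("notStarted", "running"))

# On success, each file in the export package exposes a short-lived download URL
# (exportFileMetadata/downloadUrl assumed per the ediscoveryExportOperation docs)
foreach ($file in $operation.exportFileMetadata) {
    Write-Host "$($file.fileName) -> $($file.downloadUrl)"
}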
Direct Export - Results

ediscoverySearch: exportResult - Microsoft Graph v1.0 | Microsoft Learn

This API call enables you to export the items from a search without taking the data into a review set. Everything in the section above on the changes to the exportReport API also applies to this call. However, because this call exports the items rather than just the report, the request body must also describe how the export package should look.

Previously, with direct export for eDiscovery Premium (Classic), you had three options in the UX and in the API to define the export format:

Option | Exchange export structure | SharePoint / OneDrive export structure
Individual PST files for each mailbox | A PST is created for each mailbox. The structure of each PST reflects the folders within the mailbox, with emails stored based on their original location and named based on their subject. | A folder is created for each site. Within each folder, the structure reflects the SharePoint/OneDrive site, with documents stored based on their original location and named based on their document name.
Individual .msg files for each message | A folder is created for each mailbox. The structure reflects the folders within the mailbox, with emails stored as .msg files based on their original location and named based on their subject. | As above.
Individual .eml files for each message | A folder is created for each mailbox. The structure reflects the folders within the mailbox, with emails stored as .eml files based on their original location and named based on their subject. | As above.

Historically with this API, the exportFormat parameter was used to control the desired export format, with three possible values: pst, msg and eml. This parameter is still relevant, but it now only controls how email items will be saved: in a PST file, as individual .msg files, or as individual .eml files.

Note: the eml export format option is deprecated in the new UX. Going forward you should use either pst or msg.

With the APIs now natively working with the modern experience for eDiscovery, we need to account for the additional flexibility customers have to control the structure of their export package. More information on the export package options and what they control can be found in the following link.

https://learn.microsoft.com/en-gb/purview/edisc-search-export#export-package-options

To support this, new values have been added to the additionalOptions parameter for this API call. These must be included in the request body, otherwise the export structure will be as follows:

exportFormat value | Exchange export structure | SharePoint / OneDrive export structure
pst | PST files are created containing data from multiple mailboxes. All emails are contained within a single folder within the PST and named based on an assigned unique identifier (GUID). | One folder for all documents. All documents are contained within a single folder and named based on an assigned unique identifier (GUID).
msg | A folder is created containing data from all mailboxes. All emails are contained within a single folder, stored as .msg files and named based on an assigned unique identifier (GUID). | As above.

The new values added to the additionalOptions parameter are as follows. They control the export package structure for both Exchange and SharePoint/OneDrive items.

Property | Option from the portal
splitSource | Organize data from different locations into separate folders or PSTs
includeFolderAndPath | Include folder and path of the source
condensePaths | Condense paths to fit within 259 characters limit
friendlyName | Give each item a friendly name

Organizations are free to mix and match which export options they include in the request body to meet their own requirements. To receive a similar output structure to the previous behaviour of the pst or msg values in the exportFormat parameter, include all of the above values in the additionalOptions parameter.

For example, to generate a direct export where email items are stored in separate PSTs per mailbox, the structure of the PST files reflects each mailbox, and each item is named per the subject of the email, I would use the Microsoft Graph PowerShell module to call the exportResult API with the updated request body.
$exportName = "New UX - DirectExportJob - PST" $exportParams = @{ displayName = $exportName description = "Direct export of items from the search" additionalOptions = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing,includeFolderAndPath,splitSource,condensePaths,friendlyName" exportCriteria = "searchHits,partiallyIndexed" documentVersion = "recent10" cloudAttachmentVersion = "recent10" exportLocation = "responsiveLocations" exportFormat = "pst" } $params = $exportParams | ConvertTo-Json -Depth 10 $uri = “https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportResult" $exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params If I want to export the email items as individual .msg files instead of storing them in PST files; I would use the Microsoft Graph PowerShell module to call the exportResults API call with the updated request body. $exportName = "New UX - DirectExportJob - MSG" $exportParams = @{ displayName = $exportName description = "Direct export of items from the search" additionalOptions = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing,includeFolderAndPath,splitSource,condensePaths,friendlyName" exportCriteria = "searchHits,partiallyIndexed" documentVersion = "recent10" cloudAttachmentVersion = "recent10" exportLocation = "responsiveLocations" exportFormat = "msg" } $params = $exportParams | ConvertTo-Json -Depth 10 $uri = " https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportResult" Add to Review Set ediscoveryReviewSet: addToReviewSet This API call enables you to commit the items that match the search to a Review Set within an eDiscovery case. This enables you to review, tag, redact and filter the items that match the search without exporting the data from the M365 service boundary. Historically with this API call it was more limited compared to triggering the job via the eDiscovery Premium (Classic) UI. With the APIs now natively working with the modern experience for eDiscovery organizations can make use of the enhancements made within the modern UX and have greater flexibility in selecting the options that are relevant for your requirements. There is a lot of overlap with previous sections, specifically the “Direct Export – Report” section on what updates are required to benefit from updated API. They are as follows: Controlling the number of versions of SPO and OneDrive documents added to the review set via the new cloudAttachmentVersion and documentVersion parameters Enabling organizations to trigger the advanced indexing of partial indexed items during the add to review set job via new values added to existing parameters However there are some nuances to the parameter names and the values for this specific API call compared to the exportReport API call. For example, with this API call we use the additionalDataOptions parameter opposed to the additionalOptions parameter. As with the exportReport and exportResult APIs, there are new parameters to control the number of versions of SPO and OneDrive documents added to the review set are as follows: cloudAttachmentVersion: The versions of cloud attachments to include in messages ( e.g. latest, latest 10, latest 100 or All). This controls how many versions of a file that is collected when a cloud attachment is contained within a email, teams or viva engage messages. If version shared is configured this is also always returned. 
Add to Review Set

ediscoveryReviewSet: addToReviewSet

This API call enables you to commit the items that match the search to a review set within an eDiscovery case. This enables you to review, tag, redact and filter the items that match the search without exporting the data from the M365 service boundary. Historically, this API call was more limited compared to triggering the job via the eDiscovery Premium (Classic) UI. With the APIs now natively working with the modern experience for eDiscovery, organizations can make use of the enhancements made within the modern UX and have greater flexibility in selecting the options that are relevant for their requirements.

There is a lot of overlap with previous sections, specifically the Direct Export - Report section, on what updates are required to benefit from the updated API:

- Controlling the number of versions of SharePoint and OneDrive documents added to the review set via the new cloudAttachmentVersion and documentVersion parameters
- Triggering the advanced indexing of partially indexed items during the add to review set job via new values added to existing parameters

However, there are some nuances to the parameter names and values for this specific API call compared to the exportReport API call. For example, this call uses the additionalDataOptions parameter as opposed to the additionalOptions parameter.

As with the exportReport and exportResult APIs, the new parameters to control the number of versions of SharePoint and OneDrive documents added to the review set are as follows:

- cloudAttachmentVersion: The versions of cloud attachments to include in messages (e.g. latest, latest 10, latest 100 or all). This controls how many versions of a file are collected when a cloud attachment is contained within an email, Teams or Viva Engage message. If a specific shared version is configured, that version is also always returned.
- documentVersion: The versions of files in SharePoint to include (e.g. latest, latest 10, latest 100 or all). This controls how many versions of a file are collected when targeting a SharePoint or OneDrive site directly in the search.

Historically for this API call, you could include the allVersions value in the additionalDataOptions parameter to trigger the collection of all versions of any file stored in SharePoint and OneDrive. The allVersions value can still be included in the additionalDataOptions parameter, but it will only apply to files collected directly from a SharePoint or OneDrive site; it will not influence any cloud attachments included in email, Teams and Viva Engage messages. To collect additional versions of cloud attachments, use the cloudAttachmentVersion parameter to control the number of versions included. Also consider moving from the allVersions value in the additionalDataOptions parameter to the new documentVersion parameter.

To benefit from advanced indexing in the modern experience for eDiscovery, you must trigger advanced indexing as part of the add to review set job. To include partially indexed items and run advanced indexing, as you would by making the corresponding selections in the portal, include the following parameters and values in the request body of the API call:

Parameter | Value | Option from the portal
additionalDataOptions | advancedIndexing | Perform advanced indexing on partially indexed items
itemsToInclude | searchHits, partiallyIndexed | Indexed items that match your search query and partially indexed items
additionalDataOptions | locationsWithoutHits | Exclude partially indexed items in locations without search hits

Historically, the API call didn't support the add to review set job options to convert Teams, Viva Engage and Copilot interactions into HTML transcripts and to collect up to 12 hours of related conversations when a message matches a search. With the APIs now natively working with the modern experience for eDiscovery, this is now possible via the htmlTranscripts and messageConversationExpansion values in the additionalDataOptions parameter.

As an example, let's take the following add to review set job and use the Microsoft Graph PowerShell module to invoke the addToReviewSet API call with the updated request body.

$commitParams = @{
    search = @{
        id = $searchID
    }
    additionalDataOptions = "linkedFiles,advancedIndexing,htmlTranscripts,messageConversationExpansion,locationsWithoutHits"
    cloudAttachmentVersion = "latest"
    documentVersion = "latest"
    itemsToInclude = "searchHits,partiallyIndexed"
}
$params = $commitParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/addToReviewSet"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
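The call above assumes an existing review set referenced by $reviewSetID. As a minimal sketch (the display name is hypothetical), one can be created first via the case's reviewSets collection:

# Create a review set in the case to receive the committed items
$reviewSet = Invoke-MgGraphRequest -Method Post `
    -Uri "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets" `
    -Body (@{ displayName = "Review Set 1" } | ConvertTo-Json)
$reviewSetID = $reviewSet.id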
Export from Review Set

ediscoveryReviewSet: export

This API call enables you to export items from a review set within an eDiscovery case. Historically, the exportStructure parameter was used to control the desired export format, with two possible values: directory and pst. This parameter has been updated to include a new value of msg.

Note: the directory value is deprecated in the new UX but remains available in v1.0 of the API call for backwards compatibility. Going forward you should use msg alongside the new exportOptions values.

The exportStructure parameter now only controls how email items are saved: within PST files or as individual .msg files. With the APIs now natively working with the modern experience for eDiscovery, we need to account for the additional flexibility customers have to control the structure of their export package. As with the exportResult API call for direct export, new values have been added to the exportOptions parameter for this API call. They control the export package structure for both Exchange and SharePoint/OneDrive items:

Property | Option from the portal
splitSource | Organize data from different locations into separate folders or PSTs
includeFolderAndPath | Include folder and path of the source
condensePaths | Condense paths to fit within 259 characters limit
friendlyName | Give each item a friendly name

Organizations are free to mix and match which export options they include in the request body to meet their own requirements. To receive an equivalent output structure to the previous pst value in the exportStructure parameter, include all of the above values in the exportOptions parameter within the request body. An example using the Microsoft Graph PowerShell module can be found below.

$exportName = "ReviewSetExport - PST"
$exportParams = @{
    outputName = $exportName
    description = "Exporting all items from the review set"
    exportOptions = "originalFiles,includeFolderAndPath,splitSource,condensePaths,friendlyName"
    exportStructure = "pst"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/export"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

To receive an equivalent output structure to the previous directory value in the exportStructure parameter, use the msg value instead. Because the condensed directory structure exports all items into a single folder, each named with a uniquely assigned identifier, there is no need to include the new exportOptions values. An example using the Microsoft Graph PowerShell module can be found below.

$exportName = "ReviewSetExport - MSG"
$exportParams = @{
    outputName = $exportName
    description = "Exporting all items from the review set"
    exportOptions = "originalFiles"
    exportStructure = "msg"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/export"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

Continuing to use the directory value in exportStructure will produce the same output as if msg was used.

Wrap Up

Thank you for your time reading through this post. Hopefully you are now equipped with the information needed to make the most of the new modern experience for eDiscovery when making your Graph API calls.