Enhanced eDiscovery Premium APIs
We have been busy working to enable organisations that leverage the Microsoft Purview eDiscovery Graph APIs to benefit from the enhancements in the new modern experience for eDiscovery. I am pleased to share that the APIs have now been updated with additional parameters, enabling organisations to benefit from the following features already present in the modern experience within the Purview Portal:

- Ability to control the export package structure and item naming convention
- Trigger advanced indexing as part of the Statistics, Add to Review and Export jobs
- For the first time, the ability to trigger HTML transcription of Teams, Viva and Copilot interactions when adding to a review set
- Benefit from the new statistics options such as Include Categories and Include Keyword Report
- More granular control of the number of versions collected of modern attachments and of documents collected directly from OneDrive and SharePoint

These changes were communicated as part of the M365 Message Center post MC1115305. This change involved the beta version of the API calls being promoted into the v1.0 endpoint of the Graph API. The following v1.0 API calls were updated as part of this work:

- Search Estimate Statistics – ediscoverySearch: estimateStatistics
- Search Export Report – ediscoverySearch: exportReport
- Search Export Result – ediscoverySearch: exportResult
- Search Add to Review Set – ediscoveryReviewSet: addToReviewSet
- Review Set Export – ediscoveryReviewSet: export

This blog post walks through the updates to each of these APIs and explains how to update your calls to maintain a consistent outcome (and benefit from the new functionality). If you are new to the Microsoft Purview eDiscovery APIs, you can refer to my previous blog post on how to get started with them: Getting started with the eDiscovery APIs | Microsoft Community Hub

Wait, but what about the custodian and noncustodial locations workflow in eDiscovery Classic (Premium)?

As you are probably aware, the modern user experience for eDiscovery introduces some changes to the Data Sources tab and how it is used in the workflow. Typically, organisations leveraging the Microsoft Purview eDiscovery APIs would previously have used the custodian and noncustodial data source APIs to add the relevant data sources to the case:

- ediscoveryCustodian resource type - Microsoft Graph v1.0 | Microsoft Learn
- ediscoveryNoncustodialDataSource resource type - Microsoft Graph v1.0 | Microsoft Learn

Once added via these API calls, the locations would be bound to a search when the search was created. This workflow remains supported in the API for backwards compatibility, including the creation of system-generated case hold policies when applying holds to the locations via these APIs. Organisations can continue to use this approach with the APIs. However, to simplify your code and workflow, consider using the following API call to add additional sources directly to the search:

Add additional sources - Microsoft Graph v1.0 | Microsoft Learn

Some key things to note if you continue to use the custodian and noncustodial data sources APIs in your automation workflow:
- This will not populate the new Data sources tab in the modern experience for eDiscovery
- They can continue to be queried via the API calls
- Advanced indexing triggered via these APIs has no influence on whether advanced indexing is used in jobs triggered from a search
- Make sure you use the new parameters to trigger advanced indexing on the job when running the Statistics, Add to Review Set and Direct Export jobs

Generating Search Statistics

ediscoverySearch: estimateStatistics

In eDiscovery Premium (Classic) and the previous version of the APIs, generating statistics was a mandatory step before you could progress to either adding the search to a review set or triggering a direct export. With the new modern experience for eDiscovery, this step is completely optional. Organizations that previously generated search statistics but never checked or used the results before adding the search to a review set or triggering a direct export job can now skip this step.

If organizations do want to continue to generate statistics, calling the updated API with the same parameters will continue to generate statistics for the search. An example of a previous call would look as follows:

POST /security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/estimateStatistics

Historically this API didn't require a request body. With the APIs now natively working with the modern experience for eDiscovery, the API call supports a request body, enabling you to benefit from the new statistics options. Details on these new options can be found in the links below:

- Create a search for a case in eDiscovery | Microsoft Learn
- Evaluate and refine search results in eDiscovery | Microsoft Learn

If a search is run without a request body it will still generate the following information:

- Total matches and volume
- Number of locations searched and the number of locations with hits
- Number of data sources searched and the number of data sources with hits
- The top five data sources that make up the most search hits matching your query
- Hit count by location type (mailbox versus site)

As the API now works natively with the modern experience for eDiscovery, you can optionally include a request body to pass the statisticsOptions parameter in the POST API call. With the changes to how advanced indexing works within the new UX, and the additional reporting categories available, you can use the statisticsOptions parameter to trigger the generate statistics job with the additional options available in the modern experience. The values you can include are detailed below.

Property – Option from Portal
- includeRefiners – Include categories: Refine your view to include people, sensitive information types, item types, and errors.
- includeQueryStats – Include query keywords report: Assess keyword relevance for different parts of your search query.
- includeUnindexedStats – Include partially indexed items: We'll provide details about items that weren't fully indexed. These partially indexed items might be unsearchable or partially searchable.
- advancedIndexing – Perform advanced indexing on partially indexed items: We'll try to reindex a sample of partially indexed items to determine whether they match your query. After running the query, check the Statistics page to review information about partially indexed items. Note: Can only be used if includeUnindexedStats is also included.
- locationsWithoutHits – Exclude partially indexed items in locations without search hits: Ignore partially indexed items in locations with no matches to the search query. Checking this setting will only return partially indexed items in locations where there is already at least one hit. Note: Can only be used if includeUnindexedStats is also included.

In eDiscovery Premium (Classic), advanced indexing took place when a custodian or non-custodial data location was added to the Data Sources tab. This meant that when you triggered the estimate statistics call on the search, it would include results from both the native Exchange and SharePoint indexes and the advanced index. In the modern experience for eDiscovery, advanced indexing runs as part of the job; however, it must be selected as an option on the job. Note that not all searches will benefit from advanced indexing. One example is a simple date range search on a mailbox or SharePoint site, which will still return hits on partially indexed items (even partially indexed email and SharePoint file items have date metadata in the native indexes).

The following example uses PowerShell and the Microsoft Graph PowerShell module, passing the new statisticsOptions parameter to the POST call with all available options selected.

# Generate estimates for the newly created search
$statParams = @{
    statisticsOptions = "includeRefiners,includeQueryStats,includeUnindexedStats,advancedIndexing,locationsWithoutHits"
}
$params = $statParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/estimateStatistics"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
Write-Host "Estimate statistics generation triggered for search ID: $searchID"

Once run, it will create a generate statistics job with the additional options selected.

Direct Export - Report

ediscoverySearch: exportReport

This API enables you to generate an item report directly from a search, without taking the data into a review set or exporting the items that match the search. With the APIs now natively working with the modern experience for eDiscovery, new parameters have been added to the request body, as well as new values for existing parameters. The new parameters are as follows:

- cloudAttachmentVersion: The versions of cloud attachments to include in messages (e.g. latest, latest 10, latest 100 or All). This controls how many versions of a file are collected when a cloud attachment is contained within an email, Teams or Viva Engage message. If the version shared is available, it is also always returned.
- documentVersion: The versions of files in SharePoint to include (e.g. latest, latest 10, latest 100 or All). This controls how many versions of a file are collected when targeting a SharePoint or OneDrive site directly in the search.

These new parameters reflect the changes made in the modern experience for eDiscovery that provide more granular control for eDiscovery managers to apply different collection options based on where the SharePoint item was collected from (e.g. directly from a SharePoint site versus a cloud attachment link included in an email). Within eDiscovery Premium (Classic), the All Document Versions option applied both to SharePoint and OneDrive files collected directly from SharePoint and to any cloud attachments contained within email, Teams and Viva Engage messages.
Historically for this API, within the additionalOptions parameter you could include the allDocumentVersions value to trigger the collection of all versions of any file stored in SharePoint and OneDrive. With the APIs now natively working with the modern experience for eDiscovery, the allDocumentVersions value can still be included in the additionalOptions parameter, but it will only apply to files collected directly from a SharePoint or OneDrive site. It will not influence any cloud attachments included in email, Teams and Viva Engage messages. To collect additional versions of cloud attachments, use the cloudAttachmentVersion parameter to control the number of versions that are included. Also consider moving away from the allDocumentVersions value in the additionalOptions parameter and switching to the new documentVersion parameter.

As described earlier, to benefit from advanced indexing in the modern experience for eDiscovery you must trigger advanced indexing as part of the direct export job. Within the portal, to include partially indexed items and run advanced indexing you would make the following selections. To achieve this via the API call, include the following parameters and values in the request body:

Parameter – Value – Option from the portal
- additionalOptions – advancedIndexing – Perform advanced indexing on partially indexed items
- exportCriteria – searchHits, partiallyIndexed – Indexed items that match your search query and partially indexed items
- exportLocation – responsiveLocations, nonresponsiveLocations – Exclude partially indexed items in locations without search hits

Finally, the new modern experience for eDiscovery introduces more granular control, enabling organisations to independently choose to convert Teams, Viva Engage and Copilot interactions into HTML transcripts and to collect up to 12 hours of related conversations when a message matches a search. This is reflected in the job settings by the following options:

- Organize conversations into HTML transcripts
- Include Teams and Viva Engage conversations

In the classic experience this was a single option, titled Teams and Yammer Conversations, that did both actions and was controlled by including the teamsAndYammerConversations value in the additionalOptions parameter. With the APIs now natively working with the modern experience for eDiscovery, the teamsAndYammerConversations value can still be included in the additionalOptions parameter, but it will only trigger the collection of up to 12 hours of related conversations when a message matches a search, without converting the items into HTML transcripts. To convert items into HTML transcripts, include the new htmlTranscripts value in the additionalOptions parameter.

As an example, let's look at the following direct export report job from the portal and use the Microsoft Graph PowerShell module to call the exportReport API with the updated request body.
$exportName = "New UX - Direct Export Report" $exportParams = @{ displayName = $exportName description = "Direct export report from the search" additionalOptions = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing" exportCriteria = "searchHits,partiallyIndexed" documentVersion = "recent10" cloudAttachmentVersion = "recent10" exportLocation = "responsiveLocations" } $params = $exportParams | ConvertTo-Json -Depth 10 $uri = https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportReport" $exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params Direct Export - Results ediscoverySearch: exportResult - Microsoft Graph v1.0 | Microsoft Learn This API call enables you to export the items from a search without taking the data into a review set. All the information from the above section on the changes to the exportReport API also applies to this API call. However with this API call we will actually be exporting the items from the search and not just the report. As such we need to pass in the request body information on how we want the export package to look. Previously with direct export for eDiscovery Premium (Classic) you had a three options in the UX and in the API to define the export format. Option Exchange Export Structure SharePoint / OneDrive Export Structure Individual PST files for each mailbox PST created for each mailbox. The structure of each PST is reflective of the folders within the mailbox with emails stored based on their original location in the mailbox. Emails named based on their subject. Folder for each mailbox site. Within each folder, the structure is reflective of the SharePoint/OneDrive site with documents stored based on their original location in the site. Documents are named based on their document name. Individual .msg files for each message Folder created for each mailbox. Within each folder the file structure within is reflective of the folders within the mailbox with emails stored as .msg files based on their original location in the mailbox. Emails named based on their subject. As above. Individual .eml files for each message Folder created for each mailbox. Within each folder the file structure within is reflective of the folder within the mailbox with emails stored as .eml files based on their original location in the mailbox. Emails named based on their subject As above. Historically with this API, the exportFormat parameter was used to control the desired export format. Three values could be used and they were pst, msg and eml. This parameter is still relevant but only controls how email items will be saved, either in a PST file, as individual .msg files or as individual .eml files. Note: The eml export format option is depreciated in the new UX. Going forward you should use either pst or msg. With the APIs now natively working with the modern experience for eDiscovery; we need to account for the additional flexibility customers have to control the structure of their export package. An example of the options available in the direct export job can be seen below. More information on the export package options and what they control can be found in the following link. https://learn.microsoft.com/en-gb/purview/edisc-search-export#export-package-options To support this, new values have been added to the additionalOptions parameter for this API call, these must be included in the request body otherwise the export structure will be as follows. 
exportFormat value – Exchange Export Structure – SharePoint / OneDrive Export Structure
- pst – PST files are created containing data from multiple mailboxes. All emails are contained within a single folder within the PST and named based on an assigned unique identifier (GUID). – One folder for all documents. All documents are contained within a single folder and named based on an assigned unique identifier (GUID).
- msg – A folder is created containing data from all mailboxes. All emails are contained within a single folder, stored as .msg files and named based on an assigned unique identifier (GUID). – As above.

The new values added to the additionalOptions parameter are as follows. They control the export package structure for both Exchange and SharePoint/OneDrive items.

Property – Option from Portal
- includeFolderAndPath – Organize data from different locations into separate folders or PSTs
- splitSource – Include folder and path of the source
- condensePaths – Condense paths to fit within 259 characters limit
- friendlyName – Give each item a friendly name

Organizations are free to mix and match which export options they include in the request body to meet their own organizational requirements. To receive a similar output structure to the previous behaviour of the pst or msg values in the exportFormat parameter, include all of the above values in the additionalOptions parameter.

For example, to generate a direct export where the email items are stored in separate PSTs per mailbox, the structure of the PST files reflects the mailbox, and each item is named after the subject of the email, I would use the Microsoft Graph PowerShell module to call the exportResult API with the updated request body.

$exportName = "New UX - DirectExportJob - PST"
$exportParams = @{
    displayName = $exportName
    description = "Direct export of items from the search"
    additionalOptions = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing,includeFolderAndPath,splitSource,condensePaths,friendlyName"
    exportCriteria = "searchHits,partiallyIndexed"
    documentVersion = "recent10"
    cloudAttachmentVersion = "recent10"
    exportLocation = "responsiveLocations"
    exportFormat = "pst"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportResult"
$exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

If I want to export the email items as individual .msg files instead of storing them in PST files, I would use the Microsoft Graph PowerShell module to call the exportResult API with the updated request body.

$exportName = "New UX - DirectExportJob - MSG"
$exportParams = @{
    displayName = $exportName
    description = "Direct export of items from the search"
    additionalOptions = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing,includeFolderAndPath,splitSource,condensePaths,friendlyName"
    exportCriteria = "searchHits,partiallyIndexed"
    documentVersion = "recent10"
    cloudAttachmentVersion = "recent10"
    exportLocation = "responsiveLocations"
    exportFormat = "msg"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportResult"
$exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

Add to Review Set

ediscoveryReviewSet: addToReviewSet

This API call enables you to commit the items that match the search to a review set within an eDiscovery case.
This enables you to review, tag, redact and filter the items that match the search without exporting the data from the M365 service boundary. Historically, this API call was more limited compared with triggering the job via the eDiscovery Premium (Classic) UI. With the APIs now natively working with the modern experience for eDiscovery, organizations can make use of the enhancements made within the modern UX and have greater flexibility in selecting the options that are relevant to their requirements.

There is a lot of overlap with the previous sections, specifically the "Direct Export - Report" section, on what updates are required to benefit from the updated API. They are as follows:

- Controlling the number of versions of SharePoint and OneDrive documents added to the review set via the new cloudAttachmentVersion and documentVersion parameters
- Enabling organizations to trigger the advanced indexing of partially indexed items during the add to review set job via new values added to existing parameters

However, there are some nuances to the parameter names and values for this specific API call compared to the exportReport API call. For example, with this API call we use the additionalDataOptions parameter as opposed to the additionalOptions parameter.

As with the exportReport and exportResult APIs, the new parameters to control the number of versions of SharePoint and OneDrive documents added to the review set are as follows:

- cloudAttachmentVersion: The versions of cloud attachments to include in messages (e.g. latest, latest 10, latest 100 or All). This controls how many versions of a file are collected when a cloud attachment is contained within an email, Teams or Viva Engage message. If the version shared is available, it is also always returned.
- documentVersion: The versions of files in SharePoint to include (e.g. latest, latest 10, latest 100 or All). This controls how many versions of a file are collected when targeting a SharePoint or OneDrive site directly in the search.

Historically for this API call, within the additionalDataOptions parameter you could include the allVersions value to trigger the collection of all versions of any file stored in SharePoint and OneDrive. With the APIs now natively working with the modern experience for eDiscovery, the allVersions value can still be included in the additionalDataOptions parameter, but it will only apply to files collected directly from a SharePoint or OneDrive site. It will not influence any cloud attachments included in email, Teams and Viva Engage messages. To collect additional versions of cloud attachments, use the cloudAttachmentVersion parameter to control the number of versions that are included. Also consider moving away from the allVersions value in the additionalDataOptions parameter and switching to the new documentVersion parameter.

To benefit from advanced indexing in the modern experience for eDiscovery, you must trigger advanced indexing as part of the add to review set job. Within the portal, to include partially indexed items and run advanced indexing you would make the following selections. To achieve this via the API call, include the following parameters and values in the request body.
Parameter – Value – Option from the portal
- additionalDataOptions – advancedIndexing – Perform advanced indexing on partially indexed items
- itemsToInclude – searchHits, partiallyIndexed – Indexed items that match your search query and partially indexed items
- additionalDataOptions – locationsWithoutHits – Exclude partially indexed items in locations without search hits

Historically, the API call didn't support the add to review set job options to convert Teams, Viva Engage and Copilot interactions into HTML transcripts and to collect up to 12 hours of related conversations when a message matches a search. With the APIs now natively working with the modern experience for eDiscovery, this is now possible through the htmlTranscripts and messageConversationExpansion values added to the additionalDataOptions parameter.

As an example, let's look at the following add to review set job from the portal and use the Microsoft Graph PowerShell module to invoke the addToReviewSet API call with the updated request body.

$commitParams = @{
    search = @{
        id = $searchID
    }
    additionalDataOptions = "linkedFiles,advancedIndexing,htmlTranscripts,messageConversationExpansion,locationsWithoutHits"
    cloudAttachmentVersion = "latest"
    documentVersion = "latest"
    itemsToInclude = "searchHits,partiallyIndexed"
}
$params = $commitParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/addToReviewSet"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

Export from Review Set

ediscoveryReviewSet: export

This API call enables you to export items from a review set within an eDiscovery case. Historically with this API, the exportStructure parameter was used to control the desired export format, with two possible values: directory and pst. This parameter has been updated to include a new value of msg.

Note: The directory value is deprecated in the new UX but remains available in v1.0 of the API call for backwards compatibility. Going forward you should use msg alongside the new exportOptions values.

The exportStructure parameter now only controls how email items are saved: within PST files or as individual .msg files. With the APIs now natively working with the modern experience for eDiscovery, we need to account for the additional flexibility customers have to control the structure of their export package. An example of the options available in the export job can be seen below.

As with the exportResult API call for direct export, new values have been added to the exportOptions parameter for this API call. They control the export package structure for both Exchange and SharePoint/OneDrive items.

Property – Option from Portal
- includeFolderAndPath – Organize data from different locations into separate folders or PSTs
- splitSource – Include folder and path of the source
- condensePaths – Condense paths to fit within 259 characters limit
- friendlyName – Give each item a friendly name

Organizations are free to mix and match which export options they include in the request body to meet their own organizational requirements. To receive an equivalent output structure to the previous behaviour of the pst value in the exportStructure parameter, include all of the above values in the exportOptions parameter within the request body. An example using the Microsoft Graph PowerShell module can be found below.
$exportName = "ReviewSetExport - PST" $exportParams = @{ outputName = $exportName description = "Exporting all items from the review set" exportOptions = "originalFiles,includeFolderAndPath,splitSource,condensePaths,friendlyName" exportStructure = "pst" } $params = $exportParams | ConvertTo-Json -Depth 10 $uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/export" Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params To receive an equivalent output structure when previously using the directory value in the exportStructure parameter I would instead use the msg value within the request body. As the condensed directory structure format export all items into a single folder, all named based on uniquely assigned identifier I do not need to include the new values added to the exportOptions parameter. An example using the Microsoft Graph PowerShell module can be found below. An example using the Microsoft Graph PowerShell module can be found below. $exportName = "ReviewSetExport - MSG" $exportParams = @{ outputName = $exportName description = "Exporting all items from the review set" exportOptions = "originalFiles" exportStructure = "msg" } $params = $exportParams | ConvertTo-Json -Depth 10 $uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/export" Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params Continuing to use the directory value in exportStructure will produce the same output as if msg was used. Wrap Up Thank you for your time reading through this post. Hopefully you are now equipped with the information needed to make the most of the new modern experience for eDiscovery when making your Graph API calls.Upcoming changes to Microsoft Purview eDiscovery
Today, we are announcing three significant updates to the Microsoft Purview eDiscovery products and services. These updates reinforce our commitment to meeting and exceeding the data security, privacy, and compliance requirements of our customers. To improve security and help protect customers and their data, we have accelerated the timeline for the below changes, which will be enforced by default on May 26. The following features will be retired from the Microsoft Purview portal:

- Content Search will transition to the new unified Purview eDiscovery experience.
- The eDiscovery (Standard) classic experience will transition to the new unified Purview eDiscovery experience.
- The eDiscovery export PowerShell cmdlet parameters will be retired.

These updates aim to unify and simplify the eDiscovery user experience in the new Microsoft Purview portal, while preserving the accessibility and integrity of existing eDiscovery cases.

Content Search transition to the new unified Purview eDiscovery experience

The classic eDiscovery Content Search solution will be streamlined into the new unified Purview eDiscovery experience. Effective May 26th, the Content Search solution will no longer be available in the classic Purview portal. Content Search provides administrators with the ability to create compliance searches to investigate data located in Microsoft 365. We hear from customers that the Content Search tool is used to investigate data privacy concerns, perform legal or incident investigations, validate data classifications, and more.

Currently, each compliance search created in the Content Search tool is created outside of the boundaries of a Purview eDiscovery (Standard) case. This means that administrators in Purview role groups containing the Compliance Search role can view all Content Searches in their tenant. While the Content Search solution does not grant any additional search permission access, the ability to view all Content Searches in a customer tenant is not an ideal architecture. By contrast, when using a Purview eDiscovery case, these administrators only have access to the cases to which they are assigned.

Customers can now create their new compliance searches within an eDiscovery case using the new unified Purview eDiscovery experience. All content searches in a tenant created prior to May 26, 2025 are now accessible in the new unified Purview eDiscovery experience within a case titled "Content Search". Although the permissions remain consistent, eDiscovery managers and those with custom permissions will now only be able to view searches from within the eDiscovery cases to which they are assigned, including the "Content Search" case.

eDiscovery Standard transition to the new unified Purview eDiscovery experience

The classic Purview eDiscovery (Standard) solution experience has transitioned into the new unified Purview eDiscovery experience. Effective May 26th, the classic Purview eDiscovery (Standard) solution will no longer be available to customers within the classic Purview portal. All existing eDiscovery cases created in the classic Purview experience are now available within the new unified Purview eDiscovery experience.
Retirement of eDiscovery Export PowerShell Cmdlet parameters

The Export parameter within the ComplianceSearchAction eDiscovery PowerShell cmdlets will be retired on May 26, 2025:

- New-ComplianceSearchAction -Export parameter (and parameters dependent on export, such as Report, RetentionReport, …)
- Get-ComplianceSearchAction -Export parameter
- Set-ComplianceSearchAction -ChangeExportKey parameter

We recognize that the removal of the Export parameter may require adjustments to your current workflow process when using Purview eDiscovery (Standard). The remaining Purview eDiscovery PowerShell cmdlets will continue to be supported after May 26th, 2025:

- Create and update compliance cases: New-ComplianceCase, Set-ComplianceCase
- Create and update case holds: New-CaseHoldPolicy, Set-CaseHoldPolicy, New-CaseHoldRule, Set-CaseHoldRule
- Create, update and start compliance searches: New-ComplianceSearch, Set-ComplianceSearch, Start-ComplianceSearch
- Apply a purge action to a compliance search: New-ComplianceSearchAction -Purge

Additionally, if you have a Microsoft 365 E5 license and use eDiscovery (Premium), your organization can script all eDiscovery operations, including export, using the Microsoft Graph eDiscovery APIs.

Purview eDiscovery Premium

On May 26th, there will be no changes to the classic Purview eDiscovery (Premium) solution in the classic Purview portal. Cases that were created using the Purview eDiscovery (Premium) classic case experience can also now be accessed in the new unified Purview eDiscovery experience.

We recognize that these changes may impact your current processes, and we appreciate your support as we implement these updates. Microsoft runs on trust, and protecting your data is our utmost priority. We believe these improvements will provide a more secure and reliable eDiscovery experience. To learn more about the Microsoft Purview eDiscovery solution and become an eDiscovery Ninja, please check out our eDiscovery Ninja Guide at https://aka.ms/eDiscoNinja!

What's New in Microsoft Purview eDiscovery
As organizations continue to navigate increasingly complex compliance and legal landscapes, Microsoft Purview eDiscovery is adapting to address these challenges. New and upcoming enhancements will improve how legal and compliance teams manage cases, streamline workflows, and reduce operational friction. Here's a look at some newly released features and some others that are coming soon.

Enhanced Reporting in Modern eDiscovery

The modern eDiscovery experience now offers comprehensive reporting capabilities that capture every action taken within a case. This creates an auditable record of actions, enabling users to manage case activities confidently and accurately. These enhancements are critical for defensibility and audit readiness, and they also help users better understand the outcomes of their searches based on the options and settings they selected. Recent improvements include:

- Compliance boundary visibility: The Summary.csv report now includes the compliance boundary settings that were applied to the search query. This helps clarify search results, especially when different users have varying access permissions across data locations.
- Decryption settings: Visibility has been added to the Settings.csv report to indicate whether a specific process has Exchange or SharePoint decryption capabilities enabled. This ensures users can verify whether encrypted content can be successfully decrypted during the process of adding to a review set or exporting.
- Visibility into Premium feature usage: There is a new value in the Settings.csv report that indicates whether Premium features were enabled for the process.
- Improved clarity: The Items.csv report now includes an "Added by" column to show how the item was identified. This column shows whether items were included by direct search (IndexedQuery), partial indexing (UnindexedQuery), or advanced indexing (AdvancedIndex).
- Contextual information: The Summary.csv report helps users understand their exported data by providing contextual explanations. For instance, it explains how the volume of exported data may be greater than the search estimate due to factors such as cloud attachments or multiple versions of SharePoint documents.

Copy Search to Hold: Reuse with Confidence

Another commonly requested feature that is now available is the ability to create a hold from a search. If your workflow starts with searches before creating holds, you can use the new "Create a hold" button to create a new hold policy based on that search. Depending on your workflow, it can be effective to start with a broad, high-level search across the entire data source to quickly surface potentially relevant content and gauge the approximate amount of data that may need review. For example, in a litigation scenario, you may want to search across the entire mailbox and OneDrive of custodians that are of interest. Once the preservation obligation is triggered, this new feature allows you to copy your search and create a hold to preserve the content, streamlining your workflow and enhancing efficiency. Whether you begin with a highly targeted search or something broader, this feature reduces duplication of effort and ensures consistency across processes when you want to turn an existing search into a hold.

Retry Failed Locations: Easily Address Processing Issues

Searches can occasionally encounter issues due to temporarily inaccessible locations.
The new "Retry failed locations" feature introduces a simple, automated way to reprocess those issues without restarting the entire job. When you retry the failed locations, the results are aggregated with the original job to provide a comprehensive overview. This feature is particularly beneficial for administrators, allowing them to efficiently retry the search while ensuring continuity in their workflow.

Duplicate Search: Consistency Made Easy

Save time and reduce rework by duplicating an existing search with just a few mouse clicks. Whether you're building on a previous query or rerunning it with slight adjustments, this new feature lets you preserve all original parameters, such as conditions, data sources, and locations, so you can move faster with confidence and consistency. Administrators and users simply click the "Duplicate search" button on an existing search and then rename it.

Case-Level Data Source Management: The New Data Sources Tab

We recently introduced a new case-level Data Sources view in Purview eDiscovery, allowing data sources to be reused in searches and holds within a case. This enhancement allows case administrators to map and manage all relevant data locations, including mailboxes, SharePoint sites, Teams channels, and more, directly within a case. Once data sources have been added, the process of creating searches is greatly simplified. Adding data sources within the Data Sources tab enables administrators to select from a list of previously added sources when creating new searches, rather than searching for them each time. This is especially useful for frequent searches across the same data sources, which happens often within eDiscovery. With this feature, users can:

- Visualize all data sources tied to a case.
- Use data source locations to populate searches and eDiscovery hold policies.
- Simplify the search creation process for administrators and users.

This new functionality empowers teams to make more informed decisions about what to preserve, search, and review.

Delete Searches and Search Exports: Keep Your Case Well Organized

A newly released feature helps you remove outdated or redundant searches from a case, keeping the workspace clean and focused. With just a few clicks, you can delete searches that are no longer relevant, helping you stay focused on what matters most without extra clutter. Whether you're tidying up after a project or clearing out test runs, deletion helps keep you organized. Another new feature enables users to delete search exports. This functionality is intended to assist with the management of case data and to remove unnecessary or outdated search exports, including sensitive information that is no longer required.

Condition Builder Enhancements for Logical Operators (AND, OR, NEAR)

The Condition Builder now supports the logical operators AND, OR, and NEAR within the same line or grid. These enhancements empower users to construct more targeted search conditions and offer increased control over the use of phrases, enabling additional options for how terms are combined and matched, and providing greater flexibility to keyword queries.

Tenant level control over Premium Features

An upcoming enhancement will introduce a tenant-level setting that allows organizations to set the default behavior for new case creation and whether to use Premium features by default.
This will provide greater flexibility and control, especially for customers in mixed-license environments.

Export Naming and Controlling Size of Exports

A couple more upcoming enhancements are intended to give users more control and additional options over the export process, increasing flexibility for legal and compliance teams. First, export packages will include the user-defined export name directly in the download package. This small but impactful change simplifies tracking and association of exported data with specific cases. This is especially useful when managing multiple exports across different cases. Second, a new configuration setting will allow administrators to define a maximum export package size. This gives teams greater control over how data is partitioned, helping to optimize performance and reduce the risk of download issues or browser timeouts during large exports.

Takeaways

These updates are part of a broader modernization of the Purview eDiscovery experience, which includes a unified user experience, enhanced reporting, and a more streamlined and intuitive workflow for teams managing regulatory inquiries, litigation, or investigations. These enhancements aim to reduce friction, streamline workflows, and accelerate productivity.

To learn more about the new eDiscovery user experience, visit our Microsoft documentation at https://aka.ms/ediscoverydocsnew

For more updates on the future of eDiscovery, please check out our product roadmap at https://aka.ms/ediscoveryroadmap

To become a Purview eDiscovery Ninja, check out our eDiscovery Ninja Guide at https://aka.ms/ediscoveryninja

Secure and govern AI apps and agents with Microsoft Purview
The Microsoft Purview family is here to help you secure and govern data across third-party IaaS and SaaS and multi-platform data environments, while helping you meet the compliance requirements you may be subject to. Purview brings simplicity with a comprehensive set of solutions built on a platform of shared capabilities that helps keep your most important asset, your data, safe.

With the introduction of AI technology, Purview has also expanded its data coverage to include discovering, protecting, and governing the interactions of AI apps and agents, such as Microsoft Copilots (like Microsoft 365 Copilot and Security Copilot), enterprise-built AI apps (like ChatGPT Enterprise), and other consumer AI apps (like DeepSeek) accessed through the browser. To help you view and investigate interactions with all those AI apps, and to create and manage policies to secure and govern them in one centralized place, we have launched Purview Data Security Posture Management (DSPM) for AI. You can learn more about DSPM for AI here, with short video walkthroughs: Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn

Purview capabilities for AI apps and agents

To understand our current set of capabilities within Purview to discover, protect, and govern various AI apps and agents, please refer to our Learn doc here: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn

Here is a quick reference guide for the capabilities available today. Note that DLP for Copilot and sensitivity label adherence are currently designed to protect content in Microsoft 365. As a result, Security Copilot and Copilot in Fabric, along with Copilot Studio custom agents that do not use Microsoft 365 as a content source, do not have these features available. Please see the list of AI sites supported by Microsoft Purview DSPM for AI here.

Conclusion

Microsoft Purview can help you discover, protect, and govern the prompts and responses from AI applications in Microsoft Copilot experiences, enterprise AI apps, and other AI apps through its data security and data compliance solutions, while allowing you to view, investigate, and manage interactions in one centralized place in DSPM for AI.
Follow up reading

Check out the deployment guides for DSPM for AI:
- How to deploy DSPM for AI - https://aka.ms/DSPMforAI/deploy
- How to use DSPM for AI data risk assessment to address oversharing - https://aka.ms/dspmforai/oversharing
- Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing

Explore the Purview SDK:
- Microsoft Purview SDK Public Preview | Microsoft Community Hub (blog)
- Microsoft Purview documentation - purview-sdk | Microsoft Learn
- Build secure and compliant AI applications with Microsoft Purview (video)

References for DSPM for AI:
- Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
- Block Users From Sharing Sensitive Information to Unmanaged AI Apps Via Edge on Managed Devices (preview) | Microsoft Learn, as part of Scenario 7 of Create and deploy a data loss prevention policy | Microsoft Learn
- Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
- Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
- Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
- Downloadable whitepaper: Data Security for AI Adoption | Microsoft

Explore the roadmap for DSPM for AI:
- Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365

Search and Purge using the Security and Compliance PowerShell cmdlets
Welcome back to the series of blogs covering search and purge in Microsoft Purview eDiscovery! If you are new to this series, please first visit the earlier post in the series: Search and Purge workflow in the new modern eDiscovery experience. Also, please ensure you read in full the Microsoft Learn documentation on this topic, as I will not be covering some of the steps in full (permissions, releasing holds, all limitations): Find and delete email messages in eDiscovery | Microsoft Learn

As a reminder, E3/G3 customers must use the Security and Compliance PowerShell cmdlets to execute the purge operation. Searches can continue to be created using the New-ComplianceSearch cmdlet and then run using the Start-ComplianceSearch cmdlet. Once a search has run, the statistics can be reviewed before executing the New-ComplianceSearchAction cmdlet with the Purge switch to remove the items from the targeted locations. However, some organizations may want to initially run the search, review statistics and export an item report in the new user experience before using the New-ComplianceSearchAction cmdlet to purge the items from the mailbox.

First, create the case. If you will be using the new Content Search case, you can skip this step. However, if you want to create a new case to host the search, you must create the case via PowerShell. This ensures any searches created within the case in the Purview portal will support the PowerShell-based purge command. Use the Connect-IPPSSession cmdlet to connect to Security and Compliance PowerShell, then run the following command to create a new case:

New-ComplianceCase "Test Case"

Select the new Purview Content Search case, or the new case you just created, and create a new search. Within your new search, use the Add sources option to search for and select the mailboxes containing the item to be purged, adding them to the Data sources of your newly created search.

Note: Make sure only Exchange mailboxes are selected, as you can only purge items contained within Exchange mailboxes. If you added both the mailbox and the associated sites, you can remove the sites using the three-dot menu next to the data source under User Options. Alternatively, use the Manage sources button to remove the sites associated with the data source.

Within the condition builder, define the conditions required to target the item you wish to purge. In this example, I am targeting an email with a specific subject, from a specific sender, on a specific day. To help me understand the estimated number of items that would be returned by the search, I can run a statistics job first to give me confidence that the query is correct. I do this by selecting Run Query from the search itself, then selecting Statistics and Run Query to trigger the statistics job. Note: you can view the progress of the job via the Process Manager.

Once completed, I can view the statistics to confirm the query looks accurate and is returning the numbers I was expecting. If I want to further verify that the items returned by the search are what I am looking for, I can run a Sample job to review a sample of the items matching the search query. Once the Sample job is completed, I can review samples for locations with hits to determine if these are indeed the items I want to purge. If I need to go further and generate a report of the items that match the search (not just statistics and sampling), I can run an export to generate a report for the items that match the search criteria.
Note: It is important to run the export report to review the results that the purge action will remove from the mailbox. This will ensure that we purge only the items of interest. Download the report for the export job via the Process Manager or the Export tab to review the items that were a match.

Note: If very few locations have hits, it is recommended to reduce the scope of your search by updating the data sources to include only the locations with hits.

Switch back to PowerShell and use the Get-ComplianceSearch cmdlet as below, ensuring the query is as you specified in the Purview portal:

Get-ComplianceSearch -Identity "My search and purge" | fl

As the search hasn't been run yet in PowerShell (the Items count is 0 and the JobEndTime is not set), the search needs to be re-run via PowerShell, as per the example shown below:

Start-ComplianceSearch "My search and purge"

Give it a few minutes to complete and use Get-ComplianceSearch to check the status of the search. If the status is not "Completed" and JobEndTime is not set, you may need to give it more time. Check that the search returned the same results once it has finished running:

Get-ComplianceSearch -Identity "My search and purge" | fl name,status,searchtype,items,searchstatistics

CRITICAL: It is important to make sure the Items count matches the number of items returned in the item report generated from the Purview portal. If the number of items returned in PowerShell does not match, do not continue with the purge action.

Issue the purge command using the New-ComplianceSearchAction cmdlet:

New-ComplianceSearchAction -SearchName "My search and purge" -Purge -PurgeType HardDelete

Once completed, check the status of the purge command to confirm that the items have been deleted:

Get-ComplianceSearchAction "My search and purge_purge" | fl

Now that the purge operation has completed successfully, the item has been removed from the target mailbox and is no longer accessible by the user.
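If you prefer to stay entirely in PowerShell rather than building the search in the portal, the same workflow can be sketched end to end with the cmdlets mentioned at the start of this post. The case name, search name, mailbox and query below are placeholders; as above, always make sure the item counts match the item report before issuing the purge.

# Connect to Security & Compliance PowerShell
Connect-IPPSSession

# Create a case to host the search (skip if reusing an existing case)
New-ComplianceCase "Test Case"

# Create a search in the case; the query is a placeholder - build the same
# conditions you would otherwise validate in the portal
New-ComplianceSearch -Name "My search and purge" -Case "Test Case" `
    -ExchangeLocation "user@contoso.com" `
    -ContentMatchQuery '(subject:"Example subject") AND (sent:2025-01-01)'

# Run the search and review the results before purging
Start-ComplianceSearch "My search and purge"
Get-ComplianceSearch -Identity "My search and purge" | fl name,status,items

# Only once the item count has been validated, issue the purge
New-ComplianceSearchAction -SearchName "My search and purge" -Purge -PurgeType HardDelete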
Welcome back to the series of blogs covering search and purge in Microsoft Purview eDiscovery! If you are new to this series, please first visit the blog post in our series that you can find here: Search and Purge workflow in the new modern eDiscovery experience Also, please ensure you have fully read the Microsoft Learn documentation on this topic as I will not be covering some of the steps in full (permissions, releasing holds, all limitations): Find and delete Microsoft Teams chat messages in eDiscovery | Microsoft Learn So as a reminder, for E5/G5 customers and cases with premium features enabled- you must use the Graph API to execute the purge operation. With the eDiscovery Graph API, you have the option to create the case, create a search, generate statistics, create an item report and issue the purge command all from the Graph API. It is also possible to use the Purview Portal to create the case, create the search, generate statistics/samples and generate the item report. However, the final validation of the items that would be purged by rerunning the statistics operation and issuing the purge command must be run via the Graph API. In this post, we will take a look at two examples, one involving an email message and one involving a Teams message. I will also look to show how to call the graph APIs. Purging email messages via the Graph API In this example, I want to purge the following email incorrectly sent to Debra Berger. I also want to remove it from the sender's mailbox as well. Let’s assume in this example I do not know exactly who sent and received the email, but I do know the subject and date it was sent on. In this example, I am going to use the Modern eDiscovery Purview experience to create a new case where I will undertake some initial searches to locate the item. Once the case is created, I will Create a search and give it a name. In this example, I do not know all the mailboxes where the email is present, so my initial search is going to be a tenant wide search of all Exchange mailboxes, using the subject and date range as conditions to see which locations have hits. Note: For scenarios where you know the location of the items there is no requirement to do a tenant wide search. You can target the search to the know locations instead. I will then select Run Query and trigger a Statistics job to see which locations in the tenant have hits. For our purposes, we do not need to select Include categories, Include query keywords report or Include partially indexed items. This will trigger a Generate statistics job and take you to the Statistics tab of the search. Once the job completes it will display information on the total matches and number of locations with hits. To find out exactly which locations have hits, I can use the improved process reports to review more granular detail on the locations with hits. The report for the Generate statistics job can be found by selecting Process manager and then selecting the job. Once displayed I can download the reports associated with this process by selecting Download report. Once we have downloaded the report for the process, we get a ZIP file containing four different reports, to understand where I had hits I can review the Locations report within the zip file. If I open the locations report and filter on the count column I can see in this instance I have two locations with hits, Admin and DebraB. I will use this to make my original search more targeted. 
It also gives me an opportunity to check that I am not going to exceed the limits on the number of items I can target for the purge per execution. Returning to our original search I will remove All people and groups from my Data Sources and replace it with the two locations I had hits from. I will re-run my Generate Statistics job to ensure I am still getting the expected results. As the numbers align and remain consistent, I will do a further check and generate samples from the search. This will allow me to review the items to confirm that they are the items I wish to purge. From the search query I select Run query and select Sample. This will trigger a Generate sample job and take you to the Sample tab of the search. Once complete, I can review samples of the items returned by the search to confirm if these items are the items I want to purge. Now that I have confirmed, based on the sampling, that I have the items I want to purge I want to generate a detailed item report of all items that are a match for my search. To do this I need to generate an export report for the search. Note: Sampling alone may not return all the results impacted by the search, it only returns a sample of the items that match the query. To determine the full set of items that will be targeted we need to generate the export report. From the Search I can select Export to perform a direct export without having to add the data to a review set (available when premium features are enabled). Ensure to configure the following options on the export: Indexed items that match your search query Unselect all the options under Messages and related items from mailboxes and Exchange Online Export Item report only If you want to manually review the items that would be impacted by the purge operation you can optionally export the items alongside the items report for further review. You can also add the search to a review set to review the items that you are targeting. The benefit of adding to the review set is that it enables to you review the items whilst still keeping the data within the M365 service boundary. Note: If you add to a review set, a copy of the items will remain in the review set until the case is deleted. I can review the progress of the export job and download the report via the Process Manager. Once I have downloaded the report, I can review the Items.csv file to check the items targeted by the search. It is at this stage I must switch to using the Graph APIs to validate the actions that will be taken by the purge command and to issue the purge command itself. Not undertaking these additional validation steps can result in un-intended purge of data. There are two approaches you can use to interact with the Microsoft Graph eDiscovery APIs: Via Graph Explorer Via the MS.Graph PS module For this example, I will show how to use the Graph Explorer to make the relevant Graph API calls. For the Teams example, I will use the MS.Graph PS Module. We are going to use the APIs to complete the following steps: Trigger a statistics job via the API and review the results Trigger the purge command The Graph Explorer can be accessed via the following link: Graph Explorer | Try Microsoft Graph APIs - Microsoft Graph To start using the Graph Explorer to work with Microsoft Graph eDiscovery APIs you first need to sign in with your admin account. You need to ensure that you consent to the required Microsoft Graph eDiscovery API permissions by selecting Consent to permissions. 
From the Permissions flyout, search for eDiscovery and select Consent for eDiscovery.ReadWrite.All. When prompted to consent to the permissions for the Graph Explorer select Accept. Optionally, you can consent on behalf of your organisation to suppress this step for others. Once complete we can start making calls to the APIs via Graph Explorer.

To undertake the next steps we need to capture some additional information, specifically the Case ID and the Search ID. We can get the case ID from the Case settings in the Purview Portal, recording the Id value shown on the Case details pane. If we return to the Graph Explorer we can use this case ID to see all the searches within an eDiscovery case. The structure of the HTTPS call is as follows:

GET https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/<caseID>/searches

List searches - Microsoft Graph v1.0 | Microsoft Learn

If we replace <caseID> with the Id we captured from the case settings we can issue the API call to see all the searches within the case and find the required search ID. When you issue the GET request in Graph Explorer you can review the Response preview to find the search ID we are looking for. Now that we have the case ID and the search ID we can trigger an estimate by using the following Graph API call.

POST https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/estimateStatistics

ediscoverySearch: estimateStatistics - Microsoft Graph v1.0 | Microsoft Learn

Once you issue the POST command you will be returned an Accepted – 202 message. Now I need to use the following REST API call to review the status of the Estimate Statistics job in Graph Explorer.

GET https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/lastEstimateStatisticsOperation

List lastEstimateStatisticsOperation - Microsoft Graph v1.0 | Microsoft Learn

If the estimates job is not complete when you run the GET command, the Response preview contents will show the status as running. If the estimates job is complete, the Response preview contents will show you the results of the estimates job.

CRITICAL: Ensure that the indexedItemCount matches the number of items returned in the item report generated via the Portal. If this does not match, do not proceed to issuing the purge command.

Now that I have validated everything, I am ready to issue the purge command via the Graph API. I will use the following Graph API call.

POST https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/purgeData

ediscoverySearch: purgeData - Microsoft Graph v1.0 | Microsoft Learn

With this POST command we also need to provide a request body to tell the API which areas we want to target (mailboxes or teamsMessages) and the purge type (recoverable, permanentlyDelete). As we are targeting email items I will use mailboxes as the purgeAreas option. As I only want to remove the item from the user's mailbox view, I am going to use recoverable as the purgeType.

{
  "purgeType": "recoverable",
  "purgeAreas": "mailboxes"
}

Once you issue the POST command you will be returned an Accepted – 202 message. Once the command has been issued it will proceed to purge the items that match the search criteria from the locations targeted. If I go back to my original example, we can now see the item has been removed from the user's mailbox.
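The same sequence can also be scripted with Invoke-MgGraphRequest from the Microsoft.Graph PowerShell module instead of Graph Explorer. A minimal sketch is below; it assumes you have already connected with Connect-MgGraph using the eDiscovery.ReadWrite.All scope, and the case and search IDs are placeholders you must replace with your own values.

$caseId   = "<ediscoveryCaseId>"    ## placeholder - copy from Case settings
$searchId = "<ediscoverySearchId>"  ## placeholder - from the List searches call
$base = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseId/searches/$searchId"

## Trigger the estimate statistics job (returns 202 Accepted)
Invoke-MgGraphRequest -Method POST -Uri "$base/estimateStatistics"

## Check the last estimate operation and review the status and indexedItemCount
Invoke-MgGraphRequest -Method GET -Uri "$base/lastEstimateStatisticsOperation"

## Only after validating the counts against the item report, issue the purge
$body = @{ purgeType = "recoverable"; purgeAreas = "mailboxes" } | ConvertTo-Json
Invoke-MgGraphRequest -Method POST -Uri "$base/purgeData" -Body $body -ContentType "application/json"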
As it has been soft deleted, I can review the Recoverable Items folder from Outlook on the Web, where I will see that, for the user, it has now been deleted pending clean-up from their mailbox.

Purging Teams messages via the Graph API

In this example, I want to purge the following Teams conversation between Debra, Adele and the admin (CDX) from all participants' Teams clients. I am going to reuse the "HK016 – Search and Purge" case to create a new search called "Teams conversation removal". I add the three participants of the chat as Data sources to the search, and I am then going to use the KeyQL condition to target the items I want to remove. In this example I am using the following KeyQL.

(Participants=AdeleV@M365x00001337.OnMicrosoft.com AND Participants=DebraB@M365x00001337.OnMicrosoft.com AND Participants=admin@M365x00001337.onmicrosoft.com) AND (Kind=im OR Kind=microsoftteams) AND (Date=2025-06-04)

This is looking for all Teams messages that contain all three participants sent on the 4th of June 2025. It is critical when targeting Teams messages that I ensure my query targets exactly the items that I want to purge. With Teams messages (as opposed to email items) there are fewer options available that enable us to granularly target the items for purging.

Note: The use of the new Identifier condition is not supported for purge options. Use of it can lead to unintended data being removed and it should not be used as a condition in the search at this time.

If I was looking for a very specific phrase, I could further refine the query by using the Keyword condition to look for that specific Teams message. Once I have created my search I am ready to generate both Statistics and Samples to enable me to validate that I am targeting the right items with my search. My statistics job has returned 21 items, 7 from each location targeted. This aligns with the number of items within the Teams conversation. However, I am going to also validate that the samples I have generated match the content I want to purge, ensuring that I haven't inadvertently returned additional items I was not expecting.

Now that I have confirmed, based on the sampling, that the items returned look to be correct, I want to generate a detailed item report of all items that match my search. To do this I need to generate an export report for the search. From the search I can select Export to perform a direct export without having to add the data to a review set (available when premium features are enabled). Ensure you configure the following options on the export:
- Indexed items that match your search query
- Unselect all the options under Messages and related items from mailboxes and Exchange Online
- Export Item report only

Once I select Export it will create a new export job. I can review the progress of the job and download the report via the Process manager. Once I have downloaded the report, I can review the Items.csv file to check the items targeted by the search that would be purged when I issue the purge call. Now that I have confirmed that the search is targeting the items I want to purge, it is at this stage that I must switch to using the Graph APIs. As discussed, there are two approaches you can use to interact with the Microsoft Graph eDiscovery APIs:
- Using Graph Explorer
- Using the MS.Graph PS module

For this example, I will show how to use the MS.Graph PS module to make the relevant Graph API calls.
To understand how to use the Graph Explorer to issue the purge command please refer to the previous example for purging email messages. We are going to use the APIs to complete the following steps:
- Trigger a statistics job via the API and review the results
- Trigger the purge command

To install the MS.Graph PowerShell module please refer to the following article: Install the Microsoft Graph PowerShell SDK | Microsoft Learn. To understand more about the MS.Graph PS module and how to get started you can review the following article: Get started with the Microsoft Graph PowerShell SDK | Microsoft Learn.

Once the PowerShell module is installed you can connect to the eDiscovery Graph APIs by running the following command.

Connect-MgGraph -Scopes "eDiscovery.ReadWrite.All"

You will be prompted to authenticate; once complete you will be presented with the welcome banner. To undertake the next steps we need to capture some additional information, specifically the Case ID and the Search ID. As before, we can get the case ID from the Case settings in the Purview Portal, recording the Id value shown on the Case details pane. Alternatively we can use the following PowerShell command to find a list of cases and their IDs.

Get-MgSecurityCaseEdiscoveryCase | ft displayname,id

List ediscoveryCases - Microsoft Graph v1.0 | Microsoft Learn

Once we have the ID of the case we want to execute the purge command from, we can run the following command to find the IDs of all the search jobs in the case.

Get-MgSecurityCaseEdiscoveryCaseSearch -EdiscoveryCaseId <ediscoveryCaseId> | ft displayname,id,ContentQuery

List searches - Microsoft Graph v1.0 | Microsoft Learn

Now that we have both the Case ID and the Search ID we can trigger the generate statistics job using the following command.

Invoke-MgEstimateSecurityCaseEdiscoveryCaseSearchStatistics -EdiscoveryCaseId <ediscoveryCaseId> -EdiscoverySearchId <ediscoverySearchId>

ediscoverySearch: estimateStatistics - Microsoft Graph v1.0 | Microsoft Learn

Now I need to use the following command to review the status of the Estimate Statistics job.

Get-MgSecurityCaseEdiscoveryCaseSearchLastEstimateStatisticsOperation -EdiscoveryCaseID <ediscoveryCaseId> -EdiscoverySearchId <ediscoverySearchId>

List lastEstimateStatisticsOperation - Microsoft Graph v1.0 | Microsoft Learn

If the estimates job is not complete when you run the command, the status will show as running. If the estimates job is complete, the status will show as succeeded and the number of hits will be shown in the IndexedItemCount (a small polling sketch for this check follows below).

CRITICAL: Ensure that the indexedItemCount matches the number of items returned in the item report generated via the Portal. If this does not match, do not proceed to issuing the purge command.

Now that I have validated everything, I am ready to issue the purge command via the Graph API. With this command we need to provide a request body to tell the API which areas we want to target (mailboxes or teamsMessages) and the purge type (recoverable, permanentlyDelete). As we are targeting Teams items I will use teamsMessages as the purgeAreas option.

Note: If you specify mailboxes then only the compliance copy stored in the user mailbox will be purged and not the item from the Teams service itself. This will mean the item remains visible to the user in Teams and can no longer be purged. When purgeType is set to either recoverable or permanentlyDelete and purgeAreas is set to teamsMessages, the Teams messages are permanently deleted. In other words, either option will result in the permanent deletion of the items from Teams and they cannot be recovered.
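The wait-and-validate step described above can also be scripted. The sketch below is illustrative only: it assumes $ediscoveryCaseId and $ediscoverySearchId are already populated, that $expectedItemCount holds the item count taken from the Items.csv report, and that the operation status and indexedItemCount values are surfaced by the cmdlet as Status and IndexedItemCount.

## Poll the last estimate operation until it is no longer running
do {
    Start-Sleep -Seconds 30
    $op = Get-MgSecurityCaseEdiscoveryCaseSearchLastEstimateStatisticsOperation -EdiscoveryCaseId $ediscoveryCaseId -EdiscoverySearchId $ediscoverySearchId
} while ($op.Status -eq "running")

## Do not issue purgeData unless the counts align with the item report
if ($op.IndexedItemCount -ne $expectedItemCount) {
    Write-Warning "indexedItemCount does not match the item report - do not issue the purge."
}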
For this example I will prepare the request body as follows.

$params = @{
    purgeType  = "recoverable"
    purgeAreas = "teamsMessages"
}

Once I have prepared my request body I will issue the following command.

Clear-MgSecurityCaseEdiscoveryCaseSearchData -EdiscoveryCaseId $ediscoveryCaseId -EdiscoverySearchId $ediscoverySearchId -BodyParameter $params

ediscoverySearch: purgeData - Microsoft Graph v1.0 | Microsoft Learn

Once the command has been issued it will proceed to purge the items that match the search criteria from the locations targeted. If I go back to my original example, we can now see the items have been removed from Teams. Congratulations, you have made it to the end of the blog post. Hopefully you found it useful and it assists you in building your own operational processes for using the Graph API to issue search and purge actions.

Search and Purge workflow in the new modern eDiscovery experience
With the retirement of Content Search (Classic) and eDiscovery Standard (Classic) in May, and alongside the future retirement of eDiscovery Premium (Classic) in August, organizations may be wondering how this will impact their existing search and purge workflow. The good news is that it will not impact your organization's ability to search for and purge email, Teams and M365 Copilot messages; however, there are some additional points to be careful about when running purge via the cmdlets and the Graph API alongside the modern eDiscovery experience.

We have made some recent updates to our documentation regarding this topic to reflect the changes in the new modern eDiscovery experience. These can be found below and you should ensure that you read them in full as they are packed with important information on the process.
- Find and delete email messages in eDiscovery | Microsoft Learn
- Find and delete Microsoft Teams chat messages in eDiscovery | Microsoft Learn
- Search for and delete Copilot data in eDiscovery | Microsoft Learn

The intention of this first blog post in the series is to cover the high-level points, including some best practices, when it comes to running search and purge operations using Microsoft Purview eDiscovery. Please stay tuned for further blog posts intended to provide a more detailed step-by-step of the following search and purge scenarios:
- Search and purge email and Teams messages using the Microsoft Graph eDiscovery APIs
- Search and purge email messages using the Security and Compliance PowerShell cmdlets

I will update this blog post with the links to the follow-on posts in this series. So let's start by looking at the two methods available to issue a purge command with Microsoft Purview eDiscovery: the Microsoft Graph eDiscovery APIs and the Security and Compliance PowerShell cmdlets. The licenses you have dictate which options are available to you and what types of items you can purge from Microsoft 365 workloads.

For E3/G3 customers and cases which have the premium features disabled:
- You can only use the PowerShell cmdlets to issue the purge command
- You should only purge email items from mailboxes and not Teams messages
- You are limited to deleting 10 items per location with a purge command

For E5/G5 customers and cases which have the premium features enabled:
- You can only use the Graph API to issue the purge command
- You can purge email items and Teams messages
- You can delete up to 100 items per location with a purge command

To undertake a search and then purge you must have the correct permissions assigned to your account. There are two key Purview roles that you must be assigned:
- Compliance Search: This role lets users run the Content Search tool in the Microsoft Purview portal to search mailboxes and public folders, SharePoint Online sites, OneDrive for Business sites, Skype for Business conversations, Microsoft 365 groups, Microsoft Teams, and Viva Engage groups. This role allows a user to get an estimate of the search results and create export reports, but other roles are needed to initiate content search actions such as previewing, exporting, or deleting search results.
- Search and Purge: This role lets users perform bulk removal of data matching the criteria of a search.
To learn more about permissions in eDiscovery, along with the different eDiscovery Purview roles, please refer to the following Microsoft Learn article: Assign permissions in eDiscovery | Microsoft Learn. By default, eDiscovery Manager and eDiscovery Administrators have the "Compliance Search" role assigned. For search and purge, only the Organization Management Purview role group has the role assigned by default. However, this is a highly privileged Purview role group and customers should consider using a custom role group to assign the Search and Purge Purview role to authorised administrators. Details on how to create a custom role group in Purview can be found in the following article: Permissions in the Microsoft Purview portal | Microsoft Learn.

It is also important to consider the impact that any retention policies or legal holds will have when attempting to purge email items from a mailbox where you want to hard delete the items and remove them completely from the mailbox. When a retention policy or legal hold is applied to a mailbox, email items that are hard deleted via the purge process are moved to and retained in the Recoverable Items folder of the mailbox. These purged items will be retained until all holds are lifted and the retention period defined in the retention policy has expired. It is important to note that items retained in the Recoverable Items folder are not visible to users but are returned in eDiscovery searches. For some search and purge use cases this is not a concern, for example where the primary goal is simply to remove the item from the user's view. However, if the goal is to completely remove the email item from the mailbox in Exchange Online so that it doesn't appear in the user's view and is not returned by future eDiscovery searches, then additional steps are required. They are:
1. Disable client access to the mailbox
2. Modify retention settings on the mailbox
3. Disable the Exchange Online Managed Folder Assistant for the mailbox
4. Remove all legal holds and retention policies from the mailbox
5. Perform the search and purge operation
6. Revert the mailbox to its previous state

These steps should be carefully followed as any mistake could result in additional data that is being retained being permanently deleted from the service. The full detailed steps can be found in the following article: Delete items in the Recoverable Items folder of mailboxes on hold in eDiscovery | Microsoft Learn. A partial PowerShell illustration of the preparation steps follows below.
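For illustration only (and not a substitute for following the linked article in full), a couple of the preparation steps above can be performed with the Exchange Online PowerShell module. The mailbox address is a placeholder; removing holds, changing retention settings, and reverting every change afterwards are deliberately omitted here.

## Connect to Exchange Online (ExchangeOnlineManagement module)
Connect-ExchangeOnline

$mbx = "debra.berger@contoso.com"   ## placeholder mailbox

## Step 1 - disable client access to the mailbox
Set-CASMailbox -Identity $mbx -OWAEnabled $false -ActiveSyncEnabled $false -MAPIEnabled $false -EwsEnabled $false -ImapEnabled $false -PopEnabled $false

## Step 3 - pause Managed Folder Assistant processing for the mailbox
Set-Mailbox -Identity $mbx -ElcProcessingDisabled $true

## The remaining steps (retention settings, removing holds, the purge itself, and
## reverting the mailbox to its previous state) must follow the linked article.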
Now for some best practices when running search and purge operations:
- Where possible, target the specific locations containing the items you wish to purge and avoid tenant-wide searches
- If a tenant-wide search is used to initially locate the items, once the locations containing the items are known, modify the search to target the specific locations and rerun the steps
- Always validate the item report against the statistics prior to issuing the purge command to ensure you are only purging items you intend to remove
- If the item counts do not align, do not proceed with the purge command
- Ensure admins undertaking search and purge operations are appropriately trained and equipped with up-to-date guidance and processes on how to safely execute the purge process
- The search conditions Identifier, Sensitivity Label and Sensitive Information Type do not support purge operations and, if used, can cause unintended results

Organizations with E5/G5 licenses should also take this opportunity to review whether other Microsoft Purview and Defender offerings can help them achieve the same outcomes. When considering the right approach/tool to meet your desired outcomes you should become familiar with the following additional options for removing email items:

Priority Clean-up (link): Use the Priority cleanup feature under Data Lifecycle Management in Microsoft Purview when you need to expedite the permanent deletion of sensitive content from Exchange mailboxes, overriding any existing retention settings or eDiscovery holds. This process might be implemented for security or privacy in response to an incident, or for compliance with regulatory requirements.

Threat Explorer (link): Threat Explorer in Microsoft Defender for Office 365 is a powerful tool that enables security teams to investigate and remediate malicious emails in near real-time. It allows users to search for and filter email messages based on various criteria - such as sender, recipient, subject, or threat type - and take direct actions like soft delete, hard delete, or moving messages to junk or deleted folders. For manual remediation, Threat Explorer supports actions on emails delivered within the past 30 days.

In my next posts I will be delving further into how to use both the Graph APIs and the Security and Compliance PowerShell module to safely execute your purge commands. For orientation, a brief sketch of the cmdlet-based flow is included below.
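The sketch below shows roughly what that cmdlet-based (E3) flow looks like using Security & Compliance PowerShell. It is an orientation aid only: the search name, location and query are placeholder examples, and the same validation discipline described above applies before any purge action is created.

## Connect to Security & Compliance PowerShell (ExchangeOnlineManagement module)
Connect-IPPSSession

## Create and run a targeted search (location and query are placeholders)
New-ComplianceSearch -Name "Purge - misdirected email" -ExchangeLocation "debra.berger@contoso.com" -ContentMatchQuery 'subject:"Quarterly forecast" AND sent=2025-06-04'
Start-ComplianceSearch -Identity "Purge - misdirected email"

## Review the status and hit count before taking any action
Get-ComplianceSearch -Identity "Purge - misdirected email" | Format-List Status, Items

## Soft delete the matching items (this method removes up to 10 items per location)
New-ComplianceSearchAction -SearchName "Purge - misdirected email" -Purge -PurgeType SoftDelete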
Getting started with the eDiscovery APIs

The Microsoft Purview APIs for eDiscovery in Microsoft Graph enable organizations to automate repetitive tasks and integrate with their existing eDiscovery tools to build repeatable workflows that industry regulations might require. Before you can make any calls to the Microsoft Purview APIs for eDiscovery you must first register an app in Microsoft's identity platform, Entra ID. An app can access data in two ways:
- Delegated access: an app acting on behalf of a signed-in user
- App-only access: an app acting with its own identity

For more information on access scenarios see Authentication and authorization basics. This article will demonstrate how to configure the required pre-requisites to enable access to the Microsoft Purview APIs for eDiscovery. This will be based on using app-only access to the APIs, using either a client secret or a self-signed certificate to authenticate the requests.

The Microsoft Purview APIs for eDiscovery are made up of two separate APIs:
- Microsoft Graph: Part of the Microsoft.Graph.Security namespace and used for working with Microsoft Purview eDiscovery cases.
- MicrosoftPurviewEDiscovery: Used exclusively to programmatically download the export package created by a Microsoft Purview eDiscovery export job.

Currently, the eDiscovery APIs in Microsoft Graph only work with eDiscovery (Premium) cases. For a list of supported API calls within Microsoft Graph, see Use the Microsoft Purview eDiscovery API.

Microsoft Graph API Pre-requisites

Implementing app-only access involves registering an app in the Azure portal, creating client secrets/certificates, assigning API permissions, setting up a service principal, and then using app-only access to call Microsoft Graph APIs. To register an app, create client secrets/certificates and assign API permissions, the account must be at least a Cloud Application Administrator. For more information on registering an app in the Azure portal, see Register an application with the Microsoft identity platform. Granting tenant-wide admin consent for Microsoft Purview eDiscovery API application permissions requires you to sign in as a user that is authorized to consent on behalf of the organization, see Grant tenant-wide admin consent to an application.

Setting up a service principal requires the following pre-requisites:
- A machine with the ExchangeOnlineManagement module installed
- An account that has the Role Management role assigned in Microsoft Purview, see Roles and role groups in Microsoft Defender for Office 365 and Microsoft Purview

Configuration steps

For detailed steps on implementing app-only access for Microsoft Purview eDiscovery, see Set up app-only access for Microsoft Purview eDiscovery.

Connecting to Microsoft Graph API using app-only access

Use the Connect-MgGraph cmdlet in PowerShell to authenticate and connect to Microsoft Graph using the app-only access method. This cmdlet enables your app to interact with Microsoft Graph securely and enables you to explore the Microsoft Purview eDiscovery APIs.

Connecting via client secret

To connect using a client secret, update and run the following example PowerShell code.
$clientSecret = "<client secret>" ## Update with client secret added to the registered app
$appID = "<APP ID>" ## Update with Application ID of registered/Enterprise app
$tenantId = "<Tenant ID>" ## Update with tenant ID
$clientSecretPW = ConvertTo-SecureString "$clientSecret" -AsPlainText -Force
$clientSecretCred = New-Object System.Management.Automation.PSCredential -ArgumentList ("$appID", $clientSecretPW)
Connect-MgGraph -TenantId "$tenantId" -ClientSecretCredential $clientSecretCred

Connecting via certificate

To connect using a certificate, update and run the following example PowerShell code.

$certPath = "Cert:\currentuser\my\<xxxxxxxxxx>" ## Update with the cert thumbprint
$appID = "<APP ID>" ## Update with Application ID of registered/Enterprise app
$tenantId = "<Tenant ID>" ## Update with tenant ID
$ClientCert = Get-ChildItem $certPath
Connect-MgGraph -TenantId $TenantId -ClientId $appId -Certificate $ClientCert

Invoke Microsoft Graph API calls

Once connected you can start making calls to the Microsoft Graph API. For example, let's look at listing the eDiscovery cases within the tenant, see List ediscoveryCases. Within the documentation, each operation lists the following information:
- Permissions required to make the API call
- HTTP request and method
- Request header and body information
- Response
- Examples (HTTP, C#, CLI, Go, Java, PHP, PowerShell, Python)

As we are connected via the Microsoft Graph PowerShell module we can use either the HTTP calls or the eDiscovery-specific cmdlets within the Microsoft Graph PowerShell module. First let's look at the PowerShell cmdlet example. As you can see it returns a list of all the cases within the tenant. When delving deeper into a case it is important to record the case ID as you will use this in future calls. Then we can look at the HTTP example; we will use the Invoke-MgGraphRequest cmdlet to make the call via PowerShell. First we need to store the URL in a variable as below.

$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases"

Then we will use the Invoke-MgGraphRequest cmdlet to make the API call.

Invoke-MgGraphRequest -Method Get -Uri $uri

As you can see from the output, we need to extract the values from the returned response. This can be done by saving the value elements of the response to a new variable using the following command.

$cases = (Invoke-MgGraphRequest -Method Get -Uri $uri).value

This returns a collection of hashtables; optionally you can run a small bit of PowerShell code to convert the hashtables into PS objects for easier use with cmdlets such as Format-Table and Format-List.

$CasesAsObjects = @()
foreach($i in $cases) {$CasesAsObjects += [pscustomobject]$i}
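As a quick usage example (the case name below is just an illustration), the converted objects can then be piped to the standard formatting and filtering cmdlets:

## List the cases with selected properties
$CasesAsObjects | Format-Table displayName, status, id

## Find a specific case by name
$CasesAsObjects | Where-Object { $_.displayName -eq "HK016 - Search and Purge" }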
MicrosoftPurviewEDiscovery API

You can also configure the MicrosoftPurviewEDiscovery API to enable the programmatic download of export packages and the item report from an export job in a Microsoft Purview eDiscovery case.

Pre-requisites

Prior to executing the configuration steps in this section it is assumed that you have completed and validated the configuration detailed in the Microsoft Graph API section. The previously registered app in Entra ID will be extended to include the required permissions to achieve programmatic download of the export package. This already provides the following pre-requisites:
- Registered app in the Azure portal configured with the appropriate client secret/certificate
- Service principal in Microsoft Purview assigned the relevant eDiscovery roles
- Microsoft eDiscovery API permissions configured for the Microsoft Graph

To extend the existing registered app's API permissions to enable programmatic download, the following steps must be completed:
- Registering a new Microsoft application and service principal in the tenant
- Assigning additional API permissions to the previously registered app in the Azure portal

Granting tenant-wide admin consent for Microsoft Purview eDiscovery API application permissions requires you to sign in as a user that is authorized to consent on behalf of the organization, see Grant tenant-wide admin consent to an application.

Configuration steps

Step 1 – Register the MicrosoftPurviewEDiscovery app in Entra ID

First validate that the MicrosoftPurviewEDiscovery app is not already registered by logging into the Azure portal and browsing to Microsoft Entra ID > Enterprise Applications. Change the application type filter to show Microsoft Applications and in the search box enter MicrosoftPurviewEDiscovery. If this returns a result, move to step 2. If the search returns no results, proceed with registering the app in Entra ID. The Microsoft.Graph PowerShell module can be used to register the MicrosoftPurviewEDiscovery app in Entra ID, see Install the Microsoft Graph PowerShell SDK. Once installed on a machine, run the following cmdlet to connect to the Microsoft Graph via PowerShell.

Connect-MgGraph -Scopes "Application.ReadWrite.All"

If this is the first time using the Microsoft.Graph PowerShell cmdlets you may be prompted to consent to the required permissions. To register the MicrosoftPurviewEDiscovery app, run the following PowerShell commands.

$spId = @{"AppId" = "b26e684c-5068-4120-a679-64a5d2c909d9" }
New-MgServicePrincipal -BodyParameter $spId;

Step 2 – Assign additional MicrosoftPurviewEDiscovery permissions to the registered app

Now that the service principal has been added you can update the permissions on the app you previously registered in the Microsoft Graph API section of this document. Log into the Azure portal and browse to Microsoft Entra ID > App Registrations. Find and select the app you created in the Microsoft Graph API section of this document. Select API Permissions from the navigation menu. Select Add a permission and then APIs my organization uses. Search for MicrosoftPurviewEDiscovery and select it. Then select Application Permissions and select the tick box for eDiscovery.Download.Read before selecting Add Permissions. You will be returned to the API permissions screen, where you must select Grant admin consent... to approve the newly added permissions. At this point the page shows that the User.Read Microsoft Graph API permission has been added with admin consent granted, and that the eDiscovery.Download.Read MicrosoftPurviewEDiscovery API application permission has been added but admin consent has not yet been granted. Once admin consent is granted you will see the Status of the newly added permissions update to Granted for...

Downloading the export packages and reports

Retrieving the case ID and export job ID

To successfully download the export packages and reports of an export job in an eDiscovery case, you must first retrieve the case ID and the operation/job ID for the export job.
To gather this information via the Purview Portal you can open the eDiscovery case, locate the export job and select Copy support information before pasting this information into Notepad. The support information includes details such as the case ID, job ID, job state, created by, created timestamp, completed timestamp and the support information generation time.

To access this information programmatically you can make the following Graph API calls to locate the case ID and the job ID you wish to export. First connect to the Microsoft Graph using the steps detailed in the previous section titled "Connecting to Microsoft Graph API using app-only access". Using the eDiscovery Graph PowerShell cmdlets you can use the following command if you know the case name.

Get-MgSecurityCaseEdiscoveryCase | where {$_.displayname -eq "<Name of case>"}

Once you have the case ID you can look up the operations in the case to identify the job ID for the export using the following command.

Get-MgSecurityCaseEdiscoveryCaseOperation -EdiscoveryCaseId "<case ID>"

Export jobs will be logged under an action of either exportResult (direct export) or ContentExport (export from a review set). The names of the export jobs are not returned by this API call; to find the name of an export job you must query the specific operation ID. This can be achieved using the following command.

Get-MgSecurityCaseEdiscoveryCaseOperation -EdiscoveryCaseId "<case ID>" -CaseOperationId "<operation ID>"

The name of the export operation is contained within the property AdditionalProperties. If you wish to make the HTTP API calls directly to list cases in the tenant, see List ediscoveryCases - Microsoft Graph v1.0 | Microsoft Learn. If you wish to make the HTTP API calls directly to list the operations for a case, see List caseOperations - Microsoft Graph v1.0 | Microsoft Learn. You will need to use the case ID in the API call to indicate which case you wish to list the operations from. For example:

https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/<CaseID>/operations/

The names of the export jobs are not returned with this API call; to find the name of an export job you must query the specific job ID. For example:

https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/<CaseID>/operations/<OperationID>

Downloading the Export Package

Retrieving the download URLs for export packages

The URLs required to download the export packages and reports are contained within a property called exportFileMetadata. To retrieve this information we need to know the case ID of the eDiscovery case that the export job was run in, as well as the operation ID for the export job. Using the eDiscovery Graph PowerShell cmdlets you can retrieve this property using the following commands.

$Operation = Get-MgSecurityCaseEdiscoveryCaseOperation -EdiscoveryCaseId "<case ID>" -CaseOperationId "<operation ID>"
$Operation.AdditionalProperties.exportFileMetadata

If you wish to make the HTTP API calls directly to return the exportFileMetadata for an operation, see List caseOperations - Microsoft Graph v1.0 | Microsoft Learn. For each export package visible in the Microsoft Purview Portal there will be an entry in the exportFileMetadata property. Each entry will list the following:
- The export package file name
- The downloadUrl to retrieve the export package
- The size of the export package

Example scripts to download the Export Package

As the MicrosoftPurviewEDiscovery API is separate from the Microsoft Graph API, it requires a separate authentication token to authorise the download request.
As a result, you must use the MSAL.PS PowerShell module and the Get-MsalToken cmdlet to acquire a separate token in addition to connecting to the Microsoft Graph APIs via the Connect-MgGraph cmdlet. The following example scripts can be used as a reference when developing your own scripts to enable the programmatic download of the export packages.

Connecting with a client secret

If you have configured your app to use a client secret, then you can use the following example script for reference to download the export package and reports programmatically. Copy the contents into Notepad and save it as DownloadExportUsingApp.ps1.

[CmdletBinding()]
param (
    [Parameter(Mandatory = $true)] [string]$tenantId,
    [Parameter(Mandatory = $true)] [string]$appId,
    [Parameter(Mandatory = $true)] [string]$appSecret,
    [Parameter(Mandatory = $true)] [string]$caseId,
    [Parameter(Mandatory = $true)] [string]$exportId,
    [Parameter(Mandatory = $true)] [string]$path = "D:\Temp",
    [ValidateSet($null, 'USGov', 'USGovDoD')] [string]$environment = $null
)

if (-not(Get-Module -Name Microsoft.Graph -ListAvailable)) {
    Write-Host "Installing Microsoft.Graph module"
    Install-Module Microsoft.Graph -Scope CurrentUser
}
if (-not(Get-Module -Name MSAL.PS -ListAvailable)) {
    Write-Host "Installing MSAL.PS module"
    Install-Module MSAL.PS -Scope CurrentUser
}

$password = ConvertTo-SecureString $appSecret -AsPlainText -Force
$clientSecretCred = New-Object System.Management.Automation.PSCredential -ArgumentList ($appId, $password)

if (-not(Get-MgContext)) {
    Write-Host "Connect with credentials of a ediscovery admin (token for graph)"
    if (-not($environment)) {
        Connect-MgGraph -TenantId $TenantId -ClientSecretCredential $clientSecretCred
    }
    else {
        Connect-MgGraph -TenantId $TenantId -ClientSecretCredential $clientSecretCred -Environment $environment
    }
}

Write-Host "Connect with credentials of a ediscovery admin (token for export)"
$exportToken = Get-MsalToken -ClientId $appId -Scopes "b26e684c-5068-4120-a679-64a5d2c909d9/.default" -TenantId $tenantId -RedirectUri "http://localhost" -ClientSecret $password

$uri = "/v1.0/security/cases/ediscoveryCases/$($caseId)/operations/$($exportId)"
$export = Invoke-MgGraphRequest -Uri $uri;
if (-not($export)){
    Write-Host "Export not found"
    exit
}
else{
    $export.exportFileMetadata | % {
        Write-Host "Downloading $($_.fileName)"
        Invoke-WebRequest -Uri $_.downloadUrl -OutFile "$($path)\$($_.fileName)" -Headers @{"Authorization" = "Bearer $($exportToken.AccessToken)"; "X-AllowWithAADToken" = "true" }
    }
}

Once saved, open a new PowerShell window which has the following PowerShell modules installed:
- Microsoft.Graph
- MSAL.PS

Browse to the directory where you have saved the script and issue the following command.

.\DownloadExportUsingApp.ps1 -tenantId "<tenant ID>" -appId "<App ID>" -appSecret "<Client Secret>" -caseId "<CaseID>" -exportId "<ExportID>" -path "<Output Path>"

Review the folder which you have specified as the Path to view the downloaded files.
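If you only need the item report, the same exportFileMetadata entries can be filtered before download. This is a small sketch based on the script above; the Reports-* file name pattern is an assumption drawn from how the report package is named and may need adjusting for your exports.

## Download only the report package(s) from the export operation
$export.exportFileMetadata |
    Where-Object { $_.fileName -like "Reports-*" } |
    ForEach-Object {
        Write-Host "Downloading $($_.fileName)"
        Invoke-WebRequest -Uri $_.downloadUrl -OutFile "$($path)\$($_.fileName)" -Headers @{"Authorization" = "Bearer $($exportToken.AccessToken)"; "X-AllowWithAADToken" = "true" }
    }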
Connecting with a certificate

If you have configured your app to use a certificate, then you can use the following example script for reference to download the export package and reports programmatically. Copy the contents into Notepad and save it as DownloadExportUsingAppCert.ps1.

[CmdletBinding()]
param (
    [Parameter(Mandatory = $true)] [string]$tenantId,
    [Parameter(Mandatory = $true)] [string]$appId,
    [Parameter(Mandatory = $true)] [String]$certPath,
    [Parameter(Mandatory = $true)] [string]$caseId,
    [Parameter(Mandatory = $true)] [string]$exportId,
    [Parameter(Mandatory = $true)] [string]$path = "D:\Temp",
    [ValidateSet($null, 'USGov', 'USGovDoD')] [string]$environment = $null
)

if (-not(Get-Module -Name Microsoft.Graph -ListAvailable)) {
    Write-Host "Installing Microsoft.Graph module"
    Install-Module Microsoft.Graph -Scope CurrentUser
}
if (-not(Get-Module -Name MSAL.PS -ListAvailable)) {
    Write-Host "Installing MSAL.PS module"
    Install-Module MSAL.PS -Scope CurrentUser
}

##$password = ConvertTo-SecureString $appSecret -AsPlainText -Force
##$clientSecretCred = New-Object System.Management.Automation.PSCredential -ArgumentList ($appId, $password)
$ClientCert = Get-ChildItem $certPath

if (-not(Get-MgContext)) {
    Write-Host "Connect with credentials of a ediscovery admin (token for graph)"
    if (-not($environment)) {
        Connect-MgGraph -TenantId $TenantId -ClientId $appId -Certificate $ClientCert
    }
    else {
        Connect-MgGraph -TenantId $TenantId -ClientId $appId -Certificate $ClientCert -Environment $environment
    }
}

Write-Host "Connect with credentials of a ediscovery admin (token for export)"
$connectionDetails = @{
    'TenantId'          = $tenantId
    'ClientId'          = $appID
    'ClientCertificate' = $ClientCert
    'Scope'             = "b26e684c-5068-4120-a679-64a5d2c909d9/.default"
}
$exportToken = Get-MsalToken @connectionDetails

$uri = "/v1.0/security/cases/ediscoveryCases/$($caseId)/operations/$($exportId)"
$export = Invoke-MgGraphRequest -Uri $uri;
if (-not($export)){
    Write-Host "Export not found"
    exit
}
else{
    $export.exportFileMetadata | % {
        Write-Host "Downloading $($_.fileName)"
        Invoke-WebRequest -Uri $_.downloadUrl -OutFile "$($path)\$($_.fileName)" -Headers @{"Authorization" = "Bearer $($exportToken.AccessToken)"; "X-AllowWithAADToken" = "true" }
    }
}

Once saved, open a new PowerShell window which has the following PowerShell modules installed:
- Microsoft.Graph
- MSAL.PS

Browse to the directory where you have saved the script and issue the following command.

.\DownloadExportUsingAppCert.ps1 -tenantId "<tenant ID>" -appId "<App ID>" -certPath "<Certificate Path>" -caseId "<CaseID>" -exportId "<ExportID>" -path "<Output Path>"

Review the folder which you have specified as the Path to view the downloaded files.

Conclusion

Congratulations, you have now configured your environment to enable access to the eDiscovery APIs! This is a great opportunity to further explore the available Microsoft Purview eDiscovery REST API calls using the Microsoft.Graph PowerShell module. For a full list of API calls available, see Use the Microsoft Purview eDiscovery API. Stay tuned for future blog posts covering other aspects of the eDiscovery APIs and examples of how they can be used to automate existing eDiscovery workflows.
Getting Started with the New Purview eDiscovery (E3)

"I heard that classic eDiscovery (Standard) will be retired on May 26th. How can I get started in the new Purview eDiscovery?"

Welcome to the new era of Purview eDiscovery! As we transition from the classic eDiscovery (Standard) to the new Purview eDiscovery, you'll find a more intuitive and user-friendly experience designed to streamline your workflow. This enhanced platform offers additional capabilities such as improved data sources for easier identification of search locations, an upgraded condition builder, better support for modern collaboration, and a more efficient export process. There are a few important notes before we get started with the new Purview eDiscovery user experience:
- The new Purview eDiscovery is a unified user experience. No longer will there be separate E3 or E5 products for eDiscovery; both E3 and E5 users will enjoy the same new interface. However, Purview eDiscovery users with E5 licenses or advanced SKU license holders will have access to new Premium features, while E3 Purview eDiscovery users will also benefit from new enhancements.
- Rest assured, you will not need to migrate any of your existing classic cases or content searches. All your current cases and content searches are seamlessly integrated into the new user experience.
- There are also no changes required for your existing permissions or compliance boundaries. The new Purview eDiscovery respects your existing settings, ensuring a smooth transition.
- You will see a new case under Purview eDiscovery called "Content Search." You will find all your existing content searches within this case. You will also be able to access your content search by using the new Purview Content Search shortcut (learn more about getting started with the new Purview Content Search in the following article: https://aka.ms/newcontentsearch).

"Where do I get started in the new Purview eDiscovery?"

You will be able to access the new Purview eDiscovery by going to the Microsoft Purview portal and signing in using the credentials for a user account assigned eDiscovery permissions. Select the eDiscovery solution card in the Purview portal and then select Cases in the left nav. This will take you to the new Purview eDiscovery. From there, you will be able to select Create case.

"Now that I have created my case, what's next?"

Now that you've created your case, let's talk about the new case settings. Click on the Case settings button in the new Purview eDiscovery case view. These are the relevant settings for E3 eDiscovery:

The Case details settings are where you can go to disable or enable the eDiscovery (Premium) features (E5) using the eDiscovery (Premium) toggle. The Case settings page is also where you can update the case name, number, and description, and it provides access to manage permissions, data sources, search & analytics, and review sets for comprehensive case control. You will also be able to close or delete the case using the Actions button under Case details.

Permissions settings in eDiscovery allow you to add or remove users on a case and manage role group membership for a case. This is where you will go to give other eDiscovery managers/users access to your case. You can also add a role group to give all members of that role group access to your case.
The new Data sources section is where you can make changes to the locations you wish to include in tenant-wide searches. NOTE: adding more data sources might cause searches to take longer than normal. The Search & analytics and Review set settings sections are for E5 features.

Now that you have managed your Purview eDiscovery settings, the next step is to either create a search or create a hold policy to manage your eDiscovery holds. First, let's start with the new Purview eDiscovery search experience! Make sure that you are under the Searches tab in your case and click Create a search. Create a search name and search description and select the Create button to create a new search in the new Purview eDiscovery experience. This will take you to the new Purview eDiscovery search experience.

Under the Query tab in your new search, you will see the enhanced Data sources on the left side. The new Purview eDiscovery's enhanced data sources make it a lot easier for you to set the locations that you would like to search. You can use the enhanced data sources to search for M365 content such as email, documents, and instant messaging conversations in your organization. Use search to find content in these cloud-based Microsoft 365 data sources:
- Exchange Online mailboxes
- SharePoint sites
- OneDrive accounts
- Microsoft Teams
- Microsoft 365 Groups
- Viva Engage

In this example, we will be searching Nestor's mailbox and OneDrive site for an email sent in March 2025 that contains the keyword string "Project 9". Click Add sources under Data sources to add your locations (you can also search all your mailboxes or sites by selecting Add tenant-wide sources if needed). Type in the name of the user or their email address to find the user's locations that you want to search and then select them. Next, add a group, such as a Microsoft Team, that you would like to search. Click the Manage button to see the locations associated with this user and Team. The enhanced data source experience will automatically identify a user's mailbox and OneDrive site if they have one enabled. Select Save to continue. Optional: you can exclude either their mailbox or OneDrive site by unchecking them under the Manage sources view.

Now that you have identified the locations you want to search, the next step is to create a query to define what you are searching for within those locations. Under the Keywords condition, make sure that Equal is selected, type in Project 9 and hit enter. This lets you specify that you are looking for any chat, email, or document that contains the phrase "Project 9". Next, click on the + Add conditions button to add the date range condition. Select Date from the list and select Apply. Switch the Date operator from Before to Between and select March 1, 2025 through March 31, 2025 as the date range. Click the Run query button to generate the search estimate. Then click Run Query after selecting any additional options that you may want.

After the search has run, the Statistics tab will help you verify whether the relevant content was found; it includes visual charts highlighting search hit trends and top location types, along with sections for sensitive information types and top users. You can also generate a sample of the results by going to the Sample tab and selecting the Generate sample results button.
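For reference, a query similar to the one built above can also be written directly as a keyword query (KeyQL/KQL). The line below is an approximation; the exact property names and date-range syntax produced by the condition builder may differ, so treat it as illustrative only.

"Project 9" AND (Date>=2025-03-01) AND (Date<=2025-03-31)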
You can export the results of your search, after you have verified that the relevant content has been returned, by selecting the Export button. Give your export a name and description.

In the Export type section, choose one of the following options:
- Export items report only: Only the summary and item report are created. The various options for organizing data, folder and path structure, condensing paths, and other structures are hidden.
- Export items with items report: Items are exported with the item report. Other export format options are available with this option in the Export format section.

In the Export format section, choose one of the following options:
- Create PSTs for messages: This option creates .pst files for messages.
- Create .msg files for messages: This option creates .msg files for messages.

Select one or more of the following output package options:
- Organize data from different locations into separate folders or PSTs: This option organizes data into separate folders for each data location.
- Include folder and path of the source: This option includes the original folder and folder path structure for items.
- Condense paths to fit within 256 characters: This option condenses the folder path for each item to 259 characters or less.
- Give each item a friendly name: This option creates a friendly name for each item.

After you have selected the options for your export, select the Export button to go to the Export tab. Select your export once the status shows as "Complete", select the export packages that you wish to download and hit the Download button. Clicking the Download button will kick off a browser download; the new experience does not use classic Content Search and eDiscovery (Standard)'s .NET eDiscovery Export Tool application. NOTE: You may have to disable popup blocking depending on your browser settings.

The download report relating to the export is named Reports-caseName-EntityName-ProcessName-timestamp.zip, with EntityName being the user-given name of the export. This will include several .csv files, including Items.csv, which provides details of all items exported, such as item ID, location of the item, subject/title of the item, item class/type, and success/error status. The .pst files exported will be included in an export package called PSTs.00x.zip. Files exported (e.g. files stored in OneDrive and SharePoint) will be included in an export package called Items.00x.zip.

"How do I place a hold using the new Purview eDiscovery?"

You can create holds in the new Purview eDiscovery to preserve content in mailboxes and sites. This includes mailboxes and sites that are associated with Microsoft Teams, Microsoft 365 groups, and Viva Engage groups. When you place locations on hold, content is preserved until you remove the hold from the locations or delete/release the hold policy. Like classic eDiscovery (Standard), you will first visit the Hold policies tab. In the Hold policies tab, please click New policy to create a new hold policy for your case. Please give your hold policy a unique policy name and policy description. Next, you will add the locations that you would like to place on hold. Please click Add sources under Data sources to start adding locations to your hold. Note: you must select at least one data source to create the hold policy. Put in the name of the custodian that you would like to place on hold.
Like the search experience, you will automatically identify the user's mailbox and OneDrive site when you search by their name. Next, you can enter a group by putting in the name of the group. In this example, I have added a Team called the "Mark 8 Project"; searching for it returned two Teams and one private shared channel, which can be filtered by scope and type and selected individually. Please select Manage or Save and close to save your results.

If you leave the query blank under the Condition builder section, all the data in the specified locations will be placed on hold. You can also create a query-based hold to put data that matches your query on hold. Note: For the best results when dealing with encrypted or partially indexed items, we recommend limiting conditions to Date, Participants, and Type in query-based holds. Queries aren't effective on other conditions within encrypted or partially indexed items and holds might not be applied to these items. Select Apply hold to enable your hold policy.

After creating a hold, check that the hold is applied successfully by navigating to the Details tab for the hold policy. You can check the statuses of all the locations within your hold policy within the Details tab. This is a great way to verify that your hold was successfully deployed. You can also delete the policy, retry the policy, and turn off the policy by selecting Policy actions. In this example, the Details tab for the hold policy "H001a - Custodian and Teams Hold" summarizes its application across 6 locations and 2 data sources, with a table listing each location along with its hold status, team group, location type, and associated site. You can select a location under the Details tab to learn additional information regarding the held location. You can also select Download Report to get a downloaded report of the hold details.

Other important information for creating holds
- After you create an eDiscovery hold, it might take up to 24 hours for the hold to take effect.
- For long term data retention not related to eDiscovery investigations, we advise that you use retention policies and retention labels. For more information, see Learn about retention policies and retention labels.
- When you select a distribution list to be placed on hold, the distribution list expands into the members of the distribution list. Users can choose to place all members' mailboxes and sites on hold, or a subset/mix of these data sources. Subsequent changes in distribution list membership don't change or update holds or the policy. Users must add the distribution list to the data sources again to ensure the latest membership is reflected and expanded.
- The Recycle Bin in SharePoint sites isn't indexed and is therefore unavailable for searching. As a result, eDiscovery searches can't find any Recycle Bin content to place on hold.
- When you create a query-based hold, all content from the selected locations is initially placed on hold. Later, any content that doesn't match the specified query is cleared from the hold every seven to 14 days. However, a query-based hold doesn't clear content if more than five holds of any type are applied to a content location, or if any item has indexing issues.
The URL for a user's OneDrive account includes their user principal name (UPN) (for example, https://alpinehouse-my.sharepoint.com/personal/sarad_alpinehouse_onmicrosoft_com). In the rare case that a person's UPN is changed, their OneDrive URL will also change to incorporate the new UPN. If a user's OneDrive account is part of an eDiscovery hold, and their UPN is changed, you need to update the hold by adding the user's new OneDrive URL and removing the old one. If the URL for the OneDrive site changes, previously placed holds on the site remain effective and content is preserved. For more information, see How UPN changes affect the OneDrive URL.