Authorization and Identity Governance Inside AI Agents
Designing Authorization-Aware AI Agents: Enforcing Microsoft Entra ID RBAC in Copilot Studio

As AI agents move from experimentation to enterprise execution, authorization becomes the defining line between innovation and risk. AI agents are rapidly evolving from experimental assistants into enterprise operators: retrieving user data, triggering workflows, and invoking protected APIs. While many early implementations rely on prompt-level instructions to control access, regulated enterprise environments require authorization to be enforced by identity systems, not language models. This article presents a production-ready, identity-first architecture for building authorization-aware AI agents using Copilot Studio, Power Automate, Microsoft Entra ID, and Microsoft Graph, ensuring every agent action executes strictly within the requesting user's permissions.

Why Prompt-Level Security Is Not Enough

Large Language Models interpret intent; they do not enforce policy. Even the most carefully written prompts cannot:

- Validate Microsoft Entra ID group or role membership
- Reliably distinguish delegated user identity from application identity
- Enforce deterministic access decisions
- Produce auditable authorization outcomes

Relying on prompts for authorization introduces silent security failures, over-privileged access, and compliance gaps, particularly in Financial Services, Healthcare, and other regulated industries. Authorization is not a reasoning problem. It is an identity enforcement problem.
Common Authorization Anti-Patterns in AI Agents

The following patterns frequently appear in early AI agent implementations and should be avoided in enterprise environments:

- Hard-coded role or group checks embedded in prompts
- Trusting group names passed as plain-text parameters
- Using application permissions for user-initiated actions
- Skipping verification of the user's Entra ID identity
- Lacking an auditable authorization decision point

These approaches may work in demos, but they do not survive security reviews, compliance audits, or real-world misuse scenarios.

Authorization-Aware Agent Architecture

In an authorization-aware design, the agent never decides access. Authorization is enforced externally, by identity-aware workflows that sit outside the language model's reasoning boundary.

High-Level Flow

1. The Copilot Studio agent receives a user request
2. The agent passes the User Principal Name (UPN) and intended action
3. A Power Automate flow validates permissions using Microsoft Entra ID via Microsoft Graph
4. Only authorized requests are allowed to proceed
5. Unauthorized requests fail fast with a deterministic outcome

Authorization-aware Copilot Studio architecture enforces Entra ID RBAC before executing any business action. The agent orchestrates intent. Identity systems enforce access.

Enforcing Entra ID RBAC with Microsoft Graph

Power Automate acts as the authorization enforcement layer:

1. Resolve user identity from the supplied UPN
2. Retrieve group or role memberships using Microsoft Graph
3. Normalize and compare memberships against approved RBAC groups
4. Explicitly deny execution when authorization fails

This keeps authorization logic centralized, deterministic, auditable, and independent of the AI model.

Reference Implementation: Power Automate RBAC Enforcement Flow

The following import-ready Power Automate cloud flow demonstrates a secure RBAC enforcement pattern for Copilot Studio agents. It validates Microsoft Entra ID group membership before allowing any business action.
Scenario

- Trigger: User-initiated agent action
- Identity model: Delegated user identity
- Input: userUPN, requestedAction
- Outcome: Authorized or denied based on Entra ID RBAC

```json
{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
  "contentVersion": "1.0.0.0",
  "triggers": {
    "Copilot_Request": {
      "type": "Request",
      "kind": "Http",
      "inputs": {
        "schema": {
          "type": "object",
          "properties": {
            "userUPN": { "type": "string" },
            "requestedAction": { "type": "string" }
          },
          "required": [ "userUPN" ]
        }
      }
    }
  },
  "actions": {
    "Get_User_Groups": {
      "type": "Http",
      "inputs": {
        "method": "GET",
        "uri": "https://graph.microsoft.com/v1.0/users/@{triggerBody()?['userUPN']}/memberOf?$select=displayName",
        "authentication": { "type": "ManagedServiceIdentity" }
      }
    },
    "Normalize_Group_Names": {
      "type": "Select",
      "inputs": {
        "from": "@body('Get_User_Groups')?['value']",
        "select": "@toLower(item()?['displayName'])"
      },
      "runAfter": { "Get_User_Groups": [ "Succeeded" ] }
    },
    "Check_Authorization": {
      "type": "Condition",
      "expression": "@contains(body('Normalize_Group_Names'), 'ai-authorized-users')",
      "runAfter": { "Normalize_Group_Names": [ "Succeeded" ] },
      "actions": {
        "Authorized_Action": {
          "type": "Compose",
          "inputs": "User authorized via Entra ID RBAC"
        }
      },
      "else": {
        "actions": {
          "Access_Denied": {
            "type": "Terminate",
            "inputs": {
              "status": "Failed",
              "message": "Access denied. User not authorized via Entra ID RBAC."
            }
          }
        }
      }
    }
  }
}
```

Note that the Select action maps each group to a plain, lower-cased display name, so the Condition's contains() check runs against an array of strings rather than an array of objects.

This pattern enforces authorization outside the agent, aligns with Zero Trust principles, and creates a clear audit boundary suitable for enterprise and regulated environments.
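To make the flow's decision logic concrete, here is a minimal sketch of the check performed by the Normalize_Group_Names and Check_Authorization steps, reduced to a pure function. This is an illustration, not part of the flow; it assumes the caller has already retrieved the user's group display names from Microsoft Graph (GET /users/{upn}/memberOf), and the approved group name is the same illustrative "ai-authorized-users" used in the sample flow.

```python
# Sketch of the flow's authorization decision (illustrative, not the flow itself).
# Input: group display names returned by Microsoft Graph for the user.

APPROVED_RBAC_GROUPS = {"ai-authorized-users"}  # assumed approved-group allowlist

def is_authorized(member_of_display_names):
    """Deny by default: True only if the user is in an approved RBAC group."""
    # Mirrors the flow's Normalize_Group_Names step: lower-case every
    # display name so the comparison is deterministic and case-insensitive.
    normalized = {name.lower() for name in member_of_display_names}
    return not APPROVED_RBAC_GROUPS.isdisjoint(normalized)

print(is_authorized(["AI-Authorized-Users", "Finance-Readers"]))  # True
print(is_authorized(["Finance-Readers"]))                         # False
print(is_authorized([]))                                          # False (deny by default)
```

The deny-by-default shape matters: an empty or missing membership list can never authorize, which is exactly the deterministic outcome the external enforcement layer is meant to guarantee.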
Flow Diagram: Agent Integrated with RBAC Authorization Flow and Sample Prompt

Execution: Delegated vs Application Permissions

| Scenario | Recommended Permission Model |
| --- | --- |
| User-initiated agent actions | Delegated permissions |
| Background or system automation | Application permissions |

Using delegated permissions ensures agent execution remains strictly within the requesting user's identity boundary.

Auditing and Compliance Benefits

- Deterministic and explainable authorization decisions
- Centralized enforcement aligned with identity governance
- Clear audit trails for security and compliance reviews
- Readiness for SOC, ISO, PCI, and FSI assessments

Enterprise Security Takeaways

- Authorization belongs in Microsoft Entra ID, not prompts
- AI agents must respect enterprise identity boundaries
- Copilot Studio + Power Automate + Microsoft Graph enable secure-by-design AI agents

By treating AI agents as first-class enterprise actors and enforcing authorization at the identity layer, organizations can scale AI adoption with confidence, trust, and compliance.

Introducing Security Dashboard for AI (Now in Public Preview)
AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions, across their entire AI estate, spanning AI platforms, apps, and agents. 53% of security professionals say their current AI risk management needs improvement, presenting an opportunity to better identify, assess, and manage risk effectively.[1] At the same time, 86% of leaders prefer integrated platforms over fragmented tools, citing better visibility, fewer alerts, and improved efficiency.[2]

To address these needs, we are excited to announce that the Security Dashboard for AI, previously announced at Microsoft Ignite, is available in public preview. This unified dashboard aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview, enabling users to see left-to-right across purpose-built security tools from within a single pane of glass. The dashboard equips CISOs and AI risk leaders with a governance tool to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. Security teams can continue using the tools they trust, while security leaders are empowered to govern and collaborate effectively.

Gain Unified AI Risk Visibility

Consolidating risk signals from across purpose-built tools can simplify AI asset visibility and oversight, increase security teams' efficiency, and reduce the opportunity for human error. The Security Dashboard for AI provides leaders with unified AI risk visibility by aggregating security, identity, and data risk across Defender, Entra, and Purview into a single interactive dashboard experience.
The Overview tab of the dashboard provides users with an AI risk scorecard, giving immediate visibility into where there may be risks for security teams to address. It also assesses an organization's implementation of Microsoft security for AI capabilities and provides recommendations for improving AI security posture.

The dashboard also features an AI inventory with comprehensive views to support AI asset discovery, risk assessments, and remediation actions for broad coverage of AI agents, models, MCP servers, and applications. The dashboard provides coverage for all Microsoft AI solutions supported by Entra, Defender, and Purview, including Microsoft 365 Copilot, Microsoft Copilot Studio agents, and Microsoft Foundry applications and agents, as well as third-party AI models, applications, and agents, such as Google Gemini, OpenAI ChatGPT, and MCP servers. This supports comprehensive visibility and control, regardless of where applications and agents are built.

Prioritize Critical Risk with Security Copilot's AI-Powered Insights

Risk leaders must do more than just recognize existing risks; they also need to determine which ones pose the greatest threat to their business. The dashboard provides a consolidated view of AI-related security risks and leverages Security Copilot's AI-powered insights to help find the most critical risks within an environment. For example, Security Copilot's natural language interaction improves agent discovery and categorization, helping leaders identify unmanaged and shadow AI agents to enhance security posture. Furthermore, Security Copilot allows leaders to investigate AI risks and agent activities through prompt-based exploration, putting them in the driver's seat for additional risk investigation.

Drive Risk Mitigation

By streamlining risk mitigation recommendations and automating task delegation, organizations can significantly improve the efficiency of their AI risk management processes.
This approach can reduce hidden AI risk and accelerate compliance efforts, helping to ensure that risk mitigation is timely and accurate. To address this, the Security Dashboard for AI evaluates how organizations put Microsoft's AI security features into practice and offers tailored suggestions to strengthen AI security posture. It leverages Microsoft's productivity tools for immediate action within the practitioner portal, making it easy for administrators to delegate recommendation tasks to designated users.

With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms, eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, the Security Dashboard for AI is included with eligible Microsoft security products customers already use. If an organization is already using Microsoft security products to secure AI, it is already a Security Dashboard for AI customer.

Getting Started

Existing Microsoft Security customers can start using the Security Dashboard for AI today. It is included when a customer has the Microsoft Security products (Defender, Entra, and Purview), with no additional licensing required. To begin using the Security Dashboard for AI, visit http://ai.security.microsoft.com or access the dashboard from the Defender, Entra, or Purview portals. Learn more about the Security Dashboard for AI on Microsoft Learn.

[1] AuditBoard & Ascend2 Research. The Connected Risk Report: Uniting Teams and Insights to Drive Organizational Resilience. AuditBoard, October 2024.
[2] Microsoft. 2026 Data Security Index: Unifying Data Protection and AI Innovation. Microsoft Security, 2026.

Introducing eDiscovery Graph API Standard and Enhancements to Premium APIs
We have been busy working to enable organisations that leverage the Microsoft Purview eDiscovery Graph APIs to benefit from the enhancements in the new modern experience for eDiscovery. I am pleased to share that the APIs have now been updated with additional parameters, enabling organisations to benefit from the following features already present in the modern experience within the Purview Portal:

- Ability to control the export package structure and item naming convention
- Trigger advanced indexing as part of the Statistics, Add to Review and Export jobs
- For the first time, the ability to trigger HTML transcription of Teams, Viva and Copilot interactions when adding to a review set
- Benefit from the new statistics options such as Include Categories and Include Keyword Report
- More granular control of the number of versions collected of modern attachments and documents collected directly from OneDrive and SharePoint

These changes were communicated as part of the M365 Message Center post MC1115305. This change involved the beta version of the API calls being promoted into the v1.0 endpoint of the Graph API. The following v1.0 API calls were updated as part of this work:

- Search Estimate Statistics – ediscoverySearch: estimateStatistics
- Search Export Report – ediscoverySearch: exportReport
- Search Export Result – ediscoverySearch: exportResult
- Search Add to Review Set – ediscoveryReviewSet: addToReviewSet
- Review Set Export – ediscoveryReviewSet: export

The majority of this blog post walks through the updates to each of these APIs and explains how to update your calls to maintain a consistent outcome (and benefit from the new functionality). If you are new to the Microsoft Purview eDiscovery APIs, you can refer to my previous blog post on how to get started with them.
Getting started with the eDiscovery APIs | Microsoft Community Hub

First up though: availability of the Graph API for E3 customers

We are excited to announce that starting September 9, 2025, Microsoft will launch the eDiscovery Graph API Standard, a new offering designed to empower Microsoft 365 E3 customers with secure, automated data export capabilities. The new eDiscovery Graph API offers scalable, automated exports with secure credential management, and improved performance and reliability for Microsoft 365 E3 customers. The new API enables automation of the search, collect, hold, and export flow from Microsoft Purview eDiscovery. While it doesn't include premium features like Teams/Yammer conversations or advanced indexing (available only with the Premium Graph APIs), it delivers meaningful value for Microsoft 365 E3 customers needing to automate structured legal exports.

Key capabilities:

- Export from Exchange, SharePoint, Teams, Viva Engage and OneDrive for Business
- Case, search, hold and export management
- Integration with partner/vendor workflows
- Support for automation that takes advantage of new features within the modern user experience

Pricing & Access

Microsoft will offer 50 GB of included export volume per tenant per month, with additional usage billed at $10/GB, a price point that balances customer value, sustainability, and market competitiveness. The Graph API Standard will be available in public preview starting September 9. For more details on pay-as-you-go features in eDiscovery and Purview, refer to the following links.

- Billing in eDiscovery | Microsoft Learn
- Enable Microsoft Purview pay-as-you-go features via subscription | Microsoft Learn

Wait, but what about the custodian and noncustodial locations workflow in eDiscovery Classic (Premium)?

As you are probably aware, in the modern user experience for eDiscovery there have been some changes to the Data Sources tab and how it is used in the workflow.
Typically, organisations leveraging the Microsoft Purview eDiscovery APIs would previously have used the custodian and noncustodial data source APIs to add the relevant data sources to the case:

- ediscoveryCustodian resource type - Microsoft Graph v1.0 | Microsoft Learn
- ediscoveryNoncustodialDataSource resource type - Microsoft Graph v1.0 | Microsoft Learn

Once added via the API calls, these locations would be bound to a search when the search was created. This workflow in the API remains supported for backwards compatibility, including the creation of system-generated case hold policies when applying holds to the locations via these APIs. Organisations can continue to use this approach. However, to simplify your code and workflow in the APIs, consider using the following API call to add additional sources directly to the search:

- Add additional sources - Microsoft Graph v1.0 | Microsoft Learn

Some key things to note if you continue to use the custodian and noncustodial data source APIs in your automation workflow:

- This will not populate the new Data Sources tab in the modern experience for eDiscovery
- They can continue to be queried via the API calls
- Advanced indexing triggered via these APIs will have no influence on whether advanced indexing is used in jobs triggered from a search
- Make sure you use the new parameters to trigger advanced indexing on the job when running the Statistics, Add to Review Set and Direct Export jobs

Generating Search Statistics

ediscoverySearch: estimateStatistics

In eDiscovery Premium (Classic) and the previous version of the APIs, generating statistics was a mandatory step before you could progress to either adding the search to a review set or triggering a direct export. With the new modern experience for eDiscovery, this step is completely optional.
Organisations that previously generated search statistics but never checked or used the results before adding the search to a review set or triggering a direct export can now skip this step. If organisations do want to continue to generate statistics, then calling the updated API with the same parameters will continue to generate statistics for the search. An example of a previous call would look as follows:

POST /security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/estimateStatistics

Historically this API didn't require a request body. With the APIs now natively working with the modern experience for eDiscovery, the API call now supports a request body, enabling you to benefit from the new statistics options. Details on these new options can be found in the links below.

- Create a search for a case in eDiscovery | Microsoft Learn
- Evaluate and refine search results in eDiscovery | Microsoft Learn

If the API is called without a request body, it will still generate the following information:

- Total matches and volume
- Number of locations searched and the number of locations with hits
- Number of data sources searched and the number of data sources with hits
- The top five data sources that make up the most search hits matching your query
- Hit count by location type (mailbox versus site)

As the API now works natively with the modern experience for eDiscovery, you can optionally include a request body to pass the statisticsOptions parameter in the POST API call. With the changes to how advanced indexing works in the new UX, and the additional reporting categories available, you can use the statisticsOptions parameter to trigger the generate statistics job with the additional options available in the modern experience. The values you can include are detailed in the table below.
| Property | Option from portal |
| --- | --- |
| includeRefiners | Include categories: Refine your view to include people, sensitive information types, item types, and errors. |
| includeQueryStats | Include query keywords report: Assess keyword relevance for different parts of your search query. |
| includeUnindexedStats | Include partially indexed items: We'll provide details about items that weren't fully indexed. These partially indexed items might be unsearchable or partially searchable. |
| advancedIndexing | Perform advanced indexing on partially indexed items: We'll try to reindex a sample of partially indexed items to determine whether they match your query. After running the query, check the Statistics page to review information about partially indexed items. Note: Can only be used if includeUnindexedStats is also included. |
| locationsWithoutHits | Exclude partially indexed items in locations without search hits: Ignore partially indexed items in locations with no matches to the search query. Checking this setting will only return partially indexed items in locations where there is already at least one hit. Note: Can only be used if includeUnindexedStats is also included. |

In eDiscovery Premium (Classic), advanced indexing took place when a custodian or non-custodial data location was added to the Data Sources tab. This meant that when you triggered the estimate statistics call on the search, it would include results from both the native Exchange and SharePoint indexes and the advanced index. In the modern experience for eDiscovery, advanced indexing runs as part of the job; however, it must be selected as an option on the job. Note that not all searches will benefit from advanced indexing. One example would be a simple date range search on a mailbox or SPO site, as this will still hit the partially indexed items (even partially indexed email and SPO file items have date metadata in the native indexes).
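The two dependency notes in the table (advancedIndexing and locationsWithoutHits both require includeUnindexedStats) can be captured as a small pre-flight check before calling the API. This helper is purely illustrative, not part of any SDK; it only validates the comma-separated statisticsOptions string you would place in the request body.

```python
# Illustrative validator for the statisticsOptions value passed to
# estimateStatistics. Encodes the dependency rules from the table above.

DEPENDS_ON_UNINDEXED = {"advancedIndexing", "locationsWithoutHits"}

def validate_statistics_options(options_csv):
    """Raise ValueError if an option requiring includeUnindexedStats is
    requested without it; otherwise return the parsed option set."""
    options = {o.strip() for o in options_csv.split(",") if o.strip()}
    dependents = DEPENDS_ON_UNINDEXED & options
    if dependents and "includeUnindexedStats" not in options:
        raise ValueError(
            f"{sorted(dependents)} can only be used if "
            "includeUnindexedStats is also included")
    return options

# Valid: advancedIndexing accompanied by includeUnindexedStats.
print(validate_statistics_options("includeUnindexedStats,advancedIndexing"))
```

Calling `validate_statistics_options("advancedIndexing")` on its own raises a ValueError, mirroring the portal's constraint that the advanced indexing checkbox is only available once partially indexed items are included.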
The following example uses PowerShell and the Microsoft Graph PowerShell module, passing the new statisticsOptions parameter to the POST call with all available options selected.

```powershell
# Generate estimates for the newly created search
$statParams = @{
    statisticsOptions = "includeRefiners,includeQueryStats,includeUnindexedStats,advancedIndexing,locationsWithoutHits"
}
$params = $statParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/estimateStatistics"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
Write-Host "Estimate statistics generation triggered for search ID: $searchID"
```

Once run, it will create a generate statistics job with the additional options selected.

Direct Export - Report

ediscoverySearch: exportReport

This API enables you to generate an item report directly from a search, without taking the data into a review set or exporting the items that match the search. With the APIs now natively working with the modern experience for eDiscovery, new parameters have been added to the request body, as well as new values for existing parameters. The new parameters are as follows:

- cloudAttachmentVersion: The versions of cloud attachments to include in messages (e.g. latest, latest 10, latest 100 or all). This controls how many versions of a file are collected when a cloud attachment is contained within an email, Teams or Viva Engage message. The version that was shared is also always returned.
- documentVersion: The versions of files in SharePoint to include (e.g. latest, latest 10, latest 100 or all). This controls how many versions of a file are collected when targeting a SharePoint or OneDrive site directly in the search.
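To see how these version parameters relate to the classic allDocumentVersions flag, here is a hedged sketch of migrating an old-style request body to the new parameters. The helper and its value names are illustrative assumptions: "recent10" appears in the examples in this post, while "all" and "latest" are inferred from the option descriptions, so verify them against the Graph reference before use.

```python
# Illustrative migration helper (not part of any SDK): replace the legacy
# allDocumentVersions flag with the explicit documentVersion /
# cloudAttachmentVersion parameters described above.

def migrate_export_body(body):
    """Return a copy of an exportReport/exportResult request body with
    allDocumentVersions translated to the new version parameters."""
    options = [o.strip() for o in body.get("additionalOptions", "").split(",") if o.strip()]
    migrated = dict(body)
    if "allDocumentVersions" in options:
        options.remove("allDocumentVersions")
        # Files collected directly from SharePoint/OneDrive: keep all versions.
        migrated["documentVersion"] = "all"  # assumed value name
        # Cloud attachments are now controlled independently; default to the
        # shared/latest version unless the caller already chose one.
        migrated.setdefault("cloudAttachmentVersion", "latest")  # assumed value name
    migrated["additionalOptions"] = ",".join(options)
    return migrated

print(migrate_export_body({"additionalOptions": "allDocumentVersions,cloudAttachments"}))
```

The point of the split: after migration, version depth is declared per collection source rather than by one global flag, which is exactly the granularity the modern experience introduces.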
These new parameters reflect the changes made in the modern experience for eDiscovery, which provides more granular control for eDiscovery managers to apply different collection options based on where the SPO item was collected from (e.g. directly from a SPO site vs a cloud attachment link included in an email). Within eDiscovery Premium (Classic), the All Document Versions option applied both to SharePoint and OneDrive files collected directly from SharePoint and to any cloud attachments contained within email, Teams and Viva Engage messages.

Historically for this API, within the additionalOptions parameter you could include the allDocumentVersions value to trigger the collection of all versions of any file stored in SharePoint and OneDrive. With the APIs now natively working with the modern experience for eDiscovery, the allDocumentVersions value can still be included in the additionalOptions parameter, but it will only apply to files collected directly from a SharePoint or OneDrive site. It will not influence any cloud attachments included in email, Teams and Viva Engage messages. To collect additional versions of cloud attachments, use the cloudAttachmentVersion parameter to control the number of versions that are included. Also consider moving away from the allDocumentVersions value in the additionalOptions parameter and switching to the new documentVersion parameter.

As described earlier, to benefit from advanced indexing in the modern experience for eDiscovery, you must trigger advanced indexing as part of the direct export job. Within the portal, to include partially indexed items and run advanced indexing you would make the following selections. To achieve this via the API call, we need to include the following parameters and values in the request body.
| Parameter | Value | Option from the portal |
| --- | --- | --- |
| additionalOptions | advancedIndexing | Perform advanced indexing on partially indexed items |
| exportCriteria | searchHits, partiallyIndexed | Indexed items that match your search query and partially indexed items |
| exportLocation | responsiveLocations, nonresponsiveLocations | Exclude partially indexed items in locations without search hits |

Finally, in the new modern experience for eDiscovery, more granular control has been introduced to enable organisations to independently choose to convert Teams, Viva Engage and Copilot interactions into HTML transcripts, and to collect up to 12 hours of related conversations when a message matches a search. This is reflected in the job settings by the following options:

- Organize conversations into HTML transcripts
- Include Teams and Viva Engage conversations

In the classic experience this was a single option titled Teams and Yammer Conversations that did both actions, controlled by including the teamsAndYammerConversations value in the additionalOptions parameter. With the APIs now natively working with the modern experience for eDiscovery, the teamsAndYammerConversations value can still be included in the additionalOptions parameter, but it will only trigger the collection of up to 12 hours of related conversations when a message matches a search, without converting the items into HTML transcripts. To convert them, we need to include the new htmlTranscripts value in the additionalOptions parameter.

As an example, let's take the following direct export report job from the portal and use the Microsoft Graph PowerShell module to call the exportReport API with the updated request body.
```powershell
$exportName = "New UX - Direct Export Report"
$exportParams = @{
    displayName            = $exportName
    description            = "Direct export report from the search"
    additionalOptions      = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing"
    exportCriteria         = "searchHits,partiallyIndexed"
    documentVersion        = "recent10"
    cloudAttachmentVersion = "recent10"
    exportLocation         = "responsiveLocations"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportReport"
$exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
```

Direct Export - Results

ediscoverySearch: exportResult - Microsoft Graph v1.0 | Microsoft Learn

This API call enables you to export the items from a search without taking the data into a review set. All the information from the section above on the changes to the exportReport API also applies to this API call. However, with this call we will actually be exporting the items from the search, not just the report. As such, we need to pass in the request body information on how we want the export package to look.

Previously, with direct export for eDiscovery Premium (Classic), you had three options in the UX and in the API to define the export format:

| Option | Exchange export structure | SharePoint / OneDrive export structure |
| --- | --- | --- |
| Individual PST files for each mailbox | PST created for each mailbox. The structure of each PST reflects the folders within the mailbox, with emails stored based on their original location in the mailbox. Emails are named based on their subject. | Folder created for each site. Within each folder, the structure reflects the SharePoint/OneDrive site, with documents stored based on their original location in the site. Documents are named based on their document name. |
| Individual .msg files for each message | Folder created for each mailbox. Within each folder, the file structure reflects the folders within the mailbox, with emails stored as .msg files based on their original location in the mailbox. Emails are named based on their subject. | As above. |
| Individual .eml files for each message | Folder created for each mailbox. Within each folder, the file structure reflects the folders within the mailbox, with emails stored as .eml files based on their original location in the mailbox. Emails are named based on their subject. | As above. |

Historically with this API, the exportFormat parameter was used to control the desired export format, with three possible values: pst, msg and eml. This parameter is still relevant, but now only controls how email items are saved: in a PST file, as individual .msg files, or as individual .eml files. Note: the eml export format option is deprecated in the new UX. Going forward you should use either pst or msg.

With the APIs now natively working with the modern experience for eDiscovery, we need to account for the additional flexibility customers have to control the structure of their export package. An example of the options available in the direct export job can be seen below. More information on the export package options and what they control can be found in the following link.

https://learn.microsoft.com/en-gb/purview/edisc-search-export#export-package-options

To support this, new values have been added to the additionalOptions parameter for this API call. These must be included in the request body, otherwise the export structure will be as follows:

| exportFormat value | Exchange export structure | SharePoint / OneDrive export structure |
| --- | --- | --- |
| pst | PST files created containing data from multiple mailboxes. All emails contained within a single folder within the PST. Emails are named based on an assigned unique identifier (GUID). | One folder for all documents. All documents contained within a single folder. Documents are named based on an assigned unique identifier (GUID). |
| msg | Folder created containing data from all mailboxes. All emails contained within a single folder, stored as .msg files. Emails are named based on an assigned unique identifier (GUID). | As above. |

The new values added to the additionalOptions parameter are as follows. They control the export package structure for both Exchange and SharePoint/OneDrive items.

| Property | Option from portal |
| --- | --- |
| splitSource | Organize data from different locations into separate folders or PSTs |
| includeFolderAndPath | Include folder and path of the source |
| condensePaths | Condense paths to fit within 259 characters limit |
| friendlyName | Give each item a friendly name |

Organisations are free to mix and match which export options they include in the request body to meet their own requirements. To receive an output structure similar to previously using the pst or msg values in the exportFormat parameter, include all of the above values in the additionalOptions parameter.

For example, to generate a direct export where the email items are stored in separate PSTs per mailbox, the structure of the PST files reflects the mailbox, and each item is named per the subject of the email, I would use the Microsoft Graph PowerShell module to call the exportResult API with the updated request body.
```powershell
$exportName = "New UX - DirectExportJob - PST"
$exportParams = @{
    displayName            = $exportName
    description            = "Direct export of items from the search"
    additionalOptions      = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing,includeFolderAndPath,splitSource,condensePaths,friendlyName"
    exportCriteria         = "searchHits,partiallyIndexed"
    documentVersion        = "recent10"
    cloudAttachmentVersion = "recent10"
    exportLocation         = "responsiveLocations"
    exportFormat           = "pst"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportResult"
$exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
```

If I want to export the email items as individual .msg files instead of storing them in PST files, I would call the exportResult API with the updated request body:

```powershell
$exportName = "New UX - DirectExportJob - MSG"
$exportParams = @{
    displayName            = $exportName
    description            = "Direct export of items from the search"
    additionalOptions      = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing,includeFolderAndPath,splitSource,condensePaths,friendlyName"
    exportCriteria         = "searchHits,partiallyIndexed"
    documentVersion        = "recent10"
    cloudAttachmentVersion = "recent10"
    exportLocation         = "responsiveLocations"
    exportFormat           = "msg"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportResult"
$exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
```

Add to Review Set

ediscoveryReviewSet: addToReviewSet

This API call enables you to commit the items that match the search to a review set within an eDiscovery case. This enables you to review, tag, redact and filter the items that match the search without exporting the data from the M365 service boundary.
Historically, this API call was more limited compared to triggering the job via the eDiscovery Premium (Classic) UI. With the APIs now natively working with the modern experience for eDiscovery, organizations can make use of the enhancements made within the modern UX and have greater flexibility in selecting the options that are relevant for their requirements. There is a lot of overlap with previous sections, specifically the "Direct Export – Report" section, on what updates are required to benefit from the updated API. They are as follows:

- Controlling the number of versions of SharePoint and OneDrive documents added to the review set via the new cloudAttachmentVersion and documentVersion parameters
- Enabling organizations to trigger the advanced indexing of partially indexed items during the add to review set job via new values added to existing parameters

However, there are some nuances to the parameter names and values for this specific API call compared to the exportReport API call. For example, this API call uses the additionalDataOptions parameter as opposed to the additionalOptions parameter. As with the exportReport and exportResult APIs, the new parameters that control the number of versions of SharePoint and OneDrive documents added to the review set are as follows:

- cloudAttachmentVersion: The versions of cloud attachments to include in messages (e.g. latest, latest 10, latest 100, or all). This controls how many versions of a file are collected when a cloud attachment is contained within an email, Teams, or Viva Engage message. The version that was shared is always returned as well.
- documentVersion: The versions of files in SharePoint to include (e.g. latest, latest 10, latest 100, or all). This controls how many versions of a file are collected when targeting a SharePoint or OneDrive site directly in the search.
Historically for this API call, within the additionalDataOptions parameter you could include the allVersions value to trigger the collection of all versions of any file stored in SharePoint and OneDrive. With the APIs now natively working with the modern experience for eDiscovery, the allVersions value can still be included in the additionalDataOptions parameter, but it will only apply to files collected directly from a SharePoint or OneDrive site. It will not influence any cloud attachments included in email, Teams, and Viva Engage messages. To collect additional versions of cloud attachments, use the cloudAttachmentVersion parameter to control the number of versions that are included. Also consider moving from the allDocumentVersions value in the additionalDataOptions parameter to the new documentVersion parameter.

To benefit from advanced indexing in the modern experience for eDiscovery, you must trigger advanced indexing as part of the add to review set job. To achieve via the API call what the corresponding portal selections do, include the following parameters and values in the request body:

- additionalDataOptions = advancedIndexing: Perform advanced indexing on partially indexed items
- itemsToInclude = searchHits,partiallyIndexed: Indexed items that match your search query and partially indexed items
- additionalDataOptions = locationsWithoutHits: Exclude partially indexed items in locations without search hits

Historically the API call didn't support the add to review set job options to convert Teams, Viva Engage, and Copilot interactions into HTML transcripts and collect up to 12 hours of related conversations when a message matches a search.
With the APIs now natively working with the modern experience for eDiscovery, this is now possible through support for the htmlTranscripts and messageConversationExpansion values in the additionalDataOptions parameter. As an example, let's take the following add to review set job from the portal and use the Microsoft Graph PowerShell module to invoke the addToReviewSet API call with the updated request body.

$commitParams = @{
    search = @{
        id = $searchID
    }
    additionalDataOptions = "linkedFiles,advancedIndexing,htmlTranscripts,messageConversationExpansion,locationsWithoutHits"
    cloudAttachmentVersion = "latest"
    documentVersion = "latest"
    itemsToInclude = "searchHits,partiallyIndexed"
}
$params = $commitParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/addToReviewSet"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

Export from Review Set

ediscoveryReviewSet: export

This API call enables you to export items from a review set within an eDiscovery case. Historically with this API, the exportStructure parameter was used to control the desired export format, and two values could be used: directory and pst. This parameter has been updated to include a new value of msg.

Note: The directory value is deprecated in the new UX but remains available in v1.0 of the API call for backwards compatibility. Going forward you should use msg alongside the new exportOptions values.

The exportStructure parameter only controls how email items are saved: either within PST files or as individual .msg files. With the APIs now natively working with the modern experience for eDiscovery, we need to account for the additional flexibility customers have to control the structure of their export package. An example of the options available in the export job can be seen below.
As with the exportResult API call for direct export, new values have been added to the exportOptions parameter for this API call. They control the export package structure for both Exchange and SharePoint/OneDrive items:

- splitSource: Organize data from different locations into separate folders or PSTs
- includeFolderAndPath: Include folder and path of the source
- condensePaths: Condense paths to fit within the 259 character limit
- friendlyName: Give each item a friendly name

Organizations are free to mix and match which export options they include in the request body to meet their own organizational requirements. To receive an equivalent output structure to previously using the pst value in the exportStructure parameter, include all of the above values in the exportOptions parameter within the request body. An example using the Microsoft Graph PowerShell module can be found below.

$exportName = "ReviewSetExport - PST"
$exportParams = @{
    outputName = $exportName
    description = "Exporting all items from the review set"
    exportOptions = "originalFiles,includeFolderAndPath,splitSource,condensePaths,friendlyName"
    exportStructure = "pst"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/export"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

To receive an equivalent output structure to previously using the directory value in the exportStructure parameter, I would instead use the msg value within the request body. As the condensed directory structure format exports all items into a single folder, all named based on a uniquely assigned identifier, I do not need to include the new values added to the exportOptions parameter.
An example using the Microsoft Graph PowerShell module can be found below.

$exportName = "ReviewSetExport - MSG"
$exportParams = @{
    outputName = $exportName
    description = "Exporting all items from the review set"
    exportOptions = "originalFiles"
    exportStructure = "msg"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/export"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

Continuing to use the directory value in exportStructure will produce the same output as if msg were used.

Wrap Up

Thank you for your time reading through this post. Hopefully you are now equipped with the information needed to make the most of the new modern experience for eDiscovery when making your Graph API calls.

Interested in Being a Guest on the Microsoft Security Podcast at RSA?
Going to RSA? We're recording short, 15-minute Microsoft Security podcast conversations live at the conference, focused on real-world practitioner experience. No pitches, no slides, no marketing. Just an honest conversation about what you're seeing, what's changed, and what you'd tell a peer. If you're doing the work and want to share your perspective, we'd love to hear from you. Take the survey to let us know you are interested here!

Microsoft Purview Data Security Investigations is now generally available
Every data security investigation starts with the same question: What data security risks are buried in this data? Exposed credentials in thousands of files across a data estate. Evidence of fraud hidden in vendor communications. Sensitive documents accidentally shared to a large group. Finding these risks manually — reviewing content file by file, message by message — is no longer viable when organizations are managing 220 zettabytes of data[1] and facing over 12,000 confirmed breaches annually[2]. That's why we built Microsoft Purview Data Security Investigations, now generally available.

Microsoft Purview Data Security Investigations enables data security teams to identify investigation-relevant data, investigate that data with AI-powered deep content analysis, and mitigate risk — all within one unified solution. Teams can quickly analyze data at scale to surface sensitive data and security risks, then collaborate securely to address them. By streamlining complex, time-consuming investigative workflows, admins can resolve investigations in hours instead of weeks or months.

Proactive and reactive investigation scenarios

Organizations are using Data Security Investigations to tackle diverse data security challenges — from reactive incident response to proactive risk assessment. Some of our top use cases from preview include:

- Data breach and leak: Understand severity, sensitivity, and significance of data that has been leaked or breached, including risks buried in impacted data, to take action and mitigate its impact to the organization.
- Credentials exposure: Proactively scan thousands of SharePoint sites to identify files containing credentials like passwords, which can give a threat actor prolonged access to an organization's environment.
- Internal fraud and bribery: Uncover suspicious communications tied to vendor payments or client interactions, uncovering hard-to-find patterns in large volumes of emails and messages.
- Sensitive data exposure in Teams: Determine who accessed classified documents after accidental sharing — and whether that data was further distributed.
- Inappropriate content investigations: Quickly find what was posted, where, and by whom, even when teams only know a timeframe or channel name.

Investigations that once took weeks — or weren’t possible at all — can now be completed in hours. By eliminating manual effort and surfacing hidden risks across sprawling data estates, Data Security Investigations empowers teams to investigate more efficiently and confidently, making deep, scalable investigations a reality.

What Microsoft Purview Data Security Investigations does – and what’s new

Since launching public preview, we've listened closely to customer feedback and made significant enhancements to help teams investigate faster, mitigate more effectively, and manage costs with confidence. Data Security Investigations addresses three critical stages of an investigation:

Identify impacted data

Data Security admins can efficiently identify relevant data by searching their Microsoft 365 data estate to locate emails, Teams messages, Copilot prompts and responses, and documents. Investigators can also launch pre-scoped investigations from a Microsoft Defender XDR incident or a Microsoft Purview Insider Risk Management case. We’ve recently added a new integration that allows admins to launch a Data Security Investigation from Microsoft Purview Data Security Posture Management as well. This capability can help a data security admin investigate an objective, such as preventing data exfiltration.

Investigate using deep content analysis

Once the investigation is scoped, the solution's generative AI capabilities allow admins to gain deeper insights into the data, analyzing across 95+ languages to uncover critical sensitive data and security risks. Teams can quickly answer three questions: What data security risks exist within the data? Why do they matter?
And what actions can be taken to mitigate them? To help answer these questions, two new investigative capabilities, AI search and AI context input, as well as enhancements to existing features, were added in November. Data Security Investigations helps admins scale their impact and accelerate investigations with the following features:

- AI search: Using a new AI-powered natural language search experience, admins can find key risks using keywords, metadata, and semantic embeddings — making it easier to locate investigation-relevant content across large data estates.
- Categorization: By automatically classifying investigation data into meaningful categories, admins can quickly understand incident severity, what types of content are at risk, and trends within an investigation.
- Vector search: Using semantic search, admins can find contextually related content — even when exact keywords don't match.
- Risk examination: Using deep content analysis, admins can examine content for sensitive data and security risks, providing a risk score, recommended mitigation steps, and AI-generated rationale for each analyzed asset.
- AI context input: Admins can now add investigation-specific context before analysis, resulting in more efficient, higher-quality insights tailored to the specific incident.

AI search in action, finding credentials present in the dataset being investigated.

Mitigate identified risks

Investigators can use Data Security Investigations to securely collaborate with partner teams to mitigate identified risks, simplifying tasks that have traditionally been time consuming and complex. In September, we launched an integration with the Microsoft Sentinel graph, the data risk graph, allowing admins to visualize correlations between investigation data, users, and their activities.
This automatically combines unified audit logs, Entra audit logs, and threat intelligence, which would otherwise need to be manually correlated, saving time, providing critical context, and allowing investigators to understand all nodes in their investigation. At the start of January 2026, we launched a new mitigation action, purge, that helps admins quickly and efficiently delete sensitive or overshared content directly within the investigation workflow in the product interface. This reduces exposure immediately and keeps incidents from escalating or recurring.

Built-in cost management tools

To help customers predict and manage costs associated with using Data Security Investigations, we recently released a lightweight cost estimator and usage dashboard. The in-product cost estimator is now available to help analysts model and forecast both storage and compute unit costs based on specific use cases, enabling more accurate budget planning. Additionally, the usage dashboard provides granular breakdowns of billed storage and compute unit usage, empowering data security admins to identify cost-saving opportunities and optimize resource allocation. For detailed guidance on managing costs, see https://aka.ms/DSIcostmanagementtips.

Refined business model for general availability

These cost management tools are designed to support our updated business model, which offers greater flexibility and transparency. Customers need the freedom to scale investigations without overcommitting resources. To better align with how customers investigate data risk at scale, we refined the Data Security Investigations business model as part of general availability.
The product now uses two consumptive meters:

- Data Security Investigations Storage Meter: for storing investigation-related data, charged by GB
- Data Security Investigations Compute Meter: for the computational capacity required to complete AI-powered data analysis and actions, charged by Compute Units (CUs)

Monthly charges are determined by the amount of data stored and the number of CUs consumed per hour. This pay-as-you-go model ensures customers only pay for what they need when they need it, providing the flexibility, scalability, and cost efficiency needed for both urgent incident response and proactive data security hygiene assessments. Find more information on pricing at aka.ms/purviewpricing.

Get started today

As data security threats evolve, so must the way we investigate them. Microsoft Purview Data Security Investigations is now generally available, giving organizations a modern, AI-powered approach to uncovering and mitigating risk — without the complexity of disconnected tools or manual workflows. Whether investigating an active breach or proactively hunting for hidden risks, Data Security Investigations gives data security teams the speed and precision needed to act decisively in today's threat landscape.

- Join us for a live Ask Me Anything with the people behind the product on Thursday, February 5th at 10am PST; more details here: aka.ms/PurviewDSIAMA2
- Learn more about Data Security Investigations at aka.ms/DSIdocs
- View pricing details at aka.ms/purviewpricing
- Try Data Security Investigations today! Visit the product at https://purview.microsoft.com/dsi and find setup instructions at aka.ms/DSIsetup

[1] Worldwide IDC Global DataSphere Forecast, 2025–2029
[2] Verizon 2025 Data Breach Investigations Report (DBIR)

Search and Purge workflow in the new modern eDiscovery experience
With the retirement of Content Search (Classic) and eDiscovery Standard (Classic) in May, and alongside the future retirement of eDiscovery Premium (Classic) in August, organizations may be wondering how this will impact their existing search and purge workflow. The good news is that it will not impact your organization's ability to search for and purge email, Teams, and M365 Copilot messages; however, there are some additional points to be careful about when working with purge via the cmdlets and Graph API alongside the modern eDiscovery experience.

We have made some recent updates to our documentation regarding this topic to reflect the changes in the new modern eDiscovery experience. These can be found below, and you should read them in full as they are packed with important information on the process:

- Find and delete email messages in eDiscovery | Microsoft Learn
- Find and delete Microsoft Teams chat messages in eDiscovery | Microsoft Learn
- Search for and delete Copilot data in eDiscovery | Microsoft Learn

The intention of this first blog post in the series is to cover the high-level points, including some best practices, when running search and purge operations using Microsoft Purview eDiscovery. Please stay tuned for further blog posts intended to provide more detailed step-by-step guidance for the following search and purge scenarios:

- Search and purge email and Teams messages using the Microsoft Graph eDiscovery APIs
- Search and purge email messages using the Security and Compliance PowerShell cmdlets

I will update this blog post with the subsequent links to the follow-on posts in this series.

So let's start by looking at the two methods available to issue a purge command with Microsoft Purview eDiscovery: the Microsoft Graph eDiscovery APIs and the Security and Compliance PowerShell cmdlets. Your licensing dictates which options are available to you and what types of items can be purged from Microsoft 365 workloads.
For E3/G3 customers, and cases which have the premium features disabled:

- You can only use the PowerShell cmdlets to issue the purge command
- You should only purge email items from mailboxes, not Teams messages
- You are limited to deleting 10 items per location with a purge command

For E5/G5 customers, and cases which have the premium features enabled:

- You can only use the Graph API to issue the purge command
- You can purge email items and Teams messages
- You can delete up to 100 items per location with a purge command

To undertake a search and then purge, you must have the correct permissions assigned to your account. There are two key Purview roles that you must be assigned:

- Compliance Search: This role lets users run the Content Search tool in the Microsoft Purview portal to search mailboxes and public folders, SharePoint Online sites, OneDrive for Business sites, Skype for Business conversations, Microsoft 365 groups, Microsoft Teams, and Viva Engage groups. This role allows a user to get an estimate of the search results and create export reports, but other roles are needed to initiate content search actions such as previewing, exporting, or deleting search results.
- Search And Purge: This role lets users perform bulk removal of data matching the criteria of a search.

To learn more about permissions in eDiscovery, along with the different eDiscovery Purview roles, please refer to the following Microsoft Learn article: Assign permissions in eDiscovery | Microsoft Learn

By default, eDiscovery Manager and eDiscovery Administrator have the Compliance Search role assigned. For search and purge, only the Organization Management Purview role group has the role assigned by default. However, this is a highly privileged Purview role group, and customers should consider using a custom role group to assign the Search And Purge Purview role to authorized administrators.
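As a sketch of that custom role group approach in Security and Compliance PowerShell (the group name and member UPNs below are illustrative placeholders, and this assumes an account with sufficient privileges):

```powershell
# Connect to Security & Compliance PowerShell first.
Connect-IPPSession

# Create a custom role group carrying only the Search And Purge role,
# scoped to the specific admins authorized to run purge operations.
# The group name and members are illustrative placeholders.
New-RoleGroup -Name "Search and Purge Operators" `
    -Roles "Search And Purge" `
    -Members "purge.admin1@contoso.com", "purge.admin2@contoso.com"
```

This keeps the highly privileged Organization Management role group out of day-to-day purge workflows.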
Details on how to create a custom role group in Purview can be found in the following article: Permissions in the Microsoft Purview portal | Microsoft Learn

It is also important to consider the impact any retention policies or legal holds will have when attempting to purge email items from a mailbox where you want to hard delete the items and remove them completely from the mailbox. When a retention policy or legal hold is applied to a mailbox, email items that are hard deleted via the purge process are moved to and retained in the Recoverable Items folder of the mailbox. These purged items will be retained until all holds are lifted and the retention period defined in the retention policy has expired. It is important to note that items retained in the Recoverable Items folder are not visible to users but are returned in eDiscovery searches.

For some search and purge use cases this is not a concern; if the primary goal is simply to remove the item from the user's view, then no additional steps are required. However, if the goal is to completely remove the email item from the mailbox in Exchange Online, so it doesn't appear in the user's view and is not returned by future eDiscovery searches, then additional steps are required. They are:

1. Disable client access to the mailbox
2. Modify retention settings on the mailbox
3. Disable the Exchange Online Managed Folder Assistant for the mailbox
4. Remove all legal holds and retention policies from the mailbox
5. Perform the search and purge operation
6. Revert the mailbox to its previous state

These steps should be followed carefully, as any mistake could result in other retained data being permanently deleted from the service. The full detailed steps can be found in the following article.
Delete items in the Recoverable Items folder of mailboxes on hold in eDiscovery | Microsoft Learn

Now for some best practices when running search and purge operations:

- Where possible, target the specific locations containing the items you wish to purge and avoid tenant-wide searches
- If a tenant-wide search is used to initially locate the items, once the locations containing the items are known, modify the search to target the specific locations and rerun the steps
- Always validate the item report against the statistics prior to issuing the purge command to ensure you are only purging items you intend to remove
- If the item counts do not align, do not proceed with the purge command
- Ensure admins undertaking search and purge operations are appropriately trained and equipped with up-to-date guidance and processes on how to safely execute the purge process
- The search conditions Identifier, Sensitivity Label, and Sensitive Information Type do not support purge operations and, if used, can cause unintended results

Organizations with E5/G5 licenses should also take this opportunity to review whether other Microsoft Purview and Defender offerings can help them achieve the same outcomes. When considering the right approach/tool to meet your desired outcomes, you should become familiar with the following additional options for removing email items:

- Priority cleanup (link): Use the Priority cleanup feature under Data Lifecycle Management in Microsoft Purview when you need to expedite the permanent deletion of sensitive content from Exchange mailboxes, overriding any existing retention settings or eDiscovery holds. This process might be implemented for security or privacy in response to an incident, or for compliance with regulatory requirements.
- Threat Explorer (link): Threat Explorer in Microsoft Defender for Office 365 is a powerful tool that enables security teams to investigate and remediate malicious emails in near real-time.
It allows users to search for and filter email messages based on various criteria, such as sender, recipient, subject, or threat type, and take direct actions like soft delete, hard delete, or moving messages to the Junk or Deleted Items folders. For manual remediation, Threat Explorer supports actions on emails delivered within the past 30 days.

In my next posts I will delve further into how to use both the Graph APIs and the Security and Compliance PowerShell module to safely execute your purge commands.

Search and Purge using the Security and Compliance PowerShell cmdlets
Welcome back to the series of blog posts covering search and purge in Microsoft Purview eDiscovery! If you are new to this series, please first visit the earlier post in the series, which you can find here: Search and Purge workflow in the new modern eDiscovery experience. Also please ensure you read in full the Microsoft Learn documentation on this topic, as I will not be covering some of the steps in full (permissions, releasing holds, all limitations): Find and delete email messages in eDiscovery | Microsoft Learn

As a reminder, E3/G3 customers must use the Security and Compliance PowerShell cmdlets to execute the purge operation. Searches can continue to be created using the New-ComplianceSearch cmdlet and then run using the Start-ComplianceSearch cmdlet. Once a search has run, the statistics can be reviewed before executing the New-ComplianceSearchAction cmdlet with the Purge switch to remove the items from the targeted locations. However, some organizations may want to initially run the search, review statistics, and export an item report in the new user experience before using the New-ComplianceSearchAction cmdlet to purge the items from the mailbox.

Before starting, ensure you have version 3.9.0 or later of the Exchange Online Management PowerShell module installed (link). If multiple versions of the module are installed alongside version 3.9.0, remove the older versions to avoid potential conflicts between them. When connecting using the Connect-IPPSession cmdlet, ensure you include the EnableSearchOnlySession parameter; otherwise the purge command will not run and may generate an error (link).

Create the case. If you will be using the new Content Search case you can skip this step; however, if you want to create a new case to host the search, you must create the case via PowerShell.
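The connection step described above might look like the following (a sketch; the UPN is a placeholder for your admin account):

```powershell
# Connect to Security & Compliance PowerShell in search-only mode.
# Omitting -EnableSearchOnlySession will cause the purge command to fail later.
Connect-IPPSession -UserPrincipalName admin@contoso.com -EnableSearchOnlySession
```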
Creating the case via PowerShell ensures any searches created within the case in the Purview portal will support the PowerShell-based purge command. Use the Connect-IPPSession cmdlet to connect to Security and Compliance PowerShell before running the following command to create a new case:

New-ComplianceCase "Test Case"

Select the new Purview Content Search case, or the new case you created in step 1, and create a new search. Within your new search, use the Add sources option to search for and select the mailboxes containing the items to be purged, adding them to the data sources of your newly created search.

Note: Make sure only Exchange mailboxes are selected, as you can only purge items contained within Exchange mailboxes. If you added both the mailbox and associated sites, you can remove the sites using the three-dot menu next to the data source under User Options. Alternatively, use the Manage sources button to remove the sites associated with the data source.

Within the condition builder, define the conditions required to target the items you wish to purge. In this example, I am targeting an email with a specific subject, from a specific sender, on a specific day. To understand the estimated number of items that would be returned by the search, I can run a statistics job first to give me confidence that the query is correct. I do this by selecting Run query from the search itself, then selecting Statistics and Run query to trigger the statistics job. Note: you can view the progress of the job via the Process manager.

Once completed, I can view the statistics to confirm the query looks accurate and is returning the numbers I was expecting. If I want to further verify that the items returned by the search are what I am looking for, I can run a sample job to review a sample of the items matching the search query. Once the sample job is completed, I can review samples for locations with hits to determine whether these are indeed the items I want to purge.
If I need to go further and generate a report of the items that match the search (not just statistics and sampling), I can run an export to generate a report of the items that match the search criteria.

Note: It is important to run the export report to review the results that the purge action will remove from the mailbox. This ensures that we purge only the items of interest.

Download the report for the export job via the Process manager or the Export tab to review the items that were a match.

Note: If only a few locations have hits, it is recommended to reduce the scope of your search by updating the data sources to include only the locations with hits.

Switch back to PowerShell and use the Get-ComplianceSearch cmdlet as below, ensuring the query is as you specified in the Purview portal:

Get-ComplianceSearch -Identity "My search and purge" | fl

As the search hasn't been run yet in PowerShell, the Items count is 0 and the JobEndTime is not set; the search needs to be re-run via PowerShell as per the example shown below:

Start-ComplianceSearch "My search and purge"

Give it a few minutes to complete and use Get-ComplianceSearch to check the status of the search. If the status is not "Completed" and JobEndTime is not set, you may need to give it more time. Check the search returned the same results once it has finished running:

Get-ComplianceSearch -Identity "My search and purge" | fl name,status,searchtype,items,searchstatistics

CRITICAL: It is important to make sure the Items count matches the number of items returned in the item report generated from the Purview portal. If the number of items returned in PowerShell does not match, do not continue with the purge action.
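That count check can be scripted as a simple guard before purging (a sketch; $reportItemCount is a placeholder for the count taken from the portal's exported item report, and the search name is illustrative):

```powershell
# Placeholder: the item count taken from the portal's exported item report.
$reportItemCount = 42

$search = Get-ComplianceSearch -Identity "My search and purge"

# Only proceed when the search has finished and the counts agree.
if ($search.Status -ne "Completed") {
    throw "Search has not completed; wait and re-check before purging."
}
if ($search.Items -ne $reportItemCount) {
    throw "Item count mismatch ($($search.Items) vs $reportItemCount); do not purge."
}

Write-Output "Counts match; safe to issue the purge command."
```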
Issue the purge command using the New-ComplianceSearchAction cmdlet:

New-ComplianceSearchAction -SearchName "My search and purge" -Purge -PurgeType HardDelete

Once completed, check the status of the purge command to confirm that the items have been deleted:

Get-ComplianceSearchAction "My search and purge_purge" | fl

Now that the purge operation has completed successfully, the items have been removed from the target mailboxes and are no longer accessible by the users.

Microsoft Purview Data Risk Assessments: M365 vs Fabric
Why Data Risk Assessments matter more in the AI era

Generative AI changes the oversharing equation. It can surface data faster, to more people, with less friction, which means existing permission mistakes become more visible, more quickly. Microsoft Purview Data Risk Assessments are designed to identify and help you remediate oversharing risks before (or while) AI experiences like Copilot and analytics copilots accelerate access patterns.

Quick Decision Guide: When to Use Which?

Use Microsoft 365 Data Risk Assessments when:
- You’re rolling out Microsoft 365 Copilot (or Copilot Chat/agents grounded in SharePoint/OneDrive).
- Your biggest exposure risk is overshared SharePoint sites, broad internal access, anonymous links, or unlabeled sensitive files.

Use Fabric Data Risk Assessments when:
- You’re rolling out Copilot in Fabric and want visibility into sensitive data exposure in workspaces and Fabric items (Dashboard, Report, DataExploration, DataPipeline, KQLQuerySet, Lakehouse, Notebook, SQLAnalyticsEndpoint, and Warehouse).
- Your governance teams need to reduce risk in analytics estates without waiting for a full data governance program to mature.

Use both when (most enterprises):
- Data is spread across collaboration, analytics, and AI interactions, and you want a unified posture and remediation motion under DSPM objectives.

At a high level:
- Microsoft 365 Data Risk Assessments focus on oversharing risk in SharePoint and OneDrive content, a primary readiness step for Microsoft 365 Copilot rollouts.
- Fabric Data Risk Assessments focus on oversharing risk in Microsoft Fabric workspaces and items, especially relevant for Copilot in Fabric and Power BI / Fabric artifacts.
- Both experiences show up under the newer DSPM (preview) guided objectives, and also remain available via DSPM for AI (classic) paths depending on your tenant rollout.

The Differences That Matter

NOTE: Assessments are a posture snapshot, not a live feed.
Default assessments automatically re-scan every week, while custom assessments need to be manually recreated or duplicated to get a new snapshot. Use them to prioritize remediation, then re-run on a cadence to measure improvement.

Default scope & frequency
- M365: Surfaces the top 100 most active SharePoint sites (and OneDrives) weekly. Surfaces org-wide oversharing issues in M365 content.
- Fabric: The default data risk assessment automatically runs weekly for the top 100 Fabric workspaces based on usage in your organization. Focuses on oversharing in Fabric items.

Supported item types
- M365: SharePoint Online sites (incl. Teams files) and OneDrive documents. Focus on files/folders and their sharing links or permissions.
- Fabric: Fabric content: Dashboards, Power BI Reports, Data Explorations, Data Pipelines, KQL Querysets, Lakehouses, Notebooks, SQL Analytics Endpoints, Warehouses (as of preview).

Oversharing signals
- M365: Unprotected sensitive files (files without a sensitivity label) that have broad or external access (e.g., “everyone” or public link sharing). Also flags sites with many active users (high exposure).
- Fabric: Unprotected sensitive data in Fabric workspaces. For example, reports or datasets with SITs but no sensitivity label, or Fabric items accessible to many users (or shared externally via link, if applicable).

Freshness and re-run behavior
- M365: Custom assessments can be re-run by duplicating the assessment to create a new run; results can expire after a defined window (30-day expiration, using “duplicate” to re-run).
- Fabric: Fabric custom assessments are also active for 30 days and can be duplicated to continue scanning the scoped list of workspaces. To re-run, and to see results after the 30-day expiration, use the duplicate option to create a new assessment with the same selections.
Setup requirements
- M365: Deeper capabilities, like item-level scanning in custom assessments, require an Entra app registration with specific Microsoft Graph application permissions plus admin consent for your tenant.
- Fabric: The default assessment requires a one-time service principal setup and enabling service principal authentication for Fabric admin APIs in the Fabric admin portal.

Remediation options
- M365: Advanced: item-level scanning identifies potentially overshared files with direct remediation actions (e.g., remove sharing link, apply sensitivity label, notify owner) for SharePoint sites. Site-level actions are also available, such as enabling default sensitivity labels or configuring DLP policies.
- Fabric: Workspace-level Purview controls: e.g., create DLP policies, apply default sensitivity labels to new items, or review user access in Entra. (The actual permission changes in Fabric must be made by admins or owners in Fabric.)

User experience
- M365: Default assessment: site-level insights such as the site’s label, how many times the site was accessed, and how many sensitive items were found in the site. Custom assessments: site-level and item-level insights if using item-level scan. Shows counts of sensitive files, share links (anyone links, externals), label coverage, last-accessed info, etc., plus possible remediation actions.
- Fabric: Results organized by Fabric workspace. Shows counts of sensitive items found and how many are labeled vs. unlabeled, broad access indicators (e.g., large viewer counts or “anyone link” usage for data in that workspace), plus recommended mitigations (DLP/sensitivity label policies).

Licensing
- M365: DSPM / DSPM for AI M365 features typically require Microsoft 365 E5 / E5 Compliance entitlements for relevant user-based scenarios.
- Fabric: Copilot in Fabric governance (starting with Copilot for Power BI) requires E5 licensing, enabling the Fabric tenant option “Allow Microsoft Purview to secure AI interactions” (enabled by default), and pay-as-you-go billing for Purview management of those AI interactions.

Common pitfalls organizations face when securing Copilot in Fabric (and how to avoid them)

Pitfall: Not completing prerequisites for Purview to access the Fabric environment. Example: skipping the Entra app setup for Fabric assessment scanning.
How to avoid or mitigate: Follow the setup checklist and enable the Purview–Fabric integration. Without it, your Fabric workspaces won’t be scanned for oversharing risk. Dedicate time pre-deployment to configure required roles, app registrations, and settings.

Pitfall: Fabric setup stalls due to missing admin ownership.
How to avoid or mitigate: Fabric assessments require collaboration between the Entra app admin and the Fabric admin (service principal + tenant settings).

Pitfall: Skipping the “label strategy” and jumping straight to DLP.
How to avoid or mitigate: DLP is strongest when paired with a clear sensitivity labeling strategy; labels provide durable semantics across M365 + Fabric.

Pitfall: Fragmented labeling strategy. Example: Fabric assets have a different labeling schema, or none at all, separate from M365.
How to avoid or mitigate: Align on a unified sensitivity label taxonomy across M365 and Fabric. Re-use labels in Fabric via Purview publishing so that classifications mean the same thing everywhere. This ensures consistent DLP and retention behavior across all data locations.

Pitfall: Too-broad DLP blocks disrupt users. Example: blocking every sharing action involving any internal data causes frustration.
How to avoid or mitigate: Take a risk-based approach: start with monitor-only DLP rules to gather data, then refine. Focus on high-impact scenarios (for instance, blocking external sharing of highly sensitive data) rather than blanket rules. Use user education (via policy tips) to drive awareness alongside enforcement.
Pitfall: Ignoring the human element (owners & users). Example: IT implements controls but doesn’t inform data owners or train users.
How to avoid or mitigate: Involve workspace owners and end users early. For each high-risk workspace, engage the owner to verify whether data is truly sensitive and to help implement least-privilege access. Provide training to users about how Copilot uses data and why labeling and proper sharing are important. This fosters a culture of “shared responsibility” for AI security.

Pitfall: No ongoing plan after initial fixes. Example: a one-time scan and label, but no follow-up, so new issues emerge unchecked.
How to avoid or mitigate: Operationalize the process and treat this as a continuous cycle. Leverage the “Operate” phase: schedule regular re-assessments (e.g., monthly Fabric risk scans) and quarterly reviews of Copilot-related incidents. Consider appointing a Copilot Governance Board, or expanding an existing data governance committee to include AI oversight, so there’s accountability for long-term upkeep.

Resources
- Prevent oversharing with data risk assessments (DSPM)
- Prerequisites for Fabric Data risk assessments
- Fabric: enable service principal authentication for admin APIs
- Beyond Visibility: new Purview DSPM experience
- Microsoft Purview licensing guidance
- New Purview pricing options for protecting AI apps and agents (PAYG context)

Acknowledgements

Sincere regards to Sunil Kadam, Principal Squad Leader, and Jenny Li, Product Lead, for their review and feedback.

Always-on Diagnostics for Purview Endpoint DLP: Effortless, Zero-Friction troubleshooting for admins
Historically, some security teams have struggled to troubleshoot issues with endpoint DLP. Investigations often slow down because reproducing issues, collecting traces, and aligning on context can be tedious. With always-on diagnostics in Purview endpoint data loss prevention (DLP), our goal has been simple: make troubleshooting seamless and effortless, without ever disrupting the information worker.

Today, we’re excited to share new enhancements to always-on diagnostics for Purview endpoint DLP. This is the next step in our journey to modernize supportability in Microsoft Purview and dramatically reduce admin friction during investigations.

Where We Started: Introduction of Continuous Diagnostic Collection

Earlier this year, we introduced continuous diagnostic trace collection on Windows endpoints (support for macOS endpoints coming soon). This eliminated the single largest source of friction: the need to reproduce issues. With this capability:
- Logs are captured persistently for up to 90 days
- Information workers no longer need admin permissions to retrieve traces
- Admins can submit complete logs on the first attempt
- Support teams can diagnose transient or rare issues with high accuracy

In just a few months, we saw resolution times drop dramatically. The message was clear: always-on diagnostics is becoming the new troubleshooting standard.

Our Newest Enhancements: Built for Admins, Designed for Zero Friction

The newest enhancements to always-on diagnostics unlock the most requested capability from our IT and security administrators: the ability to retrieve and upload always-on diagnostic traces directly from devices using the Purview portal, with no user interaction required.
This means:
- Admins can now initiate trace uploads on demand
- No interruption to information workers and their productivity
- No issue-reproduction sessions, minimizing unnecessary disruption and coordination
- Every investigation starts with complete context

Because the traces are already captured on-device, these improvements complete the loop by giving admins a seamless, portal-integrated workflow to deliver logs to Microsoft when needed. This experience is now fully available for customers using endpoint DLP on Windows.

Why This Matters

As a product team, our success is measured not just by usage, but by how effectively we eliminate friction for customers. Always-on diagnostics minimizes the friction and frustration that has historically affected some customers:
- No more asking your employee or information worker, “Can you reproduce that?”, and waiting for them to share logs
- No more lost context
- No more delays while logs are collected after the fact

How It Works

Local trace capture
Devices continuously capture endpoint DLP diagnostic data in a compressed, proprietary format. This data stays solely on the respective device, subject to the retention period and storage limits configured by the admin. Users no longer need to reproduce issues during retrieval; everything the investigation requires is already captured on the endpoint.

Admin-triggered upload
Admins can now request diagnostic uploads directly from the Purview portal, eliminating the need to disrupt users. Upload requests can be initiated from multiple entry points, including:
- Alerts (Data Loss Prevention → Alerts → Events)
- Activity Explorer (Data Loss Prevention → Explorers → Activity explorer)
- Device Policy Status Page (Settings → Device onboarding → Devices)

From any of these locations, admins can simply choose Request device log, select the date range, add a brief description, and submit the request.
Once processed, the device’s always-on diagnostic logs are securely uploaded to Microsoft telemetry per customer-approved settings. Admins can include the upload request number in their ticket with Microsoft Support; sharing this number removes the need for the support engineer to ask for logs again during the investigation. This workflow ensures investigations start with complete diagnostic context.

Privacy & Compliance Considerations
- Data is only uploaded during admin-initiated investigations
- Data adheres to our published diagnostic data retention policies
- Logs are accessible only to the Microsoft support team, not to any other parties

We Want to Hear From You

Are you using always-on diagnostics? We’d love to hear about your experience. Share your feedback, questions, or success stories in the Microsoft Tech Community, or reach out to our engineering team directly.

Making troubleshooting effortless, so you can focus on what matters, not on chasing logs.