Microsoft Purview
Use Sensitive Info Types to classify your structured data assets at column level
We are excited to announce that Microsoft Purview has extended the support of sensitive info types (SITs) to Azure and third-party data assets in the Data Map/Catalog. Before this release, SITs could only be applied at the file level. Now, SITs can be applied more granularly, at the column level, for structured non-M365 assets.

Security Copilot Skilling Series
Starting this October, Security Copilot joins forces with your favorite Microsoft Security products in a skilling series miles above the rest. The Security Copilot Skilling Series is your opportunity to strengthen your security posture through threat detection, incident response, and leveraging AI for security automation. These technical skilling sessions are delivered live by experts from our product engineering teams. Come ready to learn, engage with your peers, ask questions, and provide feedback. Upcoming sessions are noted below and will be available on-demand on the Microsoft Security Community YouTube channel.

Coming Up

October 30 | What's New in Copilot in Microsoft Intune
Speaker: Amit Ghodke, Principal PM Architect, CxE CAT MEM
Join us to learn about the latest Security Copilot capabilities in Microsoft Intune. We will discuss what's new and how you can supercharge your endpoint management experience with the new AI capabilities in Intune. Register now.

November 13 | Microsoft Entra AI: Unlocking Identity Intelligence with Security Copilot Skills and Agents
Speakers: Mamta Kumar, Sr. Product Manager; Rahul Prakash, Principal Product Manager, AI Innovations; Chad Hasbrook, Sr. Product Manager, IDNA
This session will demonstrate how Security Copilot in Microsoft Entra transforms identity security by introducing intelligent, autonomous capabilities that streamline operations and elevate protection. Customers will discover how to leverage AI-driven tools to optimize conditional access, automate access reviews, and proactively manage identity and application risks, empowering them toward a more secure and efficient digital future. Register now.

Please stand by for an updated flight list; many more sessions are coming soon. Click "follow" in the upper right of this article to be notified of updates.
Now On-Demand

October 16 | What’s New in Copilot in Microsoft Purview
Speaker: Patrick David, Principal Product Manager, CxE CAT Compliance
Join us for an insider’s look at the latest innovations in Microsoft Purview, where alert triage agents for DLP and IRM are transforming how we respond to sensitive data risks and improve investigation depth and speed. We’ll also dive into powerful new capabilities in Data Security Posture Management (DSPM) with Security Copilot, designed to supercharge your security insights and automation. Whether you're driving compliance or defending data, this session will give you the edge.

October 9 | When to Use Logic Apps vs. Security Copilot Agents
Speaker: Shiv Patel, Sr. Product Manager, Security Copilot
Explore how to scale automation in security operations by comparing the use cases and capabilities of Logic Apps and Security Copilot Agents. This webinar highlights when to leverage Logic Apps for orchestrated workflows and when Security Copilot Agents offer more adaptive, AI-driven responses to complex security scenarios.

All sessions will be published to the Microsoft Security Community YouTube channel - Security Copilot Skilling Series Playlist.

Looking for more? Keep up on the latest information on the Security Copilot Blog. Join the Microsoft Security Community mailing list to stay up to date on the latest product news and events. Engage with your peers in one of our Microsoft Security discussion spaces.

Unlocking the Power of Microsoft Purview for ChatGPT Enterprise
In today's rapidly evolving technology landscape, data security and compliance are key. Microsoft Purview offers a robust solution for managing and securing interactions with AI-based solutions. This integration not only enhances data governance but also ensures that sensitive information is handled with the appropriate controls. Let's dive into the benefits of this integration and outline the steps to integrate with ChatGPT Enterprise specifically. The integration works for Entra-connected users on the ChatGPT workspace; if you have needs that go beyond this, please tell us why and how it impacts you.

Important update 1: Effective May 1, these capabilities require you to enable pay-as-you-go billing in your organization.

Important update 2: From May 19, you are required to create a collection policy to ingest ChatGPT Enterprise information. In DSPM for AI you will find this one-click process.

Benefits of Integrating ChatGPT Enterprise with Microsoft Purview

Enhanced Data Security: By integrating ChatGPT Enterprise with Microsoft Purview, organizations can ensure that interactions are securely captured and stored within their Microsoft 365 tenant. This includes user text prompts and AI app text responses, providing a comprehensive record of communications.

Compliance and Governance: Microsoft Purview offers a range of compliance solutions, including Insider Risk Management, eDiscovery, Communication Compliance, and Data Lifecycle & Records Management. These tools help organizations meet regulatory requirements and manage data effectively.

Customizable Detection: The integration allows for detection using built-in and custom classifiers for sensitive information, which can be tailored to meet the specific needs of the organization. This helps ensure that sensitive data is identified and protected. The audit data streams into Advanced Hunting and the Unified Audit log, which can generate visualizations of trends and other insights.
Seamless Integration: The ChatGPT Enterprise integration uses the Purview API to push data into compliant storage, ensuring that external data sources cannot access and push data directly. This provides an additional layer of security and control.

Step-by-Step Guide to Setting Up the Integration

1. Get the Object ID for the Purview account in your tenant:
Go to portal.azure.com and search for "Microsoft Purview" in the search bar. Click on "Microsoft Purview accounts" from the search results. Select the Purview account you are using and copy the account name. Go to portal.azure.com and search for "Enterprise" in the search bar. Click on Enterprise applications. Remove the filter for Enterprise Applications, select All applications under Manage, search for the account name, and copy the Object ID.

2. Assign Graph API roles to your managed identity application:
Assign Purview API roles to your managed identity application by connecting to Microsoft Graph using Cloud Shell in the Azure portal. Open a PowerShell window in portal.azure.com and run the command Connect-MgGraph. Authenticate and sign in to your account. Run the following cmdlet to get the ServicePrincipal ID for your organization for the Purview API app:

(Get-MgServicePrincipal -Filter "AppId eq '9ec59623-ce40-4dc8-a635-ed0275b5d58a'").id

The role assignment below grants the Purview.ProcessConversationMessages.All permission to the Microsoft Purview account, allowing classification processing. Update the ObjectId to the one retrieved in step 1 for both the command and the body parameter. Update the ResourceId to the ServicePrincipal ID retrieved in the last step.
$bodyParam = @{
  "PrincipalId" = "{ObjectID}"
  "ResourceId"  = "{ResourceId}"
  "AppRoleId"   = "a4543e1f-6e5d-4ec9-a54a-f3b8c156163f"
}
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam

We also need to add permission for the application to read the user accounts, to correctly map ChatGPT Enterprise users to Entra accounts. First run the following command to get the ServicePrincipal ID for your organization for the Graph app:

(Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'").id

The following step adds the User.Read.All permission to the Purview application. Update the ObjectId with the one retrieved in step 1. Update the ResourceId with the ServicePrincipal ID retrieved in the last step.

$bodyParam = @{
  "PrincipalId" = "{ObjectID}"
  "ResourceId"  = "{ResourceId}"
  "AppRoleId"   = "df021288-bdef-4463-88db-98f22de89214"
}
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam

3. Store the ChatGPT Enterprise API key in Key Vault:
The steps for setting up Key Vault integration for the Data Map can be found here: Create and manage credentials for scans in the Microsoft Purview Data Map | Microsoft Learn.

4. Integrate the ChatGPT Enterprise workspace with Purview:
Create a new data source in the Purview Data Map that connects to the ChatGPT Enterprise workspace. Go to purview.microsoft.com and select Data Map (search for it if you do not see it on the first screen). Select Data sources, then select Register. Search for ChatGPT Enterprise and select it. Provide your ChatGPT Enterprise ID. Create the first scan by selecting Table view and filtering on ChatGPT. Add your Key Vault credentials to the scan. Test the connection and, once complete, click Continue. If everything looks correct on the following screen, click Save and run.
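Pulling step 2 together, the two app role assignments above can be run as a single PowerShell session. This is a sketch based on the commands shown in this article; the Object ID is a placeholder you must replace with the value retrieved in step 1, and the Connect-MgGraph scopes shown are an assumption about the privileges needed to create app role assignments in your tenant.

```powershell
# Sign in with permissions sufficient to create app role assignments
# (the scopes here are an assumption; adjust to your tenant's policy)
Connect-MgGraph -Scopes "AppRoleAssignment.ReadWrite.All", "Application.Read.All"

# Object ID of the Purview account's enterprise application, from step 1
$purviewObjectId = "{ObjectId}"   # placeholder - replace with your value

# Resolve the service principal for the Purview API app
$purviewApiSpId = (Get-MgServicePrincipal -Filter "AppId eq '9ec59623-ce40-4dc8-a635-ed0275b5d58a'").Id

# Grant Purview.ProcessConversationMessages.All (classification processing)
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $purviewObjectId -BodyParameter @{
    "PrincipalId" = $purviewObjectId
    "ResourceId"  = $purviewApiSpId
    "AppRoleId"   = "a4543e1f-6e5d-4ec9-a54a-f3b8c156163f"
}

# Resolve the service principal for Microsoft Graph
$graphSpId = (Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'").Id

# Grant User.Read.All so ChatGPT Enterprise users can be mapped to Entra accounts
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $purviewObjectId -BodyParameter @{
    "PrincipalId" = $purviewObjectId
    "ResourceId"  = $graphSpId
    "AppRoleId"   = "df021288-bdef-4463-88db-98f22de89214"
}
```

Using a variable for the Object ID avoids editing the placeholder in three different places, which is an easy source of copy-paste mistakes.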
Validate the progress by clicking on the scan name; completion of the first full scan may take an extended period of time, and depending on size it may take more than 24 hours to complete. If you click on the scan name you can expand to all the runs for that scan. When the scan completes you can start to use the DSPM for AI experience to review interactions with ChatGPT Enterprise. The mapping to users is based on the ChatGPT Enterprise connection to Entra, with prompts and responses stored in the user's mailbox.

5. Review and monitor data:
Please see this article for required permissions and guidance around Microsoft Purview Data Security Posture Management (DSPM) for AI: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn. Use Purview DSPM for AI analytics and Activity Explorer to review interactions and classifications. You can expand on prompts and responses in ChatGPT Enterprise.

6. Microsoft Purview Communication Compliance:
Communication Compliance (hereafter CC) is a feature of Microsoft Purview that allows you to monitor and detect inappropriate or risky interactions with ChatGPT Enterprise. You can monitor and detect requests and responses that are inappropriate based on ML models, regular sensitive information types, and other classifiers in Purview. This can help you identify jailbreak and prompt injection attacks and flag them to Insider Risk Management and for case management. Detailed steps to configure CC policies and supported configurations can be found here.

7. Microsoft Purview Insider Risk Management:
We believe that Microsoft Purview Insider Risk Management (hereafter IRM) can serve a key role in protecting your AI workloads long term. With its adaptive protection capabilities, IRM dynamically adjusts user access based on evolving risk levels.
In the event of heightened risk, IRM can enforce Data Loss Prevention (DLP) policies on sensitive content, apply tailored Entra Conditional Access policies, and initiate other necessary actions to effectively mitigate potential risks. This targeted approach helps you apply more stringent policies where they matter most, avoiding a boil-the-ocean approach and allowing your team to get started using AI. To get started, use the signals available to you, including CC signals, to raise IRM alerts and enforce adaptive protection. You should create your own custom IRM policy for this, and include Defender signals as well. Based on elevated risk, you may choose to block users from accessing certain assets such as ChatGPT Enterprise. Please see this article for more detail: Block access for users with elevated insider risk - Microsoft Entra ID | Microsoft Learn.

8. eDiscovery:
eDiscovery of AI interactions is crucial for legal compliance, transparency, accountability, risk management, and data privacy protection. Many industries must preserve and discover electronic communications and interactions to meet regulatory requirements. Including AI interactions in eDiscovery ensures organizations comply with these obligations and preserves relevant evidence for litigation. This process also helps maintain trust by enabling the review of AI decisions and actions, demonstrating due diligence to regulators. See Microsoft Purview eDiscovery solutions | Microsoft Learn.

9. Data Lifecycle Management:
Microsoft Purview offers robust solutions to manage AI data from creation to deletion, including classification, retention, and secure disposal. This ensures that AI interactions are preserved and retrievable for audits, litigation, and compliance purposes. Please see this article for more information: Automatically retain or delete content by using retention policies | Microsoft Learn.
Closing

By following these steps, organizations can leverage the full potential of Microsoft Purview to enhance the security and compliance of their ChatGPT Enterprise interactions. This integration not only provides peace of mind but also empowers organizations to manage their data more effectively. Some of the features listed are still in preview and not fully integrated; please reach out to us if you have any questions or if you have additional requirements.

Security Copilot Agents: The New Era of AI-Driven Cyber Defense
With increasing cyber threats, security teams require intelligent agents that adapt and operate throughout the security stack, not just automation. Key statistics from the Microsoft Digital Defense Report 2024 highlight this concerning trend in cybersecurity threats:

Over 600 million cyberattacks per day targeting Microsoft customers
2.75x increase in ransomware attacks year-over-year
400% surge in tech scams since 2022
Growing collaboration between cybercriminals and nation-state actors

In my previous blogs, I explored how AI agents are transforming security operations in Microsoft Defender XDR, Intune, and Entra:

Phishing Triage Agent in Defender XDR: Say Goodbye to False Positives and Analyst Fatigue
Intune AI Agent: Instant Threat Defense, Invisible Protection
Conditional Access Optimization Agent in Microsoft Entra Security Copilot

Today, I’ll discuss how Security Copilot, Copilot for Azure, Defender for Cloud, and Security Copilot Agents in Microsoft Purview use AI to transform security, compliance, and efficiency across the Microsoft ecosystem.

What Are Security Copilot Agents?

Security Copilot Agents are modular, AI-driven assistants embedded in Microsoft’s security platforms. They automate high-volume repetitive tasks, deliver actionable insights, and streamline incident response. By leveraging large language models (LLMs), Microsoft’s global threat intelligence, and your organization’s data, these agents empower security teams to work smarter and faster. See the Microsoft Security Copilot agents overview. Agents are available in both standalone and embedded experiences and can be discovered and configured directly within product portals like Defender, Sentinel, Entra, Intune, and Purview.

Why Security Copilot Agents Matter

Security Copilot Agents represent a paradigm shift in cyber defense:

Automation at Scale: They handle high-volume repetitive tasks, freeing up human expertise for strategic initiatives.
Adaptive Intelligence: Agents learn from feedback, adapt to workflows, and operate securely within Microsoft’s Zero Trust framework.
Operational Efficiency: By reducing manual workloads, agents accelerate response, prioritize risks, and strengthen security posture. See the Microsoft Security Copilot Frequently Asked Questions.

Security Copilot Agents in Azure and Defender for Cloud

Azure and Defender for Cloud now feature embedded Security Copilot and Copilot for Azure experiences that help security professionals analyze, summarize, remediate, and delegate recommendations using natural language prompts. This integration streamlines security management:

Risk Exploration: Agents help admins identify misconfigured resources and focus on those posing critical risks, using natural language queries.
Accelerated Remediation: Agents generate remediation scripts and automate pull requests, enabling rapid fixes for vulnerabilities.
Noise Reduction: By filtering through alerts and recommendations, agents help teams focus on the most impactful remediations.
Unified Experience: Security Copilot and Copilot for Azure work together to provide context, explain recommendations, and guide implementation steps, all within the Defender for Cloud portal. See Microsoft Security Copilot in Defender for Cloud.

Security Copilot Agents in Microsoft Purview

Microsoft Purview leverages Security Copilot agents to automate and scale Data Loss Prevention (DLP) and Insider Risk Management workflows:

Alert Triage Agent (DLP): Evaluates alerts based on sensitivity, exfiltration, and policy risk, sorting them into actionable categories.
Alert Triage Agent (Insider Risk): Assesses user, file, and activity risk, prioritizing alerts for investigation.
Managed Alert Queue: Agents sift high-risk activities out from lower-risk noise, improving response time and team efficiency.
Comprehensive Explanations: Agents provide clear logic behind alert categorization, supporting transparency and compliance.
Deployment: Security Copilot can be enabled in:

The Azure portal: https://portal.azure.com
The Security Copilot portal: https://securitycopilot.microsoft.com

Security Copilot requires per-seat licenses for human users, while all agent operations are billed by Security Compute Units (SCUs) on a pay-as-you-go basis. Agents do not need separate per-seat licenses; their costs depend solely on SCU consumption, and they typically run under a service or managed identity in the Copilot environment. See the Security Copilot Agent Responsible AI FAQ.

Security Copilot Agents: Unified Across the Microsoft Security Ecosystem

Security Copilot Agents automate intelligence and security orchestration across Microsoft’s ecosystem, including Defender, Sentinel, Entra, Intune, Azure, Purview, Threat Intelligence, and Office. Their unified design enables consistent protection, swift responses, and scalable automation for security teams. Operating across multiple platforms, these agents provide comprehensive coverage and efficient threat response.

End-to-End Visibility: Agents correlate signals across domains, providing context-rich insights and automating common workflows.
Custom Agent Creation: Teams can build custom agents using no-code tools, tailoring automation to their unique environments.
Marketplace Integration: The new Security Store allows organizations to browse, deploy, and manage agents alongside conventional security tools, streamlining procurement and governance.

Intune AI Agents: Device and Endpoint Management

Intune AI Agents automate device compliance and endpoint security. They monitor configuration drift, remediate vulnerabilities, and enforce security baselines across managed devices. By correlating device signals with threat intelligence, these agents proactively identify risks and recommend mitigation actions, reducing manual workload and accelerating incident response.
Defender for Cloud AI Agents: Threat Detection and Response

Defender for Cloud AI Agents continuously analyze cloud workloads, network traffic, and user behavior to detect threats and suspicious activities. They automate alert triage, escalate high-risk events, and coordinate remediation actions across hybrid environments. Integration with other Copilot Agents ensures unified protection and rapid containment of cloud-based threats.

Conditional Access Optimization Agent: Policy Automation

The Conditional Access Optimization Agent evaluates authentication patterns, risk signals, and user activity to recommend and enforce adaptive access policies. It automates policy updates based on real-time threat intelligence, ensuring that only authorized users access sensitive resources while minimizing friction for legitimate users.

Azure AI Agents: Cloud Security and Automation

Azure AI Agents provide automated monitoring, configuration validation, and vulnerability management across cloud resources. They integrate with Defender for Cloud and Sentinel, enabling cross-platform correlation of security events and orchestration of incident response workflows. These agents help maintain compliance, optimize resource usage, and enforce best practices.

Purview AI Agents: Compliance and Data Protection

Purview AI Agents automate data classification, information protection, and compliance management for AI-powered applications and Copilot experiences. They enforce retention policies, flag sensitive data handling, and ensure regulatory compliance across organizational data assets. Their integration supports transparent security controls and audit-ready reporting.

Phishing Triage Defender for Office AI Agents: Email Threat Automation

Defender for Office AI Agents specialize in identifying, categorizing, and responding to phishing attempts. They analyze email metadata, attachments, and user interactions to detect malicious campaigns, automate alerting, and initiate containment actions.
By streamlining phishing triage, these agents reduce investigation times and enhance protection against targeted attacks.

Threat Intelligence Briefing Agent: Contextual Security Insights

The Threat Intelligence Briefing Agent aggregates global threat intelligence, correlates it with local signals, and delivers actionable briefings to security teams. It highlights emerging risks, prioritizes vulnerabilities, and recommends remediation based on organizational context. This agent empowers teams with timely, relevant insights to anticipate and counter evolving threats.

Marketplace Integration and Custom Agent Creation

Organizations can leverage the Security Store to discover, deploy, and manage agents tailored to their specific needs. No-code tools facilitate custom agent creation, enabling rapid automation of unique workflows and seamless integration with existing security infrastructure.

Getting Started

To deploy Security Copilot Agents across the enterprise:

Check Licensing: Ensure you have the required subscriptions and SCUs provisioned.
Enable Agents: Use product portals to activate agents and configure settings.
Integrate Across Products: Link agents for enhanced threat detection, compliance, and automated response.
Monitor and Optimize: Use dashboards and reports to track effectiveness and refine policies.

About the Author: Hi! Jacques “Jack” here, Microsoft Technical Trainer. As a technical trainer, I’ve seen firsthand how Security Copilot Agents accelerate secure modernization and empower teams to stay ahead of threats. Whether you’re optimizing identity protection, automating phishing triage, or streamlining endpoint remediation, these agents are your AI-powered allies in building a resilient security posture. #MicrosoftLearn #SkilledByMTT #MTTBloggingGroup

Step-by-Step Guide: Integrating Microsoft Purview with Azure Databricks and Microsoft Fabric
Co-authored by: aryananmol, laurenkirkwood and mmanley

This article provides practical guidance on setup, cost considerations, and integration steps for Azure Databricks and Microsoft Fabric to help organizations plan for building a strong data governance framework. It outlines how Microsoft Purview can unify governance efforts across cloud platforms, enabling consistent policy enforcement, metadata management, and lineage tracking. The content is tailored for architects and data leaders seeking to implement governance in scalable, hybrid environments. Note: this article focuses mainly on the data governance features of Microsoft Purview.

Why Microsoft Purview

Microsoft Purview enables organizations to discover, catalog, and manage data across environments with clarity and control. Automated scanning and classification build a unified view of your data estate enriched with metadata, lineage, and sensitivity labels, and the Unified Catalog provides business-friendly search and governance constructs like domains, data products, glossary terms, and data quality. Note: Microsoft Purview Unified Catalog is being rolled out globally, with availability across multiple Microsoft Entra tenant regions; this page lists supported regions, availability dates, and deployment plans for the Unified Catalog service: Unified Catalog Supported Regions.

Understanding Data Governance Feature Costs in Purview

Under the classic model, Data Map (Classic), users pay for an “always-on” Data Map capacity and scanning compute. In the new model, those infrastructure costs are subsumed into the consumption meters, meaning there are no direct charges for metadata storage or scanning jobs when using the Unified Catalog (Enterprise tier). Essentially, Microsoft stopped billing separately for the underlying data map and scan vCore-hours once you opt into the new model or start fresh with it. You only incur charges when you govern assets or run data processing tasks.
This makes costs more predictable and tied to governance value: you can scan as much as needed to populate the catalog without worrying about scan fees, and then pay only for the assets you actively manage (“govern”) and any data quality processes you execute. In summary, Purview Enterprise’s pricing is usage-based and divided into two primary areas: (1) Governed Assets and (2) Data Processing (measured in Data Governance Processing Units, DGPUs).

Plan for Governance

Microsoft Purview’s data governance framework is built on two core components: Data Map and Unified Catalog. The Data Map acts as the technical foundation, storing metadata about assets discovered through scans across your data estate. It inventories sources and organizes them into collections and domains for technical administration. The Unified Catalog sits on top as the business-facing layer, leveraging the Data Map’s metadata to create a curated marketplace of data products, glossary terms, and governance domains for data consumers and stewards. Before onboarding sources, align the Unified Catalog (business-facing) and Data Map (technical inventory) and define roles, domains, and collections so ownership and access boundaries are clear. This documentation covers roles and permissions in Purview: Permissions in the Microsoft Purview portal | Microsoft Learn. The image above illustrates the relationship between the primary data governance solutions, Unified Catalog and Data Map, and the permissions granted by the roles for each solution.

Considerations and Steps for Setting Up Purview

Step 1: Create a Purview account. In the Azure portal, use the search bar at the top to navigate to Microsoft Purview accounts. Once there, click “Create”.

Step 2: Click Next: Configuration and follow the wizard, completing the necessary fields, including information on networking, configurations, and tags. Then click Review + Create to create your Purview account.
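The portal wizard above can also be scripted. The following is a sketch using the Azure CLI purview extension, which is not covered in the original steps; the resource group, account name, and region are illustrative placeholders, and you should verify the current flag names with az purview account create --help before relying on them.

```powershell
# Requires the Azure CLI 'purview' extension (one-time install)
az extension add --name purview

# Create a Purview account; all names and the region are illustrative placeholders
az purview account create `
    --resource-group "rg-data-governance" `
    --account-name "contoso-purview" `
    --location "westus2"
```

Scripting account creation is useful when you need repeatable deployments across environments, for example as part of infrastructure-as-code pipelines.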
Consideration - private networking: use private endpoints to secure Unified Catalog/Data Map access and scan traffic; follow the new platform private endpoints guidance in the Microsoft Purview portal or migrate classic endpoints.

Once your Purview account is created, you’ll want to set up and manage your organization’s governance strategy to ensure that your data is classified and managed according to the specific lifecycle guidelines you set. Note: follow the steps in this guide to set up Microsoft Purview Data Lifecycle Management: Data retention policy, labeling, and records management.

Data Map Best Practices

Design your collections hierarchy to align with organizational strategy, such as by geography, business function, or data domain. Register each data source only once per Purview account to avoid conflicting access controls. If multiple teams consume the same source, register it at a parent collection and create scans under subcollections for visibility. The image above illustrates a recommended approach for structuring your Purview Data Map.

Why Collection Structure Matters

A well-structured Data Map strategy, including a clearly defined hierarchy of collections and domains, is critical because the Data Map serves as the metadata backbone for Microsoft Purview. It underpins the Unified Catalog, enabling consistent governance, role-based access control, and discoverability across the enterprise. Designing this hierarchy thoughtfully ensures scalability, simplifies permissions management, and provides a solid foundation for implementing enterprise-wide data governance.

Purview Integration with Azure Databricks

Databricks Workspace Structure

In Azure Databricks, each region supports a single Unity Catalog metastore, which is shared across all workspaces within that region. This centralized architecture enables consistent data governance, simplifies access control, and facilitates seamless data sharing across teams.
As an administrator, you can scan one workspace in the region using Microsoft Purview to discover and classify data managed by Unity Catalog, since the metastore governs all associated workspaces in a region. If your organization operates across multiple regions and utilizes cross-region data sharing, please review the consideration and workaround outlined below to ensure proper configuration and governance. Follow the prerequisites here before you register your workspace: Prerequisites to Connect and manage Azure Databricks Unity Catalog in Microsoft Purview.

Steps to Register a Databricks Workspace

Step 1: In the Microsoft Purview portal, navigate to the Data Map section from the left-hand menu. Select Data Sources. Click on Register to begin the process of adding your Databricks workspace.

Step 2: Note that there are two Databricks data sources; please review the documentation here for the differences in capability: Connect to and manage Azure Databricks Unity Catalog in Microsoft Purview | Microsoft Learn. You can choose either source based on your organization’s needs; “Azure Databricks Unity Catalog” is recommended.

Step 3: Register your workspace. Here are the steps to register your data source: Steps to Register an Azure Databricks workspace in Microsoft Purview.

Step 4: Initiate a scan for your workspace, following the steps here: Steps to scan Azure Databricks to automatically identify assets. Once you have entered the required information, test your connection and click Continue to set up a scheduled scan trigger.

Step 5: For the scan trigger, choose whether to set up a schedule or run the scan once, according to your business needs.

Step 6: From the left pane, select Data Map and select the data source for your workspace. You can view a list of existing scans on that data source under Recent scans, or you can view all scans on the Scans tab. Review further options here: Manage and Review your Scans.
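One scenario worth scripting for is the delta-sharing duplication covered in the considerations below: delta-shared tables carry the keyword "sharing" in their fully qualified name (FQN), which makes them easy to flag before you add assets to a data product. A minimal PowerShell sketch; the FQN values below are illustrative, not real scan output:

```powershell
# Illustrative FQNs as they might appear for scanned Databricks assets
# (real values come from your Purview scan results)
$assetFqns = @(
    "databricks://adb-111.azuredatabricks.net/catalog1/sales/orders",
    "databricks://adb-222.azuredatabricks.net/sharing/catalog1/sales/orders",
    "databricks://adb-111.azuredatabricks.net/catalog1/sales/customers"
)

# Delta-shared tables contain the "sharing" keyword in their FQN
$deltaShared = $assetFqns | Where-Object { $_ -match 'sharing' }

# Review these before adding assets to a data product, to avoid duplicates
$deltaShared
```

The same match could be applied as a saved filter or tag rule so that curators spot delta-shared copies at a glance.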
You can review your scanned data sources, history and details here: Navigate to scan run history for a given scan.

Limitation: The “Azure Databricks Unity Catalog” data source in Microsoft Purview does not currently support connection via a managed VNet. As a workaround, the product team recommends using the “Azure Databricks Unity Catalog” source in combination with a self-hosted integration runtime (SHIR) to enable scanning and metadata ingestion. You can find setup guidance here: Create and manage SHIR in Microsoft Purview; Choose the right integration runtime configuration. Scoped scan support for Unity Catalog is expected to enter private preview soon. You can sign up here: https://aka.ms/dbxpreview.

Considerations: If you have delta-shared Databricks-to-Databricks workspaces, you may see duplicated data assets if you are scanning both workspaces. The workaround for this scenario: as you add tables/data assets to a data product for governance in Microsoft Purview, identify the duplicated tables/data assets using their fully qualified name (FQN). To make identification easier, look for the keyword “sharing” in the FQN, which indicates a delta-shared table. You can also apply tags to these tables for quicker filtering and selection.

Purview Integration with Microsoft Fabric

Understanding Fabric Integration

Connect Cross-Tenant: This refers to integrating Microsoft Fabric resources across different Microsoft Entra tenants. It enables organizations to share data, reports, and workloads securely between separate tenants, often used in multi-organization collaborations or partner ecosystems. Key considerations include authentication, data governance, and compliance with cross-tenant policies.

Connect In-Same-Tenant: This involves connecting Fabric resources within the same Microsoft Entra tenant.
It simplifies integration by leveraging shared identity and governance models, allowing seamless access to data, reports, and pipelines across different workspaces or departments under the same organizational umbrella.

Requirements: An Azure account with an active subscription (create an account for free) and an active Microsoft Purview account. Authentication is supported via Managed Identity, or Delegated Authentication and Service Principal.

Steps to Register Fabric Tenant

Step 1: In the Microsoft Purview portal, navigate to the Data Map section from the left-hand menu. Select Data Sources. Click on Register to begin the process of adding your Fabric tenant (which also includes Power BI).

Step 2: Add a data source name and keep the Tenant ID as the default (auto-populated). Microsoft Fabric and Microsoft Purview should be in the same tenant.

Step 3: Enter a scan name and enable/disable scanning for personal workspaces. Under Credentials, you will notice an automatically created identity for authenticating the Purview account. Note: If your Purview account is behind a private network, follow the guidelines here: Connect to your Microsoft Fabric tenant in same tenant as Microsoft Purview.

Step 4: From Microsoft Fabric, open Settings, click on Tenant Settings, and within the Admin API Settings section enable “Service Principals can access read-only admin APIs”, “Enhanced admin API responses within detailed metadata” and “Enhance Admin API responses with DAX and Mashup Expressions”.

Step 5: Create a group, add the Purview account's managed identity to the group, and add the group under the “Service Principals can access read-only admin APIs” section of your tenant settings inside Microsoft Fabric.

Step 6: Test your connection and set up the scope for your scan. Select the required workspaces, click Continue, and automate a scan trigger.

Step 7: From the left pane, select Data Map and select the data source for your workspace.
You can view a list of existing scans on that data source under Recent scans, or you can view all scans on the Scans tab. Review further options here: Manage and Review your Scans. You can review your scanned data sources, history and details here: Navigate to scan run history for a given scan.

Why Customers Love Purview

Kern County unified its approach to securing and governing data with Microsoft Purview, ensuring consistent compliance and streamlined data management across departments. EY accelerated secure AI development by leveraging the Microsoft Purview SDK, enabling robust data governance and privacy controls for advanced analytics and AI initiatives. Prince William County Public Schools created a more cyber-safe classroom environment with Microsoft Purview, protecting sensitive student information while supporting digital learning. FSA (Food Standards Agency) helps keep the UK food supply safe using Microsoft Purview Records Management, ensuring regulatory compliance and safeguarding critical data assets.

Conclusion

Purview’s Unified Catalog centralizes governance across Discovery, Catalog Management, and Health Management. The governance features in Purview allow organizations to confidently answer critical questions: What data do we have? Where did it come from? Who is responsible for it? Is it secure and compliant? Can we trust its quality? Microsoft Purview, when integrated with Azure Databricks and Microsoft Fabric, provides a unified approach to cataloging, classifying, and governing data across diverse environments. By leveraging Purview’s Unified Catalog, Data Map, and advanced governance features, organizations can achieve end-to-end visibility, enforce consistent policies, and improve data quality. You might ask, why does data quality matter? Well, in today’s world, data is the new gold.
References

Microsoft Purview | Microsoft Learn
Pricing - Microsoft Purview | Microsoft Azure
Use Microsoft Purview to Govern Microsoft Fabric
Connect to and manage Azure Databricks Unity Catalog in Microsoft Purview

Introducing Microsoft Sentinel graph (Public Preview)
Security is being reengineered for the AI era—moving beyond static, rule-bound controls and after-the-fact response toward platform-led, machine-speed defense. The challenge is clear: fragmented tools, sprawling signals, and legacy architectures that can’t match the velocity and scale of modern attacks. What’s needed is an AI-ready, data-first foundation—one that turns telemetry into a security graph, standardizes access for agents, and coordinates autonomous actions while keeping humans in command of strategy and high-impact investigations. Security teams already center operations on their SIEM for end-to-end visibility, and we’re advancing that foundation by evolving Microsoft Sentinel into both the SIEM and the platform for agentic defense—connecting analytics and context across ecosystems. And today, we announced the general availability of Sentinel data lake and introduced new preview platform capabilities that are built on Sentinel data lake (Figure 1), so protection accelerates to machine speed while analysts do their best work. We are excited to announce the public preview of Microsoft Sentinel graph, a deeply connected map of your digital estate across endpoints, cloud, email, identity, and SaaS apps, enriched with our threat intelligence. Sentinel graph, a core capability of the Sentinel platform, enables defenders and agentic AI to connect the dots and bring deep context quickly, enabling modern defense across pre-breach and post-breach. Starting today, we are delivering new graph-based analytics and interactive visualization capabilities across Microsoft Defender and Microsoft Purview. Attackers think in graphs. For a long time, defenders have been limited to querying and analyzing data in lists, forcing them to think in silos.
With Sentinel graph, defenders and AI can quickly reveal relationships and traversable digital paths to understand blast radius, privilege escalation, and anomalies across large, cloud-scale data sets. By deriving deep contextual insight across their digital estate, SOC teams and their AI agents can stay proactive and resilient. With Sentinel graph-powered experiences in Defender and Purview, defenders can now reason over assets, identities, activities, and threat intelligence to accelerate detection, hunting, investigation, and response.

Incident graph in Defender. The incident graph in the Microsoft Defender portal is now enriched with the ability to analyze the blast radius of an active attack. During an incident investigation, the blast radius analysis quickly evaluates and visualizes the vulnerable paths an attacker could take from a compromised entity to a critical asset. This allows SOC teams to effectively prioritize and focus their attack mitigation and response, saving critical time and limiting impact.

Hunting graph in Defender. Threat hunting often requires connecting disparate pieces of data to uncover hidden paths that attackers exploit to reach your crown jewels. With the new hunting graph, analysts can visually traverse the complex web of relationships between users, devices, and other entities to reveal privileged access paths to critical assets. This graph-powered exploration transforms threat hunting into a proactive mission, shifting security operations from reactive alert handling to proactive hunting and enabling SOC teams to surface vulnerabilities and intercept attacks before they gain momentum.

Data risk graph in Purview Insider Risk Management (IRM). Investigating data leaks and insider risks is challenging when information is scattered across multiple sources.
The data risk graph in IRM offers a unified view across SharePoint and OneDrive, connecting users, assets, and activities. Investigators can see not just what data was leaked, but also the full blast radius of risky user activity. This context helps data security teams triage alerts, understand the impact of incidents, and take targeted actions to prevent future leaks.

Data risk graph in Purview Data Security Investigation (DSI). To truly understand a data breach, you need to follow the trail—tracking files and their activities across every tool and source. The data risk graph does this by automatically combining unified audit logs, Entra audit logs, and threat intelligence, providing invaluable insight. With the power of the data risk graph, data security teams can pinpoint sensitive data access and movement, map potential exfiltration paths, and visualize the users and activities linked to risky files, all in one view.

Getting started

Microsoft Defender: If you already have the Sentinel data lake, the required graph will be auto-provisioned when you sign in to the Defender portal; the hunting graph and incident graph experiences will appear in the Defender portal. New to the data lake? Use the Sentinel data lake onboarding flow to provision the data lake and graph.

Microsoft Purview: Follow the Sentinel data lake onboarding flow to provision the data lake and graph. In Purview Insider Risk Management (IRM), follow the instructions here. In Purview Data Security Investigation (DSI), follow the instructions here.

Reference links: Watch Microsoft Secure; Microsoft Secure news blog; Data lake blog; MCP server blog; ISV blog; Security Store blog; Copilot blog; Microsoft Sentinel—AI-Powered Cloud SIEM | Microsoft Security

Teams Private Channels: Group-Based Compliance Model & Purview eDiscovery Considerations
Microsoft Teams Private Channels are undergoing an architectural change that will affect how your organisation uses Microsoft Purview eDiscovery to hold and discover these messages going forward. In essence, copies of private channel messages will now be stored in the M365 group mailbox, aligning their storage with how standard and shared channels work today. This shift, due to roll out from early October 2025 to December 2025, brings new benefits (like greatly expanded channel limits and meeting support) and has the potential to impact your Purview eDiscovery search and legal hold workflows. In this blog post, we’ll break down what’s changing, what remains the same, and provide you with the information you need to review your own eDiscovery processes when working with private channel messages.

What’s Changing?

Private channel conversation history is moving to a group-based model. Historically, when users posted in a private channel, copies of those messages were stored in the Exchange Online mailbox of each member of the private channel (in a hidden folder). This meant that Microsoft Purview eDiscovery search and hold actions for private channel content had to be scoped to the members' mailboxes, which added complexity. Under the new model rolling out in late 2025, each private channel will get its own dedicated channel mailbox linked to the parent team's M365 group mailbox. In other words, private channel messages will be stored similarly to shared channel messages: the parent team's M365 group mailbox is targeted in eDiscovery searches and holds, instead of the mailboxes of all members of the private channel. Targeting the parent team's M365 group mailbox in a search or a hold will extend to all dedicated channel mailboxes for shared and private channels within the team, as well as any standard channels.
After the transition, any new messages in a private channel will be stored in the channel’s group mailbox, not in users’ mailboxes.

Why the change? This aligns the retention and collection of private channel messages with standard and shared channel messages. Instead of having to include separate data sources depending on the type of Teams channel, eDiscovery practitioners can simply target the team's M365 group mailbox and cover all of its channels, no matter the type. This update also introduces major improvements to private channels themselves, raising the limits on private channels and members and enabling features that were previously missing:

Maximum private channels per team: increasing from 30 to 1000.
Maximum members in a private channel: increasing from 250 to 5000.
Meeting scheduling in private channels: previously not supported, now allowed under the new model.

The table below summarizes the old vs new model for Teams private channel messages:

Aspect | Before (User Mailbox Model) | After (Group Mailbox Model)
Message Storage | Messages copied into each private channel member’s Exchange Online mailbox. | Messages stored in a channel mailbox associated with the parent team's M365 group mailbox.
eDiscovery Search | Had to search private channel members' mailboxes to find channel messages. | Search the parent M365 group mailbox for new private channel messages, and user mailboxes for any messages that were not migrated to the group mailbox.
Legal Hold Placement | Apply hold on private channel members' mailboxes to preserve messages. | Apply hold on the parent M365 group mailbox. Existing holds may need to include both the M365 group mailbox and member mailboxes to cover new messages and messages that were not migrated to the group mailbox.
Things to know about the changes

During the migration of Teams private channel messages to the new group-based model, the process will transfer the latest version of each message from the private channel member’s mailbox to the private channel’s dedicated channel mailbox. However, it’s important to note that this process does not include the migration of held message versions; specifically, any messages that were edited or deleted prior to the migration. These held messages, retained due to a legal hold or retention policy, will remain in the individual user mailboxes where they were originally stored. As such, eDiscovery practitioners should consider, based on their needs, including the user mailboxes in their search and hold scopes.

Legal Holds for Private Channel Content

Before the migration, if you needed to preserve a private channel’s messages, you placed a hold on the mailboxes of each member of the private channel. This ensured each user’s copy of the channel messages was preserved by the hold. Often, eDiscovery practitioners would also place a hold on the M365 group mailbox to hold the messages from standard and shared channels. After the migration, this workflow changes: you will instead place a hold on the M365 group mailbox of the parent team that corresponds to the private channel.

Before migration: It is recommended to update any existing holds that are intended to preserve private channel messages so that they include the parent team's M365 group mailbox in addition to the private channel members’ mailboxes. This ensures continuity, as any new messages (once the channel migrates) will be stored in the group mailbox.

After migration: For any new eDiscovery hold involving a private channel, simply add the parent team's M365 group mailbox to the hold. As previously discussed, eDiscovery practitioners should consider, based on need, whether the hold also needs to include the private channel members' mailboxes due to non-migrated content.
Any private channel messages currently held in the user mailbox will continue to be preserved by the existing hold, but holding any future messages sent post-migration will require a hold placed on the group mailbox.

eDiscovery Search and Collection

Performing searches related to private channel messages will change after the migration:

Before migration: To collect private channel messages, you targeted the private channel members' mailboxes as data sources in the search.

After migration: The private channel messages will be stored in a channel mailbox associated with the parent team's M365 group mailbox. That means you include the team's M365 group mailbox as a data source in your search. As previously discussed, eDiscovery practitioners should consider, based on need, whether the search also needs to include the private channel members' mailboxes due to non-migrated content.

What Isn’t Changing?

It’s important to emphasize that only Teams private channel messages are changing in this rollout. Other content locations in Teams remain as they were, so your existing eDiscovery processes remain unchanged:

Standard channel messages: These are stored in the team's M365 group mailbox. You will continue to place holds on the team's M365 group mailbox for standard channel content and target it in searches for collections.

Shared channel messages: Shared channel messages are stored in a channel mailbox linked to the M365 group mailbox for the team. You continue to place holds and undertake searches by targeting the M365 group mailbox for the team that contains the shared channel.

Teams chats (1:1 or group chats): Teams chats are stored in each user’s Exchange Online mailbox. For eDiscovery, you will continue to search individual user mailboxes for chats and place holds on user mailboxes to preserve chat content.

Files and SharePoint data: Any file shared in a Teams message or uploaded to a SharePoint site associated with a channel remains as it is today.
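To make the scoping rules above concrete, here is a small, hypothetical helper that computes which mailboxes a hold (or search) should target for a migrated private channel, depending on whether non-migrated content in member mailboxes still matters. The function and its names are illustrative only, not part of any Microsoft SDK.

```python
def private_channel_hold_scope(group_mailbox: str,
                               member_mailboxes: list,
                               include_non_migrated: bool = True) -> list:
    """Mailboxes to target for a private channel hold after the migration.

    The parent team's M365 group mailbox is always required, since new
    messages land there. Member mailboxes are added only when previously
    edited or deleted messages -- which are not migrated -- must also
    remain preserved.
    """
    scope = [group_mailbox]
    if include_non_migrated:
        scope.extend(member_mailboxes)
    return scope
```

The same logic applies when building the location list for a search: start from the group mailbox and widen to member mailboxes only when non-migrated content is in scope.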
In conclusion

For more information regarding timelines, refer to the Microsoft Teams blog post “New enhancements in Private Channels in Microsoft Teams unlock their full potential”, as well as checking for updates via the Message Center Post MC1134737.

Introducing Microsoft Security Store
Security is being reengineered for the AI era—moving beyond static, rule-bound controls and after-the-fact response toward platform-led, machine-speed defense. We recognize that defending against modern threats requires the full strength of an ecosystem, combining our unique expertise and shared threat intelligence. But with so many options out there, it’s tough for security professionals to cut through the noise, and even tougher to navigate long procurement cycles and stitch together tools and data before seeing meaningful improvements. That’s why we built Microsoft Security Store - a storefront designed for security professionals to discover, buy, and deploy security SaaS solutions and AI agents from our ecosystem partners such as Darktrace, Illumio, and BlueVoyant. Security SaaS solutions and AI agents on Security Store integrate with Microsoft Security products, including the Sentinel platform, to enhance end-to-end protection. These integrated solutions and agents collaborate intelligently, sharing insights and leveraging AI to enhance critical security tasks like triage, threat hunting, and access management.

In Security Store, you can:

Buy with confidence – Explore solutions and agents that are validated to integrate with Microsoft Security products, so you know they’ll work in your environment. Listings are organized to make it easy for security professionals to find what’s relevant to their needs. For example, you can filter solutions based on how they integrate with your existing Microsoft Security products. You can also browse listings based on their NIST Cybersecurity Framework functions, covering everything from network security to compliance automation — helping you quickly identify which solutions strengthen the areas that matter most to your security posture.

Simplify purchasing – Buy solutions and agents with your existing Microsoft billing account without any additional payment setup.
For Azure benefit-eligible offers, eligible purchases contribute to your cloud consumption commitments. You can also purchase negotiated deals through private offers.

Accelerate time to value – Deploy agents and their dependencies in just a few steps and start getting value from AI in minutes. Partners offer ready-to-use AI agents that can triage alerts at scale, analyze and retrieve investigation insights in real time, and surface posture and detection gaps with actionable recommendations.

A rich ecosystem of solutions and AI agents to elevate security posture

In Security Store, you’ll find solutions covering every corner of cybersecurity—threat protection, data security and governance, identity and device management, and more. To give you a flavor of what is available, here are some of the exciting solutions on the store:

Darktrace’s ActiveAI Security SaaS solution integrates with Microsoft Security to extend self-learning AI across a customer's entire digital estate, helping detect anomalies and stop novel attacks before they spread. The Darktrace Email Analysis Agent helps SOC teams triage and threat hunt suspicious emails by automating detection of risky attachments, links, and user behaviors using Darktrace Self-Learning AI, integrated with Microsoft Defender and Security Copilot. This unified approach highlights anomalous properties and indicators of compromise, enabling proactive threat hunting and faster, more accurate response.

Illumio for Microsoft Sentinel combines Illumio Insights with Microsoft Sentinel data lake and Security Copilot to enhance detection and response to cyber threats. It fuses data from Illumio and all the other sources feeding into Sentinel to deliver a unified view of threats across millions of workloads.
AI-driven breach containment from Illumio gives SOC analysts, incident responders, and threat hunters unified visibility into lateral traffic threats and attack paths across hybrid and multi-cloud environments, to reduce alert fatigue, prioritize threat investigation, and instantly isolate workloads.

Netskope’s Security Service Edge (SSE) platform integrates with Microsoft 365, Defender, Sentinel, Entra and Purview for identity-driven, label-aware protection across cloud, web, and private apps. Netskope's inline controls (SWG, CASB, ZTNA) and advanced DLP, with Entra signals and Conditional Access, provide real-time, context-rich policies based on user, device, and risk. Telemetry and incidents flow into Defender and Sentinel for automated enrichment and response, ensuring unified visibility, faster investigations, and consistent Zero Trust protection for cloud, data, and AI everywhere.

PERFORMANTA Email Analysis Agent automates deep investigations into email threats, analyzing metadata (headers, indicators, attachments) against threat intelligence to expose phishing attempts. Complementing this, the IAM Supervisor Agent triages identity risks by scrutinizing user activity for signs of credential theft, privilege misuse, or unusual behavior. These agents deliver unified, evidence-backed reports directly to you, providing instant clarity and slashing incident response time.

Tanium Autonomous Endpoint Management (AEM) pairs real-time endpoint visibility with AI-driven automation to keep IT environments healthy and secure at scale. Tanium is integrated with the Microsoft Security suite—including Microsoft Sentinel, Defender for Endpoint, Entra ID, Intune, and Security Copilot. Tanium streams current-state telemetry into Microsoft’s security and AI platforms and lets analysts pivot from investigation to remediation without tool switching. Tanium even executes remediation actions from the Sentinel console.
The Tanium Security Triage Agent accelerates alert triage, enabling security teams to make swift, informed decisions using Tanium Threat Response alerts and real-time endpoint data.

Walkthrough of Microsoft Security Store

Now that you’ve seen the types of solutions available in Security Store, let’s walk through how to find the right one for your organization. You can get started by going to the Microsoft Security Store portal. From there, you can search and browse solutions that integrate with Microsoft Security products, including a dedicated section for AI agents—all in one place. If you are using Microsoft Security Copilot, you can also open the store from within Security Copilot to find AI agents - read more here. Solutions are grouped by how they align with industry frameworks like NIST CSF 2.0, making it easier to see which areas of security each one supports. You can also filter by integration type—e.g., Defender, Sentinel, Entra, or Purview—and by compliance certifications to narrow results to what fits your environment. To explore a solution, click into its detail page to view descriptions, screenshots, integration details, and pricing. For AI agents, you’ll also see the tasks they perform, the inputs they require, and the outputs they produce—so you know what to expect before you deploy. Every listing goes through a review process that includes partner verification, security scans on code packages stored in a secure registry to protect against malware, and validation that integrations with Microsoft Security products work as intended. Customers with the right permissions can purchase agents and SaaS solutions directly through Security Store. The process is simple: choose a partner solution or AI agent and complete the purchase in just a few clicks using your existing Microsoft billing account—no new payment setup required.
Qualifying SaaS purchases also count toward your Microsoft Azure Consumption Commitment (MACC), helping accelerate budget approvals while adding the security capabilities your organization needs. Security and IT admins can deploy solutions directly from Security Store in just a few steps through a guided experience. The deployment process automatically provisions the resources each solution needs—such as Security Copilot agents and Microsoft Sentinel data lake notebook jobs—so you don’t have to do so manually. Agents are deployed into Security Copilot, which is built with security in mind, providing controls like granular agent permissions and audit trails, giving admins visibility and governance. Once deployment is complete, your agent is ready to configure and use so you can start applying AI to expand detection coverage, respond faster, and improve operational efficiency. Security and IT admins can view and manage all purchased solutions from the “My Solutions” page and easily navigate to Microsoft Cost Management tools to track spending and manage subscriptions.

Partners: grow your business with Microsoft

For security partners, Security Store opens a powerful new channel to reach customers, monetize differentiated solutions, and grow with Microsoft. We will showcase select solutions across relevant Microsoft Security experiences, starting with Security Copilot, so your offerings appear in the right context for the right audience. You can monetize both SaaS solutions and AI agents through built-in commerce capabilities, while tapping into Microsoft’s go-to-market incentives. For agent builders, it’s even simpler—we handle the entire commerce lifecycle, including billing and entitlement, so you don’t have to build any infrastructure. You focus on embedding your security expertise into the agent, and we take care of the rest to deliver a seamless purchase experience for customers.
Security Store is built on top of Microsoft Marketplace, which means partners publish their solution or agent through the Microsoft Partner Center - the central hub for managing all marketplace offers. From there, create or update your offer with details about how your solution integrates with Microsoft Security so customers can easily discover it in Security Store. Next, upload your deployable package to the Security Store registry, which is encrypted for protection. Then define your license model, terms, and pricing so customers know exactly what to expect. Before your offer goes live, it goes through certification checks that include malware and virus scans, schema validation, and solution validation. These steps help give customers confidence that your solutions meet Microsoft’s integration standards.

Get started today

By creating a storefront optimized for security professionals, we are making it simple to find, buy, and deploy solutions and AI agents that work together. Microsoft Security Store helps you put the right AI-powered tools in place so your team can focus on what matters most—defending against attackers with speed and confidence. Get started today by visiting Microsoft Security Store. If you’re a partner looking to grow your business with Microsoft, start by visiting Microsoft Security Store - Partner with Microsoft to become a partner. Partners can list their solution or agent if their solution has a qualifying integration with Microsoft Security products, such as a Sentinel connector or Security Copilot agent, or another qualifying MISA solution integration. You can learn more about qualifying integrations and the listing process in our documentation here.

Introducing eDiscovery Graph API Standard and Enhancements to Premium APIs
We have been busy working to enable organisations that leverage the Microsoft Purview eDiscovery Graph APIs to benefit from the enhancements in the new modern experience for eDiscovery. I am pleased to share that the APIs have now been updated with additional parameters, enabling organisations to benefit from the following features already present in the modern experience within the Purview portal:

Ability to control the export package structure and item naming convention
Trigger advanced indexing as part of the Statistics, Add to Review and Export jobs
Enables for the first time the ability to trigger HTML transcription of Teams, Viva and Copilot interactions when adding to a review set
Benefit from the new statistic options such as Include Categories and Include Keyword Report
More granular control of the number of versions collected of modern attachments and documents collected directly from OneDrive and SharePoint

These changes were communicated as part of the M365 Message Center Post MC1115305. This change involved the beta version of the API calls being promoted into the v1.0 endpoint of the Graph API. The following v1.0 API calls were updated as part of this work:

Search Estimate Statistics – ediscoverySearch: estimateStatistics
Search Export Report - ediscoverySearch: exportReport
Search Export Result - ediscoverySearch: exportResult
Search Add to ReviewSet – ediscoveryReviewSet: addToReviewSet
ReviewSet Export - ediscoveryReviewSet: export

The majority of this blog post walks through the updates to each of these APIs and explains how to update your calls to maintain a consistent outcome (and benefit from the new functionality). If you are new to the Microsoft Purview eDiscovery APIs, you can refer to my previous blog post on how to get started with them.
Getting started with the eDiscovery APIs | Microsoft Community Hub

First up though, availability of the Graph API for E3 customers

We are excited to announce that starting September 9, 2025, Microsoft will launch the eDiscovery Graph API Standard, a new offering designed to empower Microsoft 365 E3 customers with secure, automated data export capabilities. The new eDiscovery Graph API offers scalable, automated exports with secure credential management, and improved performance and reliability for Microsoft 365 E3 customers. The new API enables automation of the search, collect, hold, and export flow from Microsoft Purview eDiscovery. While it doesn’t include premium features like Teams/Yammer conversations or advanced indexing (available only with the Premium Graph APIs), it delivers meaningful value for Microsoft 365 E3 customers needing to automate structured legal exports.

Key capabilities:

Export from Exchange, SharePoint, Teams, Viva Engage and OneDrive for Business
Case, search, hold and export management
Integration with partner/vendor workflows
Support for automation that takes advantage of new features within the modern user experience

Pricing & Access

Microsoft will offer 50 GB of included export volume per tenant per month, with additional usage billed at $10/GB—a price point that balances customer value, sustainability, and market competitiveness. The Graph API Standard will be available in public preview starting September 9. For more details on pay-as-you-go features in eDiscovery and Purview, refer to the following links:

Billing in eDiscovery | Microsoft Learn
Enable Microsoft Purview pay-as-you-go features via subscription | Microsoft Learn

Wait, but what about the custodian and noncustodial locations workflow in eDiscovery Classic (Premium)?

As you are probably aware, in the modern user experience for eDiscovery there have been some changes to the Data Sources tab and how it is used in the workflow.
Typically, organisations leveraging the Microsoft Purview eDiscovery APIs would previously have used the custodian and noncustodial data source APIs to add the relevant data sources to the case:

- ediscoveryCustodian resource type - Microsoft Graph v1.0 | Microsoft Learn
- ediscoveryNoncustodialDataSource resource type - Microsoft Graph v1.0 | Microsoft Learn

Once added via these API calls, the locations would be bound to a search when the search was created. This workflow in the API remains supported for backwards compatibility, including the creation of system-generated case hold policies when applying holds to the locations via these APIs. Organisations can continue to use this approach. However, to simplify your code and workflow, consider using the following API call to add additional sources directly to the search:

Add additional sources - Microsoft Graph v1.0 | Microsoft Learn

Some key things to note if you continue to use the custodian and noncustodial data source APIs in your automation workflow:

- They will not populate the new Data Sources tab in the modern experience for eDiscovery
- They can continue to be queried via the API calls
- Advanced indexing triggered via these APIs has no influence on whether advanced indexing is used in jobs triggered from a search
- Make sure you use the new parameters to trigger advanced indexing when running the Statistics, Add to Review Set and Direct Export jobs

Generating Search Statistics

ediscoverySearch: estimateStatistics

In eDiscovery Premium (Classic) and the previous version of the APIs, generating statistics was a mandatory step before you could progress to either adding the search to a review set or triggering a direct export. With the new modern experience for eDiscovery, this step is completely optional.
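As a sketch of this simplified approach, the following calls add a mailbox and a SharePoint site directly to an existing search via the additionalSources endpoint. The case and search IDs, mailbox address and site URL are placeholders for illustration; adjust them to your tenant.

```powershell
# Add a mailbox directly to an existing search as an additional source
$userSource = @{
    "@odata.type" = "microsoft.graph.security.userSource"
    email         = "adelev@contoso.com"   # placeholder mailbox
} | ConvertTo-Json -Depth 10

$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/additionalSources"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $userSource

# Add a SharePoint site directly to the same search
$siteSource = @{
    "@odata.type" = "microsoft.graph.security.siteSource"
    site          = @{ webUrl = "https://contoso.sharepoint.com/sites/LegalHold" }  # placeholder site
} | ConvertTo-Json -Depth 10

Invoke-MgGraphRequest -Method Post -Uri $uri -Body $siteSource
```

Because the sources sit directly on the search, there is no need to create custodians or bind locations before running the search, the statistics job or an export.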
For organizations that previously generated search statistics but never checked or used the results before adding the search to a review set or triggering a direct export job, this step can now be skipped. If organizations do want to continue to generate statistics, calling the updated API with the same parameters will continue to generate statistics for the search. An example of a previous call would look as follows:

POST /security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/estimateStatistics

Historically this API didn't require a request body. With the APIs now natively working with the modern experience for eDiscovery, the API call now supports a request body, enabling you to benefit from the new statistics options. Details on these new options can be found in the links below:

- Create a search for a case in eDiscovery | Microsoft Learn
- Evaluate and refine search results in eDiscovery | Microsoft Learn

If a search is run without a request body it will still generate the following information:

- Total matches and volume
- Number of locations searched and the number of locations with hits
- Number of data sources searched and the number of data sources with hits
- The top five data sources that make up the most search hits matching your query
- Hit count by location type (mailbox versus site)

As the API now works natively with the modern experience for eDiscovery, you can optionally include a request body to pass the statisticsOptions parameter in the POST API call. With the changes to how advanced indexing works within the new UX and the additional reporting categories available, you can use the statisticsOptions parameter to trigger the generate statistics job with the additional options from the modern UX. The values you can include are detailed in the table below.
| Property | Option from Portal |
| --- | --- |
| includeRefiners | Include categories: Refine your view to include people, sensitive information types, item types, and errors. |
| includeQueryStats | Include query keywords report: Assess keyword relevance for different parts of your search query. |
| includeUnindexedStats | Include partially indexed items: We'll provide details about items that weren't fully indexed. These partially indexed items might be unsearchable or partially searchable. |
| advancedIndexing | Perform advanced indexing on partially indexed items: We'll try to reindex a sample of partially indexed items to determine whether they match your query. After running the query, check the Statistics page to review information about partially indexed items. Note: can only be used if includeUnindexedStats is also included. |
| locationsWithoutHits | Exclude partially indexed items in locations without search hits: Ignore partially indexed items in locations with no matches to the search query. Checking this setting will only return partially indexed items in locations where there is already at least one hit. Note: can only be used if includeUnindexedStats is also included. |

In eDiscovery Premium (Classic), advanced indexing took place when a custodian or non-custodial data location was added to the Data Sources tab. This meant that when you triggered the estimate statistics call on the search, it would include results from both the native Exchange and SharePoint indexes and the advanced index. In the modern experience for eDiscovery, advanced indexing runs as part of the job; however, it must be selected as an option on the job. Note that not all searches will benefit from advanced indexing. One example would be a simple date range search on a mailbox or SharePoint site, as this will still hit partially indexed items (even partially indexed email and SharePoint file items have date metadata in the native indexes).
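If none of the new options are needed, the job can still be triggered exactly as before, with no request body. A minimal sketch using the Microsoft Graph PowerShell module, assuming $caseID and $searchID hold the IDs of an existing case and search:

```powershell
# Trigger statistics generation with the default options (no request body)
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/estimateStatistics"
Invoke-MgGraphRequest -Method Post -Uri $uri
```

This produces the default statistics listed above (total matches and volume, locations and data sources with hits, and so on) without refiners, keyword reports or advanced indexing.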
The following example uses the Microsoft Graph PowerShell module and passes the new statisticsOptions parameter to the POST call, selecting all available options.

```powershell
# Generate estimates for the newly created search
$statParams = @{
    statisticsOptions = "includeRefiners,includeQueryStats,includeUnindexedStats,advancedIndexing,locationsWithoutHits"
}
$params = $statParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/estimateStatistics"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
Write-Host "Estimate statistics generation triggered for search ID: $searchID"
```

Once run, it will create a generate statistics job with the additional options selected.

Direct Export – Report

ediscoverySearch: exportReport

This API enables you to generate an item report directly from a search, without taking the data into a review set or exporting the items that match the search. With the APIs now natively working with the modern experience for eDiscovery, new parameters have been added to the request body, as well as new values for existing parameters. The new parameters are as follows:

- cloudAttachmentVersion: The versions of cloud attachments to include in messages (e.g. latest, latest 10, latest 100 or all). This controls how many versions of a file are collected when a cloud attachment is contained within an email, Teams or Viva Engage message. The version that was shared is also always returned.
- documentVersion: The versions of files in SharePoint to include (e.g. latest, latest 10, latest 100 or all). This controls how many versions of a file are collected when targeting a SharePoint or OneDrive site directly in the search.
These new parameters reflect the changes made in the modern experience for eDiscovery, which give eDiscovery managers more granular control to apply different collection options based on where the SharePoint item was collected from (e.g. directly from a SharePoint site vs a cloud attachment link included in an email). Within eDiscovery Premium (Classic), the All Document Versions option applied both to SharePoint and OneDrive files collected directly from SharePoint and to any cloud attachments contained within email, Teams and Viva Engage messages.

Historically for this API, within the additionalOptions parameter you could include the allDocumentVersions value to trigger the collection of all versions of any file stored in SharePoint and OneDrive. With the APIs now natively working with the modern experience for eDiscovery, the allDocumentVersions value can still be included in the additionalOptions parameter, but it will only apply to files collected directly from a SharePoint or OneDrive site. It will not influence any cloud attachments included in email, Teams and Viva Engage messages. To collect additional versions of cloud attachments, use the cloudAttachmentVersion parameter to control the number of versions that are included. Also consider moving away from the allDocumentVersions value in the additionalOptions parameter and switching to the new documentVersion parameter.

As described earlier, to benefit from advanced indexing in the modern experience for eDiscovery, you must trigger advanced indexing as part of the direct export job. Within the portal, to include partially indexed items and run advanced indexing you would make the following selections. To achieve this via the API call, we need to include the following parameters and values in the request body.
| Parameter | Value | Option from the portal |
| --- | --- | --- |
| additionalOptions | advancedIndexing | Perform advanced indexing on partially indexed items |
| exportCriteria | searchHits, partiallyIndexed | Indexed items that match your search query and partially indexed items |
| exportLocation | responsiveLocations, nonresponsiveLocations | Exclude partially indexed items in locations without search hits |

Finally, in the new modern experience for eDiscovery, more granular control has been introduced to let organisations independently choose to convert Teams, Viva Engage and Copilot interactions into HTML transcripts and to collect up to 12 hours of related conversations when a message matches a search. This is reflected in the job settings by the following options:

- Organize conversations into HTML transcripts
- Include Teams and Viva Engage conversations

In the classic experience this was a single option, titled Teams and Yammer Conversations, that did both actions and was controlled by including the teamsAndYammerConversations value in the additionalOptions parameter. With the APIs now natively working with the modern experience for eDiscovery, the teamsAndYammerConversations value can still be included in the additionalOptions parameter, but it will only trigger the collection of up to 12 hours of related conversations when a message matches a search, without converting the items into HTML transcripts. To convert items into HTML transcripts, we need to include the new htmlTranscripts value in the additionalOptions parameter.

As an example, let's look at the following direct export report job from the portal and use the Microsoft Graph PowerShell module to call the exportReport API with the updated request body.
```powershell
$exportName = "New UX - Direct Export Report"
$exportParams = @{
    displayName            = $exportName
    description            = "Direct export report from the search"
    additionalOptions      = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing"
    exportCriteria         = "searchHits,partiallyIndexed"
    documentVersion        = "recent10"
    cloudAttachmentVersion = "recent10"
    exportLocation         = "responsiveLocations"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportReport"
$exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
```

Direct Export – Results

ediscoverySearch: exportResult - Microsoft Graph v1.0 | Microsoft Learn

This API call enables you to export the items from a search without taking the data into a review set. All the information from the section above on the changes to the exportReport API also applies to this API call. However, with this API call we will actually be exporting the items from the search, not just the report. As such, we need to pass in the request body information on how we want the export package to look.

Previously, with direct export for eDiscovery Premium (Classic), you had three options in the UX and in the API to define the export format:

| Option | Exchange Export Structure | SharePoint / OneDrive Export Structure |
| --- | --- | --- |
| Individual PST files for each mailbox | PST created for each mailbox. The structure of each PST reflects the folders within the mailbox, with emails stored based on their original location in the mailbox. Emails are named based on their subject. | Folder for each site. Within each folder, the structure reflects the SharePoint/OneDrive site, with documents stored based on their original location in the site. Documents are named based on their document name. |
| Individual .msg files for each message | Folder created for each mailbox. Within each folder, the file structure reflects the folders within the mailbox, with emails stored as .msg files based on their original location in the mailbox. Emails are named based on their subject. | As above. |
| Individual .eml files for each message | Folder created for each mailbox. Within each folder, the file structure reflects the folders within the mailbox, with emails stored as .eml files based on their original location in the mailbox. Emails are named based on their subject. | As above. |

Historically with this API, the exportFormat parameter was used to control the desired export format. Three values could be used: pst, msg and eml. This parameter is still relevant but now only controls how email items will be saved: in a PST file, as individual .msg files or as individual .eml files.

Note: The eml export format option is deprecated in the new UX. Going forward you should use either pst or msg.

With the APIs now natively working with the modern experience for eDiscovery, we need to account for the additional flexibility customers have to control the structure of their export package. An example of the options available in the direct export job can be seen below. More information on the export package options and what they control can be found in the following link:

https://learn.microsoft.com/en-gb/purview/edisc-search-export#export-package-options

To support this, new values have been added to the additionalOptions parameter for this API call. These must be included in the request body, otherwise the export structure will be as follows:

| exportFormat value | Exchange Export Structure | SharePoint / OneDrive Export Structure |
| --- | --- | --- |
| pst | PST files created containing data from multiple mailboxes. All emails contained within a single folder within the PST. Emails are named based on an assigned unique identifier (GUID). | One folder for all documents. All documents contained within a single folder. Documents are named based on an assigned unique identifier (GUID). |
| msg | Folder created containing data from all mailboxes. All emails contained within a single folder, stored as .msg files. Emails are named based on an assigned unique identifier (GUID). | As above. |

The new values added to the additionalOptions parameter are as follows. They control the export package structure for both Exchange and SharePoint/OneDrive items.

| Property | Option from Portal |
| --- | --- |
| splitSource | Organize data from different locations into separate folders or PSTs |
| includeFolderAndPath | Include folder and path of the source |
| condensePaths | Condense paths to fit within 259 characters limit |
| friendlyName | Give each item a friendly name |

Organizations are free to mix and match which export options they include in the request body to meet their own organizational requirements. To receive a similar output structure to previously using the pst or msg values in the exportFormat parameter, I would include all of the above values in the additionalOptions parameter.

For example, to generate a direct export where the email items are stored in separate PSTs per mailbox, the structure of the PST files reflects the mailbox, and each item is named per the subject of the email, I would use the Microsoft Graph PowerShell module to call the exportResult API with the updated request body.
```powershell
$exportName = "New UX - DirectExportJob - PST"
$exportParams = @{
    displayName            = $exportName
    description            = "Direct export of items from the search"
    additionalOptions      = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing,includeFolderAndPath,splitSource,condensePaths,friendlyName"
    exportCriteria         = "searchHits,partiallyIndexed"
    documentVersion        = "recent10"
    cloudAttachmentVersion = "recent10"
    exportLocation         = "responsiveLocations"
    exportFormat           = "pst"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportResult"
$exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
```

If I want to export the email items as individual .msg files instead of storing them in PST files, I would use the Microsoft Graph PowerShell module to call the exportResult API with the updated request body.

```powershell
$exportName = "New UX - DirectExportJob - MSG"
$exportParams = @{
    displayName            = $exportName
    description            = "Direct export of items from the search"
    additionalOptions      = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing,includeFolderAndPath,splitSource,condensePaths,friendlyName"
    exportCriteria         = "searchHits,partiallyIndexed"
    documentVersion        = "recent10"
    cloudAttachmentVersion = "recent10"
    exportLocation         = "responsiveLocations"
    exportFormat           = "msg"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportResult"
$exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
```

Add to Review Set

ediscoveryReviewSet: addToReviewSet

This API call enables you to commit the items that match the search to a review set within an eDiscovery case. This enables you to review, tag, redact and filter the items that match the search without exporting the data from the M365 service boundary.
Historically, this API call was more limited compared to triggering the job via the eDiscovery Premium (Classic) UI. With the APIs now natively working with the modern experience for eDiscovery, organizations can make use of the enhancements in the modern UX and have greater flexibility in selecting the options relevant to their requirements. There is a lot of overlap with the previous sections, specifically the Direct Export – Report section, on what updates are required to benefit from the updated API. They are as follows:

- Controlling the number of versions of SharePoint and OneDrive documents added to the review set via the new cloudAttachmentVersion and documentVersion parameters
- Enabling organizations to trigger the advanced indexing of partially indexed items during the add to review set job via new values added to existing parameters

However, there are some nuances in the parameter names and values for this specific API call compared to the exportReport API. For example, with this API call we use the additionalDataOptions parameter as opposed to the additionalOptions parameter. As with the exportReport and exportResult APIs, the new parameters to control the number of versions of SharePoint and OneDrive documents added to the review set are as follows:

- cloudAttachmentVersion: The versions of cloud attachments to include in messages (e.g. latest, latest 10, latest 100 or all). This controls how many versions of a file are collected when a cloud attachment is contained within an email, Teams or Viva Engage message. The version that was shared is also always returned.
- documentVersion: The versions of files in SharePoint to include (e.g. latest, latest 10, latest 100 or all). This controls how many versions of a file are collected when targeting a SharePoint or OneDrive site directly in the search.
Historically for this API call, within the additionalDataOptions parameter you could include the allVersions value to trigger the collection of all versions of any file stored in SharePoint and OneDrive. With the APIs now natively working with the modern experience for eDiscovery, the allVersions value can still be included in the additionalDataOptions parameter, but it will only apply to files collected directly from a SharePoint or OneDrive site. It will not influence any cloud attachments included in email, Teams and Viva Engage messages. To collect additional versions of cloud attachments, use the cloudAttachmentVersion parameter to control the number of versions that are included. Also consider moving away from the allVersions value in the additionalDataOptions parameter and switching to the new documentVersion parameter.

To benefit from advanced indexing in the modern experience for eDiscovery, you must trigger advanced indexing as part of the add to review set job. Within the portal, to include partially indexed items and run advanced indexing you would make the following selections. To achieve this via the API call, we need to include the following parameters and values in the request body.

| Parameter | Value | Option from the portal |
| --- | --- | --- |
| additionalDataOptions | advancedIndexing | Perform advanced indexing on partially indexed items |
| itemsToInclude | searchHits, partiallyIndexed | Indexed items that match your search query and partially indexed items |
| additionalDataOptions | locationsWithoutHits | Exclude partially indexed items in locations without search hits |

Historically, the API call didn't support the add to review set job options to convert Teams, Viva Engage and Copilot interactions into HTML transcripts and to collect up to 12 hours of related conversations when a message matches a search.
With the APIs now natively working with the modern experience for eDiscovery, this is now possible through the new htmlTranscripts and messageConversationExpansion values in the additionalDataOptions parameter. As an example, let's look at the following add to review set job from the portal and use the Microsoft Graph PowerShell module to invoke the addToReviewSet API call with the updated request body.

```powershell
$commitParams = @{
    search = @{
        id = $searchID
    }
    additionalDataOptions  = "linkedFiles,advancedIndexing,htmlTranscripts,messageConversationExpansion,locationsWithoutHits"
    cloudAttachmentVersion = "latest"
    documentVersion        = "latest"
    itemsToInclude         = "searchHits,partiallyIndexed"
}
$params = $commitParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/addToReviewSet"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
```

Export from Review Set

ediscoveryReviewSet: export

This API call enables you to export items from a review set within an eDiscovery case. Historically with this API, the exportStructure parameter was used to control the desired export format. Two values could be used: directory and pst. This parameter has been updated to include a new value of msg.

Note: The directory value is deprecated in the new UX but remains available in v1.0 of the API call for backwards compatibility. Going forward you should use msg alongside the new exportOptions values.

The exportStructure parameter now only controls how email items are saved: within PST files or as individual .msg files. With the APIs now natively working with the modern experience for eDiscovery, we need to account for the additional flexibility customers have to control the structure of their export package. An example of the options available in the export job can be seen below.
As with the exportResult API call for direct export, new values have been added to the exportOptions parameter for this API call. The new values are as follows. They control the export package structure for both Exchange and SharePoint/OneDrive items.

| Property | Option from Portal |
| --- | --- |
| splitSource | Organize data from different locations into separate folders or PSTs |
| includeFolderAndPath | Include folder and path of the source |
| condensePaths | Condense paths to fit within 259 characters limit |
| friendlyName | Give each item a friendly name |

Organizations are free to mix and match which export options they include in the request body to meet their own organizational requirements. To receive an equivalent output structure to previously using the pst value in the exportStructure parameter, I would include all of the above values in the exportOptions parameter within the request body. An example using the Microsoft Graph PowerShell module can be found below.

```powershell
$exportName = "ReviewSetExport - PST"
$exportParams = @{
    outputName      = $exportName
    description     = "Exporting all items from the review set"
    exportOptions   = "originalFiles,includeFolderAndPath,splitSource,condensePaths,friendlyName"
    exportStructure = "pst"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/export"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
```

To receive an equivalent output structure to previously using the directory value in the exportStructure parameter, I would instead use the msg value within the request body. As the condensed directory structure format exports all items into a single folder, each named with a uniquely assigned identifier, I do not need to include the new values added to the exportOptions parameter.
An example using the Microsoft Graph PowerShell module can be found below.

```powershell
$exportName = "ReviewSetExport - MSG"
$exportParams = @{
    outputName      = $exportName
    description     = "Exporting all items from the review set"
    exportOptions   = "originalFiles"
    exportStructure = "msg"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/export"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params
```

Continuing to use the directory value in exportStructure will produce the same output as if msg was used.

Wrap Up

Thank you for your time reading through this post. Hopefully you are now equipped with the information needed to make the most of the new modern experience for eDiscovery when making your Graph API calls.