Microsoft Purview Data Governance - Authoring Custom Data Quality rules using expression languages
The cost of poor-quality data runs into millions of dollars in direct losses. When indirect costs, such as missed opportunities, are included, the total impact is many times higher. Poor data quality also creates significant societal costs. It can lead customers to pay higher prices for goods and services and force citizens to bear higher taxes due to inefficiencies and errors. In critical domains, the consequences can be severe. Defective or inaccurate data can result in injury or loss of life, for example through medication errors or incorrect medical procedures, especially as healthcare increasingly relies on data- and AI-driven decision-making. Students may be unfairly denied admission to universities because of errors in entrance exam scoring. Consumers may purchase unsafe or harmful food products if nutritional labels are inaccurate or misleading. Research and industry measurements show that 20–35 percent of an organization's operating revenue is often wasted on recovering from process failures, data defects, information scrap, and rework caused by poor data quality (Larry P. English, Information Quality Applied).

Data Quality Rules

To maintain high-quality data, organizations must continuously measure and monitor data quality and understand the negative impact of poor-quality data on their specific use cases. Data quality rules play a critical role in objectively measuring, enforcing, and quantifying data quality, enabling organizations to improve trust, reduce risk, and maximize the value of their data assets.

Data Quality (DQ) rules define how data should be structured, related, constrained, and validated so it can be trusted for operational, analytical, and AI use cases. They are essential guidelines that organizations establish to ensure the accuracy, consistency, and completeness of their data. These rules fall into four major categories: Business Entity rules, Business Attribute rules, Data Dependency rules, and Data Validity rules (Ref: Informit.com/articles).

Business Entity Rules

These rules ensure that core business objects (such as Customer, Order, Account, or Product) are well-defined and correctly related. Business entity rules prevent duplicate records, broken relationships, and incomplete business processes.

- Uniqueness: Every entity instance must be uniquely identifiable. Example: each customer must have a unique Customer ID that is never NULL; duplicate customer records indicate poor data quality.
- Cardinality: Defines how many instances of one entity can relate to another. Example: one customer can place many orders (one-to-many), but an order belongs to exactly one customer.
- Optionality: Defines whether a relationship is mandatory or optional. Example: an order must be linked to a customer (mandatory), but a customer may exist without having placed any orders (optional).

Business Attribute Rules

These rules focus on individual data elements (columns/fields) within business entities. Attribute rules ensure consistency and interpretability and prevent invalid or meaningless values.

- Data Inheritance: Attributes defined in a supertype must be consistent across subtypes. Example: an Account Number remains the same whether the account is Checking or Savings.
- Data Domains: Attribute values must conform to allowed formats or ranges. Examples: State Code must be one of the 50 U.S. state abbreviations; Age must be between 0 and 120; Date must follow the CCYY/MM/DD format.
Data Dependency Rules

These rules define logical and conditional relationships between entities and attributes. Data dependency rules enforce business logic and prevent contradictory or illogical data states.

- Entity Relationship Dependency: The existence of one relationship depends on another condition. Example: orders cannot be placed for customers with a "Delinquent" status.
- Attribute Dependency: The value of one attribute depends on others. Examples: if Loan Status = "Funded," then Loan Amount > 0 and Funding Date is required; Pay Amount = Hours Worked × Hourly Rate; if Monthly Salary > 0, then Commission Rate must be NULL.

Data Validity Rules

These rules ensure that actual data values are complete, correct, accurate, precise, unique, and consistent. Validity rules make data trustworthy for reporting, regulatory compliance, and AI/ML models.

- Completeness: Required records, relationships, attributes, and values must exist. Example: no NULLs in mandatory fields such as Customer ID or Order Date.
- Correctness & Accuracy: Values must reflect real-world truth and business rules. Example: a customer's credit limit must align with approved financial records.
- Precision: Data must be stored with the required level of detail. Example: interest rates stored to four decimal places if required for calculations.
- Uniqueness: No duplicate records, keys, definitions, or overloaded columns. Example: a "Customer Type Code" column should not mix customer types and shipping methods.
- Consistency: Duplicate or redundant data must match everywhere it appears. Example: a customer address stored in multiple systems must be identical.
- Compliance: PII and sensitive data must be checked and validated, for example personal information such as credit card numbers, passport numbers, national IDs, and bank account numbers.

System Rules

Microsoft Purview Data Quality provides both system (out-of-the-box) rules and custom rules, along with an AI-enabled data quality rule recommendation feature. Together, these capabilities help organizations effectively measure, monitor, and improve data quality by applying the right set of data quality rules. System (out-of-the-box) rules cover the majority of business attribute and data validity scenarios; the list of system rules is illustrated in the screenshot below.

Custom Rules

Custom rules allow you to define validations that evaluate one or more values within a row, enabling complex, context-aware data quality checks tailored to specific business requirements. Custom rules support all four major categories of data quality rules: Business Entity rules, Business Attribute rules, Data Dependency rules, and Data Validity rules. You can use regular expressions, the Azure Data Factory expression language, or SQL expressions to create custom rules.

A Purview Data Quality custom rule has three parts:

- Row expression: A Boolean expression applied to each row that the filter expression approves. If this expression returns true, the row passes; if it returns false, the row fails.
- Filter expression: An optional condition that narrows down the dataset on which the row expression is evaluated. You activate it by selecting the Use filter expression checkbox. This expression returns a Boolean value and is applied to each row: if it returns true, that row is considered for the rule; if it returns false, that row is ignored for the purposes of this rule. The default behavior is to pass all rows, so if you don't specify a filter expression, all rows are considered.
- Null expression: Defines how NULL values should be handled. This expression returns a Boolean that handles cases where data is missing. If the expression returns true, the row expression isn't applied.

Each part of the rule works similarly to existing Microsoft Purview Data Quality conditions. A rule passes a row only if the row expression evaluates to TRUE for the dataset that matches the filter expression, with missing values handled as specified in the null expression.
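To make the three parts concrete, here is a sketch of how a rule similar to the second example below (a positive fare amount and a valid trip distance) might be authored with SQL-style expressions. The column names fare_amount, trip_distance, and payment_type are assumptions for illustration, and the exact functions and operators available depend on the expression language you choose (see the expression references at the end of this article).

-- Filter expression (optional): only evaluate card-paid trips
payment_type = 'CRD'

-- Row expression: the row passes only when both values are valid
fare_amount > 0 AND trip_distance > 0

-- Null expression: when either input is missing, the row expression isn't applied
fare_amount IS NULL OR trip_distance IS NULL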
Examples:

1. Ensure that the location of the salesperson is correct. The Azure Data Factory expression language is used to author this rule.
2. Ensure that "fare Amount" is positive and "trip Distance" is valid. The SQL expression language is used to author this rule.
3. For each trip, check whether the fare is above the average for its payment type. The SQL expression language is used to author this rule.

Together, the four categories of data quality rules listed above:

- Prevent errors at the source
- Enforce business logic
- Improve trust in analytics and AI
- Reduce remediation costs downstream

In short, high-quality data is not accidental—it is enforced through well-defined data quality rules across entities, attributes, relationships, and values.

References

- Create Data Quality Rules in Unified Catalog | Microsoft Learn
- Expression builder in mapping data flows - Azure Data Factory & Azure Synapse | Microsoft Learn
- Expression Functions in the Mapping Data Flow - Azure Data Factory & Azure Synapse | Microsoft Learn
- http://www.informit.com/articles/article.aspx?p=399325&seqNum=3
- Information Quality Applied, Larry P. English

Security as the core primitive - Securing AI agents and apps
This week at Microsoft Ignite, we shared our vision for Microsoft Security: in the agentic era, security must be ambient and autonomous, like the AI it protects. It must be woven into and around everything we build—from silicon to OS, to agents, apps, data, platforms, and clouds—and throughout everything we do. In this blog, we dive deeper into many of the new innovations we are introducing this week to secure AI agents and apps.

As I spend time with our customers and partners, four consistent themes have emerged as the core challenges of securing AI workloads. These are: preventing agent sprawl and access to resources, protecting against data oversharing and data leaks, defending against new AI threats and vulnerabilities, and adhering to evolving regulations. Addressing these challenges holistically requires a coordinated effort across IT, developers, and security leaders, not just within security teams. To enable this, we are introducing several new innovations: Microsoft Agent 365 for IT, Foundry Control Plane in Microsoft Foundry for developers, and the Security Dashboard for AI for security leaders. In addition, we are releasing several new purpose-built capabilities to protect and govern AI apps and agents across Microsoft Defender, Microsoft Entra, and Microsoft Purview.

Observability at every layer of the stack

To facilitate the organization-wide effort that it takes to secure and govern AI agents and apps, IT, developers, and security leaders need observability (security, management, and monitoring) at every level.

IT teams need to enable the development and deployment of any agent in their environment. To ensure the responsible and secure deployment of agents into an organization, IT needs a unified agent registry, the ability to assign an identity to every agent, manage the agent's access to data and resources, and manage the agent's entire lifecycle. In addition, IT needs to be able to assign access to common productivity and collaboration tools, such as email and file storage, and be able to observe their entire agent estate for risks such as over-permissioned agents.

Development teams need to build and test agents, apply security and compliance controls by default, and ensure AI models are evaluated for safety guardrails and security vulnerabilities. Post deployment, development teams must observe agents to ensure they are staying on task, accessing applications and data sources appropriately, and operating within their cost and performance expectations.

Security and compliance teams must ensure overall security of their AI estate, including their AI infrastructure, platforms, data, apps, and agents. They need comprehensive visibility into all their security risks, including agent sprawl and resource access, data oversharing and leaks, AI threats and vulnerabilities, and compliance with global regulations. They want to address these risks by extending the security investments they already own and are familiar with, rather than using siloed or bolt-on tools. These teams can be most effective in delivering trustworthy AI to their organizations if security is natively integrated into the tools and platforms that they use every day, and if those tools and platforms share consistent security primitives such as agent identities from Entra; data security and compliance controls from Purview; and security posture, detections, and protections from Defender.
With the new capabilities being released today, we are delivering observability at every layer of the AI stack, meeting IT, developers, and security teams where they are, in the tools they already use, so they can innovate with confidence.

For IT Teams - Introducing Microsoft Agent 365, the control plane for agents, now in preview

The best infrastructure for managing your agents is the one you already use to manage your users. With Agent 365, organizations can extend familiar tools and policies to confidently deploy and secure agents, without reinventing the wheel. By using the same trusted Microsoft 365 infrastructure, productivity apps, and protections, organizations can now apply consistent and familiar governance and security controls that are purpose-built to protect against agent-specific threats and risks.

Figure: Management and governance of agents across organizations

Microsoft Agent 365 delivers unified agent Registry, Access Control, Visualization, Interoperability, and Security capabilities for your organization. These capabilities work together to help organizations manage agents and drive business value.

- Registry, powered by Microsoft Entra, provides a complete and unified inventory of all the agents deployed and used in your organization, including both Microsoft and third-party agents.
- Access Control allows you to limit the access privileges of your agents to only the resources that they need and protect their access to resources in real time.
- Visualization gives organizations the ability to see what matters most and gain insights through a unified dashboard, advanced analytics, and role-based reporting.
- Interoperability allows agents to access organizational data through Work IQ for added context, and to integrate with Microsoft 365 apps such as Outlook, Word, and Excel so they can create and collaborate alongside users.
- Security enables the proactive detection of vulnerabilities and misconfigurations, protects against common attacks such as prompt injections, prevents agents from processing or leaking sensitive data, and gives organizations the ability to audit agent interactions, assess compliance readiness and policy violations, and recommend controls for evolving regulatory requirements.

Microsoft Agent 365 also includes the Agent 365 SDK, part of Microsoft Agent Framework, which empowers developers and ISVs to build agents on their own AI stack. The SDK enables agents to automatically inherit Microsoft's security and governance protections, such as identity controls, data security policies, and compliance capabilities, without the need for custom integration. For more details on Agent 365, read the blog here.

For Developers - Introducing Microsoft Foundry Control Plane to observe, secure and manage agents, now in preview

Developers are moving fast to bring agents into production, but operating them at scale introduces new challenges and responsibilities. Agents can access tools, take actions, and make decisions in real time, which means development teams must ensure that every agent behaves safely, securely, and consistently. Today, developers need to work across multiple disparate tools to get a holistic picture of the cybersecurity and safety risks that their agents may pose. Once they understand the risk, they then need a unified and simplified way to monitor and manage their entire agent fleet and apply controls and guardrails as needed. Microsoft Foundry provides a unified platform for developers to build, evaluate, and deploy AI apps and agents in a responsible way.
Today we are excited to announce that Foundry Control Plane is available in preview. This enables developers to observe, secure, and manage their agent fleets with built-in security, and centralized governance controls. With this unified approach, developers can now identify risks and correlate disparate signals across their models, agents, and tools; enforce consistent policies and quality gates; and continuously monitor task adherence and runtime risks. Foundry Control Plane is deeply integrated with Microsoft’s security portfolio to provide a ‘secure by design’ foundation for developers. With Microsoft Entra, developers can ensure an agent identity (Agent ID) and access controls are built into every agent, mitigating the risk of unmanaged agents and over permissioned resources. With Microsoft Defender built in, developers gain contextualized alerts and posture recommendations for agents directly within the Foundry Control Plane. This integration proactively prevents configuration and access risks, while also defending agents from runtime threats in real time. Microsoft Purview’s native integration into Foundry Control Plane makes it easy to enable data security and compliance for every Foundry-built application or agent. This allows Purview to discover data security and compliance risks and apply policies to prevent user prompts and AI responses from safety and policy violations. In addition, agent interactions can be logged and searched for compliance and legal audits. This integration of the shared security capabilities, including identity and access, data security and compliance, and threat protection and posture ensures that security is not an afterthought; it’s embedded at every stage of the agent lifecycle, enabling you to start secure and stay secure. For more details, read the blog. For Security Teams - Introducing Security Dashboard for AI - unified risk visibility for CISOs and AI risk leaders, coming soon AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions across their entire AI estate, spanning AI platforms, apps, and agents. 90% of security professionals, including CISOs, report that their responsibilities have expanded to include data governance and AI oversight within the past year. 1 At the same time, 86% of risk managers say disconnected data and systems lead to duplicated efforts and gaps in risk coverage. 2 To address these needs, we are excited to introduce the Security Dashboard for AI. This serves as a unified dashboard that aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview. This unified dashboard allows CISOs and AI risk leaders to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. For example, you can see your full AI inventory and get visibility into a quarantined agent, flagged for high data risk due to oversharing sensitive information in Purview. The dashboard then correlates that signal with identity insights from Entra and threat protection alerts from Defender to provide a complete picture of exposure. From there, you can delegate tasks to the appropriate teams to enforce policies and remediate issues quickly. 
With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, there's nothing new to buy. If you're already using Microsoft security products to secure AI, you're already a Security Dashboard for AI customer.

Figure 5: Security Dashboard for AI provides CISOs and AI risk leaders with a unified view of their AI risk by bringing together their AI inventory, AI risk, and security recommendations to strengthen overall posture

Together, these innovations deliver observability and security across IT, development, and security teams, powered by Microsoft's shared security capabilities. With Microsoft Agent 365, IT teams can manage and secure agents alongside users. Foundry Control Plane gives developers unified governance and lifecycle controls for agent fleets. Security Dashboard for AI provides CISOs and AI risk leaders with a consolidated view of AI risks across platforms, apps, and agents.

Added innovation to secure and govern your AI workloads

In addition to the IT, developer, and security leader-focused innovations outlined above, we continue to accelerate our pace of innovation in Microsoft Entra, Microsoft Purview, and Microsoft Defender to address the most pressing needs for securing and governing your AI workloads. These needs are:

- Manage agent sprawl and resource access, e.g. managing agent identity, access to resources, and permissions lifecycle at scale
- Prevent data oversharing and leaks, e.g. protecting sensitive information shared in prompts, responses, and agent interactions
- Defend against shadow AI, new threats, and vulnerabilities, e.g. managing unsanctioned applications, preventing prompt injection attacks, and detecting AI supply chain vulnerabilities
- Enable AI governance for regulatory compliance, e.g. ensuring AI development, operations, and usage comply with evolving global regulations and frameworks

Manage agent sprawl and resource access

76% of business leaders expect employees to manage agents within the next 2–3 years. 3 Widespread adoption of agents is driving the need for visibility and control, which includes the need for a unified registry, agent identities, lifecycle governance, and secure access to resources. Today, Microsoft Entra provides robust identity protection and secure access for applications and users. However, organizations lack a unified way to manage, govern, and protect agents in the same way they manage their users. Organizations need a purpose-built identity and access framework for agents.

Introducing Microsoft Entra Agent ID, now in preview

Microsoft Entra Agent ID offers enterprise-grade capabilities that enable organizations to prevent agent sprawl and protect agent identities and their access to resources. These new purpose-built capabilities enable organizations to:

- Register and manage agents: Get a complete inventory of the agent fleet and ensure all new agents are created with an identity built-in and are automatically protected by organization policies to accelerate adoption.
- Govern agent identities and lifecycle: Keep the agent fleet under control with lifecycle management and IT-defined guardrails for both agents and people who create and manage them.
- Protect agent access to resources: Reduce risk of breaches, block risky agents, and prevent agent access to malicious resources with conditional access and traffic inspection.
Agents built in Microsoft Copilot Studio, Microsoft Foundry, and Security Copilot get an Entra Agent ID built-in at creation. Developers can also adopt Entra Agent ID for agents they build through Microsoft Agent Framework, Microsoft Agent 365 SDK, or Microsoft Entra Agent ID SDK. Read the Microsoft Entra blog to learn more.

Prevent data oversharing and leaks

Data security is more complex than ever. Information Security Media Group (ISMG) reports that 80% of leaders cite leakage of sensitive data as their top concern. 4 In addition to the data security and compliance risks of generative AI (GenAI) apps, agents introduce new data risks such as unsupervised data access, highlighting the need to protect all types of corporate data, whether it is accessed by employees or agents. To mitigate these risks, we are introducing new Microsoft Purview data security and compliance capabilities for Microsoft 365 Copilot and for agents and AI apps built with Copilot Studio and Microsoft Foundry, providing unified protection, visibility, and control for users, AI apps, and agents.

New Microsoft Purview controls safeguard Microsoft 365 Copilot with real-time protection and bulk remediation of oversharing risks

Microsoft Purview and Microsoft 365 Copilot deliver a fully integrated solution for protecting sensitive data in AI workflows. Based on ongoing customer feedback, we're introducing new capabilities to deliver real-time protection for sensitive data in M365 Copilot and accelerated remediation of oversharing risks:

- Data risk assessments: Previously, admins could monitor oversharing risks such as SharePoint sites with unprotected sensitive data. Now, they can perform item-level investigations and bulk remediation for overshared files in SharePoint and OneDrive to quickly reduce oversharing exposure.
- Data Loss Prevention (DLP) for M365 Copilot: DLP previously excluded files with sensitivity labels from Copilot processing. Now in preview, DLP also prevents prompts that include sensitive data from being processed in M365 Copilot, Copilot Chat, and Copilot agents, and prevents Copilot from using sensitive data in prompts for web grounding.
- Priority cleanup for M365 Copilot assets: Many organizations have org-wide policies to retain or delete data. Priority cleanup, now generally available, lets admins delete assets that are frequently processed by Copilot, such as meeting transcripts and recordings, on an independent schedule from the org-wide policies while maintaining regulatory compliance.
- On-demand classification for meeting transcripts: Purview can now detect sensitive information in meeting transcripts on demand. This enables data security admins to apply DLP policies and enforce Priority cleanup based on the sensitive information detected.

Read the full Data Security blog to learn more.

Introducing new Microsoft Purview data security capabilities for agents and apps built with Copilot Studio and Microsoft Foundry, now in preview

Microsoft Purview now extends the same data security and compliance protections for users and Copilots to agents and apps. These new capabilities are:

- Enhanced Data Security Posture Management: A centralized DSPM dashboard that provides observability, risk assessment, and guided remediation across users, AI apps, and agents.
- Insider Risk Management (IRM) for Agents: Uniquely designed for agents, using dedicated behavioral analytics, Purview dynamically assigns risk levels to agents based on their risky handling of sensitive data and enables admins to apply conditional policies based on that risk level.
- Sensitive data protection with Azure AI Search: Azure AI Search enables fast, AI-driven retrieval across large document collections, essential for building AI apps. When apps or agents use Azure AI Search to index or retrieve data, Purview sensitivity labels are preserved in the search index, ensuring that any sensitive information remains protected under the organization's data security and compliance policies.

For more information on preventing data oversharing and data leaks, learn how Purview protects and governs agents in the Data Security and Compliance for Agents blog.

Defend against shadow AI, new threats, and vulnerabilities

AI workloads are subject to new AI-specific threats like prompt injection attacks, model poisoning, and data exfiltration of AI-generated content. Although security admins and SOC analysts have similar tasks when securing agents, the attack methods and surfaces differ significantly. To help customers defend against these novel attacks, we are introducing new capabilities in Microsoft Defender that deliver end-to-end protection, from security posture management to runtime defense.

Introducing Security Posture Management for agents, now in preview

As organizations adopt AI agents to automate critical workflows, agents become high-value targets and potential points of compromise, creating a critical need to ensure they are hardened, compliant, and resilient by preventing misconfigurations and safeguarding against adversarial manipulation. Security Posture Management for agents in Microsoft Defender now provides an agent inventory for security teams across Microsoft Foundry and Copilot Studio agents. Here, analysts can assess the overall security posture of an agent, easily implement security recommendations, and identify vulnerabilities such as misconfigurations and excessive permissions, all aligned to the MITRE ATT&CK framework. Additionally, the new agent attack path analysis visualizes how an agent's weak security posture can create broader organizational risk, so you can quickly limit exposure and prevent lateral movement.

Introducing Threat Protection for agents, now in preview

Attack techniques and attack surfaces for agents are fundamentally different from other assets in your environment. That's why Defender is delivering purpose-built protections and detections to help defend against them. Defender is introducing runtime protection for Copilot Studio agents that automatically blocks prompt injection attacks in real time. In addition, we are announcing agent-specific threat detections for Copilot Studio and Microsoft Foundry agents, coming soon. Defender automatically correlates these alerts with Microsoft's industry-leading threat intelligence and cross-domain security signals to deliver richer, contextualized alerts and security incident views for the SOC analyst. Defender's risk and threat signals are natively integrated into the new Microsoft Foundry Control Plane, giving development teams full observability and the ability to act directly from within their familiar environment.
Finally, security analysts will be able to hunt across all agent telemetry in the Advanced Hunting experience in Defender, and the new Agent 365 SDK extends Defender’s visibility and hunting capabilities to third-party agents, starting with Genspark and Kasisto, giving security teams even more coverage across their AI landscape. To learn more about how you can harden the security posture of your agents and defend against threats, read the Microsoft Defender blog. Enable AI governance for regulatory compliance Global AI regulations like the EU AI Act and NIST AI RMF are evolving rapidly; yet, according to ISMG, 55% of leaders report lacking clarity on current and future AI regulatory requirements. 5 As enterprises adopt AI, they must ensure that their AI innovation aligns with global regulations and standards to avoid costly compliance gaps. Introducing new Microsoft Purview Compliance Manager capabilities to stay ahead of evolving AI regulations, now in preview Today, Purview Compliance Manager provides over 300 pre-built assessments for common industry, regional, and global standards and regulations. However, the pace of change for new AI regulations requires controls to be continuously re-evaluated and updated so that organizations can adapt to ongoing changes in regulations and stay compliant. To address this need, Compliance Manager now includes AI-powered regulatory templates. AI-powered regulatory templates enable real-time ingestion and analysis of global regulatory documents, allowing compliance teams to quickly adapt to changes as they happen. As regulations evolve, the updated regulatory documents can be uploaded to Compliance Manager, and the new requirements are automatically mapped to applicable recommended actions to implement controls across Microsoft Defender, Microsoft Entra, Microsoft Purview, Microsoft 365, and Microsoft Foundry. Automated actions by Compliance Manager further streamline governance, reduce manual workload, and strengthen regulatory accountability. Introducing expanded Microsoft Purview compliance capabilities for agents and AI apps now in preview Microsoft Purview now extends its compliance capabilities across agent-generated interactions, ensuring responsible use and regulatory alignment as AI becomes deeply embedded across business processes. New capabilities include expanded coverage for: Audit: Surface agent interactions, lifecycle events, and data usage with Purview Audit. Unified audit logs across user and agent activities, paired with traceability for every agent using an Entra Agent ID, support investigation, anomaly detection, and regulatory reporting. Communication Compliance: Detect prompts sent to agents and agent-generated responses containing inappropriate, unethical, or risky language, including attempts to manipulate agents into bypassing policies, generating risky content, or producing noncompliant outputs. When issues arise, data security admins get full context, including the prompt, the agent’s output, and relevant metadata, so they can investigate and take corrective action Data Lifecycle Management: Apply retention and deletion policies to agent-generated content and communication flows to automate lifecycle controls and reduce regulatory risk. Read about Microsoft Purview data security for agents to learn more. Finally, we are extending our data security, threat protection, and identity access capabilities to third-party apps and agents via the network. 
Advancing Microsoft Entra Internet Access Secure Web + AI Gateway - extend runtime protections to the network, now in preview

Microsoft Entra Internet Access, part of the Microsoft Entra Suite, has new capabilities to secure access to and usage of GenAI at the network level, marking a transition from Secure Web Gateway to Secure Web and AI Gateway. The new capabilities include:

- Prompt injection protection, which blocks malicious prompts in real time by extending Azure AI Prompt Shields to the network layer.
- Network file filtering, which extends Microsoft Purview to inspect files in transit and prevents regulated or confidential data from being uploaded to unsanctioned AI services.
- Shadow AI detection, which provides visibility into unsanctioned AI applications through Cloud Application Analytics and Defender for Cloud Apps risk scoring, empowering security teams to monitor usage trends, apply Conditional Access, or block high-risk apps instantly.
- Unsanctioned MCP server blocking, which prevents access to MCP servers from unauthorized agents.

With these controls, you can accelerate GenAI adoption while maintaining compliance and reducing risk, so employees can experiment with new AI tools safely. Read the Microsoft Entra blog to learn more.

As AI transforms the enterprise, security must evolve to meet new challenges—spanning agent sprawl, data protection, emerging threats, and regulatory compliance. Our approach is to empower IT, developers, and security leaders with purpose-built innovations like Agent 365, Foundry Control Plane, and the Security Dashboard for AI. These solutions bring observability, governance, and protection to every layer of the AI stack, leveraging familiar tools and integrated controls across Microsoft Defender, Microsoft Entra, and Microsoft Purview. The future of security is ambient, autonomous, and deeply woven into the fabric of how we build, deploy, and govern AI systems.

Explore additional resources

- Learn more about Security for AI solutions on our webpage
- Learn more about Microsoft Agent 365
- Learn more about Microsoft Entra Agent ID
- Get started with Microsoft 365 Copilot
- Get started with Microsoft Copilot Studio
- Get started with Microsoft Foundry
- Get started with Microsoft Defender for Cloud
- Get started with Microsoft Entra
- Get started with Microsoft Purview
- Get started with Microsoft Purview Compliance Manager
- Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial

1 Bedrock Security, 2025 Data Security Confidence Index, published Mar 17, 2025.
2 AuditBoard & Ascend2, Connected Risk Report 2024; as cited by MIT Sloan Management Review, Spring 2025.
3 KPMG AI Quarterly Pulse Survey, Q3 2025, September 2025; n=130 U.S.-based C-suite and business leaders representing organizations with annual revenue of $1 billion or more.
4 First Annual Generative AI study: Business Rewards vs. Security Risks, Q3 2023, ISMG, N=400.
5 First Annual Generative AI study: Business Rewards vs. Security Risks, Q3 2023, ISMG, N=400.

Secure and govern AI apps and agents with Microsoft Purview
The Microsoft Purview family is here to help you secure and govern data across third-party IaaS and SaaS and multi-platform data environments, while helping you meet the compliance requirements you may be subject to. Purview brings simplicity with a comprehensive set of solutions built on a platform of shared capabilities that helps keep your most important asset, data, safe. With the introduction of AI technology, Purview has also expanded its data coverage to include discovering, protecting, and governing the interactions of AI apps and agents, such as Microsoft Copilots like Microsoft 365 Copilot and Security Copilot, enterprise-built AI apps like ChatGPT Enterprise, and other consumer AI apps like DeepSeek accessed through the browser.

To help you view and investigate interactions with all of those AI apps, and to create and manage policies to secure and govern them in one centralized place, we have launched Purview Data Security Posture Management (DSPM) for AI. You can learn more about DSPM for AI here, with short video walkthroughs: Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn

Purview capabilities for AI apps and agents

To understand our current set of capabilities within Purview to discover, protect, and govern various AI apps and agents, please refer to our Learn doc here: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn

Here is a quick reference guide for the capabilities available today. Note that DLP for Copilot and sensitivity label enforcement are currently designed to protect content in Microsoft 365. Thus, Security Copilot and Copilot in Fabric, along with Copilot Studio custom agents that do not use Microsoft 365 as a content source, do not have these features available. Please see the list of AI sites supported by Microsoft Purview DSPM for AI here.

Conclusion

Microsoft Purview can help you discover, protect, and govern the prompts and responses from AI applications in Microsoft Copilot experiences, enterprise AI apps, and other AI apps through its data security and data compliance solutions, while allowing you to view, investigate, and manage interactions in one centralized place in DSPM for AI.
Follow-up reading

Check out the deployment guides for DSPM for AI:
- How to deploy DSPM for AI - https://aka.ms/DSPMforAI/deploy
- How to use DSPM for AI data risk assessment to address oversharing - https://aka.ms/dspmforai/oversharing
- Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing

Explore the Purview SDK:
- Microsoft Purview SDK Public Preview | Microsoft Community Hub (blog)
- Microsoft Purview documentation - purview-sdk | Microsoft Learn
- Build secure and compliant AI applications with Microsoft Purview (video)

References for DSPM for AI:
- Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
- Block Users From Sharing Sensitive Information to Unmanaged AI Apps Via Edge on Managed Devices (preview) | Microsoft Learn, as part of Scenario 7 of Create and deploy a data loss prevention policy | Microsoft Learn
- Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
- Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
- Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
- Downloadable whitepaper: Data Security for AI Adoption | Microsoft

Explore the roadmap for DSPM for AI:
- Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365

Microsoft Ignite 2025: Top Security Innovations You Need to Know
🤖 Security & AI -The Big Story This Year 2025 marks a turning point for cybersecurity. Rapid adoption of AI across enterprises has unlocked innovation but introduced new risks. AI agents are now part of everyday workflows-automating tasks and interacting with sensitive data—creating new attack surfaces that traditional security models cannot fully address. Threat actors are leveraging AI to accelerate attacks, making speed and automation critical for defense. Organizations need solutions that deliver visibility, governance, and proactive risk management for both human and machine identities. Microsoft Ignite 2025 reflects this shift with announcements focused on securing AI at scale, extending Zero Trust principles to AI agents, and embedding intelligent automation into security operations. As a Senior Cybersecurity Solution Architect, I’ve curated the top security announcements from Microsoft Ignite 2025 to help you stay ahead of evolving threats and understand the latest innovations in enterprise security. Agent 365: Control Plane for AI Agents Agent 365 is a centralized platform that gives organizations full visibility, governance, and risk management over AI agents across Microsoft and third-party ecosystems. Why it matters: Unmanaged AI agents can introduce compliance gaps and security risks. Agent 365 ensures full lifecycle control. Key Features: Complete agent registry and discovery Access control and conditional policies Visualization of agent interactions and risk posture Built-in integration with Defender, Entra, and Purview Available via the Frontier Program Microsoft Agent 365: The control plane for AI agents Deep dive blog on Agent 365 Entra Agent ID: Zero Trust for AI Identities Microsoft Entra is the identity and access management suite (covering Azure AD, permissions, and secure access). Entra Agent ID extends Zero Trust identity principles to AI agents, ensuring they are governed like human identities. Why it matters: Unmanaged or over-privileged AI agents can create major security gaps. Agent ID enforces identity governance on AI agents and reduces automation risks. Key Features: Provides unique identities for AI agents Lifecycle governance and sponsorship for agents Conditional access policies applied to agent activity Integrated with open SDKs/APIs for third‑party platforms Microsoft Entra Agent ID Overview Entra Ignite 2025 announcements Public Preview details Security Copilot Expansion Security Copilot is Microsoft’s AI assistant for security teams, now expanded to automate threat hunting, phishing triage, identity risk remediation, and compliance tasks. Why it matters: Security teams face alert fatigue and resource constraints. Copilot accelerates response and reduces manual effort. Key Features: 12 new Microsoft-built agents across Defender, Entra, Intune, and Purview. 30+ partner-built agents available in the Microsoft Security Store. Automates threat hunting, phishing triage, identity risk remediation, and compliance tasks. Included for Microsoft 365 E5 customers at no extra cost. Security Copilot inclusion in Microsoft 365 E5 Security Copilot Ignite blog Security Dashboard for AI A unified dashboard for CISOs and risk leaders to monitor AI risks, aggregate signals from Microsoft security services, and assign tasks via Security Copilot - included at no extra cost. Why it matters: Provides a single pane of glass for AI risk management, improving visibility and decision-making. 
Key Features:
- Aggregates signals from Entra, Defender, and Purview
- Supports natural language queries for risk insights
- Enables task assignment via Security Copilot
Ignite Session: Securing AI at Scale | Microsoft Security Blog

Microsoft Defender Innovations

Microsoft Defender serves as Microsoft's CNAPP solution, offering comprehensive, AI-driven threat protection that spans endpoints, email, cloud workloads, and SIEM/SOAR integrations.
Why it matters: Modern attacks target multi-cloud environments and software supply chains. These innovations provide proactive defense, reduce breach risks before exploitation, and extend protection beyond Microsoft ecosystems, helping organizations secure endpoints, identities, and workloads at scale.
Key Features:
- Predictive Shielding: Proactively hardens attack paths before adversaries pivot.
- Automatic Attack Disruption: Extended to AWS, Okta, and Proofpoint via Sentinel.
- Supply Chain Security: Defender for Cloud now integrates with GitHub Advanced Security.
What's new in Microsoft Defender at Ignite | Defender for Cloud innovations

Global Secure Access & AI Gateway

Part of Microsoft Entra's secure access portfolio, providing secure connectivity and inspection for web and AI traffic.
Why it matters: Protects against lateral movement and AI-specific threats while maintaining secure connectivity.
Key Features:
- TLS inspection, URL/file filtering
- AI prompt injection protection
- Private access for domain controllers to prevent lateral movement attacks.
Learn about Secure Web and AI Gateway for agents | Microsoft Entra: What's new in secure access on the AI frontier

Purview Enhancements

Microsoft Purview is the data governance and compliance platform, ensuring sensitive data is classified, protected, and monitored.
Why it matters: Ensures sensitive data remains protected and compliant in AI-driven environments.
Key Features:
- AI Observability: Monitor agent activities and prevent sensitive data leakage.
- Compliance Guardrails: Communication compliance for AI interactions.
- Expanded DSPM: Data Security Posture Management for AI workloads.
Announcing new Microsoft Purview capabilities to protect GenAI agents

Intune Updates

Microsoft Intune is a cloud-based endpoint device management solution that secures apps, devices, and data across platforms. It simplifies endpoint security management and accelerates response to device risks using AI.
Why it matters: Endpoint security is critical as organizations manage diverse devices in hybrid environments. These updates reduce complexity, speed up remediation, and leverage AI-driven automation, helping security teams stay ahead of evolving threats.
Key Features:
- Security Copilot agents automate policy reviews, device offboarding, and risk-based remediation.
- Enhanced remote management for Windows Recovery Environment (WinRE).
- Policy Configuration Agent in Intune lets IT admins create and validate policies with natural language.
What's new in Microsoft Intune at Ignite | Your guide to Intune at Ignite

Closing Thoughts

Microsoft Ignite 2025 signals the start of an AI-driven security era. From visibility and governance for AI agents to Zero Trust for machine identities, automation in security operations, and stronger compliance for AI workloads, these innovations empower organizations to anticipate threats, simplify governance, and accelerate secure AI adoption without compromising compliance or control.

📘 Full Coverage: Microsoft Ignite 2025 Book of News

Search and Purge workflow in the new modern eDiscovery experience
With the retirement of Content Search (Classic) and eDiscovery Standard (Classic) in May, and alongside the future retirement of eDiscovery Premium (Classic) in August, organizations may be wondering how this will impact their existing search and purge workflow. The good news is that it will not impact your organization's ability to search for and purge email, Teams, and M365 Copilot messages; however, there are some additional points to be aware of when using the purge cmdlets and Graph APIs alongside the modern eDiscovery experience.

We have made some recent updates to our documentation regarding this topic to reflect the changes in the new modern eDiscovery experience. These can be found below, and you should ensure that you read them in full as they are packed with important information on the process.

- Find and delete email messages in eDiscovery | Microsoft Learn
- Find and delete Microsoft Teams chat messages in eDiscovery | Microsoft Learn
- Search for and delete Copilot data in eDiscovery | Microsoft Learn

The intention of this first blog post in the series is to cover the high-level points, including some best practices, when it comes to running search and purge operations using Microsoft Purview eDiscovery. Please stay tuned for further blog posts intended to provide a more detailed step-by-step walkthrough of the following search and purge scenarios:

- Search and purge email and Teams messages using the Microsoft Graph eDiscovery APIs
- Search and purge email messages using the Security and Compliance PowerShell cmdlets

I will update this blog post with the subsequent links to the follow-on posts in this series.

So let's start by looking at the two methods available to issue a purge command with Microsoft Purview eDiscovery: the Microsoft Graph eDiscovery APIs and the Security and Compliance PowerShell cmdlets. The licenses you have dictate which options are available to you and what types of items can be purged from Microsoft 365 workloads.

For E3/G3 customers, and cases which have the premium features disabled:
- You can only use the PowerShell cmdlets to issue the purge command
- You should only purge email items from mailboxes, not Teams messages
- You are limited to deleting 10 items per location with a purge command

For E5/G5 customers, and cases which have the premium features enabled:
- You can only use the Graph API to issue the purge command
- You can purge email items and Teams messages
- You can delete up to 100 items per location with a purge command

To undertake a search and then purge, you must have the correct permissions assigned to your account. There are two key Purview roles that you must be assigned:

- Compliance Search: This role lets users run the Content Search tool in the Microsoft Purview portal to search mailboxes and public folders, SharePoint Online sites, OneDrive for Business sites, Skype for Business conversations, Microsoft 365 groups, Microsoft Teams, and Viva Engage groups. This role allows a user to get an estimate of the search results and create export reports, but other roles are needed to initiate content search actions such as previewing, exporting, or deleting search results.
- Search and Purge: This role lets users perform bulk removal of data matching the criteria of a search. A sketch of granting these roles through a custom role group follows below.
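If you prefer to script the permissions setup, the following is a minimal sketch, assuming your account already has rights to create role groups (for example, membership of Organization Management). The group name and user principal names are illustrative, and the custom role group approach is discussed further in the next section.

# Connect to Security & Compliance PowerShell (ExchangeOnlineManagement module)
Connect-IPPSession -UserPrincipalName admin@contoso.onmicrosoft.com

# Create a custom role group that grants only the two roles needed for search and purge
New-RoleGroup -Name "Search and Purge Operators" -Roles "Compliance Search","Search And Purge"

# Add an authorized administrator to the custom role group
Add-RoleGroupMember -Identity "Search and Purge Operators" -Member purge.admin@contoso.onmicrosoft.com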
To learn more about permissions in eDiscovery, along with the different eDiscovery Purview roles, please refer to the following Microsoft Learn article: Assign permissions in eDiscovery | Microsoft Learn

By default, eDiscovery Managers and eDiscovery Administrators have the "Compliance Search" role assigned. For search and purge, only the Organization Management Purview role group has the role assigned by default. However, this is a highly privileged Purview role group, and customers should consider using a custom role group to assign the Search and Purge Purview role to authorised administrators. Details on how to create a custom role group in Purview can be found in the following article: Permissions in the Microsoft Purview portal | Microsoft Learn

It is also important to consider the impact that any retention policies or legal holds will have when attempting to purge email items from a mailbox, if you want to hard delete the items and remove them completely from the mailbox. When a retention policy or legal hold is applied to a mailbox, email items that are hard deleted via the purge process are moved to and retained in the Recoverable Items folder of the mailbox. These purged items will be retained until all holds are lifted and the retention period defined in the retention policy has expired. It is important to note that items retained in the Recoverable Items folder are not visible to users but are returned in eDiscovery searches.

For some search and purge use cases this is not a concern; if the primary goal is to remove the item from the user's view, then no additional steps are required. However, if the goal is to completely remove the email item from the mailbox in Exchange Online, so it doesn't appear in the user's view and is not returned by future eDiscovery searches, then additional steps are required. They are:

1. Disable client access to the mailbox
2. Modify retention settings on the mailbox
3. Disable the Exchange Online Managed Folder Assistant for the mailbox
4. Remove all legal holds and retention policies from the mailbox
5. Perform the search and purge operation
6. Revert the mailbox to its previous state

These steps should be carefully followed, as any mistake could result in other retained data being permanently deleted from the service. The full detailed steps can be found in the following article, and a simplified PowerShell sketch of the sequence follows below: Delete items in the Recoverable Items folder mailboxes on hold in eDiscovery | Microsoft Learn
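For orientation only, here is a heavily simplified sketch of what steps 1 through 4 and 6 can look like in Exchange Online PowerShell for a single mailbox. It is an assumption-laden outline rather than the documented procedure: the mailbox address is illustrative, retention policies applied through Purview must be excluded in the Purview portal, and every value changed must be recorded and restored exactly as described in the Microsoft Learn article linked above.

# Record the current settings first so the mailbox can be reverted later (step 6)
Get-Mailbox adele@contoso.com | Format-List SingleItemRecoveryEnabled,RetainDeletedItemsFor,LitigationHoldEnabled,InPlaceHolds

# Step 1 - temporarily block client access to the mailbox
Set-CASMailbox adele@contoso.com -OWAEnabled $false -ActiveSyncEnabled $false -MAPIEnabled $false -EWSEnabled $false -ImapEnabled $false -PopEnabled $false

# Step 2 - modify the retention settings on the mailbox
Set-Mailbox adele@contoso.com -SingleItemRecoveryEnabled $false -RetainDeletedItemsFor 0

# Step 3 - disable the Managed Folder Assistant for the mailbox
Set-Mailbox adele@contoso.com -ElcProcessingDisabled $true

# Step 4 - remove holds from the mailbox (example: litigation hold)
Set-Mailbox adele@contoso.com -LitigationHoldEnabled $false

# Step 5 - perform the search and purge operation (covered later in this series)
# Step 6 - revert every setting above to the values recorded at the start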
Now for some best practices when running search and purge operations:

- Where possible, target the specific locations containing the items you wish to purge and avoid tenant-wide searches
- If a tenant-wide search is used to initially locate the items, once the locations containing the items are known, modify the search to target the specific locations and rerun the steps
- Always validate the item report against the statistics prior to issuing the purge command to ensure you are only purging items you intend to remove
- If the item counts do not align, do not proceed with the purge command
- Ensure admins undertaking search and purge operations are appropriately trained and equipped with up-to-date guidance and processes on how to safely execute the purge process
- The search conditions Identifier, Sensitivity Label, and Sensitive Information Type do not support purge operations and, if used, can cause unintended results

Organizations with E5/G5 licenses should also take this opportunity to review whether other Microsoft Purview and Defender offerings can help them achieve the same outcomes. When considering the right approach or tool to meet your desired outcomes, you should become familiar with the following additional options for removing email items:

- Priority cleanup (link): Use the Priority cleanup feature under Data Lifecycle Management in Microsoft Purview when you need to expedite the permanent deletion of sensitive content from Exchange mailboxes, overriding any existing retention settings or eDiscovery holds. This process might be implemented for security or privacy in response to an incident, or for compliance with regulatory requirements.
- Threat Explorer (link): Threat Explorer in Microsoft Defender for Office 365 is a powerful tool that enables security teams to investigate and remediate malicious emails in near real time. It allows users to search for and filter email messages based on various criteria - such as sender, recipient, subject, or threat type - and take direct actions like soft delete, hard delete, or moving messages to junk or deleted folders. For manual remediation, Threat Explorer supports actions on emails delivered within the past 30 days.

In my next posts I will be delving further into how to use both the Graph APIs and the Security and Compliance PowerShell module to safely execute your purge commands.

Search and Purge using the Security and Compliance PowerShell cmdlets
Welcome back to the series of blogs covering search and purge in Microsoft Purview eDiscovery! If you are new to this series, please first read the opening post, which you can find here: Search and Purge workflow in the new modern eDiscovery experience. Also please ensure you read in full the Microsoft Learn documentation on this topic, as I will not be covering some of the steps in full (permissions, releasing holds, all limitations): Find and delete email messages in eDiscovery | Microsoft Learn

As a reminder, E3/G3 customers must use the Security and Compliance PowerShell cmdlets to execute the purge operation. Searches can continue to be created using the New-ComplianceSearch cmdlet and then run using the Start-ComplianceSearch cmdlet. Once a search has run, the statistics can be reviewed before executing the New-ComplianceSearchAction cmdlet with the Purge switch to remove the items from the targeted locations. However, some organizations may want to initially run the search, review statistics, and export an item report in the new user experience before using the New-ComplianceSearchAction cmdlet to purge the items from the mailbox.

Before starting, ensure you have version 3.9.0 or later of the Exchange Online Management PowerShell module installed (link). If multiple versions of the module are installed alongside version 3.9.0, remove the older versions to avoid potential conflicts. When connecting using the Connect-IPPSession cmdlet, ensure you include the EnableSearchOnlySession parameter, otherwise the purge command will not run and may generate an error (link).

1. Create the case. If you will be using the new Content Search case, you can skip this step. However, if you want to create a new case to host the search, you must create the case via PowerShell; this ensures any searches created within the case in the Purview portal will support the PowerShell-based purge command. Use the Connect-IPPSession command to connect to Security and Compliance PowerShell before running the following command to create a new case:

New-ComplianceCase "Test Case"

2. Select the new Purview Content Search case, or the new case you created in step 1, and create a new search.

3. Within your new search, use the Add Sources option to search for and select the mailboxes containing the items to be purged by adding them to the data sources of your newly created search. Note: make sure only Exchange mailboxes are selected, as you can only purge items contained within Exchange mailboxes. If you added both the mailbox and associated sites, you can remove the sites using the three-dot menu next to the data source under User Options. Alternatively, use the Manage sources button to remove the sites associated with the data source.

4. Within the condition builder, define the conditions required to target the items you wish to purge. In this example, I am targeting an email with a specific subject, from a specific sender, on a specific day.

5. To help me understand the estimated number of items that would be returned by the search, I can run a statistics job first to give me confidence that the query is correct. I do this by selecting Run Query from the search itself, then selecting Statistics and Run Query to trigger the statistics job.
Note: you can view the progress of the job via the Process Manager. Once completed, I can view the Statistics to confirm the query looks accurate and is returning the numbers I was expecting. If I want to further verify that the items returned by the search are what I am looking for, I can run a Sample job to review a sample of the items matching the search query. Once the Sample job is completed, I can review samples for locations with hits to determine whether these are indeed the items I want to purge. If I need to go further and generate a report of the items that match the search (not just statistics and sampling), I can run an export to generate a report for the items that match the search criteria.

Note: It is important to run the export report to review the results that the purge action will remove from the mailbox. This will ensure that we purge only the items of interest. Download the report for the export job via the Process Manager or the Export tab to review the items that were a match.

Note: If very few locations have hits, it is recommended to reduce the scope of your search by updating the data sources to include only the locations with hits.

Switch back to PowerShell and use the Get-ComplianceSearch cmdlet as below, ensuring the query is as you specified in the Purview portal:

Get-ComplianceSearch -Identity "My search and purge" | fl

As the search hasn't been run yet in PowerShell - the Items count is 0 and the JobEndTime is not set - the search needs to be re-run via PowerShell as per the example shown below:

Start-ComplianceSearch "My search and purge"

Give it a few minutes to complete and use Get-ComplianceSearch to check the status of the search; if the status is not "Completed" and JobEndTime is not set, you may need to give it more time. Check that the search returned the same results once it has finished running:

Get-ComplianceSearch -Identity "My search and purge" | fl name,status,searchtype,items,searchstatistics

CRITICAL: It is important to make sure the Items count matches the number of items returned in the item report generated from the Purview portal. If the number of items returned in PowerShell does not match, do not continue with the purge action.

Issue the purge command using the New-ComplianceSearchAction cmdlet:

New-ComplianceSearchAction -SearchName "My search and purge" -Purge -PurgeType HardDelete

Once completed, check the status of the purge command to confirm that the items have been deleted:

Get-ComplianceSearchAction "My search and purge_purge" | fl

Now that the purge operation has completed successfully, the items have been removed from the target mailbox and are no longer accessible by the user.

Microsoft Security Store: Now Generally Available
When we launched the Microsoft Security Store in public preview on September 30, our goal was simple: make it easier for organizations to discover, purchase, and deploy trusted security solutions and AI agents that integrate seamlessly with Microsoft Security products. Today, Microsoft Security Store is generally available—with three major enhancements: Embedded where you work: Security Store is now built into Microsoft Defender, featuring SOC-focused agents, and into Microsoft Entra for Verified ID and External ID scenarios like fraud protection. By bringing these capabilities into familiar workflows, organizations can combine Microsoft and partner innovation to strengthen security operations and outcomes. Expanded catalog: Security Store now offers more than 100 third-party solutions, including advanced fraud prevention, forensic analysis, and threat intelligence agents. Security services available: Partners can now list and sell services such as managed detection and response and threat hunting directly through Security Store. Real-World Impact: What We Learned in Public Preview Thousands of customers explored Microsoft Security Store and tried a growing catalog of agents and SaaS solutions. While we are at the beginning of our journey, customer feedback shows these solutions are helping teams apply AI to improve security operations and reduce manual effort. Spairliners, a cloud-first aviation services joint venture between Air France and Lufthansa, strengthened identity and access controls by deploying Glueckkanja’s Privileged Admin Watchdog to enforce just-in-time access. “Using the Security Store felt easy, like adding an app in Entra. For a small team, being able to find and deploy security innovations in minutes is huge.” – Jonathan Mayer, Head of Innovation, Data and Quality GTD, a Chilean technology and telecommunications company, is testing a variety of agents from the Security Store: “As any security team, we’re always looking for ways to automate and simplify our operations. We are exploring and applying the world of agents more and more each day so having the Security Store is convenient—it’s easy to find and deploy agents. We’re excited about the possibilities for further automation and integrations into our workflows, like event-triggered agents, deeper Outlook integration, and more." – Jonathan Lopez Saez, Cybersecurity Architect Partners echoed the momentum they are seeing with the Security Store: “We’re excited by the early momentum with Security Store. We’ve already received multiple new leads since going live, including one in a new market for us, and we have multiple large deals we’re looking to drive through Security Store this quarter.” - Kim Brault, Head of Alliances, Delinea “Partnering with Microsoft through the Security Store has unlocked new ways to reach enterprise customers at scale. The store is pivotal as the industry shifts toward AI, enabling us to monetize agents without building our own billing infrastructure. With the new embedded experience, our solutions appear at the exact moment customers are looking to solve real problems. And by working with Microsoft’s vetting process, we help provide customers confidence to adopt AI agents” – Milan Patel, Co-founder and CEO, BlueVoyant “Agents and the Microsoft Security Store represent a major step forward in bringing AI into security operations. 
We’ve turned years of service experience into agentic automations, and it’s resonating with customers—we’ve been positively surprised by how quickly they’re adopting these solutions and embedding our automated agentic expertise into their workflows.” – Christian Kanja, Founder and CEO of glueckkanja New at GA: Embedded in Defender, Entra—Security Solutions right where you work Microsoft Security Store is now embedded in the Defender and Entra portals with partner solutions that extend your Microsoft Security products. By placing Security Store in front of security practitioners, it’s now easier than ever to use the best of partner and Microsoft capabilities in combination to drive stronger security outcomes. As Dorothy Li, Corporate Vice President of Security Copilot and Ecosystem put it, “Embedding the Security Store in our core security products is about giving customers access to innovative solutions that tap into the expertise of our partners. These solutions integrate with Microsoft Security products to complete end-to-end workflows, helping customers improve their security” Within the Microsoft Defender portal, SOC teams can now discover Copilot agents from both Microsoft and partners in the embedded Security Store, and run them all from a single, familiar interface. Let’s look at an example of how these agents might help in the day of the life of a SOC analyst. The day starts with Watchtower (BlueVoyant) confirming Sentinel connectors and Defender sensors are healthy, so investigations begin with full visibility. As alerts arrive, the Microsoft Defender Copilot Alert Triage Agent groups related signals, extracts key evidence, and proposes next steps; identity related cases are then validated with Login Investigator (adaQuest), which baselines recent sign-in behavior and device posture to cut false positives. To stay ahead of emerging campaigns, the analyst checks the Microsoft Threat Intelligence Briefing Agent for concise threat rundowns tied to relevant indicators, informing hunts and temporary hardening. When HR flags an offboarding, GuardianIQ (People Tech Group) correlates activity across Entra ID, email, and files to surface possible data exfiltration with evidence and risk scores. After containment, Automated Closing Comment Generator (Ascent Global Inc.) produces clear, consistent closure notes from Defender incident details, keeping documentation tight without hours of writing. Together, these Microsoft and partner agents maintain platform health, accelerate triage, sharpen identity decisions, add timely threat context, reduce insider risk blind spots, and standardize reporting—all inside the Defender portal. You can read more about the new agents available in the Defender portal in this blog. In addition, Security Store is now integrated into Microsoft Entra, focused on identity-centric solutions. Identity admins can discover and activate partner offerings for DDoS protection, intelligent bot defense, and government ID–based verification for account recovery —all within the Entra portal. With these capabilities, Microsoft Entra delivers a seamless, multi-layered defense that combines built-in identity protection with best-in-class partner technologies, making it easier than ever for enterprises to strengthen resilience against modern identity threats. Learn more here. Levent Besik, VP of Microsoft Entra, shared that “This sets a new benchmark for identity security and partner innovation at Microsoft. Attacks on digital identities can come from anywhere. 
True security comes from defense in depth, layering protection across the entire user journey so every interaction, from the first request to identity recovery, stays secure. This launch marks only the beginning; we will continue to introduce additional layers of protection to safeguard every aspect of the identity journey” New at GA: Services Added to a Growing Catalog of Agents and SaaS For the first time, partners can offer their security services directly through the Security Store. Customers can now find, buy, and activate managed detection and response, threat hunting, and other expert services—making it easier to augment internal teams and scale security operations. Every listing has a MXDR Verification that certifies they are providing next generation advanced threat detection and response services. You can browse all the services available at launch here, and read about some of our exciting partners below: Avanade is proud to be a launch partner for professional services in the Microsoft Security Store. As a leading global Microsoft Security Services provider, we’re excited to make our offerings easier to find and help clients strengthen cyber defenses faster through this streamlined platform - Jason Revill, Avanade Global Security Technology Lead ProServeIT partnering with Microsoft to have our offers in the Microsoft Security Store helps ProServeIT protect our joint customers and allows us to sell better with Microsoft sellers. It shows customers how our technology and services support each other to create a safe and secure platform - Eric Sugar, President Having Reply’s security services showcased in the Microsoft Security Store is a significant milestone for us. It amplifies our ability to reach customers at the exact point where they evaluate and activate Microsoft security solutions, ensuring our offerings are visible alongside Microsoft’s trusted technologies. Notable New Selections Since public preview, the Security Store catalog has grown significantly. Customers can now choose from over 100 third-party solutions, including 60+ SaaS offerings and 50+ Security Copilot agents, with new additions every week. Recent highlights include Cisco Duo and Rubrik: Cisco Duo IAM delivers comprehensive, AI-driven identity protection combining MFA, SSO, passwordless and unified directory management. Duo IAM seamlessly integrates across the Microsoft Security suite—enhancing Entra ID with risk-based authentication and unified access policy management across cloud and on-premises applications seamlessly in just a few clicks. Intune for device compliance and access enforcement. Sentinel for centralized security monitoring and threat detection through critical log ingestion about authentication events, administrator actions, and risk-based alerts, providing real-time visibility across the identity stack. Rubrik's data security platform delivers complete cyber resilience across enterprise, cloud, and SaaS alongside Microsoft. Through the Microsoft Sentinel integration, Rubrik’s data management capabilities are combined with Sentinel’s security analytics to accelerate issue resolution, enabling unified visibility and streamlined responses. Furthermore, Rubrik empowers organizations to reduce identity risk and ensure operational continuity with real-time protection, unified visibility and rapid recovery across Microsoft Active Directory and Entra ID infrastructure. The Road Ahead This is just the beginning. 
Microsoft Security Store will continue to make it even easier for customers to improve their security outcomes by tapping into the innovation and expertise of our growing partner ecosystem. The momentum we're seeing is clear: customers are already gaining real efficiencies and stronger outcomes by adopting AI-powered agents. As we work together with partners, we'll unlock even more automation, deeper integrations, and new capabilities that help security teams move faster and respond smarter. Explore the Security Store today to see what's possible. For a more detailed walk-through of the capabilities, read our previous public preview Tech Community post. If you're a partner, now is the time to list your solutions and join us in shaping the future of security.

Teams Private Channels: Group-Based Compliance Model & Purview eDiscovery Considerations
Microsoft Teams Private Channels are undergoing an architectural change that will affect how your organisation uses Microsoft Purview eDiscovery to hold and discover these messages going forward. In essence, copies of private channel messages will now be stored in the M365 Group mailbox, aligning their storage with how standard and shared channels work today. This shift, due to roll out from early October 2025 to December 2025, brings new benefits (like greatly expanded channel limits and meeting support) and has the potential to impact your Purview eDiscovery search and legal hold workflows. In this blog post, we'll break down what's changing, what remains the same, and provide you with the information you need to review your own eDiscovery processes when working with private channel messages.

What's Changing?

Private channel conversation history is moving to a group-based model. Historically, when users posted in a private channel, copies of those messages were stored in each private channel member's Exchange Online mailbox (in a hidden folder). This meant that Microsoft Purview eDiscovery search and hold actions for private channel content had to be scoped to the members' mailboxes, which added complexity. Under the new model rolling out in late 2025, each private channel will get its own dedicated channel mailbox linked to the parent Team's M365 group mailbox. In other words, private channel messages will be stored similarly to shared channel messages: the parent Team's M365 group mailbox is targeted in eDiscovery searches and holds, instead of targeting the mailboxes of all members of the private channel. Targeting the parent Team's M365 group mailbox in a search or a hold will extend to all dedicated channel mailboxes for shared and private channels within the team, as well as any standard channels. After the transition, any new messages in a private channel will have the message copy stored in the channel's group mailbox, not in users' mailboxes.

Why the change? This aligns the retention and collection of private channel messages with that of standard and shared channel messages. Instead of having to include separate data sources depending on the type of Teams channel, eDiscovery practitioners can simply target the Team's M365 group mailbox and cover all its channels, no matter their type.

This update will also introduce major improvements to private channels themselves, raising the limits on private channels and members and enabling features that were previously missing:

- Maximum private channels per team: increasing from 30 to 1000.
- Maximum members in a private channel: increasing from 250 to 5000.
- Meeting scheduling in private channels: previously not supported, now allowed under the new model.

The table below summarizes the old vs new model for Teams private channel messages:

Aspect | Before (User Mailbox Model) | After (Group Mailbox Model)
Message Storage | Messages copied into each private channel member's Exchange Online mailbox. | Messages are stored in a channel mailbox associated with the parent Team's M365 group mailbox.
eDiscovery Search | Had to search private channel members' mailboxes to find channel messages. | Search the parent M365 group mailbox for new private channel messages and user mailboxes for any messages that were not migrated to the group mailbox.
Legal Hold Placement | Apply hold on private channel members' mailboxes to preserve messages. | Apply hold on the parent M365 group mailbox.
Existing holds may need to include both the M365 group mailbox and members' mailboxes to cover new messages and messages that were not migrated to the group mailbox.

Things to know about the changes

During the migration of Teams private channel messages to the new group-based model, the process will transfer the latest version of each message from the private channel member's mailbox to the private channel's dedicated channel mailbox. However, it's important to note that this process does not include the migration of held message versions; specifically, any messages that were edited or deleted prior to the migration. These held messages, preserved due to a legal hold or retention policy, will remain in the individual user mailboxes where they were originally stored. As such, eDiscovery practitioners should consider, based on their needs, including the user mailboxes in their search and hold scopes.

Legal Holds for Private Channel Content

Before the migration, if you needed to preserve a private channel's messages, you placed a hold on the mailboxes of each member of the private channel. This ensured each user's copy of the channel messages was preserved. Often, eDiscovery practitioners would also place a hold on the M365 group mailbox to hold the messages from standard and shared channels. After the migration, this workflow changes: you will instead place a hold on the parent Team's M365 group mailbox that corresponds to the private channel.

- Before migration: It is recommended to update any existing holds that are intended to preserve private channel messages so that they include the parent Team's M365 group mailbox in addition to the private channel members' mailboxes. This ensures continuity, as any new messages (once the channel migrates) will be stored in the group mailbox.
- After migration: For any new eDiscovery hold involving a private channel, simply add the parent Team's M365 group mailbox to the hold. As previously discussed, eDiscovery practitioners should consider, based on need, whether the hold also needs to include the private channel members' mailboxes due to non-migrated content. Any private channel messages currently held in the user mailboxes will continue to be preserved by the existing hold, but holding any future messages sent post-migration will require a hold placed on the group mailbox.

eDiscovery Search and Collection

Performing searches related to private channel messages will change after the migration:

- Before migration: To collect private channel messages, you targeted the private channel members' mailboxes as data sources in the search.
- After migration: The private channel messages will be stored in a channel mailbox associated with the parent Team's M365 group mailbox. That means you include the Team's M365 group mailbox as a data source in your search. As previously discussed, eDiscovery practitioners should consider, based on need, whether the search also needs to include the private channel members' mailboxes due to non-migrated content.

What Isn't Changing?

It's important to emphasize that only Teams private channel messages are changing in this rollout. Other content locations in Teams remain as they were, so your existing eDiscovery processes remain unchanged:

- Standard channel messages: These are stored in the Teams M365 group mailbox. You will continue to place holds on the Team's M365 group mailbox for standard channel content and target it in searches for collections.
- Shared channel messages: Shared channel messages are stored in a channel mailbox linked to the M365 group mailbox for the Team. You continue to place holds and undertake searches by targeting the M365 group mailbox for the Team that contains the shared channel.
- Teams chats (1:1 or group chats): Teams chats are stored in each user's Exchange Online mailbox. For eDiscovery, you will continue to search individual user mailboxes for chats and place holds on user mailboxes to preserve chat content.
- Files and SharePoint data: Any file shared in a Teams message or uploaded to a SharePoint site associated with a channel remains as it is today.

In conclusion

For more information regarding timelines, refer to the Microsoft Teams blog post "New enhancements in Private Channels in Microsoft Teams unlock their full potential", as well as checking for updates via the Message Center post MC1134737.

Introducing eDiscovery Graph API Standard and Enhancements to Premium APIs
We have been busy working to enable organisations that leverage the Microsoft Purview eDiscovery Graph APIs to benefit from the enhancements in the new modern experience for eDiscovery. I am pleased to share that APIs have now been updated with additional parameters to enable organisations to now benefit from the following features already present in the modern experience within the Purview Portal: Ability to control the export package structure and item naming convention Trigger advanced indexing as part of the Statistics, Add to Review and Export jobs Enables for the first time the ability to trigger HTML transcription of Teams, Viva and Copilot interaction when adding to a review set Benefit from the new statistic options such as Include Categories and Include Keyword Report More granular control of the number of versions collected of modern attachments and documents collected directly collected from OneDrive and SharePoint These changes were communicated as part of the M365 Message Center Post MC1115305. This change involved the beta version of the API calls being promoted into the V1.0 endpoint of the Graph API. The following v1.0 API calls were updated as part of this work: Search Estimate Statistics – ediscoverySearch: estimateStatistics Search Export Report - ediscoverySearch: exportReport Search Export Result - ediscoverySearch: exportResult Search Add to ReviewSet – ediscoveryReviewSet: addToReviewSet ReviewSet Export - ediscoveryReviewSet: export The majority of this blog post is intended to walk through the updates to each of these APIs and provide understanding on how to update your calls to these APIs to maintain a consistent outcome (and benefit from the new functionality). If you are new to the Microsoft Purview eDiscovery APIs you can refer to my previous blog post on how to get started with them. Getting started with the eDiscovery APIs | Microsoft Community Hub First up though, availability of the Graph API for E3 customers We are excited to announce that starting September 9, 2025, Microsoft will launch the eDiscovery Graph API Standard, a new offering designed to empower Microsoft 365 E3 customers with secure, automated data export capabilities. The new eDiscovery Graph API offers scalable, automated exports with secure credential management, improved performance and reliability for Microsoft 365 E3 customers. The new API enables automation of the search, collect, hold, and export flow from Microsoft Purview eDiscovery. While it doesn’t include premium features like Teams/Yammer conversations or advanced indexing (available only with the Premium Graph APIs), it delivers meaningful value for Microsoft 365 E3 customers needing to automate structured legal exports. Key capabilities: Export from Exchange, SharePoint, Teams, Viva Engage and OneDrive for Business Case, search, hold and export management Integration with partner/vendor workflows Support automation that takes advantage of new features within the modern user experience Pricing & Access Microsoft will offer 50 GB of included export volume per tenant per month, with additional usage billed at $10/GB—a price point that balances customer value, sustainability, and market competitiveness. The Graph API Standard will be available in public preview starting September 9. For more details on pay-as-you-go features in eDiscovery and Purview refer to the following links. 
Billing in eDiscovery | Microsoft Learn Enable Microsoft Purview pay-as-you-go features via subscription | Microsoft Learn Wait, but what about the custodian and noncustodial locations workflow in eDiscovery Classic (Premium)? As you are probably aware, in the modern user experience for eDiscovery there have been some changes to the Data Sources tab and how it is used in the workflow. Typically, organisations leveraging the Microsoft Purview eDiscovery APIs previously would have used the custodian and noncustodial data sources APIs to add the relevant data sources to the case using the following APIs. ediscoveryCustodian resource type - Microsoft Graph v1.0 | Microsoft Learn ediscoveryNoncustodialDataSource resource type - Microsoft Graph v1.0 | Microsoft Learn Once added via the API calls, when creating a search these locations would be bound to a search. This workflow in the API remains supported for backwards compatibility. This includes the creation of system generated case hold policies when applying holds to the locations via these APIs. Organisations can continue to use this approach with the APIs. However, to simplify your code and workflow in the APIs consider using the following API call to add additional sources directly to the search. Add additional sources - Microsoft Graph v1.0 | Microsoft Learn Some key things to note if you continue to use the custodian and noncustodial data sources APIs in your automation workflow. This will not populate the new data sources tab in the modern experience for eDiscovery They can continue to be queried via the API calls Advanced indexing triggered via these APIs will have no influence on if advanced indexing is used in jobs triggered from a search Make sure you use the new parameters to trigger advanced indexing on the job when running the Statistics, Add to Review Set and Direct Export jobs Generating Search Statistics ediscoverySearch: estimateStatistics In eDiscovery Premium (Classic) and the previous version of the APIs, generating statistics was a mandatory step before you could progress to either adding the search to a review set or triggering a direct export. With the new modern experience for eDiscovery, this step is completely optional and is not mandatory. For organizations that previously generated search statistics but never checked or used the results before moving to adding the search to a review set or triggering a direct export job, they can now skip this step. If organizations do want to continue to generate statistics, then calling the updated API with the same parameters call will continue to generate statistics for the search. An example of a previous call would look as follows: POST /security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/estimateStatistics Historically this API didn’t require a request body. With the APIs now natively working with the modern experience for eDiscovery; the API call now supports a request body, enabling you to benefit from the new statistic options. Details on these new options can be found in the links below. 
Create a search for a case in eDiscovery | Microsoft Learn Evaluate and refine search results in eDiscovery | Microsoft Learn If a search is run without a request body it will still generate the following information: Total matches and volume Number of locations searched and the number of locations with hits Number of data sources searched and the number of data sources with hits The top five data sources that make up the most search hits matching your query Hit count by location type (mailbox versus site) As the API is now natively working with the modern experience for eDiscovery you can optionally include a request body to pass the statisticOptions parameter in the POST API call. With the changes to how Advanced Indexing works within the new UX and the additional reporting categories available, you can use the statisticsOptions parameter to trigger the generate statistic job with the additional options within the modern experience for the modern UX. The values you can include are detailed in the table below. Property Option from Portal includeRefiners Include categories: Refine your view to include people, sensitive information types, item types, and errors. includeQueryStats Include query keywords report: Assess keyword relevance for different parts of your search query. includeUnindexedStats Include partially indexed items: We'll provide details about items that weren't fully indexed. These partially indexed items might be unsearchable or partially searchable advancedIndexing Perform advanced indexing on partially indexed items: We'll try to reindex a sample of partially indexed items to determine whether they match your query. After running the query, check the Statistics page to review information about partially indexed items. Note: Can only be used if includeUnindexedStats is also included. locationsWithoutHits Exclude partially indexed items in locations without search hits: Ignore partially indexed items in locations with no matches to the search query. Checking this setting will only return partially indexed items in locations where there is already at least one hit. Note: Can only be used if includeUnindexedStats is also included. In eDiscovery Premium (Classic) the advanced indexing took place when a custodian or non-custodial data location was added to the Data Sources tab. This means that when you triggered the estimate statistics call on the search it would include results from both the native Exchange and SharePoint index as well as the Advanced Index. In the modern experience for eDiscovery, the advanced indexing runs as part of the job. However, this must be selected as an option on the job. Note that not all searches will benefit from advanced indexing, one example would be a simple date range search on a mailbox or SPO site as this will still have hits on the partially indexed items (even partial indexed email and SPO file items have date metadata in the native indexes). The following example using PowerShell and the Microsoft Graph PowerShell module and passes the new StatisticsOptions parameter to the POST call and selects all available options. 
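# Note: the snippets in this post assume an authenticated Microsoft Graph session and that $caseID and
# $searchID have already been resolved. A hedged, illustrative way to do that (the case and search names
# are placeholders, and paging is omitted for brevity) is shown here:
Connect-MgGraph -Scopes "eDiscovery.ReadWrite.All"
$caseID = ((Invoke-MgGraphRequest -Method Get -Uri "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases").value |
    Where-Object { $_.displayName -eq "Contoso Investigation" }).id
$searchID = ((Invoke-MgGraphRequest -Method Get -Uri "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches").value |
    Where-Object { $_.displayName -eq "Custodian mailbox search" }).id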
# Generate estimates for the newly created search $statParams = @{ statisticsOptions = "includeRefiners,includeQueryStats,includeUnindexedStats,advancedIndexing,locationsWithoutHits" } $params = $statParams | ConvertTo-Json -Depth 10 $uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/estimateStatistics" Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params Write-Host "Estimate statistics generation triggered for search ID: $searchID" Once run, it will create a generated statistic job with the additional options selected. Direct Export - Report ediscoverySearch: exportReport This API enables you to generate an item report directly form a search without taking the data into a review set or exporting the items that match the search. With the APIs now natively working with the modern experience for eDiscovery, new parameters have been added to the request body as well as new values available for existing parameters. The new parameters are as follows: cloudAttachmentVersion: The versions of cloud attachments to include in messages ( e.g. latest, latest 10, latest 100 or All). This controls how many versions of a file that is collected when a cloud attachment is contained within a email, teams or viva engage messages. If version shared is configured this is also always returned. documentVersion: The versions of files in SharePoint to include (e.g. latest, latest 10, latest 100 or All). This controls how many versions of a file that is collected when targeting a SharePoint or OneDrive site directly in the search. These new parameters reflect the changes made in the modern experience for eDiscovery that provides more granular control for eDiscovery managers to apply different collection options based on where the SPO item was collected from (e.g. directly from a SPO site vs a cloud attachment link included in an email). Within eDiscovery Premium (Classic) the All Document Versions option applied to both SharePoint and OneDrive files collected directly from SharePoint and any cloud attachments contained within email, teams and viva engage messages. Historically for this API, within the additionalOptions parameter you could include the allDocumentVersions value to trigger the collection of all versions of any file stored in SharePoint and OneDrive. With the APIs now natively working with the modern experience for eDiscovery, the allDocumentVersions value can still be included in the additionalOptions parameter but it will only apply to files collected directly from a SharePoint or OneDrive site. It will not influence any cloud attachments included in email, teams and viva engage messages. To collect additional versions of cloud attachments use the cloudAttachmentVersion parameter to control the number of versions that are included. Also consider moving from using the allDocumentVersions value in the additionalOptions parameter and switch to using the new documentVersion parameter. As described earlier, to benefit from advanced indexing in the modern experience for eDiscovery, you must trigger advanced indexing as part of the direct export job. Within the portal to include partially indexed items and run advanced indexing you would make the following selections. To achieve this via the API call we need to ensure we include the following parameters and values into the request body of the API call. 
Parameter Value Option from the portal additionalOptions advancedIndexing Perform advanced indexing on partially indexed items exportCriteria searchHits, partiallyIndexed Indexed items that match your search query and partially indexed items exportLocation responsiveLocations, nonresponsiveLocations Exclude partially indexed items in locations without search hits. Finally, in the new modern experience for eDiscovery more granular control has been introduced to enable organisations to independently choose to convert Teams, Viva Engage and Copilot interactions into HTML transcripts and the ability to collect up to 12 hours of related conversations when a message matches a search. This is reflected in the job settings by the following options: Organize conversations into HTML transcripts Include Teams and Viva Engage conversations In the classic experience this was a single option titled Teams and Yammer Conversations that did both actions and was controlled by including the teamsAndYammerConversations value in the additionalOptions parameter. With the APIs now natively working with the modern experience for eDiscovery, the teamsAndYammerConversations value can still be included in the additionalOptions parameter but it will only trigger the collection of up to 12 hours of related conversations when a message matches a search without converting the items into HTML transcripts. To do this we need to include the new value of htmlTranscripts in the additionalOptions parameter. As an example, lets look at the following direct export report job from the portal and use the Microsoft Graph PowerShell module to call the exportReport API call with the updated request body. $exportName = "New UX - Direct Export Report" $exportParams = @{ displayName = $exportName description = "Direct export report from the search" additionalOptions = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing" exportCriteria = "searchHits,partiallyIndexed" documentVersion = "recent10" cloudAttachmentVersion = "recent10" exportLocation = "responsiveLocations" } $params = $exportParams | ConvertTo-Json -Depth 10 $uri = https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportReport" $exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params Direct Export - Results ediscoverySearch: exportResult - Microsoft Graph v1.0 | Microsoft Learn This API call enables you to export the items from a search without taking the data into a review set. All the information from the above section on the changes to the exportReport API also applies to this API call. However with this API call we will actually be exporting the items from the search and not just the report. As such we need to pass in the request body information on how we want the export package to look. Previously with direct export for eDiscovery Premium (Classic) you had a three options in the UX and in the API to define the export format. Option Exchange Export Structure SharePoint / OneDrive Export Structure Individual PST files for each mailbox PST created for each mailbox. The structure of each PST is reflective of the folders within the mailbox with emails stored based on their original location in the mailbox. Emails named based on their subject. Folder for each mailbox site. Within each folder, the structure is reflective of the SharePoint/OneDrive site with documents stored based on their original location in the site. Documents are named based on their document name. 
Individual .msg files for each message Folder created for each mailbox. Within each folder the file structure within is reflective of the folders within the mailbox with emails stored as .msg files based on their original location in the mailbox. Emails named based on their subject. As above. Individual .eml files for each message Folder created for each mailbox. Within each folder the file structure within is reflective of the folder within the mailbox with emails stored as .eml files based on their original location in the mailbox. Emails named based on their subject As above. Historically with this API, the exportFormat parameter was used to control the desired export format. Three values could be used and they were pst, msg and eml. This parameter is still relevant but only controls how email items will be saved, either in a PST file, as individual .msg files or as individual .eml files. Note: The eml export format option is depreciated in the new UX. Going forward you should use either pst or msg. With the APIs now natively working with the modern experience for eDiscovery; we need to account for the additional flexibility customers have to control the structure of their export package. An example of the options available in the direct export job can be seen below. More information on the export package options and what they control can be found in the following link. https://learn.microsoft.com/en-gb/purview/edisc-search-export#export-package-options To support this, new values have been added to the additionalOptions parameter for this API call, these must be included in the request body otherwise the export structure will be as follows. exportFormat value Exchange Export Structure SharePoint / OneDrive Export Structure pst PST files created that containing data from multiple mailboxes. All emails contained within a single folder within the PST. Emails named a based on an assigned unique identifier (GUID) One folder for all documents. All documents contained within a single folder. Documents are named based on an assigned unique identifier (GUID) msg Folder created containing data from all mailboxes. All emails contained within a single folder stored as .msg files. Emails named a based on an assigned unique identifier (GUID) As above. The new values added to the additionalOptions parameters are as follows. They control the export package structure for both Exchange and SharePoint/OneDrive items. Property Option from Portal splitSource Organize data from different locations into separate folders or PSTs includeFolderAndPath Include folder and path of the source condensePaths Condense paths to fit within 259 characters limit friendlyName Give each item a friendly name Organizations are free to mix and match which export options they include in the request body to meet their own organizational requirements. To receive a similar output structure when previously using the pst or msg values in the exportFormat parameter I would include all of the above values in the additionalOptions parameter. For example, to generate a direct export where the email items are stored in separate PSTs per mailbox, the structure of the PST files reflects the mailbox and each items is named as per the subject of the email; I would use the Microsoft Graph PowerShell module to call the exportResults API call with the updated request body. 
$exportName = "New UX - DirectExportJob - PST" $exportParams = @{ displayName = $exportName description = "Direct export of items from the search" additionalOptions = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing,includeFolderAndPath,splitSource,condensePaths,friendlyName" exportCriteria = "searchHits,partiallyIndexed" documentVersion = "recent10" cloudAttachmentVersion = "recent10" exportLocation = "responsiveLocations" exportFormat = "pst" } $params = $exportParams | ConvertTo-Json -Depth 10 $uri = “https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportResult" $exportResponse = Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params If I want to export the email items as individual .msg files instead of storing them in PST files; I would use the Microsoft Graph PowerShell module to call the exportResults API call with the updated request body. $exportName = "New UX - DirectExportJob - MSG" $exportParams = @{ displayName = $exportName description = "Direct export of items from the search" additionalOptions = "teamsAndYammerConversations,cloudAttachments,htmlTranscripts,advancedIndexing,includeFolderAndPath,splitSource,condensePaths,friendlyName" exportCriteria = "searchHits,partiallyIndexed" documentVersion = "recent10" cloudAttachmentVersion = "recent10" exportLocation = "responsiveLocations" exportFormat = "msg" } $params = $exportParams | ConvertTo-Json -Depth 10 $uri = " https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/searches/$searchID/exportResult" Add to Review Set ediscoveryReviewSet: addToReviewSet This API call enables you to commit the items that match the search to a Review Set within an eDiscovery case. This enables you to review, tag, redact and filter the items that match the search without exporting the data from the M365 service boundary. Historically with this API call it was more limited compared to triggering the job via the eDiscovery Premium (Classic) UI. With the APIs now natively working with the modern experience for eDiscovery organizations can make use of the enhancements made within the modern UX and have greater flexibility in selecting the options that are relevant for your requirements. There is a lot of overlap with previous sections, specifically the “Direct Export – Report” section on what updates are required to benefit from updated API. They are as follows: Controlling the number of versions of SPO and OneDrive documents added to the review set via the new cloudAttachmentVersion and documentVersion parameters Enabling organizations to trigger the advanced indexing of partial indexed items during the add to review set job via new values added to existing parameters However there are some nuances to the parameter names and the values for this specific API call compared to the exportReport API call. For example, with this API call we use the additionalDataOptions parameter opposed to the additionalOptions parameter. As with the exportReport and exportResult APIs, there are new parameters to control the number of versions of SPO and OneDrive documents added to the review set are as follows: cloudAttachmentVersion: The versions of cloud attachments to include in messages ( e.g. latest, latest 10, latest 100 or All). This controls how many versions of a file that is collected when a cloud attachment is contained within a email, teams or viva engage messages. If version shared is configured this is also always returned. 
documentVersion: The versions of files in SharePoint to include (e.g. latest, latest 10, latest 100 or All). This controls how many versions of a file that is collected when targeting a SharePoint or OneDrive site directly in the search. Historically for this API call, within the additionalDataOptions parameter you could include the allVersions value to trigger the collection of all versions of any file stored in SharePoint and OneDrive. With the APIs now natively working with the modern experience for eDiscovery, the allVersions value can still be included in the additionalDataOptions parameter but it will only apply to files collected directly from a SharePoint or OneDrive site. It will not influence any cloud attachments included in email, teams and viva engage messages. To collect additional versions of cloud attachments use the cloudAttachmentVersion parameter to control the number of versions that are included. Also consider moving from using the allDocumentVersions value in the additionalDataOptions parameter and switch to using the new documentVersion parameter. To benefit from advanced indexing in the modern experience for eDiscovery, you must trigger advanced indexing as part of the add to review set job. Within the portal to include partially indexed items and run advanced indexing you would make the following selections. To achieve this via the API call we need to ensure we include the following parameters and values into the request body of the API call. Parameter Value Option from the portal additionalDataOptions advancedIndexing Perform advanced indexing on partially indexed items itemsToInclude searchHits, partiallyIndexed Indexed items that match your search query and partially indexed items additionalDataOptions locationsWithoutHits Exclude partially indexed items in locations without search hits. Historically the API call didn’t support the add to review set job options to convert Teams, Viva Engage and Copilot interactions into HTML transcripts and collect up to 12 hours of related conversations when a message matches a search. With the APIs now natively working with the modern experience for eDiscovery this is now possible by adding support for the htmlTranscripts and messageConversationExpansion values to the addtionalDataOptions parameter. As an example, let’s look at the following add to review set job from the portal and use the Microsoft Graph PowerShell module to invoke the addToReviewSet API call with the updated request body. $commitParams = @{ search = @{ id = $searchID } additionalDataOptions = "linkedFiles,advancedIndexing,htmlTranscripts,messageConversationExpansion,locationsWithoutHits" cloudAttachmentVersion = "latest" documentVersion = "latest" itemsToInclude = "searchHits,partiallyIndexed" } $params = $commitParams | ConvertTo-Json -Depth 10 $uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/addToReviewSet" Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params Export from Review Set ediscoveryReviewSet: export This API call enables you to export items from a Review Set within an eDiscovery case. Historically with this API, the exportStructure parameter was used to control the desired export format. Two values could be used and they were directory and pst. This parameter has had been updated to include a new value of msg. Note: The directory value is depreciated in the new UX but remains available in v1.0 of the API call for backwards compatibility. 
Going forward you should use msg alongside the new exportOptions values. The exportStructure parameter will only control how email items are saved, either within PST files or as individual .msg files.

With the APIs now natively working with the modern experience for eDiscovery, we need to account for the additional flexibility customers have to control the structure of their export package. An example of the options available in the direct export job can be seen below. As with the exportResult API call for direct export, new values have been added to the exportOptions parameter for this API call. The new values added to the exportOptions parameter are as follows; they control the export package structure for both Exchange and SharePoint/OneDrive items.

- splitSource: Organize data from different locations into separate folders or PSTs
- includeFolderAndPath: Include folder and path of the source
- condensePaths: Condense paths to fit within the 259 character limit
- friendlyName: Give each item a friendly name

Organizations are free to mix and match which export options they include in the request body to meet their own organizational requirements. To receive an equivalent output structure to previously using the pst value in the exportStructure parameter, I would include all of the above values in the exportOptions parameter within the request body. An example using the Microsoft Graph PowerShell module can be found below.

$exportName = "ReviewSetExport - PST"
$exportParams = @{
    outputName = $exportName
    description = "Exporting all items from the review set"
    exportOptions = "originalFiles,includeFolderAndPath,splitSource,condensePaths,friendlyName"
    exportStructure = "pst"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/export"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

To receive an equivalent output structure to previously using the directory value in the exportStructure parameter, I would instead use the msg value within the request body. As the condensed directory structure format exports all items into a single folder, all named based on a uniquely assigned identifier, I do not need to include the new values added to the exportOptions parameter. An example using the Microsoft Graph PowerShell module can be found below.

$exportName = "ReviewSetExport - MSG"
$exportParams = @{
    outputName = $exportName
    description = "Exporting all items from the review set"
    exportOptions = "originalFiles"
    exportStructure = "msg"
}
$params = $exportParams | ConvertTo-Json -Depth 10
$uri = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseID/reviewSets/$reviewSetID/export"
Invoke-MgGraphRequest -Method Post -Uri $uri -Body $params

Continuing to use the directory value in exportStructure will produce the same output as if msg was used.

Wrap Up

Thank you for your time reading through this post. Hopefully you are now equipped with the information needed to make the most of the new modern experience for eDiscovery when making your Graph API calls.

Upcoming changes to Microsoft Purview eDiscovery
Today, we are announcing three significant updates to the Microsoft Purview eDiscovery products and services. These updates reinforce our commitment to meeting and exceeding the data security, privacy, and compliance requirements of our customers. To improve security and help protect customers and their data, we have accelerated the timeline for the below changes, which will be enforced by default on May 26. The following features will be retired from the Microsoft Purview portal:

- Content Search will transition to the new unified Purview eDiscovery experience.
- The eDiscovery (Standard) classic experience will transition to the new unified Purview eDiscovery experience.
- The eDiscovery export PowerShell cmdlet parameters will be retired.

These updates aim to unify and simplify the eDiscovery user experience in the new Microsoft Purview portal, while preserving the accessibility and integrity of existing eDiscovery cases.

Content Search transition to the new unified Purview eDiscovery experience

The classic eDiscovery Content Search solution will be streamlined into the new unified Purview eDiscovery experience. Effective May 26th, the Content Search solution will no longer be available in the classic Purview portal. Content Search provides administrators with the ability to create compliance searches to investigate data located in Microsoft 365. We hear from customers that the Content Search tool is used to investigate data privacy concerns, perform legal or incident investigations, validate data classifications, and more. Currently, each compliance search created in the Content Search tool is created outside of the boundaries of a Purview eDiscovery (Standard) case. This means that administrators in Purview Role Groups containing the Compliance Search role can view all Content Searches in their tenant. While the Content Search solution does not enable any additional search permission access, the view of all Content Searches in a customer tenant is not an ideal architecture. Alternatively, when using a Purview eDiscovery case, these administrators only have access to cases to which they are assigned. Customers can now create their new compliance searches within an eDiscovery case using the new unified Purview eDiscovery experience. All content searches in a tenant created prior to May 26, 2025 are now accessible in the new unified Purview eDiscovery experience within a case titled "Content Search". Although the permissions remain consistent, eDiscovery managers and those with custom permissions will now only be able to view searches from within the eDiscovery cases to which they are assigned, including the "Content Search" case.

eDiscovery Standard transition to the new unified Purview eDiscovery experience

The classic Purview eDiscovery (Standard) solution experience has transitioned into the new unified Purview eDiscovery experience. Effective May 26th, the classic Purview eDiscovery (Standard) solution will no longer be available to customers within the classic Purview portal. All existing eDiscovery cases created in the classic Purview experience are now available within the new unified Purview eDiscovery experience.
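For organizations that script against the new unified experience, here is a minimal, hedged sketch (assuming an authenticated Microsoft Graph session with an eDiscovery scope) of locating the migrated "Content Search" case and listing the searches it now contains. The property names come from the v1.0 security eDiscovery APIs, but treat the snippet as illustrative rather than definitive.

# Assumes Connect-MgGraph has already been run with an eDiscovery scope, e.g. "eDiscovery.Read.All"
# (paging omitted for brevity)
$cases = (Invoke-MgGraphRequest -Method Get -Uri "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases").value

# Find the system-created case that now hosts pre-existing content searches
$contentSearchCase = $cases | Where-Object { $_.displayName -eq "Content Search" }

# List the searches that were migrated into that case
$searches = (Invoke-MgGraphRequest -Method Get -Uri "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$($contentSearchCase.id)/searches").value
$searches | ForEach-Object { $_.displayName }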
Retirement of eDiscovery Export PowerShell Cmdlet parameters The Export parameter within the ComplianceSearchAction eDiscovery PowerShell cmdlets will be retired on May 26, 2025: New-ComplianceSearchAction -Export parameter (and parameters dependent on export such as Report, Retentionreport …) Get-ComplianceSearchAction -Export parameter Set-ComplianceSearchAction -ChangeExportKey parameter We recognize that the removal of the Export parameter may require adjustments to your current workflow process when using Purview eDiscovery (Standard). The remaining Purview eDiscovery PowerShell cmdlets will continue to be supported after May 26 th , 2025: Create and update Compliance Cases New-ComplianceCase, Set-ComplianceCase Create and update Case Holds New-CaseHoldPolicy, Set-CaseHoldPolicy, New-CaseHoldRule, Set-CaseHoldRule Create, update and start Compliance Searches New-ComplianceSearch,Set-ComplianceSearch, Start-ComplianceSearch, Apply Purge action to a Compliance Search New-ComplianceSearchAction -Purge Additionally, if you have a Microsoft 365 E5 license and use eDiscovery (Premium), your organization can script all eDiscovery operations, including export, using the Microsoft Graph eDiscovery APIs. Purview eDiscovery Premium On May 26 th , there will be no changes to the classic Purview eDiscovery (Premium) solution in the classic Purview portal. Cases that were created using the Purview eDiscovery (Premium) classic case experience can also now be accessed in the new unified Purview eDiscovery experience. We recognize that these changes may impact your current processes, and we appreciate your support as we implement these updates. Microsoft runs on trust and protecting your data is our utmost priority. We believe these improvements will provide a more secure and reliable eDiscovery experience. To learn more about the Microsoft Purview eDiscovery solution and become an eDiscovery Ninja, please check out our eDiscovery Ninja Guide at https://aka.ms/eDiscoNinja!
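To round out the cmdlet list above, here is a minimal, hedged sketch of the Security and Compliance PowerShell operations that remain supported after May 26th. The case name, hold name, locations (including the M365 group mailbox) and queries are illustrative placeholders, not recommendations.

# Connect to Security & Compliance PowerShell
Connect-IPPSSession

# Case management
New-ComplianceCase -Name "Contoso Litigation"

# Case holds: user mailboxes and, where needed, an M365 group mailbox can be added
New-CaseHoldPolicy -Name "Contoso Litigation Hold" -Case "Contoso Litigation" `
    -ExchangeLocation "adelev@contoso.com","projectfalcon@contoso.com"
New-CaseHoldRule -Name "Contoso Litigation Hold Rule" -Policy "Contoso Litigation Hold" `
    -ContentMatchQuery 'subject:"Project Falcon"'

# Compliance searches
New-ComplianceSearch -Name "Falcon search" -Case "Contoso Litigation" `
    -ExchangeLocation "adelev@contoso.com" -ContentMatchQuery 'subject:"Project Falcon"'
Start-ComplianceSearch -Identity "Falcon search"

# Purge remains supported; export via New-ComplianceSearchAction is retired
New-ComplianceSearchAction -SearchName "Falcon search" -Purge -PurgeType SoftDelete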