Search and Purge using Microsoft Graph eDiscovery API
Welcome back to the series of blogs covering search and purge in Microsoft Purview eDiscovery! If you are new to this series, please first visit the earlier post: Search and Purge workflow in the new modern eDiscovery experience. Also, please ensure you have fully read the Microsoft Learn documentation on this topic, as I will not be covering some of the steps in full (permissions, releasing holds, all limitations): Find and delete Microsoft Teams chat messages in eDiscovery | Microsoft Learn

As a reminder, for E5/G5 customers and cases with premium features enabled, you must use the Graph API to execute the purge operation. With the eDiscovery Graph API, you can create the case, create a search, generate statistics, create an item report, and issue the purge command entirely from the API. It is also possible to use the Purview portal to create the case, create the search, generate statistics and samples, and generate the item report. However, the final validation of the items that would be purged (by rerunning the statistics operation) and the purge command itself must be run via the Graph API.

In this post, we will look at two examples: one involving an email message and one involving a Teams message. I will also show how to call the Graph APIs.

Purging email messages via the Graph API

In this example, I want to purge an email incorrectly sent to Debra Berger, and I also want to remove it from the sender's mailbox. Let's assume I do not know exactly who sent and received the email, but I do know the subject and the date it was sent.

I am going to use the modern eDiscovery experience in the Purview portal to create a new case where I will undertake some initial searches to locate the item. Once the case is created, I will create a search and give it a name. Because I do not know all the mailboxes where the email is present, my initial search is a tenant-wide search of all Exchange mailboxes, using the subject and date range as conditions to see which locations have hits.

Note: For scenarios where you know the location of the items, there is no requirement to run a tenant-wide search. You can target the search to the known locations instead.

I will then select Run Query and trigger a Statistics job to see which locations in the tenant have hits. For our purposes, we do not need to select Include categories, Include query keywords report, or Include partially indexed items. This triggers a Generate statistics job and takes you to the Statistics tab of the search. Once the job completes, it displays the total matches and the number of locations with hits.

To find out exactly which locations have hits, I can use the improved process reports to review more granular detail on the locations with hits. The report for the Generate statistics job can be found by selecting Process manager and then selecting the job. Once displayed, I can download the reports associated with this process by selecting Download report. The download is a ZIP file containing four different reports; to understand where I had hits, I review the Locations report within the ZIP file. If I open the Locations report and filter on the count column, I can see that in this instance I have two locations with hits: Admin and DebraB. I will use this to make my original search more targeted.
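As an illustration, subject-and-date conditions for this kind of initial search can also be expressed as a KQL query; the subject and dates below are invented placeholders for this sketch, not values from the scenario:

```
subject:"Contoso payroll summary" AND sent>=2025-06-01 AND sent<=2025-06-03
```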
It also gives me an opportunity to check that I am not going to exceed the limits on the number of items I can target for the purge per execution.

Returning to my original search, I will remove All people and groups from my Data sources and replace it with the two locations that had hits. I will re-run my Generate statistics job to ensure I am still getting the expected results. As the numbers align and remain consistent, I will do a further check and generate samples from the search. This allows me to review the items and confirm that they are the items I wish to purge. From the search query I select Run query and then Sample. This triggers a Generate sample job and takes you to the Sample tab of the search. Once complete, I can review samples of the items returned by the search to confirm these are the items I want to purge.

Now that I have confirmed, based on the sampling, that I have the items I want to purge, I want to generate a detailed item report of all items that match my search. To do this I need to generate an export report for the search.

Note: Sampling alone may not return all the results impacted by the search; it only returns a sample of the items that match the query. To determine the full set of items that will be targeted, we need to generate the export report.

From the search I can select Export to perform a direct export without having to add the data to a review set (available when premium features are enabled). Ensure you configure the following options on the export:

Indexed items that match your search query
Unselect all the options under Messages and related items from mailboxes and Exchange Online
Export Item report only

If you want to manually review the items that would be impacted by the purge operation, you can optionally export the items alongside the item report for further review. You can also add the search to a review set to review the items that you are targeting. The benefit of adding to a review set is that it enables you to review the items whilst still keeping the data within the M365 service boundary.

Note: If you add to a review set, a copy of the items will remain in the review set until the case is deleted.

I can review the progress of the export job and download the report via the Process manager. Once I have downloaded the report, I can review the Items.csv file to check the items targeted by the search.

It is at this stage that I must switch to using the Graph APIs to validate the actions that will be taken by the purge command and to issue the purge command itself. Not undertaking these additional validation steps can result in the unintended purge of data.

There are two approaches you can use to interact with the Microsoft Graph eDiscovery APIs:

Via Graph Explorer
Via the Microsoft Graph PowerShell module

For this example, I will show how to use Graph Explorer to make the relevant Graph API calls. For the Teams example, I will use the Microsoft Graph PowerShell module.

We are going to use the APIs to complete the following steps:

Trigger a statistics job via the API and review the results
Trigger the purge command

Graph Explorer can be accessed via the following link: Graph Explorer | Try Microsoft Graph APIs - Microsoft Graph

To start using Graph Explorer to work with the Microsoft Graph eDiscovery APIs, you first need to sign in with your admin account. You then need to consent to the required Microsoft Graph eDiscovery API permissions by selecting Consent to permissions.
From the Permissions flyout, search for eDiscovery and select Consent for eDiscovery.ReadWrite.All. When prompted to consent to the permissions for Graph Explorer, select Accept. Optionally, you can consent on behalf of your organization to suppress this step for others. Once complete, we can start making calls to the APIs via Graph Explorer.

To undertake the next steps we need to capture some additional information, specifically the case ID and the search ID. We can get the case ID from the Case settings in the Purview portal, recording the Id value shown on the Case details pane.

If we return to Graph Explorer, we can use this case ID to see all the searches within an eDiscovery case. The structure of the HTTPS call is as follows:

GET https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/<caseID>/searches

List searches - Microsoft Graph v1.0 | Microsoft Learn

If we replace <caseID> with the Id we captured from the case settings, we can issue the API call to list all the searches within the case and find the required search ID. When you issue the GET request in Graph Explorer, you can review the Response preview to find the search ID we are looking for.

Now that we have the case ID and the search ID, we can trigger an estimate by using the following Graph API call:

POST https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/estimateStatistics

ediscoverySearch: estimateStatistics - Microsoft Graph v1.0 | Microsoft Learn

Once you issue the POST command you will receive a 202 Accepted response. Next, use the following REST API call to review the status of the estimate statistics job in Graph Explorer:

GET https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/lastEstimateStatisticsOperation

List lastEstimateStatisticsOperation - Microsoft Graph v1.0 | Microsoft Learn

If the estimate job is not complete when you run the GET command, the Response preview will show the status as running. If the estimate job is complete, the Response preview will show you the results of the estimate job.

CRITICAL: Ensure that the indexedItemCount matches the number of items returned in the item report generated via the portal. If it does not match, do not proceed to issuing the purge command.

Now that I have validated everything, I am ready to issue the purge command via the Graph API. I will use the following Graph API call:

POST https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/purgeData

ediscoverySearch: purgeData - Microsoft Graph v1.0 | Microsoft Learn

With this POST command we also need to provide a request body to tell the API which areas we want to target (mailboxes or teamsMessages) and the purge type (recoverable or permanentlyDelete). As we are targeting email items, I will use mailboxes as the purgeAreas option. As I only want to remove the item from the user's mailbox view, I am going to use recoverable as the purgeType.

```json
{
  "purgeType": "recoverable",
  "purgeAreas": "mailboxes"
}
```

Once you issue the POST command you will receive a 202 Accepted response. Once the command has been issued, it will proceed to purge the items that match the search criteria from the locations targeted.
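If you prefer to script these same calls rather than click through Graph Explorer, here is a minimal sketch using Invoke-MgGraphRequest from the Microsoft Graph PowerShell module. The IDs are placeholders, and the expected count is an assumption for this sketch; you should take it from the Items.csv report you exported from the portal:

```powershell
# Connect with the same eDiscovery scope used throughout this post
Connect-MgGraph -Scopes "eDiscovery.ReadWrite.All"

$caseId   = "<ediscoveryCaseId>"    # from Case settings in the Purview portal
$searchId = "<ediscoverySearchId>"  # from the /searches call above
$base     = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/$caseId/searches/$searchId"

# 1. Trigger the estimate (the service replies 202 Accepted)
Invoke-MgGraphRequest -Method POST -Uri "$base/estimateStatistics"

# 2. Poll the last estimate operation until it leaves the "running" state
do {
    Start-Sleep -Seconds 30
    $op = Invoke-MgGraphRequest -Method GET -Uri "$base/lastEstimateStatisticsOperation"
} while ($op.status -eq "running")

# 3. Validate the count against your item report before purging
$expectedCount = 2  # placeholder: replace with the row count from your Items.csv
if ($op.indexedItemCount -ne $expectedCount) {
    throw "indexedItemCount ($($op.indexedItemCount)) does not match the item report ($expectedCount). Do not purge."
}

# 4. Issue the purge with the same request body shown above
$body = @{ purgeType = "recoverable"; purgeAreas = "mailboxes" }
Invoke-MgGraphRequest -Method POST -Uri "$base/purgeData" -Body $body
```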
If I go back to my original example, we can now see the item has been removed from the user's mailbox. As it has been soft deleted, I can review the Recoverable Items folder from Outlook on the web, where I can see that, for the user, it has now been deleted pending clean-up from their mailbox.

Purging Teams messages via the Graph API

In this example, I want to purge a Teams conversation between Debra, Adele and the admin (CDX) from all participants' Teams clients. I am going to reuse the "HK016 – Search and Purge" case and create a new search called "Teams conversation removal". I add the three participants of the chat as Data sources on the search, and I then use the KeyQL condition to target the items I want to remove. In this example I am using the following KeyQL:

(Participants=AdeleV@M365x00001337.OnMicrosoft.com AND Participants=DebraB@M365x00001337.OnMicrosoft.com AND Participants=admin@M365x00001337.onmicrosoft.com) AND (Kind=im OR Kind=microsoftteams) AND (Date=2025-06-04)

This looks for all Teams messages that include all three participants and were sent on the 4th of June 2025. It is critical when targeting Teams messages that my query targets exactly the items I want to purge. With Teams messages (as opposed to email items) there are fewer options available to granularly target the items for purging.

Note: The new Identifier condition is not supported for purge operations. Using it can lead to unintended data being removed, so it should not be used as a condition in the search at this time.

If I were looking for a very specific phrase, I could further refine the query by using the Keyword condition to look for that specific Teams message.

Once I have created my search, I am ready to generate both statistics and samples so I can validate that I am targeting the right items. My statistics job has returned 21 items, 7 from each location targeted. This aligns with the number of items within the Teams conversation. However, I am also going to validate that the samples I have generated match the content I want to purge, ensuring that I haven't inadvertently returned additional items I was not expecting.

Now that I have confirmed, based on the sampling, that the items returned look correct, I want to generate a detailed item report of all items that match my search. To do this I need to generate an export report for the search. From the search I can select Export to perform a direct export without having to add the data to a review set (available when premium features are enabled). Ensure you configure the following options on the export:

Indexed items that match your search query
Unselect all the options under Messages and related items from mailboxes and Exchange Online
Export Item report only

Once I select Export, a new export job is created. I can review the progress of the job and download the report via the Process manager. Once I have downloaded the report, I can review the Items.csv file to check the items targeted by the search that would be purged when I issue the purge call.

Now that I have confirmed that the search is targeting the items I want to purge, it is at this stage that I must switch to using the Graph APIs. As discussed, there are two approaches you can use to interact with the Microsoft Graph eDiscovery APIs:

Using Graph Explorer
Using the Microsoft Graph PowerShell module

For this example, I will show how to use the Microsoft Graph PowerShell module to make the relevant Graph API calls.
To understand how to use Graph Explorer to issue the purge command, please refer to the previous example for purging email messages. We are going to use the APIs to complete the following steps:

Trigger a statistics job via the API and review the results
Trigger the purge command

To install the Microsoft Graph PowerShell module, please refer to the following article: Install the Microsoft Graph PowerShell SDK | Microsoft Learn. To understand more about the module and how to get started, you can review the following article: Get started with the Microsoft Graph PowerShell SDK | Microsoft Learn

Once the PowerShell module is installed, you can connect to the eDiscovery Graph APIs by running the following command:

Connect-MgGraph -Scopes "eDiscovery.ReadWrite.All"

You will be prompted to authenticate; once complete, you will be presented with a welcome banner.

To undertake the next steps we need to capture some additional information, specifically the case ID and the search ID. As before, we can get the case ID from the Case settings in the Purview portal, recording the Id value shown on the Case details pane. Alternatively, we can use the following PowerShell command to list the cases and their IDs:

Get-MgSecurityCaseEdiscoveryCase | ft DisplayName, Id

List ediscoveryCases - Microsoft Graph v1.0 | Microsoft Learn

Once we have the ID of the case we want to execute the purge command from, we can run the following command to find the IDs of all the search jobs in the case:

Get-MgSecurityCaseEdiscoveryCaseSearch -EdiscoveryCaseId <ediscoveryCaseId> | ft DisplayName, Id, ContentQuery

List searches - Microsoft Graph v1.0 | Microsoft Learn

Now that we have both the case ID and the search ID, we can trigger the generate statistics job using the following command:

Invoke-MgEstimateSecurityCaseEdiscoveryCaseSearchStatistics -EdiscoveryCaseId <ediscoveryCaseId> -EdiscoverySearchId <ediscoverySearchId>

ediscoverySearch: estimateStatistics - Microsoft Graph v1.0 | Microsoft Learn

Next, use the following command to review the status of the estimate statistics job:

Get-MgSecurityCaseEdiscoveryCaseSearchLastEstimateStatisticsOperation -EdiscoveryCaseId <ediscoveryCaseId> -EdiscoverySearchId <ediscoverySearchId>

List lastEstimateStatisticsOperation - Microsoft Graph v1.0 | Microsoft Learn

If the estimate job is not complete when you run the command, the status will show as running. If the estimate job is complete, the status will show as succeeded and the number of hits will appear in indexedItemCount.

CRITICAL: Ensure that the indexedItemCount matches the number of items returned in the item report generated via the portal. If it does not match, do not proceed to issuing the purge command.

Now that I have validated everything, I am ready to issue the purge command via the Graph API. With this command we need to provide a request body to tell the API which areas we want to target (mailboxes or teamsMessages) and the purge type (recoverable or permanentlyDelete). As we are targeting Teams items, I will use teamsMessages as the purgeAreas option.

Note: If you specify mailboxes, then only the compliance copy stored in the user mailbox will be purged, not the item in the Teams service itself. The item will remain visible to the user in Teams and can no longer be purged. When purgeType is set to either recoverable or permanentlyDelete and purgeAreas is set to teamsMessages, the Teams messages are permanently deleted. In other words, either option will result in the permanent deletion of the items from Teams, and they cannot be recovered.
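Before issuing the purge, it can help to script the validation step rather than re-running the Get command by hand. Here is a minimal sketch that polls the estimate operation and checks the count; the expected count of 21 comes from this post's example, and in practice you should take it from your own Items.csv report:

```powershell
$caseId   = "<ediscoveryCaseId>"
$searchId = "<ediscoverySearchId>"

# Poll the estimate operation until it leaves the "running" state
do {
    Start-Sleep -Seconds 30
    $op = Get-MgSecurityCaseEdiscoveryCaseSearchLastEstimateStatisticsOperation `
            -EdiscoveryCaseId $caseId -EdiscoverySearchId $searchId
} while ($op.Status -eq "running")

# Compare against the item report exported from the portal (21 items in this example)
$expectedCount = 21
if ($op.Status -ne "succeeded" -or $op.IndexedItemCount -ne $expectedCount) {
    throw "Estimate returned $($op.IndexedItemCount) items (status: $($op.Status)); expected $expectedCount. Do not purge."
}
```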
To issue the purge, I first prepare the request body:

```powershell
$params = @{
    purgeType  = "recoverable"
    purgeAreas = "teamsMessages"
}
```

Once I have prepared my request body, I will issue the following command:

Clear-MgSecurityCaseEdiscoveryCaseSearchData -EdiscoveryCaseId $ediscoveryCaseId -EdiscoverySearchId $ediscoverySearchId -BodyParameter $params

ediscoverySearch: purgeData - Microsoft Graph v1.0 | Microsoft Learn

Once the command has been issued, it will proceed to purge the items that match the search criteria from the locations targeted. If I go back to my original example, we can now see the items have been removed from Teams.

Congratulations, you have made it to the end of the blog post. Hopefully you found it useful and it assists you in building your own operational processes for using the Graph API to issue search and purge actions.
Registration Open: Community-Led Purview Lightning Talks

Get ready for an electrifying event! The Microsoft Security Community proudly presents Purview Lightning Talks: an action-packed series featuring your fellow Microsoft users, partners, and passionate Microsoft Security community members of all sorts. Each 3-12 minute talk cuts straight to the chase, delivering expert insights, real-world use cases, and even a few game-changing tips and tricks. Don't miss this opportunity to learn, connect, and be inspired! Secure your spot now for the big day: April 30th at 8am Redmond time. 💙 See agenda details below and follow this blog post (sign in and click the "follow" heart in the upper right) to receive notifications. We have more speaker details and community connection information coming soon!

AGENDA

The Day Offboarding Exposed Infinite Retention - Nikki Chapple (nikkichapple). A real-world discovery of orphaned OneDrives and retention debt caused by retain-only policies, and how Adaptive Scopes help prevent it. Topic: Data Lifecycle Management

Securing Data in the Age of AI - Julio Cesar Goncalves Vasconcelos. How Microsoft Purview enables organizations to accelerate AI adoption while maintaining security, compliance, and transparency. Topic: Purview for AI

What's In My Compliance Manager Toolbox - Jerrad Dahlager (j-dahl7). A practical walkthrough of using Compliance Manager to map controls, track improvements, and simplify multi-framework compliance. Topic: Compliance Manager

Why You Should Create Your Own Sensitive Information Types (SITs) - Niels Jakobsen (Niels_Jakobsen). An in-depth analysis of why built-in SITs are not one-size-fits-all, and how to tailor them for real enterprise needs. Topic: Information Protection

Beyond eDiscovery – Purview DSI for Security Investigation - Susantha Silva. How to turn DLP alerts and Insider Risk signals into structured data investigations without jumping between portals. Topic: Data Security (DSI)

Four Labels Max for Daily Use: Which Ones & Why? - Romain Dalle. A minimalist sensitivity labeling baseline designed for real-world adoption and usability. Topic: Information Protection

Elevating Purview DLP with a Real-World Use Case - Victor Wingsing (vicwingsing). Hardening Purview DLP beyond default configurations to close real-world data loss gaps. Topic: Data Loss Prevention (DLP)

Stop, Think, Protect: Data Security in Real Life with Purview - Oliver Sahlmann. A traffic-light approach showing how simple labels and DLP policies still deliver meaningful protection. Topic: Data Security

The Purview Label Engine: Automated Classification & Documentation - Michael Kirst Neshva (MichaelKirst1970). A scalable framework for rolling out Microsoft Purview labels across global, multilingual enterprises. Topic: Information Protection

Data-Driven Endpoint DLP with Advanced Hunting - Tatu Seppälä (tatuseppala). Using KQL queries and usage patterns to refine endpoint DLP policies based on real behavior. Topic: Data Loss Prevention (DLP)

Improving Discovery, Trust, and Reuse of Analytics with Purview Data Products - Craig Wyndowe (CraigWyndowe). How Purview Governance Domains and Data Products create a trusted, reusable analytics ecosystem. Topic: Data Governance

From Zero to First Signal: Insider Risk Management Prerequisites That Matter - Sathish Veerapandian. A focused look at the configurations required for Insider Risk Management to actually generate alerts.
Topic: Insider Risk Management

The Purview Hack No One Talks About: Container Sensitivity Labels - Nikki Chapple (nikkichapple). How container sensitivity labels instantly fix oversharing for Teams, Groups, and SharePoint sites. Topic: Information Protection

Using Purview to Prevent Oversharing with AI Services - Viktor Hedberg (headburgh). How Information Protection and DLP prevent Copilot and AI services from exposing sensitive data. Topic: Information Protection & DLP

How I Helped Customers Understand Their AI Usage (and Protect Data) - Bram de Jager. Exposing risky AI usage patterns and protecting sensitive data entered into public AI tools. Topic: Data Security Posture Management for AI

Bulk Sensitivity Label Removal with Microsoft Purview Information Protection (MPIP) - Zak Hepler. A practical demo on safely removing sensitivity labels at scale from SharePoint libraries. Topic: Information Protection

Does M365 Support eDiscovery? (Mythbusting) - Julian Kusenberg (Leprechaun91). A myth-busting session separating perception from reality in Microsoft 365 eDiscovery. Topic: eDiscovery
Welcome to the Microsoft Security Community!

Microsoft Security Community Hub | Protect it all with Microsoft Security

Eliminate gaps and get the simplified, comprehensive protection, expertise, and AI-powered solutions you need to innovate and grow in a changing world. The Microsoft Security Community is your gateway to connect, learn, and collaborate with peers, experts, and product teams. Gain access to technical discussions and webinars, and help shape Microsoft's security products.

Get there fast

To stay up to date on upcoming opportunities and the latest Microsoft Security Community news, make sure to subscribe to our email list. Find the latest skilling content and on-demand videos by subscribing to the Microsoft Security Community YouTube channel. Catch the latest announcements and connect with us on LinkedIn via Microsoft Security Community and Microsoft Entra Community. Read the latest in the Microsoft Security Community blog.

Upcoming Community Calls

April 2026

CANCELLED: Apr. 14 | 8:00am | Microsoft Sentinel | Using distributed content to manage your multi-tenant SecOps
Content distribution is a powerful multi-tenant feature that enables scalable management of security content across tenants. With this capability, you can create content distribution profiles in the multi-tenant portal that allow you to seamlessly replicate existing content, such as custom detection rules and endpoint security policies, from a source tenant to designated target tenants. Once distributed, the content runs on the target tenant, enabling centralized control with localized execution. This allows you to onboard new tenants quickly and maintain a consistent security baseline across tenants. In this session we'll walk through how you can use this new capability to scale your security operations.

Apr. 23 | 8:00am | Security Copilot Skilling Series | Getting started with Security Copilot
New to Security Copilot? This session walks through what you actually need to get started, including E5 inclusion requirements and a practical overview of the core experiences and agents you will use on day one.

RESCHEDULED: Apr. 28 | 8:00am | Security Copilot Skilling Series | Security Copilot Agents, DSPM AI Observability, and IRM for Agents
This session covers how Microsoft Purview supports AI risk visibility and investigation through Data Security Posture Management (DSPM) and Insider Risk Management (IRM), alongside Security Copilot-powered agents. It will cover what AI Observability in DSPM is, as well as IRM for Agents in Copilot Studio and Azure AI Foundry. Attendees will learn about the IRM Triage Agent and DSPM Posture Agent and their deployment, and will gain an understanding of how DSPM and IRM capabilities can be leveraged to improve visibility, context, and response for AI-related data risks in Microsoft Purview.

Apr. 30 | 8:00am | Microsoft Security Community Presents | Purview Lightning Talks
Join the Microsoft Security Community for Purview Lightning Talks: quick technical sessions delivered by the community, for the community. You'll pick up practical Purview gems: must-know Compliance Manager tips, smart data security tricks, real-world scenarios, and actionable governance recommendations, all in one energizing event. Hear directly from Purview customers, partners, and community members and walk away with ideas you can put to work immediately. Register now; full agenda coming soon!
May 2026

May 12 | 9:00am | Microsoft Sentinel | Hyper scale your SOC: Manage delegated access and role-based scoping in Microsoft Defender
In this session we'll discuss unified role-based access control (RBAC) and granular delegated admin privileges (GDAP) expansions, including how to use RBAC to:

Allow multiple SOC teams to operate securely within a shared Sentinel environment
Support granular, row-level access without requiring workspace separation
Get consistent and reusable scope definitions across tables and experiences

and how to use GDAP to:

Manage MSSPs and hyper-scaler organizations with delegated access to governed tenants within the Defender portal
Manage delegated access for Sentinel

Looking for more? Join the Security Advisors!

As a Security Advisor, you'll gain early visibility into product roadmaps, participate in focus groups, and access private preview features before public release. You'll have a direct channel to share feedback with engineering teams, influencing the direction of Microsoft Security products. The program also offers opportunities to collaborate and network with fellow end users and Microsoft product teams. Join the Security Advisors program that best fits your interests: www.aka.ms/joincommunity.

Additional resources

Microsoft Security Hub on Tech Community
Virtual Ninja Training Courses
Microsoft Security Documentation
Azure Network Security GitHub
Microsoft Defender for Cloud GitHub
Microsoft Sentinel GitHub
Microsoft Defender XDR GitHub
Microsoft Defender for Cloud Apps GitHub
Microsoft Defender for Identity GitHub
Microsoft Purview GitHub

Authorization and Governance for AI Agents: Runtime Authorization Beyond Identity at Scale
Designing Authorization-Aware AI Agents at Scale: Enforcing Runtime RBAC + ABAC with Approval Injection (JIT)

Microsoft Entra Agent Identity enables organizations to govern and manage AI agent identities in Copilot Studio, improving visibility and identity-level control. However, as enterprises deploy multiple autonomous AI agents, identity and OAuth permissions alone cannot answer a more critical question: "Should this action be executed now, by this agent, for this user, under the current business and regulatory context?"

This post introduces a reusable Authorization Fabric, combining a Policy Enforcement Point (PEP) and a Policy Decision Point (PDP), implemented as a Microsoft Entra-protected endpoint using Azure Functions/App Service authentication. Every AI agent (Copilot Studio or AI Foundry/Semantic Kernel) calls this fabric before tool execution and receives a deterministic runtime decision: ALLOW / DENY / REQUIRE_APPROVAL / MASK.

Who this is for

Anyone building AI agents (Copilot Studio, AI Foundry/Semantic Kernel) that call tools, workflows, or APIs
Organizations scaling to multiple agents and needing consistent runtime controls
Teams operating in regulated or security-sensitive environments, where decisions must be deterministic and auditable

Why a V2? Identity is necessary; runtime authorization is missing

Entra Agent Identity (preview) integrates Copilot Studio agents with Microsoft Entra so that newly created agents automatically get an Entra agent identity, manageable in the Entra admin center, with identity activity logged in Entra. That solves who the agent is and improves identity governance visibility. But multi-agent deployments introduce a new risk class: autonomous execution sprawl, where many agents, operating with delegated privileges, invoke the same backends independently.

OAuth and API permissions answer "can the agent call this API?" They do not answer "should the agent execute this action under business policy, compliance constraints, data boundaries, and approval thresholds?" This is where a runtime authorization decision plane becomes essential.

The pattern: Microsoft Entra-protected Authorization Fabric (PEP + PDP)

Instead of embedding RBAC logic independently inside every agent, use a shared fabric:

PEP (Policy Enforcement Point): gatekeeper invoked before any tool/action
PDP (Policy Decision Point): evaluates RBAC + ABAC + approval policies
Decision output: ALLOW / DENY / REQUIRE_APPROVAL / MASK

This Authorization Fabric functions as a shared enterprise control plane, decoupling authorization logic from individual agents and enforcing policies consistently across all autonomous execution paths.

Architecture (POC reference architecture)

Use a single runtime decision plane that sits between agents and tools. What's important here:

Every agent (Copilot Studio or AI Foundry/SK) calls the Authorization Fabric API first
The fabric is a protected endpoint (a Microsoft Entra-protected endpoint is required)
Tools (Graph/ERP/CRM/custom APIs) are invoked only after an ALLOW decision (or approval)

Trust boundaries enforced by this architecture:

Agents never call business tools directly without a prior authorization decision
The Authorization Fabric validates caller identity via Microsoft Entra
Authorization decisions are centralized, consistent, and auditable
Approval workflows act as a runtime "break-glass" control for high-impact actions

This ensures identity, intent, and execution are independently enforced, rather than implicitly trusted.
Runtime flow (Decision → Approval → Execution)

Here is the runtime sequence as a simple flow:

```mermaid
flowchart TD
    START(["START"]) --> S1["[1] User Request"]
    S1 --> S2["[2] Agent Extracts Intent\n(action, resource, attributes)"]
    S2 --> S3["[3] Call /authorize\n(Entra protected)"]
    S3 --> S4
    subgraph S4["[4] PDP Evaluation"]
        ABAC["ABAC: Tenant · Region · Data Sensitivity"]
        RBAC["RBAC: Entitlement Check"]
        Threshold["Approval Threshold"]
        ABAC --> RBAC --> Threshold
    end
    S4 --> Decision{"[5] Decision?"}
    Decision -->|"ALLOW"| Exec["Execute Tool / API"]
    Decision -->|"MASK"| Masked["Execute with Masked Data"]
    Decision -->|"DENY"| Block["Block Request"]
    Decision -->|"REQUIRE_APPROVAL"| Approve{"[6] Approval Flow"}
    Approve -->|"Approved"| Exec
    Approve -->|"Rejected"| Block
    Exec --> Audit["[7] Audit & Telemetry"]
    Masked --> Audit
    Block --> Audit
    Audit --> ENDNODE(["END"])

    style START fill:#4A90D9,stroke:#333,color:#fff
    style ENDNODE fill:#4A90D9,stroke:#333,color:#fff
    style S1 fill:#5B5FC7,stroke:#333,color:#fff
    style S2 fill:#5B5FC7,stroke:#333,color:#fff
    style S3 fill:#E8A838,stroke:#333,color:#fff
    style S4 fill:#FFF3E0,stroke:#E8A838,stroke-width:2px
    style ABAC fill:#FCE4B2,stroke:#999
    style RBAC fill:#FCE4B2,stroke:#999
    style Threshold fill:#FCE4B2,stroke:#999
    style Decision fill:#fff,stroke:#333
    style Exec fill:#2ECC71,stroke:#333,color:#fff
    style Masked fill:#27AE60,stroke:#333,color:#fff
    style Block fill:#C0392B,stroke:#333,color:#fff
    style Approve fill:#F39C12,stroke:#333,color:#fff
    style Audit fill:#3498DB,stroke:#333,color:#fff
```

Design principle: No tool execution occurs until the Authorization Fabric returns ALLOW, or REQUIRE_APPROVAL is satisfied via an approval workflow.

Where Power Automate fits (important for readers)

In most Copilot Studio implementations, Power Automate (agent flows) is the practical integration layer through which agents call enterprise services and APIs. Copilot Studio supports "agent flows" as a way to extend agent capabilities with low-code workflows. For this pattern, Power Automate typically:

acquires/uses the right identity context for the call (depending on your tenant setup),
calls the /authorize endpoint of the Authorization Fabric, and
returns the decision payload to the agent for branching.

Copilot Studio also supports calling REST endpoints directly using the HTTP Request node, including passing headers such as Authorization: Bearer <token>.

Protected endpoint only: securing the Authorization Fabric with Microsoft Entra

For this V2 pattern, the Authorization Fabric must be protected as a Microsoft Entra-protected endpoint on Azure Functions/App Service (built-in auth). Microsoft Learn provides the configuration guidance for enabling Microsoft Entra as the authentication provider for Azure App Service / Azure Functions.

Step 1: Create the Authorization Fabric API (Azure Function)

Expose an HTTP-triggered authorization endpoint (for example, POST /api/authorize). A sketch of such a function appears after these steps.

Step 2: Enable the Microsoft Entra-protected endpoint on the Function App

In the Azure portal:

Function App → Authentication
Add identity provider → Microsoft
Choose Workforce configuration (enterprise tenant)
Set Require authentication for all requests

This ensures the Authorization Fabric is not callable without a valid Entra token.

Step 3: Optional hardening (recommended)

Depending on enterprise posture, layer:

IP restrictions / private endpoints
APIM in front of the Function for rate limiting, request normalization, and centralized logging

(For a POC, keep it minimal; add hardening incrementally.)
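The original post's function snippet is not reproduced in this excerpt; the following is a minimal, illustrative sketch of the /authorize function written as a PowerShell Azure Function. The policy values (the restricted region, the Confidential classification, the 50,000 threshold, the CreatePO action) are invented for the sketch; a real PDP would load versioned policy packs from a store such as Cosmos DB, as described below.

```powershell
# run.ps1 - illustrative body of the /authorize HTTP-triggered function (not the author's code)
using namespace System.Net

param($Request, $TriggerMetadata)

$body = $Request.Body  # expected fields: action, resource, attributes (see the contract sketch below)

# Invented policy values for this sketch; a real PDP loads these from a policy store
$restrictedRegions = @("embargoed-region")
$approvalThreshold = 50000

$decision = "ALLOW"
$reason   = "Within policy"

# 1. ABAC hard denies first (tenant isolation / residency)
if ($body.attributes.region -in $restrictedRegions) {
    $decision = "DENY"; $reason = "Region not permitted"
}
# 2. Classification rules (mask rather than expose sensitive data)
elseif ($body.attributes.dataSensitivity -eq "Confidential") {
    $decision = "MASK"; $reason = "Sensitive data must be masked"
}
# 3./4. RBAC entitlement plus threshold check, with approval injection (JIT step-up)
elseif ($body.action -eq "CreatePO" -and [int]$body.attributes.amount -gt $approvalThreshold) {
    $decision = "REQUIRE_APPROVAL"; $reason = "Amount exceeds approval threshold"
}

# Return the deterministic decision to the calling agent or flow
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = (@{ decision = $decision; reason = $reason } | ConvertTo-Json)
})
```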
Externalizing policy (so governance scales)

To make this pattern reusable across multiple agents, policies should not be hardcoded inside each agent. Instead, store policy definitions in a central policy store such as Cosmos DB (or an equivalent configuration store), and have the PDP load and evaluate policies at runtime. Why this matters:

Policy changes apply across all agents instantly (no agent republish)
Central governance, versioning, and rollback become possible
Audit and reporting become consistent across environments

(For the POC, a single JSON document per policy pack in Cosmos DB is sufficient. For production, add versioning and staged rollout.) Store one PolicyPack JSON document per environment (dev/test/prod), and include version, effectiveFrom, and priority fields for safe rollout and rollback.

Minimal decision contract (standard request/response)

To keep the fabric reusable across agents, standardize the request payload and the deterministic decision response. A hypothetical sketch of both follows.
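The payload examples are not reproduced in this excerpt; here is an invented sketch of what such a contract could look like, with all field names chosen for illustration (they mirror the telemetry fields listed later in this post, not a published schema). A request payload:

```json
{
  "agentId": "finance-agent-01",
  "userUPN": "debra@contoso.com",
  "action": "CreatePO",
  "resource": "erp/purchase-orders",
  "attributes": {
    "amount": 70000,
    "region": "EU",
    "dataSensitivity": "General"
  }
}
```

And a deterministic decision response:

```json
{
  "decision": "REQUIRE_APPROVAL",
  "reason": "Amount exceeds approval threshold",
  "policyIds": ["finance-po-threshold-v3"],
  "correlationId": "8f6c2d9e-0b1a-4c7f-9e52-1a2b3c4d5e6f"
}
```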
Example scenario (1 minute to understand)

Scenario: a user asks a Finance agent to create a purchase order for 70,000. Even if the user has API permission and the agent can technically call the ERP API, runtime policy should:

return REQUIRE_APPROVAL (threshold exceeded),
trigger an approval workflow, and
execute only after approval is granted.

This is the difference between API access and authorized business execution.

Sample policy model (RBAC + ABAC + approval)

This POC policy model intentionally stays simple while demonstrating both coarse- and fine-grained governance.

1) Coarse-grained RBAC (roles → actions)

FinanceAnalyst: CreatePO up to 50,000; ViewVendor
FinanceManager: CreatePO up to 100,000 and/or approve higher spend

2) Fine-grained ABAC (conditions at runtime)

ABAC evaluates context such as region, classification, tenant boundary, and risk.

3) Approval injection (agent-level JIT execution)

For higher-risk, high-impact actions, the fabric returns REQUIRE_APPROVAL rather than a hard deny, when appropriate.

How policies should be evaluated (deterministic order)

To ensure predictable and auditable behavior, evaluate in a deterministic order:

1. Tenant isolation and residency (ABAC hard deny first)
2. Classification rules (deny or mask)
3. RBAC entitlement validation
4. Threshold/risk evaluation
5. Approval injection (JIT step-up)

This prevents approval workflows from bypassing foundational security boundaries such as tenant isolation or data sovereignty.

Copilot Studio integration (enforcing runtime authorization)

Copilot Studio can call external REST APIs using the HTTP Request node, including passing headers such as Authorization: Bearer <token> and binding a response schema for branching logic. Copilot Studio also supports using flows with agents ("agent flows") to extend capabilities and orchestrate actions.

Option A (recommended): Copilot Studio → agent flow (Power Automate) → Authorization Fabric

Why: flows are a practical place to handle token acquisition patterns, approval orchestration, and standardized logging. Topic flow:

Extract user intent + parameters
Call an agent flow that calls /authorize and returns the decision payload
Branch in the topic: if ALLOW, proceed to the tool call; if REQUIRE_APPROVAL, trigger the approval flow and proceed only if approved; if DENY, stop and explain the policy reason

Important: Tool execution must never be reachable through an alternate topic path that bypasses the authorization check.

Option B: Direct HTTP Request node to the Authorization Fabric

Use the Send HTTP request node to call the authorization endpoint and branch using the response schema. This approach is clean, but token acquisition and secure secretless authentication are often simpler when handled via a managed integration layer (flow + connector).

AI Foundry / Semantic Kernel integration (tool invocation gate)

For Foundry/SK agents, the integration point is before tool execution. Semantic Kernel supports Azure AI agent patterns and tool integration, making it a natural place to enforce a pre-tool authorization check. The pattern:

Agent extracts intent + context
Agent calls the Authorization Fabric
Agent enforces the decision
Tool executes only when allowed (or after approval)

Telemetry and audit (what security architects will ask for)

Even the best policy engine is incomplete without audit trails. At minimum, log:

agentId, userUPN, action, resource
decision + reason + policyIds
approval outcome (if any)
correlationId for downstream tool execution

Why it matters: you now have a defensible answer to "Why did an autonomous agent execute this action?"

Security signal bonus: denials, unusual approval rates, and repeated policy mismatches can also indicate prompt injection attempts, mis-scoped agents, or governance drift.

What this enables (and why it scales)

With a shared Authorization Fabric, you can:

Avoid duplicating authorization logic across agents
Standardize decisions across Copilot Studio + Foundry agents
Update governance once (a policy change) and apply it everywhere
Make autonomy safer without blocking productivity

Closing: identity gets you who. Runtime authorization gets you whether, when, and how.

Copilot Studio can automatically create Entra agent identities (preview), improving identity governance and visibility for agents. But safe autonomy requires a runtime decision plane, and securing that plane as an Entra-protected endpoint is foundational for enterprise deployments. In enterprise environments, autonomous execution without runtime authorization is equivalent to privileged access without PIM: powerful, fast, and operationally risky.

Authorization and Identity Governance Inside AI Agents
Designing Authorization-Aware AI Agents: Enforcing Microsoft Entra ID RBAC in Copilot Studio

As AI agents move from experimentation to enterprise execution, authorization becomes the defining line between innovation and risk. AI agents are rapidly evolving from experimental assistants into enterprise operators: retrieving user data, triggering workflows, and invoking protected APIs. While many early implementations rely on prompt-level instructions to control access, regulated enterprise environments require authorization to be enforced by identity systems, not language models. This article presents a production-ready, identity-first architecture for building authorization-aware AI agents using Copilot Studio, Power Automate, Microsoft Entra ID, and Microsoft Graph, ensuring every agent action executes strictly within the requesting user's permissions.

Why prompt-level security is not enough

Large language models interpret intent; they do not enforce policy. Even the most carefully written prompts cannot:

Validate Microsoft Entra ID group or role membership
Reliably distinguish delegated user identity from application identity
Enforce deterministic access decisions
Produce auditable authorization outcomes

Relying on prompts for authorization introduces silent security failures, over-privileged access, and compliance gaps, particularly in financial services, healthcare, and other regulated industries. Authorization is not a reasoning problem. It is an identity enforcement problem.

Common authorization anti-patterns in AI agents

The following patterns frequently appear in early AI agent implementations and should be avoided in enterprise environments:

Hard-coded role or group checks embedded in prompts
Trusting group names passed as plain-text parameters
Using application permissions for user-initiated actions
Skipping verification of the user's Entra ID identity
Lacking an auditable authorization decision point

These approaches may work in demos, but they do not survive security reviews, compliance audits, or real-world misuse scenarios.

Authorization-aware agent architecture

In an authorization-aware design, the agent never decides access. Authorization is enforced externally, by identity-aware workflows that sit outside the language model's reasoning boundary. High-level flow:

The Copilot Studio agent receives a user request
The agent passes the User Principal Name (UPN) and intended action
A Power Automate flow validates permissions using Microsoft Entra ID via Microsoft Graph
Only authorized requests are allowed to proceed
Unauthorized requests fail fast with a deterministic outcome

An authorization-aware Copilot Studio architecture enforces Entra ID RBAC before executing any business action. The agent orchestrates intent; identity systems enforce access.

Enforcing Entra ID RBAC with Microsoft Graph

Power Automate acts as the authorization enforcement layer:

Resolve user identity from the supplied UPN
Retrieve group or role memberships using Microsoft Graph
Normalize and compare memberships against approved RBAC groups
Explicitly deny execution when authorization fails

This keeps authorization logic centralized, deterministic, auditable, and independent of the AI model.

Reference implementation: Power Automate RBAC enforcement flow

The following import-ready Power Automate cloud flow demonstrates a secure RBAC enforcement pattern for Copilot Studio agents. It validates Microsoft Entra ID group membership before allowing any business action.
Scenario:

Trigger: user-initiated agent action
Identity model: delegated user identity
Input: userUPN, requestedAction
Outcome: authorized or denied based on Entra ID RBAC

```json
{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
  "contentVersion": "1.0.0.0",
  "triggers": {
    "Copilot_Request": {
      "type": "Request",
      "kind": "Http",
      "inputs": {
        "schema": {
          "type": "object",
          "properties": {
            "userUPN": { "type": "string" },
            "requestedAction": { "type": "string" }
          },
          "required": [ "userUPN" ]
        }
      }
    }
  },
  "actions": {
    "Get_User_Groups": {
      "type": "Http",
      "inputs": {
        "method": "GET",
        "uri": "https://graph.microsoft.com/v1.0/users/@{triggerBody()?['userUPN']}/memberOf?$select=displayName",
        "authentication": { "type": "ManagedServiceIdentity" }
      }
    },
    "Normalize_Group_Names": {
      "type": "Select",
      "inputs": {
        "from": "@body('Get_User_Groups')?['value']",
        "select": "@toLower(item()?['displayName'])"
      },
      "runAfter": { "Get_User_Groups": [ "Succeeded" ] }
    },
    "Check_Authorization": {
      "type": "Condition",
      "expression": "@contains(body('Normalize_Group_Names'), 'ai-authorized-users')",
      "runAfter": { "Normalize_Group_Names": [ "Succeeded" ] },
      "actions": {
        "Authorized_Action": {
          "type": "Compose",
          "inputs": "User authorized via Entra ID RBAC"
        }
      },
      "else": {
        "actions": {
          "Access_Denied": {
            "type": "Terminate",
            "inputs": {
              "status": "Failed",
              "message": "Access denied. User not authorized via Entra ID RBAC."
            }
          }
        }
      }
    }
  }
}
```

Note that Normalize_Group_Names projects the group display names to an array of lowercase strings, so the contains() check in Check_Authorization can match the approved group name directly. This pattern enforces authorization outside the agent, aligns with Zero Trust principles, and creates a clear audit boundary suitable for enterprise and regulated environments.

Flow diagram: the agent integrated with the RBAC authorization flow, and a sample prompt execution.

Delegated vs application permissions

User-initiated agent actions: delegated permissions
Background or system automation: application permissions

Using delegated permissions ensures agent execution remains strictly within the requesting user's identity boundary.

Auditing and compliance benefits

Deterministic and explainable authorization decisions
Centralized enforcement aligned with identity governance
Clear audit trails for security and compliance reviews
Readiness for SOC, ISO, PCI, and FSI assessments

Enterprise security takeaways

Authorization belongs in Microsoft Entra ID, not prompts
AI agents must respect enterprise identity boundaries
Copilot Studio + Power Automate + Microsoft Graph enable secure-by-design AI agents

By treating AI agents as first-class enterprise actors and enforcing authorization at the identity layer, organizations can scale AI adoption with confidence, trust, and compliance.
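Finally, as a quick way to smoke-test the flow above end to end, you can POST to the flow's HTTP trigger URL. The URL below is a placeholder (copy the real one from the flow's Request trigger), and a production deployment should restrict callers rather than rely on a shared trigger URL:

```powershell
# Placeholder trigger URL; copy the real one from the flow's Request trigger
$flowUrl = "https://prod-00.westus.logic.azure.com/workflows/<id>/triggers/Copilot_Request/paths/invoke?<sas-query>"

$payload = @{
    userUPN         = "debra@contoso.com"   # test user; swap in a member/non-member of ai-authorized-users
    requestedAction = "CreatePO"
} | ConvertTo-Json

# With a Request trigger and no Response action, the call returns 202 Accepted;
# check the flow run history to see whether the run authorized or terminated as denied
Invoke-RestMethod -Method Post -Uri $flowUrl -ContentType "application/json" -Body $payload
```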
Microsoft Purview Data Quality Thresholds: More Control, More Trust

What are Data Quality Thresholds?

A data quality threshold defines the minimum acceptable score for a rule to pass. Instead of applying a single fixed standard across all data, organizations can now set expectations that align with business context and criticality. For example:

An email column may require 99% completeness
A product description column may only require 85% completeness
Financial or regulatory data may require 100% accuracy

With customizable thresholds, quality expectations become more meaningful and business-aligned.

Why does this matter?

Previously, using a single hardcoded threshold could lead to misleading quality scores: critical data might appear "healthy" even when it didn't meet business standards. With data quality thresholds, you can:

Define rule-level expectations
Align quality scores with business risk
Increase trust in DQ reporting
Improve governance decision-making

Data asset-level quality thresholds

Users can define data quality thresholds at the data asset level to measure how suitable a dataset is for specific business use cases. This allows organizations to quantify the overall health and fitness of a data asset before it is used in analytics, reporting, or data products. If the measured data quality score falls below the predefined threshold, the system can trigger notifications to the data asset owner or steward, prompting them to take corrective action. It is important to note that not all data assets are equally critical, so thresholds should be context-driven and use-case specific.

Example scenario: A marketing dataset used for campaign analysis may tolerate a lower quality threshold (e.g., 80%), since minor inconsistencies may not significantly impact insights. However, a financial reporting dataset used for regulatory filings may require a very high threshold (e.g., 98-100%), as even small errors can lead to compliance risks.

Data quality rule-level thresholds

Thresholds can also be defined at the individual rule level, particularly for rules applied to specific columns. This provides more granular control and ensures that critical data elements are held to higher standards. Not all attributes have the same importance, so thresholds should reflect business criticality.

Example scenarios:

Email vs. gender (customer contact data): A completeness rule for a customer's email address should have a higher threshold (e.g., 95-100%), since missing or invalid email addresses directly impact communication and engagement. In contrast, a gender attribute may have a lower threshold (e.g., 70-80%), as it is often less critical for most use cases.

Billing address vs. CRM address: A billing address is highly critical because it directly impacts invoice generation, tax calculations, and timely delivery of invoices. Therefore, the threshold for billing address quality should be very high (e.g., 98-100%). On the other hand, a CRM address used for general customer profiling may have a lower threshold, as occasional inaccuracies may not significantly affect business operations.

The impact

By enabling flexible, context-aware scoring, data quality thresholds help organizations move beyond generic quality checks and toward business-driven data quality management.

Summary

Data quality thresholds define the minimum acceptable score for data quality rules, allowing organizations to move beyond a one-size-fits-all approach and align quality expectations with business context and criticality.
Instead of using fixed thresholds, organizations can set custom thresholds based on how important the data is. For example, financial data may require near-perfect accuracy, while less critical fields can tolerate lower thresholds. Thresholds can be applied at two levels:

Data asset level: measures the overall fitness of a dataset for a specific use case. Critical datasets (e.g., financial reporting) require higher thresholds than less critical ones (e.g., marketing analytics).
Rule level: applies to individual columns or rules, ensuring that critical attributes (e.g., email, billing address) have stricter quality requirements than less important ones.

This approach improves:

Alignment with business risk and priorities
Trust in data quality reporting
Governance decision-making
Focus on high-impact data issues

Overall, data quality thresholds enable more meaningful, context-aware, and business-driven data quality management, helping organizations prioritize what matters most and build confidence in their data.
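Purely as an illustration of the two levels described above, the expectations from this article could be written down as a config document like the following. This is not a Purview API or file format, just an invented sketch using the article's example numbers:

```json
{
  "asset": "FinancialReporting.GeneralLedger",
  "assetThreshold": 98,
  "ruleThresholds": [
    { "column": "email",          "rule": "completeness", "threshold": 99 },
    { "column": "billingAddress", "rule": "accuracy",     "threshold": 98 },
    { "column": "gender",         "rule": "completeness", "threshold": 75 }
  ]
}
```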
Security as the core primitive - Securing AI agents and apps

This week at Microsoft Ignite, we shared our vision for Microsoft Security: in the agentic era, security must be ambient and autonomous, like the AI it protects. It must be woven into and around everything we build, from silicon to OS, to agents, apps, data, platforms, and clouds, and throughout everything we do. In this blog, we dive deeper into many of the new innovations we are introducing this week to secure AI agents and apps.

As I spend time with our customers and partners, four consistent themes have emerged as the core challenges of securing AI workloads:

Preventing agent sprawl and ungoverned access to resources
Protecting against data oversharing and data leaks
Defending against new AI threats and vulnerabilities
Adhering to evolving regulations

Addressing these challenges holistically requires a coordinated effort across IT, developers, and security leaders, not just security teams. To enable this, we are introducing several new innovations: Microsoft Agent 365 for IT, Foundry Control Plane in Microsoft Foundry for developers, and the Security Dashboard for AI for security leaders. In addition, we are releasing several new purpose-built capabilities to protect and govern AI apps and agents across Microsoft Defender, Microsoft Entra, and Microsoft Purview.

Observability at every layer of the stack

To facilitate the organization-wide effort that it takes to secure and govern AI agents and apps, IT, developers, and security leaders need observability (security, management, and monitoring) at every level.

IT teams need to enable the development and deployment of any agent in their environment. To ensure the responsible and secure deployment of agents into an organization, IT needs a unified agent registry, the ability to assign an identity to every agent, and the means to manage each agent's access to data and resources and its entire lifecycle. In addition, IT needs to be able to assign access to common productivity and collaboration tools, such as email and file storage, and be able to observe the entire agent estate for risks such as over-permissioned agents.

Development teams need to build and test agents, apply security and compliance controls by default, and ensure AI models are evaluated for safety guardrails and security vulnerabilities. Post deployment, development teams must observe agents to ensure they are staying on task, accessing applications and data sources appropriately, and operating within their cost and performance expectations.

Security and compliance teams must ensure the overall security of their AI estate, including their AI infrastructure, platforms, data, apps, and agents. They need comprehensive visibility into all their security risks, including agent sprawl and resource access, data oversharing and leaks, AI threats and vulnerabilities, and compliance with global regulations. They want to address these risks by extending the existing security investments they are already familiar with, rather than using siloed or bolt-on tools. These teams can be most effective in delivering trustworthy AI to their organizations if security is natively integrated into the tools and platforms they use every day, and if those tools and platforms share consistent security primitives such as agent identities from Entra; data security and compliance controls from Purview; and security posture, detections, and protections from Defender.
With the new capabilities being released today, we are delivering observability at every layer of the AI stack, meeting IT, developers, and security teams where they are, in the tools they already use, so they can innovate with confidence.

For IT teams - Introducing Microsoft Agent 365, the control plane for agents, now in preview

The best infrastructure for managing your agents is the one you already use to manage your users. With Agent 365, organizations can extend familiar tools and policies to confidently deploy and secure agents, without reinventing the wheel. By using the same trusted Microsoft 365 infrastructure, productivity apps, and protections, organizations can now apply consistent and familiar governance and security controls that are purpose-built to protect against agent-specific threats and risks.

Figure: Management and governance of agents across organizations

Microsoft Agent 365 delivers unified agent Registry, Access Control, Visualization, Interoperability, and Security capabilities for your organization. These capabilities work together to help organizations manage agents and drive business value.

The Registry, powered by Entra, provides a complete and unified inventory of all the agents deployed and used in your organization, including both Microsoft and third-party agents. Access Control allows you to limit the access privileges of your agents to only the resources they need and to protect their access to resources in real time. Visualization gives organizations the ability to see what matters most and gain insights through a unified dashboard, advanced analytics, and role-based reporting. Interoperability allows agents to access organizational data through Work IQ for added context, and to integrate with Microsoft 365 apps such as Outlook, Word, and Excel so they can create and collaborate alongside users. Security enables the proactive detection of vulnerabilities and misconfigurations, protects against common attacks such as prompt injections, prevents agents from processing or leaking sensitive data, and gives organizations the ability to audit agent interactions, assess compliance readiness and policy violations, and recommend controls for evolving regulatory requirements.

Microsoft Agent 365 also includes the Agent 365 SDK, part of the Microsoft Agent Framework, which empowers developers and ISVs to build agents on their own AI stack. The SDK enables agents to automatically inherit Microsoft's security and governance protections, such as identity controls, data security policies, and compliance capabilities, without the need for custom integration. For more details on Agent 365, read the blog here.

For developers - Introducing Microsoft Foundry Control Plane to observe, secure, and manage agents, now in preview

Developers are moving fast to bring agents into production, but operating them at scale introduces new challenges and responsibilities. Agents can access tools, take actions, and make decisions in real time, which means development teams must ensure that every agent behaves safely, securely, and consistently. Today, developers need to work across multiple disparate tools to get a holistic picture of the cybersecurity and safety risks their agents may carry. Once they understand the risk, they need a unified and simplified way to monitor and manage their entire agent fleet and apply controls and guardrails as needed. Microsoft Foundry provides a unified platform for developers to build, evaluate, and deploy AI apps and agents in a responsible way.
Today we are excited to announce that Foundry Control Plane is available in preview, enabling developers to observe, secure, and manage their agent fleets with built-in security and centralized governance controls. With this unified approach, developers can now identify risks and correlate disparate signals across their models, agents, and tools; enforce consistent policies and quality gates; and continuously monitor task adherence and runtime risks.

Foundry Control Plane is deeply integrated with Microsoft's security portfolio to provide a 'secure by design' foundation for developers. With Microsoft Entra, developers can ensure an agent identity (Agent ID) and access controls are built into every agent, mitigating the risk of unmanaged agents and over-permissioned resources. With Microsoft Defender built in, developers gain contextualized alerts and posture recommendations for agents directly within Foundry Control Plane. This integration proactively prevents configuration and access risks while also defending agents from runtime threats in real time. Microsoft Purview's native integration into Foundry Control Plane makes it easy to enable data security and compliance for every Foundry-built application or agent, allowing Purview to discover data security and compliance risks and apply policies that keep user prompts and AI responses free of safety and policy violations. In addition, agent interactions can be logged and searched for compliance and legal audits. This integration of shared security capabilities, including identity and access, data security and compliance, and threat protection and posture, ensures that security is not an afterthought; it is embedded at every stage of the agent lifecycle, enabling you to start secure and stay secure. For more details, read the blog.

For Security Teams - Introducing Security Dashboard for AI - unified risk visibility for CISOs and AI risk leaders, coming soon

AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions, across their entire AI estate, spanning AI platforms, apps, and agents. 90% of security professionals, including CISOs, report that their responsibilities have expanded to include data governance and AI oversight within the past year.1 At the same time, 86% of risk managers say disconnected data and systems lead to duplicated efforts and gaps in risk coverage.2

To address these needs, we are excited to introduce the Security Dashboard for AI, a unified dashboard that aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview. It allows CISOs and AI risk leaders to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. For example, you can see your full AI inventory and get visibility into a quarantined agent flagged for high data risk due to oversharing sensitive information in Purview. The dashboard then correlates that signal with identity insights from Entra and threat protection alerts from Defender to provide a complete picture of exposure. From there, you can delegate tasks to the appropriate teams to enforce policies and remediate issues quickly.
With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms, eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, there is nothing new to buy: if you are already using Microsoft security products to secure AI, you are already a Security Dashboard for AI customer.

Figure 5: Security Dashboard for AI provides CISOs and AI risk leaders with a unified view of their AI risk by bringing together their AI inventory, AI risk, and security recommendations to strengthen overall posture

Together, these innovations deliver observability and security across IT, development, and security teams, powered by Microsoft's shared security capabilities. With Microsoft Agent 365, IT teams can manage and secure agents alongside users. Foundry Control Plane gives developers unified governance and lifecycle controls for agent fleets. Security Dashboard for AI provides CISOs and AI risk leaders with a consolidated view of AI risks across platforms, apps, and agents.

Added innovation to secure and govern your AI workloads

In addition to the IT, developer, and security leader-focused innovations outlined above, we continue to accelerate our pace of innovation in Microsoft Entra, Microsoft Purview, and Microsoft Defender to address the most pressing needs for securing and governing AI workloads:

Manage agent sprawl and resource access, e.g., managing agent identity, access to resources, and permissions lifecycle at scale

Prevent data oversharing and leaks, e.g., protecting sensitive information shared in prompts, responses, and agent interactions

Defend against shadow AI, new threats, and vulnerabilities, e.g., managing unsanctioned applications, preventing prompt injection attacks, and detecting AI supply chain vulnerabilities

Enable AI governance for regulatory compliance, e.g., ensuring AI development, operations, and usage comply with evolving global regulations and frameworks

Manage agent sprawl and resource access

76% of business leaders expect employees to manage agents within the next 2–3 years.3 Widespread adoption of agents is driving the need for visibility and control, including a unified registry, agent identities, lifecycle governance, and secure access to resources. Today, Microsoft Entra provides robust identity protection and secure access for applications and users. However, organizations lack a unified way to manage, govern, and protect agents the way they manage their users. They need a purpose-built identity and access framework for agents.

Introducing Microsoft Entra Agent ID, now in preview

Microsoft Entra Agent ID offers enterprise-grade capabilities that enable organizations to prevent agent sprawl and protect agent identities and their access to resources. These new purpose-built capabilities enable organizations to:

Register and manage agents: Get a complete inventory of the agent fleet and ensure all new agents are created with an identity built in and are automatically protected by organization policies to accelerate adoption.

Govern agent identities and lifecycle: Keep the agent fleet under control with lifecycle management and IT-defined guardrails for both agents and the people who create and manage them.

Protect agent access to resources: Reduce the risk of breaches, block risky agents, and prevent agent access to malicious resources with conditional access and traffic inspection.
Agents built in Microsoft Copilot Studio, Microsoft Foundry, and Security Copilot get an Entra Agent ID built in at creation. Developers can also adopt Entra Agent ID for agents they build through Microsoft Agent Framework, the Microsoft Agent 365 SDK, or the Microsoft Entra Agent ID SDK. Read the Microsoft Entra blog to learn more.

Prevent data oversharing and leaks

Data security is more complex than ever. Information Security Media Group (ISMG) reports that 80% of leaders cite leakage of sensitive data as their top concern.4 In addition to the data security and compliance risks of generative AI (GenAI) apps, agents introduce new data risks, such as unsupervised data access, highlighting the need to protect all types of corporate data, whether it is accessed by employees or agents. To mitigate these risks, we are introducing new Microsoft Purview data security and compliance capabilities for Microsoft 365 Copilot and for agents and AI apps built with Copilot Studio and Microsoft Foundry, providing unified protection, visibility, and control for users, AI apps, and agents.

New Microsoft Purview controls safeguard Microsoft 365 Copilot with real-time protection and bulk remediation of oversharing risks

Microsoft Purview and Microsoft 365 Copilot deliver a fully integrated solution for protecting sensitive data in AI workflows. Based on ongoing customer feedback, we're introducing new capabilities that deliver real-time protection for sensitive data in M365 Copilot and accelerate remediation of oversharing risks:

Data risk assessments: Previously, admins could monitor oversharing risks such as SharePoint sites with unprotected sensitive data. Now, they can perform item-level investigations and bulk remediation of overshared files in SharePoint and OneDrive to quickly reduce oversharing exposure.

Data Loss Prevention (DLP) for M365 Copilot: DLP previously excluded files with sensitivity labels from Copilot processing. Now in preview, DLP also prevents prompts that include sensitive data from being processed in M365 Copilot, Copilot Chat, and Copilot agents, and prevents Copilot from using sensitive data in prompts for web grounding.

Priority cleanup for M365 Copilot assets: Many organizations have org-wide policies to retain or delete data. Priority cleanup, now generally available, lets admins delete assets that are frequently processed by Copilot, such as meeting transcripts and recordings, on a schedule independent of the org-wide policies while maintaining regulatory compliance.

On-demand classification for meeting transcripts: Purview can now detect sensitive information in meeting transcripts on demand, enabling data security admins to apply DLP policies and enforce Priority cleanup based on the sensitive information detected.

Read the full Data Security blog to learn more.

Introducing new Microsoft Purview data security capabilities for agents and apps built with Copilot Studio and Microsoft Foundry, now in preview

Microsoft Purview now extends the same data security and compliance protections available for users and Copilots to agents and apps. These new capabilities are:

Enhanced Data Security Posture Management: A centralized DSPM dashboard that provides observability, risk assessment, and guided remediation across users, AI apps, and agents.
Insider Risk Management (IRM) for Agents: Uniquely designed for agents and using dedicated behavioral analytics, Purview dynamically assigns risk levels to agents based on their risky handling of sensitive data and enables admins to apply conditional policies based on that risk level.

Sensitive data protection with Azure AI Search: Azure AI Search enables fast, AI-driven retrieval across large document collections, essential for building AI apps. When apps or agents use Azure AI Search to index or retrieve data, Purview sensitivity labels are preserved in the search index, ensuring that any sensitive information remains protected under the organization's data security and compliance policies.

For more information on preventing data oversharing and data leaks, learn how Purview protects and governs agents in the Data Security and Compliance for Agents blog.

Defend against shadow AI, new threats, and vulnerabilities

AI workloads are subject to new AI-specific threats like prompt injection attacks, model poisoning, and data exfiltration of AI-generated content. Although security admins and SOC analysts have similar tasks when securing agents, the attack methods and surfaces differ significantly. To help customers defend against these novel attacks, we are introducing new capabilities in Microsoft Defender that deliver end-to-end protection, from security posture management to runtime defense.

Introducing Security Posture Management for agents, now in preview

As organizations adopt AI agents to automate critical workflows, agents become high-value targets and potential points of compromise, creating a critical need to keep them hardened, compliant, and resilient by preventing misconfigurations and safeguarding against adversarial manipulation. Security Posture Management for agents in Microsoft Defender now provides an agent inventory for security teams across Microsoft Foundry and Copilot Studio agents. Here, analysts can assess the overall security posture of an agent, easily implement security recommendations, and identify vulnerabilities such as misconfigurations and excessive permissions, all aligned to the MITRE ATT&CK framework. Additionally, the new agent attack path analysis visualizes how an agent's weak security posture can create broader organizational risk, so you can quickly limit exposure and prevent lateral movement.

Introducing Threat Protection for agents, now in preview

Attack techniques and attack surfaces for agents are fundamentally different from those of other assets in your environment. That is why Defender is delivering purpose-built protections and detections to help defend against them. Defender is introducing runtime protection for Copilot Studio agents that automatically blocks prompt injection attacks in real time. In addition, we are announcing agent-specific threat detections for Copilot Studio and Microsoft Foundry agents, coming soon. Defender automatically correlates these alerts with Microsoft's industry-leading threat intelligence and cross-domain security signals to deliver richer, contextualized alerts and security incident views for the SOC analyst. Defender's risk and threat signals are natively integrated into the new Microsoft Foundry Control Plane, giving development teams full observability and the ability to act directly from within their familiar environment.
Finally, security analysts will be able to hunt across all agent telemetry in the Advanced Hunting experience in Defender, and the new Agent 365 SDK extends Defender's visibility and hunting capabilities to third-party agents, starting with Genspark and Kasisto, giving security teams even more coverage across their AI landscape. To learn more about how you can harden the security posture of your agents and defend against threats, read the Microsoft Defender blog.

Enable AI governance for regulatory compliance

Global AI regulations like the EU AI Act and the NIST AI RMF are evolving rapidly; yet, according to ISMG, 55% of leaders report lacking clarity on current and future AI regulatory requirements.5 As enterprises adopt AI, they must ensure that their AI innovation aligns with global regulations and standards to avoid costly compliance gaps.

Introducing new Microsoft Purview Compliance Manager capabilities to stay ahead of evolving AI regulations, now in preview

Today, Purview Compliance Manager provides over 300 pre-built assessments for common industry, regional, and global standards and regulations. However, the pace of change in AI regulation requires controls to be continuously re-evaluated and updated so that organizations can adapt and stay compliant. To address this need, Compliance Manager now includes AI-powered regulatory templates, which enable real-time ingestion and analysis of global regulatory documents so compliance teams can quickly adapt to changes as they happen. As regulations evolve, updated regulatory documents can be uploaded to Compliance Manager, and the new requirements are automatically mapped to applicable recommended actions to implement controls across Microsoft Defender, Microsoft Entra, Microsoft Purview, Microsoft 365, and Microsoft Foundry. Automated actions by Compliance Manager further streamline governance, reduce manual workload, and strengthen regulatory accountability.

Introducing expanded Microsoft Purview compliance capabilities for agents and AI apps, now in preview

Microsoft Purview now extends its compliance capabilities across agent-generated interactions, ensuring responsible use and regulatory alignment as AI becomes deeply embedded across business processes. New capabilities include expanded coverage for:

Audit: Surface agent interactions, lifecycle events, and data usage with Purview Audit. Unified audit logs across user and agent activities, paired with traceability for every agent via its Entra Agent ID, support investigation, anomaly detection, and regulatory reporting.

Communication Compliance: Detect prompts sent to agents and agent-generated responses containing inappropriate, unethical, or risky language, including attempts to manipulate agents into bypassing policies, generating risky content, or producing noncompliant outputs. When issues arise, data security admins get full context, including the prompt, the agent's output, and relevant metadata, so they can investigate and take corrective action.

Data Lifecycle Management: Apply retention and deletion policies to agent-generated content and communication flows to automate lifecycle controls and reduce regulatory risk.

Read about Microsoft Purview data security for agents to learn more. Finally, we are extending our data security, threat protection, and identity access capabilities to third-party apps and agents via the network.
Advancing Microsoft Entra Internet Access: Secure Web + AI Gateway - extending runtime protections to the network, now in preview

Microsoft Entra Internet Access, part of the Microsoft Entra Suite, has new capabilities to secure access to and usage of GenAI at the network level, marking the transition from Secure Web Gateway to Secure Web and AI Gateway. The new capabilities include:

Prompt injection protection, which blocks malicious prompts in real time by extending Azure AI Prompt Shields to the network layer.

Network file filtering, which extends Microsoft Purview to inspect files in transit and prevents regulated or confidential data from being uploaded to unsanctioned AI services.

Shadow AI detection, which provides visibility into unsanctioned AI applications through Cloud Application Analytics and Defender for Cloud Apps risk scoring, empowering security teams to monitor usage trends, apply Conditional Access, or block high-risk apps instantly.

Unsanctioned MCP server blocking, which prevents unauthorized agents from accessing MCP servers.

With these controls, enterprises can accelerate GenAI adoption while maintaining compliance and reducing risk, so employees can experiment with new AI tools safely. Read the Microsoft Entra blog to learn more.

As AI transforms the enterprise, security must evolve to meet new challenges spanning agent sprawl, data protection, emerging threats, and regulatory compliance. Our approach is to empower IT, developers, and security leaders with purpose-built innovations like Agent 365, Foundry Control Plane, and the Security Dashboard for AI. These solutions bring observability, governance, and protection to every layer of the AI stack, leveraging familiar tools and integrated controls across Microsoft Defender, Microsoft Entra, and Microsoft Purview. The future of security is ambient, autonomous, and deeply woven into the fabric of how we build, deploy, and govern AI systems.

Explore additional resources

Learn more about Security for AI solutions on our webpage
Learn more about Microsoft Agent 365
Learn more about Microsoft Entra Agent ID
Get started with Microsoft 365 Copilot
Get started with Microsoft Copilot Studio
Get started with Microsoft Foundry
Get started with Microsoft Defender for Cloud
Get started with Microsoft Entra
Get started with Microsoft Purview
Get started with Microsoft Purview Compliance Manager
Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial

1 Bedrock Security, 2025 Data Security Confidence Index, published Mar 17, 2025.
2 AuditBoard & Ascend2, Connected Risk Report 2024; as cited by MIT Sloan Management Review, Spring 2025.
3 KPMG AI Quarterly Pulse Survey, Q3 2025 (September 2025); n=130 U.S.-based C-suite and business leaders representing organizations with annual revenue of $1 billion or more.
4 First Annual Generative AI Study: Business Rewards vs. Security Risks, Q3 2023, ISMG, N=400.
5 First Annual Generative AI Study: Business Rewards vs. Security Risks, Q3 2023, ISMG, N=400.

Optimizing OneDrive Retention Policies with Administrative Units and Adaptive Scopes
A special thank you to Ashwini_Anand for contributing to the content of this blog.

In today's digital landscape, efficient data retention management is a critical priority for organizations of all sizes. By combining administrative units with adaptive scopes, organizations can optimize their OneDrive retention policies, ensuring efficient and compliant data management tailored to their unique user base and licensing arrangements.

Scenario

Contoso encountered a distinct challenge: managing data retention for a diverse user base of 200,000 employees, including 80,000 users with F3 licenses and 120,000 users with E3 and E5 licenses. Per Microsoft licensing, F3 users are allocated only 2 GB of OneDrive storage, whereas E3 and E5 users receive a much larger allocation of 5 TB. This difference required separate retention policies for these user groups.

The challenge was further complicated by the fact that retention policies preserve deleted data within the same OneDrive storage quota. If a unified retention policy were applied to all users, such as retaining data for 6 years before deletion, F3 users' OneDrive storage could fill up within a year or less (depending on usage patterns). This would leave F3 users unable to delete or save new files, severely disrupting productivity and data management. To address this, it is essential to create a separate retention policy for E3 and E5 users, ensuring that the policy applies only to these users and excludes F3 users. This blog walks through the process of designing and implementing such a policy for a large user base segmented by license, ensuring efficient data management and uninterrupted productivity.

Challenges with retention policy configuration for large organizations

1. Adaptive Scope

Adaptive scopes in Microsoft Purview allow you to dynamically target policies based on specific attributes or properties such as department, location, email address, and custom Exchange attributes. Refer to this link for the list of supported attributes: Adaptive scopes | Microsoft Learn.

Limitation: Although adaptive scopes can filter by user properties, Contoso, being a large organization, had already utilized all 15 custom attributes for other purposes, and no standard user attribute could be used to segregate users by license. This made it challenging to repurpose any attribute as the filter criteria for applying the retention policy to a specific set of users. Furthermore, refinable strings used in SharePoint do not work for OneDrive sites.

2. Static Scope

A static scope refers to manually selected locations (e.g., specific users, mailboxes, or sites) where the policy is applied. The scope remains fixed and does not automatically adjust.

Limitation: A static scope allows the inclusion or exclusion of mailboxes and sites but is limited to 100 sites and 1,000 mailboxes, making it impractical for large organizations.

Proposed Solution: Administrative Units with Adaptive Scope

To address these challenges, the solution uses administrative units (an administrative unit is a container within an organization that can hold users, groups, or devices, helping you manage and organize users more efficiently, especially in large or complex environments) together with adaptive scopes to create a retention policy targeting E3 and E5 licensed users. This approach allows organizations to selectively apply retention policies based on user licenses, enhancing both efficiency and governance.
Prerequisites

For administrative units: Microsoft Entra ID P1 license
For retention policies: Refer to Microsoft 365 guidance for security & compliance - Service Descriptions | Microsoft Learn

Configuration Steps

Step 1: Create the administrative unit

1. Navigate to the Microsoft Entra admin center: https://entra.microsoft.com/#home
2. Click on 'Identity' and then click on 'Show more'.
3. Expand 'Roles & admins'.
4. Proceed to 'Admin units' -> Add.

Figure 1: Create an Administrative unit and enter the name and description

5. Define a name for the administrative unit and click on 'Next: Assign roles'.
6. No role assignment is required; click on 'Next: Review + create'.
7. Click on 'Create'.

For more information about creating administrative units, refer to this link: Create or delete administrative units - Microsoft Entra ID | Microsoft Learn

Step 2: Update dynamic membership

1. Select the administrative unit created in Step 1.
2. Navigate to 'Properties'.
3. Choose 'Dynamic User' for Membership type.
4. Click on 'Add a dynamic query' for Dynamic user members.
5. Click on 'Edit' for Rule syntax.
6. To include E3 and E5 licensed users who use OneDrive, include users with the SharePoint Online (Plan 2) service plan enabled. Use the query below to define the dynamic membership (for a scripted alternative, see the sketch after these steps).

user.assignedPlans -any (assignedPlan.servicePlanId -eq "5dbe027f-2339-4123-9542-606e4d348a72" -and assignedPlan.capabilityStatus -eq "Enabled")

7. Click on 'Save' to update the dynamic membership rule.
8. Click on 'Save' to update the administrative unit changes.
9. Open the administrative unit and click on the 'Users' tab to check whether users have started to populate.

Note: It may take some time to replicate all users, depending on the size of your organization. Wait a few minutes and then check again.

Step 3: Create the adaptive scope in the Purview portal

1. Access https://purview.microsoft.com
2. Navigate to 'Settings'.
3. Expand 'Roles & scopes' and click on 'Adaptive scopes'.
4. Create a new adaptive scope, providing a 'Name' and 'Description'.
5. Select the administrative unit created earlier. (It can take time for the administrative unit to become visible; wait a while if it does not appear immediately.) Click on 'Add' and then 'Next'.
6. Select 'Users' and click 'Next'.
7. Once the admin unit is selected, specify the criteria for selecting users within it (a second level of filtering). In this case all users of the admin unit are needed, so click 'Add attribute' and form the query below.

Email addresses is not equal to $null

Note: You can apply any other filter if you need to select a subset of users within the admin unit based on your business use case.

8. Click on 'Next'.
9. Review and 'Submit' the adaptive scope.

Step 4: Create the retention policy using the adaptive scope

1. Access the portal at https://purview.microsoft.com/datalifecyclemanagement/overview
2. Navigate to 'Policies' and then go to 'Retention policies'.
3. Create a 'New retention policy', providing a 'Name' and 'Description'.
4. Click on 'Next'; there is no need to add admin units here, as they are already defined in the adaptive scope.

Figure 9: Select the 'Admin units' as Full directory

5. Choose 'Adaptive' and click on 'Next'.

Figure 10: Select the 'Adaptive' retention policy type

6. Click on 'Add scopes' and select the previously created adaptive scope. Under Location, select OneDrive.

Figure 11: Select the Adaptive scope and location at this point

7. Click on 'Next' to proceed and select the desired retention settings.
8. Click Next and Finish.
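If you prefer to script Steps 1 and 2, an administrative unit and its dynamic membership rule can be created in a single Microsoft Graph request. Below is a minimal Python sketch using the requests library. It assumes you have already acquired a token (for example via MSAL) with the AdministrativeUnit.ReadWrite.All permission; the display name and description are illustrative, and depending on your tenant the dynamic-membership properties may require the Graph beta endpoint instead of v1.0.

```python
import requests

# Assumption: ACCESS_TOKEN is a valid Microsoft Graph token acquired elsewhere
# (e.g., via MSAL) with the AdministrativeUnit.ReadWrite.All permission.
ACCESS_TOKEN = "<token acquired via MSAL>"

GRAPH_URL = "https://graph.microsoft.com/v1.0/directory/administrativeUnits"

# The dynamic membership rule from Step 2: include users with the
# SharePoint Online (Plan 2) service plan enabled (E3/E5 OneDrive users).
MEMBERSHIP_RULE = (
    'user.assignedPlans -any (assignedPlan.servicePlanId '
    '-eq "5dbe027f-2339-4123-9542-606e4d348a72" '
    '-and assignedPlan.capabilityStatus -eq "Enabled")'
)

payload = {
    "displayName": "E3-E5 OneDrive Users",   # illustrative name
    "description": "Dynamic AU for E3/E5 licensed OneDrive users",
    "membershipType": "Dynamic",
    "membershipRule": MEMBERSHIP_RULE,
    "membershipRuleProcessingState": "On",   # start evaluating the rule
}

response = requests.post(
    GRAPH_URL,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=30,
)
response.raise_for_status()
print("Created administrative unit:", response.json()["id"])
```

As with the portal flow, allow time for the dynamic rule to populate membership before referencing the administrative unit in an adaptive scope.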
Outcome

By implementing administrative units with adaptive scopes, organizations can effectively overcome the challenges of applying OneDrive retention policies to a distinct, large set of users. This approach facilitates the dynamic addition of the required users, eliminating the need for custom attributes and manual user management. Users are dynamically added to or removed from the policy based on license status, ensuring seamless compliance management.

FAQ

Why is it important to differentiate retention policies based on user licensing tiers?
Differentiating retention policies by licensing tier ensures that each user group has policies tailored to its specific needs and constraints, avoiding issues such as storage exhaustion for users with lower-tier licenses like F3.

How many Exchange custom attributes are typically available?
There are typically 15 Exchange custom attributes available, which can limit scalability when dealing with a large user base.

What challenge does adaptive scoping face when including a large number of OneDrive sites?
Adaptive scoping is constrained by the finite number of custom attributes available. While custom attributes help categorize and target OneDrive sites, their limited number can restrict scalability and flexibility.

Why are refinable strings a limitation for adaptive scoping in OneDrive?
Refinable strings are restricted to SharePoint and cannot be used for OneDrive sites.

What are the limitations of static scoping for OneDrive sites?
Static scoping can include or exclude only 100 sites, which limits its usefulness in larger environments.

Do we need any licenses to create an administrative unit with dynamic membership?
Yes, a Microsoft Entra ID P1 license is required for all members of the administrative unit.

Building Secure, Enterprise Ready AI Agents with Purview SDK and Agent Framework
At Microsoft Ignite, we announced the public preview of Purview integration with the Agent Framework SDK, making it easier to build AI agents that are secure, compliant, and enterprise-ready from day one.

AI agents are quickly moving from demos to production. They reason over enterprise data, collaborate with other agents, and take real actions. As that happens, one thing becomes non-negotiable: governance has to be built in. That's where the Purview SDK comes in.

Agentic AI Changes the Security Model

Traditional apps expose risks at the UI or API layer. AI agents are different. Agents can:

Process sensitive enterprise data in prompts and responses
Collaborate with other agents across workflows
Act autonomously on behalf of users

Without built-in controls, even a well-designed agent can create compliance gaps. The Purview SDK brings Microsoft's enterprise data security and compliance directly into the agent runtime, so governance travels with the agent, not after it.

What You Get with Purview SDK + Agent Framework

This integration delivers the key things developers and enterprises care about most:

Inline data protection: Evaluate prompts and responses against Data Loss Prevention (DLP) policies in real time. Content can be allowed or blocked automatically.

Built-in governance: Send AI interactions to Purview for audit, eDiscovery, communication compliance, and lifecycle management, without custom plumbing.

Enterprise-ready by design: Ship agents that meet enterprise security expectations from the start, not as a follow-up project.

All of this is done natively through Agent Framework middleware, so governance feels like part of the platform, not an add-on.

How Enforcement Works (Quickly)

When an agent runs:

Prompts and responses flow through the Agent Framework pipeline
The Purview SDK evaluates content against configured policies
A decision is returned: allow, redact, or block
Governance signals are logged for audit and compliance

The same model works for user-to-agent interactions, agent-to-agent communication, and multi-agent workflows.

Try It: Add Purview SDK in Minutes

Adding enforcement is a matter of attaching the Purview middleware to your agent; a minimal Python sketch using Agent Framework appears at the end of this post. That's it! From that point on:

Prompts and responses are evaluated against the Purview policies set up within the enterprise tenant
Sensitive data can be automatically blocked
Interactions are logged for governance and audit

Designed for Real Agent Systems

Most production AI apps aren't single-agent systems. The Purview SDK supports:

Agent-level enforcement for fine-grained control
Workflow-level enforcement across orchestration steps
Agent-to-agent governance to protect data as agents collaborate

This makes it a natural fit for enterprise-scale, multi-agent architectures.

Get Started Today

You can start experimenting right away:

Try the Purview SDK with Agent Framework: follow the Microsoft Learn docs to configure the Purview SDK with Agent Framework.
Explore the GitHub samples: see examples of policy-enforced agents in Python and .NET.

Secure AI, Without Slowing It Down

AI agents are quickly becoming production systems, not experiments. By integrating the Purview SDK directly into the Agent Framework, Microsoft is making governance a default capability, not a deployment blocker. Build intelligent agents. Protect sensitive data. Scale with confidence.
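Here is a minimal sketch of the middleware pattern described above, written for Microsoft Agent Framework's Python package. The middleware class name (PurviewPolicyMiddleware), its constructor parameters, and the exact import paths are illustrative assumptions, since the preview SDK's surface may differ; consult the Microsoft Learn docs and GitHub samples referenced above for the exact types.

```python
# A minimal sketch, assuming hypothetical names: PurviewPolicyMiddleware and
# its parameters are placeholders for whatever the preview SDK actually exports.
import asyncio

from agent_framework import ChatAgent                      # assumed import path
from agent_framework.azure import AzureOpenAIChatClient    # assumed import path
from purview_sdk import PurviewPolicyMiddleware            # hypothetical name


async def main() -> None:
    # Hypothetical middleware: evaluates prompts and responses against the
    # tenant's Purview DLP policies and logs interactions for audit.
    purview = PurviewPolicyMiddleware(
        tenant_id="<your-tenant-id>",         # placeholder
        app_name="expense-helper",            # illustrative agent name
    )

    agent = ChatAgent(
        chat_client=AzureOpenAIChatClient(),  # reads endpoint/key from env
        instructions="You help employees file expense reports.",
        middleware=[purview],                 # governance runs inline
    )

    # Every prompt and response now flows through Purview evaluation:
    # allowed content passes through, sensitive content can be blocked,
    # and the interaction is logged for audit and eDiscovery.
    reply = await agent.run("Summarize my last three expense reports.")
    print(reply)


asyncio.run(main())
```

The key design point is that the policy check sits in the agent's middleware pipeline: if a prompt or response trips a DLP policy, it can be blocked before the model or the user ever sees the content, which is what makes the enforcement inline rather than after the fact.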