data loss prevention
Welcome to the Microsoft Security Community!
Protect it all with Microsoft Security
Eliminate gaps and get the simplified, comprehensive protection, expertise, and AI-powered solutions you need to innovate and grow in a changing world. The Microsoft Security Community is your gateway to connect, learn, and collaborate with peers, experts, and product teams. Gain access to technical discussions and webinars, and help shape Microsoft’s security products.

Get there fast
To stay up to date on upcoming opportunities and the latest Microsoft Security Community news, subscribe to our email list. Find the latest skilling content and on-demand videos by subscribing to the Microsoft Security Community YouTube channel. Catch the latest announcements and connect with us on LinkedIn – Microsoft Security Community and Microsoft Entra Community.

Upcoming Community Calls: March 2026

Mar. 11 | 8:00am | Microsoft Security Store | A Day in the Life of an Identity Governance Manager Powered by Security Agents
In this session, you’ll see how agents from the Microsoft Security Store help governance teams streamline reviews, reduce standing privilege, and close lifecycle gaps. Co-presented with identity governance experts from the Microsoft MVP community, we’ll walk through a day-in-the-life of an identity governance manager—covering scenarios like excessive access accumulation, offboarding gaps, and privileged role sprawl. You’ll see how agents can automate governance workflows while keeping you in control.

Mar. 11 | 8:00am | Microsoft Entra | QR code authentication: Fast, simple sign-in designed for Frontline Workers
Frontline teams often work on shared mobile devices where typing long usernames and passwords slows everyone down. In this session, we’ll introduce the QR code authentication method in Microsoft Entra ID—a streamlined way for workers to sign in by scanning their unique QR code and entering a PIN on shared iOS/iPadOS or Android devices. No personal phones or complex credentials required. We’ll walk through the end-to-end experience, from enabling the method in your tenant and issuing codes to workers (via the Entra admin center or My Staff), to the on-device sign-in flow that gets your teams productive quickly. We’ll also cover best-practice controls—like using Conditional Access and Shared device mode—to help you deploy with confidence. Bring your questions—we’ll host Q&A and collect product feedback to help prioritize upcoming investments.

Mar. 11 | 5:00pm | Microsoft Entra | Building MCP on Entra: Design Choices for Enterprise Agents
Explore approaches for integrating MCP with Microsoft Entra Agent ID. We’ll outline key considerations for identity, consent, and authorization, discuss patterns for scalable and auditable agent architectures, and share insights on interoperability. Expect practical guidance, common pitfalls, and an open forum for questions and feedback.

Mar. 12 | 12:00pm (BRT) | Microsoft Intune | What’s new in Microsoft Intune – latest releases
Join us to explore what’s new in Microsoft Intune, including the latest releases announced at Microsoft Ignite and the integration of Microsoft Security Copilot in Intune. The session will feature live demos and an interactive Q&A where you can bring your questions to the experts. (Session delivered in Portuguese.)

Mar. 18 | 1:00pm (AEDT) | Microsoft Entra | From Lockouts to Logins: Modern Account Recovery and Passkeys
Lost phone, no backup? In a passwordless world, users can face total lockouts and risky helpdesk recovery. This session shows how Entra ID Account Recovery uses strong identity verification and passkey profiles to help users safely regain access.

Mar. 19 | 8:00am | Microsoft Purview | Insider Risk Data Risk Graph
We’re excited to share a new capability that brings Microsoft Purview Insider Risk Management (IRM) together with Microsoft Sentinel through the data risk graph (public preview). What it is: the data risk graph gives you an interactive, visual map of user activity, data movement, and risk signals—all in one place. Why it matters: quickly investigate insider risk alerts with clear context, understand the impact of risky activities on sensitive data, and accelerate response with intuitive, graph-based insights. Getting started: requires onboarding to the Sentinel data lake and graph, appropriate admin/security roles, and at least one IRM policy configured. This session will provide practical guidance on onboarding, setup requirements, and best practices for the data risk graph.

Mar. 24 | 8:00am | Microsoft Purview | eDiscovery recent updates to the modern UX
Join us to learn all about the recent updates to the modern UX, from new features to managing generative AI content.

Mar. 24 | 9:00am | Microsoft Intune | Accelerate your Mac Management POC in Intune with Intune my Macs
Intune my Macs enables you to stand up a complete Microsoft Intune macOS proof-of-concept in minutes. Using a single script, it deploys policies, compliance settings, scripts, PKG apps, and optionally Microsoft Defender for Endpoint (MDE). In this session, you’ll learn how to use the solution and see exactly what it delivers.

Mar. 26 | 8:00am | Azure Network Security | What's New in Azure Web Application Firewall
Azure Web Application Firewall (WAF) continues to evolve to help you protect your web applications against ever-changing threats. In this session, we’ll explore the latest enhancements across Azure WAF, including improvements in ruleset accuracy, threat detection, and configuration flexibility. Whether you use Application Gateway WAF or Azure Front Door WAF, this session will help you understand what’s new, what’s improved, and how to get the most from your WAF deployments.

Mar. 31 | 8:00am | Microsoft Entra | Developer Tools for Agent ID: SDKs, CLIs & Samples
Accelerate agent identity projects with Microsoft Entra’s developer toolchain. Explore SDKs, sample repos, and utilities for token acquisition, consent flows, and downstream API calls. Learn techniques for debugging local environments, validating authentication flows, and automating checks in CI/CD pipelines. Share ready-to-run samples, resources, and guidance for filing new tooling requests—helping you build faster and smarter.

Looking for more? Join the Security Advisors!
As a Security Advisor, you’ll gain early visibility into product roadmaps, participate in focus groups, and access private preview features before public release. You’ll have a direct channel to share feedback with engineering teams, influencing the direction of Microsoft Security products. The program also offers opportunities to collaborate and network with fellow end users and Microsoft product teams. Join the Security Advisors program that best fits your interests: www.aka.ms/joincommunity.
Additional resources
Microsoft Security Hub on Tech Community
Virtual Ninja Training Courses
Microsoft Security Documentation
Azure Network Security GitHub
Microsoft Defender for Cloud GitHub
Microsoft Sentinel GitHub
Microsoft Defender XDR GitHub
Microsoft Defender for Cloud Apps GitHub
Microsoft Defender for Identity GitHub
Microsoft Purview GitHub

How to Test a File Against DLP Sensitive Information Types
Sensitive information types (SITs) are definitions of data, such as credit card numbers, that DLP rules use to detect potential external sharing violations. Knowing which SIT to use in a DLP rule is often difficult, which is why the Purview developers have added a test option that allows tenants to test files against individual SITs, or against all SITs, to see what matches. https://office365itpros.com/2026/03/04/sensitive-information-type-test/
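For ad-hoc checks from the command line, a similar test can be run against a text snippet (rather than an uploaded file) with the Test-DataClassification cmdlet in Security & Compliance PowerShell. A minimal sketch, assuming you have the ExchangeOnlineManagement module installed; the sample text and SIT name are illustrative placeholders:

```powershell
# Connect to Security & Compliance PowerShell (interactive sign-in)
Connect-IPPSSession

# Test sample text against a specific sensitive information type.
# The text and the SIT name below are illustrative placeholders.
Test-DataClassification `
    -ClassificationNames "Credit Card Number" `
    -TextToClassify "Payment card 4111 1111 1111 1111, expires 04/27" |
    Format-List
```

The output lists which SITs matched and at what confidence, which makes it a quick way to sanity-check a SIT before wiring it into a DLP rule.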
How to deploy Microsoft Purview DSPM for AI to secure your AI apps

Microsoft Purview Data Security Posture Management (DSPM for AI) is designed to enhance data security for the following AI applications:
Microsoft Copilot experiences, including Microsoft 365 Copilot.
Enterprise AI apps, including ChatGPT Enterprise integration.
Other AI apps, including all other AI applications, such as ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini, accessed through the browser.
In this blog, we will dive into the different policies and reporting available to discover, protect, and govern these three types of AI applications.

Prerequisites
Please refer to the prerequisites for DSPM for AI in the Microsoft Learn docs.

Log in to the Purview portal
To begin, log into the Microsoft Purview portal with your admin credentials. In the Microsoft Purview portal, go to the Home page and find DSPM for AI under Solutions.

1. Securing Microsoft 365 Copilot
Be sure to check out our blog on how to use the DSPM for AI data assessment report to help you address oversharing concerns when you deploy Microsoft 365 Copilot.

Discover potential data security risks in Microsoft 365 Copilot interactions
In the Overview tab of DSPM for AI, start with the tasks in “Get Started” and activate Purview Audit if you have not yet activated it in your tenant, so you get insights into user interactions with Microsoft Copilot experiences. In the Recommendations tab, review the recommendations that are under “Not Started” and create the following data discovery policy by clicking into it:
Detect risky interactions in AI apps - This public preview Purview Insider Risk Management policy helps calculate user risk by detecting risky prompts and responses in Microsoft 365 Copilot experiences. Click here to learn more about the Risky AI usage policy.
With the policies to discover sensitive information in Microsoft Copilot experiences in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky. Filter to Microsoft Copilot experiences and review the following reports:
Total interactions over time (Microsoft Copilot)
Sensitive interactions per AI app
Top unethical AI interactions
Top sensitivity labels referenced in Microsoft 365 Copilot
Insider risk severity
Insider risk severity per AI app
Potential risky AI usage

Protect sensitive data in Microsoft 365 Copilot interactions
From the Reports tab, click “View details” on each report graph to view detailed activities in the Activity Explorer. Using the available filters, narrow the results to activities from Microsoft Copilot experiences by Activity type, AI app category, App type, Scope (which supports administrative units for DSPM for AI), and more. Then drill down into each activity to view its details, including the ability to view prompts and responses with the right permissions.
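Beyond the Activity Explorer, the same Copilot interaction events can be pulled from the unified audit log for offline review once Purview Audit is active. A minimal Exchange Online PowerShell sketch; the seven-day window and result size are illustrative assumptions:

```powershell
# Requires the ExchangeOnlineManagement module and an audit-enabled tenant
Connect-ExchangeOnline

# Pull recent Copilot interaction audit records (date range is an example)
$records = Search-UnifiedAuditLog `
    -StartDate (Get-Date).AddDays(-7) `
    -EndDate   (Get-Date) `
    -RecordType CopilotInteraction `
    -ResultSize 1000

# Each record's AuditData property holds JSON describing the interaction
$records | Select-Object CreationDate, UserIds, Operations | Format-Table
```

This is useful for spot checks and exports; the DSPM for AI reports remain the primary, curated view of the same signals.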
To protect the sensitive data in Microsoft 365 Copilot interactions, review the Not Started policies in the Recommendations tab and create these policies:
Information Protection Policy for Sensitivity Labels - This option creates default sensitivity labels and sensitivity label policies. If you've already configured sensitivity labels and their policies, this configuration is skipped.
Protect sensitive data referenced in Microsoft 365 Copilot - This guides you through creating a Purview Data Loss Prevention (DLP) policy to restrict the processing of content with specific sensitivity labels in Copilot interactions. Click here to learn more about Data Loss Prevention for Microsoft 365 Copilot.
Protect sensitive data referenced in Copilot responses - Sensitivity labels help protect files by controlling user access to data. Microsoft 365 Copilot honors sensitivity labels on files and only shows users files they already have access to in prompts and responses. Use data assessments to identify potential oversharing risks, including unlabeled files. Stay tuned for an upcoming blog post on using DSPM for AI data assessments!
Use Copilot to improve your data security posture - Data Security Posture Management combines deep insights with Security Copilot capabilities to help you identify and address security risks in your org.
Once you have created policies from the Recommendations tab, go to the Policies tab to review and manage all the policies you have created across your organization in one centralized place, and to edit the policies or investigate alerts associated with them in the relevant solution. Note that additional policies not created from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies that help secure and govern all AI apps.

Govern the prompts and responses in Microsoft 365 Copilot interactions
Understand and comply with AI regulations by selecting “Guided assistance to AI regulations” in the Recommendations tab and walking through the “Actions to take”. From the Recommendations tab, create a Control unethical behavior in AI Purview Communication Compliance policy to detect sensitive information in prompts and responses and address potentially unethical behavior in Microsoft Copilot experiences and ChatGPT Enterprise. This policy covers all users and groups in your organization. To retain and/or delete Microsoft 365 Copilot prompts and responses, set up a data lifecycle policy by navigating to Microsoft Purview Data Lifecycle Management and finding Retention Policies under the Policies header. You can also preserve, collect, analyze, review, and export Microsoft 365 Copilot interactions by creating an eDiscovery case.

2. Securing Enterprise AI apps
Please refer to the blog Unlocking the Power of Microsoft Purview for ChatGPT Enterprise | Microsoft Community Hub for detailed information on how to integrate with ChatGPT Enterprise and on the Purview solutions it currently supports through Purview Communication Compliance, Insider Risk Management, eDiscovery, and Data Lifecycle Management. You can also learn more about the feature in our public documentation.

3. Securing other AI apps
Microsoft Purview DSPM for AI currently supports a defined list of AI sites. Be sure to also check out our blog on the new Microsoft Purview data security controls for the browser and network to secure other AI apps.

Discover potential data security risks in prompts sent to other AI apps
In the Overview tab of DSPM for AI, go through these three steps in “Get Started” to discover potential data security risks in other AI interactions:
Install the Microsoft Purview browser extension
For Windows users: the Purview extension is not necessary to enforce data loss prevention in the Edge browser, but it is required in Chrome to detect sensitive info pasted or uploaded to AI sites.
The extension is also required to detect browsing to other AI sites through an Insider Risk Management policy for both the Edge and Chrome browsers. Therefore, the Purview browser extension is required for both Edge and Chrome on Windows. For macOS users: the Purview extension is not necessary to enforce data loss prevention on macOS devices, and browsing to other AI sites through Purview Insider Risk Management is not currently supported on macOS, so no Purview browser extension is required for macOS.
Extend your insights for data discovery – this one-click collection policy will set up three separate Purview detection policies for other AI apps:
Detect sensitive info shared in AI prompts in Edge – a Purview collection policy that detects prompts sent to ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini in Microsoft Edge and discovers sensitive information shared in prompt contents. This policy covers all users and groups in your organization in audit mode only.
Detect when users visit AI sites – a Purview Insider Risk Management policy that detects when users use a browser to visit AI sites.
Detect sensitive info pasted or uploaded to AI sites – a Purview endpoint data loss prevention (eDLP) policy that discovers sensitive content pasted or uploaded to AI sites in Microsoft Edge, Chrome, and Firefox. This policy covers all users and groups in your org in audit mode only.
With the policies to discover sensitive information in other AI apps in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky. Filter by Other AI apps and review the following reports:
Total interactions over time (other AI apps)
Total visits (other AI apps)
Sensitive interactions per AI app
Insider risk severity
Insider risk severity per AI app

Protect sensitive info shared with other AI apps
From the Reports tab, click “View details” on each report graph to view detailed activities in the Activity Explorer. Using the available filters, narrow the results by Activity type, AI app category, App type, Scope (which supports administrative units for DSPM for AI), and more. To protect the sensitive data in interactions with other AI apps, review the Not Started policies in the Recommendations tab and create these policies:
Fortify your data security – this will create three policies to manage your data security risks with other AI apps:
1) Block elevated risk users from pasting or uploading sensitive info on AI sites – creates a Microsoft Purview endpoint data loss prevention (eDLP) policy that uses adaptive protection to show a warn-with-override to elevated-risk users attempting to paste or upload sensitive information to other AI apps in Edge, Chrome, and Firefox. This policy covers all users and groups in your org in test mode. Learn more about adaptive protection in Data Loss Prevention.
2) Block elevated risk users from submitting prompts to AI apps in Microsoft Edge – creates a Microsoft Purview browser data loss prevention (DLP) policy that, using adaptive protection, blocks elevated, moderate, and minor risk users attempting to submit information to other AI apps in Microsoft Edge. This integration is built into Microsoft Edge. Learn more about adaptive protection in Data Loss Prevention.
3) Block sensitive info from being sent to AI apps in Microsoft Edge – creates a Microsoft Purview browser data loss prevention (DLP) policy that detects a selection of common sensitive information types inline and blocks prompts containing them from being sent to AI apps in Microsoft Edge. This integration is built into Microsoft Edge.
Once you have created policies from the Recommendations tab, go to the Policies tab to review and manage all the policies you have created across your organization in one centralized place, and to edit the policies or investigate alerts associated with them in the relevant solution. Note that additional policies not created from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies that help secure and govern all AI apps.

Conclusion
Microsoft Purview DSPM for AI can help you discover, protect, and govern interactions with AI applications across Microsoft Copilot experiences, enterprise AI apps, and other AI apps. We recommend you review the Reports in DSPM for AI routinely to discover any new interactions that may be of concern, and create policies to secure and govern those interactions as necessary. We also recommend you use the Activity Explorer in DSPM for AI to review the events generated while users interact with AI, including the capability to view prompts and responses with the right permissions. We will continue to update this blog with new features that become available in DSPM for AI, so be sure to bookmark this page!

Follow-up Reading
Check out this blog on the details of each recommended policy in DSPM for AI: Microsoft Purview – Data Security Posture Management (DSPM) for AI | Microsoft Community Hub
Address oversharing concerns with the Microsoft 365 blueprint - aka.ms/Copilot/Oversharing
Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
Downloadable whitepaper: Data Security for AI Adoption | Microsoft
Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365
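As a scriptable complement to the portal steps in this article, the DLP policies that DSPM for AI creates can also be reviewed from Security & Compliance PowerShell. A minimal sketch, assuming you have connected with Connect-IPPSSession; the policy display name is taken from this article and may differ in your tenant:

```powershell
Connect-IPPSSession

# List DLP policies that include endpoint device locations, such as the
# "Detect sensitive info pasted or uploaded to AI sites" policy described above.
Get-DlpCompliancePolicy |
    Where-Object { $_.EndpointDlpLocation } |
    Select-Object Name, Mode, DistributionStatus

# Inspect the rules of one policy (the name is an assumption; adjust to your tenant)
Get-DlpComplianceRule -Policy "Detect sensitive info pasted or uploaded to AI sites" |
    Select-Object Name, Disabled, ReportSeverityLevel
```

Reviewing Mode (test vs. enforce) alongside the Reports tab is a quick way to confirm whether a recommended policy is still in audit-only mode.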
Introducing Security Dashboard for AI (Now in Public Preview)

AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions, across their entire AI estate, spanning AI platforms, apps, and agents. 53% of security professionals say their current AI risk management needs improvement, presenting an opportunity to better identify, assess, and manage risk.1 At the same time, 86% of leaders prefer integrated platforms over fragmented tools, citing better visibility, fewer alerts, and improved efficiency.2 To address these needs, we are excited to announce that the Security Dashboard for AI, previously announced at Microsoft Ignite, is now available in public preview. This unified dashboard aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview, enabling users to see left-to-right across purpose-built security tools from within a single pane of glass. The dashboard equips CISOs and AI risk leaders with a governance tool to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. Security teams can continue using the tools they trust while security leaders are empowered to govern and collaborate effectively.

Gain Unified AI Risk Visibility
Consolidating risk signals from purpose-built tools can simplify AI asset visibility and oversight, increase security teams’ efficiency, and reduce the opportunity for human error. The Security Dashboard for AI provides leaders with unified AI risk visibility by aggregating security, identity, and data risk from Defender, Entra, and Purview into a single interactive dashboard. The Overview tab provides an AI risk scorecard, giving immediate visibility into where there may be risks for security teams to address. It also assesses an organization's implementation of Microsoft security for AI capabilities and provides recommendations for improving AI security posture. The dashboard also features an AI inventory with comprehensive views to support AI asset discovery, risk assessment, and remediation actions, for broad coverage of AI agents, models, MCP servers, and applications. The dashboard covers all Microsoft AI solutions supported by Entra, Defender, and Purview—including Microsoft 365 Copilot, Microsoft Copilot Studio agents, and Microsoft Foundry applications and agents—as well as third-party AI models, applications, and agents, such as Google Gemini, OpenAI ChatGPT, and MCP servers. This supports comprehensive visibility and control, regardless of where applications and agents are built.

Prioritize Critical Risk with Security Copilot’s AI-Powered Insights
Risk leaders must do more than recognize existing risks—they also need to determine which ones pose the greatest threat to their business. The dashboard provides a consolidated view of AI-related security risks and leverages Security Copilot’s AI-powered insights to help surface the most critical risks in an environment. For example, Security Copilot’s natural language interaction improves agent discovery and categorization, helping leaders identify unmanaged and shadow AI agents to enhance security posture.
Furthermore, Security Copilot allows leaders to investigate AI risks and agent activities through prompt-based exploration, putting them in the driver’s seat for additional risk investigation.

Drive Risk Mitigation
By streamlining risk mitigation recommendations and automating task delegation, organizations can significantly improve the efficiency of their AI risk management processes. This approach can reduce hidden AI risk and accelerate compliance efforts, helping to ensure that risk mitigation is timely and accurate. To support this, the Security Dashboard for AI evaluates how organizations put Microsoft’s AI security features into practice and offers tailored suggestions to strengthen AI security posture. It leverages Microsoft’s productivity tools for immediate action within the practitioner portal, making it easy for administrators to delegate recommendation tasks to designated users. With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, the Security Dashboard for AI is included with eligible Microsoft security products customers already use. If an organization is already using Microsoft security products to secure AI, it is already a Security Dashboard for AI customer.

Getting Started
Existing Microsoft Security customers can start using the Security Dashboard for AI today. It is included when a customer has the Microsoft Security products—Defender, Entra, and Purview—with no additional licensing required. To begin, visit http://ai.security.microsoft.com or access the dashboard from the Defender, Entra, or Purview portals. Learn more about the Security Dashboard for AI on Microsoft Learn.

1 AuditBoard & Ascend2 Research. The Connected Risk Report: Uniting Teams and Insights to Drive Organizational Resilience. AuditBoard, October 2024.
2 Microsoft. 2026 Data Security Index: Unifying Data Protection and AI Innovation. Microsoft Security, 2026.
Email to external (trusted) users without requiring identity verification (Google sign-in or one-time passcode)

Dear experts and community, I am getting started with Microsoft Purview Data Loss Prevention. I have one point to clarify and would appreciate your advice, comments, or shared good practices on the following:
- First, when we send email containing sensitive information to an external user, it is encrypted or blocked (result: works as expected). If the mail is encrypted, the external recipient must verify their identity by signing in with a Google account or with a one-time passcode.
- Second, we plan to send email to certain external users (trusted users/domains only). Is it possible to avoid requiring these users to re-verify their identity again and again? If yes, how can we do it? If not, why not?
Any updates and support are much appreciated. Thanks,
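For readers with a similar question, one possible direction (not a confirmed answer to this post) is to add a recipient-domain exception to the DLP rule that applies the encryption, so mail to named trusted domains skips that action entirely. A minimal Security & Compliance PowerShell sketch; the rule name and domain are illustrative placeholders, and whether this fits depends on how the encryption is actually applied (DLP rule vs. mail flow rule):

```powershell
Connect-IPPSSession

# Illustrative only: add a recipient-domain exception to an existing DLP rule
# so mail to a trusted partner domain is not encrypted or blocked.
# "External Sharing Rule" and "trustedpartner.com" are placeholders.
Set-DlpComplianceRule -Identity "External Sharing Rule" `
    -ExceptIfRecipientDomainIs @("trustedpartner.com")
```

Note that recipients in organizations federated with Microsoft Entra ID generally sign in with their own work account rather than a one-time passcode, which may also reduce the repeated-verification friction described above.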
New Microsoft Purview Deployment Blueprint | Lightweight guide to mitigate data leakage

We’re excited to share our latest data security deployment blueprint, “Lightweight guide to mitigate data leakage”—a practical resource designed to help organizations quickly enable core data security features across their Microsoft 365 estate with minimal setup. The blueprint follows a good/better/best model that maps protections to your licensing. “Good” highlights foundational features included in Business Premium SKUs, while “Better” and “Best” layer in advanced E5 Compliance capabilities such as auto-labeling, Endpoint DLP, insider risk signals, and much more. With the new E5 Compliance add-on for Business Premium, this guide shows how organizations can capture quick wins today while building toward stronger, long-term security practices. This blueprint is designed for IT administrators, security teams, and compliance stakeholders tasked with protecting sensitive data, and it’s equally valuable for Microsoft partners and consultants supporting customers on their data security journey. Whether you’re enabling basic safeguards or advancing toward automated protection, this guide provides clear, actionable steps to strengthen your data security posture. Ready to get started? Visit our Purview deployment blueprint page or jump straight to the direct PPT link for a step-by-step walkthrough. Securing your data doesn’t have to be complex—this lightweight blueprint makes it achievable for organizations of any size.
Authorization and Identity Governance Inside AI Agents

Designing Authorization-Aware AI Agents: Enforcing Microsoft Entra ID RBAC in Copilot Studio
As AI agents move from experimentation to enterprise execution, authorization becomes the defining line between innovation and risk. AI agents are rapidly evolving from experimental assistants into enterprise operators—retrieving user data, triggering workflows, and invoking protected APIs. While many early implementations rely on prompt-level instructions to control access, regulated enterprise environments require authorization to be enforced by identity systems, not language models. This article presents a production-ready, identity-first architecture for building authorization-aware AI agents using Copilot Studio, Power Automate, Microsoft Entra ID, and Microsoft Graph, ensuring every agent action executes strictly within the requesting user’s permissions.

Why Prompt-Level Security Is Not Enough
Large language models interpret intent—they do not enforce policy. Even the most carefully written prompts cannot:
Validate Microsoft Entra ID group or role membership
Reliably distinguish delegated user identity from application identity
Enforce deterministic access decisions
Produce auditable authorization outcomes
Relying on prompts for authorization introduces silent security failures, over-privileged access, and compliance gaps—particularly in financial services, healthcare, and other regulated industries. Authorization is not a reasoning problem. It is an identity enforcement problem.

Common Authorization Anti-Patterns in AI Agents
The following patterns frequently appear in early AI agent implementations and should be avoided in enterprise environments:
Hard-coded role or group checks embedded in prompts
Trusting group names passed as plain-text parameters
Using application permissions for user-initiated actions
Skipping verification of the user’s Entra ID identity
Lacking an auditable authorization decision point
These approaches may work in demos, but they do not survive security reviews, compliance audits, or real-world misuse scenarios.

Authorization-Aware Agent Architecture
In an authorization-aware design, the agent never decides access. Authorization is enforced externally, by identity-aware workflows that sit outside the language model’s reasoning boundary.
High-Level Flow
1. The Copilot Studio agent receives a user request
2. The agent passes the User Principal Name (UPN) and intended action
3. A Power Automate flow validates permissions using Microsoft Entra ID via Microsoft Graph
4. Only authorized requests are allowed to proceed
5. Unauthorized requests fail fast with a deterministic outcome
Authorization-aware Copilot Studio architecture enforces Entra ID RBAC before executing any business action. The agent orchestrates intent. Identity systems enforce access.

Enforcing Entra ID RBAC with Microsoft Graph
Power Automate acts as the authorization enforcement layer:
1. Resolve user identity from the supplied UPN
2. Retrieve group or role memberships using Microsoft Graph
3. Normalize and compare memberships against approved RBAC groups
4. Explicitly deny execution when authorization fails
This keeps authorization logic centralized, deterministic, auditable, and independent of the AI model.

Reference Implementation: Power Automate RBAC Enforcement Flow
The following import-ready Power Automate cloud flow demonstrates a secure RBAC enforcement pattern for Copilot Studio agents. It validates Microsoft Entra ID group membership before allowing any business action.
Scenario
Trigger: User-initiated agent action
Identity model: Delegated user identity
Input: userUPN, requestedAction
Outcome: Authorized or denied based on Entra ID RBAC

{ "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#", "contentVersion": "1.0.0.0", "triggers": { "Copilot_Request": { "type": "Request", "kind": "Http", "inputs": { "schema": { "type": "object", "properties": { "userUPN": { "type": "string" }, "requestedAction": { "type": "string" } }, "required": [ "userUPN" ] } } } }, "actions": { "Get_User_Groups": { "type": "Http", "inputs": { "method": "GET", "uri": "https://graph.microsoft.com/v1.0/users/@{triggerBody()?['userUPN']}/memberOf?$select=displayName", "authentication": { "type": "ManagedServiceIdentity" } } }, "Normalize_Group_Names": { "type": "Select", "inputs": { "from": "@body('Get_User_Groups')?['value']", "select": { "groupName": "@toLower(item()?['displayName'])" } }, "runAfter": { "Get_User_Groups": [ "Succeeded" ] } }, "Check_Authorization": { "type": "Condition", "expression": "@contains(string(body('Normalize_Group_Names')), 'ai-authorized-users')", "runAfter": { "Normalize_Group_Names": [ "Succeeded" ] }, "actions": { "Authorized_Action": { "type": "Compose", "inputs": "User authorized via Entra ID RBAC" } }, "else": { "actions": { "Access_Denied": { "type": "Terminate", "inputs": { "status": "Failed", "message": "Access denied. User not authorized via Entra ID RBAC." } } } } } } }

This pattern enforces authorization outside the agent, aligns with Zero Trust principles, and creates a clear audit boundary suitable for enterprise and regulated environments.

Flow Diagram: Agent Integrated with RBAC Authorization Flow and Sample Prompt Execution

Delegated vs Application Permissions
Scenario | Recommended permission model
User-initiated agent actions | Delegated permissions
Background or system automation | Application permissions
Using delegated permissions ensures agent execution remains strictly within the requesting user’s identity boundary.

Auditing and Compliance Benefits
Deterministic and explainable authorization decisions
Centralized enforcement aligned with identity governance
Clear audit trails for security and compliance reviews
Readiness for SOC, ISO, PCI, and FSI assessments

Enterprise Security Takeaways
Authorization belongs in Microsoft Entra ID, not prompts
AI agents must respect enterprise identity boundaries
Copilot Studio + Power Automate + Microsoft Graph enable secure-by-design AI agents
By treating AI agents as first-class enterprise actors and enforcing authorization at the identity layer, organizations can scale AI adoption with confidence, trust, and compliance.
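For teams that prefer a single transitive membership check over enumerating memberOf, Microsoft Graph also exposes a checkMemberGroups action. A minimal sketch using the Microsoft Graph PowerShell SDK, intended as a complement to the flow above rather than a replacement; the UPN and group object ID are illustrative placeholders:

```powershell
# Requires the Microsoft.Graph PowerShell module
Connect-MgGraph -Scopes "User.Read.All", "Group.Read.All"

# Placeholders: the user being checked and the object ID of the
# "ai-authorized-users" security group in your tenant.
$userUpn = "adele@contoso.com"
$body    = @{ groupIds = @("00000000-0000-0000-0000-000000000000") } | ConvertTo-Json

# checkMemberGroups evaluates transitive membership and returns the subset
# of the supplied group IDs that the user actually belongs to.
$matched = Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/users/$userUpn/checkMemberGroups" `
    -Body $body -ContentType "application/json"

if ($matched.value.Count -gt 0) { "Authorized via Entra ID RBAC" } else { "Access denied" }
```

Comparing against group object IDs rather than display names also avoids the normalization step and the risk of two groups sharing a similar name.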
Microsoft Extends DLP Policy for Copilot Protection to All Storage Locations

Microsoft has enhanced the DLP policy for Copilot to cover Office files held in any storage location, instead of only Microsoft 365 locations like SharePoint Online and OneDrive for Business. The change is made in the Office augmentation loop, a little-known internal component that coordinates apps’ use of connected experiences. Extending the DLP policy to cover all locations makes perfect sense. https://office365itpros.com/2026/02/24/dlp-policy-for-copilot-storage/
Securing the Modern Workplace: Transitioning from Legacy Authentication to Conditional Access

Authored by: Gonzalo Brown Ruiz, Senior Microsoft 365 Engineer & Cloud Security Specialist
Date: July 2025

Introduction
In today’s threat landscape, legacy authentication is one of the weakest links in enterprise security. Protocols like POP, IMAP, SMTP Basic, and MAPI are inherently vulnerable — they don’t support modern authentication methods like MFA and are frequently targeted in credential stuffing and password spray attacks. Despite the known risks, many organizations still allow legacy authentication to persist for “just one app” or “just a few users.” This article outlines a real-world, enterprise-tested strategy for eliminating legacy authentication and implementing a Zero Trust-aligned Conditional Access model using Microsoft Entra ID.

Why Legacy Authentication Must Die
No support for MFA: enables attackers to bypass the most critical security control
Password spray heaven: common vector for brute-force and scripted login attempts
Audit blind spots: limited logging and correlation in modern SIEM tools
Blocks Zero Trust progress: hinders enforcement of identity- and device-based policies
Removing legacy auth isn’t a nice-to-have — it’s a prerequisite for a modern security strategy.

Phase 1: Auditing Your Environment
A successful transition starts with visibility. Before blocking anything, I led an environment-wide audit to identify:
All sign-ins using legacy protocols (POP, IMAP, SMTP AUTH, MAPI)
App IDs and service principals requesting basic auth
Users with outdated clients (Office 2010/2013)
Devices and applications integrated via PowerShell, Azure sign-in logs, and workbooks
Tools used: Microsoft 365 sign-in logs, the Conditional Access insights workbook, and PowerShell (Get-SignInLogs, Get-CASMailbox, etc.). A hedged PowerShell sketch of this audit step appears at the end of this article.

Phase 2: Policy Design and Strategy
The goal is not just to block — it’s to transform authentication securely and gradually. My Conditional Access strategy included:
Blocking legacy authentication protocols while allowing scoped exceptions
Report-only mode to assess potential impact
Role-based access rules (admins, execs, vendors, apps)
Geo-aware policies and MFA enforcement
Service account handling and migration to Graph or Modern Auth-compatible apps
Key considerations:
Apps that support legacy auth only
Delegate and shared mailbox access scenarios
BYOD and conditional registration enforcement

Phase 3: Staged Rollout and Enforcement
A phased approach reduced friction:
Pilot group enforcement (IT, InfoSec, willing users)
Report-only monitoring across business units
Clear communications to stakeholders and impacted users
User education campaigns on legacy app retirement
Gradual enforcement by department, geography, or risk tier
We used Microsoft Entra’s built-in messaging and Service Health alerts to notify users of policy triggers.

Phase 4: Monitoring, Tuning, and Incident Readiness
Once policies were in place, we:
Monitored sign-in logs for policy match rates and unexpected denials
Used Microsoft Defender for Identity to correlate legacy sign-in attempts
Created alerts and response playbooks for blocked sign-in anomalies
Results:
100% of all user and app traffic transitioned to Modern Auth
Drastic reduction in brute-force traffic from foreign IPs
Fewer support tickets around password lockouts and MFA prompts

Lessons Learned
Report-only mode is your best friend. It avoids surprise outages.
Communication beats configuration. Even a perfect policy fails if users are caught off guard.
Legacy mail clients still exist in vendor tools and old mobile apps.
Service accounts can break silently. Replace or modernize them early.
CA exclusions are dangerous. Every exception must be time-bound and documented.

Conclusion
Eliminating legacy authentication is not just a policy update — it’s a cultural shift toward Zero Trust. By combining deep visibility, staged enforcement, and a user-centric approach, organizations can securely modernize their identity perimeter. Microsoft Entra Conditional Access is more than a policy engine — it is the architectural pillar of enterprise-grade identity security.

Author’s Note: This article is based on my real-world experience designing and enforcing Conditional Access strategies across global hybrid environments with Microsoft 365 and Azure AD/Entra ID.

Copyright © 2025 Gonzalo Brown Ruiz. All rights reserved.
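As referenced in Phase 1 above, here is a minimal sketch of one way to surface mailboxes that still allow legacy protocols, using Exchange Online PowerShell. It assumes the ExchangeOnlineManagement module is installed and is a starting point rather than a complete audit:

```powershell
# Requires the ExchangeOnlineManagement module
Connect-ExchangeOnline

# Find mailboxes that still allow POP, IMAP, or SMTP AUTH.
# A blank SmtpClientAuthenticationDisabled value means the mailbox follows
# the organization default, so it is flagged here as potentially exposed.
Get-CASMailbox -ResultSize Unlimited |
    Where-Object {
        $_.PopEnabled -or $_.ImapEnabled -or (-not $_.SmtpClientAuthenticationDisabled)
    } |
    Select-Object DisplayName, PopEnabled, ImapEnabled, SmtpClientAuthenticationDisabled |
    Sort-Object DisplayName
```

Pair this with the Entra sign-in logs filtered to legacy authentication clients to see which of these mailboxes are actually still signing in over the old protocols.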