Welcome to the Microsoft Security Community!
Microsoft Security Community Hub | Protect it all with Microsoft Security

Eliminate gaps and get the simplified, comprehensive protection, expertise, and AI-powered solutions you need to innovate and grow in a changing world. The Microsoft Security Community is your gateway to connect, learn, and collaborate with peers, experts, and product teams. Gain access to technical discussions and webinars, and help shape Microsoft’s security products.

Get there fast
To stay up to date on upcoming opportunities and the latest Microsoft Security Community news, subscribe to our email list. Find the latest skilling content and on-demand videos by subscribing to the Microsoft Security Community YouTube channel. Catch the latest announcements and connect with us on LinkedIn: Microsoft Security Community and Microsoft Entra Community. Read the latest in the Microsoft Security Community blog.

Upcoming Community Calls

April 2026
Apr. 23 | 8:00am | Security Copilot Skilling Series | Getting started with Security Copilot
New to Security Copilot? This session walks through what you actually need to get started, including E5 inclusion requirements and a practical overview of the core experiences and agents you will use on day one.

RESCHEDULED Apr. 28 | 8:00am | Security Copilot Skilling Series | Security Copilot Agents, DSPM AI Observability, and IRM for Agents
This session gives an overview of how Microsoft Purview supports AI risk visibility and investigation through Data Security Posture Management (DSPM) and Insider Risk Management (IRM), alongside Security Copilot–powered agents. It covers AI Observability in DSPM as well as IRM for Agents in Copilot Studio and Azure AI Foundry. Attendees will learn about the IRM Triage Agent and the DSPM Posture Agent and how to deploy them.
Attendees will gain an understanding of how DSPM and IRM capabilities can be leveraged to improve visibility, context, and response for AI-related data risks in Microsoft Purview.

Apr. 30 | 8:00am | Microsoft Security Community Presents | Purview Lightning Talks
Join the Microsoft Security Community for Purview Lightning Talks: quick technical sessions delivered by the community, for the community. You’ll pick up practical Purview gems: must-know Compliance Manager tips, smart data security tricks, real-world scenarios, and actionable governance recommendations, all in one energizing event. Hear directly from Purview customers, partners, and community members and walk away with ideas you can put to work immediately. Register now; full agenda coming soon!

May 2026
May 12 | 9:00am | Microsoft Sentinel | Hyper scale your SOC: Manage delegated access and role-based scoping in Microsoft Defender
In this session we'll discuss unified role-based access control (RBAC) and granular delegated admin privileges (GDAP) expansions, including:

How to use RBAC to
- Allow multiple SOC teams to operate securely within a shared Sentinel environment
- Support granular, row-level access without requiring workspace separation
- Get consistent and reusable scope definitions across tables and experiences

How to use GDAP to
- Manage MSSPs and hyper-scaler organizations with delegated access to governed tenants within the Defender portal
- Manage delegated access for Sentinel

Looking for more? Join the Security Advisors!
As a Security Advisor, you’ll gain early visibility into product roadmaps, participate in focus groups, and access private preview features before public release. You’ll have a direct channel to share feedback with engineering teams, influencing the direction of Microsoft Security products. The program also offers opportunities to collaborate and network with fellow end users and Microsoft product teams.
Join the Security Advisors program that best fits your interests: www.aka.ms/joincommunity.

Additional resources
Microsoft Security Hub on Tech Community
Virtual Ninja Training Courses
Microsoft Security Documentation
Azure Network Security GitHub
Microsoft Defender for Cloud GitHub
Microsoft Sentinel GitHub
Microsoft Defender XDR GitHub
Microsoft Defender for Cloud Apps GitHub
Microsoft Defender for Identity GitHub
Microsoft Purview GitHub

Security Review for Microsoft Edge version 147
We have reviewed the new settings in Microsoft Edge version 147 and determined that there are no additional security settings that require enforcement. The Microsoft Edge version 139 security baseline continues to be our recommended configuration, which can be downloaded from the Microsoft Security Compliance Toolkit. Microsoft Edge version 147 introduced 9 new Computer and User settings; we have included a spreadsheet listing the new settings to make it easier for you to find them.

Version 147 introduced the "Control the availability of the XSLT feature" policy (XSLTEnabled). This policy exists to support enterprise testing and transition scenarios while the Chromium project works toward deprecating and removing XSLT support from the browser due to security concerns associated with this legacy feature. XSLT support in modern browsers represents a disproportionate attack surface, and upstream Chromium has announced plans to disable and ultimately remove XSLT in a future release. As a result, organizations should treat continued reliance on client‑side XSLT as technical debt and plan migration accordingly. Additional details can be found here. Organizations are encouraged to proactively test setting XSLTEnabled = Disabled to identify application dependencies and remediation requirements ahead of any future default changes or removal of the feature.

As a friendly reminder, all available settings for Microsoft Edge are documented here, and all available settings for Microsoft Edge Update are documented here. Please continue to give us feedback through the Security Baselines Discussion site or this post.

Security Community Spotlight: Fabrício Assumpção
Meet Fabrício Assumpção, a Technical Specialist Architect for a Microsoft Security and Compliance Certified Partner, based in Brazil. Fabrício describes his involvement with the Microsoft Security Community as defined by a dual approach: architectural innovation and technical enablement. As a Microsoft Certified Trainer (MCT) since 2021, he has been dedicated to bridging the gap between theory and real-world implementation for security professionals globally.

What do you find most rewarding about being a member of the Microsoft Security Community?
The most rewarding part of being a member of the Microsoft Security Community is the direct access to the pulse of cybersecurity innovation. As a Microsoft Certified Trainer (MCT) and a developer/engineer/architect focused on Cloud Security/M365 Security and SIEM, being in this ecosystem allows me to bridge the gap between complex architectural challenges and AI-driven solutions. Developing security agents for Microsoft Security Copilot is particularly fulfilling because I can see how the community’s collective knowledge shapes the future of automated defense. For me, it’s not just about the tools, but about being part of a global movement that empowers defenders to stay ahead of sophisticated threats through intelligence and automation.

How would you describe your Microsoft Community involvement?
In my role as a Security Architect and Engineer at adaQuest, I advocate for Microsoft’s vision by designing and deploying complex security infrastructures. My work spans the entire Microsoft Security stack, from high-level XDR (Microsoft Defender) strategies and SIEM (Microsoft Sentinel) deployments to the cutting edge of AI-driven defense. Currently, alongside my other activities, I'm focused on developing custom security agents for Microsoft Security Copilot, a task that allows me to push the boundaries of how automation and AI can empower modern SOCs.
While my primary involvement has been focused on technical architecture and developing Security Copilot agents, my ideal community experience would be centered on deep-tier technical co-creation. I envision a community space that facilitates direct architectural dialogues between Microsoft product teams and the engineers who are building on top of those platforms. For me, the most valuable community experience is one that prioritizes 'early-access' feedback loops and specialized hackathons where we can stress-test new features—like advanced XDR integrations or AI agent capabilities—before they hit the mainstream. My ideal is a community that functions as a high-octane R&D hub, where the collective expertise of architects and developers directly influences the roadmap of the security tools we use every day.

Editor’s note: The scenario Fabrício describes above is much like the Security Advisors program, which gives you early access to products, features, and private previews. Your feedback to engineering has the power to directly influence Microsoft Security products. If this interests you, consider joining!

How long have you been working with Microsoft Security products?
My Microsoft security journey is a story of evolution—from a cloud support engineer resolving complex L3/L4 infrastructure issues to a Security Architect leading global SOC operations. I have spent the last decade mastering the transition to the cloud, starting with identity and endpoint management (Entra ID and Intune) and progressing to end-to-end administration of the Microsoft 365 and Azure security stack. A turning point was joining adaQuest, where I took the lead on SOCaaS and began bridging the gap between governance and hands-on engineering with Sentinel. Today, my journey has reached its most exciting phase: pioneering the use of Generative AI in security to build scalable, automated solutions that protect clients worldwide.

What features or products have provided the most impact?
Please describe how it has helped you or your customers.
The most impactful solution has been the integration of Microsoft Sentinel with Security Copilot through custom-developed security agents. This combination has revolutionized how our customers manage their security posture, allowing them to orchestrate and query the entire Defender XDR, Entra ID, and Purview stack through natural language automation. The most direct benefit for our clients has been a drastic reduction in Mean Time to Respond (MTTR) and a significant increase in operational efficiency, transforming complex security data into proactive defense. This unified approach ensures that our customers maximize their investment in the Microsoft ecosystem while maintaining high-speed resilience against sophisticated threats.

You’ve indeed been instrumental in building with Microsoft Security. What can you share with us, and can you tell us about your journey?
I am incredibly proud of being a pioneer in the Microsoft Security Copilot ecosystem. In early 2025, before official documentation was fully available or the feature had reached General Availability (GA), I conceptualized and developed six custom security agents designed to enhance automated defense and incident response. These agents were the result of a deep dive into the underlying architecture of AI-driven security, where I had to materialize complex ideas into functional, real-world tools without a predefined roadmap. My work was officially showcased and published during the historic announcement of the Microsoft Security Store in 2025, marking the debut of third-party security agents. Seeing these agents evolve from initial concepts to essential tools for the SOC of the future—enabling faster, more intelligent decision-making—is my most rewarding professional achievement. It represents my commitment to pushing the boundaries. Fabricio’s agents are available in the Microsoft Security Store.
Here’s what he’s built (so far…)

Admin Guard Insight
An agent focused on privileged identity and access analysis. It reviews administrative roles, sensitive changes, and risk signals to identify exposure, misuse of privileges, and opportunities to strengthen security posture.

Login Investigator
An agent designed to investigate suspicious sign-in activity. It correlates authentication details, IPs, locations, devices, user risk, and related incidents to determine whether a login is legitimate or potentially malicious.

Entity Guard
An entity-centric investigation agent for users, devices, applications, or service principals. It consolidates signals from multiple sources to enrich entity context and identify abnormal behavior, exposure, and associated risks.

Data Leak Agent
An agent specialized in investigating potential data leakage and sensitive information exposure. It validates and correlates incidents across Microsoft Defender XDR and Microsoft Sentinel to produce a more reliable and contextualized investigation.

L1 SOC Triage
An agent built to support first-level SOC alert and incident triage. It helps classify events, enrich context, prioritize severity, and recommend next steps or escalation paths for analysts.

Ransomware Kill Chain Investigator
An agent focused on ransomware investigations. It correlates evidence and maps observed activity to the ransomware kill chain to help teams understand the attack, impacted assets, and priority response actions.

EWS Sunset Readiness Assessor
An agent that assesses an organization’s readiness for Exchange Web Services (EWS) deprecation. It identifies application and service principal dependencies and supports planning for migration to more modern and secure alternatives.

What impact has integrating with Microsoft Security had on your business or your customers?
Integrating with Microsoft Security has had a significant impact on both our business and our customers.
For our business, it has enabled us to build higher-value security services and differentiated solutions, such as Security Copilot agents tailored to real operational challenges in identity protection, incident triage, data leakage investigations, ransomware analysis, and legacy dependency assessments. For our customers, the impact has been improved speed, consistency, and depth in security operations. By leveraging Microsoft Security signals and platforms such as Microsoft Defender, Microsoft Sentinel, and Entra, we help teams investigate incidents faster, reduce manual effort, improve decision-making, and strengthen overall security posture. In practice, this means customers gain more actionable insights, better prioritization, and more efficient use of their security resources.

What advice do you have for others who would like to get involved in the Microsoft Community?
My advice is to bridge the gap between learning and building. Don’t just consume content; start creating solutions for real-world challenges, such as AI-driven automation in Security Copilot or Microsoft Sentinel. Use your practical experience to help others, and remember that teaching is one of the most powerful ways to contribute. In an era of rapid AI evolution, being a proactive 'early adopter' who shares insights is the best way to grow within the Microsoft Community and help protect the global digital landscape.

Fabrício beyond Microsoft Security
Beyond my technical career, I am a lifelong learner with a deep passion for understanding how the world works, from the complexities of Quantum Computing—which I studied at the University of Coimbra—to the fundamental principles of Physics, Astronomy, and Philosophy. I am currently pursuing two Master’s degrees, as I believe that diverse knowledge fuels creativity. I am also a polyglot at heart, teaching myself Italian, Spanish, Russian, and Chinese using open-source materials.
My creative side is expressed through music, as I play both the violin and the piano. In my spare time, I enjoy the discipline of sports; I have a history as both a player and coach of Rugby, and I am a fan of Ice Hockey. My future plans include completing my Doctorate and embracing a nomadic lifestyle to experience different cultures and perspectives. For me, life is about the continuous pursuit of wisdom and the belief that we can always expand the boundaries of our own understanding. Connect with Fabrício on LinkedIn.

____________________________________________________________________________________________

Learn and Engage with the Microsoft Security Community
Log in and follow this Microsoft Security Community Blog. Follow = Click the heart in the upper right when you're logged in 🤍. Join the Microsoft Security Community and be notified of upcoming events, product feedback surveys, and more. Get early access to Microsoft Security products and provide feedback to engineers by joining the Microsoft Security Advisors. Join the Microsoft Security Community LinkedIn Group and follow the Microsoft Entra Community on LinkedIn.

Enterprise Cybersecurity in the Age of AI: Why Legacy Security Is Failing as Attackers Move Faster
Cybersecurity has always been an asymmetric game. But with the rise of AI‑enabled attacks, that imbalance has widened dramatically. Microsoft Threat Intelligence and Microsoft Defender Security Research have publicly reported a clear shift in how attackers operate: AI is now being embedded across the entire attack lifecycle. Threat actors are using it to accelerate reconnaissance, generate highly targeted phishing at scale, automate infrastructure, and adapt their techniques in real time - reducing the time and effort required to move from initial access to impact. In recent months, Microsoft has documented AI‑enabled phishing campaigns abusing legitimate authentication mechanisms - including OAuth and device‑code flows - to compromise enterprise accounts at scale. These campaigns rely on automation, dynamic code generation, and highly personalised lures, rather than on stealing passwords or exploiting traditional vulnerabilities.

Meanwhile, many large enterprises are still defending themselves with security controls designed for a very different threat model - one rooted in predictability, static signatures, and trusted perimeters. These approaches were built to stop repeatable attacks, not adversaries that continuously adapt and blend into normal business activity. The result is a dangerous gap: highly adaptive attackers versus static, legacy defences. Below are some of the most common outdated security practices still widely used by enterprises today - and why they are no longer sufficient against modern, AI‑driven threats.

1. Signature‑Based Antivirus
Traditional antivirus solutions rely on known signatures and hashes, assuming malware looks the same each time it is deployed. AI has completely broken that assumption. Modern malware families now automatically mutate their code, generate new variants on execution, and adapt behaviour based on the environment they encounter.
Microsoft Threat Intelligence has observed multiple actors using AI‑assisted tooling to rapidly rewrite payload components during development and testing, making each deployment look subtly different. In this model, there is no stable signature to detect. By the time a pattern exists, the attacker has already iterated past it. Signature‑based detection is not just slow - it is structurally mismatched to how modern threats operate.

What to adopt instead
Shift from artifact‑based detection to behavior‑based endpoint protection:
- EDR/XDR platforms that analyse process behaviour, memory activity, and execution chains
- Machine‑learning models trained on what attackers do, not what binaries look like
- Continuous monitoring with automated response, not one‑time blocking

2. Firewalls
Many enterprises still rely on firewalls that enforce static allow/deny rules based on ports and IP addresses. That approach worked when applications were predictable and networks were clearly segmented. Today, traffic is encrypted, cloud‑based, API‑driven, and deeply intertwined with legitimate SaaS and identity services. Recent AI‑assisted phishing campaigns abusing legitimate OAuth and device‑code authentication flows illustrate this perfectly. From a network perspective, everything looks allowed: HTTPS traffic to trusted identity providers. There is no suspicious port, no malicious domain, no obvious anomaly - yet the attacker successfully hijacks the authentication process itself.

What to adopt instead
Move from perimeter controls to identity‑ and context‑aware network security:
- Application‑aware firewalls with behavioural and risk‑based inspection
- Integration with identity signals (user, device, location, risk score)
- Continuous evaluation of sessions, not one‑time allow/deny decisions

In modern environments, identity is the new control plane.

3. Single‑Factor Authentication
Despite years of guidance, single‑factor passwords remain common - especially for legacy applications, VPN access, and service accounts. AI‑powered credential abuse changes the economics of these attacks entirely. Threat actors now operate credential‑stuffing and phishing campaigns that adapt lures in real time, testing millions of combinations with minimal cost. In multiple Microsoft‑observed campaigns, attackers didn’t brute‑force access broadly. Instead, they used AI to identify which compromised identities were financially or operationally valuable - executives, payroll, procurement - and focused only on those accounts.

What to adopt instead
Replace static authentication with phishing‑resistant, risk‑based identity controls:
- Phishing‑resistant MFA (hardware‑backed or passkeys)
- Conditional access based on user behaviour, device health, and risk
- Continuous authentication instead of a single login event

4. VPN‑Centric Security
VPNs were designed to extend the corporate network to remote users, based on the assumption that “inside” meant trustworthy. That assumption no longer holds. AI‑assisted attacks increasingly exploit VPN access post‑compromise. Once credentials are obtained, automation is used to map internal resources, identify privilege escalation paths, and move laterally - often without triggering traditional alerts. In parallel, Microsoft has observed nation‑state actors using AI to create highly convincing fake employee personas, complete with AI‑generated resumes, consistent communication styles, and synthetic media, allowing them to pass hiring and onboarding processes and gain long‑term, trusted access. In these scenarios, VPN access is not breached - it is granted.
What to adopt instead
Transition from network trust to Zero Trust access models:
- Identity‑based access to applications, not networks
- Least‑privilege, per‑app/user/service access instead of broad internal connectivity
- Continuous verification using behavioural signals

In modern enterprises, access should be explicit, scoped, and continuously re‑evaluated.

5. Treating Unencrypted Data as “Low‑Risk”
It is still common to find sensitive data stored unencrypted in older databases, file shares, and backups. In an AI‑driven threat landscape, data discovery is no longer manual or slow. After compromise, attackers increasingly use AI as an on‑demand analyst - summarizing directory structures, classifying stolen datasets, and prioritizing what matters most for impact or monetization. Unencrypted data dramatically lowers the cost and consequence of breach activity, turning what could have been a limited incident into a full‑scale exposure.

What to adopt instead
Shift from passive data storage to data‑centric security:
- Encryption by default, both at rest and in transit
- Data classification and sensitivity labeling built into platforms
- Access controls tied to data sensitivity, not just system location
- Begin preparing for post‑quantum cryptography (PQC) as part of a long‑term data protection and crypto‑agility strategy

6. Intrusion Detection Systems (IDS) Built on Known Patterns
Traditional IDS platforms look for known indicators of compromise - assuming attackers reuse the same tools and techniques. AI‑driven attacks deliberately avoid that assumption. Microsoft Threat Intelligence reports actors using large language models to quickly analyse publicly disclosed vulnerabilities, understand exploitation paths, and compress the time between disclosure and weaponization. This isn’t about zero‑days - it’s about speed. What once took days or weeks now takes hours. Legacy IDS platforms often fail silently in these scenarios, detecting only what they already know how to recognize.

What to adopt instead
Move from static detection to adaptive, correlation‑based threat detection:
- Graph‑based XDR platforms correlating signals across identity, endpoint, email, cloud, and network
- Anomaly detection that focuses on deviation from normal behaviour
- Automated investigation and response to match attacker speed

Closing Thought: Security Is a Journey, Not a Destination
AI is not a future cybersecurity problem. It is a current force multiplier for attackers - and it is exposing the limits of legacy security architectures faster than many organisations are willing to admit. A realistic security strategy starts with an uncomfortable but necessary acknowledgement: no organisation can be 100% secure. Intrusions will happen. Credentials will be compromised. Controls will be tested. The difference between a resilient enterprise and a vulnerable one is not the absence of incidents, but how effectively risk is managed when they occur. In mature organisations, this means assuming breach and designing for containment. Strong access controls limit blast radius. Least privilege and conditional access reduce what an attacker can reach. Data Loss Prevention (DLP) ensures that even when access is misused, sensitive data cannot be freely exfiltrated. Just as importantly, leaders understand the business consequences of compromise - which data matters most, which systems are critical, and which risks are acceptable versus existential. As a cybersecurity architect, I see this moment as a unique opportunity. AI adoption does not have to repeat the mistakes of earlier technology waves, where innovation moved fast and security followed years later. AI gives organisations the chance to introduce a new class of service while embedding security from day one - designing access, data boundaries, monitoring, and governance into the platform before it becomes business‑critical.
When security is built in upfront, enterprises don’t just reduce risk - they gain confidence to move faster and truly leverage AI’s value. Security, especially in the age of AI, is not about preventing every intrusion. It is about controlling impact, preserving trust, and maintaining operational continuity in a world where attackers move faster than ever. In the age of AI, standing still is the same as falling behind.

References:
Inside an AI‑enabled device code phishing campaign | Microsoft Security Blog
AI as tradecraft: How threat actors operationalize AI | Microsoft Security Blog
Detecting and analyzing prompt abuse in AI tools | Microsoft Security Blog
Post-Quantum Cryptography | CSRC
Microsoft Digital Defense Report 2025 | Microsoft

Why External Users Can’t Open Encrypted Attachments in Certain Conditions & How to Fix It Securely
When Conditional Access policies enforce MFA across all cloud apps and include external users, encrypted attachments may require additional considerations. This post explains why.

This behavior applies only in environments where all of the following are true:
- Microsoft Purview encryption is used for emails and attachments
- A Conditional Access (CA) policy is configured to require MFA, apply to all cloud applications, and include guest or external users

The Situation: Email Opens, Attachment Doesn’t
When an email is encrypted using Microsoft Purview Sensitivity Labels or Information Rights Management (IRM), any attached Office document automatically inherits encryption. This inheritance is intentional, enforced by the service, and ensures consistent protection of sensitive content; it is mandatory and cannot be disabled. So far, so good. But here’s where things break for external recipients.

The Hidden Dependency: Identity & Conditional Access
Reading an encrypted email and opening an encrypted attachment are two different flows. External users can usually read encrypted emails by authenticating through:
- One-Time Passcode (OTP)
- Microsoft personal accounts
- Their own organization’s identity

However, encrypted attachments use Microsoft Rights Management Services (RMS), and RMS expects an identity the sender’s tenant can evaluate. If your organization has a global Conditional Access policy enforcing MFA for all users, applied to all cloud apps, external users can get blocked even after successful email decryption. This commonly results in errors like:

“This account does not exist in the sender’s tenant…”
AADSTS90072: The external user account does not exist in our tenant and cannot access the Microsoft Office application. The account needs to be added as an external user in the tenant or use an alternative authentication method.
When It Works (and Why It Often Doesn’t)
External access to encrypted attachments works only when one of these conditions is met:
- The sender trusts the recipient’s tenant MFA via Cross‑Tenant Access (MFA trust)
- The recipient already exists as a guest account in the sender’s tenant

In real-world scenarios, these conditions often fail:
- External recipients use consumer or non‑Entra identities
- Recipient domains are not predictable
- Guest onboarding does not scale
- Cross‑tenant trust is intentionally restricted

In such cases, Conditional Access policies designed for internal users can affect RMS evaluation for external users. So what’s the alternative?

The Practical, Secure Alternative
When the two standard access conditions (cross‑tenant trust or guest presence) cannot be met, you can refine Conditional Access evaluation without weakening encryption. The goal is not to remove MFA, but to ensure it is applied appropriately based on identity type and access path. In this scenario:
- MFA remains enforced for all internal users, including access to Microsoft Rights Management Services (RMS)
- MFA remains enforced for external users across cloud applications other than RMS

The Key Idea
Let encryption stay strong, but stop blocking external RMS authentication.
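As a rough illustration of this idea, the external-user policy can be expressed as a Microsoft Graph conditional access policy body. This is a sketch under stated assumptions, not a production policy: the display name and the report-only state are my own choices, and while the field names follow the Graph v1.0 conditionalAccessPolicy resource, you should validate the result in your tenant before enabling enforcement.

```python
import json

# App (client) ID for Microsoft Rights Management Services (from this post).
RMS_APP_ID = "00000012-0000-0000-c000-000000000000"

def external_user_mfa_policy(display_name="MFA for guests (RMS excluded)"):
    """Build a Graph conditional access policy body that keeps MFA for
    guest/external users on all cloud apps while excluding RMS."""
    return {
        "displayName": display_name,
        # Report-only mode: observe sign-in impact before enforcing.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {"includeUsers": ["GuestsOrExternalUsers"]},
            "applications": {
                "includeApplications": ["All"],
                # Stop blocking external RMS authentication.
                "excludeApplications": [RMS_APP_ID],
            },
            "clientAppTypes": ["all"],
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }

if __name__ == "__main__":
    # POST this body to:
    # https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
    print(json.dumps(external_user_mfa_policy(), indent=2))
```

Creating the policy requires the Policy.ReadWrite.ConditionalAccess permission; starting in report-only mode lets you confirm that external RMS sign-ins succeed before switching the state to enabled.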
This is achieved by:
- Keeping the existing Conditional Access policy that enforces MFA for all internal users across all cloud applications, including RMS
- Excluding guest and external users from that internal‑only policy
- Deploying a separate Conditional Access policy scoped to guest and external users that continues enforcing MFA for external users where supported, while explicitly excluding Microsoft Rights Management Services (RMS) from evaluation

RMS can be excluded from the external‑user policy by specifying the following application (client) ID:
RMS App ID: 00000012-0000-0000-c000-000000000000

Why This Is Still Secure
This approach:
✅ Keeps email and attachment encryption fully intact
✅ Leaves the internal security posture unchanged
✅ Keeps external users protected by MFA where applicable
✅ Allows external users to authenticate using supported methods
✅ Avoids over-trusting external tenants
✅ Scales for large, unpredictable recipient sets

Final Takeaway
Encrypted attachment access is governed by identity recognition and policy design, not by email encryption alone. By aligning Conditional Access with how encrypted content is evaluated, organizations can enable secure external collaboration while maintaining strong protection standards.

Securing multicloud (Azure, AWS & GCP) with Microsoft Defender for Cloud: Connector best practices
Many organizations run workloads across multiple cloud providers and need to maintain a strong security posture while ensuring interoperability. Microsoft Defender for Cloud is a cloud-native application protection platform (CNAPP) that helps secure these environments by providing unified visibility and protection for resources in AWS and GCP alongside Azure.

Planning for multicloud security with Microsoft Defender for Cloud
As customers adopt Microsoft Defender for Cloud in multicloud environments, Microsoft provides several resources to support planning, deployment, and scalable onboarding:
- Planning guides: the Multicloud Protection Planning Guide walks through key design considerations for securing multicloud environments with Microsoft Defender for Cloud.
- Deployment guides: Connect your Azure subscriptions - Microsoft Defender for Cloud.

With the right planning and adoption strategy, onboarding to Microsoft Defender for Cloud can be smooth and predictable. However, support cases show that some common challenges can still arise during or after onboarding AWS or GCP environments. Below, we walk through frequent multicloud scenarios, their symptoms, and recommended troubleshooting steps.

Common multicloud connector problems and how to resolve them

1. Problem: Removed cloud account still appears in Microsoft Defender for Cloud
The AWS/GCP account is deleted or removed from your organization, but it still appears under connected environments in Microsoft Defender for Cloud. Additionally, security recommendations for resources in the deleted account may still show up on the recommendations page.

Cause
Microsoft Defender for Cloud does not automatically delete a cloud connector when the external account is removed. The security connector in Azure is a separate object that remains unless explicitly removed. Microsoft Defender for Cloud isn’t aware that the AWS/GCP side was decommissioned, as there’s no automatic callback to Azure when an AWS account is closed.
Therefore, the connector and its last known data linger until manually removed.

Solution

Delete the connector to clean up the stale entry, using one of the following methods.

Option 1: Use the Azure portal
1. Sign in to the Azure portal.
2. Go to Microsoft Defender for Cloud > Environment settings.
3. Select the AWS account or GCP project that no longer exists.
4. Select Delete to remove the connector.

Option 2: Use the REST API
Delete the connector by using the REST API: Security Connectors - Delete - REST API (Azure Defender for Cloud).

Note: If a multicloud organization connector was set up and the organization was later decommissioned or some accounts were removed, there will be several connectors to clean up. Start by deleting the organization's management account connector, then remove any remaining child connectors. Removing connectors in this order helps prevent leftover dependencies. For additional guidance, see: What you need to know when deleting and re-creating the security connector(s) in Defender for Cloud.

2. Problem: Identity provider is missing or partially configured

After running the AWS CloudFormation template, the connector setup fails. Microsoft Defender for Cloud shows the AWS environment in an error state because the identity link between Azure and AWS is not established. On the AWS side, the CloudFormation stack exists, but the required OIDC identity provider or the IAM role trust policy that allows Microsoft Defender for Cloud to assume the role via web identity federation is missing or misconfigured.

Cause

The AWS CloudFormation template doesn't match the correct Azure subscription or tenant. This can happen if:
- You were signed in to the wrong Azure directory when generating the template.
- You deployed the template to a different AWS account than intended.

In both cases, the Azure and AWS IDs won't align, and the connector setup will fail.

Solution

Verify your Azure directory and subscription.
In the Azure portal, go to Directories + subscriptions and make sure the correct directory and subscription are selected before you set up the connector.

Clean up the incorrect configuration. In AWS, delete the CloudFormation stack and any IAM roles or identity providers it created. In Microsoft Defender for Cloud, remove the failed connector from Environment settings.

Re-create the connector. Follow the steps in Connect your Azure subscriptions - Microsoft Defender for Cloud to generate and deploy a new CloudFormation template using the correct Azure and AWS accounts.

Verify the connection. After the connection succeeds, the AWS environment shows Healthy in Microsoft Defender for Cloud. Resources and recommendations begin appearing within about an hour.

3. Problem: Duplicate security connector prevents onboarding

When an AWS or GCP connector is added in Microsoft Defender for Cloud, onboarding fails with an error that indicates another connector with the same hierarchyId already exists. In the Azure portal, the environment shows Failed, and no resources appear in Microsoft Defender for Cloud.

Cause

Microsoft Defender for Cloud allows only one connector per cloud account within the same Microsoft Entra ID tenant. The hierarchyId uniquely identifies the cloud account (for example, an AWS account ID or a GCP project ID). If the account was previously onboarded in another Azure subscription within the same tenant, you can't onboard it again until the existing connector is removed.

Solution

Find and remove the existing connector, then retry onboarding.

Step 1: Identify the existing connector
1. Sign in to the Azure portal.
2. Go to Microsoft Defender for Cloud > Environment settings.
3. Check each subscription in the same tenant for a pre-existing AWS account or GCP project connector.
If you have access, you can also query Azure Resource Graph to locate existing connectors:

resources
| where type == "microsoft.security/securityconnectors"
| project name, location, properties.hierarchyIdentifier, tenantId, subscriptionId

Step 2: Remove the duplicate connector

Delete the connector that uses the same hierarchyId. Follow the steps outlined in the previous troubleshooting scenario for deleting security connectors.

Step 3: Retry onboarding

After the connector is removed, add the AWS or GCP connector again in the target subscription. If the error persists, verify that all duplicate connectors were deleted and allow a short time for changes to propagate.

Conclusion

Microsoft Defender for Cloud supports a strong multicloud security strategy, but cloud security is an ongoing effort. Onboarding multicloud environments is only the first step. After onboarding, regularly review security recommendations, alerts, and compliance posture across all connected clouds. With the right configuration, Microsoft Defender for Cloud provides a single source of truth to maintain visibility and control as threats continue to evolve.

Further resources:
- Microsoft Defender for Cloud – Multicloud Security Planning Guide – start here to design your strategy for AWS/GCP integration, with guidance on prerequisites and best practices.
- Connect your AWS account - Microsoft Defender for Cloud.
- Connect your GCP project - Microsoft Defender for Cloud.
- Troubleshoot connectors guide - Microsoft Defender for Cloud.

We hope this guide helps you successfully onboard and maintain your AWS and GCP connectors in Microsoft Defender for Cloud. If you have any questions, feel free to leave a comment below or reach out to us on X @MSFTSecSuppTeam.

Search and Purge using Microsoft Graph eDiscovery API
Welcome back to the series of blogs covering search and purge in Microsoft Purview eDiscovery! If you are new to this series, please first visit the earlier blog post in our series, which you can find here: Search and Purge workflow in the new modern eDiscovery experience. Also, please ensure you have fully read the Microsoft Learn documentation on this topic, as I will not be covering some of the steps in full (permissions, releasing holds, all limitations): Find and delete Microsoft Teams chat messages in eDiscovery | Microsoft Learn

As a reminder, for E5/G5 customers and cases with premium features enabled, you must use the Graph API to execute the purge operation. With the eDiscovery Graph API, you have the option to create the case, create a search, generate statistics, create an item report, and issue the purge command all from the Graph API. It is also possible to use the Purview Portal to create the case, create the search, generate statistics/samples, and generate the item report. However, the final validation of the items that would be purged (by rerunning the statistics operation) and issuing the purge command must be done via the Graph API.

In this post, we will take a look at two examples, one involving an email message and one involving a Teams message. I will also show how to call the Graph APIs.

Purging email messages via the Graph API

In this example, I want to purge the following email incorrectly sent to Debra Berger. I also want to remove it from the sender's mailbox. Let's assume in this example I do not know exactly who sent and received the email, but I do know the subject and date it was sent on. In this example, I am going to use the modern eDiscovery Purview experience to create a new case where I will undertake some initial searches to locate the item. Once the case is created, I will create a search and give it a name.
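As noted above, the case and the search can also be created entirely through the Graph API rather than the portal. Below is a hedged sketch of the two POST request shapes; the endpoints follow the documented Graph v1.0 eDiscovery routes, but the `build_*` helper functions, display names, and the example KeyQL query are illustrative assumptions, not part of any official SDK:

```python
# Sketch: request payloads for creating an eDiscovery case and a search via
# Microsoft Graph. Endpoints follow the Graph v1.0 eDiscovery routes; the
# helper names and example values below are illustrative only.
GRAPH_EDISCOVERY = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases"

def build_create_case(display_name: str) -> dict:
    """Request shape for POST /security/cases/ediscoveryCases (create a case)."""
    return {"method": "POST", "url": GRAPH_EDISCOVERY,
            "body": {"displayName": display_name}}

def build_create_search(case_id: str, display_name: str, content_query: str) -> dict:
    """Request shape for POST .../{caseId}/searches (create a search with a KeyQL query)."""
    return {"method": "POST", "url": f"{GRAPH_EDISCOVERY}/{case_id}/searches",
            "body": {"displayName": display_name, "contentQuery": content_query}}

# Example usage with a placeholder case ID and an illustrative KeyQL query.
case_req = build_create_case("Search and Purge demo")
search_req = build_create_search("{ediscoveryCaseId}", "Mail removal",
                                 '(subject:"Example subject") AND (sent=2025-06-04)')
```

Send these with whatever authenticated HTTP client you already use for Graph; the portal workflow below remains the simpler path for the initial investigation.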
In this example, I do not know all the mailboxes where the email is present, so my initial search is going to be a tenant-wide search of all Exchange mailboxes, using the subject and date range as conditions to see which locations have hits.

Note: For scenarios where you know the location of the items, there is no requirement to do a tenant-wide search. You can target the search to the known locations instead.

I will then select Run Query and trigger a Statistics job to see which locations in the tenant have hits. For our purposes, we do not need to select Include categories, Include query keywords report, or Include partially indexed items. This will trigger a Generate statistics job and take you to the Statistics tab of the search. Once the job completes, it will display information on the total matches and the number of locations with hits.

To find out exactly which locations have hits, I can use the improved process reports to review more granular detail on the locations with hits. The report for the Generate statistics job can be found by selecting Process manager and then selecting the job. Once displayed, I can download the reports associated with this process by selecting Download report. The download is a ZIP file containing four different reports; to understand where I had hits, I can review the Locations report within the ZIP file.

If I open the Locations report and filter on the count column, I can see that in this instance I have two locations with hits, Admin and DebraB. I will use this to make my original search more targeted. It also gives me an opportunity to check that I am not going to exceed the limits on the number of items I can target for the purge per execution. Returning to our original search, I will remove All people and groups from my Data Sources and replace it with the two locations that had hits. I will re-run my Generate statistics job to ensure I am still getting the expected results.
As the numbers align and remain consistent, I will do a further check and generate samples from the search. This will allow me to review the items and confirm that they are the items I wish to purge. From the search query, I select Run query and then Sample. This will trigger a Generate sample job and take you to the Sample tab of the search. Once complete, I can review samples of the items returned by the search to confirm that these are the items I want to purge.

Now that I have confirmed, based on the sampling, that I have the items I want to purge, I want to generate a detailed item report of all items that match my search. To do this, I need to generate an export report for the search.

Note: Sampling alone may not return all the results impacted by the search; it only returns a sample of the items that match the query. To determine the full set of items that will be targeted, we need to generate the export report.

From the search, I can select Export to perform a direct export without having to add the data to a review set (available when premium features are enabled). Ensure you configure the following options on the export:
- Indexed items that match your search query
- Unselect all the options under Messages and related items from mailboxes and Exchange Online
- Export Item report only

If you want to manually review the items that would be impacted by the purge operation, you can optionally export the items alongside the item report for further review. You can also add the search to a review set to review the items that you are targeting. The benefit of adding to a review set is that it enables you to review the items whilst still keeping the data within the M365 service boundary.

Note: If you add to a review set, a copy of the items will remain in the review set until the case is deleted.

I can review the progress of the export job and download the report via the Process Manager.
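The export job above produces an item report (Items.csv inside the downloaded ZIP). Since a later step compares that report against the API's indexedItemCount, a quick scripted row count can help; here is a minimal sketch using only the Python standard library (the file path and report layout are assumptions; check the headers of your actual report):

```python
import csv

def count_report_items(path: str) -> int:
    """Count the data rows in an eDiscovery export Items.csv.
    Useful later for comparing against the indexedItemCount returned
    by the estimateStatistics operation before issuing a purge."""
    # utf-8-sig tolerates a BOM if the report was exported with one
    with open(path, newline="", encoding="utf-8-sig") as f:
        return sum(1 for _ in csv.DictReader(f))
```

If the count here and the indexedItemCount reported by the API disagree, stop and re-validate the search before issuing any purge command.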
Once I have downloaded the report, I can review the Items.csv file to check the items targeted by the search. It is at this stage that I must switch to using the Graph APIs to validate the actions that will be taken by the purge command and to issue the purge command itself. Not undertaking these additional validation steps can result in unintended purging of data.

There are two approaches you can use to interact with the Microsoft Graph eDiscovery APIs:
- Via Graph Explorer
- Via the MS.Graph PS module

For this example, I will show how to use Graph Explorer to make the relevant Graph API calls. For the Teams example, I will use the MS.Graph PS module. We are going to use the APIs to complete the following steps:
1. Trigger a statistics job via the API and review the results
2. Trigger the purge command

Graph Explorer can be accessed via the following link: Graph Explorer | Try Microsoft Graph APIs - Microsoft Graph

To start using Graph Explorer to work with the Microsoft Graph eDiscovery APIs, you first need to sign in with your admin account. You need to ensure that you consent to the required Microsoft Graph eDiscovery API permissions by selecting Consent to permissions. From the Permissions flyout, search for eDiscovery and select Consent for eDiscovery.ReadWrite.All. When prompted to consent to the permissions for Graph Explorer, select Accept. Optionally, you can consent on behalf of your organisation to suppress this step for others. Once complete, we can start making calls to the APIs via Graph Explorer.

To undertake the next steps, we need to capture some additional information, specifically the Case ID and the Search ID. We can get the Case ID from the Case Settings in the Purview Portal, recording the Id value shown on the Case details pane. If we return to Graph Explorer, we can use this Case ID to see all the searches within an eDiscovery case.
The structure of the HTTPS call is as follows:

GET https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/<caseID>/searches

List searches - Microsoft Graph v1.0 | Microsoft Learn

If we replace <caseID> with the Id we captured from the case settings, we can issue the API call to see all the searches within the case and find the required Search ID. When you issue the GET request in Graph Explorer, you can review the Response preview to find the Search ID we are looking for. Now that we have the Case ID and the Search ID, we can trigger an estimate by using the following Graph API call:

POST https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/estimateStatistics

ediscoverySearch: estimateStatistics - Microsoft Graph v1.0 | Microsoft Learn

Once you issue the POST command, you will receive an Accepted – 202 response. Now I need to use the following REST API call to review the status of the Estimate Statistics job in Graph Explorer:

GET https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/lastEstimateStatisticsOperation

List lastEstimateStatisticsOperation - Microsoft Graph v1.0 | Microsoft Learn

If the estimates job is not complete when you run the GET command, the Response preview contents will show the status as running. If the estimates job is complete, the Response preview contents will show you the results of the estimates job.

CRITICAL: Ensure that the indexedItemCount matches the number of items returned in the item report generated via the Portal. If it does not match, do not proceed to issuing the purge command.

Now that I have validated everything, I am ready to issue the purge command via the Graph API. I will use the following Graph API call.
POST https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/purgeData

ediscoverySearch: purgeData - Microsoft Graph v1.0 | Microsoft Learn

With this POST command, we also need to provide a request body to tell the API which areas we want to target (mailboxes or teamsMessages) and the purge type (recoverable or permanentlyDelete). As we are targeting email items, I will use mailboxes as the purgeAreas option. As I only want to remove the item from the user's mailbox view, I am going to use recoverable as the purgeType.

{
  "purgeType": "recoverable",
  "purgeAreas": "mailboxes"
}

Once you issue the POST command, you will receive an Accepted – 202 response. Once the command has been issued, it will proceed to purge the items that match the search criteria from the locations targeted. If I go back to my original example, we can now see the item has been removed from the user's mailbox. As it has been soft deleted, I can review the Recoverable Items folder from Outlook on the Web, where I will see that, for the user, it has been deleted pending clean-up from their mailbox.

Purging Teams messages via the Graph API

In this example, I want to purge the following Teams conversation between Debra, Adele, and the admin (CDX) from all participants' Teams clients. I am going to reuse the "HK016 – Search and Purge" case and create a new search called "Teams conversation removal". I add the three participants of the chat as Data sources to the search; I am then going to use a KeyQL condition to target the items I want to remove. In this example, I am using the following KeyQL:

(Participants=AdeleV@M365x00001337.OnMicrosoft.com AND Participants=DebraB@M365x00001337.OnMicrosoft.com AND Participants=admin@M365x00001337.onmicrosoft.com) AND (Kind=im OR Kind=microsoftteams) AND (Date=2025-06-04)

This looks for all Teams messages that include all three participants and were sent on the 4th of June 2025.
It is critical when targeting Teams messages that my query targets exactly the items I want to purge. With Teams messages (as opposed to email items), there are fewer options available to granularly target the items for purging.

Note: The new Identifier condition is not supported for purge operations. Using it can lead to unintended data being removed, and it should not be used as a condition in the search at this time.

If I were looking for a very specific phrase, I could further refine the query by using the Keyword condition to look for that specific Teams message. Once I have created my search, I am ready to generate both statistics and samples to enable me to validate that I am targeting the right items. My statistics job has returned 21 items, 7 from each location targeted. This aligns with the number of items within the Teams conversation. However, I am also going to validate that the samples I have generated match the content I want to purge, ensuring that I haven't inadvertently returned additional items I was not expecting.

Now that I have confirmed, based on the sampling, that the items returned look to be correct, I want to generate a detailed item report of all items that match my search. To do this, I need to generate an export report for the search. From the search, I can select Export to perform a direct export without having to add the data to a review set (available when premium features are enabled). Ensure you configure the following options on the export:
- Indexed items that match your search query
- Unselect all the options under Messages and related items from mailboxes and Exchange Online
- Export Item report only

Once I select Export, it will create a new export job; I can review the progress of the job and download the report via the Process Manager.
Once I have downloaded the report, I can review the Items.csv file to check the items targeted by the search that would be purged when I issue the purge call. Now that I have confirmed that the search is targeting the items I want to purge, it is at this stage that I must switch to using the Graph APIs. As discussed, there are two approaches you can use to interact with the Microsoft Graph eDiscovery APIs:
- Using Graph Explorer
- Using the MS.Graph PS module

For this example, I will show how to use the MS.Graph PS module to make the relevant Graph API calls. To understand how to use Graph Explorer to issue the purge command, please refer to the previous example for purging email messages. We are going to use the APIs to complete the following steps:
1. Trigger a statistics job via the API and review the results
2. Trigger the purge command

To install the MS.Graph PowerShell module, please refer to the following article: Install the Microsoft Graph PowerShell SDK | Microsoft Learn. To understand more about the MS.Graph PS module and how to get started, you can review the following article: Get started with the Microsoft Graph PowerShell SDK | Microsoft Learn.

Once the PowerShell module is installed, you can connect to the eDiscovery Graph APIs by running the following command:

Connect-MgGraph -Scopes "eDiscovery.ReadWrite.All"

You will be prompted to authenticate; once complete, you will be presented with the following banner. To undertake the next steps, we need to capture some additional information, specifically the Case ID and the Search ID. As before, we can get the Case ID from the Case Settings in the Purview Portal, recording the Id value shown on the Case details pane. Alternatively, we can use the following PowerShell command to find a list of cases and their IDs.
Get-MgSecurityCaseEdiscoveryCase | ft DisplayName,Id

List ediscoveryCases - Microsoft Graph v1.0 | Microsoft Learn

Once we have the ID of the case we want to execute the purge command from, we can run the following command to find the IDs of all the search jobs in the case:

Get-MgSecurityCaseEdiscoveryCaseSearch -EdiscoveryCaseId <ediscoveryCaseId> | ft DisplayName,Id,ContentQuery

List searches - Microsoft Graph v1.0 | Microsoft Learn

Now that we have both the Case ID and the Search ID, we can trigger the generate statistics job using the following command:

Invoke-MgEstimateSecurityCaseEdiscoveryCaseSearchStatistics -EdiscoveryCaseId <ediscoveryCaseId> -EdiscoverySearchId <ediscoverySearchId>

ediscoverySearch: estimateStatistics - Microsoft Graph v1.0 | Microsoft Learn

Now I need to use the following command to review the status of the Estimate Statistics job:

Get-MgSecurityCaseEdiscoveryCaseSearchLastEstimateStatisticsOperation -EdiscoveryCaseId <ediscoveryCaseId> -EdiscoverySearchId <ediscoverySearchId>

List lastEstimateStatisticsOperation - Microsoft Graph v1.0 | Microsoft Learn

If the estimates job is not complete when you run the command, the status will show as running. If the estimates job is complete, the status will show as succeeded and the number of hits will be shown in indexedItemCount.

CRITICAL: Ensure that the indexedItemCount matches the number of items returned in the item report generated via the Portal. If it does not match, do not proceed to issuing the purge command.

Now that I have validated everything, I am ready to issue the purge command via the Graph API. With this command, we need to provide a request body to tell the API which areas we want to target (mailboxes or teamsMessages) and the purge type (recoverable or permanentlyDelete). As we are targeting Teams items, I will use teamsMessages as the purgeAreas option.
Note: If you specify mailboxes, then only the compliance copy stored in the user mailbox will be purged, and not the item from the Teams service itself. This means the item will remain visible to the user in Teams and can no longer be purged. When purgeType is set to either recoverable or permanentlyDelete and purgeAreas is set to teamsMessages, the Teams messages are permanently deleted. In other words, either option will result in the permanent deletion of the items from Teams, and they cannot be recovered.

$params = @{
    purgeType = "recoverable"
    purgeAreas = "teamsMessages"
}

Once I have prepared my request body, I will issue the following command:

Clear-MgSecurityCaseEdiscoveryCaseSearchData -EdiscoveryCaseId $ediscoveryCaseId -EdiscoverySearchId $ediscoverySearchId -BodyParameter $params

ediscoverySearch: purgeData - Microsoft Graph v1.0 | Microsoft Learn

Once the command has been issued, it will proceed to purge the items that match the search criteria from the locations targeted. If I go back to my original example, we can now see the items have been removed from Teams.

Congratulations, you have made it to the end of the blog post. Hopefully you found it useful and it assists you in building your own operational processes for using the Graph API to issue search and purge actions.

Registration Open: Community-Led Purview Lightning Talks
Get ready for an electrifying event! The Microsoft Security Community proudly presents Purview Lightning Talks: an action-packed series featuring your fellow Microsoft users, partners, and passionate Microsoft Security community members of all sorts. Each 3-12 minute talk cuts straight to the chase, delivering expert insights, real-world use cases, and even a few game-changing tips and tricks. Don't miss this opportunity to learn, connect, and be inspired! Secure your spot now for the big day: April 30th at 8am Redmond Time. 💙 See agenda details below and follow this blog post (sign in and click the "follow" heart in the upper right) to receive notifications. We have more speaker details and community connection information coming soon!

AGENDA

The Day Offboarding Exposed Infinite Retention - Nikki Chapple nikkichapple
A real-world discovery of orphaned OneDrives and retention debt caused by retain-only policies, and how Adaptive Scopes help prevent it.
Topic: Data Lifecycle Management

Securing Data in the Age of AI - Julio Cesar Goncalves Vasconcelos
How Microsoft Purview enables organizations to accelerate AI adoption while maintaining security, compliance, and transparency.
Topic: Purview for AI

What's In My Compliance Manager Toolbox - Jerrad Dahlager j-dahl7
A practical walkthrough of using Compliance Manager to map controls, track improvements, and simplify multi-framework compliance.
Topic: Compliance Manager

Why You Should Create Your Own Sensitive Information Types (SITs) - Niels Jakobsen Niels_Jakobsen
An in-depth analysis of why built-in SITs are not one-size-fits-all, and how to tailor them for real enterprise needs.
Topic: Information Protection

Beyond eDiscovery – Purview DSI for Security Investigation - Susantha Silva
How to turn DLP alerts and Insider Risk signals into structured data investigations without jumping between portals.
Topic: Data Security (DSI)

Four Labels Max for Daily Use: Which Ones & Why?
- Romain Dalle Romain DALLE
A minimalist sensitivity labeling baseline designed for real-world adoption and usability.
Topic: Information Protection

Elevating Purview DLP with a Real-World Use Case - Victor Wingsing vicwingsing
Hardening Purview DLP beyond default configurations to close real-world data loss gaps.
Topic: Data Loss Prevention (DLP)

Stop, Think, Protect: Data Security in Real Life with Purview - Oliver Sahlmann Oliver Sahlmann
A traffic-light approach showing how simple labels and DLP policies still deliver meaningful protection.
Topic: Data Security

The Purview Label Engine: Automated Classification & Documentation - Michael Kirst Neshva MichaelKirst1970
A scalable framework for rolling out Microsoft Purview labels across global, multilingual enterprises.
Topic: Information Protection

Data-Driven Endpoint DLP with Advanced Hunting - Tatu Seppälä tatuseppala
Using KQL queries and usage patterns to refine endpoint DLP policies based on real behavior.
Topic: Data Loss Prevention (DLP)

Improving Discovery, Trust, and Reuse of Analytics with Purview Data Products - Craig Wyndowe CraigWyndowe
How Purview Governance Domains and Data Products create a trusted, reusable analytics ecosystem.
Topic: Data Governance

From Zero to First Signal: Insider Risk Management Prerequisites That Matter - Sathish Veerapandian Sathish Veerapandian
A focused look at the configurations required for Insider Risk Management to actually generate alerts.
Topic: Insider Risk Management

The Purview Hack No One Talks About: Container Sensitivity Labels - Nikki Chapple nikkichapple
How container sensitivity labels instantly fix oversharing for Teams, Groups, and SharePoint sites.
Topic: Information Protection

Using Purview to Prevent Oversharing with AI Services - Viktor Hedberg headburgh
How Information Protection and DLP prevent Copilot and AI services from exposing sensitive data.
Topic: Information Protection & DLP

How I Helped Customers Understand Their AI Usage (and Protect Data) - Bram de Jager Bram de Jager
Exposing risky AI usage patterns and protecting sensitive data entered into public AI tools.
Topic: Data Security Posture Management for AI

Bulk Sensitivity Label Removal with Microsoft Purview Information Protection (MPIP) - Zak Hepler
A practical demo on safely removing sensitivity labels at scale from SharePoint libraries.
Topic: Information Protection

Does M365 Support eDiscovery? (Mythbusting) - Julian Kusenberg Leprechaun91
A myth-busting session separating perception from reality in Microsoft 365 eDiscovery.
Topic: eDiscovery
Designing Authorization‑Aware AI Agents at Scale: Enforcing Runtime RBAC + ABAC with Approval Injection (JIT)

Microsoft Entra Agent Identity enables organizations to govern and manage AI agent identities in Copilot Studio, improving visibility and identity-level control. However, as enterprises deploy multiple autonomous AI agents, identity and OAuth permissions alone cannot answer a more critical question: "Should this action be executed now, by this agent, for this user, under the current business and regulatory context?"

This post introduces a reusable Authorization Fabric, combining a Policy Enforcement Point (PEP) and a Policy Decision Point (PDP), implemented as a Microsoft Entra‑protected endpoint using Azure Functions/App Service authentication. Every AI agent (Copilot Studio or AI Foundry/Semantic Kernel) calls this fabric before tool execution, receiving a deterministic runtime decision:

ALLOW / DENY / REQUIRE_APPROVAL / MASK

Who this is for
- Anyone building AI agents (Copilot Studio, AI Foundry/Semantic Kernel) that call tools, workflows, or APIs
- Organizations scaling to multiple agents that need consistent runtime controls
- Teams operating in regulated or security‑sensitive environments, where decisions must be deterministic and auditable

Why a V2? Identity is necessary, but runtime authorization is missing

Entra Agent Identity (preview) integrates Copilot Studio agents with Microsoft Entra so that newly created agents automatically get an Entra agent identity, manageable in the Entra admin center, with identity activity logged in Entra. That solves who the agent is and improves identity governance visibility. But multi-agent deployments introduce a new risk class: autonomous execution sprawl, where many agents operating with delegated privileges invoke the same backends independently.
OAuth and API permissions answer "can the agent call this API?" They do not answer "should the agent execute this action under business policy, compliance constraints, data boundaries, and approval thresholds?" This is where a runtime authorization decision plane becomes essential.

The pattern: Microsoft Entra‑protected Authorization Fabric (PEP + PDP)

Instead of embedding RBAC logic independently inside every agent, use a shared fabric:
- PEP (Policy Enforcement Point): gatekeeper invoked before any tool/action
- PDP (Policy Decision Point): evaluates RBAC + ABAC + approval policies
- Decision output: ALLOW / DENY / REQUIRE_APPROVAL / MASK

This Authorization Fabric functions as a shared enterprise control plane, decoupling authorization logic from individual agents and enforcing policies consistently across all autonomous execution paths.

Architecture (POC reference architecture)

Use a single runtime decision plane that sits between agents and tools. What's important here:
- Every agent (Copilot Studio or AI Foundry/SK) calls the Authorization Fabric API first
- The fabric is a protected endpoint (a Microsoft Entra‑protected endpoint is required)
- Tools (Graph/ERP/CRM/custom APIs) are invoked only after an ALLOW decision (or approval)

Trust boundaries enforced by this architecture:
- Agents never call business tools directly without a prior authorization decision
- The Authorization Fabric validates caller identity via Microsoft Entra
- Authorization decisions are centralized, consistent, and auditable
- Approval workflows act as a runtime "break-glass" control for high-impact actions

This ensures identity, intent, and execution are independently enforced, rather than implicitly trusted.

Runtime flow (Decision → Approval → Execution)

Here is the runtime sequence as a simple flow (you can keep your Mermaid diagram too).
```mermaid
flowchart TD
    START(["START"]) --> S1["[1] User Request"]
    S1 --> S2["[2] Agent Extracts Intent\n(action, resource, attributes)"]
    S2 --> S3["[3] Call /authorize\n(Entra protected)"]
    S3 --> S4
    subgraph S4["[4] PDP Evaluation"]
        ABAC["ABAC: Tenant · Region · Data Sensitivity"]
        RBAC["RBAC: Entitlement Check"]
        Threshold["Approval Threshold"]
        ABAC --> RBAC --> Threshold
    end
    S4 --> Decision{"[5] Decision?"}
    Decision -->|"ALLOW"| Exec["Execute Tool / API"]
    Decision -->|"MASK"| Masked["Execute with Masked Data"]
    Decision -->|"DENY"| Block["Block Request"]
    Decision -->|"REQUIRE_APPROVAL"| Approve{"[6] Approval Flow"}
    Approve -->|"Approved"| Exec
    Approve -->|"Rejected"| Block
    Exec --> Audit["[7] Audit & Telemetry"]
    Masked --> Audit
    Block --> Audit
    Audit --> ENDNODE(["END"])
    style START fill:#4A90D9,stroke:#333,color:#fff
    style ENDNODE fill:#4A90D9,stroke:#333,color:#fff
    style S1 fill:#5B5FC7,stroke:#333,color:#fff
    style S2 fill:#5B5FC7,stroke:#333,color:#fff
    style S3 fill:#E8A838,stroke:#333,color:#fff
    style S4 fill:#FFF3E0,stroke:#E8A838,stroke-width:2px
    style ABAC fill:#FCE4B2,stroke:#999
    style RBAC fill:#FCE4B2,stroke:#999
    style Threshold fill:#FCE4B2,stroke:#999
    style Decision fill:#fff,stroke:#333
    style Exec fill:#2ECC71,stroke:#333,color:#fff
    style Masked fill:#27AE60,stroke:#333,color:#fff
    style Block fill:#C0392B,stroke:#333,color:#fff
    style Approve fill:#F39C12,stroke:#333,color:#fff
    style Audit fill:#3498DB,stroke:#333,color:#fff
```

**Design principle:** No tool execution occurs until the Authorization Fabric returns ALLOW, or REQUIRE_APPROVAL is satisfied via an approval workflow.

## Where Power Automate fits (important for readers)

In most Copilot Studio implementations, agents call Power Automate (agent flows) as the practical integration layer for enterprise services and APIs. Copilot Studio supports "agent flows" as a way to extend agent capabilities with low-code workflows.
For this pattern, Power Automate typically:

- acquires/uses the right identity context for the call (depending on your tenant setup)
- calls the /authorize endpoint of the Authorization Fabric
- returns the decision payload to the agent for branching

Copilot Studio also supports calling REST endpoints directly using the HTTP Request node, including passing headers such as `Authorization: Bearer <token>`.

## Protected endpoint only: Securing the Authorization Fabric with Microsoft Entra

For this V2 pattern, the Authorization Fabric must be protected as a Microsoft Entra‑protected endpoint on Azure Functions/App Service (built‑in auth). Microsoft Learn provides the configuration guidance for enabling Microsoft Entra as the authentication provider for Azure App Service / Azure Functions.

### Step 1 — Create the Authorization Fabric API (Azure Function)

Expose an authorization endpoint over HTTP.

### Step 2 — Enable the Microsoft Entra‑protected endpoint on the Function App

In the Azure portal:

1. Function App → Authentication
2. Add identity provider → Microsoft
3. Choose Workforce configuration (enterprise tenant)
4. Set Require authentication for all requests

This ensures the Authorization Fabric is not callable without a valid Entra token.

### Step 3 — Optional hardening (recommended)

Depending on enterprise posture, layer:

- IP restrictions / private endpoints
- APIM in front of the Function for rate limiting, request normalization, and centralized logging

(For a POC, keep it minimal—add hardening incrementally.)

## Externalizing policy (so governance scales)

To make this pattern reusable across multiple agents, policies should not be hardcoded inside each agent. Instead, store policy definitions in a central policy store such as Cosmos DB (or an equivalent configuration store), and have the PDP load and evaluate policies at runtime.
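As a minimal sketch of runtime policy loading (using an in-memory list in place of Cosmos DB; the document shape, field names, and the `load_active_policy_pack` helper are illustrative assumptions, not a fixed schema):

```python
from datetime import datetime, timezone

# Hypothetical stand-in for a Cosmos DB container of PolicyPack documents.
# Two versions of the same pack coexist; the PDP picks the active one.
POLICY_STORE = [
    {"id": "finance-pack", "version": 1, "priority": 10,
     "effectiveFrom": "2025-01-01T00:00:00+00:00",
     "rules": {"CreatePO": {"FinanceAnalyst": 50_000}}},
    {"id": "finance-pack", "version": 2, "priority": 20,
     "effectiveFrom": "2025-06-01T00:00:00+00:00",
     "rules": {"CreatePO": {"FinanceAnalyst": 50_000,
                            "FinanceManager": 100_000}}},
]

def load_active_policy_pack(store, now=None):
    """Select the highest-priority pack that is already effective.

    Because the PDP resolves the active pack at runtime, publishing a
    new version applies to every agent at once -- no agent republish.
    """
    now = now or datetime.now(timezone.utc)
    effective = [p for p in store
                 if datetime.fromisoformat(p["effectiveFrom"]) <= now]
    if not effective:
        raise LookupError("no effective policy pack")
    return max(effective, key=lambda p: p["priority"])
```

Rollback then becomes a store operation (lower the priority or remove the document) rather than a code change in any agent.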
Why this matters:

- Policy changes apply across all agents instantly (no agent republish)
- Central governance, versioning, and rollback become possible
- Audit and reporting become consistent across environments

(For the POC, a single JSON document per policy pack in Cosmos DB is sufficient. For production, add versioning and staged rollout.) Store one PolicyPack JSON document per environment (dev/test/prod), and include `version`, `effectiveFrom`, and `priority` for safe rollout/rollback.

## Minimal decision contract (standard request / response)

To keep the fabric reusable across agents, standardize the request payload.

### Request payload (example)

The agent sends the caller and action context: user, agent, action, resource, and runtime attributes (such as amount, region, classification, and tenant).

### Decision response (deterministic)

The fabric returns one of ALLOW / DENY / REQUIRE_APPROVAL / MASK, plus the reason and the policy IDs that produced the decision.

## Example scenario (1 minute to understand)

Scenario: A user asks a Finance agent to create a Purchase Order for 70,000.

Even if the user has API permission and the agent can technically call the ERP API, runtime policy should:

- return REQUIRE_APPROVAL (threshold exceeded)
- trigger an approval workflow
- execute only after approval is granted

This is the difference between API access and authorized business execution.

## Sample Policy Model (RBAC + ABAC + Approval)

This POC policy model intentionally stays simple while demonstrating both coarse and fine-grained governance.
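To make the RBAC + ABAC + approval layers concrete, a single PolicyPack document for this POC might look like the following (all role names, limits, regions, and field names are illustrative assumptions, not a Microsoft schema):

```python
# Illustrative PolicyPack combining coarse-grained RBAC, fine-grained
# ABAC boundaries, and approval-injection thresholds in one document.
POLICY_PACK = {
    "id": "finance-pack",
    "version": 1,
    "rbac": {
        # role -> allowed actions, with per-action spend limits
        "FinanceAnalyst": {"CreatePO": {"maxAmount": 50_000}, "ViewVendor": {}},
        "FinanceManager": {"CreatePO": {"maxAmount": 100_000}, "ApprovePO": {}},
    },
    "abac": {
        # hard boundaries, evaluated before any entitlement check
        "allowedTenants": ["contoso"],
        "allowedRegions": ["EU", "US"],
        "maskClassifications": ["PII"],
    },
    "approval": {
        # above the role limit but under this ceiling -> REQUIRE_APPROVAL
        "CreatePO": {"approvalCeiling": 250_000, "approverRole": "FinanceManager"},
    },
}

def role_limit(pack, role, action):
    """Return the RBAC spend limit for a role/action, or None if not entitled."""
    entry = pack["rbac"].get(role, {}).get(action)
    return None if entry is None else entry.get("maxAmount")
```

Keeping all three layers in one versioned document is what lets the PDP evaluate them in a fixed order and lets governance review the whole policy in one place.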
### 1) Coarse‑grained RBAC (roles → actions)

- **FinanceAnalyst**
  - CreatePO up to 50,000
  - ViewVendor
- **FinanceManager**
  - CreatePO up to 100,000 and/or approve higher spend

### 2) Fine‑grained ABAC (conditions at runtime)

ABAC evaluates context such as region, classification, tenant boundary, and risk.

### 3) Approval injection (agent‑level JIT execution)

For higher-risk, high-impact actions, the fabric returns REQUIRE_APPROVAL rather than a hard deny (when appropriate).

## How policies should be evaluated (deterministic order)

To ensure predictable and auditable behavior, evaluate in a deterministic order:

1. Tenant isolation & residency (ABAC hard deny first)
2. Classification rules (deny or mask)
3. RBAC entitlement validation
4. Threshold/risk evaluation
5. Approval injection (JIT step-up)

This prevents approval workflows from bypassing foundational security boundaries such as tenant isolation or data sovereignty.

## Copilot Studio integration (enforcing runtime authorization)

Copilot Studio can call external REST APIs using the HTTP Request node, including passing headers such as `Authorization: Bearer <token>` and binding a response schema for branching logic. Copilot Studio also supports using flows with agents ("agent flows") to extend capabilities and orchestrate actions.

### Option A (recommended): Copilot Studio → Agent Flow (Power Automate) → Authorization Fabric

Why: Flows are a practical place to handle token acquisition patterns, approval orchestration, and standardized logging.

Topic flow:

1. Extract user intent + parameters
2. Call an agent flow that:
   - calls /authorize
   - returns the decision payload
3. Branch in the topic:
   - If ALLOW → proceed to the tool call
   - If REQUIRE_APPROVAL → trigger the approval flow; proceed only if approved
   - If DENY → stop and explain the policy reason

**Important:** Tool execution must never be reachable through an alternate topic path that bypasses the authorization check.
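The deterministic evaluation order described above can be expressed as a pure PDP function. This is a sketch under the sample policy model; the tenant name, role limits, and `reason` strings are illustrative assumptions:

```python
def evaluate(request):
    """Evaluate a decision request in the fixed order:
    tenant isolation -> classification -> RBAC -> threshold -> approval.
    Returns a payload shaped like the fabric's decision response.
    """
    limits = {"FinanceAnalyst": 50_000, "FinanceManager": 100_000}  # sample RBAC

    # 1) Tenant isolation & residency: hard deny, never overridable by approval
    if request["tenant"] != "contoso":
        return {"decision": "DENY", "reason": "tenant-isolation"}
    # 2) Classification rules: mask (or deny) sensitive data before entitlements
    if request.get("classification") == "PII":
        return {"decision": "MASK", "reason": "pii-classification"}
    # 3) RBAC entitlement: is this role entitled to this action at all?
    limit = limits.get(request["role"]) if request["action"] == "CreatePO" else None
    if limit is None:
        return {"decision": "DENY", "reason": "no-entitlement"}
    # 4) Threshold evaluation: within the role's spend limit -> allow
    if request["amount"] <= limit:
        return {"decision": "ALLOW", "reason": "within-threshold"}
    # 5) Approval injection: above the limit -> JIT step-up, not a hard deny
    return {"decision": "REQUIRE_APPROVAL", "reason": "threshold-exceeded"}
```

For the earlier scenario, a FinanceAnalyst creating a 70,000 Purchase Order passes tenant, classification, and entitlement checks, exceeds the 50,000 threshold, and therefore receives REQUIRE_APPROVAL. Because steps 1 and 2 return before step 5 is ever reached, an approval workflow can never override tenant isolation or data-sovereignty boundaries.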
### Option B: Direct HTTP Request node to the Authorization Fabric

Use the Send HTTP request node to call the authorization endpoint and branch using the response schema. This approach is clean, but token acquisition and secure, secretless authentication are often simpler when handled via a managed integration layer (flow + connector).

## AI Foundry / Semantic Kernel integration (tool invocation gate)

For Foundry/SK agents, the integration point is before tool execution. Semantic Kernel supports Azure AI agent patterns and tool integration, making it a natural place to enforce a pre-tool authorization check.

Pseudo-pattern:

1. Agent extracts intent + context
2. Calls the Authorization Fabric
3. Enforces the decision
4. Executes the tool only when allowed (or after approval)

## Telemetry & audit (what Security Architects will ask for)

Even the best policy engine is incomplete without audit trails. At minimum, log:

- agentId, userUPN, action, resource
- decision + reason + policyIds
- approval outcome (if any)
- correlationId for downstream tool execution

Why it matters: you now have a defensible answer to "Why did an autonomous agent execute this action?"

Security signal bonus: Denials, unusual approval rates, and repeated policy mismatches can also indicate prompt injection attempts, mis-scoped agents, or governance drift.

## What this enables (and why it scales)

With a shared Authorization Fabric, you can:

- Avoid duplicating authorization logic across agents
- Standardize decisions across Copilot Studio + Foundry agents
- Update governance once (policy change) and apply it everywhere
- Make autonomy safer without blocking productivity

## Closing: Identity gets you *who*. Runtime authorization gets you *whether/when/how*.

Copilot Studio can automatically create Entra agent identities (preview), improving identity governance and visibility for agents. But safe autonomy requires a runtime decision plane. Securing that plane as an Entra-protected endpoint is foundational for enterprise deployments.
In enterprise environments, autonomous execution without runtime authorization is equivalent to privileged access without PIM—powerful, fast, and operationally risky.