Security Baseline for M365 Apps for enterprise v2512
Security baseline for Microsoft 365 Apps for enterprise (v2512, December 2025)

Microsoft is pleased to announce that the latest security baseline for Microsoft 365 Apps for enterprise, version 2512, is now available as part of the Microsoft Security Compliance Toolkit. This release builds on previous baselines and introduces updated, security-hardened recommendations aligned with modern threat landscapes and the latest Office administrative templates. As with prior releases, this baseline is intended to help enterprise administrators quickly deploy Microsoft-recommended security configurations, reduce configuration drift, and ensure consistent protection across user environments. Download the updated baseline from the Microsoft Security Compliance Toolkit, test the recommended configurations, and implement them as appropriate.

This release introduces and updates several security-focused policies designed to strengthen protections in Microsoft Excel, PowerPoint, and core Microsoft 365 Apps components. These changes reflect evolving attacker techniques, partner feedback, and Microsoft's secure-by-design engineering standards. The recommended settings in this security baseline correspond to the administrative templates released in version 5516. Below are the updated settings included in this baseline:

Excel: File Block Includes External Link Files
Policy Path: User Configuration\Administrative Templates\Microsoft Excel 2016\Excel Options\Security\Trust Center\File Block Settings\File Block includes external link files
The baseline ensures that external links to workbooks blocked by File Block will no longer refresh. Attempts to create or update links to blocked files return an error. This prevents data ingestion from untrusted or potentially malicious sources. 
Block Insecure Protocols Across Microsoft 365 Apps
Policy Path: User Configuration\Administrative Templates\Microsoft Office 2016\Security Settings\Block Insecure Protocols
The baseline blocks all non-HTTPS protocols when opening documents, eliminating downgrade paths and unsafe connections. This aligns with Microsoft's broader effort to enforce TLS-secured communication across productivity and cloud services.

Block OLE Graph Functionality
Policy Path: User Configuration\Administrative Templates\Microsoft Office 2016\Security Settings\Block OLE Graph
This setting prevents MSGraph.Application and MSGraph.Chart (classic OLE Graph components) from executing. Microsoft 365 Apps will instead render a static image, mitigating a historically risky automation interface.

Block OrgChart Add-in
Policy Path: User Configuration\Administrative Templates\Microsoft Office 2016\Security Settings\Block OrgChart
The legacy OrgChart add-in is disabled, preventing execution and replacing output with an image. This reduces exposure to outdated automation frameworks while maintaining visual fidelity.

Restrict FPRPC Fallback in Microsoft 365 Apps
Policy Path: User Configuration\Administrative Templates\Microsoft Office 2016\Security Settings\Restrict Apps from FPRPC Fallback
The baseline disables the ability for Microsoft 365 Apps to fall back to FrontPage Server Extensions RPC (FPRPC), an aging protocol not designed for modern security requirements. Avoiding fallback ensures consistent use of modern, authenticated file-access methods.

PowerPoint: OLE Active Content Controls Updated
Policy Path: User Configuration\Administrative Templates\Microsoft PowerPoint 2016\PowerPoint Options\Security\OLE Active Content
This baseline enforces disabling interactive OLE actions, so no OLE content will be activated. The recommended baseline selection ensures secure-by-default OLE activation, reducing risk from embedded legacy objects. 
Deployment options for the baseline

IT admins can apply baseline settings in different ways. Depending on the method(s) chosen, different registry keys will be written, and they are observed in order of precedence: Office cloud policies override ADMX/Group Policies, which override end-user settings in the Trust Center.

Cloud policies may be deployed with the Office cloud policy service for policies in HKCU. Cloud policies apply to a user on any device accessing files in Office apps with their AAD account. In the Office cloud policy service, you can create a filter on the Area column to display the current security baselines, and within each policy's context pane the recommended baseline setting is set by default. Learn more about the Office cloud policy service.

ADMX policies may be deployed with Microsoft Intune for both HKCU and HKLM policies. These settings are written to the same place as Group Policy, but managed from the cloud. There are two methods to create and deploy policy configurations: administrative templates or the settings catalog.

Group Policy may be deployed with on-premises AD DS to deploy Group Policy Objects (GPOs) to users and computers. The downloadable baseline package includes importable GPOs, a script to apply the GPOs to local policy, a script to import the GPOs into Active Directory Group Policy, an updated custom administrative template (SecGuide.ADMX/L) file, all the recommended settings in spreadsheet form, and a Policy Analyzer rules file.

GPOs included in the baseline

Most organizations can implement the baseline's recommended settings without any problems. However, a few settings will cause operational issues for some organizations. We've broken out related groups of such settings into their own GPOs to make it easier for organizations to add or remove these restrictions as a set. The local-policy script (Baseline-LocalInstall.ps1) offers command-line options to control whether these GPOs are installed. 
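The precedence order described above (Office cloud policy over ADMX/Group Policy over the end user's Trust Center setting) can be illustrated with a small sketch. The setting values here ("Blocked"/"Allowed") are hypothetical placeholders, not real registry data.

```python
# Illustrative sketch of Office policy precedence resolution.
# A cloud policy wins over an ADMX/Group Policy value, which wins
# over the end user's Trust Center setting. Values are hypothetical.

def effective_setting(cloud=None, admx_gpo=None, trust_center=None):
    """Return the first configured value in precedence order."""
    for value in (cloud, admx_gpo, trust_center):
        if value is not None:
            return value
    return None  # setting not configured at any layer

# A cloud policy overrides a conflicting GPO and user setting:
effective_setting(cloud="Blocked", admx_gpo="Allowed", trust_center="Allowed")  # -> "Blocked"
# With no cloud policy, the GPO value applies:
effective_setting(admx_gpo="Allowed", trust_center="Blocked")  # -> "Allowed"
```

This is only a mental model of the precedence described in the post; the actual resolution is performed by the Office apps themselves based on where the registry keys are written.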
"MSFT Microsoft 365 Apps v2512" GPO set includes “Computer” and “User” GPOs that represent the “core” settings that should be trouble free, and each of these potentially challenging GPOs: “DDE Block - User” is a User Configuration GPO that blocks using DDE to search for existing DDE server processes or to start new ones. “Legacy File Block - User” is a User Configuration GPO that prevents Office applications from opening or saving legacy file formats. "Legacy JScript Block - Computer" disables the legacy JScript execution for websites in the Internet Zone and Restricted Sites Zone. “Require Macro Signing - User” is a User Configuration GPO that disables unsigned macros in each of the Office applications. If you have questions or issues, please let us know via the Security Baseline Community or this post. Related: Learn about Microsoft Baseline Security Mode5.7KViews0likes5CommentsFeature Request: Extend Security Copilot inclusion (M365 E5) to M365 A5 Education tenants
Background

At Ignite 2025, Microsoft announced that Security Copilot is included for all Microsoft 365 E5 customers, with a phased rollout starting November 18, 2025. This is a significant step forward for security operations.

The gap

Microsoft 365 A5 for Education is the academic equivalent of E5 — it includes the same core security stack: Microsoft Defender, Entra, Intune, and Purview. However, the Security Copilot inclusion explicitly covers only commercial E5 customers. There is no public roadmap or timeline for extending this benefit to A5 education tenants.

Why this matters

Education institutions face the same cybersecurity threats as commercial organizations — often with fewer dedicated security resources. The A5 license was positioned as the premium security offering for education. Excluding it from Security Copilot inclusion creates an inequity between commercial and education customers holding functionally equivalent license tiers.

Request

We would like Microsoft to:
Confirm whether Security Copilot inclusion will be extended to M365 A5 Education tenants
If yes, provide an indicative timeline
If no, clarify the rationale and what alternative paths exist for education customers

Are other EDU admins in the same situation? Would appreciate any upvotes or comments to help raise visibility with the product team.

Hybrid Join Lifecycle Model
Microsoft Entra hybrid join is still a common reality in enterprise environments. For many organizations, it remains necessary because legacy applications still rely on Active Directory machine authentication, Group Policy is still in use, and on-premises operational dependencies have not been fully retired. At the same time, the long-term direction for endpoint identity is increasingly cloud-native. That creates an important architectural question: should hybrid join be treated as a permanent device state, or as a lifecycle stage in a broader modernization journey?

In practice, hybrid join is often discussed as a binary condition: the device is either hybrid joined or it is not. But from an operational perspective, that view is too limited. In real enterprise environments, hybrid join behaves much more like a lifecycle. A device moves through provisioning, registration, trust establishment, management attachment, steady-state operation, recovery, retirement, and eventually transition. That distinction matters because most hybrid join issues do not fail loudly. They usually appear as stale objects, pending registrations, broken trust, inconsistent management ownership, and environments that remain temporarily hybrid far longer than intended.

Why a lifecycle model is useful

Treating hybrid join as a lifecycle helps explain why so many organizations struggle with it even when the initial implementation appears technically correct. The challenge is usually not the first successful join. The challenge is everything that happens around it:
Provisioning quality
Trust validation
Management ownership
Drift detection
Stale object cleanup
Exit criteria for transition to Entra join

Without that lifecycle view, hybrid join often becomes a static design decision with no clear operational model behind it.

The eight phases

1. Provisioning
The lifecycle starts when the device is built, imaged, or provisioned. This stage is more important than it looks. 
If the device is provisioned from a contaminated image, or if cloning and snapshot practices are not handled carefully, later identity issues are often inherited rather than newly created. Provisioning should be treated as an identity-controlled event, not just an OS deployment task.

2. Registration
The device becomes known to Microsoft Entra. This is where many environments confuse visibility with readiness. A device object may exist in the cloud, but that does not automatically mean the hybrid identity state is healthy or operationally usable.

3. Trust Establishment
This is the point where hybrid join becomes real. A device should not be considered fully onboarded until both sides of trust are present and healthy. In operational terms, this means the device is not only registered, but also capable of supporting the expected sign-in and identity flows.

4. Management Attachment
Once trust exists, governance becomes the next question. Many organizations still balance Group Policy, Configuration Manager, Intune, and legacy application dependencies at the same time. That is exactly why hybrid join often persists. But if management ownership is not clearly defined, organizations end up with overlapping policy planes, inconsistent control, and unclear accountability.

5. Operational Steady State
Hybrid join does not stop at successful registration. The device must remain healthy over time, and that means monitoring trust health, registration state, token health, line-of-sight to required infrastructure, and management consistency. A device that was healthy once is not necessarily healthy now.

6. Recovery
Every real environment eventually encounters drift. Pending states, broken trust, orphaned records, reimaged devices, and inconsistent registration scenarios should not be treated as unusual edge cases. They should be expected and handled with formal recovery playbooks. Recovery is not an exception to the lifecycle. It is part of the lifecycle.

7. Retirement
Retirement is one of the weakest areas in many hybrid environments. Devices are replaced or decommissioned, but their identity records often remain behind. That leads to stale objects, inventory noise, and administrative confusion. A proper lifecycle model should include a controlled retirement sequence rather than ad hoc cleanup.

8. Transition
This is the most important strategic phase. The key question is no longer whether a device can remain hybrid joined, but whether there is still a justified reason to keep it there. Hybrid join may still be necessary in many environments today, but in many cases it should be treated as transitional architecture rather than the target end state.

Practical takeaway

Looking at hybrid join as a lifecycle creates a more useful framework for architecture decisions, operational ownership, troubleshooting, directory hygiene, governance, and transition planning toward Microsoft Entra join. That is the real value of this model. It does not replace technical implementation guidance, but it helps organizations think more clearly about why hybrid join exists, how it should be operated, and when it should eventually be retired.

Final thought

Hybrid join is still relevant in many enterprise environments, but it should not automatically be treated as a default destination. In many cases, it works best when it is managed as a lifecycle-driven operating model with defined phases, controls, and exit criteria. That makes it easier to stabilize operations today, while also creating a clearer path toward a more cloud-native endpoint identity model tomorrow.

Full article: https://www.modernendpoint.tech/hybrid-join-lifecycle-model

Why External Users Can’t Open Encrypted Attachments in Certain Conditions & How to Fix It Securely
When Conditional Access policies enforce MFA across all cloud apps and include external users, encrypted attachments may require additional considerations. This post explains why.

This behavior applies only in environments where all of the following are true:
Microsoft Purview encryption is used for emails and attachments
A Conditional Access (CA) policy is configured to require MFA, apply to all cloud applications, and include guest or external users

The Situation: Email Opens, Attachment Doesn’t

When an email is encrypted using Microsoft Purview sensitivity labels or Information Rights Management (IRM), any attached Office document automatically inherits the encryption. This inheritance is intentional, enforced by the service, and ensures consistent protection of sensitive content. It is mandatory and cannot be disabled. So far, so good. But here’s where things break for external recipients.

The Hidden Dependency: Identity & Conditional Access

Reading an encrypted email and opening an encrypted attachment are two different flows. External users can usually read encrypted emails by authenticating through a one-time passcode (OTP), a Microsoft personal account, or their own organization’s identity. However, encrypted attachments use Microsoft Rights Management Services (RMS) — and RMS expects an identity the sender’s tenant can evaluate.

If your organization has a global Conditional Access policy enforcing MFA for all users, applied to all cloud apps, external users can get blocked even after successful email decryption. This commonly results in errors like:

“This account does not exist in the sender’s tenant…”
AADSTS90072: The external user account does not exist in our tenant and cannot access the Microsoft Office application. The account needs to be added as an external user in the tenant or use an alternative authentication method. 
When It Works (and Why It Often Doesn’t)

External access to encrypted attachments works only when one of these conditions is met:
The sender trusts the recipient’s tenant MFA via cross-tenant access (MFA trust)
The recipient already exists as a guest account in the sender’s tenant

In real-world scenarios, these conditions often fail:
External recipients use consumer or non-Entra identities
Recipient domains are not predictable
Guest onboarding does not scale
Cross-tenant trust is intentionally restricted

In such cases, Conditional Access policies designed for internal users can affect RMS evaluation for external users. So what’s the alternative?

The Practical, Secure Alternative

When the two standard access conditions (cross-tenant trust or guest presence) cannot be met, you can refine Conditional Access evaluation without weakening encryption. The goal is not to remove MFA, but to ensure it is applied appropriately based on identity type and access path. In this scenario:
MFA remains enforced for all internal users, including access to Microsoft Rights Management Services (RMS)
MFA remains enforced for external users across cloud applications other than RMS

The Key Idea

Let encryption stay strong, but stop blocking external RMS authentication. 
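One way to express this key idea is as a Microsoft Graph conditionalAccessPolicy request body scoped to guests, with RMS excluded from the application scope. The display name and report-only state below are placeholder choices of mine, and 00000012-0000-0000-c000-000000000000 is the well-known Rights Management Services application ID; treat this as a sketch to adapt and test, not a ready-to-deploy policy.

```python
import json

# Sketch of a Graph API conditionalAccessPolicy body that keeps MFA for
# guests on all cloud apps except Microsoft Rights Management Services.
RMS_APP_ID = "00000012-0000-0000-c000-000000000000"  # well-known RMS app ID

guest_mfa_policy = {
    "displayName": "MFA for guests (RMS excluded)",   # hypothetical name
    "state": "enabledForReportingButNotEnforced",     # validate before enforcing
    "conditions": {
        "users": {"includeUsers": ["GuestsOrExternalUsers"]},
        "applications": {
            "includeApplications": ["All"],
            "excludeApplications": [RMS_APP_ID],
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

# Body for: POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
print(json.dumps(guest_mfa_policy, indent=2))
```

Starting in report-only mode lets you confirm that only external RMS authentication is being carved out before the policy is enforced.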
This is achieved by:
Keeping the existing Conditional Access policy that enforces MFA for all internal users across all cloud applications, including RMS
Excluding guest and external users from that internal-only policy
Deploying a separate Conditional Access policy scoped to guest and external users that continues enforcing MFA for external users where supported, but explicitly excludes Microsoft Rights Management Services (RMS) from evaluation

RMS can be excluded from the external-user policy by specifying the following application (client) ID:
RMS App ID: 00000012-0000-0000-c000-000000000000

Why This Is Still Secure

This approach:
✅ Keeps email and attachment encryption fully intact
✅ Leaves the internal security posture unchanged
✅ Keeps external users protected by MFA where applicable
✅ Allows external users to authenticate using supported methods
✅ Avoids over-trusting external tenants
✅ Scales for large, unpredictable recipient sets

Final Takeaway

Encrypted attachment access is governed by identity recognition and policy design, not by email encryption alone. By aligning Conditional Access with how encrypted content is evaluated, organizations can enable secure external collaboration while maintaining strong protection standards.

Search and Purge using Microsoft Graph eDiscovery API
Welcome back to the series of blogs covering search and purge in Microsoft Purview eDiscovery! If you are new to this series, please first visit the earlier post in the series: Search and Purge workflow in the new modern eDiscovery experience. Also, please ensure you have fully read the Microsoft Learn documentation on this topic, as I will not be covering some of the steps in full (permissions, releasing holds, all limitations): Find and delete Microsoft Teams chat messages in eDiscovery | Microsoft Learn

As a reminder, for E5/G5 customers and cases with premium features enabled, you must use the Graph API to execute the purge operation. With the eDiscovery Graph API, you have the option to create the case, create a search, generate statistics, create an item report, and issue the purge command all from the Graph API. It is also possible to use the Purview portal to create the case, create the search, generate statistics/samples, and generate the item report. However, the final validation of the items that would be purged (by rerunning the statistics operation) and the issuing of the purge command must be done via the Graph API. In this post, we will look at two examples, one involving an email message and one involving a Teams message. I will also show how to call the Graph APIs.

Purging email messages via the Graph API

In this example, I want to purge the following email incorrectly sent to Debra Berger. I also want to remove it from the sender's mailbox as well. Let’s assume in this example I do not know exactly who sent and received the email, but I do know the subject and date it was sent on. In this example, I am going to use the modern eDiscovery Purview experience to create a new case where I will undertake some initial searches to locate the item. Once the case is created, I will create a search and give it a name. 
In this example, I do not know all the mailboxes where the email is present, so my initial search is going to be a tenant-wide search of all Exchange mailboxes, using the subject and date range as conditions to see which locations have hits.

Note: For scenarios where you know the location of the items there is no requirement to do a tenant-wide search. You can target the search to the known locations instead.

I will then select Run Query and trigger a Statistics job to see which locations in the tenant have hits. For our purposes, we do not need to select Include categories, Include query keywords report or Include partially indexed items. This will trigger a Generate statistics job and take you to the Statistics tab of the search. Once the job completes it will display information on the total matches and the number of locations with hits.

To find out exactly which locations have hits, I can use the improved process reports to review more granular detail on the locations with hits. The report for the Generate statistics job can be found by selecting Process manager and then selecting the job. Once displayed, I can download the reports associated with this process by selecting Download report. Once we have downloaded the report for the process, we get a ZIP file containing four different reports; to understand where I had hits I can review the Locations report within the ZIP file.

If I open the Locations report and filter on the count column, I can see in this instance I have two locations with hits, Admin and DebraB. I will use this to make my original search more targeted. It also gives me an opportunity to check that I am not going to exceed the limits on the number of items I can target for the purge per execution. Returning to our original search, I will remove All people and groups from my Data sources and replace it with the two locations I had hits from. I will re-run my Generate statistics job to ensure I am still getting the expected results. 
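Filtering the Locations report on the count column can also be scripted rather than done in Excel. The column names below ("Location", "Count") are assumptions about the report layout, so check them against your actual export before relying on this.

```python
import csv

def locations_with_hits(locations_csv_path: str):
    """Return (location, count) pairs for locations with at least one hit.
    Column names ("Location", "Count") are assumed; adjust to match the
    actual Locations report in the downloaded ZIP."""
    hits = []
    with open(locations_csv_path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            count = int(row.get("Count", 0) or 0)
            if count > 0:
                hits.append((row.get("Location"), count))
    return hits
```

This gives you the shortlist of locations to plug back into the search as Data sources, and an easy way to confirm you stay under the per-execution purge limits.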
As the numbers align and remain consistent, I will do a further check and generate samples from the search. This will allow me to review the items to confirm that they are the items I wish to purge. From the search query I select Run query and select Sample. This will trigger a Generate sample job and take you to the Sample tab of the search. Once complete, I can review samples of the items returned by the search to confirm whether these are the items I want to purge.

Now that I have confirmed, based on the sampling, that I have the items I want to purge, I want to generate a detailed item report of all items that match my search. To do this I need to generate an export report for the search.

Note: Sampling alone may not return all the results impacted by the search; it only returns a sample of the items that match the query. To determine the full set of items that will be targeted we need to generate the export report.

From the search I can select Export to perform a direct export without having to add the data to a review set (available when premium features are enabled). Ensure you configure the following options on the export:
Indexed items that match your search query
Unselect all the options under Messages and related items from mailboxes and Exchange Online
Export Item report only

If you want to manually review the items that would be impacted by the purge operation, you can optionally export the items alongside the item report for further review. You can also add the search to a review set to review the items that you are targeting. The benefit of adding to the review set is that it enables you to review the items whilst still keeping the data within the M365 service boundary.

Note: If you add to a review set, a copy of the items will remain in the review set until the case is deleted.

I can review the progress of the export job and download the report via the Process Manager. 
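Later in the workflow, the indexedItemCount from the estimate statistics operation must match this item report before the purge is issued. A small helper along these lines could automate that comparison; the exact layout of Items.csv may differ from what is assumed here, so treat it as a sketch.

```python
import csv

def report_item_count(items_csv_path: str) -> int:
    """Count data rows in the exported Items.csv (header row excluded)."""
    with open(items_csv_path, newline="", encoding="utf-8-sig") as f:
        return sum(1 for _ in csv.reader(f)) - 1

def safe_to_purge(indexed_item_count: int, items_csv_path: str) -> bool:
    """Only proceed when the API-reported estimate matches the item
    report exactly, as the post's CRITICAL check requires."""
    return indexed_item_count == report_item_count(items_csv_path)
```

Gating your purge script on a check like this turns the manual "do the numbers match?" step into an enforced precondition.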
Once I have downloaded the report, I can review the Items.csv file to check the items targeted by the search. It is at this stage that I must switch to using the Graph APIs to validate the actions that will be taken by the purge command and to issue the purge command itself. Not undertaking these additional validation steps can result in an unintended purge of data.

There are two approaches you can use to interact with the Microsoft Graph eDiscovery APIs:
Via Graph Explorer
Via the MS.Graph PS module

For this example, I will show how to use the Graph Explorer to make the relevant Graph API calls. For the Teams example, I will use the MS.Graph PS module. We are going to use the APIs to complete the following steps:
Trigger a statistics job via the API and review the results
Trigger the purge command

The Graph Explorer can be accessed via the following link: Graph Explorer | Try Microsoft Graph APIs - Microsoft Graph

To start using the Graph Explorer to work with the Microsoft Graph eDiscovery APIs, you first need to sign in with your admin account. You need to ensure that you consent to the required Microsoft Graph eDiscovery API permissions by selecting Consent to permissions. From the Permissions flyout, search for eDiscovery and select Consent for eDiscovery.ReadWrite.All. When prompted to consent to the permissions for the Graph Explorer, select Accept. Optionally you can consent on behalf of your organisation to suppress this step for others. Once complete, we can start making calls to the APIs via Graph Explorer.

To undertake the next steps we need to capture some additional information, specifically the Case ID and the Search ID. We can get the case ID from the Case Settings in the Purview portal, recording the Id value shown on the Case details pane. If we return to the Graph Explorer we can use this Case ID to see all the searches within an eDiscovery case. 
The structure of the HTTPS call is as follows:

GET https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/<caseID>/searches
List searches - Microsoft Graph v1.0 | Microsoft Learn

If we replace <caseID> with the Id we captured from the case settings, we can issue the API call to list all the searches within the case and find the required search ID. When you issue the GET request in Graph Explorer you can review the Response preview to find the search ID we are looking for. Now that we have the case ID and the search ID we can trigger an estimate by using the following Graph API call.

POST https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/estimateStatistics
ediscoverySearch: estimateStatistics - Microsoft Graph v1.0 | Microsoft Learn

Once you issue the POST command you will receive an Accepted – 202 response. Now I need to use the following REST API call to review the status of the estimate statistics job in Graph Explorer.

GET https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/lastEstimateStatisticsOperation
List lastEstimateStatisticsOperation - Microsoft Graph v1.0 | Microsoft Learn

If the estimate job is not complete when you run the GET command, the Response preview will show the status as running. If the estimate job is complete, the Response preview will show you the results of the job.

CRITICAL: Ensure that the indexedItemCount matches the number of items returned in the item report generated via the portal. If it does not match, do not proceed to issuing the purge command.

Now that I have validated everything, I am ready to issue the purge command via the Graph API. I will use the following Graph API call. 
POST https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases/{ediscoveryCaseId}/searches/{ediscoverySearchId}/purgeData
ediscoverySearch: purgeData - Microsoft Graph v1.0 | Microsoft Learn

With this POST command we also need to provide a request body to tell the API which areas we want to target (mailboxes or teamsMessages) and the purge type (recoverable or permanentlyDelete). As we are targeting email items I will use mailboxes as the purgeAreas option. As I only want to remove the item from the user's mailbox view, I am going to use recoverable as the purgeType.

{ "purgeType": "recoverable", "purgeAreas": "mailboxes" }

Once you issue the POST command you will receive an Accepted – 202 response. Once the command has been issued it will proceed to purge the items that match the search criteria from the locations targeted. If I go back to my original example, we can now see the item has been removed from the user's mailbox. As it has been soft deleted, I can review the Recoverable Items folder from Outlook on the web, where I can see that, for the user, it has now been deleted pending clean-up from their mailbox.

Purging Teams messages via the Graph API

In this example, I want to purge the following Teams conversation between Debra, Adele and the admin (CDX) from all participants' Teams clients. I am going to reuse the “HK016 – Search and Purge” case to create a new search called “Teams conversation removal”. I add the three participants of the chat as Data sources to the search; I am then going to use a KeyQL condition to target the items I want to remove. In this example I am using the following KeyQL:

(Participants=AdeleV@M365x00001337.OnMicrosoft.com AND Participants=DebraB@M365x00001337.OnMicrosoft.com AND Participants=admin@M365x00001337.onmicrosoft.com) AND (Kind=im OR Kind=microsoftteams) AND (Date=2025-06-04)

This looks for all Teams messages that contain all three participants, sent on the 4th of June 2025. 
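Assembling the KeyQL above programmatically helps avoid typos when several participants are involved. This helper only formats the query string shown in the post; it does not validate the addresses or submit anything.

```python
def teams_purge_query(participants, date):
    """Build a KeyQL query matching Teams messages between the given
    participants on the given date (YYYY-MM-DD), mirroring the query
    used in the example above."""
    people = " AND ".join(f"Participants={p}" for p in participants)
    return f"({people}) AND (Kind=im OR Kind=microsoftteams) AND (Date={date})"

q = teams_purge_query(
    ["AdeleV@M365x00001337.OnMicrosoft.com",
     "DebraB@M365x00001337.OnMicrosoft.com",
     "admin@M365x00001337.onmicrosoft.com"],
    "2025-06-04",
)
print(q)
```

Paste the generated string into the search's KeyQL condition, then validate it with statistics and samples exactly as described below.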
It is critical when targeting Teams messages that my query targets exactly the items I want to purge. With Teams messages (as opposed to email items) there are fewer options available to granularly target the items for purging.

Note: The use of the new Identifier condition is not supported for purge operations. Using it can lead to unintended data being removed, so it should not be used as a condition in the search at this time.

If I were looking for a very specific phrase, I could further refine the query by using the Keyword condition to look for that specific Teams message. Once I have created my search I am ready to generate both statistics and samples to enable me to validate that I am targeting the right items.

My statistics job has returned 21 items, 7 from each location targeted. This aligns with the number of items within the Teams conversation. However, I am also going to validate that the samples I have generated match the content I want to purge, ensuring that I haven't inadvertently returned additional items I was not expecting. Now that I have confirmed, based on the sampling, that the items returned look correct, I want to generate a detailed item report of all items that match my search. To do this I need to generate an export report for the search.

From the search I can select Export to perform a direct export without having to add the data to a review set (available when premium features are enabled). Ensure you configure the following options on the export:
Indexed items that match your search query
Unselect all the options under Messages and related items from mailboxes and Exchange Online
Export Item report only

Once I select Export it will create a new export job; I can review the progress of the job and download the report via the Process Manager. 
Once I have downloaded the report, I can review the Items.csv file to check the items targeted by the search and that would be purged when I issue the purge call. Now that I have confirmed that the search is targeting the items I want to purge it is at this stage I must switch to using the Graph APIs. As discussed, there are two approaches you can use to interact with the Microsoft Graph eDiscovery APIs: Using Graph Explorer Using the MS.Graph PS module For this example, I will show how to use the MS.Graph PS Module to make the relevant Graph API calls. To understand how to use the Graph Explorer to issue the purge command please refer to the previous example for purging email messages. We are going to use the APIs to complete the following steps: Trigger a statistics job via the API and review the results Trigger the purge command To install the MS.Graph PowerShell module please refer to the following article. Install the Microsoft Graph PowerShell SDK | Microsoft Learn To understand more about the MS.Graph PS module and how to get started you can review the following article. Get started with the Microsoft Graph PowerShell SDK | Microsoft Learn Once the PowerShell module is installed you can connect to the eDiscovery Graph APIs by running the following command. connect-mgGraph -Scopes "ediscovery.ReadWrite.All" You will be prompted to authenticate, once complete you will be presented with the following banner. To undertake the next steps we need to capture some additional information, specifically the Case ID and the Search ID. As before we can get the case ID from the Case Settings in the Purview Portal, recording the Id value shown on the Case details pane. Alternatively we can use the following PowerShell command to find a list of cases and their ID. 
Get-MgSecurityCaseEdiscoveryCase | ft DisplayName, Id

List ediscoveryCases - Microsoft Graph v1.0 | Microsoft Learn

Once we have the ID of the case we want to execute the purge command from, we can run the following command to find the IDs of all the search jobs in the case:

Get-MgSecurityCaseEdiscoveryCaseSearch -EdiscoveryCaseId <ediscoveryCaseId> | ft DisplayName, Id, ContentQuery

List searches - Microsoft Graph v1.0 | Microsoft Learn

Now that we have both the Case ID and the Search ID, we can trigger the generate-statistics job using the following command:

Invoke-MgEstimateSecurityCaseEdiscoveryCaseSearchStatistics -EdiscoveryCaseId <ediscoveryCaseId> -EdiscoverySearchId <ediscoverySearchId>

ediscoverySearch: estimateStatistics - Microsoft Graph v1.0 | Microsoft Learn

Next, I use the following command to review the status of the estimate statistics job:

Get-MgSecurityCaseEdiscoveryCaseSearchLastEstimateStatisticsOperation -EdiscoveryCaseId <ediscoveryCaseId> -EdiscoverySearchId <ediscoverySearchId>

List lastEstimateStatisticsOperation - Microsoft Graph v1.0 | Microsoft Learn

If the estimate job is not complete when you run the command, the status will show as running. Once it is complete, the status will show as succeeded and the number of hits will appear in indexedItemCount. CRITICAL: Ensure that the indexedItemCount matches the number of items in the item report generated via the portal. If it does not match, do not proceed to issuing the purge command. Now that I have validated everything, I am ready to issue the purge command via the Graph API. With this command we need to provide a request body telling the API which areas to target (mailboxes or teamsMessages) and the purge type (recoverable or permanentlyDelete). As we are targeting Teams items, I will use teamsMessages as the purgeAreas option.
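The cross-check between the portal's item report and the API estimate can also be scripted before purging. Below is a minimal sketch; the CSV column names and helper names are illustrative assumptions, not a documented export format:

```python
import csv
import io

def report_item_count(items_csv_text: str) -> int:
    """Count the data rows in an exported Items.csv report."""
    reader = csv.DictReader(io.StringIO(items_csv_text))
    return sum(1 for _ in reader)

def safe_to_purge(indexed_item_count: int, items_csv_text: str) -> bool:
    """Proceed only when the estimate's indexedItemCount matches the report."""
    return indexed_item_count == report_item_count(items_csv_text)

# Hypothetical report with three matching items
sample = "Item Id,Subject\n1,a\n2,b\n3,c\n"
print(safe_to_purge(3, sample))   # True
print(safe_to_purge(21, sample))  # False
```

If the two counts diverge, re-run the estimate and re-export the report before considering the purge.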
Note: If you specify mailboxes, only the compliance copy stored in the user mailbox will be purged, not the item in the Teams service itself. The item will remain visible to the user in Teams and can no longer be purged. When purgeType is set to either recoverable or permanentlyDelete and purgeAreas is set to teamsMessages, the Teams messages are permanently deleted. In other words, either option results in the permanent deletion of the items from Teams, and they cannot be recovered.

$params = @{
    purgeType  = "recoverable"
    purgeAreas = "teamsMessages"
}

Once I have prepared my request body, I issue the following command:

Clear-MgSecurityCaseEdiscoveryCaseSearchData -EdiscoveryCaseId $ediscoveryCaseId -EdiscoverySearchId $ediscoverySearchId -BodyParameter $params

ediscoverySearch: purgeData - Microsoft Graph v1.0 | Microsoft Learn

Once the command has been issued, it will purge the items that match the search criteria from the targeted locations. Going back to my original example, we can now see the items have been removed from Teams. Congratulations, you have made it to the end of the blog post. Hopefully you found it useful and it assists you in building your own operational processes for using the Graph API to issue search and purge actions.

Authorization and Governance for AI Agents: Runtime Authorization Beyond Identity at Scale
Designing Authorization‑Aware AI Agents at Scale: Enforcing Runtime RBAC + ABAC with Approval Injection (JIT)

Microsoft Entra Agent Identity enables organizations to govern and manage AI agent identities in Copilot Studio, improving visibility and identity-level control. However, as enterprises deploy multiple autonomous AI agents, identity and OAuth permissions alone cannot answer a more critical question: "Should this action be executed now, by this agent, for this user, under the current business and regulatory context?"

This post introduces a reusable Authorization Fabric—combining a Policy Enforcement Point (PEP) and Policy Decision Point (PDP)—implemented as a Microsoft Entra‑protected endpoint using Azure Functions/App Service authentication. Every AI agent (Copilot Studio or AI Foundry/Semantic Kernel) calls this fabric before tool execution, receiving a deterministic runtime decision: ALLOW / DENY / REQUIRE_APPROVAL / MASK.

Who this is for

- Anyone building AI agents (Copilot Studio, AI Foundry/Semantic Kernel) that call tools, workflows, or APIs
- Organizations scaling to multiple agents and needing consistent runtime controls
- Teams operating in regulated or security‑sensitive environments, where decisions must be deterministic and auditable

Why a V2? Identity is necessary—runtime authorization is missing

Entra Agent Identity (preview) integrates Copilot Studio agents with Microsoft Entra so that newly created agents automatically get an Entra agent identity, manageable in the Entra admin center, with identity activity logged in Entra. That solves who the agent is and improves identity governance visibility. But multi-agent deployments introduce a new risk class: autonomous execution sprawl—many agents, operating with delegated privileges, invoking the same backends independently.
OAuth and API permissions answer "can the agent call this API?" They do not answer "should the agent execute this action under business policy, compliance constraints, data boundaries, and approval thresholds?" This is where a runtime authorization decision plane becomes essential.

The pattern: Microsoft Entra‑Protected Authorization Fabric (PEP + PDP)

Instead of embedding RBAC logic independently inside every agent, use a shared fabric:

- PEP (Policy Enforcement Point): gatekeeper invoked before any tool/action
- PDP (Policy Decision Point): evaluates RBAC + ABAC + approval policies
- Decision output: ALLOW / DENY / REQUIRE_APPROVAL / MASK

This Authorization Fabric functions as a shared enterprise control plane, decoupling authorization logic from individual agents and enforcing policies consistently across all autonomous execution paths.

Architecture (POC reference architecture)

Use a single runtime decision plane that sits between agents and tools. What's important here:

- Every agent (Copilot Studio or AI Foundry/SK) calls the Authorization Fabric API first
- The fabric is a protected endpoint (a Microsoft Entra‑protected endpoint is required)
- Tools (Graph/ERP/CRM/custom APIs) are invoked only after an ALLOW decision (or approval)

Trust boundaries enforced by this architecture:

- Agents never call business tools directly without a prior authorization decision
- The Authorization Fabric validates caller identity via Microsoft Entra
- Authorization decisions are centralized, consistent, and auditable
- Approval workflows act as a runtime "break-glass" control for high-impact actions

This ensures identity, intent, and execution are independently enforced, rather than implicitly trusted.

Runtime flow (Decision → Approval → Execution)

Here is the runtime sequence as a simple flow:
```mermaid
flowchart TD
    START(["START"]) --> S1["[1] User Request"]
    S1 --> S2["[2] Agent Extracts Intent\n(action, resource, attributes)"]
    S2 --> S3["[3] Call /authorize\n(Entra protected)"]
    S3 --> S4
    subgraph S4["[4] PDP Evaluation"]
        ABAC["ABAC: Tenant · Region · Data Sensitivity"]
        RBAC["RBAC: Entitlement Check"]
        Threshold["Approval Threshold"]
        ABAC --> RBAC --> Threshold
    end
    S4 --> Decision{"[5] Decision?"}
    Decision -->|"ALLOW"| Exec["Execute Tool / API"]
    Decision -->|"MASK"| Masked["Execute with Masked Data"]
    Decision -->|"DENY"| Block["Block Request"]
    Decision -->|"REQUIRE_APPROVAL"| Approve{"[6] Approval Flow"}
    Approve -->|"Approved"| Exec
    Approve -->|"Rejected"| Block
    Exec --> Audit["[7] Audit & Telemetry"]
    Masked --> Audit
    Block --> Audit
    Audit --> ENDNODE(["END"])
    style START fill:#4A90D9,stroke:#333,color:#fff
    style ENDNODE fill:#4A90D9,stroke:#333,color:#fff
    style S1 fill:#5B5FC7,stroke:#333,color:#fff
    style S2 fill:#5B5FC7,stroke:#333,color:#fff
    style S3 fill:#E8A838,stroke:#333,color:#fff
    style S4 fill:#FFF3E0,stroke:#E8A838,stroke-width:2px
    style ABAC fill:#FCE4B2,stroke:#999
    style RBAC fill:#FCE4B2,stroke:#999
    style Threshold fill:#FCE4B2,stroke:#999
    style Decision fill:#fff,stroke:#333
    style Exec fill:#2ECC71,stroke:#333,color:#fff
    style Masked fill:#27AE60,stroke:#333,color:#fff
    style Block fill:#C0392B,stroke:#333,color:#fff
    style Approve fill:#F39C12,stroke:#333,color:#fff
    style Audit fill:#3498DB,stroke:#333,color:#fff
```

Design principle: no tool execution occurs until the Authorization Fabric returns ALLOW, or REQUIRE_APPROVAL is satisfied via an approval workflow.

Where Power Automate fits

In most Copilot Studio implementations, Power Automate (via agent flows) is the practical integration layer that calls enterprise services and APIs. Copilot Studio supports "agent flows" as a way to extend agent capabilities with low-code workflows.
For this pattern, Power Automate typically:

- acquires/uses the right identity context for the call (depending on your tenant setup),
- calls the /authorize endpoint of the Authorization Fabric, and
- returns the decision payload to the agent for branching.

Copilot Studio also supports calling REST endpoints directly using the HTTP Request node, including passing headers such as Authorization: Bearer <token>.

Protected endpoint only: Securing the Authorization Fabric with Microsoft Entra

For this V2 pattern, the Authorization Fabric must be protected as a Microsoft Entra‑protected endpoint on Azure Functions/App Service (built‑in auth). Microsoft Learn provides configuration guidance for enabling Microsoft Entra as the authentication provider for Azure App Service / Azure Functions.

Step 1 — Create the Authorization Fabric API (Azure Function)

Expose an authorization endpoint: HTTP

Step 2 — Enable the Microsoft Entra‑protected endpoint on the Function App

In the Azure portal:

1. Function App → Authentication
2. Add identity provider → Microsoft
3. Choose Workforce configuration (enterprise tenant)
4. Set Require authentication for all requests

This ensures the Authorization Fabric is not callable without a valid Entra token.

Step 3 — Optional hardening (recommended)

Depending on enterprise posture, layer:

- IP restrictions / private endpoints
- APIM in front of the Function for rate limiting, request normalization, and centralized logging

(For a POC, keep it minimal and add hardening incrementally.)

Externalizing policy (so governance scales)

To make this pattern reusable across multiple agents, policies should not be hardcoded inside each agent. Instead, store policy definitions in a central policy store such as Cosmos DB (or an equivalent configuration store), and have the PDP load and evaluate policies at runtime.
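A PolicyPack document in such a policy store might look like the following. Every field name and value here is an illustrative assumption for the POC, not a Microsoft-defined schema:

```json
{
  "policyPackId": "finance-prod",
  "version": "1.0.0",
  "effectiveFrom": "2025-01-01T00:00:00Z",
  "priority": 100,
  "rules": [
    {
      "id": "po-threshold",
      "role": "FinanceAnalyst",
      "action": "CreatePO",
      "maxAmount": 50000,
      "onExceed": "REQUIRE_APPROVAL"
    }
  ]
}
```

The version, effectiveFrom, and priority fields support the staged rollout and rollback described below.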
Why this matters:

- Policy changes apply across all agents instantly (no agent republish)
- Central governance + versioning + rollback become possible
- Audit and reporting become consistent across environments

(For the POC, a single JSON document per policy pack in Cosmos DB is sufficient. For production, add versioning and staged rollout.) Store one PolicyPack JSON document per environment (dev/test/prod). Include version, effectiveFrom, and priority for safe rollout/rollback.

Minimal decision contract (standard request / response)

To keep the fabric reusable across agents, standardize the request payload.

Request payload (example)

Decision response (deterministic)

Example scenario (1 minute to understand)

Scenario: A user asks a Finance agent to create a Purchase Order for 70,000. Even if the user has API permission and the agent can technically call the ERP API, runtime policy should:

- return REQUIRE_APPROVAL (threshold exceeded)
- trigger an approval workflow
- execute only after approval is granted

This is the difference between API access and authorized business execution.

Sample Policy Model (RBAC + ABAC + Approval)

This POC policy model intentionally stays simple while demonstrating both coarse and fine-grained governance.
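The PO scenario above can be sketched as a deterministic policy check. This is an illustration only: the role names, tenant name, and 50,000 threshold are assumptions matching the sample policy model, and a real PDP would load these rules from the policy store:

```python
from dataclasses import dataclass

APPROVAL_THRESHOLD = 50_000  # hypothetical FinanceAnalyst limit

@dataclass
class AuthzRequest:
    tenant: str
    region: str
    classification: str   # e.g. "internal", "restricted"
    roles: list
    action: str
    amount: float

def evaluate(req: AuthzRequest, home_tenant: str = "contoso") -> str:
    # 1. Tenant isolation & residency: ABAC hard deny first
    if req.tenant != home_tenant:
        return "DENY"
    # 2. Classification rules: deny or mask
    if req.classification == "restricted":
        return "MASK"
    # 3. RBAC entitlement validation
    if req.action == "CreatePO" and not ({"FinanceAnalyst", "FinanceManager"} & set(req.roles)):
        return "DENY"
    # 4. Threshold/risk evaluation, then 5. approval injection (JIT step-up)
    if req.action == "CreatePO" and req.amount > APPROVAL_THRESHOLD:
        return "REQUIRE_APPROVAL"
    return "ALLOW"

print(evaluate(AuthzRequest("contoso", "eu", "internal", ["FinanceAnalyst"], "CreatePO", 70_000)))
# REQUIRE_APPROVAL
```

A 40,000 request from the same user would return ALLOW, while any request from a foreign tenant is denied before roles are even considered.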
1) Coarse‑grained RBAC (roles → actions)

- FinanceAnalyst: CreatePO up to 50,000; ViewVendor
- FinanceManager: CreatePO up to 100,000 and/or approve higher spend

2) Fine‑grained ABAC (conditions at runtime)

ABAC evaluates context such as region, classification, tenant boundary, and risk:

3) Approval injection (agent‑level JIT execution)

For higher-risk/high-impact actions, the fabric returns REQUIRE_APPROVAL rather than a hard deny (when appropriate):

How policies should be evaluated (deterministic order)

To ensure predictable and auditable behavior, evaluate in a deterministic order:

1. Tenant isolation & residency (ABAC hard deny first)
2. Classification rules (deny or mask)
3. RBAC entitlement validation
4. Threshold/risk evaluation
5. Approval injection (JIT step-up)

This prevents approval workflows from bypassing foundational security boundaries such as tenant isolation or data sovereignty.

Copilot Studio integration (enforcing runtime authorization)

Copilot Studio can call external REST APIs using the HTTP Request node, including passing headers such as Authorization: Bearer <token> and binding a response schema for branching logic. Copilot Studio also supports using flows with agents ("agent flows") to extend capabilities and orchestrate actions.

Option A (recommended): Copilot Studio → Agent Flow (Power Automate) → Authorization Fabric

Why: flows are a practical place to handle token acquisition patterns, approval orchestration, and standardized logging.

Topic flow:

1. Extract user intent + parameters
2. Call an agent flow that calls /authorize and returns the decision payload
3. Branch in the topic:
   - If ALLOW → proceed to tool call
   - If REQUIRE_APPROVAL → trigger approval flow; proceed only if approved
   - If DENY → stop and explain the policy reason

Important: tool execution must never be reachable through an alternate topic path that bypasses the authorization check.
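As an illustration, the decision payload exchanged with /authorize for topic branching might look like this. The field names are assumptions for the sketch; align them with whatever contract your fabric standardizes:

```json
{
  "request": {
    "agentId": "finance-agent-01",
    "userUPN": "alice@contoso.com",
    "action": "CreatePO",
    "resource": "erp:purchase-orders",
    "attributes": { "amount": 70000, "region": "EU", "classification": "internal" }
  },
  "response": {
    "decision": "REQUIRE_APPROVAL",
    "reason": "Amount exceeds FinanceAnalyst threshold (50000)",
    "policyIds": [ "po-threshold" ],
    "correlationId": "b6f1c2e0-0000-0000-0000-000000000000"
  }
}
```

Returning reason, policyIds, and correlationId alongside the decision is what makes the branching auditable downstream.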
Option B: Direct HTTP Request node to Authorization Fabric

Use the Send HTTP request node to call the authorization endpoint and branch using the response schema. This approach is clean, but token acquisition and secure secretless authentication are often simpler when handled via a managed integration layer (flow + connector).

AI Foundry / Semantic Kernel integration (tool invocation gate)

For Foundry/SK agents, the integration point is before tool execution. Semantic Kernel supports Azure AI agent patterns and tool integration, making it a natural place to enforce a pre-tool authorization check. Pseudo-pattern:

1. Agent extracts intent + context
2. Calls the Authorization Fabric
3. Enforces the decision
4. Executes the tool only when allowed (or after approval)

Telemetry & audit (what Security Architects will ask for)

Even the best policy engine is incomplete without audit trails. At minimum, log:

- agentId, userUPN, action, resource
- decision + reason + policyIds
- approval outcome (if any)
- correlationId for downstream tool execution

Why it matters: you now have a defensible answer to "Why did an autonomous agent execute this action?"

Security signal bonus: denials, unusual approval rates, and repeated policy mismatches can also indicate prompt injection attempts, mis-scoped agents, or governance drift.

What this enables (and why it scales)

With a shared Authorization Fabric, you can:

- Avoid duplicating authorization logic across agents
- Standardize decisions across Copilot Studio + Foundry agents
- Update governance once (policy change) and apply it everywhere
- Make autonomy safer without blocking productivity

Closing: Identity gets you who. Runtime authorization gets you whether/when/how.

Copilot Studio can automatically create Entra agent identities (preview), improving identity governance and visibility for agents. But safe autonomy requires a runtime decision plane. Securing that plane as an Entra-protected endpoint is foundational for enterprise deployments.
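To make the pre-tool gate from the Semantic Kernel pseudo-pattern concrete, here is a minimal, framework-agnostic sketch. The in-process decision function stands in for the HTTPS call to the fabric's /authorize endpoint, and all names are illustrative:

```python
import functools

class AuthorizationDenied(Exception):
    """Raised when the Authorization Fabric does not return ALLOW."""

def pdp_decision(user: str, action: str) -> str:
    # Stand-in for the call to the Authorization Fabric; a real
    # implementation would POST the request payload to /authorize.
    return "ALLOW" if action == "ViewVendor" else "DENY"

def authorized_tool(action: str, decide=pdp_decision):
    """Wrap a tool so it executes only after an ALLOW decision."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(user, *args, **kwargs):
            decision = decide(user, action)
            if decision != "ALLOW":
                raise AuthorizationDenied(f"{action}: {decision}")
            return fn(user, *args, **kwargs)
        return inner
    return wrap

@authorized_tool("ViewVendor")
def view_vendor(user, vendor_id):
    return f"vendor {vendor_id} details for {user}"

print(view_vendor("alice@contoso.com", "V-100"))
# vendor V-100 details for alice@contoso.com
```

The same decorator can gate every registered tool, which keeps the "never reachable without a decision" invariant in one place instead of scattered across agents.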
In enterprise environments, autonomous execution without runtime authorization is equivalent to privileged access without PIM—powerful, fast, and operationally risky.

Introducing the Entra Helpdesk Portal: A Zero-Trust, Dockerized ITSM Interface for Tier 1 Support
Hello everyone, If you manage identity in Microsoft Entra ID at an enterprise scale, you know the struggle: delegating day-to-day operational tasks (like password resets, session revocations, and MFA management) to Tier 1 and Tier 2 support staff is inherently risky. The native Azure/Entra portal is incredibly powerful, but it’s complex and lacks mandatory ITSM enforcement. Giving a helpdesk technician the "Helpdesk Administrator" role grants them access to a portal where a single misclick can cause a major headache. To solve this, I’ve developed the Entra Helpdesk Portal (Community Edition)—an open-source, containerized application designed to act as an isolated "airlock" between your support team and your Entra ID tenant. Why This Adds Value to Your Tenant Instead of having technicians log into the Azure portal, they log into this clean, Material Design web interface. It leverages a backend Service Principal (using MSAL and the Graph API) to execute commands on their behalf. Strict Zero Trust: Logging in via Microsoft SSO isn’t enough. The app intercepts the token and checks the user’s UPN against a hardcoded ALLOWED_ADMINS whitelist in your Docker environment file. Mandatory ITSM Ticketing: You cannot enforce ticketing in the native Azure Portal. In this app, every write action prompts a modal requiring a valid ticket number (e.g., INC-123456). Local Audit Logging: All actions, along with the actor, timestamp, and ticket number, are written to an immutable local SQLite database (audit.db) inside the container volume. Performance: Heavy Graph API reads are cached in-memory with a Time-To-Live (TTL) and smart invalidation. Searching for users or loading Enterprise Apps takes milliseconds. What Can It Do? Identity Lifecycle: Create users, auto-generate secure 16-character passwords, revoke sign-in sessions, reset passwords, and delete specific MFA methods to force re-registration. 
Diagnostics: View a user's last 5 sign-in logs, translating Microsoft error codes into plain English. Group Management: Add/remove members of Security and M365 groups. App/SPN Management: Lazy-load raw requiredResourceAccess Graph API payloads to audit app permissions, and instantly rotate client secrets. Universal Restore: Paste the Object ID of any soft-deleted item into the Recycle Bin tab to instantly resurrect it.

How Easy Is It to Set Up?

I wanted this to be universally deployable, so I compiled it as a multi-architecture Docker image (linux/amd64 and linux/arm64). It will run on a massive Windows Server or a simple Raspberry Pi. Setup takes less than 5 minutes:

1. Create an App Registration in Entra ID and grant it the necessary Graph API Application Permissions (e.g., User.ReadWrite.All, AuditLog.Read.All).
2. Create a docker-compose.yml file.
3. Define your feature toggles. You can literally turn off features (like User Deletion) by setting an environment variable to false.

```yaml
version: '3.8'
services:
  helpdesk-portal:
    image: jahmed22/entra-helpdesk:latest
    container_name: entra_helpdesk
    restart: unless-stopped
    ports:
      - "8000:8000"
    environment:
      # CORE IDENTITY
      - TENANT_ID=your_tenant_id_here
      - CLIENT_ID=your_client_id_here
      - CLIENT_SECRET=your_client_secret_here
      - BASE_URL=https://entradesk.jahmed.cloud
      - ALLOWED_ADMINS=email address removed for privacy reasons
      # CUSTOMIZATION & FEATURE FLAGS
      - APP_NAME=Entra Help Desk
      - ENABLE_PASSWORD_RESET=true
      - ENABLE_MFA_MANAGEMENT=true
      - ENABLE_USER_DELETION=false
      - ENABLE_GROUP_MANAGEMENT=true
      - ENABLE_APP_MANAGEMENT=true
    volumes:
      - entra_helpdesk_data:/app/static/uploads
      - entra_helpdesk_db:/app
volumes:
  entra_helpdesk_data:
  entra_helpdesk_db:
```

4. Run `docker compose up -d` and you are done!

I built this to give back to the community and help secure our Tier 1 operations.
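As an aside on how ENABLE_* toggles like the ones above are typically consumed in an app, here is a hypothetical helper (this is an illustration, not the portal's actual code):

```python
import os

TRUTHY = {"1", "true", "yes", "on"}

def feature_enabled(name: str, default: bool = False) -> bool:
    """Read an ENABLE_* environment toggle, defaulting safely when unset."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in TRUTHY

# Mirror two of the compose-file settings
os.environ["ENABLE_USER_DELETION"] = "false"
os.environ["ENABLE_MFA_MANAGEMENT"] = "true"
print(feature_enabled("ENABLE_USER_DELETION"))   # False
print(feature_enabled("ENABLE_MFA_MANAGEMENT"))  # True
```

Defaulting risky features (like user deletion) to off when the variable is missing keeps misconfigured deployments on the safe side.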
If you are interested in testing it out in your dev tenants or want to see the full architecture breakdown, you can read the complete documentation on my website here. I'd love to hear your thoughts, feedback, or any feature requests you might have!

Authorization and Identity Governance Inside AI Agents
Designing Authorization‑Aware AI Agents: Enforcing Microsoft Entra ID RBAC in Copilot Studio

As AI agents move from experimentation to enterprise execution, authorization becomes the defining line between innovation and risk. AI agents are rapidly evolving from experimental assistants into enterprise operators—retrieving user data, triggering workflows, and invoking protected APIs. While many early implementations rely on prompt‑level instructions to control access, regulated enterprise environments require authorization to be enforced by identity systems, not language models. This article presents a production‑ready, identity‑first architecture for building authorization‑aware AI agents using Copilot Studio, Power Automate, Microsoft Entra ID, and Microsoft Graph, ensuring every agent action executes strictly within the requesting user's permissions.

Why Prompt‑Level Security Is Not Enough

Large Language Models interpret intent—they do not enforce policy. Even the most carefully written prompts cannot:

- Validate Microsoft Entra ID group or role membership
- Reliably distinguish delegated user identity from application identity
- Enforce deterministic access decisions
- Produce auditable authorization outcomes

Relying on prompts for authorization introduces silent security failures, over‑privileged access, and compliance gaps—particularly in Financial Services, Healthcare, and other regulated industries. Authorization is not a reasoning problem. It is an identity enforcement problem.
Common Authorization Anti‑Patterns in AI Agents

The following patterns frequently appear in early AI agent implementations and should be avoided in enterprise environments:

- Hard‑coded role or group checks embedded in prompts
- Trusting group names passed as plain‑text parameters
- Using application permissions for user‑initiated actions
- Skipping verification of the user's Entra ID identity
- Lacking an auditable authorization decision point

These approaches may work in demos, but they do not survive security reviews, compliance audits, or real‑world misuse scenarios.

Authorization‑Aware Agent Architecture

In an authorization‑aware design, the agent never decides access. Authorization is enforced externally, by identity‑aware workflows that sit outside the language model's reasoning boundary.

High‑Level Flow

1. The Copilot Studio agent receives a user request
2. The agent passes the User Principal Name (UPN) and intended action
3. A Power Automate flow validates permissions using Microsoft Entra ID via Microsoft Graph
4. Only authorized requests are allowed to proceed
5. Unauthorized requests fail fast with a deterministic outcome

Authorization‑aware Copilot Studio architecture enforces Entra ID RBAC before executing any business action. The agent orchestrates intent. Identity systems enforce access.

Enforcing Entra ID RBAC with Microsoft Graph

Power Automate acts as the authorization enforcement layer:

1. Resolve user identity from the supplied UPN
2. Retrieve group or role memberships using Microsoft Graph
3. Normalize and compare memberships against approved RBAC groups
4. Explicitly deny execution when authorization fails

This keeps authorization logic centralized, deterministic, auditable, and independent of the AI model.

Reference Implementation: Power Automate RBAC Enforcement Flow

The following import‑ready Power Automate cloud flow demonstrates a secure RBAC enforcement pattern for Copilot Studio agents. It validates Microsoft Entra ID group membership before allowing any business action.
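The normalize-and-compare step at the heart of this pattern can also be sketched outside Power Automate. This illustrative helper mirrors the flow's toLower-then-contains logic against a Graph memberOf response:

```python
def is_authorized(member_of: list, approved_groups: set) -> bool:
    """Normalize displayName casing before comparing against approved RBAC groups."""
    names = {g.get("displayName", "").strip().lower() for g in member_of}
    return bool(names & {g.lower() for g in approved_groups})

# Shape of the Graph /memberOf?$select=displayName response's "value" array
graph_response = [{"displayName": "AI-Authorized-Users"}, {"displayName": "All Staff"}]
print(is_authorized(graph_response, {"ai-authorized-users"}))  # True
print(is_authorized(graph_response, {"finance-admins"}))       # False
```

Normalizing both sides avoids the plain-text group-name trust problem listed in the anti-patterns above: the comparison happens against directory data, not caller-supplied strings.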
Scenario

- Trigger: User‑initiated agent action
- Identity model: Delegated user identity
- Input: userUPN, requestedAction
- Outcome: Authorized or denied based on Entra ID RBAC

```json
{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
  "contentVersion": "1.0.0.0",
  "triggers": {
    "Copilot_Request": {
      "type": "Request",
      "kind": "Http",
      "inputs": {
        "schema": {
          "type": "object",
          "properties": {
            "userUPN": { "type": "string" },
            "requestedAction": { "type": "string" }
          },
          "required": [ "userUPN" ]
        }
      }
    }
  },
  "actions": {
    "Get_User_Groups": {
      "type": "Http",
      "inputs": {
        "method": "GET",
        "uri": "https://graph.microsoft.com/v1.0/users/@{triggerBody()?['userUPN']}/memberOf?$select=displayName",
        "authentication": { "type": "ManagedServiceIdentity" }
      }
    },
    "Normalize_Group_Names": {
      "type": "Select",
      "inputs": {
        "from": "@body('Get_User_Groups')?['value']",
        "select": "@toLower(item()?['displayName'])"
      },
      "runAfter": { "Get_User_Groups": [ "Succeeded" ] }
    },
    "Check_Authorization": {
      "type": "Condition",
      "expression": "@contains(body('Normalize_Group_Names'), 'ai-authorized-users')",
      "runAfter": { "Normalize_Group_Names": [ "Succeeded" ] },
      "actions": {
        "Authorized_Action": {
          "type": "Compose",
          "inputs": "User authorized via Entra ID RBAC"
        }
      },
      "else": {
        "actions": {
          "Access_Denied": {
            "type": "Terminate",
            "inputs": {
              "status": "Failed",
              "message": "Access denied. User not authorized via Entra ID RBAC."
            }
          }
        }
      }
    }
  }
}
```

Note that the Select action maps each group to a lowercased display name string, so the contains() check in the condition compares strings against strings. This pattern enforces authorization outside the agent, aligns with Zero Trust principles, and creates a clear audit boundary suitable for enterprise and regulated environments.
Flow Diagram: Agent Integrated with RBAC Authorization Flow and Sample Prompt

Execution: Delegated vs Application Permissions

| Scenario | Recommended permission model |
| --- | --- |
| User‑initiated agent actions | Delegated permissions |
| Background or system automation | Application permissions |

Using delegated permissions ensures agent execution remains strictly within the requesting user's identity boundary.

Auditing and Compliance Benefits

- Deterministic and explainable authorization decisions
- Centralized enforcement aligned with identity governance
- Clear audit trails for security and compliance reviews
- Readiness for SOC, ISO, PCI, and FSI assessments

Enterprise Security Takeaways

- Authorization belongs in Microsoft Entra ID, not prompts
- AI agents must respect enterprise identity boundaries
- Copilot Studio + Power Automate + Microsoft Graph enable secure‑by‑design AI agents

By treating AI agents as first‑class enterprise actors and enforcing authorization at the identity layer, organizations can scale AI adoption with confidence, trust, and compliance.

Sent email cannot be viewed by sender when encrypted.
Emails that are sent in Outlook are not viewable in the Sent Items folder of the Outlook desktop app, but work fine in the Outlook web app. The error that shows is the same as the one recipients see when they cannot view the email in their email client, but clicking the link still does not let them view the email. This seems to affect only certain members of the organisation, but there does not seem to be a pattern (software/admin rights/plugins/etc.). Does anyone have any ideas as to what is likely to cause this? Thanks.