How to Properly Configure IIS Reverse Proxy for ASP.NET Core Applications Secured with Entra ID
If you’ve ever worked on an ASP.NET Core application protected with Entra ID behind a reverse proxy, you might have encountered an issue where the backend server URL appears as the redirect URI instead of the IIS Reverse Proxy URL. This happens because ASP.NET Core generates the redirect URI from the backend server’s hostname. You can work around this by manually setting the redirect URI to the ARR/IIS Reverse Proxy endpoint in your code as follows:

```csharp
builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"));

builder.Services.Configure<OpenIdConnectOptions>(options =>
{
    options.Events.OnRedirectToIdentityProvider = context =>
    {
        // Hard-code the proxy endpoint as the redirect URI
        context.ProtocolMessage.RedirectUri = "https://arr.local.lab";
        return Task.CompletedTask;
    };
});
```

However, this isn’t the most elegant solution, especially in environments where configuration changes are frequent. Using Forwarded Headers offers a cleaner, more scalable approach, and in this post I’ll walk you through how to resolve the issue with it.

ASP.NET Core provides a Forwarded Headers Middleware, which reads headers such as X-Forwarded-Host and X-Forwarded-Proto and uses them to replace values in HttpContext such as HttpContext.Request.Host and HttpContext.Request.Scheme. By passing these headers appropriately from the IIS Reverse Proxy, we can resolve the redirect URI issue. However, an IIS reverse proxy or server farm doesn’t send the X-Forwarded-Host and X-Forwarded-Proto headers by default. You’ll need to configure IIS to include these headers using the URL Rewrite feature. To do so, follow these steps:

Set Server Variables
Open the URL Rewrite module in the IIS Manager Console and select View Server Variables.
Add the following server variables:
- HTTP_X_Forwarded_Host
- HTTP_X_Forwarded_Proto

Edit Inbound Rules
Once the server variables are added, select the relevant reverse proxy inbound rule and choose Edit under Inbound Rules in the Actions pane. Add the server variables to the inbound rule:
- Map HTTP_X_Forwarded_Host to {HTTP_HOST}
- Map HTTP_X_Forwarded_Proto to https

Once IIS is configured to pass the forwarded headers, the application needs to process them. Add the Forwarded Headers Middleware to your ASP.NET Core application and configure ForwardedHeadersOptions as follows:

```csharp
using System.Net;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.AspNetCore.HttpOverrides;
using Microsoft.Identity.Web;
using Microsoft.Identity.Web.UI;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"));

builder.Services.AddAuthorization(options =>
{
    // By default, all incoming requests will be authorized according to the default policy.
    options.FallbackPolicy = options.DefaultPolicy;
});

builder.Services.AddRazorPages()
    .AddMicrosoftIdentityUI();

builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.KnownProxies.Add(IPAddress.Parse("10.160.7.4")); // Reverse proxy IP address
    options.ForwardedHeaders = ForwardedHeaders.XForwardedProto | ForwardedHeaders.XForwardedHost;
});

var app = builder.Build();

app.UseForwardedHeaders(); // Forwarded Headers Middleware

// Configure the HTTP request pipeline.
if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}

app.UseHttpsRedirection();
app.UseRouting();

app.UseAuthentication();
app.UseAuthorization();

app.MapStaticAssets();
app.MapRazorPages()
    .WithStaticAssets();
app.MapControllers();

app.Run();
```

Note: the order of the middleware is important. Ensure the Forwarded Headers Middleware is called before any other middleware in the pipeline. Also make sure to add the IP address of your ARR/IIS Reverse Proxy to the KnownProxies list.
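To make the trust-and-rewrite behavior concrete, here is a rough Python simulation of what forwarded-headers processing does (the proxy IP and host names are the examples used in this post; this mirrors the idea behind the middleware, not ASP.NET Core’s actual implementation):

```python
# Hypothetical simulation of forwarded-headers handling: honor the headers only
# if the immediate peer is a known proxy, then rewrite host/scheme from them.
# Illustrative only; not the ASP.NET Core ForwardedHeadersMiddleware source.

KNOWN_PROXIES = {"10.160.7.4"}  # the ARR/IIS reverse proxy from this post

def apply_forwarded_headers(request):
    """Rewrite the request's host/scheme from X-Forwarded-* if the peer is trusted."""
    if request["remote_ip"] not in KNOWN_PROXIES:
        return request  # untrusted source: ignore the headers entirely
    headers = request["headers"]
    if "X-Forwarded-Host" in headers:
        request["host"] = headers["X-Forwarded-Host"]
    if "X-Forwarded-Proto" in headers:
        request["scheme"] = headers["X-Forwarded-Proto"]
    return request

req = {
    "remote_ip": "10.160.7.4",
    "host": "backend.local",  # what the backend sees without the middleware
    "scheme": "http",
    "headers": {"X-Forwarded-Host": "arr.local.lab", "X-Forwarded-Proto": "https"},
}
req = apply_forwarded_headers(req)
print(req["scheme"] + "://" + req["host"])  # https://arr.local.lab
```

The key design point is the trust check: headers from an unknown peer are ignored, which is exactly why the proxy must appear in KnownProxies.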
Alternatively, you can use KnownNetworks to allow an IP range instead of individual proxy addresses.

With these configurations in place, the X-Forwarded-Host and X-Forwarded-Proto headers sent from the IIS Reverse Proxy will replace the Host and Scheme in HttpContext. This ensures that the redirect URI correctly points to the IIS Reverse Proxy endpoint, resolving the issue seamlessly.

Further Reading:
- Configure ASP.NET Core to work with proxy servers and load balancers | Microsoft Learn
- Setting HTTP request headers and IIS server variables | Microsoft Learn
- IIS Server Variables | Microsoft Learn

Hope this guide helps!

The Future of Identity: Self-Service Account Recovery (Preview) in Microsoft Entra
In the modern enterprise, the "Help Desk" is paradoxically both a vital resource and a massive security liability. As organizations move toward phishing-resistant, passwordless environments using passkeys and FIDO2 tokens, a critical question remains: what happens when a user loses their only authentication device? Historically, this required a phone call to a support agent. However, in an era of sophisticated social engineering and AI-generated deepfakes, a human agent is often the easiest point of entry for an attacker. Microsoft Entra’s new Self-Service Account Recovery solves this by replacing manual verification with high-assurance, automated identity proofing.

The Fatal Flaw in Traditional Recovery

Most organizations currently rely on one of two methods for recovery, both of which have significant drawbacks:

- Self-Service Password Reset (SSPR): Often relies on "weak" factors like SMS codes or security questions. These are easily intercepted or guessed and don't help a user who is trying to move away from passwords entirely.
- The Help Desk: Requires an agent to "vouch" for a user. Attackers can impersonate employees, use voice-cloning technology, or provide leaked personal information to trick an agent into issuing a Temporary Access Pass (TAP).

The new Entra flow removes the human element from the validation process, ensuring that the person regaining access is exactly who they claim to be.

How the New Recovery Flow Works

The recovery process is built on the concept of "identity proofing," utilizing government-issued documents and biometric liveness checks.

Integration with Verification Partners
Microsoft doesn’t store your passport or driver's license. Instead, Entra integrates with specialized third-party identity verification providers (such as True Credential, IDEMIA, and AU10TIX). These services are experts in forensic document analysis.

The Verification Process
When a user begins a recovery, they are redirected to the partner service.
The process typically involves:

- Document Capture: The user takes a photo of a government ID (passport, driver’s license, etc.).
- Forensic Analysis: The service checks for security features like holograms, fonts, and watermarks to ensure the ID is genuine.
- Liveness Check: The user takes a "selfie" or video. The system uses "Face Check" technology, projecting specific light patterns or colors on the user’s face, to ensure it is a live person and not a photo, video, or deepfake.

Issuance of a Verified ID
Once the third party confirms the user's identity, Microsoft Entra issues a Verified ID. This is a decentralized digital credential that sits in the user's Microsoft Authenticator app and serves as digital proof of their identity that Entra can trust.

The Final Handshake: Face Check
To bridge the gap between the digital credential and the person at the keyboard, Entra performs a Face Check, comparing the live user's face against the photo contained within the Verified ID. If they match, Entra considers the identity "proven."

Bootstrapping the New Device
Once verified, Entra automatically issues a Temporary Access Pass (TAP). This allows the user to log in and immediately register their new device, passkey, or Authenticator app, effectively "bootstrapping" their new secure environment without ever speaking to a human.

Strategic Advantages for IT Leaders

- Zero Trust Maturity: This process fulfills the Zero Trust requirement of "explicit verification" even during the recovery phase.
- Scalability: By automating identity verification, the most time-consuming part of help desk tickets, IT teams can focus on more complex tasks.
- Phishing Resistance: Because the recovery is tied to a physical ID and biometrics, there is no "code" for an attacker to phish.
- Global Compliance: Leveraging government-issued IDs allows organizations to meet high-bar regulatory requirements for identity assurance (such as NIST IAL2).
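Putting the pieces together, the recovery flow described above is effectively a linear pipeline in which any failed stage aborts before a TAP is issued. A hypothetical sketch (stage names and data shapes are illustrative, not Entra's actual implementation):

```python
# Hypothetical sketch of the recovery pipeline: each stage must pass before the
# next runs, and any failure aborts without issuing a Temporary Access Pass.
# Stage names and checks are invented for illustration only.

RECOVERY_STAGES = [
    "document_capture",    # photo of a government ID
    "forensic_analysis",   # holograms, fonts, watermarks
    "liveness_check",      # live selfie vs. photo/video/deepfake
    "verified_id_issued",  # digital credential in Authenticator
    "face_check",          # live face vs. the Verified ID photo
]

def run_recovery(checks):
    """Walk the stages in order; return a TAP only if every stage passes."""
    for stage in RECOVERY_STAGES:
        if not checks.get(stage, False):
            return {"status": "denied", "failed_at": stage, "tap": None}
    return {"status": "verified", "failed_at": None, "tap": "TAP-issued"}

# A user who passes every stage is bootstrapped with a TAP:
ok = run_recovery({s: True for s in RECOVERY_STAGES})
print(ok["status"], ok["tap"])  # verified TAP-issued

# A spoof that fails the liveness check never reaches Verified ID issuance:
spoof = run_recovery({"document_capture": True, "forensic_analysis": True})
print(spoof["failed_at"])  # liveness_check
```

The point of the pipeline shape is that no single stage is sufficient: the document, the liveness signal, and the Face Check must all agree before access is restored.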
Deployment and Prerequisites

To implement this, administrators need to ensure a few things are in place:

- Verified ID Setup: You must configure Microsoft Entra Verified ID within your tenant.
- Matching Logic: Entra uses attributes like First Name and Last Name to match the Verified ID to the user account, so ensuring your HR data is clean and synchronized is essential.
- License & Costs: While the recovery flow is a feature of Entra, the verification partners and the Face Check service (typically a per-check fee) must be provisioned through the Microsoft Security Store.

Conclusion

The transition to a passwordless world is incomplete if the "back door" (recovery) remains open and insecure. By integrating government-grade identity verification directly into the login flow, Microsoft Entra provides the final piece of the puzzle: a recovery method that is as secure as the primary login itself.

Moving from MDT/WDS to Autopilot – Real-World Lessons, Wins & Gotchas
Hi all,

We’ve been moving away from an ageing WDS + MDT setup and over to Windows Autopilot, and I thought I’d share a few key lessons and experiences from the journey, in case anyone else is working through the same transition (...or about to).

Why the change? MDT was becoming unreliable (drivers/apps would randomly fail to install), WDS is on the way out, and we needed a more remote-friendly approach. We also wanted to simplify things for our small IT team and shift from Hybrid Azure AD Join to Azure AD Join only.

We’re doing this as a phased rollout. I harvested existing device hashes using a script from a central server, and manually added machines that weren’t online at the time (most of which were just unused spares; we haven't introduced new hardware yet). If you want a copy of this auto-harvest, please see my next post. The script is useful as it'll just go off and import the hardware hashes into Intune, and can run against multiple computers at a time. (I will add the link to the post once made.)

Some of the biggest hurdles:
• 0x80070002 / 0x80070643 errors (typically due to incomplete registration or app deployment failures)
• Enrollment Status Page (ESP) hangs due to app targeting issues (user vs device) and BitLocker config conflicts
• Wi-Fi setup with RADIUS (NPS) was complex: enterprise certificates, and we're still using internal AD for authentication, so user accounts exist there and sync over to Azure
• Legacy GPOs had to be rebuilt manually in Intune, with lots of trial and error
• Some software (like SolidWorks) wouldn’t install silently via Intune, so I used NinjaOne to handle these, along with remediation scripts in Intune where needed

We also moved from WSUS to Windows Autopatch, which improved update reliability and even helped with driver delivery via Windows Update.

What’s gone well: device provisioning is more consistent, updates are more reliable, build time per machine has dropped, and remote users get systems faster.
It’s also reduced our reliance on legacy infrastructure.

What I’m still working on: tightening up compliance and reporting, improving detection/remediation coverage, figuring out new errors that may occur, and automating as many manual processes as possible.

Ask me anything or share your own experience! I’m happy to help anyone dealing with similar issues or just curious about the move. Feel free to reply here or message me. Always happy to trade lessons learned, especially if you’re in the middle of an Autopilot project yourself.

Cheers,
Timothy Jeens

August 2025 Recap: Azure Database for PostgreSQL
Hello Azure Community,

August was an exciting month for Azure Database for PostgreSQL! We have introduced updates that make your experience smarter and more secure. From simplified Entra ID group login to integrations with LangChain and LangGraph, these updates improve access control and enable seamless integration for your AI agents and applications. Stay tuned as we dive deeper into each of these feature updates.

Feature Highlights
- Enhanced performance recommendations for Azure Advisor - Generally Available
- Entra ID group login using user credentials - Public Preview
- New region buildout: Austria East
- LangChain and LangGraph connector
- Active-active replication guide

Enhanced Performance recommendations for Azure Advisor - Generally Available

Azure Advisor now offers enhanced recommendations to further optimize PostgreSQL server performance, security, and resource management. The key updates are as follows:

- Index Scan Insights: Detection and recommendations for disabled index and index-only scans to improve query efficiency.
- Audit Logging Review: Identification of excessive logging via the pgaudit.log parameter, with guidance to reduce overhead.
- Statistics Monitoring: Alerts on server statistics resets and suggestions to restore accurate performance tracking.
- Storage Optimization: Analysis of storage usage with recommendations to enable the Storage Autogrow feature for seamless scaling.
- Connection Management: Evaluation of workloads for short-lived connections and frequent connectivity errors, with recommendations to implement PgBouncer for efficient connection pooling.

These enhancements aim to provide deeper operational insights and support proactive performance tuning for PostgreSQL workloads. For more details, read the Performance recommendations documentation.

Entra ID group login using user credentials - Public Preview

The public preview of Entra ID group login using user credentials is now available.
This feature simplifies user management and improves security within Azure Database for PostgreSQL, allowing administrators and users to benefit from a more streamlined process:

- Changes in Entra ID group memberships are synchronized periodically, every 30 minutes. This scheduled syncing keeps access controls up to date, simplifying user management and maintaining current permissions.
- Users can log in with their own credentials, streamlining authentication and improving auditing and access management for PostgreSQL environments.

As organizations continue to adopt cloud-native identity solutions, this update represents a major improvement in operational efficiency and security for PostgreSQL database environments. For more details, read the documentation on Entra ID group login.

New Region Buildout: Austria East

New region rollout! Azure Database for PostgreSQL flexible server is now available in Austria East, giving customers in and around the region lower latency and data residency options. This continues our mission to bring Azure PostgreSQL closer to where you build and run your apps. For the full list of regions, visit: Azure Database for PostgreSQL Regions.

LangChain and LangGraph connector

We are excited to announce that native LangChain and LangGraph support is now available for Azure Database for PostgreSQL! This integration brings native support for Azure Database for PostgreSQL into LangChain and LangGraph workflows, enabling developers to use Azure PostgreSQL as a secure, high-performance vector store and memory store for their AI agents and applications. Specifically, this package adds support for:

- Microsoft Entra ID (formerly Azure AD) authentication when connecting to your Azure Database for PostgreSQL instances, and
- the DiskANN indexing algorithm when indexing your (semantic) vectors.
This package makes it easy to connect LangChain to your Azure-hosted PostgreSQL instances, whether you're building intelligent agents, semantic search, or retrieval-augmented generation (RAG) systems. Read more at https://aka.ms/azpg-agent-frameworks

Active-Active Replication Guide

We have published a new blog article that guides you through setting up active-active replication in Azure Database for PostgreSQL using the pglogical extension. This walkthrough covers the fundamentals of active-active replication, key prerequisites for enabling bi-directional replication, and step-by-step demo scripts for the setup. It also compares native and pglogical approaches, helping you choose the right strategy for high availability and multi-region resilience in production environments. Read more about the active-active replication guide on this blog.

Azure Postgres Learning Bytes 🎓

Enabling Zone-Redundant High Availability for Azure Database for PostgreSQL Flexible Server Using APIs

High availability (HA) is essential for ensuring business continuity and minimizing downtime in production workloads. With zone-redundant HA, Azure Database for PostgreSQL Flexible Server automatically provisions a standby replica in a different availability zone, providing stronger fault tolerance against zone-level failures. This section will guide you through enabling zone-redundant HA using REST APIs. Using REST APIs gives you clear visibility into the exact requests and responses, making it easier to debug issues and validate configurations as you go. You can use any REST API client tool of your choice to perform these operations, including Postman, Thunder Client (VS Code extension), or curl, to send requests and inspect the results directly. Before enabling zone-redundant HA, make sure your server is on the General Purpose or Memory Optimized tier and deployed in a region that supports it.
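Conceptually, the walkthrough that follows is a disable-if-needed, PATCH, poll-until-healthy control loop. As a hedged Python sketch with the HTTP layer stubbed out (the real calls are the GET/PATCH requests shown in the steps below; the helper name and injected functions here are illustrative, not an official client):

```python
# Hypothetical control loop for enabling zone-redundant HA. `get` and `patch`
# stand in for the ARM GET/PATCH requests described in the steps below; a real
# script would sleep 30-60 seconds between polls.

def enable_zone_redundant_ha(get, patch, poll_limit=10):
    server = get()
    ha = server["properties"]["highAvailability"]
    if ha["mode"] == "SameZone":          # must disable before switching
        patch({"properties": {"highAvailability": {"mode": "Disabled"}}})
    patch({"properties": {"highAvailability": {"mode": "ZoneRedundant"}}})
    for _ in range(poll_limit):           # re-run the GET until Healthy
        state = get()["properties"]["highAvailability"]
        if state == {"mode": "ZoneRedundant", "state": "Healthy"}:
            return True
    return False

# Stubbed server that becomes healthy after the standby finishes building:
responses = iter([
    {"properties": {"highAvailability": {"mode": "SameZone", "state": "Healthy"}}},
    {"properties": {"highAvailability": {"mode": "ZoneRedundant", "state": "CreatingStandby"}}},
    {"properties": {"highAvailability": {"mode": "ZoneRedundant", "state": "Healthy"}}},
])
sent = []
result = enable_zone_redundant_ha(lambda: next(responses), sent.append)
print(result, len(sent))  # True 2  (one disable PATCH, one enable PATCH)
```

Injecting the HTTP functions keeps the sequencing visible without any network dependency; the same order of operations applies whether you drive it from Postman, curl, or a script.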
If your server is currently using same-zone HA, you must first disable it before switching to zone-redundant.

Steps to Enable Zone-Redundant HA:

1. Get an ARM bearer token. Run this in a terminal where the Azure CLI is signed in (or use Azure Cloud Shell):

```
az account get-access-token --resource https://management.azure.com --query accessToken -o tsv
```

2. Paste the token into your API client tool as a header: Authorization: Bearer <token>

3. Inspect the server (GET) using the following URL:

```
https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resourceGroup}}/providers/Microsoft.DBforPostgreSQL/flexibleServers/{{serverName}}?api-version={{apiVersion}}
```

In the JSON response, note:
- sku.tier → must be 'GeneralPurpose' or 'MemoryOptimized'
- properties.availabilityZone → '1', '2', or '3' (the availability zone specified when the primary server was created; it is selected by the system if no zone was specified)
- properties.highAvailability.mode → 'Disabled', 'SameZone', or 'ZoneRedundant'
- properties.highAvailability.state → e.g. 'NotEnabled', 'CreatingStandby', 'Healthy'

4. If HA is currently SameZone, disable it first (PATCH). Use the same URL as in step 3, with this request body:

```
{ "properties": { "highAvailability": { "mode": "Disabled" } } }
```

5. Enable zone-redundant HA (PATCH). Use the same URL as in step 3, with this request body:

```
{ "properties": { "highAvailability": { "mode": "ZoneRedundant" } } }
```

6. Monitor until HA is healthy. Re-run the GET from step 3 every 30-60 seconds until you see:

```
"highAvailability": { "mode": "ZoneRedundant", "state": "Healthy" }
```

Conclusion

That’s all for our August 2025 feature updates! We’re committed to making Azure Database for PostgreSQL better with every release, and your feedback plays a key role in shaping what’s next.

💬 Have ideas, questions, or suggestions? Share them with us: https://aka.ms/pgfeedback

📢 Want to stay informed about the latest features and best practices?
Follow us here for the latest announcements, feature releases, and best practices: Azure Database for PostgreSQL Blog. More exciting improvements are on the way—stay tuned for what's coming next!

Strengthening Identity Resilience: A Deep Dive into Microsoft Entra Backup and Recovery
In the modern security landscape, we often say that "identity is the new perimeter." We spend significant resources on Conditional Access, phishing-resistant MFA, and Identity Protection to keep the bad guys out. But what happens when the threat is already inside, or when a legitimate administrative action goes sideways? If our identity data, the "brain" of our Microsoft 365 and Azure ecosystem, is corrupted or maliciously altered, our entire security posture collapses. Today, we’re exploring the new Microsoft Entra Backup and Recovery capability, a native safety net designed to ensure our identity infrastructure remains resilient against both accidents and attacks.

Why Native Backup Matters

For years, Entra ID administrators relied on the Recycle Bin for deleted objects. However, a major gap existed: attribute corruption. If a script accidentally wipes the department and manager attributes for 10,000 users, or if a malicious actor modifies our most restrictive Conditional Access policies to create a backdoor, the Recycle Bin can't help us; the objects aren't deleted, they are just wrong. Restoring these specific states previously required complex PowerShell scripting or expensive third-party tools. Entra Backup and Recovery closes this gap by providing a native, automated way to "roll back" the state of our objects.

Core Capabilities: How it Works

The service is currently available in Public Preview for customers with Entra ID P1 or P2 licenses. It operates on a simple yet powerful "snapshot" model:

Automated Daily Snapshots
The system automatically captures a point-in-time view of our tenant every day. Currently, the service maintains a 5-day retention window. This allows us to look back at the state of our environment from yesterday or earlier in the week to find a "known good" configuration.

Visibility via Difference Reports
One of the most powerful features is the Difference Report.
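In spirit, a difference report is an attribute-level comparison between a daily snapshot and the live tenant. A hypothetical sketch of that comparison (invented function and data shapes; not the actual Entra implementation or its data format):

```python
# Hypothetical attribute diff between a snapshot and the live directory state.
# Returns, per object, each drifted attribute as (old_value, current_value).

def difference_report(snapshot, live):
    """Return {object_id: {attribute: (old_value, current_value)}} for drift."""
    report = {}
    for object_id, old_attrs in snapshot.items():
        current_attrs = live.get(object_id, {})
        changed = {
            attr: (old, current_attrs.get(attr))
            for attr, old in old_attrs.items()
            if current_attrs.get(attr) != old
        }
        if changed:
            report[object_id] = changed
    return report

snapshot = {"user-001": {"department": "Sales", "manager": "mgr-42"}}
live     = {"user-001": {"department": None, "manager": "mgr-42"}}  # wiped by a bad script
print(difference_report(snapshot, live))
# {'user-001': {'department': ('Sales', None)}}
```

Unchanged attributes (here, manager) drop out of the report, which is what makes this view useful for triage: only the drift is surfaced.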
Before committing to a restoration, we can compare a specific snapshot against the live state of our tenant. The report provides a granular view of:

- Object ID: Exactly which user, group, or policy is affected.
- Attribute Changes: A side-by-side comparison showing the "old value" (from the backup) versus the "current value" (live in the tenant).
- Metadata Loading: While the first report may take a moment to load metadata, subsequent reports are lightning-fast, allowing for quick triaging during an incident.

Granular Restoration
We aren't forced into an "all or nothing" recovery. We can choose to restore:

- An entire object class (e.g., all Conditional Access policies).
- Specific object types (e.g., only service principals).
- Individual Object IDs for targeted fixes.

The "Defense in Depth" Identity Strategy

Entra Backup and Recovery is not a standalone silo; it is the third pillar of a complete identity resilience strategy. To truly harden our tenant, we must coordinate these three features:

Pillar 1: Soft Delete (The Recycle Bin)
Used for deleted objects. If a user or Microsoft 365 group is deleted, it sits in the Recycle Bin for 30 days. We can restore these easily via the portal or Graph API to maintain the original Object ID and SID.

Pillar 2: Protected Actions (The Vault)
To prevent an attacker from "hard deleting" our objects (purging them from the Recycle Bin so they can't be recovered), we must implement Protected Actions. How it works: we assign a Conditional Access authentication context to sensitive actions like Microsoft.Directory/deletedItems/delete. The result: even a Global Admin cannot permanently purge an object unless they meet strict requirements, such as using a phishing-resistant MFA key or working from a Secure Access Workstation (SAW).

Pillar 3: Backup and Recovery (The Time Machine)
Used for corruption and configuration drift.
When the object exists but its properties are compromised, this is our "time machine" to revert attributes and policy logic to a functional state.

Real-World Scenario: Recovering from a Bulk Logic Error

Imagine an admin runs a bulk update script intended to update the JobTitle for the Sales team. Due to a logic error in the CSV, the script instead clears the SecurityGroup memberships and ExtensionAttributes for the entire department.

1. Detection: Users lose access to apps because their group memberships are gone.
2. Analysis: The admin generates a Difference Report between today and yesterday’s snapshot.
3. Validation: The report confirms that 500 users now have "null" values for the affected attributes.
4. Recovery: The admin selects those 500 user IDs and hits Restore. Within minutes, the attributes are repopulated, and dynamic group memberships begin to recalculate automatically.

Conclusion and Next Steps

The preview of Microsoft Entra Backup and Recovery is a significant step forward in native tenant protection. By combining it with Protected Actions and the Recycle Bin, organizations can finally achieve a "circular" protection model for identity. Ready to try it? Navigate to the Microsoft Entra Admin Center, look for Backup and Recovery in the left-hand navigation, and explore your first snapshot today.

Conditional Access for Agent Identities in Microsoft Entra
AI agents are rapidly becoming part of everyday enterprise operations: summarizing incidents, analyzing logs, orchestrating workflows, or even acting as digital colleagues. As organizations adopt these intelligent automations, securing them becomes just as important as securing human identities. Microsoft Entra introduces Agent Identities and extends Conditional Access to them, but with very limited controls compared to traditional users and workload identities. This blog breaks down what Agent Identities are, how Conditional Access applies to them, and what the current limitations are.

What Exactly Are Agent Identities?

Microsoft Entra now supports a new identity type designed specifically for AI systems:

- Agent Identity – like an app/service principal, but specialized for AI
- Agent User – an identity that behaves more like a human user
- Agent Blueprint – a template used to create agent identities

This model exists because AI systems behave differently than humans or applications: they can act autonomously, operate continuously, and make decisions without user input. AI-driven automation must be governed, and that’s where Conditional Access comes in.

Conditional Access for Agents, but with Important Limitations

Today, Conditional Access for agent identities is purposely minimal. Microsoft clearly states that Conditional Access applies only when:

- An agent identity requests a token
- An agent user requests a token

It does NOT apply when:

- A blueprint acquires a token to create identities
- An agent performs intermediate token exchange

What Controls Are Actually Available Today?

✔ Supported Today

| Category | Supported? | Details |
| --- | --- | --- |
| Identity Targeting | ✔ Yes | You can include/exclude agent identities & agent users |
| Block Access | ✔ Yes | This is the only Grant control currently available |
| Agent Risk (Preview) | ✔ Yes | Early-stage risk evaluation |
| Sign-in evaluation | ✔ Yes | Token acquisition governed by CA |

❌ NOT Supported Today

These CA controls do not apply to Agent Identities:

- MFA
- Authentication strength
- Device compliance
- Approved client apps
- App protection policies
- Session controls
- User sign-in frequency
- Terms of Use
- Location conditions (network/device-based)
- Client apps (legacy/modern access)

Why? Because agents do not perform interactive authentication and do not use device signals or session context like humans. Their authentication is purely machine-driven.

How Conditional Access Works for Agents

When an agent identity (or agent user) requests a token, Microsoft Entra:

1. Identifies the requesting agent
2. Checks CA policy assignments
3. Evaluates any agent-risk conditions
4. Allows or blocks token issuance based on the outcome

That’s it. No MFA prompt. No device check. No authentication strength evaluation. This makes CA for agents fundamentally different from CA for humans.

Why Is Conditional Access So Limited for Agents?

Two major reasons:

1. Agents cannot satisfy user-based controls. AI agents cannot perform MFA, use biometrics, run on compliant devices, or follow session prompts. These are human-driven processes.
2. Agents authenticate via secure credential flows. They use client credentials, federated identity credentials, and token exchange flows.

So CA is limited to identity-level allow/block and risk-based token decisions.

Practical Use Cases (Given Today’s Limitations)

Even with limited controls, CA for agents is still important.
1. Stop compromised agents from continuing to operate. If Microsoft Entra detects high agent risk, CA can block token issuance, halting the agent’s ability to act immediately.
2. Enforce separation of duties for AI agents. Even though you cannot apply MFA or authentication strength, you can separate agents into "allowed" vs "blocked" groups and apply different CA rules per department or system.
3. Prevent AI sprawl. Large enterprises may generate hundreds of AI agents. CA gives central admin control: only approved, vetted agents can operate; others are blocked at token-request time.

Why Agent Blueprints Cannot Be Governed by CA

Blueprints are templates, not active identities. Blueprint token flows are system-level operations, not access attempts. Therefore:

- ❌ No CA evaluation
- ❌ No controls applied
- ❌ Not counted as agent activity

Only actual agent identities are governed by CA.

What the Future Might Include

Microsoft hints the capabilities will expand:

- Agent risk scoring
- Agent behaviour analytics
- More granularity in CA for agents
- Additional grant controls
- Policy scoping at the task or capability level

But as of today, CA for agents remains intentionally constrained, allowing safe onboarding of the new identity type without accidental disruption.

Final Summary

Conditional Access for Agent Identities is currently a lightweight enforcement mechanism designed to block unauthorized or risky agents, not a full policy suite like we have for human users.

✔ What it does:
- Controls whether an agent identity can acquire a token
- Allows blocking specific agents
- Implements early agent-risk logic
- Applies Zero Trust principles at the identity perimeter

❌ What it does not do:
- Enforce MFA
- Enforce authentication strength
- Enforce device or location conditions
- Apply session controls
- Govern blueprints

As organizations adopt more autonomous agents, this foundational layer keeps AI identities visible and controllable, and sets the stage for richer governance in the future.
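As a closing illustration, the token-issuance evaluation described in "How Conditional Access Works for Agents" can be sketched as a simple allow/block check. The policy shape and field names below are invented for illustration; they are not the actual Entra policy schema:

```python
# Hypothetical sketch of CA evaluation for agent identities: identify the
# requesting agent, match policy assignments, check agent risk, then allow or
# block token issuance. No MFA, device, or session controls are involved.

def evaluate_token_request(agent_id, risk_level, policies):
    """Return 'blocked' if any matching policy blocks the agent, else 'allowed'."""
    for policy in policies:
        if agent_id not in policy["included_agents"]:
            continue  # policy does not target this agent
        if policy["grant"] == "block":
            return "blocked"  # Block Access: the only grant control today
        if policy.get("block_on_high_risk") and risk_level == "high":
            return "blocked"  # early agent-risk logic
    return "allowed"  # no prompt, no device check: purely allow/block

policies = [
    {"included_agents": {"agent-hr-bot"}, "grant": "block"},
    {"included_agents": {"agent-soc-triage"}, "grant": "allow", "block_on_high_risk": True},
]
print(evaluate_token_request("agent-hr-bot", "low", policies))       # blocked
print(evaluate_token_request("agent-soc-triage", "high", policies))  # blocked
print(evaluate_token_request("agent-soc-triage", "low", policies))   # allowed
```

Note how different this is from the human flow: the decision is made entirely from identity targeting and risk, with nothing for the agent to "satisfy" interactively.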