microsoft entra
29 TopicsSecure Model Context Protocol (MCP) Implementation with Azure and Local Servers
Introduction The Model Context Protocol (MCP) enables AI systems to interact with external data sources and tools through a standardized interface. While powerful, MCP can introduce security risks in enterprise environments. This tutorial shows you how to implement MCP securely using local servers, Azure OpenAI with APIM, and proper authentication. Understanding MCP's Security Risks There are a couple of key security concerns to consider before implementing MCP: Data Exfiltration: External MCP servers could expose sensitive data. Unauthorized Access: Third-party services become potential security risks. Loss of Control: Unknown how external services handle your data. Compliance Issues: Difficulty meeting regulatory requirements with external dependencies. The solution? Keep everything local and controlled. Secure Architecture Before we dive into implementation, let's take a look at the overall architecture of our secure MCP solution: This architecture consists of three key components working together: Local MCP Server - Your custom tools run entirely within your local environment, reducing external exposure risks. Azure OpenAI + APIM Gateway - All AI requests are routed through Azure API Management with Microsoft Entra ID authentication, providing enterprise-grade security controls and compliance. Authenticated Proxy - A lightweight proxy service handles token management and request forwarding, ensuring seamless integration. One of the key benefits of this architecture is that no API key is required. Traditional implementations often require storing OpenAI API keys in configuration files, environment variables, or secrets management systems, creating potential security vulnerabilities. This approach uses Azure Managed Identity for backend authentication and Azure CLI credentials for client authentication, meaning no sensitive API keys are ever stored, logged, or exposed in your codebase. For more security, APIM and Azure OpenAI resources can be configured with IP restrictions or network rules to only accept traffic from certain sources. These configurations are available for most Azure resources and provide an additional layer of network-level security. This security-forward approach gives you the full power of MCP's tool integration capabilities while keeping your implementation completely under your control. How to Implement MCP Securely 1. Local MCP Server Implementation Building the MCP Server Let's start by creating a simple MCP server in .NET Core. 1. Create a web application dotnet new web -n McpServer 2.Add MCP packages dotnet add package ModelContextProtocol --prerelease dotnet add package ModelContextProtocol.AspNetCore --prerelease 3. Configure Program.cs var builder = WebApplication.CreateBuilder(args); builder.Services.AddMcpServer() .WithHttpTransport() .WithToolsFromAssembly(); var app = builder.Build(); app.MapMcp(); app.Run(); WithToolsFromAssembly() automatically discovers and registers tools from the current assembly. Look into the C# SDK for other ways to register tools for your use case. 4. Define Tools Now, we can define some tools that our MCP server can expose. here is a simple example for tools that echo input back to the client: using ModelContextProtocol.Server; using System.ComponentModel; namespace Tools; [McpServerToolType] public static class EchoTool { [McpServerTool] [Description("Echoes the input text back to the client in all capital letters.")] public static string EchoCaps(string input) { return new string(input.ToUpperInvariant()); } [McpServerTool] [Description("Echoes the input text back to the client in reverse.")] public static string ReverseEcho(string input) { return new string(input.Reverse().ToArray()); } } Key components of MCP tools are the McpServerToolType class decorator indicating that this class contains MCP tools, and the McpServerTool method decorator with a description that explains what the tool does. Alternative: STDIO Transport If you want to use STDIO transport instead of SSE (implemented here), check out this guide: Build a Model Context Protocol (MCP) Server in C# 2. Create a MCP Client with Cline Now that we have our MCP server set up with tools, we need a client that can discover and invoke these tools. For this implementation, we'll use Cline as our MCP client, configured to work through our secure Azure infrastructure. 1. Install Cline VS Code Extension Install the Cline extension in VS Code. 2. Deploy secure Azure OpenAI Endpoint with APIM Instead of connecting Cline directly to external AI services (which could expose the secure implementation to external bad actors), we will route through Azure API Management (APIM) for enterprise security. With this implementation, all requests go through Microsoft Entra ID and we use managed identity for all authentications. Quick Setup: Deploy the Azure OpenAI with APIM solution. Ensure your Azure OpenAI resources are configured to allow your APIM's managed identity to make calls. The APIM policy below uses managed identity authentication to connect to Azure OpenAI backends. Refer to the Azure OpenAI documentation on managed identity authentication for detailed setup instructions. 3. Configure APIM Policy After deploying APIM, configure the following policy to enable Azure AD token validation, managed identity authentication, and load balancing across multiple OpenAI backends: <!-- Azure API Management Policy for OpenAI Endpoint --> <!-- Implements Azure AD Token validation, managed identity authentication --> <!-- Supports round-robin load balancing across multiple OpenAI backends --> <!-- Requests with 'gpt-5' in the URL are routed to a single backend --> <!-- The client application ID '04b07795-8ddb-461a-bbee-02f9e1bf7b46' is the official Azure CLI app registration --> <!-- This policy allows requests authenticated by Azure CLI (az login) when the required claims are present --> <policies> <inbound> <!-- IP Allow List Fragment (external fragment for client IP restrictions) --> <include-fragment fragment-id="YourCompany-IPAllowList" /> <!-- Azure AD Token Validation for Azure CLI app ID --> <validate-azure-ad-token tenant-id="YOUR-TENANT-ID-HERE" header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid."> <client-application-ids> <application-id>04b07795-8ddb-461a-bbee-02f9e1bf7b46</application-id> </client-application-ids> <audiences> <audience>api://YOUR-API-AUDIENCE-ID-HERE</audience> </audiences> <required-claims> <claim name="roles" match="any"> <value>YourApp.User</value> </claim> </required-claims> </validate-azure-ad-token> <!-- Acquire Managed Identity access token for backend authentication --> <authentication-managed-identity resource="https://cognitiveservices.azure.com" output-token-variable-name="managed-id-access-token" ignore-error="false" /> <!-- Set Authorization header for backend using the managed identity token --> <set-header name="Authorization" exists-action="override"> <value>@("Bearer " + (string)context.Variables["managed-id-access-token"])</value> </set-header> <!-- Check if URL contains 'gpt-5' and set backend accordingly --> <choose> <when condition="@(context.Request.Url.Path.ToLower().Contains("gpt-5"))"> <set-variable name="selected-backend-url" value="https://your-region1-oai.openai.azure.com/openai" /> </when> <otherwise> <cache-lookup-value key="backend-counter" variable-name="backend-counter" /> <choose> <when condition="@(context.Variables.ContainsKey("backend-counter") == false)"> <set-variable name="backend-counter" value="@(0)" /> </when> </choose> <set-variable name="current-backend-index" value="@((int)context.Variables["backend-counter"] % 7)" /> <choose> <when condition="@((int)context.Variables["current-backend-index"] == 0)"> <set-variable name="selected-backend-url" value="https://your-region1-oai.openai.azure.com/openai" /> </when> <when condition="@((int)context.Variables["current-backend-index"] == 1)"> <set-variable name="selected-backend-url" value="https://your-region2-oai.openai.azure.com/openai" /> </when> <when condition="@((int)context.Variables["current-backend-index"] == 2)"> <set-variable name="selected-backend-url" value="https://your-region3-oai.openai.azure.com/openai" /> </when> <when condition="@((int)context.Variables["current-backend-index"] == 3)"> <set-variable name="selected-backend-url" value="https://your-region4-oai.openai.azure.com/openai" /> </when> <when condition="@((int)context.Variables["current-backend-index"] == 4)"> <set-variable name="selected-backend-url" value="https://your-region5-oai.openai.azure.com/openai" /> </when> <when condition="@((int)context.Variables["current-backend-index"] == 5)"> <set-variable name="selected-backend-url" value="https://your-region6-oai.openai.azure.com/openai" /> </when> <when condition="@((int)context.Variables["current-backend-index"] == 6)"> <set-variable name="selected-backend-url" value="https://your-region7-oai.openai.azure.com/openai" /> </when> </choose> <set-variable name="next-counter" value="@(((int)context.Variables["backend-counter"] + 1) % 1000)" /> <cache-store-value key="backend-counter" value="@((int)context.Variables["next-counter"])" duration="300" /> </otherwise> </choose> <!-- Always set backend service using selected-backend-url variable --> <set-backend-service base-url="@((string)context.Variables["selected-backend-url"])" /> <!-- Inherit any base policies defined outside this section --> <base /> </inbound> <backend> <base /> </backend> <outbound> <base /> </outbound> <on-error> <base /> </on-error> </policies> This policy creates a secure gateway that validates Azure AD tokens from your local Azure CLI session, then uses APIM's managed identity to authenticate with Azure OpenAI backends, eliminating the need for API keys. It automatically load-balances requests across multiple Azure OpenAI regions using round-robin distribution for optimal performance. 4. Create Azure APIM proxy for Cline This FastAPI-based proxy forwards OpenAI-compatible API requests from Cline through APIM using Azure AD authentication via Azure CLI credentials, eliminating the need to store or manage OpenAI API keys. Prerequisites: Python 3.8 or higher Azure CLI (ensure az login has been run at least once) Ensure the user running the proxy script has appropriate Azure AD roles and permissions. This script uses Azure CLI credentials to obtain bearer tokens. Your user account must have the correct roles assigned and access to the target API audience configured in the APIM policy above. Quick setup for the proxy: Create this requirements.txt: fastapi uvicorn requests azure-identity Create this Python script for the proxy source code azure_proxy.py: import os import requests from fastapi import FastAPI, Request from fastapi.responses import StreamingResponse import uvicorn from azure.identity import AzureCliCredential # CONFIGURATION APIM_BASE_URL = <APIM BASE URL HERE> AZURE_SCOPE = <AZURE SCOPE HERE> PORT = int(os.environ.get("PORT", 8080)) app = FastAPI() credential = AzureCliCredential() # Use a single requests.Session for connection pooling from requests.adapters import HTTPAdapter session = requests.Session() session.mount("https://", HTTPAdapter(pool_connections=100, pool_maxsize=100)) import time _cached_token = None _cached_expiry = 0 def get_bearer_token(scope: str) -> str: """Get an access token using AzureCliCredential, caching until expiry is within 30 seconds.""" global _cached_token, _cached_expiry now = int(time.time()) if _cached_token and (_cached_expiry - now > 30): return _cached_token try: token_obj = credential.get_token(scope) _cached_token = token_obj.token _cached_expiry = token_obj.expires_on return _cached_token except Exception as e: raise RuntimeError(f"Could not get Azure access token: {e}") @app.api_route("/{path:path}", methods=["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS"]) async def proxy(request: Request, path: str): # Assemble the destination URL (preserve trailing slash logic) dest_url = f"{APIM_BASE_URL.rstrip('/')}/{path}".rstrip("/") if request.url.query: dest_url += "?" + request.url.query # Get the Bearer token bearer_token = get_bearer_token(AZURE_SCOPE) # Prepare headers (copy all, overwrite Authorization) headers = dict(request.headers) headers["Authorization"] = f"Bearer {bearer_token}" headers.pop("host", None) # Read body body = await request.body() # Send the request to APIM using the pooled session resp = session.request( method=request.method, url=dest_url, headers=headers, data=body if body else None, stream=True, ) # Stream the response back to the client return StreamingResponse( resp.raw, status_code=resp.status_code, headers={k: v for k, v in resp.headers.items() if k.lower() != "transfer-encoding"}, ) if __name__ == "__main__": # Bind the app to 127.0.0.1 to avoid any Firewall updates uvicorn.run(app, host="127.0.0.1", port=PORT) Run the setup: pip install -r requirements.txt az login # Authenticate with Azure python azure_proxy.py Configure Cline to use the proxy: Using the OpenAI Compatible API Provider: Base URL: http://localhost:8080 API Key: <any random string> Model ID: <your Azure OpenAI deployment name> API Version: <your Azure OpenAI deployment version> The API key field is required by Cline but unused in our implementation - any random string works since authentication happens via Azure AD. 5. Configure Cline to listen to your MCP Server Now that we have both our MCP server running and Cline configured with secure OpenAI access, the final step is connecting them together. To enable Cline to discover and use your custom tools, navigate to your installed MCP servers on Cline, select Configure MCP Servers, and add in the configuration for your server: { "mcpServers": { "mcp-tools": { "autoApprove": [ "EchoCaps", "ReverseEcho", ], "disabled": false, "timeout": 60, "type": "sse", "url": "http://<your localhost url>/sse" } } } Now, you can use Cline's chat interface to interact with your secure MCP tools. Try asking Cline to use your custom tools - for example, "Can you echo 'Hello World' in capital letters?" and watch as it calls your local MCP server through the infrastructure you've built. Conclusion There you have it: A secure implementation of MCP that can be tailored to your specific use case. This approach gives you the power of MCP while maintaining enterprise security. You get: AI capabilities through secure Azure infrastructure. Custom tools that never leave your environment. Standard MCP interface for easy integration. Complete control over your data and tools. The key is keeping MCP servers local while routing AI requests through your secure Azure infrastructure. This way, you gain MCP's benefits without compromising security. Disclaimer While this tutorial provides a secure foundation for MCP implementation, organizations are responsible for configuring their Azure resources according to their specific security requirements and compliance standards. Ensure proper review of network rules, access policies, and authentication configurations before deploying to production environments. Resources MCP SDKs and Tools: MCP C# SDK MCP Python SDK Cline SDK Cline User Guide Azure OpenAI with APIM Azure API Management Network Security: Azure API Management - restrict caller IPs Azure API Management with an Azure virtual network Set up inbound private endpoint for Azure API Management Azure OpenAI and AI Services Network Security: Configure Virtual Networks for Azure AI services Securing Azure OpenAI inside a virtual network with private endpoints Add an Azure OpenAI network security perimeter az cognitiveservices account network-rule442Views2likes0CommentsFrom Traditional Security to AI-Driven Cyber Resilience: Microsoft’s Approach to Securing AI
By Chirag Mehta, Vice President and Principal Analyst - Constellation Research AI is changing the way organizations work. It helps teams write code, detect fraud, automate workflows, and make complex decisions faster than ever before. But as AI adoption increases, so do the risks, many of which traditional security tools were not designed to address. Cybersecurity leaders are starting to see that AI security is not just another layer of defense. It is becoming essential to building trust, ensuring resilience, and maintaining business continuity. Earlier this year, after many conversations with CISOs and CIOs, I saw a clear need to bring more attention to this topic. That led to my report on AI Security, which explores how AI-specific vulnerabilities differ from traditional cybersecurity risks and why securing AI systems calls for a more intentional approach. Why AI Changes the Security Landscape AI systems do not behave like traditional software. They learn from data instead of following pre-defined logic. This makes them powerful, but also vulnerable. For example, an AI model can: Misinterpret input in ways that humans cannot easily detect Be tricked into producing harmful or unintended responses through crafted prompts Leak sensitive training data in its outputs Take actions that go against business policies or legal requirements These are not coding flaws. They are risks that originate from how AI systems process information and act on it. These risks become more serious with agentic AI. These systems act on behalf of humans, interact with other software, and sometimes with other AI agents. They can make decisions, initiate actions, and change configurations. If one is compromised, the consequences can spread quickly. A key challenge is that many organizations still rely on traditional defenses to secure AI systems. While those tools remain necessary, they are no longer enough. AI introduces new risks across every layer of the stack, including data, networks, endpoints, applications, and cloud infrastructure. As I explained in my report, the security focus must shift from defending the perimeter to governing the behavior of AI systems, the data they use, and the decisions they make. The Shift Toward AI-Aware Cyber Resilience Cyber resilience is the ability to withstand, adapt to, and recover from attacks. Meeting that standard today requires understanding how AI is developed, deployed, and used by employees, customers, and partners. To get there, organizations must answer questions such as: Where is our sensitive data going, and is it being used safely to train models? What non-human identities, such as AI agents, are accessing systems and data? Can we detect when an AI system is being misused or manipulated? Are we in compliance with new AI regulations and data usage rules? Let’s look at how Microsoft has evolved its mature security portfolio to help protect AI workloads and support this shift toward resilience. Microsoft’s Approach to Secure AI Microsoft has taken a holistic and integrated approach to AI security. Rather than creating entirely new tools, it is extending existing products already used by millions to support AI workloads. These features span identity, data, endpoint, and cloud protection. 1. Microsoft Defender: Treating AI Workloads as Endpoints AI models and applications are emerging as a new class of infrastructure that needs visibility and protection. Defender for Cloud secures AI workloads across Azure and other cloud platforms such as AWS and GCP by monitoring model deployments and detecting vulnerabilities. Defender for Cloud Apps extends protection to AI-enabled apps running at the edge Defender for APIs supports AI systems that use APIs, which are often exposed to risks such as prompt injection or model manipulation Additionally, Microsoft has launched tools to support AI red-teaming, content safety, and continuous evaluation capabilities to ensure agents operate safely and as intended. This allows teams identify and remediate risks such as jailbreaks or prompt injection before models are deployed. 2. Microsoft Entra: Managing Non-Human Identities As organizations roll out more AI agents and copilots, non-human identities are becoming more common. These digital identities need strong oversight. Microsoft Entra helps create and manage identities for AI agents Conditional Access ensures AI agents only access the resources they need, based on real-time signals and context Privileged Identity Management manages, controls, and monitors AI agents access to important resources within an organization 3. Microsoft Purview: Securing Data Used in AI Purview plays an important role in securing both the data that powers AI apps and agents, and the data they generate through interactions. Data discovery and classification helps label sensitive information and track its use Data Loss Prevention policies help prevent leaks or misuse of data in tools such as Copilot or agents built in Azure AI Foundry Insider Risk Management alerts security teams when employees feed sensitive data into AI systems without approval Purview also helps organizations meet transparency and compliance requirements, extending the same policies they already use today to AI workloads, without requiring separate configurations, as regulations like the EU AI Act take effect. Here's a video that explains the above Microsoft security products: Securing AI Is Now a Strategic Priority AI is evolving quickly, and the risks are evolving with it. Traditional tools still matter, but they were not built for systems that learn, adapt, and act independently. They also weren’t designed for the pace and development approaches AI requires, where securing from the first line of code is critical to staying protected at scale. Microsoft is adapting its security portfolio to meet this shift. By strengthening identity, data, and endpoint protections, it is helping customers build a more resilient foundation. Whether you are launching your first AI-powered tool or managing dozens of agents across your organization, the priority is clear. Secure your AI systems before they become a point of weakness. You can read more in my AI Security report and learn how Microsoft is helping organizations secure AI supporting these efforts across its security portfolio.Hacking Made Easy, Patching Made Optional: A Modern Cyber Tragedy
In today’s cyber threat landscape, the tools and techniques required to compromise enterprise environments are no longer confined to highly skilled adversaries or state-sponsored actors. While artificial intelligence is increasingly being used to enhance the sophistication of attacks, the majority of breaches still rely on simple, publicly accessible tools and well-established social engineering tactics. Another major issue is the persistent failure of enterprises to patch common vulnerabilities in a timely manner—despite the availability of fixes and public warnings. This negligence continues to be a key enabler of large-scale breaches, as demonstrated in several recent incidents. The Rise of AI-Enhanced Attacks Attackers are now leveraging AI to increase the credibility and effectiveness of their campaigns. One notable example is the use of deepfake technology—synthetic media generated using AI—to impersonate individuals in video or voice calls. North Korean threat actors, for instance, have been observed using deepfake videos and AI-generated personas to conduct fraudulent job interviews with HR departments at Western technology companies. These scams are designed to gain insider access to corporate systems or to exfiltrate sensitive intellectual property under the guise of legitimate employment. Social Engineering: Still the Most Effective Entry Point And yet, many recent breaches have begun with classic social engineering techniques. In the cases of Coinbase and Marks & Spencer, attackers impersonated employees through phishing or fraudulent communications. Once they had gathered sufficient personal information, they contacted support desks or mobile carriers, convincingly posing as the victims to request password resets or SIM swaps. This impersonation enabled attackers to bypass authentication controls and gain initial access to sensitive systems, which they then leveraged to escalate privileges and move laterally within the network. Threat groups such as Scattered Spider have demonstrated mastery of these techniques, often combining phishing with SIM swap attacks and MFA bypass to infiltrate telecom and cloud infrastructure. Similarly, Solt Thypoon (formerly DEV-0343), linked to North Korean operations, has used AI-generated personas and deepfake content to conduct fraudulent job interviews—gaining insider access under the guise of legitimate employment. These examples underscore the evolving sophistication of social engineering and the need for robust identity verification protocols. Built for Defense, Used for Breach Despite the emergence of AI-driven threats, many of the most successful attacks continue to rely on simple, freely available tools that require minimal technical expertise. These tools are widely used by security professionals for legitimate purposes such as penetration testing, red teaming, and vulnerability assessments. However, they are also routinely abused by attackers to compromise systems Case studies for tools like Nmap, Metasploit, Mimikatz, BloodHound, Cobalt Strike, etc. The dual-use nature of these tools underscores the importance of not only detecting their presence but also understanding the context in which they are being used. From CVE to Compromise While social engineering remains a common entry point, many breaches are ultimately enabled by known vulnerabilities that remain unpatched for extended periods. For example, the MOVEit Transfer vulnerability (CVE-2023-34362) was exploited by the Cl0p ransomware group to compromise hundreds of organizations, despite a patch being available. Similarly, the OpenMetadata vulnerability (CVE-2024-28255, CVE-2024-28847) allowed attackers to gain access to Kubernetes workloads and leverage them for cryptomining activity days after a fix had been issued. Advanced persistent threat groups such as APT29 (also known as Cozy Bear) have historically exploited unpatched systems to maintain long-term access and conduct stealthy operations. Their use of credential harvesting tools like Mimikatz and lateral movement frameworks such as Cobalt Strike highlights the critical importance of timely patch management—not just for ransomware defense, but also for countering nation-state actors. Recommendations To reduce the risk of enterprise breaches stemming from tool misuse, social engineering, and unpatched vulnerabilities, organizations should adopt the following practices: 1. Patch Promptly and Systematically Ensure that software updates and security patches are applied in a timely and consistent manner. This involves automating patch management processes to reduce human error and delay, while prioritizing vulnerabilities based on their exploitability and exposure. Microsoft Intune can be used to enforce update policies across devices, while Windows Autopatch simplifies the deployment of updates for Windows and Microsoft 365 applications. To identify and rank vulnerabilities, Microsoft Defender Vulnerability Management offers risk-based insights that help focus remediation efforts where they matter most. 2. Implement Multi-Factor Authentication (MFA) To mitigate credential-based attacks, MFA should be enforced across all user accounts. Conditional access policies should be configured to adapt authentication requirements based on contextual risk factors such as user behavior, device health, and location. Microsoft Entra Conditional Access allows for dynamic policy enforcement, while Microsoft Entra ID Protection identifies and responds to risky sign-ins. Organizations should also adopt phishing-resistant MFA methods, including FIDO2 security keys and certificate-based authentication, to further reduce exposure. 3. Identity Protection Access Reviews and Least Privilege Enforcement Conducting regular access reviews ensures that users retain only the permissions necessary for their roles. Applying least privilege principles and adopting Microsoft Zero Trust Architecture limits the potential for lateral movement in the event of a compromise. Microsoft Entra Access Reviews automates these processes, while Privileged Identity Management (PIM) provides just-in-time access and approval workflows for elevated roles. Just-in-Time Access and Risk-Based Controls Standing privileges should be minimized to reduce the attack surface. Risk-based conditional access policies can block high-risk sign-ins and enforce additional verification steps. Microsoft Entra ID Protection identifies risky behaviors and applies automated controls, while Conditional Access ensures access decisions are based on real-time risk assessments to block or challenge high-risk authentication attempts. Password Hygiene and Secure Authentication Promoting strong password practices and transitioning to passwordless authentication enhances security and user experience. Microsoft Authenticator supports multi-factor and passwordless sign-ins, while Windows Hello for Business enables biometric authentication using secure hardware-backed credentials. 4. Deploy SIEM and XDR for Detection and Response A robust detection and response capability is vital for identifying and mitigating threats across endpoints, identities, and cloud environments. Microsoft Sentinel serves as a cloud-native SIEM that aggregates and analyses security data, while Microsoft Defender XDR integrates signals from multiple sources to provide a unified view of threats and automate response actions. 5. Map and Harden Attack Paths Organizations should regularly assess their environments for attack paths such as privilege escalation and lateral movement. Tools like Microsoft Defender for Identity help uncover Lateral Movement Paths, while Microsoft Identity Threat Detection and Response (ITDR) integrates identity signals with threat intelligence to automate response. These capabilities are accessible via the Microsoft Defender portal, which includes an attack path analysis feature for prioritizing multicloud risks. 6. Stay Current with Threat Actor TTPs Monitor the evolving tactics, techniques, and procedures (TTPs) employed by sophisticated threat actors. Understanding these behaviours enables organizations to anticipate attacks and strengthen defenses proactively. Microsoft Defender Threat Intelligence provides detailed profiles of threat actors and maps their activities to the MITRE ATT&CK framework. Complementing this, Microsoft Sentinel allows security teams to hunt for these TTPs across enterprise telemetry and correlate signals to detect emerging threats. 7. Build Organizational Awareness Organizations should train staff to identify phishing, impersonation, and deepfake threats. Simulated attacks help improve response readiness and reduce human error. Use Attack Simulation Training, in Microsoft Defender for Office 365 to run realistic phishing scenarios and assess user vulnerability. Additionally, educate users about consent phishing, where attackers trick individuals into granting access to malicious apps. Conclusion The democratization of offensive security tooling, combined with the persistent failure to patch known vulnerabilities, has significantly lowered the barrier to entry for cyber attackers. Organizations must recognize that the tools used against them are often the same ones available to their own security teams. The key to resilience lies not in avoiding these tools, but in mastering them—using them to simulate attacks, identify weaknesses, and build a proactive defense. Cybersecurity is no longer a matter of if, but when. The question is: will you detect the attacker before they achieve their objective? Will you be able to stop them before reaching your most sensitive data? Additional read: Gartner Predicts 30% of Enterprises Will Consider Identity Verification and Authentication Solutions Unreliable in Isolation Due to AI-Generated Deepfakes by 2026 Cyber security breaches survey 2025 - GOV.UK Jasper Sleet: North Korean remote IT workers’ evolving tactics to infiltrate organizations | Microsoft Security Blog MOVEit Transfer vulnerability Solt Thypoon Scattered Spider SIM swaps Attackers exploiting new critical OpenMetadata vulnerabilities on Kubernetes clusters | Microsoft Security Blog Microsoft Defender Vulnerability Management - Microsoft Defender Vulnerability Management | Microsoft Learn Zero Trust Architecture | NIST tactics, techniques, and procedures (TTP) - Glossary | CSRC https://learn.microsoft.com/en-us/security/zero-trust/deploy/overviewIntroducing Microsoft Sentinel data lake
Today, we announced a significant expansion of Microsoft Sentinel’s capabilities through the introduction of Sentinel data lake, now rolling out in public preview. Security teams cannot defend what they cannot see and analyze. With exploding volumes of security data, organizations are struggling to manage costs while maintaining effective threat coverage. Do-it-yourself security data architectures have perpetuated data silos, which in turn have reduced the effectiveness of AI solutions in security operations. With Sentinel data lake, we are taking a major step to address these challenges. Microsoft Sentinel data lake enables a fully managed, cloud-native, data lake that is purposefully designed for security, right inside Sentinel. Built on a modern lake architecture and powered by Azure, Sentinel data lake simplifies security data management, eliminates security data silos, and enables cost-effective long-term security data retention with the ability to run multiple forms of analytics on a single copy of that data. Security teams can now store and manage all security data. This takes the market-leading capabilities of Sentinel SIEM and supercharges it even further. Customers can leverage the data lake for retroactive TI matching and hunting over a longer time horizon, track low and slow attacks, conduct forensics analysis, build anomaly insights, and meet reporting & compliance needs. By unifying security data, Sentinel data lake provides the AI ready data foundation for AI solutions. Let’s look at some of Sentinel data lake’s core features. Simplified onboarding and enablement inside Defender Portal: Customers can easily discover and enable the new data lake from within the Defender portal, either from the banner on the home page or from settings. Setting up a modern data lake now is just a click away, empowering security teams to get started quickly without a complex setup. Simplified security data management: Sentinel data lake works seamlessly with existing Sentinel connectors. It brings together security logs from Microsoft services across M365, Defender, Azure, Entra, Purview, Intune plus third-party sources like AWS, GCP, network and firewall data from 350+ connectors and solutions. The data lake supports Sentinel’s existing table schemas while customers can also create custom connectors to bring raw data into the data lake or transform it during ingestion. In the future, we will enable additional industry-standard schemas. The data lake expands beyond just activity logs by including a native asset store. Critical asset information is added to the data lake using new Sentinel data connectors for Microsoft 365, Entra, and Azure, enabling a single place to analyze activity and asset data enriched with Threat intelligence. A new table management experience makes it easy for customers to choose where to send and store data, as well as set related retention policies to optimize their security data estate. Customers can easily send critical, high-fidelity security data to the analytics tier or choose to send high-volume, low fidelity logs to the new data lake tier. Any data brought into the analytics tier is automatically mirrored into the data lake at no additional charge, making data lake the central location for all security data. Advanced data analysis capabilities over data in the data lake: Sentinel data lake stores all security data in an open format to enable analysts to do multi-modal security analytics on a single copy of data. Through the new data lake exploration experience in the Defender portal, customers can leverage Kusto query language to analyze historical data using the full power of Kusto. Since the data lake supports the Sentinel table schema, advanced hunting queries can be run directly on the data lake. Customers can also schedule long-running jobs, either once or on a schedule, that perform complex analysis on historical data for in-depth security insights. These insights generated from the data lake can be easily elevated to analytics tier and leveraged in Sentinel for threat investigation and response. Additionally, as part of the public preview, we are also releasing a new Sentinel Visual Studio Code extension that enables security teams to easily connect to the same data lake data and use Python notebooks, as well as spark and ML libraries to deeply analyze lake data for anomalies. Since the environment is fully managed, there is no compute infrastructure to set up. Customers can just install the Visual Studio Code extension and use AI coding agents like GitHub Copilot to build a notebook and execute it in the managed environment. These notebooks can also be scheduled as jobs and the resulting insights can be elevated to analytics tier and leveraged in Sentinel for threat investigation and response. Flexible business model: Sentinel data lake enables customers to separate their data ingestion and retention needs from their security analytics needs, allowing them to ingest and store data cost effectively and then pay separately when analyzing data for their specific needs. Let’s put this all together and show an example of how a customer can operationalize and derive value from the data lake for retrospective threat intelligence matching in Microsoft Sentinel. Network logs are typically high-volume logs but can often contain key insights for detecting initial entry point of an attack, command and control connection, lateral movement or an exfiltration attempt. Customers can now send these high-volume logs to the data lake tier. Next, they can create a python notebook that can join latest threat intelligence from Microsoft Defender Threat Intelligence to scan network logs for any connections to/from a suspicious IP or domain. They can schedule this notebook to run as a scheduled job, and any insights can then be promoted to analytics tiers and leveraged to enrich ongoing investigation, hunts, response or forensics analysis. All this is possible cost-effectively without having to set up any complex infrastructure, enabling security teams to achieve deeper insights. This preview is now rolling out for customers in Defender portal in our supported regions. To learn more, check out our Mechanics video and our documentation or talk to your account teams. Get started today Join us as we redefine what’s possible in security operations: Onboard Sentinel data lake: https://aka.ms/sentineldatalakedocs Explore our pricing: https://aka.ms/sentinel/pricingblog For the supported regions, please refer to https://aka.ms/sentinel/datalake/geos Learn more about our MDTI news: http://aka.ms/mdti-convergence General Availability of Auxiliary Logs and Reduced PricingCheck out the latest security skill-building resources on Microsoft Learn
Prove your experience with this new Microsoft Applied Skill Are you an identity and access professional? Do you have a foundational understanding of Microsoft Entra ID? Showcase your experience and readiness for identity scenarios by earning our new Microsoft Applied Skill: Get started with identities and access using Microsoft Entra. You can prepare for the skills assessment by completing our Learning Path—Perform basic identity and access tasks—here you'll learn how to: Create, configure, and manage identities Describe the authentication capabilities of Microsoft Entra ID Describe the access management capabilities of Microsoft Entra Describe the identity protection and governance capabilities of Microsoft Entra Get started with identity and access labs On average, this Learning Path requires less than four hours to complete. Get started today! Certification update: Goodbye, SC-400 – hello, SC-401! As you may already know, we will be retiring Microsoft Certified: Information Protection and Compliance Administrator Associate Certification and its related Exam SC-400: Administering Information Protection and Compliance in Microsoft 365 on May 31, 2025. If you are considering renewing the certification please do so before the date. There is still several ways to showcase your expertise of Purview through the new Microsoft Certified: Information Security Administrator Certification and applied skills mentioned in this blog. There's still time: catch our Learn Live Series and enhance your security for AI capabilities As organizations develop, use, and increasingly rely on AI applications, they must address new and amplified security risks. Are you prepared to secure your environment for AI adoption? How about identifying threats to your AI and safeguarding data? Watch on demand: Learn Live – Security for AI with Microsoft Purview and Defender for Cloud In this four-part series, IT pros and security practitioners can hone their security skillsets with a deeper understanding of AI-centric challenges, opportunities, and best practices using Microsoft Security solutions. Topics include: Manage AI Data Security Challenges with Microsoft Purview: Microsoft Purview helps you strengthen data security in AI environments, providing tools to manage challenges from AI technology. Manage Compliance with Microsoft Purview with Microsoft 365 Copilot: Use Microsoft Purview for compliance management with Microsoft 365 Copilot. You'll learn how to handle compliance aspects of Copilot's AI functionalities through Purview. Identify and Mitigate AI Data Security Risks: Microsoft Purview Data Security Posture Management (DSPM) for AI helps organizations monitor AI activity, enforce security policies, and prevent unauthorized data exposure. Enable Advanced Protection for AI Workloads with Microsoft Defender for Cloud: As organizations use and develop AI applications, they need to address new and amplified security risks. Prepare your environment for secure AI adoption to safeguard your data and identify threats to your AI. If you are looking for more training and resources related to Microsoft Security, please visit the Security Hub.Enterprise-grade controls for AI apps and agents built with Azure AI Foundry and Copilot Studio
AI innovation is moving faster than ever, and more AI projects are moving beyond experimentation into deployment, to drive tangible business impact. As organizations accelerate innovation with custom AI applications and agents, new risks emerge across the software development lifecycle and AI stack related to data oversharing and leaks, new vulnerabilities and threats, and non-compliance with stringent regulatory requirements Through 2025, poisoning of software supply chains and infrastructure technology stacks will constitute more than 70% of malicious attacks against AI used in the enterprise 1 , highlighting potential threats that originate early in development. Today, the average cost of a data breach is $4.88 million, but when security issues are caught early in the development process, that number drops dramatically to just $80 per incident 2 . The message is very clear; security can’t be an afterthought anymore. It must be a team sport across the organization, embedded from the start and throughout the development lifecycle. That's why developers and security teams should align on processes and tools that bring security into every stage of the AI development lifecycle and give security practitioners visibility into and the ability to mitigate risks. To address these growing challenges and help customers secure and govern their AI workloads across development and security teams, we are: Enabling Azure AI Foundry and Microsoft Copilot Studio to provide best-in-class foundational capabilities to secure and govern AI workloads Deeply integrating and embedding industry-leading capabilities from Microsoft Purview, Microsoft Defender, and Microsoft Entra into Azure AI Foundry and Microsoft Copilot Studio This week, 3,000 developers are gathering in Seattle for the annual Microsoft Build conference, with many more tuning in online, to learn practical skills for accelerating their AI apps and agents' innovation. To support their AI innovation journey, today we are excited to announce several new capabilities to help developers and organizations secure and govern AI apps and agents. New Azure AI Foundry foundational capabilities to secure and govern AI workloads Azure AI Foundry enhancements for AI security and safety With 70,000 customers, 100 trillion tokens processed this quarter, and 2 billion enterprise search queries each day, Azure AI Foundry has grown beyond just an application layer—it's now a comprehensive platform for building agents that can plan, take action, and continuously learn to drive real business outcomes. To help organizations build and deploy AI with confidence, we’re introducing new security and safety capabilities and insights for developers in Azure AI Foundry Introducing Spotlighting to detect and block prompt injection attacks in real time As AI systems increasingly rely on external data sources, a new class of threats has emerged. Indirect prompt injection attacks embed hidden instructions in documents, emails, and web content, tricking models into taking unauthorized actions without any direct user input. These attacks are difficult to detect and hard to prevent using traditional filters alone. To address this, Azure AI Content Safety is introducing Spotlighting, now available in preview. Spotlighting strengthens the Prompt Shields guardrail by improving its ability to detect and handle potential indirect prompt injections, where hidden adversarial instructions are embedded in external content. This new capability helps prevent the model from inadvertently acting on malicious prompts that are not directly visible to the user. Enable Spotlighting in Azure AI Content Safety to detect potential indirect prompt injection attacks New capabilities for task adherence evaluation and task adherence mitigation to ensure agents remain within scope As developers build more capable agents, organizations face growing pressure to help confirm those agents act within defined instructions and policy boundaries. Even small deviations can lead to tool misuse, broken workflows, or risks like unintended exposure of sensitive data. To solve this, Azure AI Foundry now includes task adherence for agents, now in preview and powered by two components: a real-time evaluation and a new control within Azure AI Content Safety. At the core is a real-time task adherence evaluation API, part of Azure AI Content Safety. This API assesses whether an agent’s behavior is aligned with its assigned task by analyzing the user’s query, system instructions, planned tool calls, and the agent’s response. The evaluation framework is built on Microsoft’s Agent Evaluators, which measure intent resolution, tool selection accuracy, completeness of response, and overall alignment to the original request. Developers can run this scoring logic locally using the Task Adherence Evaluator in the Azure AI Evaluation SDK, with a five-point scale that ranges from fully nonadherent to fully adherent. This gives teams a flexible and transparent way to inspect task-level behavior before it causes downstream issues. Task adherence is enforced through a new control in Azure AI Content Safety. If an agent goes off-task, the control can block tool use, pause execution, or trigger human review. In Azure AI Agent Service, it is available as an opt-in feature and runs automatically. Combined with real-time evaluation, this control helps to ensure that agents stay on task, follow instructions, and operate according to enterprise policies. Learn more about Prompt Shields in Azure AI Content Safety. Azure AI Foundry continuous evaluation and monitoring of agentic systems Maintaining high performance and compliance for AI agents after deployment is a growing challenge. Without ongoing oversight, issues like performance degradation, safety risks, or unintentional misuse of resources can slip through unnoticed. To address this, Azure AI Foundry introduces continuous evaluation and monitoring of agentic systems, now in preview, provides a single pane of glass dashboard to track key metrics such as performance, quality, safety, and resource usage in real time. Continuous evaluation runs quality and safety evaluations at a sampled rate of production usage with results made available in the Azure AI Foundry Monitoring dashboard and published to Application Insights. Developers can set alerts to detect drift or regressions and use Azure Monitor to gain full-stack visibility into their AI systems. For example, an organization using an AI agent to assist with customer-facing tasks can monitor groundedness and detect a decline in quality when the agent begins referencing irrelevant information, helping teams to act before it potentially negatively affects trust of users. Azure AI Foundry evaluation integrations with Microsoft Purview Compliance Manager, Credo AI, and Saidot for streamlined compliance AI regulations and standards introduce new requirements for transparency, documentation, and risk management for high-risk AI systems. As developers build AI applications and agents, they may need guidance and tools to help them evaluate risks based on these requirements and seamlessly share control and evaluation insights with compliance and risk teams. Today, we are announcing previews for Azure AI Foundry evaluation tool’s integration with a compliance management solution, Microsoft Purview Compliance Manager, and AI governance solutions, Credo AI and Saidot. These integrations help define risk parameters, run suggested compliance evaluations, and collect evidence for control testing and auditing. For example, for a developer who’s building an AI agent in Europe may be required by their compliance team to complete a Data Protection Impact Assets (DPIA) and Algorithmic Impact Assessment (AIA) to meet internal risk management and technical documentation requirements aligned with emerging AI governance standards and best practices. Based on Purview Compliance Manager’s step-by-step guidance on controls implementation and testing, the compliance teams can evaluate risks such as potential bias, cybersecurity vulnerabilities, or lack of transparency in model behavior. Once the evaluation is conducted in Azure AI Foundry, the developer can obtain a report with documented risk, mitigation, and residual risk for compliance teams to upload to Compliance Manager to support audits and provide evidence to regulators or external stakeholders. Assess controls for Azure AI Foundry against emerging AI governance standards Learn more about Purview Compliance Manager. Learn more about the integration with Credo AI and Saidot in this blogpost. Leading Microsoft Entra, Defender and Purview value extended to Azure AI Foundry and Microsoft Copilot Studio Introducing Microsoft Entra Agent ID to help address agent sprawl and manage agent identity Organizations are rapidly building their own AI agents, leading to agent sprawl and a lack of centralized visibility and management. Security teams often struggle to keep up, unable to see which agents exist and whether they introduce security or compliance risks. Without proper oversight, agent sprawl increases the attack surface and makes it harder to manage these non-human identities. To address this challenge, we’re announcing the public preview of Microsoft Entra Agent ID, a new capability in the Microsoft Entra admin center that gives security admins visibility and control over AI agents built with Copilot Studio and Azure AI Foundry. With Microsoft Entra Agent ID, an agent created through Copilot Studio or Azure AI Foundry is automatically assigned an identity with no additional work required from the developers building them. This is the first step in a broader initiative to manage and protect non-human identities as organizations continue to build AI agents. : Security and identity admins can gain visibility into AI agents built in Copilot Studio and Azure AI Foundry in the Microsoft Entra Admin Center This new capability lays the foundation for more advanced capabilities coming soon to Microsoft Entra. We also know that no one can do it alone. Security has always been a team sport, and that’s especially true as we enter this new era of protecting AI agents and their identities. We’re energized by the momentum across the industry; two weeks ago, we announced support for the Agent-to-Agent (A2A) protocol and began collaborating with partners to shape the future of AI identity workflows. Today, we’re also excited to announce new partnerships with ServiceNow and Workday. As part of this, we’ll integrate Microsoft Entra Agent ID with the ServiceNow AI Platform and the Workday Agent System of Record. This will allow for automated provisioning of identities for future digital employees. Learn more about Microsoft Entra Agent ID. Microsoft Defender security alerts and recommendations now available in Azure AI Foundry As more AI applications are deployed to production, organizations need to predict and prevent potential AI threats with natively integrated security controls backed by industry-leading Gen AI and threat intelligence for AI deployments. Developers need critical signals from security teams to effectively mitigate security risks related to their AI deployments. When these critical signals live in separate systems outside the developer experience, this can create delays in mitigation, leaving opportunities for AI apps and agents to become liabilities and exposing organizations to various threats and compliance violations. Now in preview, Microsoft Defender for Cloud integrates AI security posture management recommendations and runtime threat protection alerts directly into the Azure AI Foundry portal. These capabilities, previously announced as part of the broader Microsoft Defender for Cloud solution, are extended natively into Azure AI Foundry enabling developers to access alerts and recommendations without leaving their workflows. This provides real-time visibility into security risks, misconfigurations, and active threats targeting their AI applications on specific Azure AI projects, without needing to switch tools or wait on security teams to provide details. Security insights from Microsoft Defender for Cloud help developers identify and respond to threats like jailbreak attacks, sensitive data leakage, and misuse of system resources. These insights include: AI security posture recommendations that identify misconfigurations and vulnerabilities in AI services and provide best practices to reduce risk Threat protection alerts for AI services that notify developers of active threats and provide guidance for mitigation, across more than 15 detection types For example, a developer building an AI-powered agent can receive security recommendations suggesting the use of Azure Private Link for Azure AI Services resources. This reduces the risk of data leakage by handling the connectivity between consumers and services over the Azure backbone network. Each recommendation includes actionable remediation steps, helping teams identify and mitigate risks in both pre- and post-deployment phases. This helps to reduce risks without slowing down innovation. : Developers can view security alerts on the Risks + alerts page in Azure AI Foundry : Developers can view recommendations on the Guardrails + controls page in Azure AI Foundry This integration is currently in preview and will be generally available in June 2025 in Azure AI Foundry. Learn more about protecting AI services with Microsoft Defender for Cloud. Microsoft Purview capabilities extended to secure and govern data in custom-built AI apps and agents Data oversharing and leakage are among the top concerns for AI adoption, and central to many regulatory requirements. For organizations to confidently deploy AI applications and agents, both low code and pro code developers need a seamless way to embed security and compliance controls into their AI creations. Without simple, developer-friendly solutions, security gaps can quickly become blockers, delaying deployment and increasing risks as applications move from development to production. Today, Purview is extending its enterprise-grade data security and compliance capabilities, making it easier for both low code and pro code developers to integrate data security and compliance into their AI applications and agents, regardless of which tools or platforms they use. For example, with this update, Microsoft Purview DSPM for AI becomes the one place data security teams can see all the data risk insights across Microsoft Copilots, agents built in Agent Builder and Copilot Studio, and custom AI apps and agents built in Azure AI Foundry and other platforms. Admins can easily drill into security and compliance insights for specific AI apps or agents, making it easier to investigate and take action on potential risks. : Data security admins can now find data security and compliance insights across Microsoft Copilots, agents built with Agent Builder and Copilot Studio, and custom AI apps and agents in Microsoft Purview DSPM for AI In the following sections, we will provide more details about the updates to Purview capabilities in various AI workloads. 1. Microsoft Purview data security and compliance controls can be extended to any custom-built AI application and agent via the new Purview SDK or the native Purview integration with Azure AI Foundry. The new capabilities make it easy and effortless for security teams to bring the same enterprise-grade data security compliance controls available today for Microsoft 365 Copilot to custom AI applications and agents, so organizations can: Discover data security risks, such as sensitive data in user prompts, and data compliance risks, such as harmful content, and get recommended actions to mitigate risks proactively in Microsoft Purview Data Security Posture Management (DSPM) for AI. Protect sensitive data against data leakage and insider risks with Microsoft Purview data security policies. Govern AI interactions with Audit, Data Lifecycle Management, eDiscovery, and Communication Compliance. Microsoft Purview SDK Microsoft Purview now offers Purview SDK, a set of REST APIs, documentation, and code samples, currently in preview, enabling developers to integrate Purview's data security and compliance capabilities into AI applications or agents within any integrated development environment (IDE). : By embedding Purview APIs into the IDE, developers help enable their AI apps to be secured and governed at runtime For example, a developer building an AI agent using an AWS model can use the Purview SDK to enable their AI app to automatically identify and block sensitive data entered by users before it’s exposed to the model, while also providing security teams with valuable signals that support compliance. With Purview SDK, startups, ISVs, and partners can now embed Purview industry-leading capabilities directly into their AI software solutions, making these solutions Purview aware and easier for their customers to secure and govern data in their AI solutions. For example, Infosys Vice President and Delivery Head of Cyber Security Practice, Ashish Adhvaryu indicates, “Infosys Cyber Next platform integrates Microsoft Purview to provide enhanced AI security capabilities. Our solution, the Cyber Next AI assistant (Cyber Advisor) for the SOC analyst, leverages Purview SDK to drive proactive threat mitigation with real-time monitoring and auditing capabilities. This integration provides holistic AI-assisted protection, enhancing cybersecurity posture." Microsoft partner EY (previously known as Ernst and Young) has also leveraged the new Purview SDK to embed Purview value into their GenAI initiatives. “We’re not just building AI tools, we are creating Agentic solutions where trust, security, and transparency are present from the start, supported by the policy controls provided through the Purview SDK. We’re seeing 25 to 30 percent time savings when we build secure features using the Purview SDK,” noted Sumanta Kar, Partner, Innovation and Emerging Tech at EY. Learn more about the Purview SDK. Microsoft Purview integrates natively with Azure AI Foundry Organizations are developing an average of 14 custom AI applications. The rapid pace of AI innovation may leave security teams unaware of potential data security and compliance risks within their environments. With the update announced today, Azure AI Foundry signals are now directly integrated with Purview Data Security Posture Management for AI, Insider Risk Management, and data compliance controls, minimizing the need for additional development work. For example, for AI applications and agents built with Azure AI Foundry models, data security teams can gain visibility into AI usage and data risks in Purview DSPM for AI, with no additional work from developers. Data security teams can also detect, investigate, and respond to both malicious and inadvertent user activities, such as a departing employee leveraging an AI agent to retrieve an anomalous amount of sensitive data, with Microsoft Purview Insider Risk Management (IRM) policies. Lastly, user prompts and AI responses in Azure AI apps and agents can now be ingested into Purview compliance tools as mentioned above. Learn more about Microsoft Purview for Azure AI Foundry. 2. Purview data protections extended to Copilot Studio agents grounded in Microsoft Dataverse data Coming to preview in June, Purview Information Protection extends auto-labeling and label inheritance coverage to Dataverse to help prevent oversharing and data leaks. Information Protection makes it easier for organizations to automatically classify and protect sensitive data at scale. A common challenge is that sensitive data often lands in Dataverse from various sources without consistent labeling or protection. The rapid adoption of agents built using Copilot Studio and grounding data from Dataverse increases the risk of data oversharing and leakage if data is not properly protected. With auto-labeling, data stored in Dataverse tables can be automatically labeled based on policies set in Microsoft Purview, regardless of its source. This reduces the need for manual labeling effort and protects sensitive information from the moment it enters Dataverse. With label inheritance, AI agent responses grounded in Dataverse data will automatically carry and honor the source data’s sensitivity label. If a response pulls from multiple tables with different labels, the most restrictive label is applied to ensure consistent protection. For example, a financial advisor building an agent in Copilot Studio might connect multiple Dataverse tables, some labeled as “General” and others as “Highly Confidential.” If a response pulls from both, it will inherit the most restrictive label, in this case, "Highly Confidential,” to prevent unauthorized access and ensure appropriate protections are applied across both maker and users of the agent. Together, auto-labeling and label inheritance in Dataverse support a more secure, automated foundation for AI. : Sensitivity labels will be automatically applied to data in Dataverse : AI-generated responses will inherit and honor the source data’s sensitivity labels Learn more about protecting Dataverse data with Microsoft Purview. 3. Purview DSPM for AI can now provide visibility into unauthenticated interactions with Copilot Studio agents As organizations increasingly use Microsoft Copilot Studio to deploy AI agents for frontline customer interactions, gaining visibility into unauthenticated user interactions and proactively mitigating risks becomes increasingly critical. Building on existing Purview and Copilot Studio integrations, we’ve extended DSPM for AI and Audit in Copilot Studio to provide visibility into unauthenticated interactions, now in preview. This gives organizations a more comprehensive view of AI-related data security risks across authenticated and unauthenticated users. For example, a healthcare provider hosting an external, customer-facing agent assistant must be able to detect and respond to attempts by unauthenticated users to access sensitive patient data. With these new capabilities in DSPM for AI, data security teams can now identify these interactions, assess potential exposure of sensitive data, and act accordingly. Additionally, integration with Purview Audit provides teams with seamless access to information needed for audit requirements. : Gain visibility into all AI interactions, including those from unauthenticated users Learn more about Purview for Copilot Studio. 4. Purview Data Loss Prevention extended to more Microsoft 365 agent scenarios To help organizations prevent data oversharing through AI, at Ignite 2024, we announced that data security admins could prevent Microsoft 365 Copilot from using certain labeled documents as grounding data to generate summaries or responses. Now in preview, this control also extends to agents published in Microsoft 365 Copilot that are grounded by Microsoft 365 data, including pre-built Microsoft 365 agents, agents built with the Agent Builder, and agents built with Copilot Studio. This helps ensure that files containing sensitive content are used appropriately by AI agents. For example, confidential legal documents with highly specific language that could lead to improper guidance if summarized by an AI agent, or "Internal only” documents that shouldn’t be used to generate content that can be shared outside of the organization. : Extend data loss prevention (DLP) policies to Microsoft 365 Copilot agents to protect sensitive data Learn more about Data Loss Prevention for Microsoft 365 Copilot and agents. The data protection capabilities we are extending to agents in Agent Builder and Copilot Studio demonstrate our continued investment in strengthening the Security and Governance pillar of the Copilot Control System (CSS). CCS provides integrated controls to help IT and security teams secure, manage, and monitor Copilot and agents across Microsoft 365, spanning governance, management, and reporting. Learn more here. Explore additional resources As developers and security teams continue to secure AI throughout its lifecycle, it’s important to stay ahead of emerging risks and ensure protection. Microsoft Security provides a range of tools and resources to help you proactively secure AI models, apps, and agents from code to runtime. Explore the following resources to deepen your understanding and strengthen your approach to AI security: Learn more about Security for AI solutions on our webpage Learn more about Microsoft Purview SDK Get started with Azure AI Foundry Get started with Microsoft Entra Get started with Microsoft Purview Get started with Microsoft Defender for Cloud Get started with Microsoft 365 Copilot Get started with Copilot Studio Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial 1 Predicts 2025: Navigating Imminent AI Turbulence for Cybersecurity, Jeremy D'Hoinne, Akif Khan, Manuel Acosta, Avivah Litan, Deepak Seth, Bart Willemsen, 10 February 2025 2 IBM. "Cost of a Data Breach 2024: Financial Industry." IBM Think, 13 Aug. 2024, https://www.ibm.com/think/insights/cost-of-a-data-breach-2024-financial-industry; Cser, Tamas. "The Cost of Finding Bugs Later in the SDLC." Functionize, 5 Jan. 2023, https://www.functionize.com/blog/the-cost-of-finding-bugs-later-in-the-sdlcDecrypt DKE protected content by Super User
In today's digital age, safeguarding sensitive information is paramount for every organization. One of the most advanced methods to ensure data security is through Double Key Encryption (DKE) protection from the Microsoft Purview solution. This encryption technique uses two keys to protect data: one key is stored securely within the organization's control, while the second key is managed by Microsoft Azure. However, decrypting DKE protected content by the organization without user interference is a complex process, especially when dealing with highly sensitive information. This is where the super user account comes into play. The super user account is a privileged account that should be configured with the necessary permissions to decrypt DKE-protected content. Setting up DKE involves several steps, including deploying the DKE service, creating sensitivity labels, and configuring client devices. Once these steps are completed, the super user account can be used to decrypt the DKE protected content, providing a secure and efficient way to manage sensitive information. Why Use a Super User Account to Decrypt DKE protected content? A super user always has the Rights Management Full Control usage right for documents protected by your organization’s Azure Information Protection tenant. This means that the super user account will have access to the first key stored in the Azure tenant. However, this alone is not sufficient to decrypt documents protected by DKE. To gain decryption access, administrators need to ensure that the super user also has access to the second key, which is stored securely within the organization. Like any other user - the super user also needs to be granted access for using the DKE key. For example, organizations can use this feature in the following scenarios: An employee leaves the organization, but highly confidential documents encrypted with DKE labels need to be accessed by directors or other VIP users. An IT administrator needs to remove the current DKE protection policy configured for files and apply a new protection policy. Administrators need to bulk decrypt files for auditing, legal, or other compliance reasons. How it works Please find the below architecture diagram which details the decryption flow of DKE. Microsoft Office client or MPIP client running under a super user account (sends the double encrypted part of the metadata that controls access (aka double encrypted content key) to Azure Information Protection Azure Information Protection checks Azure Active Directory to check access and if the user is in EntraID and configured with super user access, Azure Information Protection authorizes access and decrypts the part of the metadata controlling access using your key in Azure Information Protection. Thereby, removing the outer layer of encryption. The part of metadata controlling access to the content is now encrypted with only the DKE key Microsoft Office client or MPIP client sends the encrypted part of metadata that controls access to the content to DKE service to decrypt using customer’s private key. If this is the first time accessing the encrypted document or if cached access (End-user license) has expired, or have changed key in Azure, the DKE service will receive an empty request and will deny the access to the content from Microsoft Office client due to authentication issue. If the above happens, the DKE service sends a signal to the Microsoft Office client asking for authentication information. The Microsoft Office client sends a request to a token service in EntraID which returns the JSON web token (JWT) with adequate information to the Microsoft Office or MPIP client. The client then makes a second request to the DKE service and send the encrypted part of metadata that controls access to content (aka content key) with the JWT token it received from Azure. DKE service decrypts the encrypted part of the metadata controlling access to the content using the private key in the DKE service and sends the decrypted content key back to the client Set up Super User Account to Decrypt DKE protected content Please refer to the workflow diagram below, which outlines the configuration steps. Step 1: Configure the Super User Account in Azure RMS service using PowerShell Install AIP module by running Import-Module AIPService Connect the AIP service into your tenant by running Connect-AipService Enable the Super User feature by running Enable-AipServiceSuperUserFeature Configure the admin account into Super User Add-AipServiceSuperUser -EmailAddress <Mention the primary email address or user principal name> Validate the configuration by running Get-AipServiceSuperUser Step 2: Grant permission to Super User Account in DKE service I have added Super User account as part of Authorized Email Address in DKE service Step 3: Login as Super User Account in Office or MPIP client to decrypt the file. Try to open the DKE protected document Office application prompt for authentication Provide Super User credentials You will be able to decrypt the document Conclusion By leveraging the Super User account, organizations can ensure that they can decrypt DKE-protected documents without requiring explicit permissions on the label. This provides a secure and efficient way to manage sensitive information, especially in emergency situations where immediate access to encrypted data is necessary. Understanding and implementing this process is essential for any organization looking to enhance their data security and protect their valuable information. References The following table contains links to additional information that may provide context for the design and plan. Content Description https://learn.microsoft.com/en-us/purview/double-key-encryption Information about Double Key Encryption https://learn.microsoft.com/en-us/purview/double-key-encryption-setup Setup Double Key Encryption Service in Azure https://learn.microsoft.com/en-us/azure/information-protection/configure-super-users Configure Super User access in Azure Information ProtectionEnhance AI security and governance across multi-model and multi-cloud environments
Generative AI adoption is accelerating, with AI transformation happening in real-time across various industries. This rapid adoption is reshaping how organizations operate and innovate, but it also introduces new challenges that require careful attention. At Ignite last fall, we announced several new capabilities to help organizations secure their AI transformation. These capabilities were designed to address top customer priorities such as preventing data oversharing, safeguarding custom AI, and preparing for emerging AI regulations. Organizations like Cummins, KPMG, and Mia Labs have leveraged these capabilities to confidently strengthen their AI security and governance efforts. However, despite these advancements, challenges persist. One major concern is the rise of shadow AI—applications used without IT or security oversight. In fact, 78% of AI users report bringing their own AI tools, such as ChatGPT and DeepSeek, into the workplace 1 . Additionally, new threats, like indirect prompt injection attacks, are emerging, with 77% of organizations expressing concerns and 11% of organizations identifying them as a critical risk 2 . To address these challenges, we are excited to announce new features and capabilities that help customers do the following: Prevent risky access and data leakage in shadow AI with granular access controls and inline data security capabilities Manage AI security posture across multi-cloud and multi-model environments Detect and respond to new AI threats, such as indirect prompt injections and wallet abuse Secure and govern data in Microsoft 365 Copilot and beyond In this blog, we’ll explore these announcements and demonstrate how they help organizations navigate AI adoption with confidence, mitigating risks, and unlocking AI’s full potential on their transformation journey. Prevent risky access and data leakage in shadow AI With the rapid rise of generative AI, organizations are increasingly encountering unauthorized employee use of AI applications without IT or security team approval. This unsanctioned and unprotected usage has given rise to “shadow AI,” significantly heightening the risk of sensitive data exposure. Today, we are introducing a set of access and data security controls designed to support a defense-in-depth strategy, helping you mitigate risks and prevent data leakage in third-party AI applications. Real-time access controls to shadow AI The first line of defense against security risks in AI applications is controlling access. While security teams can use endpoint controls to block access for all users across the organization, this approach is often too restrictive and impractical. Instead, they need more granular controls at the user level to manage access to SaaS-based AI applications. Today we are announcing the general availability of the AI web category filter in Microsoft Entra Internet Access to help enforce access controls that govern which users and groups have access to different AI applications. Internet Access deep integration with Microsoft Entra ID extends Conditional Access to any AI application, enabling organizations to apply AI access policies with granularity. By using Conditional Access as the policy control engine, organizations can enforce policies based on user roles, locations, device compliance, user risk levels, and other conditions, ensuring secure and adaptive access to AI applications. For example, with Internet Access, organizations can allow your strategy team to experiment with all or most consumer AI apps while blocking those apps for highly privileged roles, such as accounts payable or IT infrastructure admins. For even greater security, organizations can further restrict access to all AI applications if Microsoft Entra detects elevated identity risk. Inline discovery and protection of sensitive data Once users gain access to sanctioned AI applications, security teams still need to ensure that sensitive data isn’t shared with those applications. Microsoft Purview provides data security capabilities to prevent users from sending sensitive data to AI applications. Today, we are announcing enhanced Purview data security capabilities for the browser available in preview in the coming weeks. The new inline discovery & protection controls within Microsoft Edge for Business detect and block sensitive data from being sent to AI apps in real-time, even if typed directly. This prevents sensitive data leaks as users interact with consumer AI applications, starting with ChatGPT, Google Gemini, and DeepSeek. For example, if an employee attempts to type sensitive details about an upcoming merger or acquisition into Google Gemini to generate a written summary, the new inline protection controls in Microsoft Purview will block the prompt from being submitted, effectively blocking the potential leaks of confidential data to an unsanctioned AI app. This augments existing DLP controls for Edge for Business, including protections that prevent file uploads and the pasting of sensitive content into AI applications. Since inline protection is built natively into Edge for Business, newly deployed policies automatically take effect in the browser even if endpoint DLP is not deployed to the device. : Inline DLP in Edge for Business prevents sensitive data from being submitted to consumer AI applications like Google Gemini by blocking the action. The new inline protection controls are integrated with Adaptive Protection to dynamically enforce different levels of DLP policies based on the risk level of the user interacting with the AI application. For example, admins can block low-risk users from submitting prompts containing the highest-sensitivity classifiers for their organization, such as M&A-related data or intellectual property, while blocking prompts containing any sensitive information type (SIT) for elevated-risk users. Learn more about inline discovery & protection in the Edge for Business browser in this blog. In addition to the new capabilities within Edge for Business, today we are also introducing Purview data security capabilities for the network layer available in preview starting in early May. Enabled through integrations with Netskope and iboss to start, organizations will be able to extend inline discovery of sensitive data to interactions between managed devices and untrusted AI sites. By integrating Purview DLP with their SASE solution (e.g. Netskope and iBoss), data security admins can gain visibility into the use of sensitive data on the network as users interact with AI applications. These interactions can originate from desktop applications such as the ChatGPT desktop app or Microsoft Word with a ChatGPT plugin installed, or non-Microsoft browsers such as Opera and Brave that are accessing AI sites. Using Purview Data Security Posture Management (DSPM) for AI, admins will also have visibility into how these interactions contribute to organizational risk and can take action through DSPM for AI policy recommendations. For example, if there is a high volume of prompts containing sensitive data sent to ChatGPT, DSPM for AI will detect and recommend a new DLP policy to help mitigate this risk. Learn more about inline discovery for the network, including Purview integrations with Netskope and iBoss, in this blog. Manage AI security posture across multi-cloud and multi-model environments In today’s rapidly evolving AI landscape, developers frequently leverage multiple cloud providers to optimize cost, performance, and availability. Different AI models excel at various tasks, leading developers to deploy models from various providers for different use cases. Consequently, managing security posture across multi-cloud and multi-model environments has become essential. Today, Microsoft Defender for Cloud supports deployed AI workloads across Azure OpenAI Service, Azure Machine Learning, and Amazon Bedrock. To further enhance our security coverage, we are expanding AI Security Posture Management (AI-SPM) in Defender for Cloud to improve compatibility with additional cloud service providers and models. This includes: Support for Google Vertex AI models Enhanced support for Azure AI Foundry model catalog and custom models With this expansion, AI-SPM in Defender for Cloud will now offer the discovery of the AI inventory and vulnerabilities, attack path analysis, and recommended actions to address risks in Google VertexAI workloads. Additionally, it will support all models in Azure AI Foundry model catalog, including Meta Llama, Mistral, DeepSeek, as well as custom models. This expansion ensures a consistent and unified approach to managing AI security risks across multi-model and multi-cloud environments. Support for Google Vertex AI models will be available in public preview starting May 1, while support for Azure AI Foundry model catalog and custom models is generally available today. Learn More. 2: Microsoft Defender for Cloud detects an attack path to a DeepSeek R1 workload. In addition, Defender for Cloud will also offer a new data and AI security dashboard. Security teams will have access to an intuitive overview of their datastores and AI services across their multi-cloud environment, top recommendations, and critical attack paths to prioritize and accelerate remediation. The dashboard will be generally available on May 1. The new data & AI security dashboard in Microsoft Defender for Cloud provides a comprehensive overview of your data and AI security posture. These new capabilities reflect Microsoft’s commitment to helping organizations address the most critical security challenges in managing AI security posture in their heterogeneous environments. Detect and respond to new AI threats Organizations are integrating generative AI into their workflows and facing new security risks unique to AI. Detecting and responding to these evolving threats is critical to maintaining a secure AI environment. The Open Web Application Security Project (OWASP) provides a trusted framework for identifying and mitigating such vulnerabilities, such as prompt injection and sensitive information disclosure. Today, we are announcing Threat protection for AI services, a new capability that enhances threat protection in Defender for Cloud, enabling organizations to secure custom AI applications by detecting and responding to emerging AI threats more effectively. Building on the OWASP Top 10 risks for LLM applications, this capability addresses those critical vulnerabilities highlighted on the top 10 list, such as prompt injections and sensitive information disclosure. Threat protection for AI services helps organizations identify and mitigate threats to their custom AI applications using anomaly detection and AI-powered insights. With this announcement, Defender for Cloud will now extend its threat protection for AI workloads, providing a rich suite of new and enriched detections for Azure OpenAI Service and models in the Azure AI Foundry model catalog. New detections include direct and indirect prompt injections, novel attack techniques like ASCII smuggling, malicious URL in user prompts and AI responses, wallet abuse, suspicious access to AI resources, and more. Security teams can leverage evidence-based security alerts to enhance investigation and response actions through integration with Microsoft Defender XDR. For example, in Microsoft Defender XDR, a SOC analyst can detect and respond to a wallet abuse attack, where an attacker exploits an AI system to overload resources and increase costs. The analyst gains detailed visibility into the attack, including the affected application, user-entered prompts, IP address, and other suspicious activities performed by the bad actor. With this information, the SOC analyst can take action and block the attacker from accessing the AI application, preventing further risks. This capability will be generally available on May 1. Learn More. : Security teams can investigate new detections of AI threats in Defender XDR. Secure and govern data in Microsoft 365 Copilot and beyond Data oversharing and non-compliant AI use are significant concerns when it comes to securing and governing data in Microsoft Copilots. Today, we are announcing new data security and compliance capabilities. New data oversharing insights for unclassified data available in Microsoft Purview DSPM for AI: Today, we are announcing the public preview of on-demand classification for SharePoint and OneDrive. This new capability gives data security admins visibility into unclassified data stored in SharePoint and OneDrive and enables them to classify that data on demand. This helps ensure that Microsoft 365 Copilot is indexing and referencing files in its responses that have been properly classified. Previously, unclassified and unscanned files did not appear in DSPM for AI oversharing assessments. Now admins can initiate an on-demand data classification scan, directly from the oversharing assessment, ensuring that older or previously unscanned files are identified, classified, and incorporated into the reports. This allows organizations to detect and address potential risks more comprehensively. For example, an admin can initiate a scan of legacy customer contracts stored in a specified SharePoint library to detect and classify sensitive information such as account numbers or contact information. If these newly classified documents match the classifiers included in any existing auto-labeling policies, they will be automatically labeled. This helps ensure that documents containing sensitive information remain protected when they are referenced in Microsoft 365 Copilot interactions. Learn More. Security teams can trigger on-demand classification scan results in the oversharing assessment in Purview DSPM for AI. Secure and govern data in Security Copilot and Copilot in Fabric: We are excited to announce the public preview of Purview for Security Copilot and Copilot in Fabric, starting with Copilot in Power BI, offering DSPM for AI, Insider Risk Management, and data compliance controls, including eDiscovery, Audit, Data Lifecycle Management, and Communication Compliance. These capabilities will help organizations enhance data security posture, manage compliance, and mitigate risks more effectively. For example, admins can now use DSPM for AI to discover sensitive data in user prompts and responses and detect unethical or risky AI usage. Purview’s DSPM for AI provides admins with comprehensive reports on user activities and data interactions in Copilot for Power BI, as part of the Copilot in Fabric experience, and Security Copilot. DSPM Discoverability for Communication Compliance: This new feature in Communication Compliance, which will be available in public preview starting May 1, enables organizations to quickly create policies that detect inappropriate messages that could lead to data compliance risks. The new recommendation card on the DSPM for AI page offers a one-click policy creation in Microsoft Purview Communication Compliance, simplifying the detection and mitigation of potential threats, such as regulatory violations or improperly shared sensitive information. With these enhanced capabilities for securing and governing data in Microsoft 365 Copilot and beyond, organizations can confidently embrace AI innovation while maintaining strict security and compliance standards. Explore additional resources As organizations embrace AI, securing and governing its use is more important than ever. Staying informed and equipped with the right tools is key to navigating its challenges. Explore these resources to see how Microsoft Security can help you confidently adopt AI in your organization. Learn more about Security for AI solutions on our webpage Get started with Microsoft Purview Get started with Microsoft Defender for Cloud Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial Learn more about the innovations designed to help your organization protect data, defend against cyber threats, and stay compliant. Join Microsoft leaders online at Microsoft Secure on April 9. [1] 2024 Work Trend Index Annual Report, Microsoft and LinkedIn, May 2024, N=31,000. [2] Gartner®, Gartner Peer Community Poll – If your org’s using any virtual assistants with AI capabilities, are you concerned about indirect prompt injection attacks? GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.4.9KViews2likes0CommentsMicrosoft Security in Action: Deploying and Maximizing Advanced Identity Protection
As cyber threats grow in sophistication, identity remains the first line of defense. With credentials being a primary target for attackers, organizations must implement advanced identity protection to prevent unauthorized access, reduce the risk of breaches, and maintain regulatory compliance. This blog outlines a phased deployment approach to implement Microsoft’s identity solutions, helping ensure a strong Zero Trust foundation by enhancing security without compromising user experience. Phase 1: Deploy advanced identity protection Step 1: Build your hybrid identity foundation with synchronized identity Establishing a synchronized identity is foundational for seamless user experiences across on-premises and cloud environments. Microsoft Entra Connect synchronizes Active Directory identities with Microsoft Entra ID, enabling unified governance while enabling users to securely access resources across hybrid environments. To deploy, install Microsoft Entra Connect, configure synchronization settings to sync only necessary accounts, and monitor health through built-in tools to detect and resolve sync issues. A well-implemented hybrid identity enables consistent authentication, centralized management, and a frictionless user experience across all environments. Step 2: Enforce strong authentication with MFA and Conditional Access Multi-Factor Authentication (MFA) is the foundation of identity security. By requiring an additional verification step, MFA significantly reduces the risk of account compromise—even if credentials are stolen. Start by enforcing MFA for all users, prioritizing high-risk accounts such as administrators, finance teams, and executives. Microsoft recommends deploying passwordless authentication methods, such as Windows Hello, FIDO2 security keys, and Microsoft Authenticator, to further reduce phishing risks. Next, to balance security with usability, use Conditional Access policies to apply adaptive authentication requirements based on conditions such as user behavior, device health, and risk levels. For example, block sign-ins from non-compliant or unmanaged devices while allowing access from corporate-managed endpoints. Step 3: Automate threat detection with Identity Protection Implementing AI-driven risk detection is crucial to identifying compromised accounts before attackers can exploit them. Start by enabling Identity Protection to analyze user behavior and detect anomalies such as impossible travel logins, leaked credentials, and atypical access patterns. To reduce security risk, evolve your Conditional Access policies with risk signals that trigger automatic remediation actions. For low-risk sign-ins, require additional authentication (such as MFA), while high-risk sign-ins should be blocked entirely. By integrating Identity Protection with Conditional Access, security teams can enforce real-time access decisions based on risk intelligence, strengthening identity security across the enterprise. Step 4: Secure privileged accounts with Privileged Identity Management (PIM) Privileged accounts are prime targets for attackers, making Privileged Identity Management (PIM) essential for securing administrative access. PIM allows organizations to apply the principle of least privilege by granting Just-in-Time (JIT) access, meaning users only receive elevated permissions when needed—and only for a limited time. Start by identifying all privileged roles and moving them to PIM-managed access policies. Configure approval workflows for high-risk roles like Global Admin or Security Admin, requiring justification and multi-factor authentication before privilege escalation. Next, to maintain control, enable privileged access auditing, which logs all administrative activities and generates alerts for unusual role assignments or excessive privilege usage. Regular access reviews further enable only authorized users to retain elevated permissions. Step 5: Implement self-service and identity governance tools Start by deploying Self-Service Password Reset (SSPR). SSPR enables users to recover their accounts securely without help desk intervention. Also integrate SSPR with MFA, so that only authorized users regain access. Next, organizations should implement automated Access Reviews on all users, not just privileged accounts, to periodically validate role assignments and remove unnecessary permissions. This helps mitigate privilege creep, where users accumulate excessive permissions over time. Phase 2: Optimize identity security and automate response With core identity protection mechanisms deployed, the next step is to enhance security operations with automation, continuous monitoring, and policy refinement. Step1: Enhance visibility with centralized monitoring Start by Integrating Microsoft Entra logs with Microsoft Sentinel to gain real-time visibility into identity-based threats. By analyzing failed login attempts, suspicious sign-ins, and privilege escalations, security teams can detect and mitigate identity-based attacks before they escalate. Step 2: Apply advanced Conditional Access scenarios To further tighten access control, implement session-based Conditional Access policies. For example, allow read-only access to SharePoint Online from unmanaged devices and block data downloads entirely. By refining policies based on user roles, locations, and device health, organizations can strengthen security while ensuring seamless collaboration. Phase 3: Enable secure collaboration across teams Identity security is not just about protection—it also enables secure collaboration across employees, partners, and customers. Step 1: Secure external collaboration Collaboration with partners, vendors, and contractors requires secure, managed access without the complexity of managing external accounts. Microsoft Entra External Identities allows organizations to provide seamless authentication for external users while enforcing security policies like MFA and Conditional Access. By enabling lifecycle management policies, organizations can automate external user access reviews and expirations, ensuring least-privilege access at all times. Step 2: Automate identity governance with entitlement management To streamline access requests and approvals, Microsoft Entra Entitlement Management lets organizations create pre-configured access packages for both internal and external users. External guests can request access to pre-approved tools and resources without IT intervention. Automated access reviews and expiration policies enable users retain access only as long as needed. This reduces administrative overheads while enhancing security and compliance. Strengthening identity security for the future Deploying advanced identity protection in a structured, phased approach allows organizations to proactively defend against identity-based threats while maintaining secure, seamless access. Ready to take the next step? Explore these Microsoft identity security deployment resources: Microsoft Entra Identity Protection Documentation Conditional Access Deployment Guide Privileged Identity Management Configuration Guide The Microsoft Security in Action blog series is an evolving collection of posts that explores practical deployment strategies, real-world implementations, and best practices to help organizations secure their digital estate with Microsoft Security solutions. Stay tuned for our next blog on deploying and maximizing your investments in Microsoft Threat Protection solutions.