Security for AI
Microsoft Purview: The Ultimate AI Data Security Solution
Introduction

AI is transforming the way enterprises operate; however, with great innovation comes great responsibility. I've spent the last few years helping organizations secure their data with tools like Azure Information Protection, Data Loss Prevention, and now Microsoft Purview. As generative AI tools like Microsoft Copilot become embedded in everyday workflows, the need for clear governance and robust data protection is more urgent than ever. In this blog post, let's explore how Microsoft Purview can help organizations stay ahead in securing AI interactions without slowing down innovation.

What's the Issue?

AI agents are increasingly used to process sensitive data, often through natural language prompts. Without proper oversight, this can lead to data oversharing, compliance violations, and security risks.

Why It's Urgent

According to 2025 industry trends, over half of corporate users bring their own AI tools to work, often consumer-grade apps like ChatGPT or DeepSeek. These tools bypass enterprise protections, making it difficult to monitor and control data exposure.

Use Cases

- Enterprise AI Governance: Apply consistent policies across Microsoft and third-party AI tools.
- Compliance Auditing: Generate audit logs for AI interactions to meet regulatory requirements.
- Risk Mitigation: Block risky uploads and enforce adaptive protection based on user behavior.

How Microsoft Purview Solves It

1. Data Security Posture Management (DSPM) for AI

Purview's DSPM for AI provides a centralized dashboard to monitor AI activity, assess data risks, and enforce compliance policies across Copilots, agents, and third-party AI apps. It correlates data classification, user behavior, and policy coverage to surface real-time risks, such as oversharing via AI agents, and generates actionable recommendations to remediate gaps. DSPM integrates with tools like Microsoft Security Copilot for AI-assisted investigations and supports automated scanning, trend analytics, and posture reporting. It also extends protection to third-party AI tools like ChatGPT through endpoint DLP and browser extensions, ensuring consistent governance across both managed and unmanaged environments.

2. Unified Protection Across AI Agents

Whether you're using Microsoft 365 Copilot, Security Copilot, or Azure AI services, Purview applies consistent security and compliance controls. Agents inherit protection from their parent apps, including sensitivity labels, data loss prevention (DLP), and Insider Risk Management.

3. Real-Time Risk Detection

Purview enables real-time monitoring of prompts and responses, helping security teams detect oversharing and policy violations instantly.

[Figure: From Microsoft Learn – Insider Risk]
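To make the screening concept concrete, here is a minimal, purely illustrative Python sketch of the kind of pattern-based inspection a DLP engine applies to prompts in flight. The pattern names and regular expressions are hypothetical stand-ins; Purview's actual sensitive information types (SITs) use far richer detection, including keywords, checksums, confidence levels, and trainable classifiers:

```python
import re

# Hypothetical sensitive-info patterns, loosely analogous to Purview's
# built-in sensitive information types (SITs). Real SITs are much richer.
PATTERNS = {
    "Credit Card Number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "U.S. SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive information types found in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

hits = screen_prompt("Summarize the account for card 4111 1111 1111 1111")
if hits:
    # A real engine would block, alert, or redact according to policy.
    print(f"Policy check: prompt matched {hits}")
```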
4. One-Click Policy Activation

Administrators can leverage Microsoft Purview's DSPM for AI to rapidly deploy comprehensive security and compliance controls via one-click policy activation. This streamlined mechanism enables organizations to enforce prebuilt policy templates across AI ecosystems, ensuring prompt implementation of data loss prevention (DLP), sensitivity labeling, and Insider Risk Management on both Microsoft and third-party AI services. Through DSPM's unified policy orchestration layer, security teams gain granular telemetry into prompt and response flows, real-time policy enforcement, and detailed incident reporting. Automated analytics continuously assess risk posture, enabling adaptive policy adjustments and scalable governance as new AI tools and user workflows are introduced into the enterprise environment.

Please note: after implementing policy changes, it can take up to 24 hours for changes to become visible and take full effect across your environment.

[Figure: From Microsoft Learn – Purview Data Security Posture Management (DSPM) portal]

5. Support for Third-Party AI Apps

Purview extends robust data security and compliance to browser-based AI tools such as ChatGPT and Google Gemini by employing endpoint Data Loss Prevention (DLP) and browser extensions that monitor and control data flows in real time. Through DSPM for AI, organizations can implement granular controls for sensitive data accessed during both Microsoft-native and third-party AI interactions. DSPM offers continuous discovery and classification of data assets, linking AI prompts and responses to their original data sources and automatically enforcing data protection policies, including sensitivity labeling, adaptive access controls, and comprehensive content inspection, contextually for each AI transaction. For unsanctioned AI services reached via browsers, the Purview browser extension inspects both input and output, enabling endpoint DLP to block, alert, or redact sensitive material instantly, preventing unauthorized uploads, downloads, and copy/paste activity. Security teams benefit from rich telemetry on AI usage patterns, which integrates with user risk profiles and anomaly detection to identify and flag suspicious attempts to extract confidential information. Close integration with Microsoft Security Copilot and automated analytics further enhances visibility across all AI data flows, supporting incident response, audit, and compliance reporting needs. Purview's adaptive policy orchestration ensures that evolving AI services and workflows are continuously assessed for risk and that controls stay aligned with business, regulatory, and security requirements, enabling scalable, policy-driven governance for the expanding enterprise AI ecosystem.

Pros and Cons

The following table outlines the key advantages and potential limitations of implementing AI and agent data security controls within Microsoft Purview.

| Pros | Cons | License Needed |
|---|---|---|
| Centralized AI governance | Requires proper licensing and setup | Microsoft 365 E5 or equivalent Purview add-on license |
| Real-time risk detection | May need browser extensions for full coverage | Microsoft 365 E5 or Purview add-on |
| Supports both Microsoft and third-party AI apps | Some features limited to enterprise versions | Microsoft 365 E5, E5 Compliance, or equivalent Purview add-on |

Conclusion

Microsoft Purview offers a comprehensive solution for securing AI agents and their data interactions. By leveraging DSPM for AI, organizations can confidently adopt AI technologies while maintaining control over sensitive information. Explore Microsoft Purview's DSPM for AI here. Start by assessing your current AI usage and activate one-click policies to secure your environment today!

FAQ

1. What is the purpose of Microsoft Purview's AI and agent data security controls?
The purpose is to ensure that sensitive data accessed or processed by AI systems and agents is governed, protected, and monitored using Microsoft Purview's compliance and security capabilities. (Microsoft Purview data security and compliance protection)
2. How does Microsoft Purview help secure AI-generated content?
Microsoft Purview applies data loss prevention (DLP), sensitivity labels, and information protection policies to AI-generated content, ensuring it adheres to organizational compliance standards. (Microsoft Purview Information Protection)

3. Can Microsoft Purview track and audit AI interactions with sensitive data?
Yes. Microsoft Purview provides audit logs and Activity Explorer capabilities that allow organizations to monitor how AI systems and agents interact with sensitive data. (Search the audit log)

4. What role do sensitivity labels play in AI data governance?
Sensitivity labels classify and protect data based on its sensitivity level. When applied, they enforce encryption, access restrictions, and usage rights, even when data is processed by AI. (Learn about sensitivity labels)

5. How does Microsoft Purview integrate with Copilot and other AI tools?
Microsoft Purview extends its data protection and compliance capabilities to Microsoft 365 Copilot and other AI tools by ensuring that data accessed by these tools is governed under existing policies. (Microsoft 365 Copilot usage in the Microsoft 365 admin center)

6. Are there specific controls for third-party AI agents?
Yes. Microsoft Purview supports conditional access, DLP, and access reviews to manage and monitor third-party AI agents that interact with organizational data. (What is Conditional Access in Microsoft Entra ID?)

7. How can organizations ensure AI usage complies with regulatory requirements?
By using Microsoft Purview Compliance Manager, organizations can assess and manage regulatory compliance risks associated with AI usage. (Microsoft Purview Compliance Manager)

About the Author

Hi! Jacques "Jack" here. I'm a Microsoft Technical Trainer at Microsoft, and I wanted to share a topic that is often top of mind: AI governance. I've been working with Microsoft Purview since its launch in 2022, building on prior experience with Azure Information Protection and Data Loss Prevention. I also have deep expertise in generative AI technologies since their public release in November 2022, including Microsoft Copilot and other enterprise-grade AI solutions.

Secure Model Context Protocol (MCP) Implementation with Azure and Local Servers
Introduction

The Model Context Protocol (MCP) enables AI systems to interact with external data sources and tools through a standardized interface. While powerful, MCP can introduce security risks in enterprise environments. This tutorial shows you how to implement MCP securely using local servers, Azure OpenAI with APIM, and proper authentication.

Understanding MCP's Security Risks

There are a few key security concerns to consider before implementing MCP:

- Data Exfiltration: External MCP servers could expose sensitive data.
- Unauthorized Access: Third-party services become potential security risks.
- Loss of Control: It is unknown how external services handle your data.
- Compliance Issues: External dependencies make it difficult to meet regulatory requirements.

The solution? Keep everything local and controlled.

Secure Architecture

Before we dive into implementation, let's take a look at the overall architecture of our secure MCP solution. It consists of three key components working together:

1. Local MCP Server - Your custom tools run entirely within your local environment, reducing external exposure risks.
2. Azure OpenAI + APIM Gateway - All AI requests are routed through Azure API Management with Microsoft Entra ID authentication, providing enterprise-grade security controls and compliance.
3. Authenticated Proxy - A lightweight proxy service handles token management and request forwarding, ensuring seamless integration.

One of the key benefits of this architecture is that no API key is required. Traditional implementations often require storing OpenAI API keys in configuration files, environment variables, or secrets management systems, creating potential security vulnerabilities. This approach uses Azure Managed Identity for backend authentication and Azure CLI credentials for client authentication, meaning no sensitive API keys are ever stored, logged, or exposed in your codebase.

For additional security, APIM and Azure OpenAI resources can be configured with IP restrictions or network rules to accept traffic only from certain sources. These configurations are available for most Azure resources and provide an additional layer of network-level security. This security-forward approach gives you the full power of MCP's tool integration capabilities while keeping your implementation completely under your control.

How to Implement MCP Securely

1. Local MCP Server Implementation

Building the MCP Server

Let's start by creating a simple MCP server in .NET Core.

1. Create a web application:

```
dotnet new web -n McpServer
```

2. Add MCP packages:

```
dotnet add package ModelContextProtocol --prerelease
dotnet add package ModelContextProtocol.AspNetCore --prerelease
```

3. Configure Program.cs:

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMcpServer()
    .WithHttpTransport()
    .WithToolsFromAssembly();

var app = builder.Build();

app.MapMcp();
app.Run();
```

WithToolsFromAssembly() automatically discovers and registers tools from the current assembly. Look into the C# SDK for other ways to register tools for your use case.

4. Define Tools

Now, we can define some tools that our MCP server can expose.
Here is a simple example of tools that echo input back to the client:

```csharp
using System.ComponentModel;
using System.Linq;
using ModelContextProtocol.Server;

namespace Tools;

[McpServerToolType]
public static class EchoTool
{
    [McpServerTool]
    [Description("Echoes the input text back to the client in all capital letters.")]
    public static string EchoCaps(string input)
    {
        return input.ToUpperInvariant();
    }

    [McpServerTool]
    [Description("Echoes the input text back to the client in reverse.")]
    public static string ReverseEcho(string input)
    {
        return new string(input.Reverse().ToArray());
    }
}
```

Key components of MCP tools are the McpServerToolType class attribute, indicating that the class contains MCP tools, and the McpServerTool method attribute with a description that explains what each tool does.

Alternative: STDIO Transport

If you want to use STDIO transport instead of the SSE transport implemented here, check out this guide: Build a Model Context Protocol (MCP) Server in C#.
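Before wiring up a full client, you can sanity-check the server with a minimal script built on the MCP Python SDK. This is an optional sketch, not part of the original walkthrough; the server URL is an assumption based on your launch profile, so adjust the port to wherever McpServer is listening:

```python
# pip install mcp
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

SERVER_URL = "http://localhost:5000/sse"  # adjust to your McpServer port

async def main() -> None:
    # Open an SSE connection to the local MCP server
    async with sse_client(SERVER_URL) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools registered via WithToolsFromAssembly()
            tools = await session.list_tools()
            print("Discovered tools:", [t.name for t in tools.tools])

            # Invoke one of the echo tools defined above
            result = await session.call_tool("EchoCaps", {"input": "hello mcp"})
            print("EchoCaps result:", result.content)

asyncio.run(main())
```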
2. Create an MCP Client with Cline

Now that we have our MCP server set up with tools, we need a client that can discover and invoke these tools. For this implementation, we'll use Cline as our MCP client, configured to work through our secure Azure infrastructure.

1. Install Cline VS Code Extension

Install the Cline extension in VS Code.

2. Deploy a secure Azure OpenAI Endpoint with APIM

Instead of connecting Cline directly to external AI services (which could expose the secure implementation to external bad actors), we route requests through Azure API Management (APIM) for enterprise security. With this implementation, all requests go through Microsoft Entra ID, and we use managed identity for all authentication.

Quick Setup:

- Deploy the Azure OpenAI with APIM solution.
- Ensure your Azure OpenAI resources are configured to allow your APIM's managed identity to make calls. The APIM policy below uses managed identity authentication to connect to Azure OpenAI backends. Refer to the Azure OpenAI documentation on managed identity authentication for detailed setup instructions.

3. Configure APIM Policy

After deploying APIM, configure the following policy to enable Azure AD token validation, managed identity authentication, and load balancing across multiple OpenAI backends:

```xml
<!-- Azure API Management Policy for OpenAI Endpoint -->
<!-- Implements Azure AD token validation and managed identity authentication -->
<!-- Supports round-robin load balancing across multiple OpenAI backends -->
<!-- Requests with 'gpt-5' in the URL are routed to a single backend -->
<!-- The client application ID '04b07795-8ddb-461a-bbee-02f9e1bf7b46' is the official Azure CLI app registration -->
<!-- This policy allows requests authenticated by Azure CLI (az login) when the required claims are present -->
<policies>
  <inbound>
    <!-- IP Allow List Fragment (external fragment for client IP restrictions) -->
    <include-fragment fragment-id="YourCompany-IPAllowList" />
    <!-- Azure AD Token Validation for the Azure CLI app ID -->
    <validate-azure-ad-token tenant-id="YOUR-TENANT-ID-HERE" header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
      <client-application-ids>
        <application-id>04b07795-8ddb-461a-bbee-02f9e1bf7b46</application-id>
      </client-application-ids>
      <audiences>
        <audience>api://YOUR-API-AUDIENCE-ID-HERE</audience>
      </audiences>
      <required-claims>
        <claim name="roles" match="any">
          <value>YourApp.User</value>
        </claim>
      </required-claims>
    </validate-azure-ad-token>
    <!-- Acquire a managed identity access token for backend authentication -->
    <authentication-managed-identity resource="https://cognitiveservices.azure.com" output-token-variable-name="managed-id-access-token" ignore-error="false" />
    <!-- Set the Authorization header for the backend using the managed identity token -->
    <set-header name="Authorization" exists-action="override">
      <value>@("Bearer " + (string)context.Variables["managed-id-access-token"])</value>
    </set-header>
    <!-- Check if the URL contains 'gpt-5' and set the backend accordingly -->
    <choose>
      <when condition="@(context.Request.Url.Path.ToLower().Contains("gpt-5"))">
        <set-variable name="selected-backend-url" value="https://your-region1-oai.openai.azure.com/openai" />
      </when>
      <otherwise>
        <cache-lookup-value key="backend-counter" variable-name="backend-counter" />
        <choose>
          <when condition="@(context.Variables.ContainsKey("backend-counter") == false)">
            <set-variable name="backend-counter" value="@(0)" />
          </when>
        </choose>
        <set-variable name="current-backend-index" value="@((int)context.Variables["backend-counter"] % 7)" />
        <choose>
          <when condition="@((int)context.Variables["current-backend-index"] == 0)">
            <set-variable name="selected-backend-url" value="https://your-region1-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 1)">
            <set-variable name="selected-backend-url" value="https://your-region2-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 2)">
            <set-variable name="selected-backend-url" value="https://your-region3-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 3)">
            <set-variable name="selected-backend-url" value="https://your-region4-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 4)">
            <set-variable name="selected-backend-url" value="https://your-region5-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 5)">
            <set-variable name="selected-backend-url" value="https://your-region6-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 6)">
            <set-variable name="selected-backend-url" value="https://your-region7-oai.openai.azure.com/openai" />
          </when>
        </choose>
        <set-variable name="next-counter" value="@(((int)context.Variables["backend-counter"] + 1) % 1000)" />
        <cache-store-value key="backend-counter" value="@((int)context.Variables["next-counter"])" duration="300" />
      </otherwise>
    </choose>
    <!-- Always set the backend service using the selected-backend-url variable -->
    <set-backend-service base-url="@((string)context.Variables["selected-backend-url"])" />
    <!-- Inherit any base policies defined outside this section -->
    <base />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
  <on-error>
    <base />
  </on-error>
</policies>
```

This policy creates a secure gateway that validates Azure AD tokens from your local Azure CLI session, then uses APIM's managed identity to authenticate with the Azure OpenAI backends, eliminating the need for API keys. It automatically load-balances requests across multiple Azure OpenAI regions using round-robin distribution for optimal performance.
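Before building the proxy in the next step, you can smoke-test the gateway directly. The sketch below is an assumption-laden example, not part of the original walkthrough: the gateway URL, deployment name, and api-version are placeholders you must replace, and the scope must match the audience configured in the policy above:

```python
# pip install azure-identity requests
import requests
from azure.identity import AzureCliCredential

APIM_BASE_URL = "https://your-apim.azure-api.net/openai"  # placeholder: your APIM gateway
AZURE_SCOPE = "api://YOUR-API-AUDIENCE-ID-HERE/.default"  # must match the policy's audience

# AzureCliCredential reuses your `az login` session; no API key involved.
token = AzureCliCredential().get_token(AZURE_SCOPE).token

# Path shape and api-version are assumptions; align them with your APIM API definition.
resp = requests.post(
    f"{APIM_BASE_URL}/deployments/YOUR-DEPLOYMENT/chat/completions",
    params={"api-version": "2024-06-01"},
    headers={"Authorization": f"Bearer {token}"},
    json={"messages": [{"role": "user", "content": "ping"}]},
)
print(resp.status_code, resp.json())
```

A 200 response here confirms token validation, managed identity authentication, and backend routing are all working before you add the proxy layer.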
4. Create an Azure APIM proxy for Cline

This FastAPI-based proxy forwards OpenAI-compatible API requests from Cline through APIM using Azure AD authentication via Azure CLI credentials, eliminating the need to store or manage OpenAI API keys.

Prerequisites:

- Python 3.8 or higher
- Azure CLI (ensure az login has been run at least once)
- Appropriate Azure AD roles and permissions for the user running the proxy script. The script uses Azure CLI credentials to obtain bearer tokens, so your user account must have the correct roles assigned and access to the target API audience configured in the APIM policy above.

Quick setup for the proxy:

Create this requirements.txt:

```
fastapi
uvicorn
requests
azure-identity
```

Create this Python script for the proxy, azure_proxy.py:

```python
import os
import time

import requests
from requests.adapters import HTTPAdapter
import uvicorn
from azure.identity import AzureCliCredential
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

# CONFIGURATION
APIM_BASE_URL = "<APIM BASE URL HERE>"
AZURE_SCOPE = "<AZURE SCOPE HERE>"
PORT = int(os.environ.get("PORT", 8080))

app = FastAPI()
credential = AzureCliCredential()

# Use a single requests.Session for connection pooling
session = requests.Session()
session.mount("https://", HTTPAdapter(pool_connections=100, pool_maxsize=100))

_cached_token = None
_cached_expiry = 0


def get_bearer_token(scope: str) -> str:
    """Get an access token using AzureCliCredential, caching until expiry is within 30 seconds."""
    global _cached_token, _cached_expiry
    now = int(time.time())
    if _cached_token and (_cached_expiry - now > 30):
        return _cached_token
    try:
        token_obj = credential.get_token(scope)
        _cached_token = token_obj.token
        _cached_expiry = token_obj.expires_on
        return _cached_token
    except Exception as e:
        raise RuntimeError(f"Could not get Azure access token: {e}")


@app.api_route("/{path:path}", methods=["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS"])
async def proxy(request: Request, path: str):
    # Assemble the destination URL (preserve trailing slash logic)
    dest_url = f"{APIM_BASE_URL.rstrip('/')}/{path}".rstrip("/")
    if request.url.query:
        dest_url += "?" + request.url.query

    # Get the bearer token
    bearer_token = get_bearer_token(AZURE_SCOPE)

    # Prepare headers (copy all, overwrite Authorization)
    headers = dict(request.headers)
    headers["Authorization"] = f"Bearer {bearer_token}"
    headers.pop("host", None)

    # Read the request body
    body = await request.body()

    # Send the request to APIM using the pooled session
    resp = session.request(
        method=request.method,
        url=dest_url,
        headers=headers,
        data=body if body else None,
        stream=True,
    )

    # Stream the response back to the client
    return StreamingResponse(
        resp.raw,
        status_code=resp.status_code,
        headers={k: v for k, v in resp.headers.items() if k.lower() != "transfer-encoding"},
    )


if __name__ == "__main__":
    # Bind the app to 127.0.0.1 to avoid firewall prompts
    uvicorn.run(app, host="127.0.0.1", port=PORT)
```

Run the setup:

```
pip install -r requirements.txt
az login  # Authenticate with Azure
python azure_proxy.py
```

Configure Cline to use the proxy, using the OpenAI Compatible API provider:

- Base URL: http://localhost:8080
- API Key: <any random string>
- Model ID: <your Azure OpenAI deployment name>
- API Version: <your Azure OpenAI deployment version>

The API key field is required by Cline but unused in our implementation; any random string works, since authentication happens via Azure AD.

5. Configure Cline to listen to your MCP Server

Now that we have both our MCP server running and Cline configured with secure OpenAI access, the final step is connecting them together. To enable Cline to discover and use your custom tools, navigate to your installed MCP servers in Cline, select Configure MCP Servers, and add the configuration for your server:

```json
{
  "mcpServers": {
    "mcp-tools": {
      "autoApprove": [
        "EchoCaps",
        "ReverseEcho"
      ],
      "disabled": false,
      "timeout": 60,
      "type": "sse",
      "url": "http://<your localhost url>/sse"
    }
  }
}
```

Now you can use Cline's chat interface to interact with your secure MCP tools. Try asking Cline to use your custom tools, for example, "Can you echo 'Hello World' in capital letters?", and watch as it calls your local MCP server through the infrastructure you've built.
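You can also verify the whole chain without Cline by posting a chat completion directly to the running proxy. This is an optional sketch; the route shape and api-version are assumptions that depend on how your APIM API is defined, so adjust them to match your deployment:

```python
# pip install requests  (run after `az login` and `python azure_proxy.py`)
import requests

PROXY_URL = "http://localhost:8080"

# The path below assumes an Azure OpenAI-style route behind APIM; adjust the
# deployment name and api-version to your environment.
resp = requests.post(
    f"{PROXY_URL}/deployments/YOUR-DEPLOYMENT/chat/completions",
    params={"api-version": "2024-06-01"},
    json={"messages": [{"role": "user", "content": "Say hello from the secure pipeline."}]},
)
print(resp.status_code)
if resp.ok:
    print(resp.json()["choices"][0]["message"]["content"])
```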
Conclusion

There you have it: a secure implementation of MCP that can be tailored to your specific use case. This approach gives you the power of MCP while maintaining enterprise security. You get:

- AI capabilities through secure Azure infrastructure.
- Custom tools that never leave your environment.
- A standard MCP interface for easy integration.
- Complete control over your data and tools.

The key is keeping MCP servers local while routing AI requests through your secure Azure infrastructure. This way, you gain MCP's benefits without compromising security.

Disclaimer

While this tutorial provides a secure foundation for MCP implementation, organizations are responsible for configuring their Azure resources according to their specific security requirements and compliance standards. Ensure proper review of network rules, access policies, and authentication configurations before deploying to production environments.

Resources

MCP SDKs and Tools:
- MCP C# SDK
- MCP Python SDK
- Cline SDK
- Cline User Guide
- Azure OpenAI with APIM

Azure API Management Network Security:
- Azure API Management - restrict caller IPs
- Azure API Management with an Azure virtual network
- Set up inbound private endpoint for Azure API Management

Azure OpenAI and AI Services Network Security:
- Configure Virtual Networks for Azure AI services
- Securing Azure OpenAI inside a virtual network with private endpoints
- Add an Azure OpenAI network security perimeter
- az cognitiveservices account network-rule

Investigating M365 Copilot Activity with Sentinel & Defender XDR
As organizations embrace AI-powered tools like Microsoft Copilot, ChatGPT, and other generative assistants, one thing becomes immediately clear: AI is only as trustworthy as the data it can see. These systems are increasingly woven into everyday workstreams, surfacing insights, drafting content, and answering questions based on enterprise data signals. Yet behind the magic lies a new security frontier: making sure AI only accesses the right data, the right way, at the right time. That's where Data Security Posture Management (DSPM) comes into play.

Data Security Posture Management (DSPM) for AI is a Microsoft Purview capability designed to help organizations discover, secure, and apply compliance controls to AI usage across the enterprise. With personalized recommendations and one-click policies, it helps you protect your data, comply with regulatory requirements, and get ahead of questions like:

- Where is my sensitive data stored, and who has access?
- Are we protecting data from potential oversharing risks?
- Are we protecting sensitive data referenced in Copilot and agent responses?
- How do we maintain compliance and governance over data accessed by AI?
- Are we empowering users with AI safely and responsibly, backed by security?

In this blog, we will explore how Microsoft Sentinel and Defender XDR can help security teams operationalize DSPM for AI, from capturing Copilot interaction telemetry to building investigations and accelerating response. To learn more about Data Security Posture Management (DSPM) for AI, please visit DSPM for AI.

M365 Copilot activity in the SOC

Getting Started: The CloudAppEvents advanced hunting table is populated by records from Microsoft Defender for Cloud Apps. If your organization hasn't deployed the service in Microsoft Defender XDR, queries that use the table won't work or return any results. To make sure the CloudAppEvents table is populated, enable Microsoft 365 activities; follow this article for detailed steps. For more information about how to deploy Defender for Cloud Apps in Defender XDR, refer to Deploy supported services.

You can perform advanced hunting over Microsoft 365 Copilot data through CloudAppEvents, a powerful table in Microsoft Defender XDR's advanced hunting schema that captures user and admin activities across Microsoft cloud apps. The CloudAppEvents table contains enriched logs from all SaaS applications connected to Microsoft Defender for Cloud Apps; refer to Apps and services covered by CloudAppEvents.

DSPM for AI and CloudAppEvents

Activity Explorer is the central investigative hub in Data Security Posture Management (DSPM) for AI. It surfaces granular telemetry about AI interactions, capturing prompts, responses, user identities, and sensitive information types (SITs), provided you have the right permissions and policies enabled. Whether the activity originates from Microsoft Copilot, third-party GenAI apps, or custom enterprise agents, Activity Explorer provides the visibility needed to assess risk and take action. Microsoft Purview's DSPM for AI provides this visibility and is tightly integrated with Microsoft Sentinel and Defender XDR through the CloudAppEvents table.

The Flow Explained

Event Generation in DSPM for AI
Every AI interaction, whether from Copilot, Fabric, or unmanaged apps like DeepSeek, is captured in the Microsoft 365 Unified Audit Log. These logs include metadata like user identity, app name, agent name, prompt content, and sensitivity label matches.

Ingestion into CloudAppEvents
The audit data flows into the CloudAppEvents table within Microsoft Defender XDR if you've enabled the app connector (follow this article for detailed steps). This table is part of the advanced hunting schema and includes telemetry for user and object activities across Microsoft 365 and other cloud apps.

Availability in Microsoft Sentinel
Because CloudAppEvents is also exposed in Microsoft Sentinel, customers can query AI-related activities using KQL for threat hunting, incident correlation, and compliance investigations. This enables a unified view across Sentinel and XDR without needing a separate connector.
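To run such hunts programmatically against a Sentinel-enabled Log Analytics workspace, one option is the azure-monitor-query client library. The sketch below is an illustration rather than part of the original walkthrough: the workspace ID is a placeholder, and the query assumes the CloudAppEvents data described above is flowing into your workspace:

```python
# pip install azure-monitor-query azure-identity
from datetime import timedelta

from azure.identity import AzureCliCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

WORKSPACE_ID = "<your Sentinel Log Analytics workspace ID>"  # placeholder

# Count Copilot interactions per user per day over the last week.
QUERY = """
CloudAppEvents
| where Application in ("Microsoft 365", "Microsoft 365 Copilot Chat")
| where ActionType == "Interactwithcopilot"
| summarize Interactions = count() by AccountDisplayName, bin(Timestamp, 1d)
| order by Interactions desc
"""

client = LogsQueryClient(AzureCliCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=7))

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            # Print each result row as a column-name -> value mapping
            print(dict(zip(table.columns, row)))
```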
What You Can Do with CloudAppEvents

- Advanced Hunting: Use KQL to search for AI interactions that match specific sensitivity labels, user risk scores, or app types.
- Incident Investigation: Correlate AI activity with alerts from Office 365.
- Compliance Audits: Track AI activity to support audit requirements.
- Custom Dashboards: Visualize AI usage patterns in Power BI, Sentinel dashboards, or workbooks.

Example KQL Query

Best practice: build KQL queries that filter by Application and by ActionType == "Interactwithcopilot" to surface relevant events. For example, here is a simple query to get started analyzing M365 Copilot interactions:

```
CloudAppEvents
| where Application in ("Microsoft 365", "Microsoft 365 Copilot Chat")
| where ActionType == "Interactwithcopilot"
```

Known Gaps

The CloudAppEvents table, which ingests AI activity from the Microsoft 365 Unified Audit Log, is incredibly useful for activity hunting. It gives you metadata like:

- Timestamp
- User identity
- App and agent name
- Action type (e.g., AIInteraction)

However, you won't see the actual prompt or response from the AI interaction, and you won't get DSPM enrichment such as sensitive information types (SITs) or policy hits. These records contain only message metadata. Navigate to Purview's DSPM for AI Activity Explorer to review the prompts and responses.

While CloudAppEvents is great for identifying patterns and correlating activity across users and apps, it doesn't give you the full picture needed for deep investigation or compliance auditing. If you need that level of detail, you'll want to pivot into DSPM for AI's Activity Explorer, where you can inspect the full interaction, including prompt, response, and policy context.

Acknowledgements: Special thanks to Martin Gagné, Principal Group Engineering Manager, for reviewing this blog and providing valuable feedback.

From Traditional Security to AI-Driven Cyber Resilience: Microsoft's Approach to Securing AI
By Chirag Mehta, Vice President and Principal Analyst, Constellation Research

AI is changing the way organizations work. It helps teams write code, detect fraud, automate workflows, and make complex decisions faster than ever before. But as AI adoption increases, so do the risks, many of which traditional security tools were not designed to address. Cybersecurity leaders are starting to see that AI security is not just another layer of defense. It is becoming essential to building trust, ensuring resilience, and maintaining business continuity.

Earlier this year, after many conversations with CISOs and CIOs, I saw a clear need to bring more attention to this topic. That led to my report on AI Security, which explores how AI-specific vulnerabilities differ from traditional cybersecurity risks and why securing AI systems calls for a more intentional approach.

Why AI Changes the Security Landscape

AI systems do not behave like traditional software. They learn from data instead of following predefined logic. This makes them powerful, but also vulnerable. For example, an AI model can:

- Misinterpret input in ways that humans cannot easily detect
- Be tricked into producing harmful or unintended responses through crafted prompts
- Leak sensitive training data in its outputs
- Take actions that go against business policies or legal requirements

These are not coding flaws. They are risks that originate from how AI systems process information and act on it. The risks become more serious with agentic AI. These systems act on behalf of humans and interact with other software, and sometimes with other AI agents. They can make decisions, initiate actions, and change configurations. If one is compromised, the consequences can spread quickly.

A key challenge is that many organizations still rely on traditional defenses to secure AI systems. While those tools remain necessary, they are no longer enough. AI introduces new risks across every layer of the stack, including data, networks, endpoints, applications, and cloud infrastructure. As I explained in my report, the security focus must shift from defending the perimeter to governing the behavior of AI systems, the data they use, and the decisions they make.

The Shift Toward AI-Aware Cyber Resilience

Cyber resilience is the ability to withstand, adapt to, and recover from attacks. Meeting that standard today requires understanding how AI is developed, deployed, and used by employees, customers, and partners. To get there, organizations must answer questions such as:

- Where is our sensitive data going, and is it being used safely to train models?
- What non-human identities, such as AI agents, are accessing systems and data?
- Can we detect when an AI system is being misused or manipulated?
- Are we in compliance with new AI regulations and data usage rules?

Let's look at how Microsoft has evolved its mature security portfolio to help protect AI workloads and support this shift toward resilience.

Microsoft's Approach to Securing AI

Microsoft has taken a holistic and integrated approach to AI security. Rather than creating entirely new tools, it is extending existing products already used by millions to support AI workloads. These features span identity, data, endpoint, and cloud protection.

1. Microsoft Defender: Treating AI Workloads as Endpoints

AI models and applications are emerging as a new class of infrastructure that needs visibility and protection.
- Defender for Cloud secures AI workloads across Azure and other cloud platforms such as AWS and GCP by monitoring model deployments and detecting vulnerabilities.
- Defender for Cloud Apps extends protection to AI-enabled apps running at the edge.
- Defender for APIs supports AI systems that use APIs, which are often exposed to risks such as prompt injection or model manipulation.

Additionally, Microsoft has launched tools to support AI red-teaming, content safety, and continuous evaluation to ensure agents operate safely and as intended. This allows teams to identify and remediate risks such as jailbreaks or prompt injection before models are deployed.

2. Microsoft Entra: Managing Non-Human Identities

As organizations roll out more AI agents and copilots, non-human identities are becoming more common. These digital identities need strong oversight.

- Microsoft Entra helps create and manage identities for AI agents.
- Conditional Access ensures AI agents only access the resources they need, based on real-time signals and context.
- Privileged Identity Management manages, controls, and monitors AI agents' access to important resources within an organization.

3. Microsoft Purview: Securing Data Used in AI

Purview plays an important role in securing both the data that powers AI apps and agents and the data they generate through interactions.

- Data discovery and classification helps label sensitive information and track its use.
- Data Loss Prevention policies help prevent leaks or misuse of data in tools such as Copilot or agents built in Azure AI Foundry.
- Insider Risk Management alerts security teams when employees feed sensitive data into AI systems without approval.

Purview also helps organizations meet transparency and compliance requirements as regulations like the EU AI Act take effect, extending the same policies they already use today to AI workloads without requiring separate configurations.

Here's a video that explains the Microsoft security products above:

Securing AI Is Now a Strategic Priority

AI is evolving quickly, and the risks are evolving with it. Traditional tools still matter, but they were not built for systems that learn, adapt, and act independently. They also weren't designed for the pace and development approaches AI requires, where securing from the first line of code is critical to staying protected at scale. Microsoft is adapting its security portfolio to meet this shift. By strengthening identity, data, and endpoint protections, it is helping customers build a more resilient foundation.

Whether you are launching your first AI-powered tool or managing dozens of agents across your organization, the priority is clear: secure your AI systems before they become a point of weakness. You can read more in my AI Security report and learn how Microsoft is supporting these efforts across its security portfolio.