Microsoft Purview: The Ultimate AI Data Security Solution
Introduction

AI is transforming the way enterprises operate; however, with great innovation comes great responsibility. I've spent the last few years helping organizations secure their data with tools like Azure Information Protection, Data Loss Prevention, and now Microsoft Purview. As generative AI tools like Microsoft Copilot become embedded in everyday workflows, the need for clear governance and robust data protection is more urgent than ever. In this blog post, let's explore how Microsoft Purview can help organizations stay ahead in securing AI interactions without slowing down innovation.

What's the Issue?

AI agents are increasingly used to process sensitive data, often through natural language prompts. Without proper oversight, this can lead to data oversharing, compliance violations, and security risks.

Why It's Urgent

According to 2025 trends, over half of corporate users bring their own AI tools to work, often consumer-grade apps like ChatGPT or DeepSeek. These tools bypass enterprise protections, making it difficult to monitor and control data exposure.

Use Cases

- Enterprise AI Governance: Apply consistent policies across Microsoft and third-party AI tools.
- Compliance Auditing: Generate audit logs for AI interactions to meet regulatory requirements.
- Risk Mitigation: Block risky uploads and enforce adaptive protection based on user behavior.

How Microsoft Purview Solves It

1. Data Security Posture Management (DSPM) for AI

Purview's DSPM for AI provides a centralized dashboard to monitor AI activity, assess data risks, and enforce compliance policies across Copilots, agents, and third-party AI apps. It correlates data classification, user behavior, and policy coverage to surface real-time risks, such as oversharing via AI agents, and generates actionable recommendations to remediate gaps.
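Classification and policy enforcement like this are ultimately pattern- and context-driven. Purely as an illustration of the idea, here is a deliberately simplified sketch; the two regex patterns and the `redact_sensitive` helper are hypothetical stand-ins, nothing like Purview's managed sensitive information types or trainable classifiers:

```python
import re

# Illustrative patterns only -- real classification uses managed
# sensitive information types and context, not two hand-written regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_sensitive(text):
    """Replace anything matching a sensitive-data pattern, the way a
    DLP policy might redact a prompt before it reaches an AI service."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{name}]", text)
    return text
```

A real deployment would also log the match for audit and factor in the user's risk level, which is where DSPM's correlation of classification, behavior, and policy coverage comes in.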
DSPM integrates with tools like Microsoft Security Copilot for AI-assisted investigations and supports automated scanning, trend analytics, and posture reporting. It also extends protection to third-party AI tools like ChatGPT through endpoint DLP and browser extensions, ensuring consistent governance across both managed and unmanaged environments.

2. Unified Protection Across AI Agents

Whether you're using Microsoft 365 Copilot, Security Copilot, or Azure AI services, Purview applies consistent security and compliance controls. Agents inherit protection from their parent apps, including sensitivity labels, data loss prevention (DLP), and Insider Risk Management.

3. Real-Time Risk Detection

Purview enables real-time monitoring of prompts and responses, helping security teams detect oversharing and policy violations instantly.

[Image: From Microsoft Learn – Insider Risk]

4. One-Click Policy Activation

Administrators can leverage Purview's DSPM for AI to rapidly deploy comprehensive security and compliance controls via one-click policy activation. This streamlined mechanism enables organizations to enforce prebuilt policy templates across AI ecosystems, ensuring prompt implementation of data loss prevention (DLP), sensitivity labeling, and Insider Risk Management on both Microsoft and third-party AI services. Through DSPM's unified policy orchestration layer, security teams gain granular telemetry into prompt and response flows, real-time policy enforcement, and detailed incident reporting. Automated analytics continuously assess risk posture, enabling adaptive policy adjustments and scalable governance as new AI tools and user workflows are introduced into the enterprise environment.

Please note: after implementing policy changes, it can take up to 24 hours for them to become visible and take full effect across your environment.

[Image: From Microsoft Learn – Purview Data Security Posture Management (DSPM) portal]

5. Support for Third-Party AI Apps

Purview extends robust data security and compliance to browser-based AI tools such as ChatGPT and Google Gemini by employing endpoint Data Loss Prevention (DLP) and browser extensions that monitor and control data flows in real time. Through DSPM for AI, organizations can implement granular controls for sensitive data accessed during both Microsoft-native and third-party AI interactions. DSPM offers continuous discovery and classification of data assets, linking AI prompts and responses to their original data sources to automatically enforce data protection policies, including sensitivity labeling, adaptive access controls, and comprehensive content inspection, contextually for each AI transaction.

For unsanctioned AI services reached via browsers, the Purview browser extension inspects both input and output, enabling endpoint DLP to block, alert, or redact sensitive material instantly, preventing unauthorized uploads, downloads, or copy/paste activities. Security teams benefit from rich telemetry on AI usage patterns, which integrates with user risk profiles and anomaly detection to identify and flag suspicious attempts to extract confidential information. Close integration with Microsoft Security Copilot and automated analytics further enhances visibility across all AI data flows, supporting incident response, audit, and compliance reporting needs. Purview's adaptive policy orchestration ensures that evolving AI services and workflows are continuously assessed for risk, and that controls are dynamically aligned with business, regulatory, and security requirements, enabling scalable, policy-driven governance for the expanding enterprise AI ecosystem.

Pros and Cons

The following table outlines the key advantages and potential limitations of implementing AI and agent data security controls within Microsoft Purview.
| Pros | Cons | License Needed |
| --- | --- | --- |
| Centralized AI governance | Requires proper licensing and setup | Microsoft 365 E5 or equivalent Purview add-on license |
| Real-time risk detection | May need browser extensions for full coverage | Microsoft 365 E5 or Purview add-on |
| Supports both Microsoft and third-party AI apps | Some features limited to enterprise versions | Microsoft 365 E5, E5 Compliance, or equivalent Purview add-on |

Conclusion

Microsoft Purview offers a comprehensive solution for securing AI agents and their data interactions. By leveraging DSPM for AI, organizations can confidently adopt AI technologies while maintaining control over sensitive information. Explore Microsoft Purview's DSPM for AI here. Start by assessing your current AI usage and activate one-click policies to secure your environment today!

FAQ

1. What is the purpose of Microsoft Purview's AI and agent data security controls?
To ensure that sensitive data accessed or processed by AI systems and agents is governed, protected, and monitored using Microsoft Purview's compliance and security capabilities. (Microsoft Purview data security and compliance protection)

2. How does Microsoft Purview help secure AI-generated content?
Microsoft Purview applies data loss prevention (DLP), sensitivity labels, and information protection policies to AI-generated content, ensuring it adheres to organizational compliance standards. (Microsoft Purview Information Protection)

3. Can Microsoft Purview track and audit AI interactions with sensitive data?
Yes. Microsoft Purview provides audit logs and Activity explorer capabilities that allow organizations to monitor how AI systems and agents interact with sensitive data. (Search the audit log)

4. What role do sensitivity labels play in AI data governance?
Sensitivity labels classify and protect data based on its sensitivity level. When applied, they enforce encryption, access restrictions, and usage rights, even when data is processed by AI. (Learn about sensitivity labels)

5. How does Microsoft Purview integrate with Copilot and other AI tools?
Microsoft Purview extends its data protection and compliance capabilities to Microsoft 365 Copilot and other AI tools by ensuring that data accessed by these tools is governed under existing policies. (Microsoft 365 admin center, Microsoft 365 Copilot usage)

6. Are there specific controls for third-party AI agents?
Yes. Microsoft Purview supports conditional access, DLP, and access reviews to manage and monitor third-party AI agents that interact with organizational data. (What is Conditional Access in Microsoft Entra ID?)

7. How can organizations ensure AI usage complies with regulatory requirements?
By using Microsoft Purview's Compliance Manager, organizations can assess and manage regulatory compliance risks associated with AI usage. (Microsoft Purview Compliance Manager)

About the Author

Hi! Jacques "Jack" here. I'm a Microsoft Technical Trainer, and I wanted to share a topic that is often top of mind: AI governance. I've been working with Microsoft Purview since its launch in 2022, building on prior experience with Azure Information Protection and Data Loss Prevention. I also have deep expertise with generative AI technologies since their public release in November 2022, including Microsoft Copilot and other enterprise-grade AI solutions.

Secure Model Context Protocol (MCP) Implementation with Azure and Local Servers
Introduction

The Model Context Protocol (MCP) enables AI systems to interact with external data sources and tools through a standardized interface. While powerful, MCP can introduce security risks in enterprise environments. This tutorial shows you how to implement MCP securely using local servers, Azure OpenAI with APIM, and proper authentication.

Understanding MCP's Security Risks

There are several key security concerns to consider before implementing MCP:

- Data Exfiltration: External MCP servers could expose sensitive data.
- Unauthorized Access: Third-party services become potential security risks.
- Loss of Control: It is unknown how external services handle your data.
- Compliance Issues: Difficulty meeting regulatory requirements with external dependencies.

The solution? Keep everything local and controlled.

Secure Architecture

Before we dive into implementation, let's take a look at the overall architecture of our secure MCP solution. It consists of three key components working together:

1. Local MCP Server - Your custom tools run entirely within your local environment, reducing external exposure risks.
2. Azure OpenAI + APIM Gateway - All AI requests are routed through Azure API Management with Microsoft Entra ID authentication, providing enterprise-grade security controls and compliance.
3. Authenticated Proxy - A lightweight proxy service handles token management and request forwarding, ensuring seamless integration.

One of the key benefits of this architecture is that no API key is required. Traditional implementations often require storing OpenAI API keys in configuration files, environment variables, or secrets management systems, creating potential security vulnerabilities. This approach uses Azure Managed Identity for backend authentication and Azure CLI credentials for client authentication, meaning no sensitive API keys are ever stored, logged, or exposed in your codebase.
For more security, APIM and Azure OpenAI resources can be configured with IP restrictions or network rules to only accept traffic from certain sources. These configurations are available for most Azure resources and provide an additional layer of network-level security. This security-forward approach gives you the full power of MCP's tool integration capabilities while keeping your implementation completely under your control.

How to Implement MCP Securely

1. Local MCP Server Implementation

Building the MCP Server

Let's start by creating a simple MCP server in .NET Core.

1. Create a web application:

```
dotnet new web -n McpServer
```

2. Add the MCP packages:

```
dotnet add package ModelContextProtocol --prerelease
dotnet add package ModelContextProtocol.AspNetCore --prerelease
```

3. Configure Program.cs:

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMcpServer()
    .WithHttpTransport()
    .WithToolsFromAssembly();

var app = builder.Build();
app.MapMcp();
app.Run();
```

WithToolsFromAssembly() automatically discovers and registers tools from the current assembly. Look into the C# SDK for other ways to register tools for your use case.

4. Define Tools

Now we can define some tools that our MCP server can expose.
Here is a simple example of tools that echo input back to the client:

```csharp
using ModelContextProtocol.Server;
using System.ComponentModel;

namespace Tools;

[McpServerToolType]
public static class EchoTool
{
    [McpServerTool]
    [Description("Echoes the input text back to the client in all capital letters.")]
    public static string EchoCaps(string input)
    {
        return input.ToUpperInvariant();
    }

    [McpServerTool]
    [Description("Echoes the input text back to the client in reverse.")]
    public static string ReverseEcho(string input)
    {
        return new string(input.Reverse().ToArray());
    }
}
```

The key components of MCP tools are the [McpServerToolType] attribute, indicating that the class contains MCP tools, and the [McpServerTool] attribute on each method, with a description that explains what the tool does.

Alternative: STDIO Transport

If you want to use STDIO transport instead of SSE (implemented here), check out this guide: Build a Model Context Protocol (MCP) Server in C#

2. Create an MCP Client with Cline

Now that we have our MCP server set up with tools, we need a client that can discover and invoke these tools. For this implementation, we'll use Cline as our MCP client, configured to work through our secure Azure infrastructure.

1. Install the Cline VS Code Extension

Install the Cline extension in VS Code.

2. Deploy a secure Azure OpenAI endpoint with APIM

Instead of connecting Cline directly to external AI services (which could expose the secure implementation to external bad actors), we will route through Azure API Management (APIM) for enterprise security. With this implementation, all requests go through Microsoft Entra ID, and we use managed identity for all authentication.

Quick setup: Deploy the Azure OpenAI with APIM solution. Ensure your Azure OpenAI resources are configured to allow your APIM's managed identity to make calls. The APIM policy below uses managed identity authentication to connect to Azure OpenAI backends.
Refer to the Azure OpenAI documentation on managed identity authentication for detailed setup instructions.

3. Configure the APIM Policy

After deploying APIM, configure the following policy to enable Azure AD token validation, managed identity authentication, and load balancing across multiple OpenAI backends:

```xml
<!-- Azure API Management Policy for OpenAI Endpoint -->
<!-- Implements Azure AD token validation and managed identity authentication -->
<!-- Supports round-robin load balancing across multiple OpenAI backends -->
<!-- Requests with 'gpt-5' in the URL are routed to a single backend -->
<!-- The client application ID '04b07795-8ddb-461a-bbee-02f9e1bf7b46' is the official Azure CLI app registration -->
<!-- This policy allows requests authenticated by Azure CLI (az login) when the required claims are present -->
<policies>
  <inbound>
    <!-- IP allow list fragment (external fragment for client IP restrictions) -->
    <include-fragment fragment-id="YourCompany-IPAllowList" />

    <!-- Azure AD token validation for the Azure CLI app ID -->
    <validate-azure-ad-token tenant-id="YOUR-TENANT-ID-HERE"
        header-name="Authorization"
        failed-validation-httpcode="401"
        failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
      <client-application-ids>
        <application-id>04b07795-8ddb-461a-bbee-02f9e1bf7b46</application-id>
      </client-application-ids>
      <audiences>
        <audience>api://YOUR-API-AUDIENCE-ID-HERE</audience>
      </audiences>
      <required-claims>
        <claim name="roles" match="any">
          <value>YourApp.User</value>
        </claim>
      </required-claims>
    </validate-azure-ad-token>

    <!-- Acquire a managed identity access token for backend authentication -->
    <authentication-managed-identity resource="https://cognitiveservices.azure.com"
        output-token-variable-name="managed-id-access-token" ignore-error="false" />

    <!-- Set the Authorization header for the backend using the managed identity token -->
    <set-header name="Authorization" exists-action="override">
      <value>@("Bearer " + (string)context.Variables["managed-id-access-token"])</value>
    </set-header>

    <!-- Route 'gpt-5' requests to a fixed backend; round-robin everything else -->
    <choose>
      <when condition="@(context.Request.Url.Path.ToLower().Contains("gpt-5"))">
        <set-variable name="selected-backend-url" value="https://your-region1-oai.openai.azure.com/openai" />
      </when>
      <otherwise>
        <cache-lookup-value key="backend-counter" variable-name="backend-counter" />
        <choose>
          <when condition="@(context.Variables.ContainsKey("backend-counter") == false)">
            <set-variable name="backend-counter" value="@(0)" />
          </when>
        </choose>
        <set-variable name="current-backend-index" value="@((int)context.Variables["backend-counter"] % 7)" />
        <choose>
          <when condition="@((int)context.Variables["current-backend-index"] == 0)">
            <set-variable name="selected-backend-url" value="https://your-region1-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 1)">
            <set-variable name="selected-backend-url" value="https://your-region2-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 2)">
            <set-variable name="selected-backend-url" value="https://your-region3-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 3)">
            <set-variable name="selected-backend-url" value="https://your-region4-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 4)">
            <set-variable name="selected-backend-url" value="https://your-region5-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 5)">
            <set-variable name="selected-backend-url" value="https://your-region6-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 6)">
            <set-variable name="selected-backend-url" value="https://your-region7-oai.openai.azure.com/openai" />
          </when>
        </choose>
        <set-variable name="next-counter" value="@(((int)context.Variables["backend-counter"] + 1) % 1000)" />
        <cache-store-value key="backend-counter" value="@((int)context.Variables["next-counter"])" duration="300" />
      </otherwise>
    </choose>

    <!-- Always set the backend service using the selected-backend-url variable -->
    <set-backend-service base-url="@((string)context.Variables["selected-backend-url"])" />

    <!-- Inherit any base policies defined outside this section -->
    <base />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
  <on-error>
    <base />
  </on-error>
</policies>
```

This policy creates a secure gateway that validates Azure AD tokens from your local Azure CLI session, then uses APIM's managed identity to authenticate with Azure OpenAI backends, eliminating the need for API keys. It automatically load-balances requests across multiple Azure OpenAI regions using round-robin distribution for optimal performance.

4. Create an Azure APIM proxy for Cline

This FastAPI-based proxy forwards OpenAI-compatible API requests from Cline through APIM using Azure AD authentication via Azure CLI credentials, eliminating the need to store or manage OpenAI API keys.
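Stepping back to the policy for a moment: its backend-selection logic (pin gpt-5 traffic to the first backend, otherwise rotate a cached counter across seven regions, wrapping at 1000) is easy to verify in isolation. A Python sketch of the same logic, with the same placeholder backend URLs as the policy:

```python
# Placeholder backend URLs, matching the policy above.
BACKENDS = [f"https://your-region{i}-oai.openai.azure.com/openai" for i in range(1, 8)]

def select_backend(url_path, counter):
    """Mirror the policy's routing: 'gpt-5' requests are pinned to the
    first backend; everything else is distributed round-robin across the
    seven regions. Returns (backend_url, next_counter); the counter
    wraps at 1000, like the cached backend-counter in the policy."""
    if "gpt-5" in url_path.lower():
        return BACKENDS[0], counter  # pinned backend; counter untouched
    backend = BACKENDS[counter % 7]
    return backend, (counter + 1) % 1000
```

Working through a few requests shows why the wrap value is harmless: 1000 is not a multiple of 7, so after a wrap the rotation simply continues from a different offset rather than repeating a region unfairly.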
Prerequisites:

- Python 3.8 or higher
- Azure CLI (ensure az login has been run at least once)
- Appropriate Azure AD roles and permissions for the user running the proxy script. The script uses Azure CLI credentials to obtain bearer tokens, so your user account must have the correct roles assigned and access to the target API audience configured in the APIM policy above.

Quick setup for the proxy:

Create this requirements.txt:

```
fastapi
uvicorn
requests
azure-identity
```

Create this Python script for the proxy, azure_proxy.py:

```python
import os
import time

import requests
from requests.adapters import HTTPAdapter
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
import uvicorn
from azure.identity import AzureCliCredential

# CONFIGURATION
APIM_BASE_URL = "<APIM BASE URL HERE>"
AZURE_SCOPE = "<AZURE SCOPE HERE>"
PORT = int(os.environ.get("PORT", 8080))

app = FastAPI()
credential = AzureCliCredential()

# Use a single requests.Session for connection pooling
session = requests.Session()
session.mount("https://", HTTPAdapter(pool_connections=100, pool_maxsize=100))

_cached_token = None
_cached_expiry = 0


def get_bearer_token(scope: str) -> str:
    """Get an access token using AzureCliCredential, caching it until
    expiry is within 30 seconds."""
    global _cached_token, _cached_expiry
    now = int(time.time())
    if _cached_token and (_cached_expiry - now > 30):
        return _cached_token
    try:
        token_obj = credential.get_token(scope)
        _cached_token = token_obj.token
        _cached_expiry = token_obj.expires_on
        return _cached_token
    except Exception as e:
        raise RuntimeError(f"Could not get Azure access token: {e}")


@app.api_route("/{path:path}", methods=["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS"])
async def proxy(request: Request, path: str):
    # Assemble the destination URL (preserve trailing-slash logic)
    dest_url = f"{APIM_BASE_URL.rstrip('/')}/{path}".rstrip("/")
    if request.url.query:
        dest_url += "?" + request.url.query

    # Get the bearer token
    bearer_token = get_bearer_token(AZURE_SCOPE)

    # Prepare headers (copy all, overwrite Authorization)
    headers = dict(request.headers)
    headers["Authorization"] = f"Bearer {bearer_token}"
    headers.pop("host", None)

    # Read the body
    body = await request.body()

    # Send the request to APIM using the pooled session
    resp = session.request(
        method=request.method,
        url=dest_url,
        headers=headers,
        data=body if body else None,
        stream=True,
    )

    # Stream the response back to the client
    return StreamingResponse(
        resp.raw,
        status_code=resp.status_code,
        headers={k: v for k, v in resp.headers.items() if k.lower() != "transfer-encoding"},
    )


if __name__ == "__main__":
    # Bind the app to 127.0.0.1 to avoid any firewall prompts
    uvicorn.run(app, host="127.0.0.1", port=PORT)
```

Run the setup:

```
pip install -r requirements.txt
az login          # Authenticate with Azure
python azure_proxy.py
```

Configure Cline to use the proxy, using the OpenAI Compatible API provider:

- Base URL: http://localhost:8080
- API Key: <any random string>
- Model ID: <your Azure OpenAI deployment name>
- API Version: <your Azure OpenAI deployment version>

The API key field is required by Cline but unused in our implementation; any random string works, since authentication happens via Azure AD.

5. Configure Cline to listen to your MCP Server

Now that we have both our MCP server running and Cline configured with secure OpenAI access, the final step is connecting them together. To enable Cline to discover and use your custom tools, navigate to your installed MCP servers in Cline, select Configure MCP Servers, and add the configuration for your server:

```json
{
  "mcpServers": {
    "mcp-tools": {
      "autoApprove": [
        "EchoCaps",
        "ReverseEcho"
      ],
      "disabled": false,
      "timeout": 60,
      "type": "sse",
      "url": "http://<your localhost url>/sse"
    }
  }
}
```

Now you can use Cline's chat interface to interact with your secure MCP tools. Try asking Cline to use your custom tools, for example: "Can you echo 'Hello World' in capital letters?"
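One detail of the proxy worth calling out: get_bearer_token() reuses its cached token until the token is within 30 seconds of expiry, so Cline's frequent requests don't each trigger an Azure CLI token acquisition. The freshness rule can be isolated and tested on its own (a sketch of the same logic, independent of azure-identity):

```python
REFRESH_MARGIN_SECONDS = 30

def token_is_fresh(cached_token, expires_on, now):
    """Return True when the cached token can be reused: it exists and
    more than 30 seconds remain before expiry. This is the same check
    the proxy's get_bearer_token() applies before asking
    AzureCliCredential for a new token."""
    return bool(cached_token) and (expires_on - now > REFRESH_MARGIN_SECONDS)
```

Keeping the margin well above zero avoids sending a token that expires mid-flight through APIM.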
Watch as it calls your local MCP server through the infrastructure you've built.

Conclusion

There you have it: a secure implementation of MCP that can be tailored to your specific use case. This approach gives you the power of MCP while maintaining enterprise security. You get:

- AI capabilities through secure Azure infrastructure.
- Custom tools that never leave your environment.
- A standard MCP interface for easy integration.
- Complete control over your data and tools.

The key is keeping MCP servers local while routing AI requests through your secure Azure infrastructure. This way, you gain MCP's benefits without compromising security.

Disclaimer

While this tutorial provides a secure foundation for MCP implementation, organizations are responsible for configuring their Azure resources according to their specific security requirements and compliance standards. Ensure proper review of network rules, access policies, and authentication configurations before deploying to production environments.

Resources

MCP SDKs and Tools:
- MCP C# SDK
- MCP Python SDK
- Cline SDK
- Cline User Guide
- Azure OpenAI with APIM

Azure API Management Network Security:
- Azure API Management - restrict caller IPs
- Azure API Management with an Azure virtual network
- Set up inbound private endpoint for Azure API Management

Azure OpenAI and AI Services Network Security:
- Configure Virtual Networks for Azure AI services
- Securing Azure OpenAI inside a virtual network with private endpoints
- Add an Azure OpenAI network security perimeter
- az cognitiveservices account network-rule

Introducing developer solutions for Microsoft Sentinel platform
Security is being reengineered for the AI era, moving beyond static, rule-bound controls and after-the-fact response toward platform-led, machine-speed defense. The challenge is clear: fragmented tools, sprawling signals, and legacy architectures that can't match the velocity and scale of modern attacks. What's needed is an AI-ready, data-first foundation: one that turns telemetry into a security graph, standardizes access for agents, and coordinates autonomous actions while keeping humans in command of strategy and high-impact investigations.

Security teams already center operations on their SIEM for end-to-end visibility, and we're advancing that foundation by evolving Microsoft Sentinel into both the SIEM and the platform for agentic defense, connecting analytics and context across ecosystems. Today, we're introducing new platform capabilities that build on Sentinel data lake: Sentinel graph for deeper insight and context; Sentinel MCP server and tools to make data agent-ready; new developer capabilities; and Security Store for effortless discovery and deployment, so protection accelerates to machine speed while analysts do their best work.

Today, customers use a breadth of solutions to keep themselves secure. Each solution typically ingests, processes, and stores the security data it needs, which means applications maintain identical copies of the same underlying data. This is painful for both customers and partners, who don't want to build and maintain duplicate infrastructure and create data silos that make it difficult to counter sophisticated attacks. With today's announcement, we're directly addressing those challenges by giving partners the ability to create solutions that can reason over the single copy of the security data that each customer has in their Sentinel data lake instance.
Partners can create AI solutions that use Sentinel and Security Copilot and distribute them in Microsoft Security Store to reach new audiences, grow their revenue, and keep their customers safe. Sentinel already has a rich partner ecosystem with hundreds of SIEM solutions that include connectors, playbooks, and other content types. These new platform capabilities extend those solutions, creating opportunities for partners to address new scenarios and bring those solutions to market quickly, since they don't need to build complex data pipelines or store and process new data sets in their own infrastructure.

For example, partners can use Sentinel connectors to bring their own data into the Sentinel data lake. They can create Jupyter notebook jobs in the updated Sentinel Visual Studio Code extension to analyze that data, or take advantage of the new Model Context Protocol (MCP) server, which makes the data understandable and accessible to AI agents in Security Copilot. With Security Copilot's new vibe-coding capabilities, partners can create their agent in the same Sentinel Visual Studio Code extension or the environment of their choice. The solution can then be packaged and published to the new Microsoft Security Store, which gives partners an opportunity to expand their audience and grow their revenue while protecting more customers across the ecosystem.

These capabilities are being embraced across our ecosystem by mature and emerging partners alike. Services partners such as Accenture and ISVs such as Zscaler and ServiceNow are already creating solutions that leverage the capabilities of the Sentinel platform. Partners have already brought several solutions to market using the integrated capabilities of the Sentinel platform:

Illumio. Illumio for Microsoft Sentinel combines Illumio Insights with Microsoft Sentinel data lake and Security Copilot to revolutionize detection and response to cyber threats.
It fuses data from Illumio and all the other sources feeding into Sentinel to deliver a unified view of threats, giving SOC analysts, incident responders, and threat hunters visibility and AI-driven breach containment capabilities for lateral traffic threats and attack paths across hybrid and multi-cloud environments. To learn more, visit Illumio for Microsoft Sentinel.

OneTrust. OneTrust's AI-ready governance platform enables 14,000 customers globally, including over half of the Fortune 500, to accelerate innovation while ensuring responsible data use. Privacy and risk teams know that undiscovered personal data in their digital estate puts their business and customers at risk. OneTrust's Privacy Risk Agent uses Security Copilot, Purview scan logs, Entra ID data, and Jupyter notebook jobs in the Sentinel data lake to automatically discover personal data, assess risk, and take mitigating actions. To learn more, visit here.

Tanium. The Tanium Security Triage Agent accelerates alert triage using real-time endpoint intelligence from Tanium. Tanium intends to expand its agent to ingest contextual identity data from Microsoft Entra using Sentinel data lake. Discover how Tanium's integrations empower IT and security teams to make faster, more informed decisions.

Simbian. Simbian's Threat Hunt Agent makes hunters more effective by automating the process of validating threat hunt hypotheses with AI. Threat hunters provide a hypothesis in natural language, and the agent queries and analyzes the full breadth of data available in Sentinel data lake to validate the hypothesis and perform deep investigation. Simbian's AI SOC Agent investigates and responds to security alerts from Sentinel, Defender, and other alert sources, and also uses Sentinel data lake to enhance the depth of investigations. Learn more here.

Lumen. Lumen's Defender℠ Threat Feed for Microsoft Sentinel helps customers correlate known-bad artifacts with activity in their environment.
Lumen's Black Lotus Labs® harnesses unmatched network visibility and machine intelligence to produce high-confidence indicators that can be operationalized at scale for detection and investigation. Currently, Lumen's Defender℠ Threat Feed for Microsoft Sentinel is available as an invite-only preview. To request an invite, reach out to the Lumen Defender Threat Feed Sales team.

The updated Visual Studio Code extension for Microsoft Sentinel

The Sentinel extension for Visual Studio Code brings new AI and packaging capabilities on top of existing Jupyter notebook jobs to help developers efficiently create new solutions.

Building with AI

Impactful AI security solutions need access to, and an understanding of, the security data relevant to a scenario. The new Microsoft Sentinel Model Context Protocol (MCP) server makes data in the Sentinel data lake discoverable and understandable to AI agents so they can reason over it to generate powerful new insights. It integrates with the Sentinel VS Code extension, so developers can use those tools to explore the data in the lake and have agents use those tools as they do their work. To learn more, read the Microsoft Sentinel MCP server announcement.

Microsoft is also releasing MCP tools to make creating AI agents more straightforward. Developers can use Security Copilot's MCP tools to create agents within either the Sentinel VS Code extension or the environment of their choice. They can also take advantage of the low-code agent authoring experience right in the Security Copilot portal. To learn more about the Security Copilot pro-code and low-code agent authoring experiences, visit the Security Copilot blog post on building your own Security Copilot agents.

Jupyter Notebook Jobs

Jupyter notebook jobs are an important part of the Sentinel data lake and were launched at our public preview a couple of months ago. See the documentation here for more details on Jupyter notebook jobs and how they can be used in a solution.
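To give a feel for the shape of such a job (the real query APIs and table schemas are in the documentation linked above), here is a stdlib-only sketch of the kind of aggregation a scheduled notebook job might compute and write back for agents to consume. The records and field names are hypothetical stand-ins, not the Sentinel data lake schema:

```python
from collections import Counter

# Hypothetical rows standing in for records a notebook job might read
# from a data lake table; real schemas and the query API differ.
events = [
    {"user": "alice", "action": "signin", "success": False},
    {"user": "alice", "action": "signin", "success": False},
    {"user": "bob",   "action": "signin", "success": True},
    {"user": "alice", "action": "signin", "success": False},
]

def failed_signins_by_user(rows):
    """Count failed sign-ins per user -- the kind of aggregate a
    scheduled job could write back to the lake for downstream agents."""
    return Counter(
        r["user"] for r in rows
        if r["action"] == "signin" and not r["success"]
    )
```

The interesting part is the round trip: once results like these land back in the lake, agents can read and act on them through the Sentinel MCP tools, as the next section notes.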
Note that when jobs write to the data lake, agents can use the Sentinel MCP tools to read and act on those results, just as they can read any other data in the data lake.

Packaging and Publishing

Developers can now package solutions containing notebook jobs and Copilot agents so they can be distributed through the new Microsoft Security Store. With just a few clicks in the Sentinel VS Code extension, a developer can create a package and upload it to Security Store.

Distribution and revenue opportunities with Security Store

Sentinel platform solutions can be packaged and offered through the new Microsoft Security Store, which gives partners new ways to grow revenue and reach customers. Learn more about the ways Microsoft Security Store can help developers reach customers and grow revenue by visiting securitystore.microsoft.com.

Getting started

Developers can get started today building powerful applications that bring together Sentinel data, Jupyter notebook jobs, and Security Copilot:

Become a partner to publish solutions to Microsoft Security Store
Onboard to the Sentinel data lake
Download the Sentinel Visual Studio Code extension
Learn about Security Copilot news
Learn about Microsoft Security Store

From Traditional Security to AI-Driven Cyber Resilience: Microsoft's Approach to Securing AI
By Chirag Mehta, Vice President and Principal Analyst, Constellation Research

AI is changing the way organizations work. It helps teams write code, detect fraud, automate workflows, and make complex decisions faster than ever before. But as AI adoption increases, so do the risks, many of which traditional security tools were not designed to address. Cybersecurity leaders are starting to see that AI security is not just another layer of defense. It is becoming essential to building trust, ensuring resilience, and maintaining business continuity.

Earlier this year, after many conversations with CISOs and CIOs, I saw a clear need to bring more attention to this topic. That led to my report on AI Security, which explores how AI-specific vulnerabilities differ from traditional cybersecurity risks and why securing AI systems calls for a more intentional approach.

Why AI Changes the Security Landscape

AI systems do not behave like traditional software. They learn from data instead of following predefined logic. This makes them powerful, but also vulnerable. For example, an AI model can:

Misinterpret input in ways that humans cannot easily detect
Be tricked into producing harmful or unintended responses through crafted prompts
Leak sensitive training data in its outputs
Take actions that go against business policies or legal requirements

These are not coding flaws. They are risks that originate from how AI systems process information and act on it.

These risks become more serious with agentic AI. These systems act on behalf of humans and interact with other software, and sometimes with other AI agents. They can make decisions, initiate actions, and change configurations. If one is compromised, the consequences can spread quickly.

A key challenge is that many organizations still rely on traditional defenses to secure AI systems. While those tools remain necessary, they are no longer enough.
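To make the crafted-prompt risk above concrete, here is a toy Python check that flags inputs attempting to override an assistant's instructions (prompt injection). The phrase list is purely illustrative, an assumption for demonstration; real defenses rely on model-side safety systems and platform controls, not keyword matching, precisely because attackers can rephrase endlessly.

```python
import re

# Illustrative patterns for instruction-override attempts. A keyword list
# like this is trivially bypassed and exists only to show why prompt
# injection is a behavioral risk rather than a coding flaw.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the|your) (system|previous) prompt",
    r"reveal (the|your) (system prompt|hidden instructions)",
]

def looks_like_injection(prompt: str) -> bool:
    """Return True if the prompt matches a known override phrase."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

The gap between this sketch and a real control is exactly the point: because the attack surface is natural language, defenses must govern model behavior rather than match fixed signatures.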
AI introduces new risks across every layer of the stack, including data, networks, endpoints, applications, and cloud infrastructure. As I explained in my report, the security focus must shift from defending the perimeter to governing the behavior of AI systems, the data they use, and the decisions they make.

The Shift Toward AI-Aware Cyber Resilience

Cyber resilience is the ability to withstand, adapt to, and recover from attacks. Meeting that standard today requires understanding how AI is developed, deployed, and used by employees, customers, and partners. To get there, organizations must answer questions such as:

Where is our sensitive data going, and is it being used safely to train models?
What non-human identities, such as AI agents, are accessing systems and data?
Can we detect when an AI system is being misused or manipulated?
Are we in compliance with new AI regulations and data usage rules?

Let's look at how Microsoft has evolved its mature security portfolio to help protect AI workloads and support this shift toward resilience.

Microsoft's Approach to Securing AI

Microsoft has taken a holistic and integrated approach to AI security. Rather than creating entirely new tools, it is extending existing products already used by millions to support AI workloads. These features span identity, data, endpoint, and cloud protection.

1. Microsoft Defender: Treating AI Workloads as Endpoints

AI models and applications are emerging as a new class of infrastructure that needs visibility and protection. Defender for Cloud secures AI workloads across Azure and other cloud platforms such as AWS and GCP by monitoring model deployments and detecting vulnerabilities.
Defender for Cloud Apps extends protection to AI-enabled apps running at the edge
Defender for APIs supports AI systems that use APIs, which are often exposed to risks such as prompt injection or model manipulation

Additionally, Microsoft has launched tools for AI red-teaming, content safety, and continuous evaluation to ensure agents operate safely and as intended. This allows teams to identify and remediate risks such as jailbreaks or prompt injection before models are deployed.

2. Microsoft Entra: Managing Non-Human Identities

As organizations roll out more AI agents and copilots, non-human identities are becoming more common. These digital identities need strong oversight.

Microsoft Entra helps create and manage identities for AI agents
Conditional Access ensures AI agents only access the resources they need, based on real-time signals and context
Privileged Identity Management manages, controls, and monitors AI agents' access to important resources within an organization

3. Microsoft Purview: Securing Data Used in AI

Purview plays an important role in securing both the data that powers AI apps and agents and the data they generate through interactions.

Data discovery and classification helps label sensitive information and track its use
Data Loss Prevention policies help prevent leaks or misuse of data in tools such as Copilot or agents built in Azure AI Foundry
Insider Risk Management alerts security teams when employees feed sensitive data into AI systems without approval

As regulations like the EU AI Act take effect, Purview also helps organizations meet transparency and compliance requirements, extending the policies they already use today to AI workloads without requiring separate configurations.

Here's a video that explains the Microsoft security products above:

Securing AI Is Now a Strategic Priority

AI is evolving quickly, and the risks are evolving with it.
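Returning to the data loss prevention idea in the Purview section above, the core concept, screening content for sensitive patterns before it reaches an AI tool, can be sketched in a few lines. Purview DLP is a centrally managed, policy-driven service enforced across Microsoft 365 and AI workloads; this standalone pattern list is an assumption used only to illustrate the mechanism.

```python
import re

# Illustrative sensitive-data detectors. Real DLP classifiers are far
# richer (checksums, proximity evidence, trainable classifiers); these
# two regexes only demonstrate the screen-before-send pattern.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> dict:
    """Return whether the prompt may proceed and which detectors matched."""
    matched = [
        name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)
    ]
    return {"allowed": not matched, "matched": matched}
```

In a managed environment this decision point lives in the platform itself, so the same policy blocks a risky paste into a browser AI app and a risky prompt to Copilot alike.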
Traditional tools still matter, but they were not built for systems that learn, adapt, and act independently. They also were not designed for the pace and development approaches AI requires, where securing from the first line of code is critical to staying protected at scale.

Microsoft is adapting its security portfolio to meet this shift. By strengthening identity, data, and endpoint protections, it is helping customers build a more resilient foundation. Whether you are launching your first AI-powered tool or managing dozens of agents across your organization, the priority is clear: secure your AI systems before they become a point of weakness.

You can read more in my AI Security report and learn how Microsoft is helping organizations secure AI, supporting these efforts across its security portfolio.