cloud security
1405 TopicsWelcome to the Microsoft Security Community!
Protect it all with Microsoft Security Eliminate gaps and get the simplified, comprehensive protection, expertise, and AI-powered solutions you need to innovate and grow in a changing world. The Microsoft Security Community is your gateway to connect, learn, and collaborate with peers, experts, and product teams. Gain access to technical discussions, webinars, and help shape Microsoft’s security products. Get there fast To stay up to date on upcoming opportunities and the latest Microsoft Security Community news, make sure to subscribe to our email list. Find the latest skilling content and on-demand videos – subscribe to the Microsoft Security Community YouTube channel. Catch the latest announcements and connect with us on LinkedIn – Microsoft Security Community and Microsoft Entra Community. Index Community Calls: December 2025 | January 2026 | February 2026 Upcoming Community Calls December 2025 Dec. 18 | 8:00am | Security Copilot Skilling Series | What's New in Security Copilot for Defender Discover the latest innovations in Microsoft Security Copilot embedded in Defender that are transforming how organizations detect, investigate, and respond to threats. This session will showcase powerful new capabilities like AI-driven incident response, contextual insights, and automated workflows that help security teams stop attacks faster and simplify operations. January 2026 Jan. 8 | 8:00am | Microsoft Purview | Data Security & Compliance for Azure Foundry AI Apps & Agents As organizations accelerate adoption of Azure AI Foundry to build generative AI applications and autonomous agents, ensuring robust data security and regulatory compliance becomes mission-critical. This session outlines the end-to-end security, governance, and compliance controls that Microsoft Purview, DSPM for AI, offers to provide the governance for your Foundry apps and agents. The guidance provides architects, developers, and security teams with a prescriptive framework to design, deploy, and operate secure, compliant, and enterprise-ready AI solutions on Azure. Jan. 14 | 8:00am | 425 Show | Microsoft MCP Server for Enterprise: Transforming User, Security & Identity Tasks with AI See Microsoft’s MCP Server in action! Discover how AI-powered workflows simplify tasks and strengthen security. Packed with demos, this session shows how to operationalize AI across your organization. Jan. 15 | 8:00am | Microsoft Purview | Purview Data Security and Entra Global Secure Access Deep Dive Learn how Microsoft Global Secure Access (GSA) and Purview extend data loss prevention to the network, inspecting traffic to and from sanctioned and unsanctioned apps, including AI, and block sensitive data exfiltration in real time. The guidance in this session will provide actionable steps to security teams getting started with extending data security to the network layer to support compliance and zero trust strategies. Jan. 20 | 8:00am | Microsoft Defender for Cloud | What’s New in Microsoft Defender CSPM Cloud security posture management (CSPM) continues to evolve, and Microsoft Defender CSPM is leading the way with powerful enhancements introduced at Microsoft Ignite. This session will showcase the latest innovations designed to help security teams strengthen their posture and streamline operations. Jan. 22 | 8:00am | Azure Network Security | Advancing web application Protection with Azure WAF: Ruleset and Security Enhancements Explore the latest Azure WAF ruleset and security enhancements. Learn to fine-tune configurations, reduce false positives, gain threat visibility, and ensure consistent protection for web workloads—whether starting fresh or optimizing deployments. Jan. 22 | 8:00am | Security Copilot Skilling Series | Building Custom Agents: Unlocking Context, Automation, and Scale Microsoft Security Copilot already features a robust ecosystem of first-party and partner-built agents, but some scenarios require solutions tailored to your organization’s specific needs and context. In this session, you'll learn how the Security Copilot agent builder platform and MCP servers empower you to create tailored agents that provide context-aware reasoning and enterprise-scale solutions for your unique scenarios. RESCHEDULED for Jan.27 | 9:00am | Microsoft Sentinel | AI-Powered Entity Analysis in Sentinel’s MCP Server Simplify entity risk assessment with Entity Analyzer. Eliminate complex playbooks; get unified, AI-driven analysis using Sentinel’s semantic understanding. Accelerate automation and enrich SOAR workflows with native Logic Apps integration. February 2026 Feb. 26 | 9:00am | Azure Network Security | Azure Firewall Integration with Microsoft Sentinel Learn how Azure Firewall integrates with Microsoft Sentinel to enhance threat visibility and streamline security investigations. This webinar will demonstrate how firewall logs and insights can be ingested into Sentinel to correlate network activity with broader security signals, enabling faster detection, deeper context, and more effective incident response. Looking for more? Join the Microsoft Customer Connection Program (MCCP)! As a MCCP member, you’ll gain early visibility into product roadmaps, participate in focus groups, and access private preview features before public release. You’ll have a direct channel to share feedback with engineering teams, influencing the direction of Microsoft Security products. The program also offers opportunities to collaborate and network with fellow security experts and Microsoft product teams. Join the MCCP that best fits your interests: www.aka.ms/joincommunity. Additional resources Microsoft Security Hub on Tech Community Virtual Ninja Training Courses Microsoft Security Documentation Azure Network Security GitHub Microsoft Defender for Cloud GitHub Microsoft Sentinel GitHub Microsoft Defender XDR GitHub Microsoft Defender for Cloud Apps GitHub Microsoft Defender for Identity GitHub Microsoft Purview GitHubSecure and govern AI apps and agents with Microsoft Purview
The Microsoft Purview family is here to help you secure and govern data across third party IaaS and Saas, multi-platform data environment, while helping you meet compliance requirements you may be subject to. Purview brings simplicity with a comprehensive set of solutions built on a platform of shared capabilities, that helps keep your most important asset, data, safe. With the introduction of AI technology, Purview also expanded its data coverage to include discovering, protecting, and governing the interactions of AI apps and agents, such as Microsoft Copilots like Microsoft 365 Copilot and Security Copilot, Enterprise built AI apps like Chat GPT enterprise, and other consumer AI apps like DeepSeek, accessed through the browser. To help you view, investigate interactions with all those AI apps, and to create and manage policies to secure and govern them in one centralized place, we have launched Purview Data Security Posture Management (DSPM) for AI. You can learn more about DSPM for AI here with short video walkthroughs: Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn Purview capabilities for AI apps and agents To understand our current set of capabilities within Purview to discover, protect, and govern various AI apps and agents, please refer to our Learn doc here: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Here is a quick reference guide for the capabilities available today: Note that currently, DLP for Copilot and adhering to sensitivity label are currently designed to protect content in Microsoft 365. Thus, Security Copilot and Copilot in Fabric, along with Copilot studio custom agents that do not use Microsoft 365 as a content source, do not have these features available. Please see list of AI sites supported by Microsoft Purview DSPM for AI here Conclusion Microsoft Purview can help you discover, protect, and govern the prompts and responses from AI applications in Microsoft Copilot experiences, Enterprise AI apps, and other AI apps through its data security and data compliance solutions, while allowing you to view, investigate, and manage interactions in one centralized place in DSPM for AI. Follow up reading Check out the deployment guides for DSPM for AI How to deploy DSPM for AI - https://aka.ms/DSPMforAI/deploy How to use DSPM for AI data risk assessment to address oversharing - https://aka.ms/dspmforai/oversharing Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing Explore the Purview SDK Microsoft Purview SDK Public Preview | Microsoft Community Hub (blog) Microsoft Purview documentation - purview-sdk | Microsoft Learn Build secure and compliant AI applications with Microsoft Purview (video) References for DSPM for AI Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn Block Users From Sharing Sensitive Information to Unmanaged AI Apps Via Edge on Managed Devices (preview) | Microsoft Learn as part of Scenario 7 of Create and deploy a data loss prevention policy | Microsoft Learn Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn Downloadable whitepaper: Data Security for AI Adoption | Microsoft Explore the roadmap for DSPM for AI Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365PMPurPart 3: Unified Security Intelligence - Orchestrating GenAI Threat Detection with Microsoft Sentinel
Why Sentinel for GenAI Security Observability? Before diving into detection rules, let's address why Microsoft Sentinel is uniquely positioned for GenAI security operations—especially compared to traditional or non-native SIEMs. Native Azure Integration: Zero ETL Overhead The problem with external SIEMs: To monitor your GenAI workloads with a third-party SIEM, you need to: Configure log forwarding from Log Analytics to external systems Set up data connectors or agents for Azure OpenAI audit logs Create custom parsers for Azure-specific log schemas Maintain authentication and network connectivity between Azure and your SIEM Pay data egress costs for logs leaving Azure The Sentinel advantage: Your logs are already in Azure. Sentinel connects directly to: Log Analytics workspace - Where your Container Insights logs already flow Azure OpenAI audit logs - Native access without configuration Azure AD sign-in logs - Instant correlation with identity events Defender for Cloud alerts - Platform-level AI threat detection included Threat intelligence feeds - Microsoft's global threat data built-in Microsoft Defender XDR - AI-driven cybersecurity that unifies threat detection and response across endpoints, email, identities, cloud apps and Sentinel There's no data movement, no ETL pipelines, and no latency from log shipping. Your GenAI security data is queryable in real-time. KQL: Built for Complex Correlation at Scale Why this matters for GenAI: Detecting sophisticated AI attacks requires correlating: Application logs (your code from Part 2) Azure OpenAI service logs (API calls, token usage, throttling) Identity signals (who authenticated, from where) Threat intelligence (known malicious IPs) Defender for Cloud alerts (platform-level anomalies) KQL's advantage: Kusto Query Language is designed for this. You can: Join across multiple data sources in a single query Parse nested JSON (like your structured logs) natively Use time-series analysis functions for anomaly detection and behavior patterns Aggregate millions of events in seconds Extract entities (users, IPs, sessions) automatically for investigation graphs Example: Correlating your app logs with Azure AD sign-ins and Defender alerts takes 10 lines of KQL. In a traditional SIEM, this might require custom scripts, data normalization, and significantly slower performance. User Security Context Flows Natively Remember the user_security_context you pass in extra_body from Part 2? That context: Automatically appears in Azure OpenAI's audit logs Flows into Defender for Cloud AI alerts Is queryable in Sentinel without custom parsing Maps to the same identity schema as Azure AD logs With external SIEMs: You'd need to: Extract user context from your application logs Separately ingest Azure OpenAI logs Write correlation logic to match them Maintain entity resolution across different data sources With Sentinel: It just works. The end_user_id, source_ip, and application_name are already normalized across Azure services. Built-In AI Threat Detection Sentinel includes pre-built detections for cloud and AI workloads: Azure OpenAI anomalous access patterns (out of the box) Unusual token consumption (built-in analytics templates) Geographic anomalies (using Azure's global IP intelligence) Impossible travel detection (cross-referencing sign-ins with AI API calls) Microsoft Defender XDR (correlation with endpoint, email, cloud app signals) These aren't generic "high volume" alerts—they're tuned for Azure AI services by Microsoft's security research team. You can use them as-is or customize them with your application-specific context. Entity Behavior Analytics (UEBA) Sentinel's UEBA automatically builds baselines for: Normal request volumes per user Typical request patterns per application Expected geographic access locations Standard model usage patterns Then it surfaces anomalies: "User_12345 normally makes 10 requests/day, suddenly made 500 in an hour" "Application_A typically uses GPT-3.5, suddenly switched to GPT-4 exclusively" "User authenticated from Seattle, made AI requests from Moscow 10 minutes later" This behavior modeling happens automatically—no custom ML model training required. Traditional SIEMs would require you to build this logic yourself. The Bottom Line For GenAI security on Azure: Sentinel reduces time-to-detection because data is already there Correlation is simpler because everything speaks the same language Investigation is faster because entities are automatically linked Cost is lower because you're not paying data egress fees Maintenance is minimal because connectors are native If your GenAI workloads are on Azure, using anything other than Sentinel means fighting against the platform instead of leveraging it. From Logs to Intelligence: The Complete Picture Your structured logs from Part 2 are flowing into Log Analytics. Here's what they look like: { "timestamp": "2025-10-21T14:32:17.234Z", "level": "INFO", "message": "LLM Request Received", "request_id": "a7c3e9f1-4b2d-4a8e-9c1f-3e5d7a9b2c4f", "session_id": "550e8400-e29b-41d4-a716-446655440000", "prompt_hash": "d3b07384d113edec49eaa6238ad5ff00", "security_check_passed": "PASS", "source_ip": "203.0.113.42", "end_user_id": "user_550e8400", "application_name": "AOAI-Customer-Support-Bot", "model_deployment": "gpt-4-turbo" } These logs are in the ContainerLogv2 table since our application “AOAI-Customer-Support-Bot” is running on Azure Kubernetes Services (AKS). Steps to Setup AKS to stream logs to Sentinel/Log Analytics From Azure portal, navigate to your AKS, then to Monitoring -> Insights Select Monitor Settings Under Container Logs Select the Sentinel-enabled Log Analytics workspace Select Logs and events Check the ‘Enable ContainerLogV2’ and ‘Enable Syslog collection’ options More details can be found at this link Kubernetes monitoring in Azure Monitor - Azure Monitor | Microsoft Learn Critical Analytics Rules: What to Detect and Why Rule 1: Prompt Injection Attack Detection Why it matters: Prompt injection is the GenAI equivalent of SQL injection. Attackers try to manipulate the model by overriding system instructions. Multiple attempts indicate intentional malicious behavior. What to detect: 3+ prompt injection attempts within 10 minutes from similar IP let timeframe = 1d; let threshold = 3; AlertEvidence | where TimeGenerated >= ago(timeframe) and EntityType == "Ip" | where DetectionSource == "Microsoft Defender for AI Services" | where Title contains "jailbreak" or Title contains "prompt injection" | summarize count() by bin (TimeGenerated, 1d), RemoteIP | where count_ >= threshold What the SOC sees: User identity attempting injection Source IP and geographic location Sample prompts for investigation Frequency indicating automation vs. manual attempts Severity: High (these are actual attempts to bypass security) Rule 2: Content Safety Filter Violations Why it matters: When Azure AI Content Safety blocks a request, it means harmful content (violence, hate speech, etc.) was detected. Multiple violations indicate intentional abuse or a compromised account. What to detect: Users with 3+ content safety violations in a 1 hour block during a 24 hour time period. let timeframe = 1d; let threshold = 3; ContainerLogV2 | where TimeGenerated >= ago(timeframe) | where isnotempty(LogMessage.end_user_id) | where LogMessage.security_check_passed == "FAIL" | extend source_ip=tostring(LogMessage.source_ip) | extend end_user_id=tostring(LogMessage.end_user_id) | extend session_id=tostring(LogMessage.session_id) | extend application_name = tostring(LogMessage.application_name) | extend security_check_passed = tostring (LogMessage.security_check_passed) | summarize count() by bin(TimeGenerated, 1h),source_ip,end_user_id,session_id,Computer,application_name,security_check_passed | where count_ >= threshold What the SOC sees: Severity based on violation count Time span showing if it's persistent vs. isolated Prompt samples (first 80 chars) for context Session ID for conversation history review Severity: High (these are actual harmful content attempts) Rule 3: Rate Limit Abuse Why it matters: Persistent rate limit violations indicate automated attacks, credential stuffing, or attempts to overwhelm the system. Legitimate users who hit rate limits don't retry 10+ times in minutes. What to detect: Users blocked by rate limiter 5+ times in 10 minutes let timeframe = 1h; let threshold = 5; AzureDiagnostics | where ResourceProvider == "MICROSOFT.COGNITIVESERVICES" | where OperationName == "Completions" or OperationName contains "ChatCompletions" | extend tokensUsed = todouble(parse_json(properties_s).usage.total_tokens) | summarize totalTokens = sum(tokensUsed), requests = count(), rateLimitErrors = countif(httpstatuscode_s == "429") by bin(TimeGenerated, 1h) | where count_ >= threshold What the SOC sees: Whether it's a bot (immediate retries) or human (gradual retries) Duration of attack Which application is targeted Correlation with other security events from same user/IP Severity: Medium (nuisance attack, possible reconnaissance) Rule 4: Anomalous Source IP for User Why it matters: A user suddenly accessing from a new country or VPN could indicate account compromise. This is especially critical for privileged accounts or after-hours access. What to detect: User accessing from an IP never seen in the last 7 days let lookback = 7d; let recent = 1h; let baseline = IdentityLogonEvents | where Timestamp between (ago(lookback + recent) .. ago(recent)) | where isnotempty(IPAddress) | summarize knownIPs = make_set(IPAddress) by AccountUpn; ContainerLogV2 | where TimeGenerated >= ago(recent) | where isnotempty(LogMessage.source_ip) | extend source_ip=tostring(LogMessage.source_ip) | extend end_user_id=tostring(LogMessage.end_user_id) | extend session_id=tostring(LogMessage.session_id) | extend application_name = tostring(LogMessage.application_name) | extend security_check_passed = tostring (LogMessage.security_check_passed) | extend full_prompt_sample = tostring (LogMessage.full_prompt_sample) | lookup baseline on $left.AccountUpn == $right.end_user_id | where isnull(knownIPs) or IPAddress !in (knownIPs) | project TimeGenerated, source_ip, end_user_id, session_id, Computer, application_name, security_check_passed, full_prompt_sample What the SOC sees: User identity and new IP address Geographic location change Whether suspicious prompts accompanied the new IP Timing (after-hours access is higher risk) Severity: Medium (environment compromise, reconnaissance) Rule 5: Coordinated Attack - Same Prompt from Multiple Users Why it matters: When 5+ users send identical prompts, it indicates a bot network, credential stuffing, or organized attack campaign. This is not normal user behavior. What to detect: Same prompt hash from 5+ different users within 1 hour let timeframe = 1h; let threshold = 5; ContainerLogV2 | where TimeGenerated >= ago(timeframe) | where isnotempty(LogMessage.prompt_hash) | where isnotempty(LogMessage.end_user_id) | extend source_ip=tostring(LogMessage.source_ip) | extend end_user_id=tostring(LogMessage.end_user_id) | extend prompt_hash=tostring(LogMessage.prompt_hash) | extend application_name = tostring(LogMessage.application_name) | extend security_check_passed = tostring (LogMessage.security_check_passed) | project TimeGenerated, prompt_hash, source_ip, end_user_id, application_name, security_check_passed | summarize DistinctUsers = dcount(end_user_id), Attempts = count(), Users = make_set(end_user_id, 100), IpAddress = make_set(source_ip, 100) by prompt_hash, bin(TimeGenerated, 1h) | where DistinctUsers >= threshold What the SOC sees: Attack pattern (single attacker with stolen accounts vs. botnet) List of compromised user accounts Source IPs for blocking Prompt sample to understand attack goal Severity: High (indicates organized attack) Rule 6: Malicious model detected Why it matters: Model serialization attacks can lead to serious compromise. When Defender for Cloud Model Scanning identifies issues with a custom or opensource model that is part of Azure ML Workspace, Registry, or hosted in Foundry, that may be or may not be a user oversight. What to detect: Model scan results from Defender for Cloud and if it is being actively used. What the SOC sees: Malicious model Applications leveraging the model Source IPs and users accessed the model Severity: Medium (can be user oversight) Advanced Correlation: Connecting the Dots The power of Sentinel is correlating your application logs with other security signals. Here are the most valuable correlations: Correlation 1: Failed GenAI Requests + Failed Sign-Ins = Compromised Account Why: Account showing both authentication failures and malicious AI prompts is likely compromised within a 1 hour timeframe l let timeframe = 1h; ContainerLogV2 | where TimeGenerated >= ago(timeframe) | where isnotempty(LogMessage.source_ip) | extend source_ip=tostring(LogMessage.source_ip) | extend end_user_id=tostring(LogMessage.end_user_id) | extend session_id=tostring(LogMessage.session_id) | extend application_name = tostring(LogMessage.application_name) | extend security_check_passed = tostring (LogMessage.security_check_passed) | extend full_prompt_sample = tostring (LogMessage.full_prompt_sample) | extend message = tostring (LogMessage.message) | where security_check_passed == "FAIL" or message contains "WARNING" | join kind=inner ( SigninLogs | where ResultType != 0 // 0 means success, non-zero indicates failure | project TimeGenerated, UserPrincipalName, ResultType, ResultDescription, IPAddress, Location, AppDisplayName ) on $left.end_user_id == $right.UserPrincipalName | project TimeGenerated, source_ip, end_user_id, application_name, full_prompt_sample, prompt_hash, message, security_check_passed Severity: High (High probability of compromise) Correlation 2: Application Logs + Defender for Cloud AI Alerts Why: Defender for Cloud AI Threat Protection detects platform-level threats (unusual API patterns, data exfiltration attempts). When both your code and the platform flag the same user, confidence is very high. let timeframe = 1h; ContainerLogV2 | where TimeGenerated >= ago(timeframe) | where isnotempty(LogMessage.source_ip) | extend source_ip=tostring(LogMessage.source_ip) | extend end_user_id=tostring(LogMessage.end_user_id) | extend session_id=tostring(LogMessage.session_id) | extend application_name = tostring(LogMessage.application_name) | extend security_check_passed = tostring (LogMessage.security_check_passed) | extend full_prompt_sample = tostring (LogMessage.full_prompt_sample) | extend message = tostring (LogMessage.message) | where security_check_passed == "FAIL" or message contains "WARNING" | join kind=inner ( AlertEvidence | where TimeGenerated >= ago(timeframe) and AdditionalFields.Asset == "true" | where DetectionSource == "Microsoft Defender for AI Services" | project TimeGenerated, Title, CloudResource ) on $left.application_name == $right.CloudResource | project TimeGenerated, application_name, end_user_id, source_ip, Title Severity: Critical (Multi-layer detection) Correlation 3: Source IP + Threat Intelligence Feeds Why: If requests come from known malicious IPs (C2 servers, VPN exit nodes used in attacks), treat them as high priority even if behavior seems normal. //This rule correlates GenAI app activity with Microsoft Threat Intelligence feed available in Sentinel and Microsoft XDR for malicious IP IOCs let timeframe = 10m; ContainerLogV2 | where TimeGenerated >= ago(timeframe) | where isnotempty(LogMessage.source_ip) | extend source_ip=tostring(LogMessage.source_ip) | extend end_user_id=tostring(LogMessage.end_user_id) | extend session_id=tostring(LogMessage.session_id) | extend application_name = tostring(LogMessage.application_name) | extend security_check_passed = tostring (LogMessage.security_check_passed) | extend full_prompt_sample = tostring (LogMessage.full_prompt_sample) | join kind=inner ( ThreatIntelIndicators | where IsActive == "true" | where ObservableKey startswith "ipv4-addr" or ObservableKey startswith "network-traffic" | project IndicatorIP = ObservableValue ) on $left.source_ip == $right.IndicatorIP | project TimeGenerated, source_ip, end_user_id, application_name, full_prompt_sample, security_check_passed Severity: High (Known bad actor) Workbooks: What Your SOC Needs to See Executive Dashboard: GenAI Security Health Purpose: Leadership wants to know: "Are we secure?" Answer with metrics. Key visualizations: Security Status Tiles (24 hours) Total Requests Success Rate Blocked Threats (Self detected + Content Safety + Threat Protection for AI) Rate Limit Violations Model Security Score (Red Team evaluation status of currently deployed model) ContainerLogV2 | where TimeGenerated > ago (1d) | extend security_check_passed = tostring (LogMessage.security_check_passed) | summarize SuccessCount=countif(security_check_passed == "PASS"), FailedCount=countif(security_check_passed == "FAIL") by bin(TimeGenerated, 1h) | extend TotalRequests = SuccessCount + FailedCount | extend SuccessRate = todouble(SuccessCount)/todouble(TotalRequests) * 100 | order by SuccessRate 1. Trend Chart: Pass vs. Fail Over Time Shows if attack volume is increasing Identifies attack time windows Validates that defenses are working ContainerLogV2 | where TimeGenerated > ago (14d) | extend security_check_passed = tostring (LogMessage.security_check_passed) | summarize SuccessCount=countif(security_check_passed == "PASS"), FailedCount=countif(security_check_passed == "FAIL") by bin(TimeGenerated, 1d) | render timechart 2. Top 10 Users by Security Events Bar chart of users with most failures ContainerLogV2 | where TimeGenerated > ago (1d) | where isnotempty(LogMessage.end_user_id) | extend end_user_id=tostring(LogMessage.end_user_id) | extend security_check_passed = tostring (LogMessage.security_check_passed) | where security_check_passed == "FAIL" | summarize FailureCount = count() by end_user_id | top 20 by FailureCount | render barchart Applications with most failures ContainerLogV2 | where TimeGenerated > ago (1d) | where isnotempty(LogMessage.application_name) | extend application_name=tostring(LogMessage.application_name) | extend security_check_passed = tostring (LogMessage.security_check_passed) | where security_check_passed == "FAIL" | summarize FailureCount = count() by application_name | top 20 by FailureCount | render barchart 3. Geographic Threat Map Where are attacks originating? Useful for geo-blocking decisions ContainerLogV2 | where TimeGenerated > ago (1d) | where isnotempty(LogMessage.application_name) | extend application_name=tostring(LogMessage.application_name) | extend source_ip=tostring(LogMessage.source_ip) | extend security_check_passed = tostring (LogMessage.security_check_passed) | where security_check_passed == "FAIL" | extend GeoInfo = geo_info_from_ip_address(source_ip) | project sourceip, GeoInfo.counrty, GeoInfo.city Analyst Deep-Dive: User Behavior Analysis Purpose: SOC analyst investigating a specific user or session Key components: 1. User Activity Timeline Every request from the user in time order ContainerLogV2 | where isnotempty(LogMessage.end_user_id) | project TimeGenerated, LogMessage.source_ip, LogMessage.end_user_id, LogMessage. session_id, Computer, LogMessage.application_name, LogMessage.request_id, LogMessage.message, LogMessage.full_prompt_sample | order by tostring(LogMessage_end_user_id), TimeGenerated Color-coded by security status AlertInfo | where DetectionSource == "Microsoft Defender for AI Services" | project TimeGenerated, AlertId, Title, Category, Severity, SeverityColor = case( Severity == "High", "🔴 High", Severity == "Medium", "🟠 Medium", Severity == "Low", "🟢 Low", "⚪ Unknown" ) 2. Session Analysis Table All sessions for the user ContainerLogV2 | where TimeGenerated > ago (1d) | where isnotempty(LogMessage.end_user_id) | extend end_user_id=tostring(LogMessage.end_user_id) | where end_user_id == "<username>" // Replace with actual username | extend application_name=tostring(LogMessage.application_name) | extend source_ip=tostring(LogMessage.source_ip) | extend session_id=tostri1ng(LogMessage.session_id) | extend security_check_passed = tostring (LogMessage.security_check_passed) | project TimeGenerated, session_id, end_user_id, application_name, security_check_passed Failed requests per session ContainerLogV2 | where TimeGenerated > ago (1d) | extend security_check_passed = tostring (LogMessage.security_check_passed) | where security_check_passed == "FAIL" | extend end_user_id=tostring(LogMessage.end_user_id) | extend session_id=tostring(LogMessage.session_id) | extend security_check_passed = tostring (LogMessage.security_check_passed) | summarize Failed_Sessions = count() by end_user_id, session_id | order by Failed_Sessions Session duration ContainerLogV2 | where TimeGenerated > ago (1d) | where isnotempty(LogMessage.session_id) | extend security_check_passed = tostring (LogMessage.security_check_passed) | where security_check_passed == "PASS" | extend end_user_id=tostring(LogMessage.end_user_id) | extend session_id=tostring(LogMessage.session_id) | extend application_name=tostring(LogMessage.application_name) | extend source_ip=tostring(LogMessage.source_ip) | summarize Start=min(TimeGenerated), End=max(TimeGenerated), count() by end_user_id, session_id, source_ip, application_name | extend DurationSeconds = datetime_diff("second", End, Start) 3. Prompt Pattern Detection Unique prompts by hash Frequency of each pattern Detect if user is fuzzing/testing boundaries Sample query for user investigation: ContainerLogV2 | where TimeGenerated > ago (14d) | where isnotempty(LogMessage.prompt_hash) | where isnotempty(LogMessage.full_prompt_sample) | extend prompt_hash=tostring(LogMessage.prompt_hash) | extend full_prompt_sample=tostring(LogMessage.full_prompt_sample) | extend application_name=tostring(LogMessage.application_name) | summarize count() by prompt_hash, full_prompt_sample | order by count_ Threat Hunting Dashboard: Proactive Detection Purpose: Find threats before they trigger alerts Key queries: 1. Suspicious Keywords in Prompts (e.g. Ignore, Disregard, system prompt, instructions, DAN, jailbreak, pretend, roleplay) let suspicious_prompts = externaldata (content_policy:int, content_policy_name:string, q_id:int, question:string) [ @"https://raw.githubusercontent.com/verazuo/jailbreak_llms/refs/heads/main/data/forbidden_question/forbidden_question_set.csv"] with (format="csv", has_header_row=true, ignoreFirstRecord=true); ContainerLogV2 | where TimeGenerated > ago (14d) | where isnotempty(LogMessage.full_prompt_sample) | extend full_prompt_sample=tostring(LogMessage.full_prompt_sample) | where full_prompt_sample in (suspicious_prompts) | extend end_user_id=tostring(LogMessage.end_user_id) | extend session_id=tostring(LogMessage.session_id) | extend application_name=tostring(LogMessage.application_name) | extend source_ip=tostring(LogMessage.source_ip) | project TimeGenerated, session_id, end_user_id, source_ip, application_name, full_prompt_sample 2. High-Volume Anomalies User sending too many requests by a IP or User. Assuming that Foundry Projects are configured to use Azure AD and not API Keys. //50+ requests in 1 hour let timeframe = 1h; let threshold = 50; AzureDiagnostics | where ResourceProvider == "MICROSOFT.COGNITIVESERVICES" | where OperationName == "Completions" or OperationName contains "ChatCompletions" | extend tokensUsed = todouble(parse_json(properties_s).usage.total_tokens) | summarize totalTokens = sum(tokensUsed), requests = count() by bin(TimeGenerated, 1h),CallerIPAddress | where count_ >= threshold 3. Rare Failures (Novel Attack Detection) Rare failures might indicate zero-day prompts or new attack techniques //10 or more failures in 24 hours ContainerLogV2 | where TimeGenerated >= ago (24h) | where isnotempty(LogMessage.security_check_passed) | extend security_check_passed=tostring(LogMessage.security_check_passed) | where security_check_passed == "FAIL" | extend application_name=tostring(LogMessage.application_name) | extend end_user_id=tostring(LogMessage.end_user_id) | extend source_ip=tostring(LogMessage.source_ip) | summarize FailedAttempts = count(), FirstAttempt=min(TimeGenerated), LastAttempt=max(TimeGenerated) by application_name | extend DurationHours = datetime_diff('hour', LastAttempt, FirstAttempt) | where DurationHours >= 24 and FailedAttempts >=10 | project application_name, FirstAttempt, LastAttempt, DurationHours, FailedAttempts Measuring Success: Security Operations Metrics Key Performance Indicators Mean Time to Detect (MTTD): let AppLog = ContainerLogV2 | extend application_name=tostring(LogMessage.application_name) | extend security_check_passed=tostring (LogMessage.security_check_passed) | extend session_id=tostring(LogMessage.session_id) | extend end_user_id=tostring(LogMessage.end_user_id) | extend source_ip=tostring(LogMessage.source_ip) | where security_check_passed=="FAIL" | summarize FirstLogTime=min(TimeGenerated) by application_name, session_id, end_user_id, source_ip; let Alert = AlertEvidence | where DetectionSource == "Microsoft Defender for AI Services" | extend end_user_id = tostring(AdditionalFields.AadUserId) | extend source_ip=RemoteIP | extend application_name=CloudResource | summarize FirstAlertTime=min(TimeGenerated) by AlertId, Title, application_name, end_user_id, source_ip; AppLog | join kind=inner (Alert) on application_name, end_user_id, source_ip | extend DetectionDelayMinutes=datetime_diff('minute', FirstAlertTime, FirstLogTime) | summarize MTTD_Minutes=round(avg (DetectionDelayMinutes),2) by AlertId, Title Target: <= 15 minutes from first malicious activity to alert Mean Time to Respond (MTTR): SecurityIncident | where Status in ("New", "Active") | where CreatedTime >= ago(14d) | extend ResponseDelay = datetime_diff('minute', LastActivityTime, FirstActivityTime) | summarize MTTR_Minutes = round (avg (ResponseDelay),2) by CreatedTime, IncidentNumber | order by CreatedTime, IncidentNumber asc Target: < 4 hours from alert to remediation Threat Detection Rate: ContainerLogV2 | where TimeGenerated > ago (1d) | extend security_check_passed = tostring (LogMessage.security_check_passed) | summarize SuccessCount=countif(security_check_passed == "PASS"), FailedCount=countif(security_check_passed == "FAIL") by bin(TimeGenerated, 1h) | extend TotalRequests = SuccessCount + FailedCount | extend SuccessRate = todouble(SuccessCount)/todouble(TotalRequests) * 100 | order by SuccessRate Context: 1-3% is typical for production systems (most traffic is legitimate) What You've Built By implementing the logging from Part 2 and the analytics rules in this post, your SOC now has: ✅ Real-time threat detection - Alerts fire within minutes of malicious activity ✅ User attribution - Every incident has identity, IP, and application context ✅ Pattern recognition - Detect both volume-based and behavior-based attacks ✅ Correlation across layers - Application logs + platform alerts + identity signals ✅ Proactive hunting - Dashboards for finding threats before they trigger rules ✅ Executive visibility - Metrics showing program effectiveness Key Takeaways GenAI threats need GenAI-specific analytics - Generic rules miss context like prompt injection, content safety violations, and session-based attacks Correlation is critical - The most sophisticated attacks span multiple signals. Correlating app logs with identity and platform alerts catches what individual rules miss. User context from Part 2 pays off - end_user_id, source_ip, and session_id enable investigation and response at scale Prompt hashing enables pattern detection - Detect repeated attacks without storing sensitive prompt content Workbooks serve different audiences - Executives want metrics; analysts want investigation tools; hunters want anomaly detection Start with high-fidelity rules - Content Safety violations and rate limit abuse have very low false positive rates. Add behavioral rules after establishing baselines. What's Next: Closing the Loop You've now built detection and visibility. In Part 4, we'll close the security operations loop with: Part 4: Platform Integration and Automated Response Building SOAR playbooks for automated incident response Implementing automated key rotation with Azure Key Vault Blocking identities in Entra Creating feedback loops from incidents to code improvements The journey from blind spot to full security operations capability is almost complete. Previous: Part 1: Securing GenAI Workloads in Azure: A Complete Guide to Monitoring and Threat Protection - AIO11Y | Microsoft Community Hub Part 2: Part 2: Building Security Observability Into Your Code - Defensive Programming for Azure OpenAI | Microsoft Community Hub Next: Part 4: Platform Integration and Automated Response (Coming soon)Security Guidance Series: CAF 4.0 Understanding Threat From Awareness to Intelligence-Led Defence
The updated CAF 4.0 raises expectations around control A2.b - Understanding Threat. Rather than focusing solely on awareness of common cyber-attacks, the framework now calls for a sector-specific, intelligence-informed understanding of the threat landscape. According to the NCSC, CAF 4.0 emphasizes the need for detailed threat analysis that reflects the tactics, techniques, and resources of capable adversaries, and requires that this understanding directly shapes security and resilience decisions. For public sector authorities, this means going beyond static risk registers to build a living threat model that evolves alongside digital transformation and service delivery. Public sector authorities need to know which systems and datasets are most exposed, from citizen records and clinical information to education systems, operational platforms, and payment gateways, and anticipate how an attacker might exploit them to disrupt essential services. To support this higher level of maturity, Microsoft’s security ecosystem helps public sector authorities turn threat intelligence into actionable understanding, directly aligning with CAF 4.0’s Achieved criteria for control A2.b. Microsoft E3 - Building Foundational Awareness Microsoft E3 provides public sector authorities with the foundational capabilities to start aligning with CAF 4.0 A2.b by enabling awareness of common threats and applying that awareness to risk decisions. At this maturity level, organizations typically reach Partially Achieved, where threat understanding is informed by incidents rather than proactive analysis. How E3 contributes to Contributing Outcome A2.b: Visibility of basic threats: Defender for Endpoint Plan 1 surfaces malware and unsafe application activity, giving organizations insight into how adversaries exploit endpoints. This telemetry helps identify initial attacker entry points and informs reactive containment measures. Identity risk reduction: Entra ID P1 enforces MFA and blocks legacy authentication, mitigating common credential-based attacks. These controls reduce the likelihood of compromise at early stages of an attacker’s path. Incident-driven learning: Alerts and Security & Compliance Centre reports allow organizations to review how attacks unfolded, supporting documentation of observed techniques and feeding lessons into risk decisions. What’s missing for Achieved: To fully meet the contributing outcomes A2.b, public sector organizations must evolve from incident-driven awareness to structured, intelligence-led threat analysis. This involves anticipating probable attack methods, developing plausible scenarios, and maintaining a current threat picture through proactive hunting and threat intelligence. These capabilities extend beyond the E3 baseline and require advanced analytics and dedicated platforms. Microsoft E5 – Advancing to Intelligence-Led Defence Where E3 establishes the foundation for identifying and documenting known threats, Microsoft E5 helps public sector organizations to progress toward the Achieved level of CAF control A2.b by delivering continuous, intelligence-driven analysis across every attack surface. How E5 aligns with Contributing Outcome A2.b: Detailed, up-to-date view of attacker paths: At the core of E5 is Defender XDR, which correlates telemetry from Defender for Endpoint Plan 2, Defender for Office 365 Plan 2, Defender for Identity, and Defender for Cloud Apps. This unified view reveals how attackers move laterally between devices, identities, and SaaS applications - directly supporting CAF’s requirement to understand probable attack methods and the steps needed to reach critical targets. Advanced hunting and scenario development: Defender for Endpoint P2 introduces advanced hunting via Kusto Query Language (KQL) and behavioural analytics. Analysts can query historical data to uncover persistence mechanisms or privilege escalation techniques, assisting organizations to anticipate attack chains and develop plausible scenarios, a key expectation under A2.b. Email and collaboration threat modelling: Defender for Office 365 P2 detects targeted phishing, business email compromise, and credential harvesting campaigns. Attack Simulation Training adds proactive testing of social engineering techniques, helping organizations maintain awareness of evolving attacker tradecraft and refine mitigations. Identity-focused threat analysis: Defender for Identity and Entra ID P2 expose lateral movement, credential abuse, and risky sign-ins. By mapping tactics and techniques against frameworks like MITRE ATT&CK, organizations can gain the attacker’s perspective on identity systems - fulfilling CAF’s call to view networks from a threat actor’s lens. Cloud application risk visibility: Defender for Cloud Apps highlights shadow IT and potential data exfiltration routes, helping organizations to document and justify controls at each step of the attack chain. Continuous threat intelligence: Microsoft Threat Intelligence enriches detections with global and sector-specific insights on active adversary groups, emerging malware, and infrastructure trends. This sustained feed helps organizations maintain a detailed understanding of current threats, informing risk decisions and prioritization. Why this meets Achieved: E5 capabilities help organizations move beyond reactive alerting to a structured, intelligence-led approach. Threat knowledge is continuously updated, scenarios are documented, and controls are justified at each stage of the attacker path, supporting CAF control A2.b’s expectation that threat understanding informs risk management and defensive prioritization. Sentinel While Microsoft E5 delivers deep visibility across endpoints, identities, and applications, Microsoft Sentinel acts as the unifying layer that helps transform these insights into a comprehensive, evidence-based threat model, a core expectation of Achieved maturity under CAF 4.0 A2.b. How Sentinel enables Achieved outcomes: Comprehensive attack-chain visibility: As a cloud-native SIEM and SOAR, Sentinel ingests telemetry from Microsoft and non-Microsoft sources, including firewalls, OT environments, legacy servers, and third-party SaaS platforms. By correlating these diverse signals into a single analytical view, Sentinel allows defenders to visualize the entire attack chain, from initial reconnaissance through lateral movement and data exfiltration. This directly supports CAF’s requirement to understand how capable, well-resourced actors could systematically target essential systems. Attacker-centric analysis and scenario building: Sentinel’s Analytics Rules and MITRE ATT&CK-aligned detections provide a structured lens on tactics and techniques. Security teams can use Kusto Query Language (KQL) and advanced hunting to identify anomalies, map adversary behaviours, and build plausible threat scenarios, addressing CAF’s expectation to anticipate probable attack methods and justify mitigations at each step. Threat intelligence integration: Sentinel enriches local telemetry with intelligence from trusted sources such as the NCSC and Microsoft’s global network. This helps organizations maintain a current, sector-specific understanding of threats, applying that knowledge to prioritize risk treatment and policy decisions, a defining characteristic of Achieved maturity. Automation and repeatable processes: Sentinel’s SOAR capabilities operationalize intelligence through automated playbooks that contain threats, isolate compromised assets, and trigger investigation workflows. These workflows create a documented, repeatable process for threat analysis and response, reinforcing CAF’s emphasis on continuous learning and refinement. This video brings CAF A2.b – Understanding Threat – to life, showing how public sector organizations can use Microsoft security tools to build a clear, intelligence-led view of attacker behaviour and meet the expectations of CAF 4.0. Why this meets Achieved: By consolidating telemetry, threat intelligence, and automated response into one platform, Sentinel elevates public sector organizations from isolated detection to an integrated, intelligence-led defence posture. Every alert, query, and playbook contributes to an evolving organization-wide threat model, supporting CAF A2.b’s requirement for detailed, proactive, and documented threat understanding. CAF 4.0 challenges every public-sector organization to think like a threat actor, to understand not just what could go wrong, but how and why. Does your organization have the visibility, intelligence, and confidence to turn that understanding into proactive defence? To illustrate how this contributing outcome can be achieved in practice, the one-slider and demo show how Microsoft’s security capabilities help organizations build the detailed, intelligence-informed threat picture expected by CAF 4.0. These examples turn A2.b’s requirements into actionable steps for organizations. In the next article, we’ll explore C2 - Threat Hunting: moving from detection to anticipation and embedding proactive resilience as a daily capability.Breaking down security silos: Microsoft Defender for Cloud Expands into the Defender Portal
Picture this: You’re managing security across Azure, AWS, and GCP. Alerts are coming from every direction, dashboards are scattered and your team spends more time switching portals than mitigating threats. Sound familiar? That’s the reality for many organizations today. Now imagine a different world—where visibility, control and response converge into one unified experience, where posture management, vulnerability insights and incident response live side by side. That world is no longer a dream: Microsoft Defender for Cloud (MDC) is now integrated into Defender XDR in public preview. The expansion of MDC into the Defender portal isn’t just a facelift. It’s a strategic leap forward toward a Cloud-Native Application Protection Platform (CNAPP) that scales with your business. With Microsoft Defender for Cloud’s deep integration into the unified portal, we eliminate security silos and bring a modern, streamlined experience that is more intuitive and purpose-built for today’s security teams, while delivering a single pane of glass for hybrid and multi-cloud security. Here’s what makes this release a game-changer: Unified dashboard See everything with a single pane of glass—security posture, coverage, trends—across Azure, AWS and GCP. No more blind spots. Risk-based recommendations Prioritize by exploitability and business impact. Focus on what matters most, not just noise. Attack path analysis across all Defenders Visualize potential breach paths and cut them off before attackers can exploit them. Unified cloud assets inventory A consolidated view of assets, health data and onboarding state—so you know exactly where you stand. Cloud scopes & unified RBAC Create boundaries between teams, ensure each persona has access to the right level of data in the Defender portal. The enhanced in-portal experience includes all familiar Defender for Cloud capabilities and adds powerful new cloud-native workflows — now accessible directly within the Defender portal. Over time, additional features will be rolled out so that security teams can rely on a single pane of glass for all their pre- and post-breach operations. Unified cloud security dashboard A brand-new “Cloud Security→ Overview” page in Defender portal gives you a central place to assess your cloud posture across all connected clouds and environments (Azure, AWS, GCP, on-prem and onboarded environments such as Azure DevOps, Github, Gitlab, DockerHub, Jfrog). The unified dashboard displays the new Cloud Security Score, Threat Detection alerts and Defender coverage statistics. Amongst the high-level metrics, you can find the number of assessed resources, count of active recommendations, security alerts and more, giving you at-a-glance insight into your environment’s health. From here, you can drill into individual areas: Security posture, Exposure Management bringing visibility over Recommendations and Vulnerability Management, a unified asset inventory, workload specific insights and historical security posture data going back up to 6 months. Cloud Assets Inventory The cloud asset inventory view provides a unified, contextual inventory of all resources you have connected to Defender for Cloud — across cloud environments or on-premises. Assets are categorized by workload type, criticality, Defender coverage status, with integrated health data, risk signals, associated exposure management data, recommendations and related attack paths. Resources with unresolved security recommendations or alerts are clearly flagged — helping you quickly prioritize on risky or non-compliant assets. While you will get a complete list of cloud assets under "All assets", the rest of the tabs show you the complete view into each workload, with detailed and specific insights on each workload (VMs, Data, Containers, AI, API, DevOps, Identity and Serverless). Posture & Risk Management: From Secure Score to risk-based recommendations The traditional posture-management and CSPM capabilities of Defender for Cloud expand into the Defender portal under “Exposure Management.” A key upgrade is the new Cloud Secure Score — a risk-based model that factors in asset criticality and risk factors (e.g. internet exposure, data sensitivity) to give a more accurate, prioritized view of cloud security posture. The score ranges from 0 to 100, where 100 means perfect posture. It aggregates across all assets, weighting each asset by its criticality and the risk of its open recommendations. You can view the Cloud Secure Score overall, by subscription, cloud environment or workload type. This allows security teams to quickly understand which parts of their estate require urgent attention, and track posture improvements over time. Defender for Cloud continues to generate security recommendations based on assessments against built-in (or custom) security standards. When you have the Defender CSPM plan enabled in the Defender portal, these recommendations are surfaced with risk-based prioritization, where recommendations are tied to high-risk or critical assets show up first — helping you remediate what matters most. Each recommendation shows risk level, number of attack paths, MITRE ATT&CK tactics and techniques. For each recommendation you will see the remediation steps, attack map and the initiatives it contributes to - such as the Cloud Secure score. Continued remediation — across all subscriptions and environments — is the path toward a hardened cloud estate. Proactive Attack Surface Management: Attack path analysis A powerful addition is the "Attack paths" overview, which helps you visualize potential paths attackers could use — from external exposure zones to your most critical business assets to infiltrate your environment and access sensitive data. Defender’s algorithm models your network, resource interactions, vulnerabilities and external exposures to surface realistic, exploitable attack paths, rather than generic threat scenarios, while putting focus on the top targets, entry points and choke points involved in attack paths. The Attack Paths page organizes findings by risk level and correlates data across all Defender solutions, allowing users to rapidly detect high-impact attack paths and focus remediation on the most vulnerable assets. For some workloads, for example container-based or runtime workloads, additional prerequisites may apply (e.g. enabling agentless scanning or relevant Defender plans) to get full visualization. Governance, Visibility and Access: Cloud Scopes and Unified RBAC The expansion into the Defender portal doesn’t just bring new dashboards — it also brings unified access and governance using a single identity and RBAC model for the Defender solutions. Now you can manage cloud security permissions alongside identity, device and app permissions. Cloud Scopes ensure that teams with appropriate roles within the defined permission groups (e.g. Security operations, Security posture) can access the assets and features they need, scoped to specific subscriptions and environments. This unified scope system simplifies operations, reduces privilege sprawl and enforces consistent governance across cloud environments and across security domains. The expansion of Defender for Cloud into the Defender portal is more than a consolidation—it’s a strategic shift toward a truly integrated security ecosystem. Cloud security is no longer an isolated discipline. It is intertwined with exposure management, threat detection, identity protection and organizational governance. To conclude, this new experience empowers security teams to: Understand cloud risk in full context Prioritize remediation that reduces real-world threats Investigate attacks holistically across cloud and non-cloud systems Govern access and configurations with greater consistency Predict and prevent attack paths before they happen In this new era, cloud security becomes a continuous, intelligent and unified journey. The Defender portal is now the command center for that journey—one where insights, context and action converge to help organizations secure the present while anticipating the future. Ready to Explore? Defender for Cloud in the Defender portal Integration FAQ Enable Preview Features Azure portal vs Defender portal feature comparison What’s New in Defender for CloudDemystifying AI Security Posture Management
Introduction In the ever-evolving paradigm shift that is Generative AI, adoption is accelerating at an unprecedented level. Organizations find it increasingly challenging to keep up with the multiple security branches of defence and attack that are complementing the adoption. With agentic and autonomous agents being the new security frontier we will be concentrating on for the next 10 years, the need to understand, secure and govern what Generative AI applications are running within an organisation becomes critical. Organizations that have a strong “security first” principle have been able to integrate AI by following appropriate methodologies such as Microsoft’s Prepare, Discover, Protect and Govern approach, and are now accelerating the adoption with strong posture management. Link: Build a strong security posture for AI | Microsoft Learn However, due to the nature of this rapid adoption, many organizations have found themselves in a “chicken and egg” situation whereby they are racing to allow employees and developers to adopt and embrace both Low Code and Pro Code solutions such as Microsoft Copilot Studio and Microsoft Foundry, but due to governance and control policies not being implemented in time, now find themselves in a Shadow AI situation, and require the ability to retroactively assess already deployed solutions. Why AI Security Posture Management? Generative AI Workloads, like any other, can only be secured and governed if the organization is aware of their existence and usage. With the advent of Generative AI we now not only have Shadow IT but also Shadow AI, so the need to be able to discover, assess, understand, and govern the Generative AI tooling that is being used in an organisation is now more important than ever. Consider the risks mentioned in the recent Microsoft Digital Defence Report and how they align to AI Usage, AI Applications and AI Platform Security. As Generative AI becomes more ingrained in the day-to-day operations of organizations, so does the potential for increased attack vectors, misuse and the need for appropriate security oversight and mitigation. Link: Microsoft Digital Defense Report 2025 – Safeguarding Trust in the AI Era A recent study by KMPG discussing Shadow AI listed the following statistics: 44% of employees have used AI in ways that contravene policies and guidelines, indicating a significant prevalence of shadow AI in organizations. 57% of employees have made mistakes due to AI, and 58 percent have relied on AI output without evaluating its accuracy. 41% of employees report that their organization has a policy guiding the use of GenAI, highlighting a huge gap in guardrails. A very informed comment by Sawmi Chandrasekaran, Principal, US and Global AI and Data Labs leader at KPMG states: “Shadow AI isn’t a fringe issue—it’s a signal that employees are moving faster than the systems designed to support them. Without trusted oversight and a coordinated architectural strategy, even a single shortcut can expose the organization to serious risk. But with the right guardrails in place, shadow AI can become a powerful force for innovation, agility, and long-term competitive advantage. The time to act is now—with clarity, trust, and bold forward-looking leadership.” Link: Shadow AI is already here: Take control, reduce risk, and unleash innovation It’s abundantly clear that organizations require integrated solutions to deal with the escalating risks and potential flashpoints. The “Best of Breed” approach is no longer sustainable considering the integration challenges both in cross-platform support and data ingestion charges that can arise, this is where the requirements for a modern CNAPP start to come to the forefront. The Next Era of Cloud Security report created by the IDC highlights Cloud Native Application Protection Platforms (CNAPPs) as a key investment area for organizations: “The IDC CNAPP Survey affirmed that 71% of respondents believe that over the next two years, it would be beneficial for their organization to invest in an integrated SecOps platform that includes technologies such as XDR/EDR, SIEM, CNAPP/cloud security, GenAI, and threat intelligence.” Link: The Next Era of Cloud Security: Cloud-Native Application Protection Platform and Beyond AI Security Posture Management vs Data Security Posture Management Data Security Posture Management (DSPM) is often discussed, having evolved prior to the conceptualization of Generative AI. However, DSPM is its own solution that is covered in the Blog Post Data Security Posture Management for AI. AI Security Posture Management (AI-SPM) focuses solely on the ability to monitor, assess and improve the security of AI systems, models, data and infrastructure in the environment. Microsoft’s Approach – Defender for Cloud Defender for Cloud is Microsoft’s modern Cloud Native Application Protection Platform (CNAPP), encompassing multiple cloud security solution services across both Proactive Security and Runtime Protection. However, for the purposes of this article, we will just be delving into AI Security Posture Management (AI-SPM) which is a sub feature of Cloud Security Posture Management (CSPM), both of which sit under Proactive Security solutions. Link: Microsoft Defender for Cloud Overview - Microsoft Defender for Cloud | Microsoft Learn Understanding AI Security Posture Management The following is going to attempt to “cut to the chase” on each of the four areas and cover an overview of the solution and the requirements. For detailed information on feature enablement and usage, each section includes a link to the full documentation on Microsoft Learn for further reading AI Security Posture Management AI Security Posture Management is a key component of the all-up Cloud Security Posture Management (CSPM) solution, and focuses on 4 key areas: o Generative AI Workload Discover o Vulnerability Assessment o Attack Path Analysis o Security Recommendations Generative AI Workload Discovery Overview Arguably, the principal role of AI Security Posture Management is to discover and identify Generative AI Workloads in the organization. Understanding what AI resources exist in the environment being the key to understanding their defence. Microsoft refers to this as the AI Bill-Of-Materials or AI-BOM. Bill-Of-Materials is a manufacturing term used to describe the components that go together to create a product (think door, handle, latch, hinges and screws). In the AI World this becomes application components such as data and artifacts. AI-SPM can discover Generative AI Applications across multiple supported services including: Azure OpenAI Service Microsoft foundry Azure Machine Learning Amazon Bedrock Google Vertex AI (Preview) Why no Microsoft Copilot Studio Integration? Microsoft Copilot Studio is not an external or custom AI agent service and is deeply integrated into Microsoft 365. Security posture for Microsoft Copilot Studio is handed over to Microsoft Defender for Cloud Apps and Microsoft Purview, with applications being marked as Sanctioned or Unsanctioned via the Defender for Cloud portal. For more information on Microsoft Defender for Cloud Apps see the link below. Link: App governance in Microsoft Defender for Cloud Apps and Microsoft Defender XDR - Microsoft Defender for Cloud Apps | Microsoft Learn Requirements An active Azure Subscription with Microsoft Defender for Cloud. Cloud Security Posture Management (CSPM) Enabled Have at least one environment with an AI supported workload. Link: Discover generative AI workloads - Microsoft Defender for Cloud | Microsoft Learn Vulnerability Assessment Once you have a clear overview of which AI resources exist in your environment, Vulnerability Assessment in AI-SPM allows you to cover two main areas of consideration. The first allows for the organization to discover vulnerabilities within containers that are running generative AI images with known vulnerabilities. The second allows vulnerability discovery within Generative AI Library Dependences such as TensorFlow, PyTorch, and LangChain. Both options will align any vulnerabilities detected to known Common Vulnerabilities and Exposures (CVE) IDs via Microsoft Threat Detection. Requirements An active Azure Subscription with Microsoft Defender for Cloud. Cloud Security Posture Management (CSPM) Enabled Have at least one Azure OpenAI resource, with at least one model deployment connected to it via Azure AI Foundry portal. Link: Explore risks to pre-deployment generative AI artifacts - Microsoft Defender for Cloud | Microsoft Learn Attack Path Analysis AI-SPM hunts for potential attack paths in a multi-cloud environment, by concentrating on real, externally driven and exploitable threats rather than generic scenarios. Using a proprietary algorithm, the attack path is mapped from outside the organization, through to critical assets. The attack path analysis is used to highlight immediately, exploitable threats to the business, which attackers would be able to exploit and breach the environment. Recommendations are given to be able to resolve the detected security issues. Discovered Attack Paths are organized by risk levels, which are determined using a context-aware risk-prioritization engine that considers the risk factors of each resource. Requirements An active Azure Subscription with Microsoft Defender for Cloud. Cloud Security Posture Management (CSPM) with Agentless Scanning Enabled. Required roles and permissions: Security Reader, Security Admin, Reader, Contributor, or Owner. To view attack paths that are related to containers: You must enable agentless container posture extension in Defender CSPM or You can enable Defender for Containers, and install the relevant agents in order to view attack paths that are related to containers. This also gives you the ability to query containers data plane workloads in security explorer. Required roles and permissions: Security Reader, Security Admin, Reader, Contributor, or Owner. Link: Identify and remediate attack paths - Microsoft Defender for Cloud | Microsoft Learn Security Recommendations Microsoft Defender for Cloud evaluates all resources discovered, including AI resources, and all workloads based on both built-in and custom security standards, which are implemented across Azure subscriptions, Amazon Web Services (AWS) accounts, and Google Cloud Platform (GCP) projects. Following these assessments, security recommendations offer actionable guidance to address issues and enhance the overall security posture. Defender for Cloud utilizes an advanced dynamic engine to systematically assess risks within your environment by considering exploitation potential and possible business impacts. This engine prioritizes security recommendations according to the risk factors associated with each resource, determined by the context of the environment, including resource configuration, network connections, and existing security measures. Requirements No specific requirements are required for Security Recommendations if you have Defender for Cloud enabled in the tenant as the feature is included by default. However, you will not be able to see Risk Prioritization unless you have the Defender for CSPM plan enabled. Link: Review Security Recommendations - Microsoft Defender for Cloud | Microsoft Learn CSPM Pricing CSPM has two billing models, Foundational CSPM (Free) Defender CSPM, which has its own additional billing model. AI-SPM is only included as part of the Defender CSPM plan. Foundational CSPM Defender CSPM Cloud Availability AI Security Posture Management - Azure, AWS, GCP (Preview) Price Free $5.11/Billable resource/month Information regarding licensing in this article is provided for guidance purposes only and doesn’t provide any contractual commitment. This list and license requirements are subject to change without any prior notice. Full details can be found on the official Microsoft documentation found here, Link: Pricing - Microsoft Defender for Cloud | Microsoft Azure. Final Thoughts AI Security Posture Management can no longer be considered an optional component to security, but rather a cornerstone to any organization’s operations. The integration of Microsoft Defender for Cloud across all areas of an organization shows the true potential of a modern a CNAPP, where AI is no longer a business objective, but rather a functional business component.Microsoft Ignite 2025: Top Security Innovations You Need to Know
🤖 Security & AI -The Big Story This Year 2025 marks a turning point for cybersecurity. Rapid adoption of AI across enterprises has unlocked innovation but introduced new risks. AI agents are now part of everyday workflows-automating tasks and interacting with sensitive data—creating new attack surfaces that traditional security models cannot fully address. Threat actors are leveraging AI to accelerate attacks, making speed and automation critical for defense. Organizations need solutions that deliver visibility, governance, and proactive risk management for both human and machine identities. Microsoft Ignite 2025 reflects this shift with announcements focused on securing AI at scale, extending Zero Trust principles to AI agents, and embedding intelligent automation into security operations. As a Senior Cybersecurity Solution Architect, I’ve curated the top security announcements from Microsoft Ignite 2025 to help you stay ahead of evolving threats and understand the latest innovations in enterprise security. Agent 365: Control Plane for AI Agents Agent 365 is a centralized platform that gives organizations full visibility, governance, and risk management over AI agents across Microsoft and third-party ecosystems. Why it matters: Unmanaged AI agents can introduce compliance gaps and security risks. Agent 365 ensures full lifecycle control. Key Features: Complete agent registry and discovery Access control and conditional policies Visualization of agent interactions and risk posture Built-in integration with Defender, Entra, and Purview Available via the Frontier Program Microsoft Agent 365: The control plane for AI agents Deep dive blog on Agent 365 Entra Agent ID: Zero Trust for AI Identities Microsoft Entra is the identity and access management suite (covering Azure AD, permissions, and secure access). Entra Agent ID extends Zero Trust identity principles to AI agents, ensuring they are governed like human identities. Why it matters: Unmanaged or over-privileged AI agents can create major security gaps. Agent ID enforces identity governance on AI agents and reduces automation risks. Key Features: Provides unique identities for AI agents Lifecycle governance and sponsorship for agents Conditional access policies applied to agent activity Integrated with open SDKs/APIs for third‑party platforms Microsoft Entra Agent ID Overview Entra Ignite 2025 announcements Public Preview details Security Copilot Expansion Security Copilot is Microsoft’s AI assistant for security teams, now expanded to automate threat hunting, phishing triage, identity risk remediation, and compliance tasks. Why it matters: Security teams face alert fatigue and resource constraints. Copilot accelerates response and reduces manual effort. Key Features: 12 new Microsoft-built agents across Defender, Entra, Intune, and Purview. 30+ partner-built agents available in the Microsoft Security Store. Automates threat hunting, phishing triage, identity risk remediation, and compliance tasks. Included for Microsoft 365 E5 customers at no extra cost. Security Copilot inclusion in Microsoft 365 E5 Security Copilot Ignite blog Security Dashboard for AI A unified dashboard for CISOs and risk leaders to monitor AI risks, aggregate signals from Microsoft security services, and assign tasks via Security Copilot - included at no extra cost. Why it matters: Provides a single pane of glass for AI risk management, improving visibility and decision-making. Key Features: Aggregates signals from Entra, Defender, and Purview Supports natural language queries for risk insights Enables task assignment via Security Copilot Ignite Session: Securing AI at Scale Microsoft Security Blog Microsoft Defender Innovations Microsoft Defender serves as Microsoft’s CNAPP solution, offering comprehensive, AI-driven threat protection that spans endpoints, email, cloud workloads, and SIEM/SOAR integrations. Why It Matters Modern attacks target multi-cloud environments and software supply chains. These innovations provide proactive defense, reduce breach risks before exploitation, and extend protection beyond Microsoft ecosystems-helping organizations secure endpoints, identities, and workloads at scale. Key Features: Predictive Shielding: Proactively hardens attack paths before adversaries pivot. Automatic Attack Disruption: Extended to AWS, Okta, and Proofpoint via Sentinel. Supply Chain Security: Defender for Cloud now integrates with GitHub Advanced Security. What’s new in Microsoft Defender at Ignite Defender for Cloud innovations Global Secure Access & AI Gateway Part of Microsoft Entra’s secure access portfolio, providing secure connectivity and inspection for web and AI traffic. Why it matters: Protects against lateral movement and AI-specific threats while maintaining secure connectivity. Key Features: TLS inspection, URL/file filtering AI Prompt Injection protection Private access for domain controllers to prevent lateral movement attacks. Learn about Secure Web and AI Gateway for agents Microsoft Entra: What’s new in secure access on the AI frontier Purview Enhancements Microsoft Purview is the data governance and compliance platform, ensuring sensitive data is classified, protected, and monitored. Why it matters: Ensures sensitive data remains protected and compliant in AI-driven environments. Key Features: AI Observability: Monitor agent activities and prevent sensitive data leakage. Compliance Guardrails: Communication compliance for AI interactions. Expanded DSPM: Data Security Posture Management for AI workloads. Announcing new Microsoft Purview capabilities to protect GenAI agents Intune Updates Microsoft Intune is a cloud-based endpoint device management solution that secures apps, devices, and data across platforms. It simplifies endpoint security management and accelerates response to device risks using AI. Why it matters: Endpoint security is critical as organizations manage diverse devices in hybrid environments. These updates reduce complexity, speed up remediation, and leverage AI-driven automation-helping security teams stay ahead of evolving threats. Key Features: Security Copilot agents automate policy reviews, device offboarding, and risk-based remediation. Enhanced remote management for Windows Recovery Environment (WinRE). Policy Configuration Agent in Intune lets IT admins create and validate policies with natural language What’s new in Microsoft Intune at Ignite Your guide to Intune at Ignite Closing Thoughts Microsoft Ignite 2025 signals the start of an AI-driven security era. From visibility and governance for AI agents to Zero Trust for machine identities, automation in security operations, and stronger compliance for AI workloads-these innovations empower organizations to anticipate threats, simplify governance, and accelerate secure AI adoption without compromising compliance or control. 📘 Full Coverage: Microsoft Ignite 2025 Book of NewsSecurity Guidance Series: CAF 4.0 Building Proactive Cyber Resilience
It’s Time To Act Microsoft's Digital Defense Report 2025 clearly describes the cyber threat landscape that this guidance is situated in, one that has become more complex, more industrialized, and increasingly democratized. Each day, Microsoft processes more than 100 trillion security signals, giving unparalleled visibility into adversarial tradecraft. Identity remains the most heavily targeted attack vector, with 97% of identity-based attacks relying on password spray, while phishing and unpatched assets continue to provide easy routes for initial compromise. Financially motivated attacks, particularly ransomware and extortion, now make up over half of global incidents, and nation-state operators continue to target critical sectors, including IT, telecommunications, and Government networks. AI is accelerating both sides of the equation: enhancing attacker capability, lowering barriers to entry through open-source models, and simultaneously powering more automated, intelligence-driven defence. Alongside this, emerging risks such as quantum computing underline the urgency of preparing today for tomorrow’s threats. Cybersecurity has therefore become a strategic imperative shaping national resilience and demanding genuine cross-sector collaboration to mitigate systemic risk. It is within this environment that UK public sector organizations are rethinking their approach to cyber resilience. As an Account Executive Apprentice in the Local Public Services team here at Microsoft, I have seen how UK public sector organizations are rethinking their approach to cyber resilience, moving beyond checklists and compliance toward a culture of continuous improvement and intelligence-led defence. When we talk about the UK public sector in this series, we are referring specifically to central government departments, local government authorities, health and care organizations (including the NHS), education institutions, and public safety services such as police, fire, and ambulance. These organizations form a deeply interconnected ecosystem delivering essential services to millions of citizens every day, making cyber resilience not just a technical requirement but a foundation of public trust. Against this backdrop, the UK public sector is entering a new era of cyber resilience with the release of CAF 4.0, the latest evolution of the National Cyber Security Centre’s Cyber Assessment Framework. This guidance has been developed in consultation with national cyber security experts, including the UK’s National Cyber Security Centre (NCSC), and is an aggregation of knowledge and internationally recognized expertise. Building on the foundations of CAF 3.2, this update marks a decisive shift, like moving from a static map to a live radar. Instead of looking back at where threats once were, organizations can now better anticipate them and adjust their digital defences in real time. For the UK’s public sector, this transformation could not be timelier. The complexity of digital public services, combined with the growing threat of ransomware, insider threat, supply chain compromise, and threats from nation state actors, demands a faster, smarter, and more connected approach to resilience. Where CAF 3.2 focused on confirming the presence and effectiveness of security measures, CAF 4.0 places greater emphasis on developing organizational capability and improving resilience in a more dynamic threat environment. While the CAF remains an outcome-based framework, not a maturity model, it is structured around Objectives, Principles, and Contributing Outcomes, with each contributing outcome supported by Indicators of Good Practice. For simplicity, I refer to these contributing outcomes as “controls” throughout this blog and use that term to describe the practical expectations organizations are assessed against. CAF 4.0 challenges organizations not only to understand the threats they face but to anticipate, detect, and respond in a more informed and adaptive way. Two contributing outcomes exemplify this proactive mindset: A2.b Understanding Threat and C2 Threat Hunting. Together, they represent what it truly means to understand your adversaries and act before harm occurs. For the UK’s public sector, achieving these new objectives may seem daunting, but the path forward is clearer than ever. Many organizations are already beginning this journey, supported by technologies that help turn insight into action and coordination into resilience. At Microsoft, we’ve seen how tools like E3, E5, and Sentinel are already helping public sector teams to move from reactive to intelligence-driven security operations. Over the coming weeks, we’ll explore how these capabilities align to CAF 4.0’s core principles and share practical examples of how councils can strengthen their resilience journey through smarter visibility, automation, and collaboration. CAF 4.0 vs CAF 3.2 - What’s Changed and Why It Matters The move from CAF 3.2 to CAF 4.0 represents a fundamental shift in how the UK public sector builds cyber resilience. The focus is no longer on whether controls exist - it is on whether they work, adapt, and improve over time. CAF 4.0 puts maturity at the centre. It pushes organizations to evolve from compliance checklists to operational capability, adopting a threat-informed, intelligence-led, and proactive security posture, by design. CAF 4.0 raises the bar for cyber maturity across the public sector. It calls for departments and authorities to build on existing foundations and embrace live threat intelligence, behavioural analytics, and structured threat hunting to stay ahead of adversaries. By understanding how attackers might target essential services and adapting controls in real time, organizations can evolve from awareness to active defence. Today’s threat actors are agile, persistent, and increasingly well-resourced, which means reactive measures are no longer enough. CAF 4.0 positions resilience as a continuous process of learning, adapting, and improving, supported by data-driven insights and modern security operations. CAF 4.0 is reshaping how the UK’s public sector approaches security maturity. In the coming weeks, we’ll explore what this looks like in practice, starting with how to build a deeper understanding of threat (control A2.b) and elevate threat hunting (control C2) into an everyday capability, using the tools and insights that are available within existing Microsoft E3 and E5 licences to help support these objectives. Until then, how ready is your organization to turn insight into action?Microsoft Defender for Cloud Customer Newsletter
What's new in Defender for Cloud? Defender for Cloud integrates into the Defender portal as part of the broader Microsoft Security ecosystem, now in public preview. This integration, while adding posture management insight, eliminates silos natively to allow security teams to see and act on threats across all cloud, hybrid, and code environments from one place. For more information, see our public documentation. Discover Azure AI Foundry agents in your environment The Defender Cloud Security Posture Management (CSPM) plan secures generative AI applications and now, in public preview, AI agents throughout its entire lifecycle. Discover AI agent workloads and identify details of your organization’s AI Bill of Materials (BOM). Details like vulnerabilities, misconfigurations and potential attack paths help protect your environment. Plus, Defender for Cloud monitors for any suspicious or harmful actions initiated by the agent. Blogs of the month Unlocking Business Value: Microsoft’s Dual Approach to AI for Security and Security for AI Fast-Start Checklist for Microsoft Defender CSPM: From Enablement to Best Practices Announcing Microsoft cloud security benchmark v2 (public preview) Microsoft Defender for Cloud Innovations at Ignite 2025 Defender for AI services: Threat protection and AI red team workshop Defender for Cloud in the field Revisit the Cloud Detection Response experience here.. Visit our YouTube page: here GitHub Community Check out the Microsoft Defender for Cloud Enterprise Onboarding Guide. It has been updated to include the latest network requirements. This guide describes the actions an organization must take to successfully onboard to MDC at scale. Customer journeys Discover how other organizations successfully use Microsoft Defender for Cloud to protect their cloud workloads. This month we are featuring Icertis. Icertis, a global leader in contract intelligence, launched AI applications using Azure OpenAI in Foundry Models that help customers extract clauses, assess risk, and automate contract workflows. Because contracts contain highly sensitive business rules and arrangements, their deployment of Vera, their own generative AI technology that includes Copilot agents and analytics for tailored contract intelligence, introduced challenges like enforcing and maintaining compliance and security challenges like prompt injections, jailbreak attacks and hallucinations. Microsoft Defender for Cloud’s comprehensive AI posture visibility with risk reduction recommendations and threat protection for AI applications with contextual evidence helped preserve their generative AI applications. Icertis can monitor OpenAI deployments, detect malicious prompts and enforce security policies as their first line of defense against AI-related threats. Join our community! Join our experts in the upcoming webinars to learn what we are doing to secure your workloads running in Azure and other clouds. Check out our upcoming webinars this month! DECEMBER 4 (8:00 AM- 9:00 AM PT) Microsoft Defender for Cloud | Unlocking New Capabilities in Defender for Storage DECEMBER 10 (9:00 AM - 10:00 AM PT) Microsoft Defender for Cloud | Expose Less, Protect More with Microsoft Security Exposure Management DECEMBER 11 (8:00 AM - 9:00 AM PT) Microsoft Defender for Cloud | Modernizing Cloud Security with Next‑Generation Microsoft Defender for Cloud We offer several customer connection programs within our private communities. By signing up, you can help us shape our products through activities such as reviewing product roadmaps, participating in co-design, previewing features, and staying up-to-date with announcements. Sign up at aka.ms/JoinCCP. We greatly value your input on the types of content that enhance your understanding of our security products. Your insights are crucial in guiding the development of our future public content. We aim to deliver material that not only educates but also resonates with your daily security challenges. Whether it’s through in-depth live webinars, real-world case studies, comprehensive best practice guides through blogs, or the latest product updates, we want to ensure our content meets your needs. Please submit your feedback on which of these formats do you find most beneficial and are there any specific topics you’re interested in https://aka.ms/PublicContentFeedback. Note: If you want to stay current with Defender for Cloud and receive updates in your inbox, please consider subscribing to our monthly newsletter: https://aka.ms/MDCNewsSubscribeSecurity as the core primitive - Securing AI agents and apps
This week at Microsoft Ignite, we shared our vision for Microsoft security -- In the agentic era, security must be ambient and autonomous, like the AI it protects. It must be woven into and around everything we build—from silicon to OS, to agents, apps, data, platforms, and clouds—and throughout everything we do. In this blog, we are going to dive deeper into many of the new innovations we are introducing this week to secure AI agents and apps. As I spend time with our customers and partners, there are four consistent themes that have emerged as core security challenges to secure AI workloads. These are: preventing agent sprawl and access to resources, protecting against data oversharing and data leaks, defending against new AI threats and vulnerabilities, and adhering to evolving regulations. Addressing these challenges holistically requires a coordinated effort across IT, developers, and security leaders, not just within security teams and to enable this, we are introducing several new innovations: Microsoft Agent 365 for IT, Foundry Control Plane in Microsoft Foundry for developers, and the Security Dashboard for AI for security leaders. In addition, we are releasing several new purpose-built capabilities to protect and govern AI apps and agents across Microsoft Defender, Microsoft Entra, and Microsoft Purview. Observability at every layer of the stack To facilitate the organization-wide effort that it takes to secure and govern AI agents and apps – IT, developers, and security leaders need observability (security, management, and monitoring) at every level. IT teams need to enable the development and deployment of any agent in their environment. To ensure the responsible and secure deployment of agents into an organization, IT needs a unified agent registry, the ability to assign an identity to every agent, manage the agent’s access to data and resources, and manage the agent’s entire lifecycle. In addition, IT needs to be able to assign access to common productivity and collaboration tools, such as email and file storage, and be able to observe their entire agent estate for risks such as over-permissioned agents. Development teams need to build and test agents, apply security and compliance controls by default, and ensure AI models are evaluated for safety guardrails and security vulnerabilities. Post deployment, development teams must observe agents to ensure they are staying on task, accessing applications and data sources appropriately, and operating within their cost and performance expectations. Security & compliance teams must ensure overall security of their AI estate, including their AI infrastructure, platforms, data, apps, and agents. They need comprehensive visibility into all their security risks- including agent sprawl and resource access, data oversharing and leaks, AI threats and vulnerabilities, and complying with global regulations. They want to address these risks by extending their existing security investments that they are already invested in and familiar with, rather than using siloed or bolt-on tools. These teams can be most effective in delivering trustworthy AI to their organizations if security is natively integrated into the tools and platforms that they use every day, and if those tools and platforms share consistent security primitives such as agent identities from Entra; data security and compliance controls from Purview; and security posture, detections, and protections from Defender. With the new capabilities being released today, we are delivering observability at every layer of the AI stack, meeting IT, developers, and security teams where they are in the tools they already use to innovate with confidence. For IT Teams - Introducing Microsoft Agent 365, the control plane for agents, now in preview The best infrastructure for managing your agents is the one you already use to manage your users. With Agent 365, organizations can extend familiar tools and policies to confidently deploy and secure agents, without reinventing the wheel. By using the same trusted Microsoft 365 infrastructure, productivity apps, and protections, organizations can now apply consistent and familiar governance and security controls that are purpose-built to protect against agent-specific threats and risks. gement and governance of agents across organizations Microsoft Agent 365 delivers a unified agent Registry, Access Control, Visualization, Interoperability, and Security capabilities for your organization. These capabilities work together to help organizations manage agents and drive business value. The Registry powered by the Entra provides a complete and unified inventory of all the agents deployed and used in your organization including both Microsoft and third-party agents. Access Control allows you to limit the access privileges of your agents to only the resources that they need and protect their access to resources in real time. Visualization gives organizations the ability to see what matters most and gain insights through a unified dashboard, advanced analytics, and role-based reporting. Interop allows agents to access organizational data through Work IQ for added context, and to integrate with Microsoft 365 apps such as Outlook, Word, and Excel so they can create and collaborate alongside users. Security enables the proactive detection of vulnerabilities and misconfigurations, protects against common attacks such as prompt injections, prevents agents from processing or leaking sensitive data, and gives organizations the ability to audit agent interactions, assess compliance readiness and policy violations, and recommend controls for evolving regulatory requirements. Microsoft Agent 365 also includes the Agent 365 SDK, part of Microsoft Agent Framework, which empowers developers and ISVs to build agents on their own AI stack. The SDK enables agents to automatically inherit Microsoft's security and governance protections, such as identity controls, data security policies, and compliance capabilities, without the need for custom integration. For more details on Agent 365, read the blog here. For Developers - Introducing Microsoft Foundry Control Plane to observe, secure and manage agents, now in preview Developers are moving fast to bring agents into production, but operating them at scale introduces new challenges and responsibilities. Agents can access tools, take actions, and make decisions in real time, which means development teams must ensure that every agent behaves safely, securely, and consistently. Today, developers need to work across multiple disparate tools to get a holistic picture of the cybersecurity and safety risks that their agents may have. Once they understand the risk, they then need a unified and simplified way to monitor and manage their entire agent fleet and apply controls and guardrails as needed. Microsoft Foundry provides a unified platform for developers to build, evaluate and deploy AI apps and agents in a responsible way. Today we are excited to announce that Foundry Control Plane is available in preview. This enables developers to observe, secure, and manage their agent fleets with built-in security, and centralized governance controls. With this unified approach, developers can now identify risks and correlate disparate signals across their models, agents, and tools; enforce consistent policies and quality gates; and continuously monitor task adherence and runtime risks. Foundry Control Plane is deeply integrated with Microsoft’s security portfolio to provide a ‘secure by design’ foundation for developers. With Microsoft Entra, developers can ensure an agent identity (Agent ID) and access controls are built into every agent, mitigating the risk of unmanaged agents and over permissioned resources. With Microsoft Defender built in, developers gain contextualized alerts and posture recommendations for agents directly within the Foundry Control Plane. This integration proactively prevents configuration and access risks, while also defending agents from runtime threats in real time. Microsoft Purview’s native integration into Foundry Control Plane makes it easy to enable data security and compliance for every Foundry-built application or agent. This allows Purview to discover data security and compliance risks and apply policies to prevent user prompts and AI responses from safety and policy violations. In addition, agent interactions can be logged and searched for compliance and legal audits. This integration of the shared security capabilities, including identity and access, data security and compliance, and threat protection and posture ensures that security is not an afterthought; it’s embedded at every stage of the agent lifecycle, enabling you to start secure and stay secure. For more details, read the blog. For Security Teams - Introducing Security Dashboard for AI - unified risk visibility for CISOs and AI risk leaders, coming soon AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions across their entire AI estate, spanning AI platforms, apps, and agents. 90% of security professionals, including CISOs, report that their responsibilities have expanded to include data governance and AI oversight within the past year. 1 At the same time, 86% of risk managers say disconnected data and systems lead to duplicated efforts and gaps in risk coverage. 2 To address these needs, we are excited to introduce the Security Dashboard for AI. This serves as a unified dashboard that aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview. This unified dashboard allows CISOs and AI risk leaders to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. For example, you can see your full AI inventory and get visibility into a quarantined agent, flagged for high data risk due to oversharing sensitive information in Purview. The dashboard then correlates that signal with identity insights from Entra and threat protection alerts from Defender to provide a complete picture of exposure. From there, you can delegate tasks to the appropriate teams to enforce policies and remediate issues quickly. With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, there’s nothing new to buy. If you’re already using Microsoft security products to secure AI, you’re already a Security Dashboard for AI customer. Figure 5: Security Dashboard for AI provides CISOs and AI risk leaders with a unified view of their AI risk by bringing together their AI inventory, AI risk, and security recommendations to strengthen overall posture Together, these innovations deliver observability and security across IT, development, and security teams, powered by Microsoft’s shared security capabilities. With Microsoft Agent 365, IT teams can manage and secure agents alongside users. Foundry Control Plane gives developers unified governance and lifecycle controls for agent fleets. Security Dashboard for AI provides CISOs and AI risk leaders with a consolidated view of AI risks across platforms, apps, and agents. Added innovation to secure and govern your AI workloads In addition to the IT, developer, and security leader-focused innovations outlined above, we continue to accelerate our pace of innovation in Microsoft Entra, Microsoft Purview, and Microsoft Defender to address the most pressing needs for securing and governing your AI workloads. These needs are: Manage agent sprawl and resource access e.g. managing agent identity, access to resources, and permissions lifecycle at scale Prevent data oversharing and leaks e.g. protecting sensitive information shared in prompts, responses, and agent interactions Defend against shadow AI, new threats, and vulnerabilities e.g. managing unsanctioned applications, preventing prompt injection attacks, and detecting AI supply chain vulnerabilities Enable AI governance for regulatory compliance e.g. ensuring AI development, operations, and usage comply with evolving global regulations and frameworks Manage agent sprawl and resource access 76% of business leaders expect employees to manage agents within the next 2–3 years. 3 Widespread adoption of agents is driving the need for visibility and control, which includes the need for a unified registry, agent identities, lifecycle governance, and secure access to resources. Today, Microsoft Entra provides robust identity protection and secure access for applications and users. However, organizations lack a unified way to manage, govern, and protect agents in the same way they manage their users. Organizations need a purpose-built identity and access framework for agents. Introducing Microsoft Entra Agent ID, now in preview Microsoft Entra Agent ID offers enterprise-grade capabilities that enable organizations to prevent agent sprawl and protect agent identities and their access to resources. These new purpose-built capabilities enable organizations to: Register and manage agents: Get a complete inventory of the agent fleet and ensure all new agents are created with an identity built-in and are automatically protected by organization policies to accelerate adoption. Govern agent identities and lifecycle: Keep the agent fleet under control with lifecycle management and IT-defined guardrails for both agents and people who create and manage them. Protect agent access to resources: Reduce risk of breaches, block risky agents, and prevent agent access to malicious resources with conditional access and traffic inspection. Agents built in Microsoft Copilot Studio, Microsoft Foundry, and Security Copilot get an Entra Agent ID built-in at creation. Developers can also adopt Entra Agent ID for agents they build through Microsoft Agent Framework, Microsoft Agent 365 SDK, or Microsoft Entra Agent ID SDK. Read the Microsoft Entra blog to learn more. Prevent data oversharing and leaks Data security is more complex than ever. Information Security Media Group (ISMG) reports that 80% of leaders cite leakage of sensitive data as their top concern. 4 In addition to data security and compliance risks of generative AI (GenAI) apps, agents introduces new data risks such as unsupervised data access, highlighting the need to protect all types of corporate data, whether it is accessed by employees or agents. To mitigate these risks, we are introducing new Microsoft Purview data security and compliance capabilities for Microsoft 365 Copilot and for agents and AI apps built with Copilot Studio and Microsoft Foundry, providing unified protection, visibility, and control for users, AI Apps, and Agents. New Microsoft Purview controls safeguard Microsoft 365 Copilot with real-time protection and bulk remediation of oversharing risks Microsoft Purview and Microsoft 365 Copilot deliver a fully integrated solution for protecting sensitive data in AI workflows. Based on ongoing customer feedback, we’re introducing new capabilities to deliver real-time protection for sensitive data in M365 Copilot and accelerated remediation of oversharing risks: Data risk assessments: Previously, admins could monitor oversharing risks such as SharePoint sites with unprotected sensitive data. Now, they can perform item-level investigations and bulk remediation for overshared files in SharePoint and OneDrive to quickly reduce oversharing exposure. Data Loss Prevention (DLP) for M365 Copilot: DLP previously excluded files with sensitivity labels from Copilot processing. Now in preview, DLP also prevents prompts that include sensitive data from being processed in M365 Copilot, Copilot Chat, and Copilot agents, and prevents Copilot from using sensitive data in prompts for web grounding. Priority cleanup for M365 Copilot assets: Many organizations have org-wide policies to retain or delete data. Priority cleanup, now generally available, lets admins delete assets that are frequently processed by Copilot, such as meeting transcripts and recordings, on an independent schedule from the org-wide policies while maintaining regulatory compliance. On-demand classification for meeting transcripts: Purview can now detect sensitive information in meeting transcripts on-demand. This enables data security admins to apply DLP policies and enforce Priority cleanup based on the sensitive information detected. & bulk remediation Read the full Data Security blog to learn more. Introducing new Microsoft Purview data security capabilities for agents and apps built with Copilot Studio and Microsoft Foundry, now in preview Microsoft Purview now extends the same data security and compliance for users and Copilots to agents and apps. These new capabilities are: Enhanced Data Security Posture Management: A centralized DSPM dashboard that provides observability, risk assessment, and guided remediation across users, AI apps, and agents. Insider Risk Management (IRM) for Agents: Uniquely designed for agents, using dedicated behavioral analytics, Purview dynamically assigns risk levels to agents based on their risky handing of sensitive data and enables admins to apply conditional policies based on that risk level. Sensitive data protection with Azure AI Search: Azure AI Search enables fast, AI-driven retrieval across large document collections, essential for building AI Apps. When apps or agents use Azure AI Search to index or retrieve data, Purview sensitivity labels are preserved in the search index, ensuring that any sensitive information remains protected under the organization’s data security & compliance policies. For more information on preventing data oversharing and data leaks - Learn how Purview protects and governs agents in the Data Security and Compliance for Agents blog. Defend against shadow AI, new threats, and vulnerabilities AI workloads are subject to new AI-specific threats like prompt injections attacks, model poisoning, and data exfiltration of AI generated content. Although security admins and SOC analysts have similar tasks when securing agents, the attack methods and surfaces differ significantly. To help customers defend against these novel attacks, we are introducing new capabilities in Microsoft Defender that deliver end-to-end protection, from security posture management to runtime defense. Introducing Security Posture Management for agents, now in preview As organizations adopt AI agents to automate critical workflows, they become high-value targets and potential points of compromise, creating a critical need to ensure agents are hardened, compliant, and resilient by preventing misconfigurations and safeguarding against adversarial manipulation. Security Posture Management for agents in Microsoft Defender now provides an agent inventory for security teams across Microsoft Foundry and Copilot Studio agents. Here, analysts can assess the overall security posture of an agent, easily implement security recommendations, and identify vulnerabilities such as misconfigurations and excessive permissions, all aligned to the MITRE ATT&CK framework. Additionally, the new agent attack path analysis visualizes how an agent’s weak security posture can create broader organizational risk, so you can quickly limit exposure and prevent lateral movement. Introducing Threat Protection for agents, now in preview Attack techniques and attack surfaces for agents are fundamentally different from other assets in your environment. That’s why Defender is delivering purpose-built protections and detections to help defend against them. Defender is introducing runtime protection for Copilot Studio agents that automatically block prompt injection attacks in real time. In addition, we are announcing agent-specific threat detections for Copilot Studio and Microsoft Foundry agents coming soon. Defender automatically correlates these alerts with Microsoft’s industry-leading threat intelligence and cross-domain security signals to deliver richer, contextualized alerts and security incident views for the SOC analyst. Defender’s risk and threat signals are natively integrated into the new Microsoft Foundry Control Plane, giving development teams full observability and the ability to act directly from within their familiar environment. Finally, security analysts will be able to hunt across all agent telemetry in the Advanced Hunting experience in Defender, and the new Agent 365 SDK extends Defender’s visibility and hunting capabilities to third-party agents, starting with Genspark and Kasisto, giving security teams even more coverage across their AI landscape. To learn more about how you can harden the security posture of your agents and defend against threats, read the Microsoft Defender blog. Enable AI governance for regulatory compliance Global AI regulations like the EU AI Act and NIST AI RMF are evolving rapidly; yet, according to ISMG, 55% of leaders report lacking clarity on current and future AI regulatory requirements. 5 As enterprises adopt AI, they must ensure that their AI innovation aligns with global regulations and standards to avoid costly compliance gaps. Introducing new Microsoft Purview Compliance Manager capabilities to stay ahead of evolving AI regulations, now in preview Today, Purview Compliance Manager provides over 300 pre-built assessments for common industry, regional, and global standards and regulations. However, the pace of change for new AI regulations requires controls to be continuously re-evaluated and updated so that organizations can adapt to ongoing changes in regulations and stay compliant. To address this need, Compliance Manager now includes AI-powered regulatory templates. AI-powered regulatory templates enable real-time ingestion and analysis of global regulatory documents, allowing compliance teams to quickly adapt to changes as they happen. As regulations evolve, the updated regulatory documents can be uploaded to Compliance Manager, and the new requirements are automatically mapped to applicable recommended actions to implement controls across Microsoft Defender, Microsoft Entra, Microsoft Purview, Microsoft 365, and Microsoft Foundry. Automated actions by Compliance Manager further streamline governance, reduce manual workload, and strengthen regulatory accountability. Introducing expanded Microsoft Purview compliance capabilities for agents and AI apps now in preview Microsoft Purview now extends its compliance capabilities across agent-generated interactions, ensuring responsible use and regulatory alignment as AI becomes deeply embedded across business processes. New capabilities include expanded coverage for: Audit: Surface agent interactions, lifecycle events, and data usage with Purview Audit. Unified audit logs across user and agent activities, paired with traceability for every agent using an Entra Agent ID, support investigation, anomaly detection, and regulatory reporting. Communication Compliance: Detect prompts sent to agents and agent-generated responses containing inappropriate, unethical, or risky language, including attempts to manipulate agents into bypassing policies, generating risky content, or producing noncompliant outputs. When issues arise, data security admins get full context, including the prompt, the agent’s output, and relevant metadata, so they can investigate and take corrective action Data Lifecycle Management: Apply retention and deletion policies to agent-generated content and communication flows to automate lifecycle controls and reduce regulatory risk. Read about Microsoft Purview data security for agents to learn more. Finally, we are extending our data security, threat protection, and identity access capabilities to third-party apps and agents via the network. Advancing Microsoft Entra Internet Access Secure Web + AI Gateway - extend runtime protections to the network, now in preview Microsoft Entra Internet Access, part of the Microsoft Entra Suite, has new capabilities to secure access to and usage of GenAI at the network level, marking a transition from Secure Web Gateway to Secure Web and AI Gateway. Enterprises can accelerate GenAI adoption while maintaining compliance and reducing risk, empowering employees to experiment with new AI tools safely. The new capabilities include: Prompt injection protection which blocks malicious prompts in real time by extending Azure AI Prompt Shields to the network layer. Network file filtering which extends Microsoft Purview to inspect files in transit and prevents regulated or confidential data from being uploaded to unsanctioned AI services. Shadow AI Detection that provides visibility into unsanctioned AI applications through Cloud Application Analytics and Defender for Cloud Apps risk scoring, empowering security teams to monitor usage trends, apply Conditional Access, or block high-risk apps instantly. Unsanctioned MCP server blocking prevents access to MCP servers from unauthorized agents. With these controls, you can accelerate GenAI adoption while maintaining compliance and reducing risk, so employees can experiment with new AI tools safely. Read the Microsoft Entra blog to learn more. As AI transforms the enterprise, security must evolve to meet new challenges—spanning agent sprawl, data protection, emerging threats, and regulatory compliance. Our approach is to empower IT, developers, and security leaders with purpose-built innovations like Agent 365, Foundry Control Plane, and the Security Dashboard for AI. These solutions bring observability, governance, and protection to every layer of the AI stack, leveraging familiar tools and integrated controls across Microsoft Defender, Microsoft Entra, and Microsoft Purview. The future of security is ambient, autonomous, and deeply woven into the fabric of how we build, deploy, and govern AI systems. Explore additional resources Learn more about Security for AI solutions on our webpage Learn more about Microsoft Agent 365 Learn more about Microsoft Entra Agent ID Get started with Microsoft 365 Copilot Get started with Microsoft Copilot Studio Get started with Microsoft Foundry Get started with Microsoft Defender for Cloud Get started with Microsoft Entra Get started with Microsoft Purview Get started with Microsoft Purview Compliance Manager Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial 1 Bedrock Security, 2025 Data Security Confidence Index, published Mar 17, 2025. 2 AuditBoard & Ascend2, Connected Risk Report 2024; as cited by MIT Sloan Management Review, Spring 2025. 3 KPMG AI Quarterly Pulse Survey | Q3 2025. September 2025. n= 130 U.S.-based C-suite and business leaders representing organizations with annual revenue of $1 billion or more 4 First Annual Generative AI study: Business Rewards vs. Security Risks, , Q3 2023, ISMG, N=400 5 First Annual Generative AI study: Business Rewards vs. Security Risks, Q3 2023, ISMG, N=400