Security Guidance Series: CAF 4.0 Understanding Threat - From Awareness to Intelligence-Led Defence
The updated CAF 4.0 raises expectations around control A2.b - Understanding Threat. Rather than focusing solely on awareness of common cyber-attacks, the framework now calls for a sector-specific, intelligence-informed understanding of the threat landscape. According to the NCSC, CAF 4.0 emphasizes the need for detailed threat analysis that reflects the tactics, techniques, and resources of capable adversaries, and requires that this understanding directly shapes security and resilience decisions.

For public sector authorities, this means going beyond static risk registers to build a living threat model that evolves alongside digital transformation and service delivery. Public sector authorities need to know which systems and datasets are most exposed, from citizen records and clinical information to education systems, operational platforms, and payment gateways, and anticipate how an attacker might exploit them to disrupt essential services. To support this higher level of maturity, Microsoft's security ecosystem helps public sector authorities turn threat intelligence into actionable understanding, directly aligning with CAF 4.0's Achieved criteria for control A2.b.

Microsoft E3 - Building Foundational Awareness

Microsoft E3 provides public sector authorities with the foundational capabilities to start aligning with CAF 4.0 A2.b by enabling awareness of common threats and applying that awareness to risk decisions. At this maturity level, organizations typically reach Partially Achieved, where threat understanding is informed by incidents rather than proactive analysis.

How E3 contributes to Contributing Outcome A2.b:

- Visibility of basic threats: Defender for Endpoint Plan 1 surfaces malware and unsafe application activity, giving organizations insight into how adversaries exploit endpoints. This telemetry helps identify initial attacker entry points and informs reactive containment measures.
- Identity risk reduction: Entra ID P1 enforces MFA and blocks legacy authentication, mitigating common credential-based attacks. These controls reduce the likelihood of compromise at early stages of an attacker's path.
- Incident-driven learning: Alerts and Security & Compliance Centre reports allow organizations to review how attacks unfolded, supporting documentation of observed techniques and feeding lessons into risk decisions.

What's missing for Achieved: To fully meet contributing outcome A2.b, public sector organizations must evolve from incident-driven awareness to structured, intelligence-led threat analysis. This involves anticipating probable attack methods, developing plausible scenarios, and maintaining a current threat picture through proactive hunting and threat intelligence. These capabilities extend beyond the E3 baseline and require advanced analytics and dedicated platforms.

Microsoft E5 - Advancing to Intelligence-Led Defence

Where E3 establishes the foundation for identifying and documenting known threats, Microsoft E5 helps public sector organizations progress toward the Achieved level of CAF control A2.b by delivering continuous, intelligence-driven analysis across every attack surface.

How E5 aligns with Contributing Outcome A2.b:

- Detailed, up-to-date view of attacker paths: At the core of E5 is Defender XDR, which correlates telemetry from Defender for Endpoint Plan 2, Defender for Office 365 Plan 2, Defender for Identity, and Defender for Cloud Apps. This unified view reveals how attackers move laterally between devices, identities, and SaaS applications, directly supporting CAF's requirement to understand probable attack methods and the steps needed to reach critical targets.
- Advanced hunting and scenario development: Defender for Endpoint P2 introduces advanced hunting via Kusto Query Language (KQL) and behavioural analytics. Analysts can query historical data to uncover persistence mechanisms or privilege escalation techniques, helping organizations anticipate attack chains and develop plausible scenarios, a key expectation under A2.b.
- Email and collaboration threat modelling: Defender for Office 365 P2 detects targeted phishing, business email compromise, and credential harvesting campaigns. Attack Simulation Training adds proactive testing of social engineering techniques, helping organizations maintain awareness of evolving attacker tradecraft and refine mitigations.
- Identity-focused threat analysis: Defender for Identity and Entra ID P2 expose lateral movement, credential abuse, and risky sign-ins. By mapping tactics and techniques against frameworks like MITRE ATT&CK, organizations can gain the attacker's perspective on identity systems, fulfilling CAF's call to view networks from a threat actor's lens.
- Cloud application risk visibility: Defender for Cloud Apps highlights shadow IT and potential data exfiltration routes, helping organizations document and justify controls at each step of the attack chain.
- Continuous threat intelligence: Microsoft Threat Intelligence enriches detections with global and sector-specific insights on active adversary groups, emerging malware, and infrastructure trends. This sustained feed helps organizations maintain a detailed understanding of current threats, informing risk decisions and prioritization.

Why this meets Achieved: E5 capabilities help organizations move beyond reactive alerting to a structured, intelligence-led approach. Threat knowledge is continuously updated, scenarios are documented, and controls are justified at each stage of the attacker path, supporting CAF control A2.b's expectation that threat understanding informs risk management and defensive prioritization.
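To make the advanced hunting described above concrete, the following KQL sketch looks for a common persistence technique (MITRE ATT&CK T1547.001, registry run keys) in Defender for Endpoint P2 telemetry. The table and column names come from the standard advanced hunting schema; the 30-day window and the process exclusion list are illustrative assumptions you would tune to your own environment.

```kusto
// Hunt for new autostart entries written under Run/RunOnce keys
// (persistence, MITRE ATT&CK T1547.001) over the last 30 days.
DeviceRegistryEvents
| where Timestamp > ago(30d)
| where ActionType == "RegistryValueSet"
| where RegistryKey contains @"\Microsoft\Windows\CurrentVersion\Run"
// Illustrative exclusions - tune to known-good installers in your estate
| where InitiatingProcessFileName !in~ ("msiexec.exe", "setup.exe")
| project Timestamp, DeviceName, RegistryKey, RegistryValueName,
          RegistryValueData, InitiatingProcessFileName, InitiatingProcessCommandLine
| sort by Timestamp desc
```

Findings from a query like this feed directly into the scenario development A2.b expects: each hit is a concrete step on a plausible attacker path against which a control can then be justified.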
Sentinel

While Microsoft E5 delivers deep visibility across endpoints, identities, and applications, Microsoft Sentinel acts as the unifying layer that helps transform these insights into a comprehensive, evidence-based threat model, a core expectation of Achieved maturity under CAF 4.0 A2.b.

How Sentinel enables Achieved outcomes:

- Comprehensive attack-chain visibility: As a cloud-native SIEM and SOAR, Sentinel ingests telemetry from Microsoft and non-Microsoft sources, including firewalls, OT environments, legacy servers, and third-party SaaS platforms. By correlating these diverse signals into a single analytical view, Sentinel allows defenders to visualize the entire attack chain, from initial reconnaissance through lateral movement and data exfiltration. This directly supports CAF's requirement to understand how capable, well-resourced actors could systematically target essential systems.
- Attacker-centric analysis and scenario building: Sentinel's Analytics Rules and MITRE ATT&CK-aligned detections provide a structured lens on tactics and techniques. Security teams can use Kusto Query Language (KQL) and advanced hunting to identify anomalies, map adversary behaviours, and build plausible threat scenarios, addressing CAF's expectation to anticipate probable attack methods and justify mitigations at each step.
- Threat intelligence integration: Sentinel enriches local telemetry with intelligence from trusted sources such as the NCSC and Microsoft's global network. This helps organizations maintain a current, sector-specific understanding of threats, applying that knowledge to prioritize risk treatment and policy decisions, a defining characteristic of Achieved maturity.
- Automation and repeatable processes: Sentinel's SOAR capabilities operationalize intelligence through automated playbooks that contain threats, isolate compromised assets, and trigger investigation workflows. These workflows create a documented, repeatable process for threat analysis and response, reinforcing CAF's emphasis on continuous learning and refinement.

This video brings CAF A2.b - Understanding Threat - to life, showing how public sector organizations can use Microsoft security tools to build a clear, intelligence-led view of attacker behaviour and meet the expectations of CAF 4.0.

Why this meets Achieved: By consolidating telemetry, threat intelligence, and automated response into one platform, Sentinel elevates public sector organizations from isolated detection to an integrated, intelligence-led defence posture. Every alert, query, and playbook contributes to an evolving organization-wide threat model, supporting CAF A2.b's requirement for detailed, proactive, and documented threat understanding.

CAF 4.0 challenges every public-sector organization to think like a threat actor, to understand not just what could go wrong, but how and why. Does your organization have the visibility, intelligence, and confidence to turn that understanding into proactive defence? To illustrate how this contributing outcome can be achieved in practice, the one-slider and demo show how Microsoft's security capabilities help organizations build the detailed, intelligence-informed threat picture expected by CAF 4.0. These examples turn A2.b's requirements into actionable steps for organizations.

In the next article, we'll explore C2 - Threat Hunting: moving from detection to anticipation and embedding proactive resilience as a daily capability.

Security Guidance Series: CAF 4.0 Building Proactive Cyber Resilience
It's Time To Act

Microsoft's Digital Defense Report 2025 clearly describes the cyber threat landscape this guidance is situated in, one that has become more complex, more industrialized, and increasingly democratized. Each day, Microsoft processes more than 100 trillion security signals, giving unparalleled visibility into adversarial tradecraft. Identity remains the most heavily targeted attack vector, with 97% of identity-based attacks relying on password spray, while phishing and unpatched assets continue to provide easy routes for initial compromise. Financially motivated attacks, particularly ransomware and extortion, now make up over half of global incidents, and nation-state operators continue to target critical sectors, including IT, telecommunications, and government networks. AI is accelerating both sides of the equation: enhancing attacker capability, lowering barriers to entry through open-source models, and simultaneously powering more automated, intelligence-driven defence. Alongside this, emerging risks such as quantum computing underline the urgency of preparing today for tomorrow's threats. Cybersecurity has therefore become a strategic imperative shaping national resilience and demanding genuine cross-sector collaboration to mitigate systemic risk.

It is within this environment that UK public sector organizations are rethinking their approach to cyber resilience. As an Account Executive Apprentice in the Local Public Services team here at Microsoft, I have seen these organizations move beyond checklists and compliance toward a culture of continuous improvement and intelligence-led defence. When we talk about the UK public sector in this series, we are referring specifically to central government departments, local government authorities, health and care organizations (including the NHS), education institutions, and public safety services such as police, fire, and ambulance.
These organizations form a deeply interconnected ecosystem delivering essential services to millions of citizens every day, making cyber resilience not just a technical requirement but a foundation of public trust. Against this backdrop, the UK public sector is entering a new era of cyber resilience with the release of CAF 4.0, the latest evolution of the National Cyber Security Centre’s Cyber Assessment Framework. This guidance has been developed in consultation with national cyber security experts, including the UK’s National Cyber Security Centre (NCSC), and is an aggregation of knowledge and internationally recognized expertise. Building on the foundations of CAF 3.2, this update marks a decisive shift, like moving from a static map to a live radar. Instead of looking back at where threats once were, organizations can now better anticipate them and adjust their digital defences in real time. For the UK’s public sector, this transformation could not be timelier. The complexity of digital public services, combined with the growing threat of ransomware, insider threat, supply chain compromise, and threats from nation state actors, demands a faster, smarter, and more connected approach to resilience. Where CAF 3.2 focused on confirming the presence and effectiveness of security measures, CAF 4.0 places greater emphasis on developing organizational capability and improving resilience in a more dynamic threat environment. While the CAF remains an outcome-based framework, not a maturity model, it is structured around Objectives, Principles, and Contributing Outcomes, with each contributing outcome supported by Indicators of Good Practice. For simplicity, I refer to these contributing outcomes as “controls” throughout this blog and use that term to describe the practical expectations organizations are assessed against. 
CAF 4.0 challenges organizations not only to understand the threats they face but to anticipate, detect, and respond in a more informed and adaptive way. Two contributing outcomes exemplify this proactive mindset: A2.b Understanding Threat and C2 Threat Hunting. Together, they represent what it truly means to understand your adversaries and act before harm occurs.

For the UK's public sector, achieving these new objectives may seem daunting, but the path forward is clearer than ever. Many organizations are already beginning this journey, supported by technologies that help turn insight into action and coordination into resilience. At Microsoft, we've seen how tools like E3, E5, and Sentinel are already helping public sector teams move from reactive to intelligence-driven security operations. Over the coming weeks, we'll explore how these capabilities align to CAF 4.0's core principles and share practical examples of how councils can strengthen their resilience journey through smarter visibility, automation, and collaboration.

CAF 4.0 vs CAF 3.2 - What's Changed and Why It Matters

The move from CAF 3.2 to CAF 4.0 represents a fundamental shift in how the UK public sector builds cyber resilience. The focus is no longer on whether controls exist - it is on whether they work, adapt, and improve over time. CAF 4.0 puts maturity at the centre. It pushes organizations to evolve from compliance checklists to operational capability, adopting a threat-informed, intelligence-led, and proactive security posture by design.

CAF 4.0 raises the bar for cyber maturity across the public sector. It calls for departments and authorities to build on existing foundations and embrace live threat intelligence, behavioural analytics, and structured threat hunting to stay ahead of adversaries. By understanding how attackers might target essential services and adapting controls in real time, organizations can evolve from awareness to active defence.
Today's threat actors are agile, persistent, and increasingly well-resourced, which means reactive measures are no longer enough. CAF 4.0 positions resilience as a continuous process of learning, adapting, and improving, supported by data-driven insights and modern security operations.

CAF 4.0 is reshaping how the UK's public sector approaches security maturity. In the coming weeks, we'll explore what this looks like in practice, starting with how to build a deeper understanding of threat (control A2.b) and elevate threat hunting (control C2) into an everyday capability, using the tools and insights available within existing Microsoft E3 and E5 licences to help support these objectives. Until then, how ready is your organization to turn insight into action?

Security as the core primitive - Securing AI agents and apps
This week at Microsoft Ignite, we shared our vision for Microsoft security: in the agentic era, security must be ambient and autonomous, like the AI it protects. It must be woven into and around everything we build, from silicon to OS, to agents, apps, data, platforms, and clouds, and throughout everything we do. In this blog, we are going to dive deeper into many of the new innovations we are introducing this week to secure AI agents and apps.

As I spend time with our customers and partners, four consistent themes have emerged as the core challenges of securing AI workloads: preventing agent sprawl and uncontrolled access to resources, protecting against data oversharing and data leaks, defending against new AI threats and vulnerabilities, and adhering to evolving regulations. Addressing these challenges holistically requires a coordinated effort across IT, developers, and security leaders, not just within security teams. To enable this, we are introducing several new innovations: Microsoft Agent 365 for IT, Foundry Control Plane in Microsoft Foundry for developers, and the Security Dashboard for AI for security leaders. In addition, we are releasing several new purpose-built capabilities to protect and govern AI apps and agents across Microsoft Defender, Microsoft Entra, and Microsoft Purview.

Observability at every layer of the stack

To facilitate the organization-wide effort that it takes to secure and govern AI agents and apps, IT, developers, and security leaders need observability (security, management, and monitoring) at every level. IT teams need to enable the development and deployment of any agent in their environment. To ensure the responsible and secure deployment of agents into an organization, IT needs a unified agent registry, the ability to assign an identity to every agent, manage the agent's access to data and resources, and manage the agent's entire lifecycle.
In addition, IT needs to be able to assign access to common productivity and collaboration tools, such as email and file storage, and be able to observe the entire agent estate for risks such as over-permissioned agents.

Development teams need to build and test agents, apply security and compliance controls by default, and ensure AI models are evaluated for safety guardrails and security vulnerabilities. Post deployment, development teams must observe agents to ensure they are staying on task, accessing applications and data sources appropriately, and operating within their cost and performance expectations.

Security and compliance teams must ensure overall security of their AI estate, including their AI infrastructure, platforms, data, apps, and agents. They need comprehensive visibility into all their security risks, including agent sprawl and resource access, data oversharing and leaks, AI threats and vulnerabilities, and compliance with global regulations. They want to address these risks by extending the security investments they are already familiar with, rather than using siloed or bolt-on tools. These teams can be most effective in delivering trustworthy AI to their organizations if security is natively integrated into the tools and platforms that they use every day, and if those tools and platforms share consistent security primitives such as agent identities from Entra; data security and compliance controls from Purview; and security posture, detections, and protections from Defender.

With the new capabilities being released today, we are delivering observability at every layer of the AI stack, meeting IT, developers, and security teams where they are, in the tools they already use, so they can innovate with confidence.

For IT Teams - Introducing Microsoft Agent 365, the control plane for agents, now in preview

The best infrastructure for managing your agents is the one you already use to manage your users.
With Agent 365, organizations can extend familiar tools and policies to confidently deploy and secure agents, without reinventing the wheel. By using the same trusted Microsoft 365 infrastructure, productivity apps, and protections, organizations can now apply consistent and familiar governance and security controls that are purpose-built to protect against agent-specific threats and risks.

Figure: Management and governance of agents across organizations

Microsoft Agent 365 delivers unified agent Registry, Access Control, Visualization, Interoperability, and Security capabilities for your organization. These capabilities work together to help organizations manage agents and drive business value.

- The Registry, powered by Microsoft Entra, provides a complete and unified inventory of all the agents deployed and used in your organization, including both Microsoft and third-party agents.
- Access Control allows you to limit the access privileges of your agents to only the resources that they need and protect their access to resources in real time.
- Visualization gives organizations the ability to see what matters most and gain insights through a unified dashboard, advanced analytics, and role-based reporting.
- Interoperability allows agents to access organizational data through Work IQ for added context, and to integrate with Microsoft 365 apps such as Outlook, Word, and Excel so they can create and collaborate alongside users.
- Security enables the proactive detection of vulnerabilities and misconfigurations, protects against common attacks such as prompt injections, prevents agents from processing or leaking sensitive data, and gives organizations the ability to audit agent interactions, assess compliance readiness and policy violations, and recommend controls for evolving regulatory requirements.

Microsoft Agent 365 also includes the Agent 365 SDK, part of Microsoft Agent Framework, which empowers developers and ISVs to build agents on their own AI stack.
The SDK enables agents to automatically inherit Microsoft's security and governance protections, such as identity controls, data security policies, and compliance capabilities, without the need for custom integration. For more details on Agent 365, read the blog here.

For Developers - Introducing Microsoft Foundry Control Plane to observe, secure and manage agents, now in preview

Developers are moving fast to bring agents into production, but operating them at scale introduces new challenges and responsibilities. Agents can access tools, take actions, and make decisions in real time, which means development teams must ensure that every agent behaves safely, securely, and consistently. Today, developers need to work across multiple disparate tools to get a holistic picture of the cybersecurity and safety risks their agents may carry. Once they understand the risk, they then need a unified and simplified way to monitor and manage their entire agent fleet and apply controls and guardrails as needed.

Microsoft Foundry provides a unified platform for developers to build, evaluate, and deploy AI apps and agents in a responsible way. Today we are excited to announce that Foundry Control Plane is available in preview. It enables developers to observe, secure, and manage their agent fleets with built-in security and centralized governance controls. With this unified approach, developers can now identify risks and correlate disparate signals across their models, agents, and tools; enforce consistent policies and quality gates; and continuously monitor task adherence and runtime risks.

Foundry Control Plane is deeply integrated with Microsoft's security portfolio to provide a 'secure by design' foundation for developers. With Microsoft Entra, developers can ensure an agent identity (Agent ID) and access controls are built into every agent, mitigating the risk of unmanaged agents and over-permissioned resources.
With Microsoft Defender built in, developers gain contextualized alerts and posture recommendations for agents directly within the Foundry Control Plane. This integration proactively prevents configuration and access risks, while also defending agents from runtime threats in real time. Microsoft Purview's native integration into Foundry Control Plane makes it easy to enable data security and compliance for every Foundry-built application or agent. This allows Purview to discover data security and compliance risks and apply policies that prevent user prompts and AI responses from violating safety and compliance requirements. In addition, agent interactions can be logged and searched for compliance and legal audits.

This integration of shared security capabilities, including identity and access, data security and compliance, and threat protection and posture, ensures that security is not an afterthought; it is embedded at every stage of the agent lifecycle, enabling you to start secure and stay secure. For more details, read the blog.

For Security Teams - Introducing Security Dashboard for AI - unified risk visibility for CISOs and AI risk leaders, coming soon

AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions, across their entire AI estate, spanning AI platforms, apps, and agents. 90% of security professionals, including CISOs, report that their responsibilities have expanded to include data governance and AI oversight within the past year.1 At the same time, 86% of risk managers say disconnected data and systems lead to duplicated efforts and gaps in risk coverage.2 To address these needs, we are excited to introduce the Security Dashboard for AI.
This serves as a unified dashboard that aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview. It allows CISOs and AI risk leaders to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. For example, you can see your full AI inventory and get visibility into a quarantined agent flagged for high data risk due to oversharing sensitive information in Purview. The dashboard then correlates that signal with identity insights from Entra and threat protection alerts from Defender to provide a complete picture of exposure. From there, you can delegate tasks to the appropriate teams to enforce policies and remediate issues quickly.

With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms, eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, there's nothing new to buy. If you're already using Microsoft security products to secure AI, you're already a Security Dashboard for AI customer.

Figure 5: Security Dashboard for AI provides CISOs and AI risk leaders with a unified view of their AI risk by bringing together their AI inventory, AI risk, and security recommendations to strengthen overall posture

Together, these innovations deliver observability and security across IT, development, and security teams, powered by Microsoft's shared security capabilities. With Microsoft Agent 365, IT teams can manage and secure agents alongside users. Foundry Control Plane gives developers unified governance and lifecycle controls for agent fleets. Security Dashboard for AI provides CISOs and AI risk leaders with a consolidated view of AI risks across platforms, apps, and agents.
Added innovation to secure and govern your AI workloads

In addition to the IT, developer, and security leader-focused innovations outlined above, we continue to accelerate our pace of innovation in Microsoft Entra, Microsoft Purview, and Microsoft Defender to address the most pressing needs for securing and governing your AI workloads. These needs are:

- Manage agent sprawl and resource access, e.g. managing agent identity, access to resources, and permissions lifecycle at scale
- Prevent data oversharing and leaks, e.g. protecting sensitive information shared in prompts, responses, and agent interactions
- Defend against shadow AI, new threats, and vulnerabilities, e.g. managing unsanctioned applications, preventing prompt injection attacks, and detecting AI supply chain vulnerabilities
- Enable AI governance for regulatory compliance, e.g. ensuring AI development, operations, and usage comply with evolving global regulations and frameworks

Manage agent sprawl and resource access

76% of business leaders expect employees to manage agents within the next 2–3 years.3 Widespread adoption of agents is driving the need for visibility and control, which includes the need for a unified registry, agent identities, lifecycle governance, and secure access to resources. Today, Microsoft Entra provides robust identity protection and secure access for applications and users. However, organizations lack a unified way to manage, govern, and protect agents in the same way they manage their users. Organizations need a purpose-built identity and access framework for agents.

Introducing Microsoft Entra Agent ID, now in preview

Microsoft Entra Agent ID offers enterprise-grade capabilities that enable organizations to prevent agent sprawl and protect agent identities and their access to resources.
These new purpose-built capabilities enable organizations to:

- Register and manage agents: Get a complete inventory of the agent fleet and ensure all new agents are created with an identity built in and are automatically protected by organization policies to accelerate adoption.
- Govern agent identities and lifecycle: Keep the agent fleet under control with lifecycle management and IT-defined guardrails for both agents and the people who create and manage them.
- Protect agent access to resources: Reduce the risk of breaches, block risky agents, and prevent agent access to malicious resources with conditional access and traffic inspection.

Agents built in Microsoft Copilot Studio, Microsoft Foundry, and Security Copilot get an Entra Agent ID built in at creation. Developers can also adopt Entra Agent ID for agents they build through Microsoft Agent Framework, Microsoft Agent 365 SDK, or Microsoft Entra Agent ID SDK. Read the Microsoft Entra blog to learn more.

Prevent data oversharing and leaks

Data security is more complex than ever. Information Security Media Group (ISMG) reports that 80% of leaders cite leakage of sensitive data as their top concern.4 In addition to the data security and compliance risks of generative AI (GenAI) apps, agents introduce new data risks such as unsupervised data access, highlighting the need to protect all types of corporate data, whether it is accessed by employees or agents. To mitigate these risks, we are introducing new Microsoft Purview data security and compliance capabilities for Microsoft 365 Copilot and for agents and AI apps built with Copilot Studio and Microsoft Foundry, providing unified protection, visibility, and control for users, AI apps, and agents.

New Microsoft Purview controls safeguard Microsoft 365 Copilot with real-time protection and bulk remediation of oversharing risks

Microsoft Purview and Microsoft 365 Copilot deliver a fully integrated solution for protecting sensitive data in AI workflows.
Based on ongoing customer feedback, we’re introducing new capabilities to deliver real-time protection for sensitive data in M365 Copilot and accelerated remediation of oversharing risks: Data risk assessments: Previously, admins could monitor oversharing risks such as SharePoint sites with unprotected sensitive data. Now, they can perform item-level investigations and bulk remediation for overshared files in SharePoint and OneDrive to quickly reduce oversharing exposure. Data Loss Prevention (DLP) for M365 Copilot: DLP previously excluded files with sensitivity labels from Copilot processing. Now in preview, DLP also prevents prompts that include sensitive data from being processed in M365 Copilot, Copilot Chat, and Copilot agents, and prevents Copilot from using sensitive data in prompts for web grounding. Priority cleanup for M365 Copilot assets: Many organizations have org-wide policies to retain or delete data. Priority cleanup, now generally available, lets admins delete assets that are frequently processed by Copilot, such as meeting transcripts and recordings, on an independent schedule from the org-wide policies while maintaining regulatory compliance. On-demand classification for meeting transcripts: Purview can now detect sensitive information in meeting transcripts on-demand. This enables data security admins to apply DLP policies and enforce Priority cleanup based on the sensitive information detected. & bulk remediation Read the full Data Security blog to learn more. Introducing new Microsoft Purview data security capabilities for agents and apps built with Copilot Studio and Microsoft Foundry, now in preview Microsoft Purview now extends the same data security and compliance for users and Copilots to agents and apps. These new capabilities are: Enhanced Data Security Posture Management: A centralized DSPM dashboard that provides observability, risk assessment, and guided remediation across users, AI apps, and agents. 
Insider Risk Management (IRM) for Agents: Uniquely designed for agents, using dedicated behavioral analytics, Purview dynamically assigns risk levels to agents based on their risky handling of sensitive data and enables admins to apply conditional policies based on that risk level. Sensitive data protection with Azure AI Search: Azure AI Search enables fast, AI-driven retrieval across large document collections, essential for building AI apps. When apps or agents use Azure AI Search to index or retrieve data, Purview sensitivity labels are preserved in the search index, ensuring that any sensitive information remains protected under the organization’s data security and compliance policies. For more information on preventing data oversharing and data leaks, learn how Purview protects and governs agents in the Data Security and Compliance for Agents blog. Defend against shadow AI, new threats, and vulnerabilities AI workloads are subject to new AI-specific threats like prompt injection attacks, model poisoning, and data exfiltration of AI-generated content. Although security admins and SOC analysts have similar tasks when securing agents, the attack methods and surfaces differ significantly. To help customers defend against these novel attacks, we are introducing new capabilities in Microsoft Defender that deliver end-to-end protection, from security posture management to runtime defense. Introducing Security Posture Management for agents, now in preview As organizations adopt AI agents to automate critical workflows, the agents become high-value targets and potential points of compromise, creating a critical need to ensure they are hardened, compliant, and resilient by preventing misconfigurations and safeguarding against adversarial manipulation. Security Posture Management for agents in Microsoft Defender now provides an agent inventory for security teams across Microsoft Foundry and Copilot Studio agents.
Here, analysts can assess the overall security posture of an agent, easily implement security recommendations, and identify vulnerabilities such as misconfigurations and excessive permissions, all aligned to the MITRE ATT&CK framework. Additionally, the new agent attack path analysis visualizes how an agent’s weak security posture can create broader organizational risk, so you can quickly limit exposure and prevent lateral movement. Introducing Threat Protection for agents, now in preview Attack techniques and attack surfaces for agents are fundamentally different from those of other assets in your environment. That’s why Defender is delivering purpose-built protections and detections to help defend against them. Defender is introducing runtime protection for Copilot Studio agents that automatically blocks prompt injection attacks in real time. In addition, we are announcing agent-specific threat detections for Copilot Studio and Microsoft Foundry agents, coming soon. Defender automatically correlates these alerts with Microsoft’s industry-leading threat intelligence and cross-domain security signals to deliver richer, contextualized alerts and security incident views for the SOC analyst. Defender’s risk and threat signals are natively integrated into the new Microsoft Foundry Control Plane, giving development teams full observability and the ability to act directly from within their familiar environment. Finally, security analysts will be able to hunt across all agent telemetry in the Advanced Hunting experience in Defender, and the new Agent 365 SDK extends Defender’s visibility and hunting capabilities to third-party agents, starting with Genspark and Kasisto, giving security teams even more coverage across their AI landscape. To learn more about how you can harden the security posture of your agents and defend against threats, read the Microsoft Defender blog.
Enable AI governance for regulatory compliance Global AI regulations and frameworks like the EU AI Act and the NIST AI RMF are evolving rapidly; yet, according to ISMG, 55% of leaders report lacking clarity on current and future AI regulatory requirements. 5 As enterprises adopt AI, they must ensure that their AI innovation aligns with global regulations and standards to avoid costly compliance gaps. Introducing new Microsoft Purview Compliance Manager capabilities to stay ahead of evolving AI regulations, now in preview Today, Purview Compliance Manager provides over 300 pre-built assessments for common industry, regional, and global standards and regulations. However, the pace of change in AI regulation requires controls to be continuously re-evaluated and updated so that organizations can adapt to ongoing regulatory changes and stay compliant. To address this need, Compliance Manager now includes AI-powered regulatory templates, which enable real-time ingestion and analysis of global regulatory documents, allowing compliance teams to quickly adapt to changes as they happen. As regulations evolve, the updated regulatory documents can be uploaded to Compliance Manager, and the new requirements are automatically mapped to applicable recommended actions to implement controls across Microsoft Defender, Microsoft Entra, Microsoft Purview, Microsoft 365, and Microsoft Foundry. Automated actions by Compliance Manager further streamline governance, reduce manual workload, and strengthen regulatory accountability. Introducing expanded Microsoft Purview compliance capabilities for agents and AI apps, now in preview Microsoft Purview now extends its compliance capabilities across agent-generated interactions, ensuring responsible use and regulatory alignment as AI becomes deeply embedded across business processes. New capabilities include expanded coverage for: Audit: Surface agent interactions, lifecycle events, and data usage with Purview Audit.
Unified audit logs across user and agent activities, paired with traceability for every agent using an Entra Agent ID, support investigation, anomaly detection, and regulatory reporting. Communication Compliance: Detect prompts sent to agents and agent-generated responses containing inappropriate, unethical, or risky language, including attempts to manipulate agents into bypassing policies, generating risky content, or producing noncompliant outputs. When issues arise, data security admins get full context, including the prompt, the agent’s output, and relevant metadata, so they can investigate and take corrective action. Data Lifecycle Management: Apply retention and deletion policies to agent-generated content and communication flows to automate lifecycle controls and reduce regulatory risk. Read about Microsoft Purview data security for agents to learn more. Finally, we are extending our data security, threat protection, and identity access capabilities to third-party apps and agents via the network. Advancing Microsoft Entra Internet Access Secure Web + AI Gateway - extend runtime protections to the network, now in preview Microsoft Entra Internet Access, part of the Microsoft Entra Suite, has new capabilities to secure access to and usage of GenAI at the network level, marking a transition from Secure Web Gateway to Secure Web and AI Gateway. Enterprises can accelerate GenAI adoption while maintaining compliance and reducing risk, empowering employees to experiment with new AI tools safely. The new capabilities include: Prompt injection protection, which blocks malicious prompts in real time by extending Azure AI Prompt Shields to the network layer. Network file filtering, which extends Microsoft Purview to inspect files in transit and prevents regulated or confidential data from being uploaded to unsanctioned AI services.
Shadow AI Detection that provides visibility into unsanctioned AI applications through Cloud Application Analytics and Defender for Cloud Apps risk scoring, empowering security teams to monitor usage trends, apply Conditional Access, or block high-risk apps instantly. Unsanctioned MCP server blocking prevents access to MCP servers from unauthorized agents. With these controls, you can accelerate GenAI adoption while maintaining compliance and reducing risk, so employees can experiment with new AI tools safely. Read the Microsoft Entra blog to learn more. As AI transforms the enterprise, security must evolve to meet new challenges—spanning agent sprawl, data protection, emerging threats, and regulatory compliance. Our approach is to empower IT, developers, and security leaders with purpose-built innovations like Agent 365, Foundry Control Plane, and the Security Dashboard for AI. These solutions bring observability, governance, and protection to every layer of the AI stack, leveraging familiar tools and integrated controls across Microsoft Defender, Microsoft Entra, and Microsoft Purview. The future of security is ambient, autonomous, and deeply woven into the fabric of how we build, deploy, and govern AI systems. Explore additional resources Learn more about Security for AI solutions on our webpage Learn more about Microsoft Agent 365 Learn more about Microsoft Entra Agent ID Get started with Microsoft 365 Copilot Get started with Microsoft Copilot Studio Get started with Microsoft Foundry Get started with Microsoft Defender for Cloud Get started with Microsoft Entra Get started with Microsoft Purview Get started with Microsoft Purview Compliance Manager Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial 1 Bedrock Security, 2025 Data Security Confidence Index, published Mar 17, 2025. 2 AuditBoard & Ascend2, Connected Risk Report 2024; as cited by MIT Sloan Management Review, Spring 2025. 3 KPMG AI Quarterly Pulse Survey | Q3 2025. 
September 2025. n = 130 U.S.-based C-suite and business leaders representing organizations with annual revenue of $1 billion or more. 4 First Annual Generative AI study: Business Rewards vs. Security Risks, Q3 2023, ISMG, N=400. 5 First Annual Generative AI study: Business Rewards vs. Security Risks, Q3 2023, ISMG, N=400.

Microsoft Sentinel MCP server - Generally Available With Exciting New Capabilities
Today, we’re excited to announce the General Availability of Microsoft Sentinel MCP (Model Context Protocol) server, a fully managed cloud service built on an open standard that empowers AI agents to seamlessly access your entire security context through natural language, eliminating the need for complex data engineering as you build agents. This unlocks new levels of AI agent performance and effectiveness, enabling them to do more for you. Since the public preview launch on September 30, hundreds of customers have explored MCP tools that provide semantic access to their entire security context. These tools allow security AI agents to operate with unprecedented precision by understanding your unique security context in natural language. Today, we’re introducing multiple innovations and new capabilities designed to help even more customers unlock more with AI-driven security. This post offers a high-level overview of what’s new. Stay tuned for deep-dive blogs that will unpack each feature in detail. Connect to Sentinel MCP server from Multiple AI Platforms By adopting the MCP open standard, we can progress on our mission to empower effective AI agents wherever you choose to run them. Beyond Security Copilot and VSCode GitHub Copilot, Sentinel MCP server is now natively integrated with the Copilot Studio and Microsoft Foundry agent-building experiences. When creating an agent in any of these platforms, you can easily select Sentinel MCP tools, no pre-configuration required. It’s ready to use, so if you are using any of these platforms, dive in and give it a try. Click here for detailed guidance. Additionally, you can now connect OpenAI ChatGPT to Sentinel MCP server with secured OAuth authentication through a simple configuration in Entra.
Learn how here. Assess Threat Impact on Your Organization with Custom KQL Tools Many organizations rely on a curated library of KQL queries for incident triage, investigation, and threat hunting used in manual Standard Operating Procedures (SOP) or SOAR playbooks—often managed within Defender Advanced Hunting. Now, with Sentinel MCP server, you can instantly transform these saved KQL queries into custom tools with just a click. This new capability allows you to empower your AI agents with precise, actionable data tailored to your unique security workflows. Once a KQL query is saved as a tool, Sentinel MCP server automatically creates and maintains a corresponding MCP tool—ensuring it’s always in sync with the latest version of your saved query in Defender Advanced Hunting. Any connected agent can invoke this tool, confident that it reflects your most current logic and requirements. Learn more here Entity Analyzer Assessing the risk of entities is a core task for SOC teams—whether triaging incidents, investigating threats, or automating response workflows. Traditionally, this has required building complex playbooks or custom logic to gather and analyze fragmented security data from multiple sources. With entity analyzer, this complexity is eliminated. The tool leverages your organization’s security data in Sentinel to deliver comprehensive, reasoned risk assessments for any entity your agents encounter – starting with users and URLs. By providing a unified, out-of-the-box solution for entity analysis, entity analyzer enables your AI agents to make smarter decisions and automate more tasks—without the need to manually engineer risk evaluation logic for each entity type. This not only accelerates agent development, but also ensures your agents are always working with the most relevant and up-to-date context from across your security environment. Entity Analyzer is now available to any MCP client integrated with Sentinel MCP Server.
And for those building SOAR workflows, entity analyzer is natively integrated with Logic Apps, making it easy to enrich entities and automate verdicts within your playbooks. Learn how to build a Logic Apps playbook with Entity Analyzer Graph Tools Microsoft Sentinel graph connects assets, identities, activities, and threat intelligence into a unified security graph, uncovering insights that structured data alone can’t provide, such as relationships, blast radius, and attack paths. The graph is now generally available, and these advanced insights can be accessed by AI agents in natural language through a dedicated set of MCP tools. Graph MCP tools are offered in a sign-up preview. Triage Incidents and Alerts Sentinel MCP server now provides natural language access to a set of APIs for incident and alert triage. AI agents can use these tools to carry out autonomous triage and investigation of Defender XDR and Sentinel alerts and incidents. In the next couple of weeks, it will be available, out of the box, to all customers using Microsoft Defender XDR, Microsoft Sentinel, or Microsoft Defender for Endpoint. Stay tuned. Smarter Security, Less Effort With the latest innovations in Sentinel MCP server, security teams can now harness the full power of AI-driven automation with unprecedented simplicity and impact. From seamless integration with leading AI platforms to instant creation of custom KQL tools and out-of-the-box entity analysis, Sentinel MCP server empowers your agents to deliver smarter, faster, and more effective security outcomes. These advancements eliminate manual complexity, accelerate agent development, and ensure your SOC is always equipped with the most relevant context. Currently, features like entity analysis are available at no additional charge; as we continue to evolve the platform, we’ll share updates on future pricing well in advance.
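To make the custom KQL tools idea concrete: conceptually, a saved query becomes a machine-readable tool descriptor that an agent can discover and invoke. The sketch below is illustrative only; the descriptor fields loosely follow the general MCP tool-listing shape (name, description, input schema), and both the helper function and the sample query are assumptions, not the actual Sentinel MCP schema.

```python
def kql_query_to_mcp_tool(name: str, description: str, kql: str, parameters: dict) -> dict:
    """Build a hypothetical MCP-style tool descriptor around a saved KQL query.

    The field names mimic the generic MCP tool-listing shape; they are NOT the
    official Sentinel MCP server schema.
    """
    return {
        "name": name,
        "description": description,
        "inputSchema": {
            "type": "object",
            "properties": parameters,
            "required": list(parameters),
        },
        # The saved query the server would run when an agent invokes the tool.
        "metadata": {"kql": kql},
    }

# Example: a saved triage query for failed sign-ins by a given account.
tool = kql_query_to_mcp_tool(
    name="failed_signins_for_account",
    description="Count failed sign-ins for an account over the last 24 hours.",
    kql=(
        "SigninLogs "
        "| where TimeGenerated > ago(24h) "
        "| where UserPrincipalName == '{account}' and ResultType != 0 "
        "| summarize FailedCount = count()"
    ),
    parameters={"account": {"type": "string", "description": "UPN to triage"}},
)
print(tool["name"])
```

An agent that lists this server's tools would see the descriptor and could call the tool with an `account` argument, leaving query execution to the server.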
Try out the new features today and stay tuned for deep-dive updates as we continue to push the boundaries of AI-powered security automation. Learn how to get started.

Unlocking Developer Innovation with Microsoft Sentinel data lake
Introduction Microsoft Sentinel is evolving rapidly, transforming into both an industry-leading SIEM and an AI-ready platform that empowers agentic defense across the security ecosystem. In our recent webinar, Introduction to Sentinel data lake for Developers, we explored how developers can leverage Sentinel’s unified data lake, extensible architecture, and integrated tools to build innovative security solutions. This post summarizes the key takeaways and actionable insights for developers looking to harness the full power of Sentinel. The Sentinel Platform: A Foundation for Agentic Security Unified Data and Context Sentinel centralizes security data cost-effectively, supporting massive volumes and diverse data types. This unified approach enables advanced analytics, graph-enabled context, and AI-ready data access—all essential for modern security operations. Developers can visualize relationships across assets, activities, and threats, mapping incidents and hunting scenarios with unprecedented clarity. Extensible and Open Platform Sentinel’s open architecture simplifies onboarding and data integration. Out-of-the-box connectors and codeless connector creation make it easy to bring in third-party data. Developers can quickly package and publish agents that leverage the centralized data lake and MCP server, distributing solutions through the Microsoft Security Store for maximum reach. The Microsoft Security Store is a storefront for security professionals to discover, buy, and deploy vetted security SaaS solutions and AI agents from our ecosystem partners. These offerings integrate natively with Microsoft Security products—including the Sentinel platform, Defender, and Entra—to deliver end-to-end protection. By combining curated, deploy-ready solutions with intelligent, AI-assisted workflows, the Store reduces integration friction and speeds time-to-value for critical tasks like triage, threat hunting, and access management.
Advanced Analytics and AI Integration With support for KQL, Spark, and ML tools, Sentinel separates storage and compute, enabling scalable analytics and semantic search. Jupyter Notebooks hosted in on-demand Spark environments allow for rich data engineering and machine learning directly on the data lake. Security Copilot agents, seamlessly integrated with Sentinel, deliver autonomous and adaptive automation, enhancing both security and IT operations. Developer Scenarios: Unlocking New Possibilities The webinar showcased several developer scenarios enabled by Sentinel’s platform components: Threat Investigations Over Extended Timelines: Query historical data to uncover slow-moving attacks and persistent threats. Behavioral Baselining: Model normal behavior using months of sign-in logs to detect anomalies. Alert Enrichment: Correlate alerts with firewall and NetFlow data to improve accuracy and reduce false positives. Retrospective Threat Hunting: React to new indicators of compromise by running historical queries across the data lake. ML-Powered Insights: Build machine learning models for anomaly detection, alert enrichment, and predictive analytics. These scenarios demonstrate how developers can leverage Sentinel’s data lake, graph capabilities, and integrated analytics to deliver powerful security solutions. End-to-End Developer Journey The following steps outline a potential workflow for developers to ingest and analyze their data within the Sentinel platform. Data Sources: Identify high-value data sources from your environment to integrate with Microsoft Security data. The journey begins with your unique view of the customer’s digital estate. This is data you have in your platform today. Bringing this data into Sentinel helps customers make sense of their entire security landscape at once. Data Ingestion: Import third-party data into the Sentinel data lake for secure, scalable analytics. 
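One of the developer scenarios above, retrospective threat hunting, reduces to a simple pattern: when a new indicator of compromise (IOC) is published, re-scan historical records for matches. The stdlib sketch below stands in for a data-lake query; the log records and indicator values are invented for the example.

```python
from datetime import date

# Invented historical connection log, standing in for months of data-lake records.
historical_logs = [
    {"day": date(2025, 3, 2), "host": "web-01", "dest_ip": "203.0.113.9"},
    {"day": date(2025, 4, 17), "host": "db-02", "dest_ip": "198.51.100.23"},
    {"day": date(2025, 6, 5), "host": "web-01", "dest_ip": "192.0.2.44"},
]

# A newly published IOC list (documentation-range IPs, purely illustrative).
new_iocs = {"203.0.113.9", "192.0.2.44"}

def retro_hunt(logs, iocs):
    """Return every historical record whose destination matches a new IOC."""
    return [rec for rec in logs if rec["dest_ip"] in iocs]

hits = retro_hunt(historical_logs, new_iocs)
for rec in hits:
    print(rec["day"], rec["host"], rec["dest_ip"])
```

In practice the filter would run as a query over the data lake rather than an in-memory list, but the shape of the hunt, new indicators joined against old telemetry, is the same.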
As customer data flows from various platforms into Sentinel, it is centralized and normalized, providing a unified foundation for advanced analysis and threat detection across the customer’s digital environment. Sentinel data lake and Graph: Run Jupyter Notebook jobs for deep insights, combining contributed and first-party data. Once data resides in the Sentinel data lake, developers can leverage its graph capabilities to model relationships and uncover patterns, empowering customers with comprehensive insights into security events and trends. Agent Creation: Build Security Copilot agents that interact with Sentinel data using natural language prompts. These agents make the customer’s ingested data actionable, allowing users to ask questions or automate tasks, and helping teams quickly respond to threats or investigate incidents using their own enterprise data. Solution Packaging: Package and distribute solutions via the Microsoft Security Store, reaching customers at scale. By packaging these solutions, developers enable customers to seamlessly deploy advanced analytics and automation tools that harness their data journey— from ingestion to actionable insights—across their entire security estate. Conclusion Microsoft Sentinel’s data lake and platform capabilities open new horizons for developers. By centralizing data, enabling advanced analytics, and providing extensible tools, Sentinel empowers you to build solutions that address today’s security challenges and anticipate tomorrow’s threats. Explore the resources below, join the community, and start innovating with Sentinel today! App Assure: For assistance with developing a Sentinel Codeless Connector Framework (CCF) connector, you can contact AzureSentinelPartner@microsoft.com. Microsoft Security Community: aka.ms/communitychoice Next Steps: Resources and Links Ready to dive deeper? Explore these resources to get started: Get Educated! 
Sentinel data lake general availability announcement Sentinel data lake official documentation Connect Sentinel to Defender Portal Onboarding to Sentinel data lake Integration scenarios (e.g. hunt | jupyter) KQL queries Jupyter notebooks (link) as jobs (link) VS Code Extension Sentinel graph Sentinel MCP server Security Copilot agents Microsoft Security Store Take Action! Bring your data into Sentinel Build a composite solution Explore Security Copilot agents Publish to Microsoft Security Store List existing SaaS apps in Security Store

GenAI vs Cyber Threats: Why GenAI Powered Unified SecOps Wins
Cybersecurity is evolving faster than ever. Attackers are leveraging automation and AI to scale their operations, so how can defenders keep up? The answer lies in Microsoft Unified Security Operations powered by Generative AI (GenAI). This opens the Cybersecurity Paradox: attackers only need one successful attempt, but defenders must always be vigilant, otherwise the impact can be huge. Traditional Security Operation Centers (SOCs) are hampered by siloed tools and fragmented data, which slows response and creates vulnerabilities. On average, attackers gain unauthorized access to organizational data in 72 minutes, while traditional defense tools take an average of 258 days, over eight months, to identify and remediate breaches: a significant and unsustainable gap. Notably, Microsoft Unified Security Operations, including GenAI-powered capabilities, is also available and supported in Microsoft Government Community Cloud (GCC) and GCC High/DoD environments, ensuring that organizations with the highest compliance and security requirements can benefit from these advanced protections. The Case for Unified Security Operations Unified security operations in Microsoft Defender XDR consolidates SIEM, XDR, Exposure Management, and enterprise security posture into a single, integrated experience. This approach: Breaks down silos by centralizing telemetry across identities, endpoints, SaaS apps, and multi-cloud environments. Infuses AI natively into workflows, enabling faster detection, investigation, and response. Microsoft Sentinel exemplifies this shift with its Data Lake architecture (see my previous post on Microsoft Sentinel’s New Data Lake: Cut Costs & Boost Threat Detection), offering schema-on-read flexibility for petabyte-scale analytics without costly data rehydration. This means defenders can query massive datasets in real time, accelerating threat hunting and forensic analysis.
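To make "schema-on-read" concrete: events are stored raw and structure is applied only when a query runs, so new fields can be queried without re-ingesting or rehydrating data. A minimal stdlib sketch (the event shape and values are invented for illustration):

```python
import json

# Raw events stored as-is; no schema is enforced at ingestion time.
raw_events = [
    '{"ts": "2025-06-01T10:00:00Z", "user": "alice", "action": "login", "result": "fail"}',
    '{"ts": "2025-06-01T10:01:00Z", "user": "alice", "action": "login", "result": "fail"}',
    '{"ts": "2025-06-01T10:05:00Z", "user": "bob", "action": "login", "result": "ok"}',
]

def query(events, predicate):
    """Schema-on-read: parse each raw record at query time, then filter."""
    return [rec for rec in (json.loads(e) for e in events) if predicate(rec)]

failed_logins = query(raw_events, lambda r: r["action"] == "login" and r["result"] == "fail")
print(len(failed_logins))
```

The point is that the "schema" lives in the query's predicate, not the storage layer; a different hunt can interpret the same raw bytes differently tomorrow.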
GenAI: A Force Multiplier for Cyber Defense Generative AI transforms security operations from reactive to proactive. Here’s how: Threat Hunting & Incident Response GenAI enables predictive analytics and anomaly detection across hybrid identities, endpoints, and workloads. It doesn’t just find threats—it anticipates them. Behavioral Analytics with UEBA Advanced User and Entity Behavior Analytics (UEBA) powered by AI correlates signals from multi-cloud environments and identity providers like Okta, delivering actionable insights for insider risk and compromised accounts. Automation at Scale AI-driven playbooks streamline repetitive tasks, reducing manual workload and accelerating remediation. This frees analysts to focus on strategic threat hunting. Microsoft Innovations Driving This Shift For SOC teams and cybersecurity practitioners, these innovations mean you spend less time on manual investigations and more time leveraging actionable insights, ultimately boosting productivity and allowing you to focus on higher-value security work that matters most to your organization. Plus, by making threat detection and response faster and more accurate, you can reduce stress, minimize risk, and demonstrate greater value to your stakeholders. Sentinel Data Lake: Unlocks real-time analytics at scale, enabling AI-driven threat detection without rehydration costs. Microsoft Sentinel data lake overview UEBA Enhancements: Multi-cloud and identity integrations for unified risk visibility. Sentinel UEBA’s Superpower: Actionable Insights You Can Use! Now with Okta and Multi-Cloud Logs! Security Copilot & Agentic AI: Harnesses AI and global threat intelligence to automate detection, response, and compliance across the security stack, enabling teams to scale operations and strengthen Zero Trust defenses.
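The behavioral-analytics idea above, baseline normal activity and flag deviations, can be reduced to a toy example. This sketch scores today's activity with a simple z-score over daily sign-in counts; real UEBA models are far richer, and all numbers here are invented.

```python
from statistics import mean, stdev

# Invented daily sign-in counts for one account over a quiet baseline period.
baseline = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6]
today = 42  # today's count, to be scored against that baseline

def zscore(history, observed):
    """How many standard deviations the observed value sits from the baseline mean."""
    return (observed - mean(history)) / stdev(history)

score = zscore(baseline, today)
anomalous = score > 3  # a common rule of thumb: flag beyond 3 standard deviations
print(anomalous)
```

A production system would also account for seasonality, peer-group behavior, and entity context, but the core signal, deviation from an established baseline, is the same.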
Security Copilot Agents: The New Era of AI-Driven Cyber Defense Sector-Specific Impact All sectors are different, but I would like to focus a bit on the public sector at this time. This sector and critical infrastructure organizations face unique challenges: talent shortages, operational complexity, and nation-state threats. GenAI-centric platforms help these sectors shift from reactive defense to predictive resilience, ensuring mission-critical systems remain secure. By leveraging advanced AI-driven analytics and automation, public sector organizations can streamline incident detection, accelerate response times, and proactively uncover hidden risks before they escalate. With unified platforms that bridge data silos and integrate identity, endpoint, and cloud telemetry, these entities gain a holistic security posture that supports compliance and operational continuity. Ultimately, embracing generative AI not only helps defend against sophisticated cyber adversaries but also empowers public sector teams to confidently protect the services and infrastructure their communities rely on every day. Call to Action Artificial intelligence is driving unified cybersecurity. Solutions like Microsoft Defender XDR and Sentinel now integrate into a single dashboard, consolidating alerts, incidents, and data from multiple sources. AI swiftly correlates information, prioritizes threats, and automates investigations, helping security teams respond quickly with less manual work. This shift enables organizations to proactively manage cyber risks and strengthen their resilience against evolving challenges. Picture a single pane of glass where all your XDRs and Defenders converge: AI instantly sifts through the noise, highlighting what matters most so teams can act with clarity and speed. That may include: Assess your SOC maturity and identify silos.
Use the Security Operations Self-Assessment Tool to determine your SOC’s maturity level and provide actionable recommendations for improving processes and tooling. Also see the Security Maturity Model from the Well-Architected Framework, which explains progressive security maturity levels and strategies for strengthening your security posture. Explore Microsoft Sentinel, Defender XDR, and Security Copilot for AI-powered security. What is Microsoft Defender XDR? - Microsoft Defender XDR and What is Microsoft Security Copilot? Design Security in Solutions from Day One! Drive embedding security from the start of solution design through secure-by-default configurations and proactive operations, aligning with Zero Trust and MCRA principles to build resilient, compliant, and scalable systems. Design Security in Solutions from Day One! Innovate boldly, deploy safely, and never regret it! Upskill your teams on GenAI tools and responsible AI practices. Guidance for securing AI apps and data, aligned with Zero Trust principles: Build a strong security posture for AI. About the Author: Hello, Jacques “Jack” here! I am a Microsoft Technical Trainer focused on helping organizations use advanced security and AI solutions. I create and deliver training programs that combine technical expertise with practical use, enabling teams to adopt innovations like Microsoft Sentinel, Defender XDR, and Security Copilot for stronger cyber resilience. #SkilledByMTT #MicrosoftLearn

Blog Series: Securing the Future: Protecting AI Workloads in the Enterprise
Post 1: The Hidden Threats in the AI Supply Chain Your AI Supply Chain Is Under Attack — And You Might Not Even Know It Imagine deploying a cutting-edge AI model that delivers flawless predictions in testing. The system performs beautifully, adoption soars, and your data science team celebrates. Then, a few weeks later, you discover the model has been quietly exfiltrating sensitive data — or worse, that a single poisoned dataset altered its decision-making from the start. This isn’t a grim sci-fi scenario. It’s a growing reality. AI systems today rely on a complex and largely opaque supply chain — one built on shared models, open-source frameworks, third-party datasets, and cloud-based APIs. Each link in that chain represents both innovation and vulnerability. And unless organizations start treating the AI supply chain as a security-critical system, they risk building intelligence on a foundation they can’t fully trust. Understanding the AI Supply Chain Much like traditional software, modern AI models rarely start from scratch. Developers and data scientists leverage a mix of external assets to accelerate innovation — pretrained models from public repositories like Hugging Face (https://huggingface.co/), data from external vendors, third-party labeling services, and open-source ML libraries. Each of these layers forms part of your AI supply chain — the ecosystem of components that power your model’s lifecycle, from data ingestion to deployment. In many cases, organizations don’t fully know: Where their datasets originated. Whether the pretrained model they fine-tuned was modified or backdoored. If the frameworks powering their pipeline contain known vulnerabilities. AI’s strength — its openness and speed of adoption — is also its greatest weakness. You can’t secure what you don’t see, and most teams have very little visibility into the origins of their AI assets. The New Threat Landscape Attackers have taken notice. 
As enterprises race to operationalize AI, threat actors are shifting their attention from traditional IT systems to the AI layer itself — particularly the model and data supply chain. Common attack vectors now include: Data poisoning: Injecting subtle malicious samples into training data to bias or manipulate model behavior. Model backdoors: Embedding hidden triggers in pretrained models that can be activated later. Dependency exploits: Compromising widely used ML libraries or open-source repositories. Model theft and leakage: Extracting proprietary weights or exploiting exposed inference APIs. These attacks are often invisible until after deployment, when the damage has already been done. In 2024, several research teams demonstrated how tampered open-source LLMs could leak sensitive data or respond with biased or unsafe outputs — all due to poisoned dependencies within the model’s lineage. The pattern is clear: adversaries are no longer only targeting applications; they’re targeting the intelligence that drives them. Why Traditional Security Approaches Fall Short Most organizations already have strong DevSecOps practices for traditional software — automated scanning, dependency tracking, and secure Continuous Integration/Continuous Deployment (CI/CD) pipelines. But those frameworks were never designed for the unique properties of AI systems. Here’s why: Opacity: AI models are often black boxes. Their behavior can change dramatically from minor data shifts, making tampering hard to detect. Lack of origin: Few organizations maintain a verifiable “family tree” of their models and datasets. Limited tooling: Security tools that detect code vulnerabilities don’t yet understand model weights, embeddings, or training lineage. In other words: You can’t patch what you can’t trace. The absence of traceability leaves organizations flying blind — relying on trust where verification should exist. 
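To ground the data-poisoning vector described above: even a handful of flipped labels can move a simple model's decision boundary. The sketch below uses a toy nearest-centroid classifier on invented one-dimensional data; it illustrates the mechanism only, not a real training pipeline.

```python
from statistics import mean

def centroid_boundary(benign, malicious):
    """Decision boundary of a 1-D nearest-centroid classifier: midpoint of the class means."""
    return (mean(benign) + mean(malicious)) / 2

# Invented training data: a risk score where benign samples sit low, malicious high.
benign = [1.0, 1.2, 0.8, 1.1]
malicious = [9.0, 8.8, 9.2, 9.1]
clean_boundary = centroid_boundary(benign, malicious)

# Poisoning: an attacker flips the labels of two malicious samples into the benign set.
poisoned_benign = benign + [9.0, 9.2]
poisoned_malicious = [m for m in malicious if m not in (9.0, 9.2)]
poisoned_boundary = centroid_boundary(poisoned_benign, poisoned_malicious)

# The boundary shifts upward, so borderline malicious activity now classifies as benign.
print(clean_boundary, poisoned_boundary)
```

Only two corrupted labels out of eight training points noticeably widen the region the model treats as benign, which is exactly why dataset validation and lineage tracking matter downstream.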
Securing the AI Supply Chain: Emerging Best Practices

The good news is that a new generation of frameworks and controls is emerging to bring security discipline to AI development. The following strategies are quickly becoming best practices in leading enterprises:

Establish Model Origin and Integrity

Maintain a record of where each model originated, who contributed to it, and how it’s been modified.

- Implement cryptographic signing for model artifacts.
- Use integrity checks (e.g., hash validation) before deploying any model.
- Incorporate continuous verification into your MLOps pipeline.

This ensures that only trusted, validated models make it to production.

Create a Model Bill of Materials (MBOM)

Borrowing from software security, an MBOM documents every dataset, dependency, and component that went into building a model — similar to an SBOM for code.

- Helps identify which datasets and third-party assets were used.
- Enables rapid response when vulnerabilities are discovered upstream.

Organizations like NIST, MITRE, and the Cloud Security Alliance are developing frameworks to make MBOMs a standard part of AI risk management:

- NIST AI Risk Management Framework (AI RMF): https://www.nist.gov/itl/ai-risk-management-framework
- NIST AI RMF Playbook: https://www.nist.gov/itl/ai-risk-management-framework/nist-ai-rmf-playbook
- MITRE AI Risk Database (with Robust Intelligence): https://www.mitre.org/news-insights/news-release/mitre-and-robust-intelligence-tackle-ai-supply-chain-risks
- MITRE’s SAFE-AI Framework: https://atlas.mitre.org/pdf-files/SAFEAI_Full_Report.pdf
- Cloud Security Alliance – AI Model Risk Management Framework: https://cloudsecurityalliance.org/artifacts/ai-model-risk-management-framework

Secure Your Data Supply Chain

The quality and integrity of training data directly shape model behavior.

- Validate datasets for anomalies, duplicates, or bias.
- Use data versioning and lineage tracking for full transparency.
- Where possible, apply differential privacy or watermarking to protect sensitive data.

Remember: even small amounts of corrupted data can lead to large downstream risks.

Evaluate Third-Party and Open-Source Dependencies

Open-source AI tools are powerful — but not always trustworthy.

- Regularly scan models and libraries for known vulnerabilities.
- Vet external model vendors and require transparency about their security practices.
- Treat external ML assets as untrusted code until verified.

A simple rule of thumb: if you wouldn’t deploy a third-party software package without security review, don’t deploy a third-party model that way either.

The Path Forward: Traceability as the Foundation of AI Trust

AI’s transformative potential depends on trust — and trust depends on visibility. Securing your AI supply chain isn’t just about compliance or risk mitigation; it’s about protecting the integrity of the intelligence that drives business decisions, customer interactions, and even national infrastructure. As AI becomes the engine of enterprise innovation, we must bring the same rigor to securing its foundations that we once brought to software itself. Every model has a lineage. Every lineage is a potential attack path.

In the next post, we’ll explore how to apply DevSecOps principles to MLOps pipelines — securing the entire AI lifecycle from data collection to deployment.

Key Takeaway

The AI supply chain is your new attack surface. The only way to defend it is through visibility, origin, and continuous validation — before, during, and after deployment.

Contributors

Juan José Guirola Sr.
(Security GBB for Advanced Identity - Microsoft)

References

- Hugging Face: https://huggingface.co/
- ResearchGate: https://www.researchgate.net/publication/381514112_Exploiting_Privacy_Vulnerabilities_in_Open_Source_LLMs_Using_Maliciously_Crafted_Prompts/fulltext/66722cb1de777205a338bbba/Exploiting-Privacy-Vulnerabilities-in-Open-Source-LLMs-Using-Maliciously-Crafted-Prompts.pdf
- NIST AI Risk Management Framework (AI RMF): https://www.nist.gov/itl/ai-risk-management-framework
- NIST AI RMF Playbook: https://www.nist.gov/itl/ai-risk-management-framework/nist-ai-rmf-playbook
- MITRE AI Risk Database (with Robust Intelligence): https://www.mitre.org/news-insights/news-release/mitre-and-robust-intelligence-tackle-ai-supply-chain-risks
- MITRE’s SAFE-AI Framework: https://atlas.mitre.org/pdf-files/SAFEAI_Full_Report.pdf
- Cloud Security Alliance – AI Model Risk Management Framework: https://cloudsecurityalliance.org/artifacts/ai-model-risk-management-framework

Microsoft Sentinel data lake is now generally available
Security is being reengineered for the AI era, shifting from static controls to fast, platform-driven defense. Traditional tools, scattered data, and outdated systems struggle against modern threats. An AI-ready, data-first foundation is needed to unify telemetry, standardize agent access, and enable autonomous responses while ensuring humans are in command of strategy and high-impact investigations.

Security teams already anchor their operations around SIEMs for comprehensive visibility. We're building on that foundation by evolving Microsoft Sentinel into both the SIEM and the platform for agentic defense — connecting analytics and context across ecosystems. Today, we’re introducing new platform capabilities that build on Sentinel data lake: Sentinel graph for deeper insight and context; an MCP server and tools to make data agent-ready; new developer capabilities; and a Security Store for effortless discovery and deployment — so protection accelerates to machine speed while analysts do their best work.

We’ve reached a major milestone in our journey to modernize security operations: Microsoft Sentinel data lake is now generally available. This fully managed, cloud-native data lake is redefining how security teams manage, analyze, and act on their data cost-effectively.

Since its introduction, organizations across sectors have embraced Sentinel data lake for its transformative impact on security operations. Customers consistently highlight its ability to unify security data from diverse sources, enabling enhanced threat detection and investigation. Many cite cost efficiency as a key benefit, with tiered storage and flexible retention helping reduce costs. With petabytes of data already ingested, users are gaining real-time and historical insights at scale.

"With Microsoft Sentinel data lake integration, we now have a scalable and cost-efficient solution for retaining Microsoft Sentinel data for long-term retention.
This empowers our security and compliance teams with seamless access to historical telemetry data right within the data lake explorer and Jupyter notebooks — enabling advanced threat hunting, forensic analysis, and AI-powered insights at scale."

— Farhan Nadeem, Senior Security Engineer, Government of Nunavut

Industry partners also commend its role in modernizing SOC workflows and accelerating AI-driven analytics.

"Microsoft Sentinel data lake amplifies BlueVoyant’s ability to transform security operations into a mature, intelligence-driven discipline. It preserves institutional memory across years of telemetry, which empowers advanced threat hunting strategies that evolve with time. Security teams can validate which data sources yield actionable insights, uncover persistent attack patterns, and retain high-value indicators that support long-term strategic advantage."

— Milan Patel, CRO, BlueVoyant

Microsoft Sentinel data lake use cases

There are many powerful ways customers are unlocking value with Sentinel data lake — here are just a few impactful examples:

- Threat investigations over extended timelines: Security analysts query data older than 90 days to uncover slow-moving attacks — like brute-force and password spray campaigns — that span accounts and geographies.
- Behavioral baselining for deeper insights: SOC engineers build time-series models using months of sign-in logs to establish a baseline of normal behavior and identify unusual patterns, such as credential abuse or lateral movement.
- Alert enrichment: SOC teams correlate alerts with firewall and NetFlow data, often stored only in the data lake, reducing false positives and increasing alert accuracy.
- Retrospective threat hunting with new indicators of compromise (IOCs): Threat intelligence teams react to emerging IOCs by running historical queries across the data lake, enabling rapid and informed response.
- ML-powered insights: SOC engineers use Spark notebooks to build and operationalize custom machine learning models for anomaly detection, alert enrichment, and predictive analytics.

The Sentinel data lake is more than a storage solution — it’s the foundation for modern, AI-powered security operations. Whether you're scaling your SOC, building deeper analytics, or preparing for future threats, the Sentinel data lake is ready to support your journey.

What’s new

Regional expansion

In light of strong customer demand in public preview, at GA we are expanding Sentinel data lake availability to additional regions. These new regions will roll out progressively over the coming weeks. For more information, see documentation.

Flexible data ingestion and management

With over 350 native connectors, SOC teams can seamlessly ingest both structured and semi-structured data at scale. Data is automatically mirrored from the analytics tier to the data lake tier, at no additional cost, ensuring a single, unified copy is available for diverse use cases across security operations. Since the public preview of Microsoft Sentinel data lake, we've launched 45 new connectors built on the scalable and performant Codeless Connector Framework (CCF), including connectors for:

- GCP: SQL, DNS, VPC Flow, Resource Manager, IAM, Apigee
- AWS: Security Hub findings, Route 53 DNS
- Others: Alibaba Cloud, Oracle, Salesforce, Snowflake, Cisco

Sentinel’s connector ecosystem is designed to help security teams seamlessly unify signals across hybrid environments, without the need for heavy engineering effort. Explore the full list of connectors in our documentation here.

App Assure Microsoft Sentinel data lake promise

As part of our commitment to customer success, we are expanding the App Assure Microsoft Sentinel promise to Sentinel data lake.
This means customers can confidently onboard their data, knowing that App Assure stands ready to help resolve connector issues, such as replacing deprecated APIs with updated ones, and to accelerate new integrations. Whether you're working with existing Independent Software Vendor (ISV) solutions or building new ones, App Assure will collaborate directly with ISVs to ensure seamless data ingestion into the lake. This promise reinforces our dedication to delivering reliable, scalable, and secure security operations, backed by engineering support and a thriving partner ecosystem.

Cost management and billing

We are introducing new cost management features in public preview to help customers with cost predictability, billing transparency, and operational efficiency. Customers can set usage-based alerts on specific meters to monitor and control costs. For example, you can receive alerts when query or notebook usage passes set limits, helping avoid unexpected expenses and manage budgets. In-product reports provide customers with insights into usage trends over time, enabling them to identify cost drivers and optimize data retention and processing strategies.

To support the ingestion and standardization of diverse data sources, we are introducing a new Data Processing feature that applies a $0.10 per GB charge for all data as it is ingested into the data lake. This feature enables a broad array of transformations, like redaction, splitting, filtering, and normalizing data. It was not billed during public preview but will be chargeable at GA starting October 1, 2025. Data lake ingestion charges of $0.05 per GB will apply to Entra asset data starting October 1, 2025; this was previously not billed during public preview.

Retaining security data to perform deep analytics and investigations is crucial for defending against threats.
To help customers retain all their security data for extended periods cost-effectively, data lake storage, including asset data storage, is now billed with a simple and uniform data compression rate of 6:1 across all data sources. Please refer to the Plan costs and understand Microsoft Sentinel pricing and billing article for more information. For detailed prerequisites and instructions on configuring and managing asset connectors, refer to the official documentation: Asset data in Microsoft Sentinel data lake.

KQL and notebook enhancements

We are introducing several enhancements to our data lake analytics capabilities with an upgraded KQL and notebook experience. Security teams can now run multi-workspace KQL queries for broader threat correlation and schedule KQL jobs more frequently. Frequent KQL jobs enable SOC teams to automate historical threat intelligence matching, summarize alert trends, and aggregate signals across workspaces. For example, schedule recurring jobs to scan for matches against newly ingested IOCs, helping uncover threats that were previously undetected and strengthening threat hunting and investigation workflows.

The enhanced Jobs page offers operational clarity for SOC teams with a comprehensive view into job health and activity. At the top, a summary dashboard provides instant visibility into key metrics: total jobs, completions, and failures, helping teams quickly assess job health. A filterable list view displays essential details such as job names, status, frequency, and last-run information, enabling quick prioritization and triage. For more detailed diagnostics, users can open individual jobs to access run telemetry such as duration, row count, and historical execution trends.

Notebooks are receiving a significant upgrade, offering a streamlined user experience for querying the data lake.
Users now benefit from IntelliSense support for syntax and table names, making query authoring faster and more intuitive. They can also configure custom compute-session timeouts and warning windows to better manage resources. Scheduling notebooks as jobs is now simpler, and users can leverage GitHub Copilot for intelligent assistance throughout the process. Together, these KQL and notebook improvements deliver deeper, more customizable analytics, helping customers unlock richer insights, accelerate threat response, and scale securely across diverse environments.

Powering agentic defense

Data centralization empowers AI agents and automation with access to comprehensive, historical, and real-time data for advanced analytics, anomaly detection, and autonomous threat response. Support for tools like KQL queries, Spark notebooks, and machine learning models in the data lake allows agentic systems to continuously learn, adapt, and act on emerging threats. Integration with Security Copilot and the MCP server further enhances agentic defense, enabling smarter, faster, and context-rich security operations, all built on the foundation of Sentinel’s unified data lake.

Microsoft Sentinel 50 GB commitment tier promotional pricing

To make Microsoft Sentinel more accessible to small and mid-sized customers, we are introducing a new 50 GB commitment tier in public preview, with promotional pricing offered from October 1, 2025, to March 31, 2026. Customers who choose the 50 GB commitment tier during this period will maintain their promotional rate until March 31, 2027. This offer is available in all regions where Microsoft Sentinel is sold, with regional variations in promotional pricing, and is accessible through EA, CSP, and Direct channels. The new 50 GB commitment tier details will be available starting October 1, 2025, on the Microsoft Sentinel pricing page.
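Returning to the retrospective IOC hunting use case described earlier: at its core, a recurring IOC-matching job is a join between newly ingested indicators and historical telemetry. Below is a toy sketch in Python, not a Sentinel API; the event records and field names are invented, and in Sentinel itself this would be a scheduled KQL job over the data lake.

```python
from datetime import datetime, timedelta

# Hypothetical historical telemetry, e.g. rows returned by a data lake query.
historical_events = [
    {"timestamp": datetime(2025, 1, 4), "src_ip": "203.0.113.7",  "account": "alice"},
    {"timestamp": datetime(2025, 3, 9), "src_ip": "198.51.100.2", "account": "bob"},
    {"timestamp": datetime(2025, 6, 1), "src_ip": "203.0.113.7",  "account": "carol"},
]

# Newly ingested indicators of compromise.
new_iocs = {"203.0.113.7"}

def retro_hunt(events, iocs, lookback_days=365, now=datetime(2025, 9, 30)):
    """Return historical events whose source IP matches a new IOC within the lookback window."""
    cutoff = now - timedelta(days=lookback_days)
    return [e for e in events if e["src_ip"] in iocs and e["timestamp"] >= cutoff]

matches = retro_hunt(historical_events, new_iocs)
for m in matches:
    print(m["timestamp"].date(), m["src_ip"], m["account"])  # hits from months ago
```

Scheduling exactly this kind of match as a recurring job is what surfaces long-dormant activity each time fresh indicators arrive.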
Thank you to our customers and partners

We’re incredibly grateful for the continued partnership and collaboration from our customers and partners throughout this journey. Your feedback and trust have been instrumental in shaping Microsoft Sentinel data lake into what it is today. Thank you for being a part of this critical milestone — we’re excited to keep building together.

Get started today

By centralizing data, optimizing costs, expanding coverage, and enabling deep analytics, Microsoft Sentinel empowers security teams to operate smarter, faster, and more effectively. Get started with Microsoft Sentinel data lake today in the Microsoft Defender experience. To learn more, see:

- Microsoft Sentinel — AI-Powered Cloud SIEM & Platform
- Pricing: Pricing page; Plan costs and understand Microsoft Sentinel pricing and billing
- Documentation: Connect Sentinel to Defender; Jupyter notebooks in Microsoft Sentinel data lake; KQL and the Microsoft Sentinel data lake; Permissions for Microsoft Sentinel data lake; Manage data tiers and retention in Microsoft Defender experience
- Blogs: Sentinel data lake FAQ blog; Empowering defenders in the era of AI; Microsoft Sentinel graph announcement; App Assure Microsoft Sentinel data lake promise

Announcing Microsoft Sentinel Model Context Protocol (MCP) server – Public Preview
Security is being reengineered for the AI era — moving beyond static, rule-bound controls and after-the-fact response toward platform-led, machine-speed defense. The challenge is clear: fragmented tools, sprawling signals, and legacy architectures that can’t match the velocity and scale of modern attacks. What’s needed is an AI-ready, data-first foundation — one that turns telemetry into a security graph, standardizes access for agents, and coordinates autonomous actions while keeping humans in command of strategy and high-impact investigations.

Security teams already center operations on their SIEM for end-to-end visibility, and we’re advancing that foundation by evolving Microsoft Sentinel into both the SIEM and the platform for agentic defense — connecting analytics and context across ecosystems. And now, we’re introducing new platform capabilities that build on Sentinel data lake: Sentinel graph for deeper insight and context; an MCP server and tools to make data agent-ready; new developer capabilities; and Security Store for effortless discovery and deployment — so protection accelerates to machine speed while analysts do their best work.

Introducing Sentinel MCP server

We’re excited to announce the public preview of the Microsoft Sentinel MCP (Model Context Protocol) server, a fully managed cloud service built on an open standard that lets AI agents seamlessly access the rich security context in your Sentinel data lake. Recent advances in large language models have enabled AI agents to perform reasoning — breaking down complex tasks, inferring patterns, and planning multistep actions — making them capable of autonomously performing business processes. To unlock this potential in cybersecurity, agents must operate with your organization’s real security context, not just public training data.
The Sentinel MCP server solves that by providing standardized, secure access to that context — across graph relationships, tabular telemetry, and vector embeddings — via reusable, natural language tools, enabling security teams to unlock the full potential of AI-driven automation and focus on what matters most.

Why Model Context Protocol (MCP)?

Model Context Protocol (MCP) is a rapidly growing open standard that allows AI models to securely communicate with external applications, services, and data sources through a well-structured interface. Think of MCP as a bridge that lets an AI agent understand and invoke an application’s capabilities. These capabilities are exposed as discrete “tools” with natural language inputs and outputs. The AI agent can autonomously choose the right tool (or combination of tools) for the task it needs to accomplish.

In simpler terms, MCP standardizes how an AI talks to systems. Instead of developers writing custom connectors for each application, the MCP server presents a menu of available actions to the AI in a language it understands. This means an AI agent can discover what it can do (search data, run queries, trigger actions, etc.) and then execute those actions safely and intelligently. By adopting an open standard like MCP, Microsoft is ensuring that our AI integrations are interoperable and future-proof. Any AI application that speaks MCP can connect: Security Copilot offers built-in integration, while other MCP-compatible platforms can quickly connect to your Sentinel data and services by simply adding a new MCP server and entering Sentinel’s MCP server URL.

How to get started

The Sentinel MCP server is a fully managed service now available to all Sentinel data lake customers. If you are already onboarded to Sentinel data lake, you are ready to begin using MCP. Not using Sentinel data lake yet? Learn more here. Currently, you can connect to the Sentinel MCP server using Visual Studio Code (VS Code) with the GitHub Copilot add-on.
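For teams that prefer a declarative setup, the same connection can be captured in a workspace `.vscode/mcp.json` file. This is a sketch assuming VS Code's MCP configuration format; the server name is arbitrary, and the URL is the Sentinel MCP server endpoint referenced in the steps that follow.

```json
{
  "servers": {
    "sentinel-data-exploration": {
      "type": "http",
      "url": "https://sentinel.microsoft.com/mcp/data-exploration"
    }
  }
}
```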
Here’s a step-by-step guide:

1. Open VS Code and authenticate with an account that has at least the Security Reader role (required to query the data lake via the Sentinel MCP server).
2. Open the Command Palette in VS Code (Ctrl + Shift + P).
3. Type or select “MCP: Add Server…”.
4. Choose “HTTP” (HTTP or Server-Sent Events).
5. Enter the Sentinel MCP server URL: https://sentinel.microsoft.com/mcp/data-exploration
6. When prompted, allow authentication with the Sentinel MCP server by clicking “Allow”.

Once connected, GitHub Copilot will be linked to the Sentinel MCP server. Open the agent pane, set it to Agent mode (Ctrl + Shift + I), and you are ready to go. GitHub Copilot will autonomously identify and utilize Sentinel MCP tools as necessary. You can now experience how AI agents powered by the Sentinel MCP server access and utilize your security context using natural language — without knowing KQL, which tables to query, or how to wrangle complex schemas. Try prompts like:

- “Find the top 3 users that are at risk and explain why they are at risk.”
- “Find sign-in failures in the last 24 hours and give me a brief summary of key findings.”
- “Identify devices that showed an outstanding amount of outgoing network connections.”

To learn more about the existing capabilities of Sentinel MCP tools, refer to our documentation. Security Copilot will also feature native integration with the Sentinel MCP server; this seamless connectivity will enhance autonomous security agents and the open prompting experience. Check out the Security Copilot blog for additional details.

What’s coming next?

The public preview of the Sentinel MCP server marks the beginning of a new era in cybersecurity — one where AI agents operate with full context, precision, and autonomy. In the coming months, the MCP toolset will expand to support natural language access across tabular, graph, and embedding-based data, enabling agents to reason over both structured and unstructured signals.
This evolution will dramatically boost agentic performance, allowing agents to handle more complex tasks, surface deeper insights, and accelerate protection to machine speed. As we continue to build out this ecosystem, your feedback will be essential in shaping its future. Be sure to check out Microsoft Secure for a deeper dive and live demo of the Sentinel MCP server in action.