enterprise security
43 TopicsAnnouncing public preview of custom graphs in Microsoft Sentinel
Security attacks span identities, devices, resources, and activity, making it critical to understand how these elements connect to expose real risk. In November, we shared how Sentinel graph brings these signals together into a relationship-aware view to help uncover hidden security risks. We’re excited to announce the public preview of custom graphs in Sentinel, available starting April 1 st . Custom graphs let defenders model relationships that are unique to their organization, then run graph analytics to surface blast radius, attack paths, privilege chains, chokepoints, and anomalies that are difficult to spot in tables alone. In this post, we’ll cover what custom graphs are, how they work, and how to get started so the entire team can use them. Custom graphs Security data is inherently connected: a sign-in leads to a token, a token touches a workload, a workload accesses data, and data movement triggers new activity. Graphs represent these relationships as nodes (entities) and edges (relationships), helping you answer questions like: “Who received the phishing email, who clicked, and which clicks were allowed by the proxy?” or “Show me users who exported notebooks, staged files in storage, then uploaded data to personal cloud storage- the full, three‑phase exfiltration chain through one identity.” With custom graphs, security teams can build, query, and visualize tailored security graphs using data from the Sentinel data lake and non-Microsoft sources, powered by Fabric. By uncovering hidden patterns and attack paths, graphs provide the relationship context needed to surface real risk. This context strengthens AI‑powered agent experiences, speeds investigations, clarifies blast radius, and helps teams move from noisy, disconnected alerts to confident decisions. In the words of our preview customers: “We ingested our Databricks management-plane telemetry into the Sentinel data lake and built a custom security graph. Without writing a single detection rule, the graph surfaced unusual patterns of activity and overprivileged access that we escalated for investigation. We didn't know what we were looking for, the graph surfaced the risk for us by revealing anomalous activity patterns and unusual access combinations driven by relationships, not alerts.” – SVP, Security Solutions | Financial Services organization Use cases Sentinel graph offers embedded, Microsoft managed, security graphs in Defender and Microsoft Purview experiences to help you at every stage of defense, from pre-breach to post-breach and across assets, activities, and threat intelligence. See here for more details. The new custom graph capability gives you full control to create your own graphs combining data from Microsoft sources, non-Microsoft sources, and federated sources in the Sentinel data lake. With custom graphs you can: Understand blast radius – Trace phishing campaigns, malware spread, OAuth abuse, or privilege escalation paths across identities, devices, apps, and data, without stitching together dozens of tables. Reconstruct real attack chains – Model multi-step attacker behavior (MITRE techniques, lateral movement, before/after malware) as connected sequences so investigations are complete and explainable, not a set of partial pivots. Reconstruct these chains from historical data in the Sentinel data lake. Figure 2: Drill into which specific MITRE techniques each IP is executing and in which tactic category Spot hidden risks and anomalies – Detect structural outliers like users with unusually broad access, anomalous email exfiltration, or dangerous permission combinations that are invisible in flat logs. Figure 3: OAuth consent chain – a single compromised user consented four dangerous permissions Creating custom graph Using the Sentinel VS Code extension, you can generate graphs to validate hunting hypotheses, such as understanding attack paths and blast radius of a phishing campaign, reconstructing multi‑step attack chains, and identifying structurally unusual or high‑risk behavior, making it accessible to your team and AI agents. Once persisted via a schedule job, you can access these custom graphs from the ready-to-use section in the graphs section in the Defender portal. Figure 4: Use AI-assisted vibe coding in Visual Studio Code to create tailored security graphs powered by Sentinel data lake and Fabric Graphs experience in the Microsoft Defender portal After creating your custom graphs, you can access them in the Graphs section of the Microsoft Defender portal under Sentinel. From there, you can perform interactive, graph-based investigations, for example, using a graph built for phishing analysis to quickly evaluate the impact of a recent incident, profile the attacker, and trace paths across Microsoft telemetry and third-party data. The graph experience lets you run Graph Query Language (GQL) queries, view the graph schema, visualize results, see results in a table, and interactively traverse to the next hop with a single click. Figure 5: Query, visualize, and traverse custom graphs with the new graph experience in Sentinel Billing Custom graph API usage for creating graph and querying graph is billed according to the Sentinel graph meter. Get started To use custom graphs, you’ll need Microsoft Sentinel data lake enabled in your tenant, since the lake provides the scalable, open-format foundation that custom graphs build on. Use the Sentinel data lake onboarding flow to provision the data lake if it isn’t already enabled. Ensure the required connectors are configured to populate your data lake. See Manage data tiers and retention in Microsoft Sentinel | Microsoft Learn. Create and persist a custom graph. See Get started with custom graphs in Microsoft Sentinel (preview) | Microsoft Learn. Run adhoc graph queries and visualize graph results. See Visualize custom graphs in Microsoft Sentinel graph (preview) | Microsoft Learn. [Optional] Schedule jobs to write graph query results to the lake tier and analytics tier using notebooks. See Exploring and interacting with lake data using Jupyter Notebooks - Microsoft Security | Microsoft Learn. Learn more Earlier posts (Sentinel graph general availability) RSAC 2026 announcement roundup Custom graphs documentation Custom graph billingSecurity as the core primitive - Securing AI agents and apps
This week at Microsoft Ignite, we shared our vision for Microsoft security -- In the agentic era, security must be ambient and autonomous, like the AI it protects. It must be woven into and around everything we build—from silicon to OS, to agents, apps, data, platforms, and clouds—and throughout everything we do. In this blog, we are going to dive deeper into many of the new innovations we are introducing this week to secure AI agents and apps. As I spend time with our customers and partners, there are four consistent themes that have emerged as core security challenges to secure AI workloads. These are: preventing agent sprawl and access to resources, protecting against data oversharing and data leaks, defending against new AI threats and vulnerabilities, and adhering to evolving regulations. Addressing these challenges holistically requires a coordinated effort across IT, developers, and security leaders, not just within security teams and to enable this, we are introducing several new innovations: Microsoft Agent 365 for IT, Foundry Control Plane in Microsoft Foundry for developers, and the Security Dashboard for AI for security leaders. In addition, we are releasing several new purpose-built capabilities to protect and govern AI apps and agents across Microsoft Defender, Microsoft Entra, and Microsoft Purview. Observability at every layer of the stack To facilitate the organization-wide effort that it takes to secure and govern AI agents and apps – IT, developers, and security leaders need observability (security, management, and monitoring) at every level. IT teams need to enable the development and deployment of any agent in their environment. To ensure the responsible and secure deployment of agents into an organization, IT needs a unified agent registry, the ability to assign an identity to every agent, manage the agent’s access to data and resources, and manage the agent’s entire lifecycle. In addition, IT needs to be able to assign access to common productivity and collaboration tools, such as email and file storage, and be able to observe their entire agent estate for risks such as over-permissioned agents. Development teams need to build and test agents, apply security and compliance controls by default, and ensure AI models are evaluated for safety guardrails and security vulnerabilities. Post deployment, development teams must observe agents to ensure they are staying on task, accessing applications and data sources appropriately, and operating within their cost and performance expectations. Security & compliance teams must ensure overall security of their AI estate, including their AI infrastructure, platforms, data, apps, and agents. They need comprehensive visibility into all their security risks- including agent sprawl and resource access, data oversharing and leaks, AI threats and vulnerabilities, and complying with global regulations. They want to address these risks by extending their existing security investments that they are already invested in and familiar with, rather than using siloed or bolt-on tools. These teams can be most effective in delivering trustworthy AI to their organizations if security is natively integrated into the tools and platforms that they use every day, and if those tools and platforms share consistent security primitives such as agent identities from Entra; data security and compliance controls from Purview; and security posture, detections, and protections from Defender. With the new capabilities being released today, we are delivering observability at every layer of the AI stack, meeting IT, developers, and security teams where they are in the tools they already use to innovate with confidence. For IT Teams - Introducing Microsoft Agent 365, the control plane for agents, now in preview The best infrastructure for managing your agents is the one you already use to manage your users. With Agent 365, organizations can extend familiar tools and policies to confidently deploy and secure agents, without reinventing the wheel. By using the same trusted Microsoft 365 infrastructure, productivity apps, and protections, organizations can now apply consistent and familiar governance and security controls that are purpose-built to protect against agent-specific threats and risks. gement and governance of agents across organizations Microsoft Agent 365 delivers a unified agent Registry, Access Control, Visualization, Interoperability, and Security capabilities for your organization. These capabilities work together to help organizations manage agents and drive business value. The Registry powered by the Entra provides a complete and unified inventory of all the agents deployed and used in your organization including both Microsoft and third-party agents. Access Control allows you to limit the access privileges of your agents to only the resources that they need and protect their access to resources in real time. Visualization gives organizations the ability to see what matters most and gain insights through a unified dashboard, advanced analytics, and role-based reporting. Interop allows agents to access organizational data through Work IQ for added context, and to integrate with Microsoft 365 apps such as Outlook, Word, and Excel so they can create and collaborate alongside users. Security enables the proactive detection of vulnerabilities and misconfigurations, protects against common attacks such as prompt injections, prevents agents from processing or leaking sensitive data, and gives organizations the ability to audit agent interactions, assess compliance readiness and policy violations, and recommend controls for evolving regulatory requirements. Microsoft Agent 365 also includes the Agent 365 SDK, part of Microsoft Agent Framework, which empowers developers and ISVs to build agents on their own AI stack. The SDK enables agents to automatically inherit Microsoft's security and governance protections, such as identity controls, data security policies, and compliance capabilities, without the need for custom integration. For more details on Agent 365, read the blog here. For Developers - Introducing Microsoft Foundry Control Plane to observe, secure and manage agents, now in preview Developers are moving fast to bring agents into production, but operating them at scale introduces new challenges and responsibilities. Agents can access tools, take actions, and make decisions in real time, which means development teams must ensure that every agent behaves safely, securely, and consistently. Today, developers need to work across multiple disparate tools to get a holistic picture of the cybersecurity and safety risks that their agents may have. Once they understand the risk, they then need a unified and simplified way to monitor and manage their entire agent fleet and apply controls and guardrails as needed. Microsoft Foundry provides a unified platform for developers to build, evaluate and deploy AI apps and agents in a responsible way. Today we are excited to announce that Foundry Control Plane is available in preview. This enables developers to observe, secure, and manage their agent fleets with built-in security, and centralized governance controls. With this unified approach, developers can now identify risks and correlate disparate signals across their models, agents, and tools; enforce consistent policies and quality gates; and continuously monitor task adherence and runtime risks. Foundry Control Plane is deeply integrated with Microsoft’s security portfolio to provide a ‘secure by design’ foundation for developers. With Microsoft Entra, developers can ensure an agent identity (Agent ID) and access controls are built into every agent, mitigating the risk of unmanaged agents and over permissioned resources. With Microsoft Defender built in, developers gain contextualized alerts and posture recommendations for agents directly within the Foundry Control Plane. This integration proactively prevents configuration and access risks, while also defending agents from runtime threats in real time. Microsoft Purview’s native integration into Foundry Control Plane makes it easy to enable data security and compliance for every Foundry-built application or agent. This allows Purview to discover data security and compliance risks and apply policies to prevent user prompts and AI responses from safety and policy violations. In addition, agent interactions can be logged and searched for compliance and legal audits. This integration of the shared security capabilities, including identity and access, data security and compliance, and threat protection and posture ensures that security is not an afterthought; it’s embedded at every stage of the agent lifecycle, enabling you to start secure and stay secure. For more details, read the blog. For Security Teams - Introducing Security Dashboard for AI - unified risk visibility for CISOs and AI risk leaders, coming soon AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions across their entire AI estate, spanning AI platforms, apps, and agents. 90% of security professionals, including CISOs, report that their responsibilities have expanded to include data governance and AI oversight within the past year. 1 At the same time, 86% of risk managers say disconnected data and systems lead to duplicated efforts and gaps in risk coverage. 2 To address these needs, we are excited to introduce the Security Dashboard for AI. This serves as a unified dashboard that aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview. This unified dashboard allows CISOs and AI risk leaders to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. For example, you can see your full AI inventory and get visibility into a quarantined agent, flagged for high data risk due to oversharing sensitive information in Purview. The dashboard then correlates that signal with identity insights from Entra and threat protection alerts from Defender to provide a complete picture of exposure. From there, you can delegate tasks to the appropriate teams to enforce policies and remediate issues quickly. With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, there’s nothing new to buy. If you’re already using Microsoft security products to secure AI, you’re already a Security Dashboard for AI customer. Figure 5: Security Dashboard for AI provides CISOs and AI risk leaders with a unified view of their AI risk by bringing together their AI inventory, AI risk, and security recommendations to strengthen overall posture Together, these innovations deliver observability and security across IT, development, and security teams, powered by Microsoft’s shared security capabilities. With Microsoft Agent 365, IT teams can manage and secure agents alongside users. Foundry Control Plane gives developers unified governance and lifecycle controls for agent fleets. Security Dashboard for AI provides CISOs and AI risk leaders with a consolidated view of AI risks across platforms, apps, and agents. Added innovation to secure and govern your AI workloads In addition to the IT, developer, and security leader-focused innovations outlined above, we continue to accelerate our pace of innovation in Microsoft Entra, Microsoft Purview, and Microsoft Defender to address the most pressing needs for securing and governing your AI workloads. These needs are: Manage agent sprawl and resource access e.g. managing agent identity, access to resources, and permissions lifecycle at scale Prevent data oversharing and leaks e.g. protecting sensitive information shared in prompts, responses, and agent interactions Defend against shadow AI, new threats, and vulnerabilities e.g. managing unsanctioned applications, preventing prompt injection attacks, and detecting AI supply chain vulnerabilities Enable AI governance for regulatory compliance e.g. ensuring AI development, operations, and usage comply with evolving global regulations and frameworks Manage agent sprawl and resource access 76% of business leaders expect employees to manage agents within the next 2–3 years. 3 Widespread adoption of agents is driving the need for visibility and control, which includes the need for a unified registry, agent identities, lifecycle governance, and secure access to resources. Today, Microsoft Entra provides robust identity protection and secure access for applications and users. However, organizations lack a unified way to manage, govern, and protect agents in the same way they manage their users. Organizations need a purpose-built identity and access framework for agents. Introducing Microsoft Entra Agent ID, now in preview Microsoft Entra Agent ID offers enterprise-grade capabilities that enable organizations to prevent agent sprawl and protect agent identities and their access to resources. These new purpose-built capabilities enable organizations to: Register and manage agents: Get a complete inventory of the agent fleet and ensure all new agents are created with an identity built-in and are automatically protected by organization policies to accelerate adoption. Govern agent identities and lifecycle: Keep the agent fleet under control with lifecycle management and IT-defined guardrails for both agents and people who create and manage them. Protect agent access to resources: Reduce risk of breaches, block risky agents, and prevent agent access to malicious resources with conditional access and traffic inspection. Agents built in Microsoft Copilot Studio, Microsoft Foundry, and Security Copilot get an Entra Agent ID built-in at creation. Developers can also adopt Entra Agent ID for agents they build through Microsoft Agent Framework, Microsoft Agent 365 SDK, or Microsoft Entra Agent ID SDK. Read the Microsoft Entra blog to learn more. Prevent data oversharing and leaks Data security is more complex than ever. Information Security Media Group (ISMG) reports that 80% of leaders cite leakage of sensitive data as their top concern. 4 In addition to data security and compliance risks of generative AI (GenAI) apps, agents introduces new data risks such as unsupervised data access, highlighting the need to protect all types of corporate data, whether it is accessed by employees or agents. To mitigate these risks, we are introducing new Microsoft Purview data security and compliance capabilities for Microsoft 365 Copilot and for agents and AI apps built with Copilot Studio and Microsoft Foundry, providing unified protection, visibility, and control for users, AI Apps, and Agents. New Microsoft Purview controls safeguard Microsoft 365 Copilot with real-time protection and bulk remediation of oversharing risks Microsoft Purview and Microsoft 365 Copilot deliver a fully integrated solution for protecting sensitive data in AI workflows. Based on ongoing customer feedback, we’re introducing new capabilities to deliver real-time protection for sensitive data in M365 Copilot and accelerated remediation of oversharing risks: Data risk assessments: Previously, admins could monitor oversharing risks such as SharePoint sites with unprotected sensitive data. Now, they can perform item-level investigations and bulk remediation for overshared files in SharePoint and OneDrive to quickly reduce oversharing exposure. Data Loss Prevention (DLP) for M365 Copilot: DLP previously excluded files with sensitivity labels from Copilot processing. Now in preview, DLP also prevents prompts that include sensitive data from being processed in M365 Copilot, Copilot Chat, and Copilot agents, and prevents Copilot from using sensitive data in prompts for web grounding. Priority cleanup for M365 Copilot assets: Many organizations have org-wide policies to retain or delete data. Priority cleanup, now generally available, lets admins delete assets that are frequently processed by Copilot, such as meeting transcripts and recordings, on an independent schedule from the org-wide policies while maintaining regulatory compliance. On-demand classification for meeting transcripts: Purview can now detect sensitive information in meeting transcripts on-demand. This enables data security admins to apply DLP policies and enforce Priority cleanup based on the sensitive information detected. & bulk remediation Read the full Data Security blog to learn more. Introducing new Microsoft Purview data security capabilities for agents and apps built with Copilot Studio and Microsoft Foundry, now in preview Microsoft Purview now extends the same data security and compliance for users and Copilots to agents and apps. These new capabilities are: Enhanced Data Security Posture Management: A centralized DSPM dashboard that provides observability, risk assessment, and guided remediation across users, AI apps, and agents. Insider Risk Management (IRM) for Agents: Uniquely designed for agents, using dedicated behavioral analytics, Purview dynamically assigns risk levels to agents based on their risky handing of sensitive data and enables admins to apply conditional policies based on that risk level. Sensitive data protection with Azure AI Search: Azure AI Search enables fast, AI-driven retrieval across large document collections, essential for building AI Apps. When apps or agents use Azure AI Search to index or retrieve data, Purview sensitivity labels are preserved in the search index, ensuring that any sensitive information remains protected under the organization’s data security & compliance policies. For more information on preventing data oversharing and data leaks - Learn how Purview protects and governs agents in the Data Security and Compliance for Agents blog. Defend against shadow AI, new threats, and vulnerabilities AI workloads are subject to new AI-specific threats like prompt injections attacks, model poisoning, and data exfiltration of AI generated content. Although security admins and SOC analysts have similar tasks when securing agents, the attack methods and surfaces differ significantly. To help customers defend against these novel attacks, we are introducing new capabilities in Microsoft Defender that deliver end-to-end protection, from security posture management to runtime defense. Introducing Security Posture Management for agents, now in preview As organizations adopt AI agents to automate critical workflows, they become high-value targets and potential points of compromise, creating a critical need to ensure agents are hardened, compliant, and resilient by preventing misconfigurations and safeguarding against adversarial manipulation. Security Posture Management for agents in Microsoft Defender now provides an agent inventory for security teams across Microsoft Foundry and Copilot Studio agents. Here, analysts can assess the overall security posture of an agent, easily implement security recommendations, and identify vulnerabilities such as misconfigurations and excessive permissions, all aligned to the MITRE ATT&CK framework. Additionally, the new agent attack path analysis visualizes how an agent’s weak security posture can create broader organizational risk, so you can quickly limit exposure and prevent lateral movement. Introducing Threat Protection for agents, now in preview Attack techniques and attack surfaces for agents are fundamentally different from other assets in your environment. That’s why Defender is delivering purpose-built protections and detections to help defend against them. Defender is introducing runtime protection for Copilot Studio agents that automatically block prompt injection attacks in real time. In addition, we are announcing agent-specific threat detections for Copilot Studio and Microsoft Foundry agents coming soon. Defender automatically correlates these alerts with Microsoft’s industry-leading threat intelligence and cross-domain security signals to deliver richer, contextualized alerts and security incident views for the SOC analyst. Defender’s risk and threat signals are natively integrated into the new Microsoft Foundry Control Plane, giving development teams full observability and the ability to act directly from within their familiar environment. Finally, security analysts will be able to hunt across all agent telemetry in the Advanced Hunting experience in Defender, and the new Agent 365 SDK extends Defender’s visibility and hunting capabilities to third-party agents, starting with Genspark and Kasisto, giving security teams even more coverage across their AI landscape. To learn more about how you can harden the security posture of your agents and defend against threats, read the Microsoft Defender blog. Enable AI governance for regulatory compliance Global AI regulations like the EU AI Act and NIST AI RMF are evolving rapidly; yet, according to ISMG, 55% of leaders report lacking clarity on current and future AI regulatory requirements. 5 As enterprises adopt AI, they must ensure that their AI innovation aligns with global regulations and standards to avoid costly compliance gaps. Introducing new Microsoft Purview Compliance Manager capabilities to stay ahead of evolving AI regulations, now in preview Today, Purview Compliance Manager provides over 300 pre-built assessments for common industry, regional, and global standards and regulations. However, the pace of change for new AI regulations requires controls to be continuously re-evaluated and updated so that organizations can adapt to ongoing changes in regulations and stay compliant. To address this need, Compliance Manager now includes AI-powered regulatory templates. AI-powered regulatory templates enable real-time ingestion and analysis of global regulatory documents, allowing compliance teams to quickly adapt to changes as they happen. As regulations evolve, the updated regulatory documents can be uploaded to Compliance Manager, and the new requirements are automatically mapped to applicable recommended actions to implement controls across Microsoft Defender, Microsoft Entra, Microsoft Purview, Microsoft 365, and Microsoft Foundry. Automated actions by Compliance Manager further streamline governance, reduce manual workload, and strengthen regulatory accountability. Introducing expanded Microsoft Purview compliance capabilities for agents and AI apps now in preview Microsoft Purview now extends its compliance capabilities across agent-generated interactions, ensuring responsible use and regulatory alignment as AI becomes deeply embedded across business processes. New capabilities include expanded coverage for: Audit: Surface agent interactions, lifecycle events, and data usage with Purview Audit. Unified audit logs across user and agent activities, paired with traceability for every agent using an Entra Agent ID, support investigation, anomaly detection, and regulatory reporting. Communication Compliance: Detect prompts sent to agents and agent-generated responses containing inappropriate, unethical, or risky language, including attempts to manipulate agents into bypassing policies, generating risky content, or producing noncompliant outputs. When issues arise, data security admins get full context, including the prompt, the agent’s output, and relevant metadata, so they can investigate and take corrective action Data Lifecycle Management: Apply retention and deletion policies to agent-generated content and communication flows to automate lifecycle controls and reduce regulatory risk. Read about Microsoft Purview data security for agents to learn more. Finally, we are extending our data security, threat protection, and identity access capabilities to third-party apps and agents via the network. Advancing Microsoft Entra Internet Access Secure Web + AI Gateway - extend runtime protections to the network, now in preview Microsoft Entra Internet Access, part of the Microsoft Entra Suite, has new capabilities to secure access to and usage of GenAI at the network level, marking a transition from Secure Web Gateway to Secure Web and AI Gateway. Enterprises can accelerate GenAI adoption while maintaining compliance and reducing risk, empowering employees to experiment with new AI tools safely. The new capabilities include: Prompt injection protection which blocks malicious prompts in real time by extending Azure AI Prompt Shields to the network layer. Network file filtering which extends Microsoft Purview to inspect files in transit and prevents regulated or confidential data from being uploaded to unsanctioned AI services. Shadow AI Detection that provides visibility into unsanctioned AI applications through Cloud Application Analytics and Defender for Cloud Apps risk scoring, empowering security teams to monitor usage trends, apply Conditional Access, or block high-risk apps instantly. Unsanctioned MCP server blocking prevents access to MCP servers from unauthorized agents. With these controls, you can accelerate GenAI adoption while maintaining compliance and reducing risk, so employees can experiment with new AI tools safely. Read the Microsoft Entra blog to learn more. As AI transforms the enterprise, security must evolve to meet new challenges—spanning agent sprawl, data protection, emerging threats, and regulatory compliance. Our approach is to empower IT, developers, and security leaders with purpose-built innovations like Agent 365, Foundry Control Plane, and the Security Dashboard for AI. These solutions bring observability, governance, and protection to every layer of the AI stack, leveraging familiar tools and integrated controls across Microsoft Defender, Microsoft Entra, and Microsoft Purview. The future of security is ambient, autonomous, and deeply woven into the fabric of how we build, deploy, and govern AI systems. Explore additional resources Learn more about Security for AI solutions on our webpage Learn more about Microsoft Agent 365 Learn more about Microsoft Entra Agent ID Get started with Microsoft 365 Copilot Get started with Microsoft Copilot Studio Get started with Microsoft Foundry Get started with Microsoft Defender for Cloud Get started with Microsoft Entra Get started with Microsoft Purview Get started with Microsoft Purview Compliance Manager Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial 1 Bedrock Security, 2025 Data Security Confidence Index, published Mar 17, 2025. 2 AuditBoard & Ascend2, Connected Risk Report 2024; as cited by MIT Sloan Management Review, Spring 2025. 3 KPMG AI Quarterly Pulse Survey | Q3 2025. September 2025. n= 130 U.S.-based C-suite and business leaders representing organizations with annual revenue of $1 billion or more 4 First Annual Generative AI study: Business Rewards vs. Security Risks, , Q3 2023, ISMG, N=400 5 First Annual Generative AI study: Business Rewards vs. Security Risks, Q3 2023, ISMG, N=400Building Secure, Enterprise Ready AI Agents with Purview SDK and Agent Framework
At Microsoft Ignite, we announced the public preview of Purview integration with the Agent Framework SDK—making it easier to build AI agents that are secure, compliant, and enterprise‑ready from day one. AI agents are quickly moving from demos to production. They reason over enterprise data, collaborate with other agents, and take real actions. As that happens, one thing becomes non‑negotiable: Governance has to be built in. That’s where Purview SDK comes in. Agentic AI Changes the Security Model Traditional apps expose risks at the UI or API layer. AI agents are different. Agents can: Process sensitive enterprise data in prompts and responses Collaborate with other agents across workflows Act autonomously on behalf of users Without built‑in controls, even a well‑designed agent can create compliance gaps. Purview SDK brings Microsoft’s enterprise data security and compliance directly into the agent runtime, so governance travels with the agent—not after it. What You Get with Purview SDK + Agent Framework This integration delivers a few key things developers and enterprises care about most: Inline Data Protection Evaluate prompts and responses against Data Loss Prevention (DLP) policies in real time. Content can be allowed or blocked automatically. Built‑In Governance Send AI interactions to Purview for audit, eDiscovery, communication compliance, and lifecycle management—without custom plumbing. Enterprise‑Ready by Design Ship agents that meet enterprise security expectations from the start, not as a follow‑up project. All of this is done natively through Agent Framework middleware, so governance feels like part of the platform—not an add‑on. How Enforcement Works (Quickly) When an agent runs: Prompts and responses flow through the Agent Framework pipeline Purview SDK evaluates content against configured policies A decision is returned: allow, redact, or block Governance signals are logged for audit and compliance This same model works for: User‑to‑agent interactions Agent‑to‑agent communication Multi‑agent workflows Try It: Add Purview SDK in Minutes Here’s a minimal Python example using Agent Framework: That’s it! From that point on: Prompts and responses are evaluated against Purview policies setup within the enterprise tenant Sensitive data can be automatically blocked Interactions are logged for governance and audit Designed for Real Agent Systems Most production AI apps aren’t single‑agent systems. Purview SDK supports: Agent‑level enforcement for fine‑grained control Workflow‑level enforcement across orchestration steps Agent‑to‑agent governance to protect data as agents collaborate This makes it a natural fit for enterprise‑scale, multi‑agent architectures. Get Started Today You can start experimenting right away: Try the Purview SDK with Agent Framework Follow the Microsoft Learn docs to configure Purview SDK with Agent Framework. Explore the GitHub samples See examples of policy‑enforced agents in Python and .NET. Secure AI, Without Slowing It Down AI agents are quickly becoming production systems—not experiments. By integrating Purview SDK directly into the Agent Framework, Microsoft is making governance a default capability, not a deployment blocker. Build intelligent agents. Protect sensitive data. Scale with confidence.Strengthening your Security Posture with Microsoft Security Store Innovations at RSAC 2026
Security teams are facing more threats, more complexity, and more pressure to act quickly - without increasing risk or operational overhead. What matters is being able to find the right capability, deploy it safely, and use it where security work already happens. Microsoft Security Store was built with that goal in mind. It provides a single, trusted place to discover, purchase, and deploy Microsoft and partner-built security agents and solutions that extend Microsoft Security - helping you improve protection across SOC, identity, and data protection workflows. Today, the Security Store includes 75+ security agents and 115+ solutions from Microsoft and trusted partners - each designed to integrate directly into Microsoft Security experiences and meet enterprise security requirements. At RSAC 2026, we’re announcing capabilities that make it easier to turn security intent into action- by improving how you discover agents, how quickly you can put them to use, and how effectively you can apply them across workflows to achieve your security outcomes. Meet the Next Generation of Security Agents Security agents are becoming part of day-to-day operations for many teams - helping automate investigations, enrich signals, and reduce manual effort across common security tasks. Since Security Store became generally available, Microsoft and our partners have continued to expand the set of agents that integrate directly with Microsoft Defender, Sentinel, Entra, Purview, Intune and Security Copilot. Some of the notable partner-built agents available through Security Store include: XBOW Continuous Penetration Testing Agent XBOW’s penetration testing agents perform pen-tests, analyzes findings, and correlates those findings with a customer’s Microsoft Defender detections. XBOW integrates offensive security directly into Microsoft Security workflows by streaming validated, exploitable AppSec findings into Microsoft Sentinel and enabling investigation through XBOW's Copilot agents in Microsoft Defender. With XBOW’s pen-testing agents, offensive security can run continuously to identify which vulnerabilities are actually exploitable, and how to improve posture and detections. Tanium Incident Scoping Agent The Tanium Incident Scoping Agent (In Preview) is bringing real-time endpoint intelligence directly into Microsoft Defender and Microsoft Security Copilot workflows. The agent automatically scopes incidents, identifies impacted devices, and surfaces actionable context in minutes-helping teams move faster from detection to containment. By combining Tanium’s real-time intelligence with Microsoft Security investigations, you can reduce manual effort, accelerate response, and maintain enterprise-grade governance and control. Zscaler In Microsoft Sentinel, the Zscaler ZIA–ZPA Correlation Agent correlates ZIA and ZPA activity for a given user to speed malsite/malware investigations. It highlights suspicious patterns and recommends ZIA/ZPA policy changes to reduce repeat exposure. These agents build on a growing ecosystem of Microsoft and partner capabilities designed to work together, allowing you to extend Microsoft Security with specialized expertise where it has the most impact. Discover and Deploy Agents and Solutions in the Flow of Security Work Security teams work best when they don’t have to switch tools to make decisions. That’s why Security Store is embedded directly into Microsoft Security experiences - so you can discover and evaluate trusted agents and solutions in context, while working in the tools you already use. When Security Store became generally available, we embedded it into Microsoft Defender, allowing SOC teams to discover and deploy trusted Microsoft and partner‑built agents and solutions in the middle of active investigations. Analysts can now automate response, enrich investigations, and resolve threats all within the Defender portal. At RSAC, we’re expanding this approach across identity and data security. Strengthening Identity Security with Security Store in Microsoft Entra Identity has become a primary attack surface - from fraud and automated abuse to privileged access misuse and posture gaps. Security Store is now embedded in Microsoft Entra, allowing identity and security teams to discover and deploy partner solutions and agents directly within identity workflows. For external and verified identity scenarios, Security Store includes partner solutions that integrate with Entra External ID and Entra Verified ID to help protect against fraud, DDoS attacks, and intelligent bot abuse. These solutions, built by partners such as IDEMIA, AU10TIX, TrueCredential, HUMAN Security, Akamai and Arkose Labs help strengthen trust while preserving seamless user experiences. For enterprise identity security, more than 15 agents available through the Entra Security Store provide visibility into privileged activity and identity risk, posture health and trends, and actionable recommendations to improve identity security and overall security score. These agents are built by partners such as glueckkanja, adaQuest, Ontinue, BlueVoyant, Invoke, and Performanta. This allows you to extend Entra with specialized identity security capabilities, without leaving the identity control plane. Extending Data Protection with Security Store in Microsoft Purview Protecting sensitive data requires consistent controls across where data lives and how it moves. Security Store is now embedded in Microsoft Purview, enabling teams responsible for data protection and compliance to discover partner solutions directly within Purview DLP workflows. Through this experience, you can extend Microsoft Purview DLP with partner data security solutions that help protect sensitive data across cloud applications, enterprise browsers, and networks. These include solutions from Microsoft Entra Global Secure Access and partners such as Netskope, Island, iBoss, and Palo Alto Networks. This experience will be available to customers later this month, as reflected on the M365 roadmap. By discovering solutions in context, teams can strengthen data protection without disrupting established compliance workflows. Across Defender, Entra, and Purview, purchases continue to be completed through the Security Store website, ensuring a consistent, secure, and governed transaction experience - while discovery and evaluation happen exactly where teams already work. Outcome-Driven Discovery, with Security Store Advisor As the number of agents and solutions in the Store grow, finding the right fit for your security scenario quickly becomes more important. That’s why we’re introducing the AI‑guided Security Store Advisor, now generally available. You can describe your goal in natural language - such as “investigate suspicious network activity” and receive recommendations aligned to that outcome. Advisor also includes side-by-side comparison views for agents and solutions, helping you review capabilities, integrated services, and deployment requirements more quickly and reduce evaluation time. Security Store Advisor is designed with Responsible AI principles in mind, including transparency and explainability. You can learn more about how Responsible AI is applied in this experience in the Security Store Advisor Responsible AI FAQ. Overall, this outcome‑driven approach reduces time to value, improves solution fit, and helps your team move faster from intent to action. Learning from the Security Community with Ratings and Reviews Security decisions are strongest when informed by real world use cases. This is why we are introducing Security Store ratings and reviews from security professionals who have deployed and used agents and solutions in production environments. These reviews focus on practical considerations such as integration quality, operational impact, and ease of use, helping you learn from peers facing similar security challenges. By sharing feedback, the security community helps raise the bar for quality and enables faster, more informed decisions, so teams can adopt agents and solutions with greater confidence and reduce time to value. Making agents easier to use post deployment Once you’ve deployed your agents, we’re introducing several new capabilities that make it easier to work with your agents in your daily workflows. These updates help you operationalize agents faster and apply automation where it delivers real value. Interactive chat with agents in Microsoft Defender lets SOC analysts ask questions to agents with specialized expertise, such as understanding impacted devices or understanding what vulnerabilities to prioritize directly in the Defender portal. By bringing a conversational experience with agents into the place where analysts do most of their investigation work, analysts can seamlessly work in collaboration with agents to improve security. Logic App triggers for agents enables security teams to include security agents in their automated, repeatable workflows. With this update, organizations can apply agentic automation to a wider variety of security tasks while integrating with their existing tools and workflows to perform tasks like incident triage and access reviews. Product combinations in Security Store make it easier to deploy complete security solutions from a single streamlined flow - whether that includes connectors, SaaS tools, or multiple agents that need to work together. Increasingly, partners are building agents that are adept at using your SaaS security tools and security data to provide intelligent recommendations - this feature helps you deploy them faster with ease. A Growing Ecosystem Focused on Security Outcomes As the Security Store ecosystem continues to expand, you gain access to a broader set of specialized agents and solutions that work together to help defend your environment - extending Microsoft Security with partner innovation in a governed and integrated way. At the same time, Security Store provides partners a clear path to deliver differentiated capabilities directly into Microsoft Security workflows, aligned to how customers evaluate, adopt, and use security solutions. Get Started Visit https://securitystore.microsoft.com/ to discover security agents and solutions that meet your needs and extend your Microsoft Security investments. If you’re a partner, visit https://securitystore.microsoft.com/partners to learn how to list your solution or agent and reach customers where security decisions are made. Where to find us at RSAC 2026? Security Reborn in the Era of AI workshop Get hands‑on guidance on building and deploying Security Copilot agents and publishing them to the Security Store. March 23 | 8:00 AM | The Palace Hotel Register: Security Reborn in the Era of AI | Microsoft Corporate Microsoft Security Store: An Inside Look Join us for a live theater session exploring what’s coming next for Security Store March 26 | 1:00 PM | Microsoft Security Booth #5744 | North Expo Hall Visit us at the Booth Experience Security Store firsthand - test the experience and connect with experts. Microsoft Booth #1843Security Dashboard for AI - Now Generally Available
AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions across their entire AI estate, spanning AI platforms, apps, and agents. 53% of security professionals say their current AI risk management needs improvement, presenting an opportunity to better identify, assess and manage risk effectively. 1 At the same time, 86% of leaders prefer integrated platforms over fragmented tools, citing better visibility, fewer alerts and improved efficiency. 2 To address these needs, we are excited to announce the Security Dashboard for AI, previously announced at Microsoft Ignite, is now generally available. This unified dashboard aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview - enabling users to see left-to-right across purpose-built security tools from within a single pane of glass. The dashboard equips CISOs and AI risk leaders with a governance tool to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. Security teams can continue using the tools they trust while empowering security leaders to govern and collaborate effectively. Gain Unified AI Risk Visibility Consolidating risk signals from across purpose-built tools can simplify AI asset visibility and oversight, increase security teams’ efficiency, and reduce the opportunity for human error. The Security Dashboard for AI provides leaders with unified AI risk visibility by aggregating security, identity, and data risk across Defender, Entra, Purview into a single interactive dashboard experience. The Overview tab of the dashboard provides users with an AI risk scorecard, providing immediate visibility to where there may be risks for security teams to address. It also assesses an organization's implementation of Microsoft security for AI capabilities and provides recommendations for improving AI security posture. The dashboard also features an AI inventory with comprehensive views to support AI assets discovery, risk assessments, and remediation actions for broad coverage of AI agents, models, MCP servers, and applications. The dashboard provides coverage for all Microsoft AI solutions supported by Entra, Defender and Purview—including Microsoft 365 Copilot, Microsoft Copilot Studio agents, and Microsoft Foundry applications and agents—as well as third-party AI models, applications, and agents, such as Google Gemini, OpenAI ChatGPT, and MCP servers. This supports comprehensive visibility and control, regardless of where applications and agents are built. Prioritize Critical Risk with Security Copilots AI-Powered Insights Risk leaders must do more than just recognize existing risks—they also need to determine which ones pose the greatest threat to their business. The dashboard provides a consolidated view of AI-related security risks and leverages Security Copilot’s AI-powered insights to help find the most critical risks within an environment. For example, Security Copilot natural language interaction improves agent discovery and categorization, helping leaders identify unmanaged and shadow AI agents to enhance security posture. Furthermore, Security Copilot allows leaders to investigate AI risks and agent activities through prompt-based exploration, putting them in the driver’s seat for additional risk investigation. Drive Risk Mitigation By streamlining risk mitigation recommendations and automated task delegation, organizations can significantly improve the efficiency of their AI risk management processes. This approach can reduce the potential hidden AI risk and accelerate compliance efforts, helping to ensure that risk mitigation is timely and accurate. To address this, the Security Dashboard for AI evaluates how organizations put Microsoft’s AI security features into practice and offers tailored suggestions to strengthen AI security posture. It leverages Microsoft’s productivity tools for immediate action within the practitioner portal, making it easy for administrators to delegate recommendation tasks to designated users. With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, the Security Dashboard for AI is included with eligible Microsoft security products customers already use. If an organization is already using Microsoft security products to secure AI, they are already a Security Dashboard for AI customer. Getting Started Existing Microsoft Security customers can start using Security Dashboard for AI today. It is included when a customer has the Microsoft Security products—Defender, Entra and Purview—with no additional licensing required. To begin using the Security Dashboard for AI, visit http://ai.security.microsoft.com or access the dashboard from the Defender, Entra or Purview portals. Learn more about the Security Dashboard for AI at Microsoft Security MS Learn. 1AuditBoard & Ascend2 Research. The Connected Risk Report: Uniting Teams and Insights to Drive Organizational Resilience. AuditBoard, October 2024. 2Microsoft. 2026 Data Security Index: Unifying Data Protection and AI Innovation. Microsoft Security, 2026Understanding and mitigating security risks in MCP implementations
Introducing any new technology can introduce new security challenges or exacerbate existing security risks. In this blog post, we’re going to look at some of the security risks that could be introduced to your environment when using Model Context Protocol (MCP), and what controls you can put in place to mitigate them. MCP is a framework that enables seamless integration between LLM applications and various tools and data sources. MCP defines: A standardized way for AI models to request external actions through a consistent API Structured formats for how data should be passed to and from AI systems Protocols for how AI requests are processed, executed, and returned MCP allows different AI systems to use a common set of tools and patterns, ensuring consistent behavior when AI models interact with external systems. MCP architecture MCP follows a client-server architecture that allows AI models to interact with external tools efficiently. Here’s how it works: MCP Host – The AI model (e.g., Azure OpenAI GPT) requesting data or actions. MCP Client – An intermediary service that forwards the AI model's requests to MCP servers. MCP Server – Lightweight applications that expose specific capabilities (APIs, databases, files, etc.). Data Sources – Various backend systems, including local storage, cloud databases, and external APIs. MCP security controls Any system which has access to important resources has implied security challenges. Security challenges can generally be addressed through correct application of fundamental security controls and concepts. As MCP is only newly defined, the specification is changing very rapidly and as the protocol evolves. Eventually the security controls within it will mature, enabling a better integration with enterprise and established security architectures and best practices. Research published in the Microsoft Digital Defense Report states that 98% of reported breaches would be prevented by robust security hygiene and the best protection against any kind of breach is to get your baseline security hygiene, secure coding best practices and supply chain security right – those tried and tested practices that we already know about still make the most impact in reducing security risk. Let's look at some of the ways that you can start to address security risks when adopting MCP. MCP server authentication (if your MCP implementation was before 26th April 2025) Problem statement: The original MCP specification assumed that developers would write their own authentication server. This requires knowledge of OAuth and related security constraints. MCP servers acted as OAuth 2.0 Authorization Servers, managing the required user authentication directly rather than delegating it to an external service such as Microsoft Entra ID. As of 26 April 2025, an update to the MCP specification allows for MCP servers to delegate user authentication to an external service. Risks: Misconfigured authorization logic in the MCP server can lead to sensitive data exposure and incorrectly applied access controls. OAuth token theft on the local MCP server. If stolen, the token can then be used to impersonate the MCP server and access resources and data from the service that the OAuth token is for. Mitigating controls: Thoroughly review your MCP server authorization logic, here some posts discussing this in more detail - Azure API Management Your Auth Gateway For MCP Servers | Microsoft Community Hub and Using Microsoft Entra ID To Authenticate With MCP Servers Via Sessions · Den Delimarsky Implement best practices for token validation and lifetime Use secure token storage and encrypt tokens Excessive permissions for MCP servers Problem statement: MCP servers may have been granted excessive permissions to the service/resource they are accessing. For example, an MCP server that is part of an AI sales application connecting to an enterprise data store should have access scoped to the sales data and not allowed to access all the files in the store. Referencing back to the principle of least privilege (one of the oldest security principles), no resource should have permissions in excess of what is required for it to execute the tasks it was intended for. AI presents an increased challenge in this space because to enable it to be flexible, it can be challenging to define the exact permissions required. Risks: Granting excessive permissions can allow for exfiltration or amending data that the MCP server was not intended to be able to access. This could also be a privacy issue if the data is personally identifiable information (PII). Mitigating controls: Clearly define the permissions that the MCP server has to access the resource/service it connects to. These permissions should be the minimum required for the MCP server to access the tool or data it is connecting to. Indirect prompt injection attacks Problem statement: Researchers have shown that the Model Context Protocol (MCP) is vulnerable to a subset of Indirect Prompt Injection attacks known as Tool Poisoning Attacks. Tool poisoning is a scenario where an attacker embeds malicious instructions within the descriptions of MCP tools. These instructions are invisible to users but can be interpreted by the AI model and its underlying systems, leading to unintended actions that could ultimately lead to harmful outcomes. Risks: Unintended AI actions present a variety of security risks that include data exfiltration and privacy breaches. Mitigating controls: Implement AI prompt shields: in Azure AI Foundry, you can follow these steps to implement AI prompt shields. Implement robust supply chain security: you can read more about how Microsoft implements supply chain security internally here. Established security best practices that will uplift your MCP implementation’s security posture Any MCP implementation inherits the existing security posture of your organization's environment that it is built upon, so when considering the security of MCP as a component of your overall AI systems it is recommended that you look at uplifting your overall existing security posture. The following established security controls are especially pertinent: Secure coding best practices in your AI application - protect against the OWASP Top 10, the OWASP Top 10 for LLMs, use of secure vaults for secrets and tokens, implementing end-to-end secure communications between all application components, etc. Server hardening – use MFA where possible, keep patching up to date, integrate the server with a third party identity provider for access, etc. Keep devices, infrastructure and applications up to date with patches Security monitoring – implementing logging and monitoring of an AI application (including the MCP client/servers) and sending those logs to a central SIEM for detection of anomalous activities Zero trust architecture – isolating components via network and identity controls in a logical manner to minimize lateral movement if an AI application were compromised. Conclusion MCP is a promising development in the AI space that enables rich data and context access. As developers embrace this new approach to integrating their organization's APIs and connectors into LLMs, they need to be aware of security risks and how to implement controls to reduce those risks. There are mitigating security controls that can be put in place to reduce the risks inherent in the current specification, but as the protocol develops expect that some of the risks will reduce or disappear entirely. We encourage you to contribute to and suggest security related MCP RFCs to make this protocol even better! With thanks to OrinThomas, dasithwijes, dendeli and Peter Marcu for their inputs and collaboration on this post.Announcing AI Entity Analyzer in Microsoft Sentinel MCP Server - Public Preview
What is the Entity Analyzer? Assessing the risk of entities is a core task for SOC teams - whether triaging incidents, investigating threats, or automating response workflows. Traditionally, this has required building complex playbooks or custom logic to gather and analyze fragmented security data from multiple sources. With Entity Analyzer, this complexity starts to fade away. The tool leverages your organization’s security data in Sentinel to deliver comprehensive, reasoned risk assessments for any entity you encounter - starting with users and urls. By providing this unified, out-of-the-box solution for entity analysis, Entity Analyzer also enables the AI agents you build to make smarter decisions and automate more tasks - without the need to manually engineer risk evaluation logic for each entity type. And for those building SOAR workflows, Entity Analyzer is natively integrated with Logic Apps, making it easy to enrich incidents and automate verdicts within your playbooks. *Entity Analyzer is rolling out in Public Preview to Sentinel MCP server and within Logic Apps starting today. Learn more here. **Leave feedback on the Entity Analyzer here. Deep Dive: How the User Analyzer is already solving problems for security teams Problem: Drowning in identity alerts Security operations centers (SOCs) are inundated with identity-based threats and alert noise. Triaging these alerts requires analyzing numerous data sources across sign-in logs, cloud app events, identity info, behavior analytics, threat intel, and more, all in tandem with each other to reach a verdict - something very challenging to do without a human in the loop today. So, we introduced the User Analyzer, a specialized analyzer that unifies, correlates, and analyzes user activity across all these security data sources. Government of Nunavut: solving identity alert overload with User Analyzer Hear the below from Arshad Sheikh, Security Expert at Government of Nunavut, on how they're using the User Analyzer today: How it's making a difference "Before the User Analyzer, when we received identity alerts we had to check a large amount of data related to users’ activity (user agents, anomalies, IP reputation, etc.). We had to write queries, wait for them to run, and then manually reason over the results. We attempted to automate some of this, but maintaining and updating that retrieval, parsing, and reasoning automation was difficult and we didn’t have the resources to support it. With the User Analyzer, we now have a plug-and-play solution that represents a step toward the AI-driven automation of the future. It gathers all the context such as what the anomalies are and presents it to our analysts so they can make quick, confident decisions, eliminating the time previously spent manually gathering this data from portals." Solving a real problem "For example, every 24 hours we create a low severity incident of our users who successfully sign-in to our network non interactively from outside of our GEO fence. This type of activity is not high-enough fidelity to auto-disable, requiring us to manually analyze the flagged users each time. But with User Analyzer, this analysis is performed automatically. The User Analyzer has also significantly reduced the time required to determine whether identity-based incidents like these are false positives or true positives. Instead of spending around 20 minutes investigating each incident, our analysts can now reach a conclusion in about 5 minutes using the automatically generated summary." Looking ahead "Looking ahead, we see even more potential. In the future, the User Analyzer could be integrated directly with Microsoft Sentinel playbooks to take automated, definitive action such as blocking user or device access based on the analyzer’s results. This would further streamline our incident response and move us closer to fully automated security operations." Want similar benefits in your SOC? Get started with our Entity Analyzer Logic Apps template here. User Analyzer architecture: how does it work? Let’s take a look at how the User Analyzer works. The User Analyzer aggregates and correlates signals from multiple data sources to deliver a comprehensive analysis, enabling informed actions based on user activity. The diagram below gives an overview of this architecture: Step 1: Retrieve Data The analyzer starts by retrieving relevant data from the following sources: Sign-In Logs (Interactive & Non-Interactive): Tracks authentication and login activity. Security Alerts: Alerts from Microsoft Defender solutions. Behavior Analytics: Surfaces behavioral anomalies through advanced analytics. Cloud App Events: Captures activity from Microsoft Defender for Cloud Apps. Identity Information: Enriches user context with identity records. Microsoft Threat Intelligence: Enriches IP addresses with Microsoft Threat Intelligence. Steps 2: Correlate signals Signals are correlated using identifiers such as user IDs, IP addresses, and threat intelligence. Rather than treating each alert or behavior in isolation, the User Analyzer fuses signals to build a holistic risk profile. Step 3: AI-based reasoning In the User Analyzer, multiple AI-powered agents collaborate to evaluate the evidence and reach consensus. This architecture not only improves accuracy and reduces bias in verdicts, but also provides transparent, justifiable decisions. Leveraging AI within the User Analyzer introduces a new dimension of intelligence to threat detection. Instead of relying on static signatures or rigid regex rules, AI-based reasoning can uncover subtle anomalies that traditional detection methods and automation playbooks often miss. For example, an attacker might try to evade detection by slightly altering a user-agent string or by targeting and exfiltrating only a few files of specific types. While these changes could bypass conventional pattern matching, an AI-powered analyzer understands the semantic context and behavioral patterns behind these artifacts, allowing it to flag suspicious deviations even when the syntax looks benign. Step 4: Verdict & analysis Each user is given a verdict. The analyzer outputs any of the following verdicts based on the analysis: Compromised Suspicious activity found No evidence of compromise Based on the verdict, a corresponding recommendation is given. This helps teams make an informed decision whether action should be taken against the user. *AI-generated content from the User Analyzer may be incorrect - check it for accuracy. User Analyzer Example Output See the following example output from the user analyzer within an incident comment: *IP addresses have been redacted for this blog* &CK techniques, a list of malicious IP addresses the user signed in from (redacted for this blog), and a few suspicious user agents the user's activity originated from. typically have to query and analyze these themselves, feel more comfortable trusting its classification. The analyzer also gives recommendations to remediate the account compromise, and a list of data sources it used during analysis. Conclusion Entity Analyzer in Microsoft Sentinel MCP server represents a leap forward in alert triage & analysis. By correlating signals and harnessing AI-based reasoning, it empowers SOC teams to act on investigations with greater speed, precision, and confidence. *Leave feedback on the Entity Analyzer hereAlways‑on Diagnostics for Purview Endpoint DLP: Effortless, Zero‑Friction troubleshooting for admins
Historically, some security teams have struggled with the challenge of troubleshooting issues with endpoint DLP. Investigations can often slow down because reproducing issues, collecting traces, and aligning on context can be tedious. With always-on diagnostics in Purview endpoint data loss prevention (DLP), our goal has been simple: make troubleshooting seamless, and effortless—without ever disrupting the information worker. Today, we’re excited to share new enhancements to always-on diagnostics for Purview endpoint DLP. This is the next step in our journey to modernize supportability in Microsoft Purview and dramatically reduce admin friction during investigations. Where We Started: Introduction of continuous diagnostic collection Earlier this year, we introduced continuous diagnostic trace collection on Windows endpoints (support for macOS endpoints coming soon). This eliminated the single largest source of friction: the need to reproduce issues. With this capability: Logs are captured persistently for up to 90 days Information workers no longer need admin permissions to retrieve traces Admins can submit complete logs on the first attempt Support teams can diagnose transient or rare issues with high accuracy In just a few months, we saw resolution times drop dramatically. The message was clear: Always-on diagnostics is becoming a new troubleshooting standard. Our Newest Enhancements: Built for Admins. Designed for Zero Friction. The newest enhancements to always-on diagnostics unlock the most requested capability from our IT and security administrators: the ability to retrieve and upload always-on diagnostic traces directly from devices using the Purview portal — with no user interaction required. This means: Admins can now initiate trace uploads on demand No interruption to information workers and their productivity No issue reproduction sessions, minimizing unnecessary disruption and coordination Every investigation starts with complete context Because the traces are already captured on-device, these improvements now help complete the loop by giving admins a seamless, portal-integrated workflow to deliver logs to Microsoft when needed. This experience is now fully available for customers using endpoint DLP on Windows. Why This Matters As a product team, our success is measured not just by usage, but by how effectively we eliminate friction for customers. Always-on diagnostics minimizes the friction and frustration that has historically affected some customers. - No more asking your employee or information worker to "can you reproduce that?" and share logs - No more lost context - No more delays while logs are collected after the fact How it Works Local trace capture Devices continuously capture endpoint DLP diagnostic data in a compressed, proprietary format, and this data stays solely on the respective device based on the retention period and storage limits configured by the admin. Users no longer need to reproduce issues during retrieval—everything the investigation requires is already captured on the endpoint. Admin-triggered upload Admins can now request diagnostic uploads directly from the Purview portal, eliminating the need to disrupt users. Upload requests can be initiated from multiple entry points, including: Alerts (Data Loss Prevention → Alerts → Events) Activity Explorer (Data Loss Prevention → Explorers → Activity explorer) Device Policy Status Page (Settings → Device onboarding → Devices) From any of these locations, admins can simply choose Request device log, select the date range, add a brief description, and submit the request. Once processed, the device’s always-on diagnostic logs are securely uploaded to Microsoft telemetry as per customer-approved settings. Admins can include the upload request number in their ticket with Microsoft Support, and sharing this number removes the need for the support engineer to ask for logs again during the investigation. This workflow ensures investigations start with complete diagnostic context. Privacy & compliance considerations Data is only uploaded during admin-initiated investigations Data adheres to our published diagnostic data retention policies Logs are only accessible to the Microsoft support team, not any other parties We Want to Hear From You Are you using always-on diagnostics? We'd love to hear about your experience. Share your feedback, questions, or success stories in the Microsoft Tech Community, or reach out to our engineering team directly. Making troubleshooting effortless—so you can focus on what matters, not on chasing logs.Securing the AI Pipeline – From Data to Deployment
In our first post, we established why securing AI workloads is mission-critical for the enterprise. Now, we turn to the AI pipeline—the end-to-end journey from raw data to deployed models—and explore why every stage must be fortified against evolving threats. As organizations accelerate AI adoption, this pipeline becomes a prime target for adversaries seeking to poison data, compromise models, or exploit deployment endpoints. Enterprises don’t operate a single “AI system”; they run interconnected pipelines that transform data into decisions across a web of services, models, and applications. Protecting this chain demands a holistic security strategy anchored in Zero Trust for AI, supply chain integrity, and continuous monitoring. In this post, we map the pipeline, identify key attack vectors at each stage, and outline practical defenses using Microsoft’s security controls—spanning data governance with Purview, confidential training environments in Azure, and runtime threat detection with Defender for Cloud. Our guidance aligns with leading frameworks, including the NIST AI Risk Management Framework and MITRE ATLAS, ensuring your AI security program meets recognized standards while enabling innovation at scale. A Security View of the AI Pipeline Securing AI isn’t just about protecting a single model—it’s about safeguarding the entire pipeline that transforms raw data into actionable intelligence. This pipeline spans multiple stages, from data collection and preparation to model training, validation, and deployment, each introducing unique risks that adversaries can exploit. Data poisoning, model tampering, and supply chain attacks are no longer theoretical—they’re real threats that can undermine trust and compliance. By viewing the pipeline through a security lens, organizations can identify these vulnerabilities early and apply layered defenses such as Zero Trust principles, data lineage tracking, and runtime monitoring. This holistic approach ensures that AI systems remain resilient, auditable, and aligned with enterprise risk and regulatory requirements. Stages & Primary Risks Data Collection & Ingestion Sources: enterprise apps, data lakes, web, partners. Key risks: poisoning, PII leakage, weak lineage, and shadow datasets. Frameworks call for explicit governance and provenance at this earliest stage. [nist.gov] Data Prep & Feature Engineering Risks: backdoored features, bias injection, and transformation tampering that evades standard validation. ATLAS catalogs techniques that target data, features, and preprocessing. [atlas.mitre.org] Model Training / Fine‑Tuning Risks: model theft, inversion, poisoning, and compromised compute. Confidential computing and isolated training domains are recommended. [learn.microsoft.com] Validation & Red‑Team Testing Risks: tainted validation sets, overlooked LLM‑specific risks (prompt injection, unbounded consumption), and fairness drift. OWASP’s LLM Top 10 highlights the unique classes of generative threats. [owasp.org] Registry & Release Management Risks: supply chain tampering (malicious models, dependency confusion), unsigned artifacts, and missing SBOM/AIBOM. [codesecure.com], [github.com] Deployment & Inference Risks: adversarial inputs, API abuse, prompt injection (direct & indirect), data exfiltration, and model abuse at runtime. Microsoft has documented multi‑layer mitigations and integrated threat protection for AI workloads. [techcommun…rosoft.com], [learn.microsoft.com] Reference Architecture (Zero Trust for AI) The Reference Architecture for Zero Trust in AI establishes a security-first blueprint for the entire AI pipeline—from raw data ingestion to model deployment and continuous monitoring. Its importance lies in addressing the unique risks of AI systems, such as data poisoning, model tampering, and adversarial attacks, which traditional security models often overlook. By embedding Zero Trust principles at every stage—governance with Microsoft Purview, isolated training environments, signed model artifacts, and runtime threat detection—organizations gain verifiable integrity, regulatory compliance, and resilience against evolving threats. Adopting this architecture ensures that AI innovations remain trustworthy, auditable, and aligned with business and compliance objectives, ultimately accelerating adoption while reducing risk and safeguarding enterprise reputation. Below is a visual of what this architecture looks like: Why this matters: Microsoft Purview establishes provenance, labels, and lineage Azure ML enforces network isolation Confidential Computing protects data-in-use Responsible AI tooling addresses safety & fairness Defender for Cloud adds runtime AI‑specific threat detection Azure ML Model Monitoring closes the loop with drift and anomaly detection. [microsoft.com], [azure.microsoft.com], [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com] Stage‑by‑Stage Threats & Concrete Mitigations (with Microsoft Controls) Data Collection & Ingestion - Attack Scenarios Data poisoning via partner feed or web‑scraped corpus; undetected changes skew downstream models. Research shows Differential Privacy (DP) can reduce impact but is not a silver bullet. Differential Privacy introduces controlled noise into training data or model outputs, making it harder for attackers to infer individual data points and limiting the influence of any single poisoned record. This helps reduce the impact of targeted poisoning attacks because malicious entries cannot disproportionately affect the model’s parameters. However, DP is not sufficient on its own for several reasons: Aggregate poisoning still works: DP protects individual records, but if an attacker injects a large volume of poisoned data, the cumulative effect can still skew the model. Utility trade-offs: Adding noise to achieve strong privacy guarantees often degrades model accuracy, creating tension between security and performance. Doesn’t detect malicious intent: DP doesn’t validate data quality or provenance—it only limits exposure. Poisoned data can still enter the pipeline undetected. Vulnerable to sophisticated attacks: Techniques like backdoor poisoning or gradient manipulation can bypass DP protections because they exploit model behavior rather than individual record influence. Bottom line, DP is a valuable layer for privacy and resilience, but it must be combined with data validation, anomaly detection, and provenance checks to effectively mitigate poisoning risks. [arxiv.org], [dp-ml.github.io] Sensitive data drift into training corpus (PII/PHI), later leaking through model inversion. NIST RMF calls for privacy‑enhanced design and provenance from the outset. When personally identifiable information (PII) or protected health information (PHI) unintentionally enters the training dataset—often through partner feeds, logs, or web-scraped sources—it creates a latent risk. If the model memorizes these sensitive records, adversaries can exploit model inversion attacks to reconstruct or infer private details from outputs or embeddings. [nvlpubs.nist.gov] Mitigations & Integrations Classify & label sensitive fields with Microsoft Purview Use Purview’s automated scanning and classification to detect PII, PHI, financial data, and other regulated fields across your data estate. Apply sensitivity labels and tags to enforce consistent governance policies. [microsoft.com] Enable lineage across Microsoft Fabric/Synapse/SQL Implement Data Loss Prevention (DLP) rules to block unauthorized movement of sensitive data and prevent accidental leaks. Combine this with role-based access control (RBAC) and attribute-based access control (ABAC) to restrict who can view, modify, or export sensitive datasets. Integrate with SOC and DevSecOps Pipelines Feed Purview alerts and lineage events into your SIEM/XDR workflows for real-time monitoring. Automate policy enforcement in CI/CD pipelines to ensure models only train on approved, sanitized datasets. Continuous Compliance Monitoring Schedule recurring scans and leverage Purview’s compliance dashboards to validate adherence to regulatory frameworks like GDPR, HIPAA, and NIST RMF. Maintain dataset hashes and signatures; store lineage metadata and approvals before a dataset can enter training (Purview + Fabric). [azure.microsoft.com] For externally sourced data, sandbox ingestion and run poisoning heuristics; if using Data Privacy (DP)‑training, document tradeoffs (utility vs. robustness). [aclanthology.org], [dp-ml.github.io] 3.2 Data Preparation & Feature Engineering Attack Scenarios Feature backdoors: crafted tokens in a free‑text field activate hidden behaviors only under specific conditions. MITRE ATLAS lists techniques that target features/preprocessing. [atlas.mitre.org] Mitigations & Integrations Version every transformation; capture end‑to‑end lineage (Purview) and enforce code review on feature pipelines. Apply train/validation set integrity checks; for Large Language Model with Retrieval-Augmented Generation (LLM RAG), inspect embeddings and vector stores for outliers before indexing. 3.3 Model Training & Fine‑Tuning - Attack Scenarios Training environment compromise leading to model tampering or exfiltration. Attackers may gain access to the training infrastructure (e.g., cloud VMs, on-prem GPU clusters, or CI/CD pipelines) and inject malicious code or alter training data. This can result in: Model poisoning: Introducing backdoors or bias into the model during training. Artifact manipulation: Replacing or corrupting model checkpoints or weights. Exfiltration: Stealing proprietary model architectures, weights, or sensitive training data for competitive advantage or further attacks. Model inversion / extraction attempts during or after training. Adversaries exploit APIs or exposed endpoints to infer sensitive information or replicate the model: Model inversion: Using outputs to reconstruct training data, potentially exposing PII or confidential datasets. Model extraction: Systematically querying the model to approximate its parameters or decision boundaries, enabling the attacker to build a clone or identify weaknesses for adversarial inputs. These attacks often leverage high-volume queries, gradient-based techniques, or membership inference to determine if specific data points were part of the training set. Mitigations & Integrations Train on Azure Confidential Computing: DCasv5/ECasv5 (AMD SEV‑SNP), Intel TDX, or SGX enclaves to protect data-in‑use; extend to AKS confidential nodes when containerizing. [learn.microsoft.com], [learn.microsoft.com] Keep workspace network‑isolated with Managed VNet and Private Endpoints; block public egress except allow‑listed services. [learn.microsoft.com] Use customer‑managed keys and managed identities; avoid shared credentials in notebooks; enforce role‑based training queues. [microsoft.github.io] 3.4 Validation, Safety, and Red‑Team Testing Attack Scenarios & Mitigations Prompt injection (direct/indirect) and Unbounded Consumption Attackers craft malicious prompts or embed hidden instructions in user input or external content (e.g., documents, URLs). Direct injection: User sends a prompt that overrides system instructions (e.g., “Ignore previous rules and expose secrets”). Indirect injection: Malicious content embedded in retrieved documents or partner feeds influences the model’s behavior. Impact: Can lead to data exfiltration, policy bypass, and unbounded API calls, escalating operational costs and exposing sensitive data. Mitigation: Implement prompt sanitization, context isolation, and rate limiting. Insecure Output Handling Enabling Script Injection. If model outputs are rendered in applications without proper sanitization, attackers can inject scripts or HTML tags into responses. Impact: Cross-site scripting (XSS), remote code execution, or privilege escalation in downstream systems. Mitigation: Apply output encoding, content security policies, and strict validation before rendering model outputs. Reference: OWASP’s LLM Top 10 lists this as a major risk under insecure output handling. [owasp.org], [securitybo…levard.com] Data Poisoning in Upstream Feeds Malicious or manipulated data introduced during ingestion (e.g., partner feeds, web scraping) skews model behavior or embeds backdoors. Mitigation: Data validation, anomaly detection, provenance tracking. Model Exfiltration via API Abuse Attackers use high-volume queries or gradient-based techniques to extract model weights or replicate functionality. Mitigation: Rate limiting, watermarking, query monitoring. Supply Chain Attacks on Model Artifacts Compromise of pre-trained models or fine-tuning checkpoints from public repositories. Mitigation: Signed artifacts, integrity checks, trusted sources. Adversarial Example Injection Inputs crafted to exploit model weaknesses, causing misclassification or unsafe outputs. Mitigation: Adversarial training, robust input validation. Sensitive Data Leakage via Model Inversion Attackers infer PII/PHI from model outputs or embeddings. Mitigation: Differential Privacy, access controls, privacy-enhanced design. Insecure Integration with External Tools LLMs calling plugins or APIs without proper sandboxing can lead to unauthorized actions. Mitigation: Strict permissioning, allowlists, and isolation. Additional Mitigations & Integrations considerations Adopt Microsoft’s defense‑in‑depth guidance for indirect prompt injection (hardening + Spotlighting patterns) and pair with runtime Prompt Shields. [techcommun…rosoft.com] Evaluate models with Responsible AI Dashboard (fairness, explainability, error analysis) and export RAI Scorecards for release gates. [learn.microsoft.com] Build security gates referencing MITRE ATLAS techniques and OWASP GenAI controls into your MLOps pipeline. [atlas.mitre.org], [owasp.org] 3.5 Registry, Signing & Supply Chain Integrity - Attack Scenarios Model supply chain risk: backdoored pre‑trained weights Attackers compromise publicly available or third-party pre-trained models by embedding hidden behaviors (e.g., triggers that activate under specific inputs). Impact: Silent backdoors can cause targeted misclassification or data leakage during inference. Mitigation: Use trusted registries and verified sources for model downloads. Perform model scanning for anomalies and backdoor detection before deployment. [raykhira.com] Dependency Confusion Malicious actors publish packages with the same name as internal dependencies to public repositories. If build pipelines pull these packages, attackers gain code execution. Impact: Compromised training or deployment environments, leading to model tampering or data exfiltration. Mitigation: Enforce private package registries and pin versions. Validate dependencies against allowlists. Unsigned Artifacts Swapped in the Registry If model artifacts (weights, configs, containers) are not cryptographically signed, attackers can replace them with malicious versions. Impact: Deployment of compromised models or containers without detection. Mitigation: Implement artifact signing and integrity verification (e.g., SHA256 checksums). Require signature validation in CI/CD pipelines before promotion to production. Registry Compromise Attackers gain access to the model registry and alter metadata or inject malicious artifacts. Mitigation: RBAC, MFA, audit logging, and registry isolation. Tampered Build Pipeline CI/CD pipeline compromised to inject malicious code during model packaging or containerization. Mitigation: Secure build environments, signed commits, and pipeline integrity checks. Poisoned Container Images Malicious base images used for model deployment introduce vulnerabilities or malware. Mitigation: Use trusted container registries, scan images for CVEs, and enforce image signing. Shadow Artifacts Attackers upload artifacts with similar names or versions to confuse operators and bypass validation. Mitigation: Strict naming conventions, artifact fingerprinting, and automated validation. Additional Mitigations & Integrations considerations Store models in Azure ML Registry with version pinning; sign artifacts and publish SBOM/AI‑BOM metadata for downstream verifiers. [microsoft.github.io], [github.com], [codesecure.com] Maintain verifiable lineage and attestations (policy says: no signature, no deploy). Emerging work on attestable pipelines reinforces this approach. [arxiv.org] 3.6 Secure Deployment & Runtime Protection - Attack Scenarios Adversarial inputs and prompt injections targeting your inference APIs or agents Attackers craft malicious queries or embed hidden instructions in user input or retrieved content to manipulate model behavior. Impact: Policy bypass, sensitive data leakage, or execution of unintended actions via connected tools. Mitigation: Prompt sanitization and isolation (strip unsafe instructions). Context segmentation for multi-turn conversations. Rate limiting and anomaly detection on inference endpoints. Jailbreaks that bypass safety filters Attackers exploit weaknesses in safety guardrails by chaining prompts or using obfuscation techniques to override restrictions. Impact: Generation of harmful, disallowed, or confidential content; reputational and compliance risks. Mitigation: Layered safety filters (input + output). Continuous red-teaming and adversarial testing. Dynamic policy enforcement based on risk scoring. API abuse and model extraction. High-volume or structured queries designed to infer model parameters or replicate its functionality. Impact: Intellectual property theft, exposure of proprietary model logic, and enabling downstream attacks. Mitigation: Rate limiting and throttling. Watermarking responses to detect stolen outputs. Query pattern monitoring for extraction attempts. [atlas.mitre.org] Insecure Integration with External Tools or Plugins LLM agents calling APIs without sandboxing can trigger unauthorized actions. Mitigation: Strict allowlists, permission gating, and isolated execution environments. Model Output Injection into Downstream Systems Unsanitized outputs rendered in apps or dashboards can lead to XSS or command injection. Mitigation: Output encoding, validation, and secure rendering practices. Runtime Environment Compromise Attackers exploit container or VM vulnerabilities hosting inference services. Mitigation: Harden runtime environments, apply OS-level security patches, and enforce network isolation. Side-Channel Attacks Observing timing, resource usage, or error messages to infer sensitive details about the model or data. Mitigation: Noise injection, uniform response timing, and error sanitization. Unbounded Consumption Leading to Cost Escalation Attackers flood inference endpoints with requests, driving up compute costs. Mitigation: Quotas, usage monitoring, and auto-scaling with cost controls. Additional Mitigations & Integrations considerations Deploy Managed Online Endpoints behind Private Link; enforce mTLS, rate limits, and token‑based auth; restrict egress in managed VNet. [learn.microsoft.com] Turn on Microsoft Defender for Cloud – AI threat protection to detect jailbreaks, data leakage, prompt hacking, and poisoning attempts; incidents flow into Defender XDR. [learn.microsoft.com] For Azure OpenAI / Direct Models, enterprise data is tenant‑isolated and not used to train foundation models; configure Abuse Monitoring and Risks & Safety dashboards, with clear data‑handling stance. [learn.microsoft.com], [learn.microsoft.com], [learn.microsoft.com] 3.7 Post‑Deployment Monitoring & Response - Attack Scenarios Data/Prediction Drift silently degrades performance Over time, input data distributions change (e.g., new slang, market shifts), causing the model to make less accurate predictions without obvious alerts. Impact: Reduced accuracy, operational risk, and potential compliance violations if decisions become unreliable. Mitigation: Continuous drift detection using statistical tests (KL divergence, PSI). Scheduled model retraining and validation pipelines. Alerting thresholds for performance degradation. Fairness Drift Shifts Outcomes Across Cohorts Model performance or decision bias changes for specific demographic or business segments due to evolving data or retraining. Impact: Regulatory risk (GDPR, EEOC), reputational damage, and ethical concerns. Mitigation: Implement bias monitoring dashboards. Apply fairness metrics (equal opportunity, demographic parity) in post-deployment checks. Trigger remediation workflows when drift exceeds thresholds. Emergent Jailbreak Patterns evolve over time Attackers discover new prompt injection or jailbreak techniques that bypass safety filters after deployment. Impact: Generation of harmful or disallowed content, policy violations, and security breaches. Mitigation: Behavioral anomaly detection on prompts and outputs. Continuous red-teaming and adversarial testing. Dynamic policy updates integrated into inference pipelines. Shadow Model Deployment Unauthorized or outdated models running in production environments without governance. Mitigation: Registry enforcement, signed artifacts, and deployment audits. Silent Backdoor Activation Backdoors introduced during training activate under rare conditions post-deployment. Mitigation: Runtime scanning for anomalous triggers and adversarial input detection. Telemetry Tampering Attackers manipulate monitoring logs or metrics to hide drift or anomalies. Mitigation: Immutable logging, cryptographic integrity checks, and SIEM integration. Cost Abuse via Automated Bots Bots continuously hit inference endpoints, driving up operational costs unnoticed. Mitigation: Rate limiting, usage analytics, and anomaly-based throttling. Model Extraction Over Time Slow, distributed queries across months to replicate model behavior without triggering rate limits. Mitigation: Long-term query pattern analysis and watermarking. Additional Mitigations & Integrations considerations Enable Azure ML Model Monitoring for data drift, prediction drift, data quality, and custom signals; route alerts to Event Grid to auto‑trigger retraining and change control. [learn.microsoft.com], [learn.microsoft.com] Correlate runtime AI threat alerts (Defender for Cloud) with broader incidents in Defender XDR for a complete kill‑chain view. [learn.microsoft.com] Real‑World Scenarios & Playbooks Scenario A — “Clean” Model, Poisoned Validation Symptom: Model looks great in CI, fails catastrophically on a subset in production. Likely cause: Attacker tainted validation data so unsafe behavior was never detected. ATLAS documents validation‑stage attacks. [atlas.mitre.org] Playbook: Require dual‑source validation sets with hashes in Purview lineage; incorporate RAI dashboard probes for subgroup performance; block release if variance exceeds policy. [microsoft.com], [learn.microsoft.com] Scenario B — Indirect Prompt Injection in Retrieval-Augmented Generation (RAG) Symptom: The assistant “quotes” an external PDF that quietly exfiltrates secrets via instructions in hidden text. Playbook: Apply Microsoft Spotlighting patterns (delimiting/datamarking/encoding) and Prompt Shields; enable Defender for Cloud AI alerts and remediate via Defender XDR. [techcommun…rosoft.com], [learn.microsoft.com] Scenario C — Model Extraction via API Abuse Symptom: Spiky usage, long prompts, and systematic probing. Playbook: Enforce rate/shape limits; throttle token windows; monitor with Defender for Cloud and block high‑risk consumers; for OpenAI endpoints, validate Abuse Monitoring telemetry and adjust content filters. [learn.microsoft.com], [learn.microsoft.com] Product‑by‑Product Implementation Guide (Quick Start) Data Governance & Provenance Microsoft Purview Data Governance GA: unify cataloging, lineage, and policy; integrate with Fabric; use embedded Copilot to accelerate stewardship. [microsoft.com], [azure.microsoft.com] Secure Training Azure ML with Managed VNet + Private Endpoints; use Confidential VMs (DCasv5/ECasv5) or SGX/TDX where enclave isolation is required; extend to AKS confidential nodes for containerized training. [learn.microsoft.com], [learn.microsoft.com] Responsible AI Responsible AI Dashboard & Scorecards for fairness/interpretability/error analysis—use as release artifacts at change control. [learn.microsoft.com] Runtime Safety & Threat Detection Azure AI Content Safety (Prompt Shields, groundedness, protected material detection) + Defender for Cloud AI Threat Protection (alerts for leakage/poisoning/jailbreak/credential theft) integrated to Defender XDR. [ai.azure.com], [learn.microsoft.com] Enterprise‑grade LLM Access Azure OpenAI / Direct Models: data isolation, residency (Data Zones), and clear privacy commitments for commercial & public sector customers. [learn.microsoft.com], [azure.microsoft.com], [blogs.microsoft.com] Monitoring & Continuous Improvement Azure ML Model Monitoring (drift/quality) + Event Grid triggers for auto‑retrain; instrument with Application Insights for latency/reliability. [learn.microsoft.com] Policy & Governance: Map → Measure → Manage (NIST AI RMF) Align your controls to NIST’s four functions: Govern: Define AI security policies: dataset admission, cryptographic signing, registry controls, and red‑team requirements. [nvlpubs.nist.gov] Map: Inventory models, data, and dependencies (Purview catalog + SBOM/AIBOM). [microsoft.com], [github.com] Measure: RAI metrics (fairness, explainability), drift thresholds, and runtime attack rates (Defender/Content Safety). [learn.microsoft.com], [learn.microsoft.com] Manage: Automate mitigations: block unsigned artifacts, quarantine suspect datasets, rotate keys, and retrain on alerts. [nist.gov] What “Good” Looks Like: A 90‑Day Hardening Plan Days 0–30: Establish Foundations Turn on Purview scans across Fabric/SQL/Storage; define sensitivity labels + DLP. [microsoft.com] Lock Azure ML workspaces into Managed VNet, Private Endpoints, and Managed Identity. [learn.microsoft.com], [microsoft.github.io] Move training to Confidential VMs for sensitive projects. [learn.microsoft.com] Days 31–60: Shift‑Left & Gate Releases Integrate RAI Dashboard/Scorecards into CI; add ATLAS + OWASP LLM checks to release gates. [learn.microsoft.com], [atlas.mitre.org], [owasp.org] Require SBOM/AIBOM and artifact signing for models. [codesecure.com], [github.com] Days 61–90: Runtime Defense & Observability Enable Defender for Cloud – AI Threat Protection and Azure AI Content Safety; wire alerts to Defender XDR. [learn.microsoft.com], [ai.azure.com] Roll out Model Monitoring (drift/quality) with auto‑retrain triggers via Event Grid. [learn.microsoft.com] FAQ: Common Leadership Questions Q: Do differential privacy and adversarial training “solve” poisoning? A: They reduce risk envelopes but do not eliminate attacks—plan for layered defenses and continuous validation. [arxiv.org], [dp-ml.github.io] Q: How do we prevent indirect prompt injection in agentic apps? A: Combine Spotlighting patterns, Prompt Shields, least‑privilege tool access, explicit consent for sensitive actions, and Defender for Cloud runtime alerts. [techcommun…rosoft.com], [learn.microsoft.com] Q: Can we use Azure OpenAI without contributing our data to model training? A: Yes—Azure Direct Models keep your prompts/completions private, not used to train foundation models without your permission; with Data Zones, you can align residency. [learn.microsoft.com], [azure.microsoft.com] Closing As your organization scales AI, the pipeline is the perimeter. Treat every stage—from data capture to model deployment—as a control point with verifiable lineage, signed artifacts, network isolation, runtime detection, and continuous risk measurement. But securing the pipeline is only part of the story—what about the models themselves? In our next post, we’ll dive into hardening AI models against adversarial attacks, exploring techniques to detect, mitigate, and build resilience against threats that target the very core of your AI systems. Key Takeaway Securing AI requires protecting the entire pipeline—from data collection to deployment and monitoring—not just individual models. Zero Trust for AI: Embed security controls at every stage (data governance, isolated training, signed artifacts, runtime threat detection) for integrity and compliance. Main threats and mitigations by stage: Data Collection: Risks include poisoning and PII leakage; mitigate with data classification, lineage tracking, and DLP. Data Preparation: Watch for feature backdoors and tampering; use versioning, code review, and integrity checks. Model Training: Risks are environment compromise and model theft; mitigate with confidential computing, network isolation, and managed identities. Validation & Red Teaming: Prompt injection and unbounded consumption are key risks; address with prompt sanitization, output encoding, and adversarial testing. Supply Chain & Registry: Backdoored models and dependency confusion; use trusted registries, artifact signing, and strict pipeline controls. Deployment & Runtime: Adversarial inputs and API abuse; mitigate with rate limiting, context segmentation, and Defender for Cloud AI threat protection. Monitoring: Watch for data/prediction drift and cost abuse; enable continuous monitoring, drift detection, and automated retraining. References NIST AI RMF (Core + Generative AI Profile) – governance lens for pipeline risks. [nist.gov], [nist.gov] MITRE ATLAS – adversary tactics & techniques against AI systems. [atlas.mitre.org] OWASP Top 10 for LLM Applications / GenAI Project – practical guidance for LLM‑specific risks. [owasp.org] Azure Confidential Computing – protect data‑in‑use with SEV‑SNP/TDX/SGX and confidential GPUs. [learn.microsoft.com] Microsoft Purview Data Governance – GA feature set for unified data governance & lineage. [microsoft.com] Defender for Cloud – AI Threat Protection – runtime detections and XDR integration. [learn.microsoft.com] Responsible AI Dashboard / Scorecards – fairness & explainability in Azure ML. [learn.microsoft.com] Azure AI Content Safety – Prompt Shields, groundedness detection, protected material checks. [ai.azure.com] Azure ML Model Monitoring – drift/quality monitoring & automated retraining flows. [learn.microsoft.com] #AIPipelineSecurity; #AITrustAndSafety; #SecureAI; #AIModelSecurity; #AIThreatModeling; #SupplyChainSecurity; #DataSecurity1.8KViews0likes0Comments