cloud security
1426 TopicsSecurity as the core primitive - Securing AI agents and apps
This week at Microsoft Ignite, we shared our vision for Microsoft security -- In the agentic era, security must be ambient and autonomous, like the AI it protects. It must be woven into and around everything we build—from silicon to OS, to agents, apps, data, platforms, and clouds—and throughout everything we do. In this blog, we are going to dive deeper into many of the new innovations we are introducing this week to secure AI agents and apps. As I spend time with our customers and partners, there are four consistent themes that have emerged as core security challenges to secure AI workloads. These are: preventing agent sprawl and access to resources, protecting against data oversharing and data leaks, defending against new AI threats and vulnerabilities, and adhering to evolving regulations. Addressing these challenges holistically requires a coordinated effort across IT, developers, and security leaders, not just within security teams and to enable this, we are introducing several new innovations: Microsoft Agent 365 for IT, Foundry Control Plane in Microsoft Foundry for developers, and the Security Dashboard for AI for security leaders. In addition, we are releasing several new purpose-built capabilities to protect and govern AI apps and agents across Microsoft Defender, Microsoft Entra, and Microsoft Purview. Observability at every layer of the stack To facilitate the organization-wide effort that it takes to secure and govern AI agents and apps – IT, developers, and security leaders need observability (security, management, and monitoring) at every level. IT teams need to enable the development and deployment of any agent in their environment. To ensure the responsible and secure deployment of agents into an organization, IT needs a unified agent registry, the ability to assign an identity to every agent, manage the agent’s access to data and resources, and manage the agent’s entire lifecycle. In addition, IT needs to be able to assign access to common productivity and collaboration tools, such as email and file storage, and be able to observe their entire agent estate for risks such as over-permissioned agents. Development teams need to build and test agents, apply security and compliance controls by default, and ensure AI models are evaluated for safety guardrails and security vulnerabilities. Post deployment, development teams must observe agents to ensure they are staying on task, accessing applications and data sources appropriately, and operating within their cost and performance expectations. Security & compliance teams must ensure overall security of their AI estate, including their AI infrastructure, platforms, data, apps, and agents. They need comprehensive visibility into all their security risks- including agent sprawl and resource access, data oversharing and leaks, AI threats and vulnerabilities, and complying with global regulations. They want to address these risks by extending their existing security investments that they are already invested in and familiar with, rather than using siloed or bolt-on tools. These teams can be most effective in delivering trustworthy AI to their organizations if security is natively integrated into the tools and platforms that they use every day, and if those tools and platforms share consistent security primitives such as agent identities from Entra; data security and compliance controls from Purview; and security posture, detections, and protections from Defender. With the new capabilities being released today, we are delivering observability at every layer of the AI stack, meeting IT, developers, and security teams where they are in the tools they already use to innovate with confidence. For IT Teams - Introducing Microsoft Agent 365, the control plane for agents, now in preview The best infrastructure for managing your agents is the one you already use to manage your users. With Agent 365, organizations can extend familiar tools and policies to confidently deploy and secure agents, without reinventing the wheel. By using the same trusted Microsoft 365 infrastructure, productivity apps, and protections, organizations can now apply consistent and familiar governance and security controls that are purpose-built to protect against agent-specific threats and risks. gement and governance of agents across organizations Microsoft Agent 365 delivers a unified agent Registry, Access Control, Visualization, Interoperability, and Security capabilities for your organization. These capabilities work together to help organizations manage agents and drive business value. The Registry powered by the Entra provides a complete and unified inventory of all the agents deployed and used in your organization including both Microsoft and third-party agents. Access Control allows you to limit the access privileges of your agents to only the resources that they need and protect their access to resources in real time. Visualization gives organizations the ability to see what matters most and gain insights through a unified dashboard, advanced analytics, and role-based reporting. Interop allows agents to access organizational data through Work IQ for added context, and to integrate with Microsoft 365 apps such as Outlook, Word, and Excel so they can create and collaborate alongside users. Security enables the proactive detection of vulnerabilities and misconfigurations, protects against common attacks such as prompt injections, prevents agents from processing or leaking sensitive data, and gives organizations the ability to audit agent interactions, assess compliance readiness and policy violations, and recommend controls for evolving regulatory requirements. Microsoft Agent 365 also includes the Agent 365 SDK, part of Microsoft Agent Framework, which empowers developers and ISVs to build agents on their own AI stack. The SDK enables agents to automatically inherit Microsoft's security and governance protections, such as identity controls, data security policies, and compliance capabilities, without the need for custom integration. For more details on Agent 365, read the blog here. For Developers - Introducing Microsoft Foundry Control Plane to observe, secure and manage agents, now in preview Developers are moving fast to bring agents into production, but operating them at scale introduces new challenges and responsibilities. Agents can access tools, take actions, and make decisions in real time, which means development teams must ensure that every agent behaves safely, securely, and consistently. Today, developers need to work across multiple disparate tools to get a holistic picture of the cybersecurity and safety risks that their agents may have. Once they understand the risk, they then need a unified and simplified way to monitor and manage their entire agent fleet and apply controls and guardrails as needed. Microsoft Foundry provides a unified platform for developers to build, evaluate and deploy AI apps and agents in a responsible way. Today we are excited to announce that Foundry Control Plane is available in preview. This enables developers to observe, secure, and manage their agent fleets with built-in security, and centralized governance controls. With this unified approach, developers can now identify risks and correlate disparate signals across their models, agents, and tools; enforce consistent policies and quality gates; and continuously monitor task adherence and runtime risks. Foundry Control Plane is deeply integrated with Microsoft’s security portfolio to provide a ‘secure by design’ foundation for developers. With Microsoft Entra, developers can ensure an agent identity (Agent ID) and access controls are built into every agent, mitigating the risk of unmanaged agents and over permissioned resources. With Microsoft Defender built in, developers gain contextualized alerts and posture recommendations for agents directly within the Foundry Control Plane. This integration proactively prevents configuration and access risks, while also defending agents from runtime threats in real time. Microsoft Purview’s native integration into Foundry Control Plane makes it easy to enable data security and compliance for every Foundry-built application or agent. This allows Purview to discover data security and compliance risks and apply policies to prevent user prompts and AI responses from safety and policy violations. In addition, agent interactions can be logged and searched for compliance and legal audits. This integration of the shared security capabilities, including identity and access, data security and compliance, and threat protection and posture ensures that security is not an afterthought; it’s embedded at every stage of the agent lifecycle, enabling you to start secure and stay secure. For more details, read the blog. For Security Teams - Introducing Security Dashboard for AI - unified risk visibility for CISOs and AI risk leaders, coming soon AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions across their entire AI estate, spanning AI platforms, apps, and agents. 90% of security professionals, including CISOs, report that their responsibilities have expanded to include data governance and AI oversight within the past year. 1 At the same time, 86% of risk managers say disconnected data and systems lead to duplicated efforts and gaps in risk coverage. 2 To address these needs, we are excited to introduce the Security Dashboard for AI. This serves as a unified dashboard that aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview. This unified dashboard allows CISOs and AI risk leaders to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. For example, you can see your full AI inventory and get visibility into a quarantined agent, flagged for high data risk due to oversharing sensitive information in Purview. The dashboard then correlates that signal with identity insights from Entra and threat protection alerts from Defender to provide a complete picture of exposure. From there, you can delegate tasks to the appropriate teams to enforce policies and remediate issues quickly. With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, there’s nothing new to buy. If you’re already using Microsoft security products to secure AI, you’re already a Security Dashboard for AI customer. Figure 5: Security Dashboard for AI provides CISOs and AI risk leaders with a unified view of their AI risk by bringing together their AI inventory, AI risk, and security recommendations to strengthen overall posture Together, these innovations deliver observability and security across IT, development, and security teams, powered by Microsoft’s shared security capabilities. With Microsoft Agent 365, IT teams can manage and secure agents alongside users. Foundry Control Plane gives developers unified governance and lifecycle controls for agent fleets. Security Dashboard for AI provides CISOs and AI risk leaders with a consolidated view of AI risks across platforms, apps, and agents. Added innovation to secure and govern your AI workloads In addition to the IT, developer, and security leader-focused innovations outlined above, we continue to accelerate our pace of innovation in Microsoft Entra, Microsoft Purview, and Microsoft Defender to address the most pressing needs for securing and governing your AI workloads. These needs are: Manage agent sprawl and resource access e.g. managing agent identity, access to resources, and permissions lifecycle at scale Prevent data oversharing and leaks e.g. protecting sensitive information shared in prompts, responses, and agent interactions Defend against shadow AI, new threats, and vulnerabilities e.g. managing unsanctioned applications, preventing prompt injection attacks, and detecting AI supply chain vulnerabilities Enable AI governance for regulatory compliance e.g. ensuring AI development, operations, and usage comply with evolving global regulations and frameworks Manage agent sprawl and resource access 76% of business leaders expect employees to manage agents within the next 2–3 years. 3 Widespread adoption of agents is driving the need for visibility and control, which includes the need for a unified registry, agent identities, lifecycle governance, and secure access to resources. Today, Microsoft Entra provides robust identity protection and secure access for applications and users. However, organizations lack a unified way to manage, govern, and protect agents in the same way they manage their users. Organizations need a purpose-built identity and access framework for agents. Introducing Microsoft Entra Agent ID, now in preview Microsoft Entra Agent ID offers enterprise-grade capabilities that enable organizations to prevent agent sprawl and protect agent identities and their access to resources. These new purpose-built capabilities enable organizations to: Register and manage agents: Get a complete inventory of the agent fleet and ensure all new agents are created with an identity built-in and are automatically protected by organization policies to accelerate adoption. Govern agent identities and lifecycle: Keep the agent fleet under control with lifecycle management and IT-defined guardrails for both agents and people who create and manage them. Protect agent access to resources: Reduce risk of breaches, block risky agents, and prevent agent access to malicious resources with conditional access and traffic inspection. Agents built in Microsoft Copilot Studio, Microsoft Foundry, and Security Copilot get an Entra Agent ID built-in at creation. Developers can also adopt Entra Agent ID for agents they build through Microsoft Agent Framework, Microsoft Agent 365 SDK, or Microsoft Entra Agent ID SDK. Read the Microsoft Entra blog to learn more. Prevent data oversharing and leaks Data security is more complex than ever. Information Security Media Group (ISMG) reports that 80% of leaders cite leakage of sensitive data as their top concern. 4 In addition to data security and compliance risks of generative AI (GenAI) apps, agents introduces new data risks such as unsupervised data access, highlighting the need to protect all types of corporate data, whether it is accessed by employees or agents. To mitigate these risks, we are introducing new Microsoft Purview data security and compliance capabilities for Microsoft 365 Copilot and for agents and AI apps built with Copilot Studio and Microsoft Foundry, providing unified protection, visibility, and control for users, AI Apps, and Agents. New Microsoft Purview controls safeguard Microsoft 365 Copilot with real-time protection and bulk remediation of oversharing risks Microsoft Purview and Microsoft 365 Copilot deliver a fully integrated solution for protecting sensitive data in AI workflows. Based on ongoing customer feedback, we’re introducing new capabilities to deliver real-time protection for sensitive data in M365 Copilot and accelerated remediation of oversharing risks: Data risk assessments: Previously, admins could monitor oversharing risks such as SharePoint sites with unprotected sensitive data. Now, they can perform item-level investigations and bulk remediation for overshared files in SharePoint and OneDrive to quickly reduce oversharing exposure. Data Loss Prevention (DLP) for M365 Copilot: DLP previously excluded files with sensitivity labels from Copilot processing. Now in preview, DLP also prevents prompts that include sensitive data from being processed in M365 Copilot, Copilot Chat, and Copilot agents, and prevents Copilot from using sensitive data in prompts for web grounding. Priority cleanup for M365 Copilot assets: Many organizations have org-wide policies to retain or delete data. Priority cleanup, now generally available, lets admins delete assets that are frequently processed by Copilot, such as meeting transcripts and recordings, on an independent schedule from the org-wide policies while maintaining regulatory compliance. On-demand classification for meeting transcripts: Purview can now detect sensitive information in meeting transcripts on-demand. This enables data security admins to apply DLP policies and enforce Priority cleanup based on the sensitive information detected. & bulk remediation Read the full Data Security blog to learn more. Introducing new Microsoft Purview data security capabilities for agents and apps built with Copilot Studio and Microsoft Foundry, now in preview Microsoft Purview now extends the same data security and compliance for users and Copilots to agents and apps. These new capabilities are: Enhanced Data Security Posture Management: A centralized DSPM dashboard that provides observability, risk assessment, and guided remediation across users, AI apps, and agents. Insider Risk Management (IRM) for Agents: Uniquely designed for agents, using dedicated behavioral analytics, Purview dynamically assigns risk levels to agents based on their risky handing of sensitive data and enables admins to apply conditional policies based on that risk level. Sensitive data protection with Azure AI Search: Azure AI Search enables fast, AI-driven retrieval across large document collections, essential for building AI Apps. When apps or agents use Azure AI Search to index or retrieve data, Purview sensitivity labels are preserved in the search index, ensuring that any sensitive information remains protected under the organization’s data security & compliance policies. For more information on preventing data oversharing and data leaks - Learn how Purview protects and governs agents in the Data Security and Compliance for Agents blog. Defend against shadow AI, new threats, and vulnerabilities AI workloads are subject to new AI-specific threats like prompt injections attacks, model poisoning, and data exfiltration of AI generated content. Although security admins and SOC analysts have similar tasks when securing agents, the attack methods and surfaces differ significantly. To help customers defend against these novel attacks, we are introducing new capabilities in Microsoft Defender that deliver end-to-end protection, from security posture management to runtime defense. Introducing Security Posture Management for agents, now in preview As organizations adopt AI agents to automate critical workflows, they become high-value targets and potential points of compromise, creating a critical need to ensure agents are hardened, compliant, and resilient by preventing misconfigurations and safeguarding against adversarial manipulation. Security Posture Management for agents in Microsoft Defender now provides an agent inventory for security teams across Microsoft Foundry and Copilot Studio agents. Here, analysts can assess the overall security posture of an agent, easily implement security recommendations, and identify vulnerabilities such as misconfigurations and excessive permissions, all aligned to the MITRE ATT&CK framework. Additionally, the new agent attack path analysis visualizes how an agent’s weak security posture can create broader organizational risk, so you can quickly limit exposure and prevent lateral movement. Introducing Threat Protection for agents, now in preview Attack techniques and attack surfaces for agents are fundamentally different from other assets in your environment. That’s why Defender is delivering purpose-built protections and detections to help defend against them. Defender is introducing runtime protection for Copilot Studio agents that automatically block prompt injection attacks in real time. In addition, we are announcing agent-specific threat detections for Copilot Studio and Microsoft Foundry agents coming soon. Defender automatically correlates these alerts with Microsoft’s industry-leading threat intelligence and cross-domain security signals to deliver richer, contextualized alerts and security incident views for the SOC analyst. Defender’s risk and threat signals are natively integrated into the new Microsoft Foundry Control Plane, giving development teams full observability and the ability to act directly from within their familiar environment. Finally, security analysts will be able to hunt across all agent telemetry in the Advanced Hunting experience in Defender, and the new Agent 365 SDK extends Defender’s visibility and hunting capabilities to third-party agents, starting with Genspark and Kasisto, giving security teams even more coverage across their AI landscape. To learn more about how you can harden the security posture of your agents and defend against threats, read the Microsoft Defender blog. Enable AI governance for regulatory compliance Global AI regulations like the EU AI Act and NIST AI RMF are evolving rapidly; yet, according to ISMG, 55% of leaders report lacking clarity on current and future AI regulatory requirements. 5 As enterprises adopt AI, they must ensure that their AI innovation aligns with global regulations and standards to avoid costly compliance gaps. Introducing new Microsoft Purview Compliance Manager capabilities to stay ahead of evolving AI regulations, now in preview Today, Purview Compliance Manager provides over 300 pre-built assessments for common industry, regional, and global standards and regulations. However, the pace of change for new AI regulations requires controls to be continuously re-evaluated and updated so that organizations can adapt to ongoing changes in regulations and stay compliant. To address this need, Compliance Manager now includes AI-powered regulatory templates. AI-powered regulatory templates enable real-time ingestion and analysis of global regulatory documents, allowing compliance teams to quickly adapt to changes as they happen. As regulations evolve, the updated regulatory documents can be uploaded to Compliance Manager, and the new requirements are automatically mapped to applicable recommended actions to implement controls across Microsoft Defender, Microsoft Entra, Microsoft Purview, Microsoft 365, and Microsoft Foundry. Automated actions by Compliance Manager further streamline governance, reduce manual workload, and strengthen regulatory accountability. Introducing expanded Microsoft Purview compliance capabilities for agents and AI apps now in preview Microsoft Purview now extends its compliance capabilities across agent-generated interactions, ensuring responsible use and regulatory alignment as AI becomes deeply embedded across business processes. New capabilities include expanded coverage for: Audit: Surface agent interactions, lifecycle events, and data usage with Purview Audit. Unified audit logs across user and agent activities, paired with traceability for every agent using an Entra Agent ID, support investigation, anomaly detection, and regulatory reporting. Communication Compliance: Detect prompts sent to agents and agent-generated responses containing inappropriate, unethical, or risky language, including attempts to manipulate agents into bypassing policies, generating risky content, or producing noncompliant outputs. When issues arise, data security admins get full context, including the prompt, the agent’s output, and relevant metadata, so they can investigate and take corrective action Data Lifecycle Management: Apply retention and deletion policies to agent-generated content and communication flows to automate lifecycle controls and reduce regulatory risk. Read about Microsoft Purview data security for agents to learn more. Finally, we are extending our data security, threat protection, and identity access capabilities to third-party apps and agents via the network. Advancing Microsoft Entra Internet Access Secure Web + AI Gateway - extend runtime protections to the network, now in preview Microsoft Entra Internet Access, part of the Microsoft Entra Suite, has new capabilities to secure access to and usage of GenAI at the network level, marking a transition from Secure Web Gateway to Secure Web and AI Gateway. Enterprises can accelerate GenAI adoption while maintaining compliance and reducing risk, empowering employees to experiment with new AI tools safely. The new capabilities include: Prompt injection protection which blocks malicious prompts in real time by extending Azure AI Prompt Shields to the network layer. Network file filtering which extends Microsoft Purview to inspect files in transit and prevents regulated or confidential data from being uploaded to unsanctioned AI services. Shadow AI Detection that provides visibility into unsanctioned AI applications through Cloud Application Analytics and Defender for Cloud Apps risk scoring, empowering security teams to monitor usage trends, apply Conditional Access, or block high-risk apps instantly. Unsanctioned MCP server blocking prevents access to MCP servers from unauthorized agents. With these controls, you can accelerate GenAI adoption while maintaining compliance and reducing risk, so employees can experiment with new AI tools safely. Read the Microsoft Entra blog to learn more. As AI transforms the enterprise, security must evolve to meet new challenges—spanning agent sprawl, data protection, emerging threats, and regulatory compliance. Our approach is to empower IT, developers, and security leaders with purpose-built innovations like Agent 365, Foundry Control Plane, and the Security Dashboard for AI. These solutions bring observability, governance, and protection to every layer of the AI stack, leveraging familiar tools and integrated controls across Microsoft Defender, Microsoft Entra, and Microsoft Purview. The future of security is ambient, autonomous, and deeply woven into the fabric of how we build, deploy, and govern AI systems. Explore additional resources Learn more about Security for AI solutions on our webpage Learn more about Microsoft Agent 365 Learn more about Microsoft Entra Agent ID Get started with Microsoft 365 Copilot Get started with Microsoft Copilot Studio Get started with Microsoft Foundry Get started with Microsoft Defender for Cloud Get started with Microsoft Entra Get started with Microsoft Purview Get started with Microsoft Purview Compliance Manager Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial 1 Bedrock Security, 2025 Data Security Confidence Index, published Mar 17, 2025. 2 AuditBoard & Ascend2, Connected Risk Report 2024; as cited by MIT Sloan Management Review, Spring 2025. 3 KPMG AI Quarterly Pulse Survey | Q3 2025. September 2025. n= 130 U.S.-based C-suite and business leaders representing organizations with annual revenue of $1 billion or more 4 First Annual Generative AI study: Business Rewards vs. Security Risks, , Q3 2023, ISMG, N=400 5 First Annual Generative AI study: Business Rewards vs. Security Risks, Q3 2023, ISMG, N=400From Manual Vetting to Continuous Trust: Automating Publisher Screening with AI
Publisher screening is a software supply-chain reality: if a publisher account is compromised, a single update can reach thousands of machines—and recovery is costly. Microsoft Trust & Security Services applies AI to automate screening at onboarding and keep reassessing publishers as new signals appear. Multiple “checker” agents evaluate identity, reputation, and post-approval behavior, then combine evidence into a consistent risk score and an approve/deny/escalate decision, with an evidence-backed explanation that supports auditability and appeals while reducing operational toil.Welcome to the Microsoft Security Community!
Microsoft Security Community Hub | Protect it all with Microsoft Security Eliminate gaps and get the simplified, comprehensive protection, expertise, and AI-powered solutions you need to innovate and grow in a changing world. The Microsoft Security Community is your gateway to connect, learn, and collaborate with peers, experts, and product teams. Gain access to technical discussions, webinars, and help shape Microsoft’s security products. Get there fast To stay up to date on upcoming opportunities and the latest Microsoft Security Community news, make sure to subscribe to our email list. Find the latest skilling content and on-demand videos – subscribe to the Microsoft Security Community YouTube channel. Catch the latest announcements and connect with us on LinkedIn – Microsoft Security Community and Microsoft Entra Community. Read the latest in the the Microsoft Security Community blog. Upcoming Community Calls March 2026 Mar. 31 | 8:00am | Microsoft Entra | Developer Tools for Agent ID: SDKs, CLIs & Samples Accelerate agent identity projects with Microsoft Entra’s developer toolchain. Explore SDKs, sample repos, and utilities for token acquisition, consent flows, and downstream API calls. Learn techniques for debugging local environments, validating authentication flows, and automating checks in CI/CD pipelines. Share ready-to-run samples, resources, and guidance for filing new tooling requests—helping you build faster and smarter. April 2026 Apr. 2 | 8:00am | Security Copilot Skilling Series | Current capabilities of Copilot in Intune This session on Copilot in Intune & Agents explores the current embedded Copilot experiences and AI‑powered agents available through Security Copilot in Microsoft Intune. Attendees will learn how these capabilities streamline administrative workflows, reduce manual effort, and accelerate everyday endpoint management tasks, helping organizations modernize how they operate and manage devices at scale. Apr. 7 | 9:00am | Microsoft Intune | Re‑Envisioned: The New Single Device Experience in the Intune Admin Console We’ve updated the single device page in the Intune admin center to make it easier to track device activity, access tools and reports, and manage device information in a more consistent and intuitive layout. The new full-page layout gives a single view for monitoring signals, supporting focus in dedicated views for tools and reports. Join us for an overview of these changes, now available in public preview. Apr. 14 | 8:00am | Microsoft Sentinel | Using distributed content to manage your multi-tenant SecOps Content distribution is a powerful multi-tenant feature that enables scalable management of security content across tenants. With this capability, you can create content distribution profiles in the multi-tenant portal that allow you to seamlessly replicate existing content—such as custom detection rules and endpoint security policies—from a source tenant to designated target tenants. Once distributed, the content runs on the target tenant, enabling centralized control with localized execution. This allows you to onboard new tenants quickly and maintain a consistent security baseline across tenants. In this session we'll walk through how you can use this new capability to scale your security operations. RESCHEDULED Apr. 28 | 8:00am | Security Copilot Skilling Series | Security Copilot Agents, DSPM AI Observability, and IRM for Agents This session covers an overview of how Microsoft Purview supports AI risk visibility and investigation through Data Security Posture Management (DSPM) and Insider Risk Management (IRM), alongside Security Copilot–powered agents. This session will go over what is AI Observability in DSPM as well as IRM for Agents in Copilot Studio and Azure AI Foundry. Attendees will learn about the IRM Triage Agent and DSPM Posture Agent and their deployment. Attendees will gain an understanding of how DSPM and IRM capabilities could be leveraged to improve visibility, context, and response for AI-related data risks in Microsoft Purview. Apr. 30 | 8:00am | Microsoft Security Community Presents | Purview Lightning Talks Join the Microsoft Security Community for Purview Lightning Talks; quick technical sessions delivered by the community, for the community. You’ll pick up practical Purview gems: must-know Compliance Manager tips, smart data security tricks, real-world scenarios, and actionable governance recommendations all in one energizing event. Hear directly from Purview customers, partners, and community members and walk away with ideas you can put to work right immediately. Register now; full agenda coming soon! May 2026 May 12 | 9:00am | Microsoft Sentinel | Hyper scale your SOC: Manage delegated access and role-based scoping in Microsoft Defender In this session we'll discuss Unified role based access control (RBAC) and granular delegated admin privileges (GDAP) expansions including: How to use RBAC to -Allow multiple SOC teams to operate securely within a shared Sentinel environment-Support granular, row-level access without requiring workspace separation-Get consistent and reusable scope definitions across tables and experiences How to use GDAP to -Manage MSSPs and hyper-scaler organizations with delegated- access to governed tenants within the Defender portal-Manage delegated access for Sentinel. Looking for more? Join the Security Advisors! As a Security Advisor, you’ll gain early visibility into product roadmaps, participate in focus groups, and access private preview features before public release. You’ll have a direct channel to share feedback with engineering teams, influencing the direction of Microsoft Security products. The program also offers opportunities to collaborate and network with fellow end users and Microsoft product teams. Join the Security Advisors program that best fits your interests: www.aka.ms/joincommunity. Additional resources Microsoft Security Hub on Tech Community Virtual Ninja Training Courses Microsoft Security Documentation Azure Network Security GitHub Microsoft Defender for Cloud GitHub Microsoft Sentinel GitHub Microsoft Defender XDR GitHub Microsoft Defender for Cloud Apps GitHub Microsoft Defender for Identity GitHub Microsoft Purview GitHub40KViews7likes13CommentsDefending the AI Era: New Microsoft Capabilities to Protect AI
As enterprises rapidly adopt AI to drive productivity, automate decisions, and power intelligent agents, a new attack surface is emerging—one that traditional security controls were never designed to protect. AI models, training pipelines, plugins, and autonomous agents now sit directly in the path of sensitive data, business logic, and critical workflows. Organizations must protect the AI supply chain from model development and deployment to runtime behavior, tool access, and downstream actions. At the same time, AI agents operating with broad privileges require runtime monitoring to ensure every tool invocation and action is safe. By combining proactive model scanning across the AI lifecycle with runtime enforcement that monitors and blocks risky agent behavior, security teams gain the visibility and control needed to prevent data exfiltration, misuse of automation, and silent manipulation of outcomes at machine speed. Microsoft Defender helps organizations protect AI investments end-to-end by proactively identifying risks, detecting AI-specific attacks, and enabling investigation and response efforts. New innovations in Defender continue to build upon this value with new threat protection and visibility capabilities for agents through Agent 365 and AI model scanning. Protect AI agents in Agent 365 from emerging threats As AI agents become embedded in core business workflows, they introduce a new class of operational risk that traditional security controls were never designed to manage. AI agents don’t just process data—they take actions, invoke tools, and make decisions, often with broad access to sensitive systems and information. Without continuous visibility and protection of agent activity at runtime, organizations risk silent data exfiltration, misuse of automation, and manipulated outcomes that can directly impact business integrity, compliance, and trust. Real-time protection integrates Microsoft Defender directly into Agent 365’s tools gateway (ATG) to evaluate every agent tool invocation before it executes. The new capabilities provide critical runtime scrutiny to catch unsafe or manipulated actions that traditional build-time checks cannot. It focuses on high confidence threats such as attempts to extract system instructions, access or leak sensitive data, misuse internal only tools, or route information to untrusted destinations If an action is determined to be risky, Defender blocks it immediately, before tool invocation, preventing any data access or leak, and harmful action. When there is a block of a risky action, a comprehensive, SOC-ready alert is generated that explains what was stopped, why it was considered risky, and which agent, user, and tool were involved. Identify risks across the AI model lifecycle When we talk about securing AI, we need to start with the model itself. AI models go through a lifecycle from data sourcing and training, through packaging and deployment, all the way to production. At each stage, there are security risks that traditional application security doesn't address. Understanding where those risks live is the first step toward building the right controls. Before any training begins, teams are pulling in pretrained models from registries like Hugging Face, consuming third-party datasets, and importing ML frameworks into their pipelines. A compromised pretrained model can carry embedded malware or backdoors that activate only under specific conditions. If models are consumed from external sources without scanning them, they are trusting unknown actors with access to our environment. AI model scanning in Microsoft Defender now provides scanning for models stored in Azure ML registries and workspaces covering malware, unsafe operators, and backdoors across common model formats. For security teams, recurring scanning results in security recommendations tied to the specific model resource enable quick remediation. Additionally, high-confidence malware detections now generate Defender alerts that flow directly into SOC workflows via Defender XDR. For developers, a new CLI integration enables in-pipeline on-demand scanning of model artifacts during the build process identifies risks down the single line of code. Additionally, gating capabilities in CI/CD pipelines help prevent unsafe models from ever reaching a registry. If a model hasn't been scanned, it shouldn't be pushed. Visibility across the lifecycle ties it all together. The AI model lifecycle requires controls at every stage: supply chain integrity verification, artifact validation during development, automated scanning before deployment, runtime threat detection in production, and discovery and cleanup at end of life. The organizations that treat this as a continuous discipline not a one-time checkpoint are the ones building the foundation to scale AI securely.New innovations in Microsoft Defender to strengthen multi-cloud, containers, and AI model security
Cloud security today is no longer just about misconfigurations; it’s about keeping pace with cloud-native change, prioritizing risk before it becomes an incident, and securing AI as a new supply chain for applications. In modern environments, infrastructure and applications are rebuilt and redeployed constantly through CI/CD, containers, and managed services, which means the security posture can quickly change. That speed increases the chance that small gaps—overly permissive identities, risky configuration drift, or unvetted AI models—turn into real attack paths unless teams have continuous visibility and guardrails that prevent regression. At the same time, security professionals need more than long lists of findings; they need risk context that connects issues to likelihood of exploitation and business impact so they can fix what matters first. And as organizations embed generative AI, the model itself becomes an artifact that must be governed like any other dependency—acquired, stored, scanned, validated, and monitored—because a tampered or unsafe model can introduce backdoors, leak sensitive data, or produce manipulated outputs at scale. In short, cloud security now spans across posture, runtime, and supply chain—for both cloud resources and the AI-powered applications. Today, we are closing that gap with multi-layered security: expanding our multi-cloud visibility to new AWS and GCP services, enabling near real-time container runtime protection to eliminate binary drift, and introducing AI model scanning. By embedding security directly into the execution layer of both containers and AI, Microsoft Defender for Cloud ensures that as your organization scales, your defense adapts automatically. Strengthen security posture through broader coverage, visibility, and prioritized real risk Microsoft Defender continues to expand how customers see and secure their multi-cloud environments by adding broader coverage and deeper visibility across Amazon Web Services (AWS) and Google Cloud Platform (GCP). With support across compute, databases, storage, analytics, AI and machine learning, identity, networking, and DevOps, customers can now discover and inventory a much wider set of cloud assets through a single, unified experience. This expanded agentless coverage automatically delivers security recommendations and compliance insights for newly discovered resources, enabling continuous risk assessment and faster remediation of misconfigurations. Coverage for these additional AWS and GCP resources will be available in public preview in March. As visibility increases, Defender for Cloud also ensures that prioritization remains clear and actionable. Cloud Secure Score—our AI‑driven, dynamic, risk‑based scoring mechanism—evaluates each resource individually based on likelihood of exploitation and potential business impact. This gives security teams clear insight into how and why their score evolves over time, helping them focus on the most critical risks first. Cloud Secure Score will be generally available in the Defender portal and publicly available in the Azure portal by the end of April. Defender for Cloud is also extending protection to specialized workloads, including upcoming vulnerability assessment support for Azure Databricks compute clusters, which provides visibility and actionable recommendations for vulnerabilities introduced through custom libraries. Vulnerability assessment for Azure Databricks will be available in Defender CSPM by the end of April. Detect and block unauthorized changes in running containers As organizations gain clearer visibility into risk across their cloud estate, protecting workloads at runtime becomes a critical layer of defense. Containers are designed to be immutable, but in practice attackers often exploit runtime gaps by introducing unauthorized binaries or malicious executables after deployment—changes that traditional controls may not detect in time. To address this risk, we are announcing binary drift detection and prevention, along with anti-malware detection and prevention for containers. These capabilities identify when a running container deviates from its original image and automatically prevents unauthorized or malicious processes from executing. With policy-driven controls, security teams can distinguish legitimate operational activity from suspicious behavior. This allows security teams to protect the integrity of their containerized applications and reduce the window for runtime compromise. The result is stronger, proactive protection that helps organizations confidently run container workloads across modern Kubernetes environments. Binary drift detection is now generally available, and binary drift prevention and anti-malware detection and prevention in public preview. Identify risks to your AI supply chain As generative AI becomes embedded in applications—from support chatbots and copilots to automated decisioning—unsecured AI models introduce a new and often invisible risk surface. A compromised or unvetted model can leak sensitive data, execute unsafe logic, or generate manipulated outputs that undermine trust, compliance, and brand integrity. Unlike traditional software flaws, these risks can propagate at machine speed, turning a single vulnerable model into a systemic business issue. Securing AI models before they are deployed—and continuously as they evolve—is critical for organizations delivering AI‑powered experiences. We’re thrilled to share the public preview of AI model scanning in Microsoft Defender, starting April, that delivers comprehensive protection for models stored in Azure Machine Learning registries and workspaces, identifying malware, unsafe operators, and embedded backdoors across common model formats. Continuous scanning generates actionable security recommendations tied to each model resource, while high-confidence malware detections trigger Defender alerts that flow directly into SOC workflows through Defender XDR. For developers, a new CLI enables on-demand, in-pipeline scanning of model artifacts during the build process, surfacing risk down to individual files and enforcing security gates in CI/CD pipelines so that models that haven’t been scanned aren’t deployed. Visibility across the AI development cycle brings these controls together—from supply chain integrity and artifact validation to pre-deployment scanning. Organizations that treat AI security as a continuous discipline, not a onetime checkpoint, build the foundation required to scale AI securely. AI model scanning will be available in public preview starting April 1 st at no additional cost as part of Defender for AI Services plan. Licensing requirements might change when the feature becomes generally available. If that happens, the feature will be disabled, and you’ll be notified should you wish to re-enable it under the new license. Additional Resources Learn more about Microsoft Defender for Cloud, here Find cloud security recent innovations, here Defender for AI blog Attend cloud security theatre sessions on container security and AI models at RSA on March 24 th and March 25 thUnsanctioned cloud apps generates constant alerts
When I mark a cloud app as unsanctioned it created a URL based indicator to block the site. However, it also by default enables the Generate Alert option on the indictor. This causes my SOC to bet inundated with garbage alerts. Now normally if I'm just unsanctioning one Cloud App a could go and turn of the alert. However, I use cloud app policy that will identify any new Cloud Apps in an entire category and then unsanction it. But it enables Generate Alert on the URL indicator. Then if someone accesses that new one the generate alert kicks off. I don't want to have to go into every new app and untick generate alert manually that's just too time consuming. Is there a way to change the default behaviour when adding an indicator to not enable the generate alert? Of is there some other way to do this? I could consider using power automate or something but I'd rather the default behaviour be the fix as automation can break. I don't have time to babysit it.128Views0likes3CommentsSecurity Dashboard for AI - Now Generally Available
AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions across their entire AI estate, spanning AI platforms, apps, and agents. 53% of security professionals say their current AI risk management needs improvement, presenting an opportunity to better identify, assess and manage risk effectively. 1 At the same time, 86% of leaders prefer integrated platforms over fragmented tools, citing better visibility, fewer alerts and improved efficiency. 2 To address these needs, we are excited to announce the Security Dashboard for AI, previously announced at Microsoft Ignite, is now generally available. This unified dashboard aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview - enabling users to see left-to-right across purpose-built security tools from within a single pane of glass. The dashboard equips CISOs and AI risk leaders with a governance tool to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. Security teams can continue using the tools they trust while empowering security leaders to govern and collaborate effectively. Gain Unified AI Risk Visibility Consolidating risk signals from across purpose-built tools can simplify AI asset visibility and oversight, increase security teams’ efficiency, and reduce the opportunity for human error. The Security Dashboard for AI provides leaders with unified AI risk visibility by aggregating security, identity, and data risk across Defender, Entra, Purview into a single interactive dashboard experience. The Overview tab of the dashboard provides users with an AI risk scorecard, providing immediate visibility to where there may be risks for security teams to address. It also assesses an organization's implementation of Microsoft security for AI capabilities and provides recommendations for improving AI security posture. The dashboard also features an AI inventory with comprehensive views to support AI assets discovery, risk assessments, and remediation actions for broad coverage of AI agents, models, MCP servers, and applications. The dashboard provides coverage for all Microsoft AI solutions supported by Entra, Defender and Purview—including Microsoft 365 Copilot, Microsoft Copilot Studio agents, and Microsoft Foundry applications and agents—as well as third-party AI models, applications, and agents, such as Google Gemini, OpenAI ChatGPT, and MCP servers. This supports comprehensive visibility and control, regardless of where applications and agents are built. Prioritize Critical Risk with Security Copilots AI-Powered Insights Risk leaders must do more than just recognize existing risks—they also need to determine which ones pose the greatest threat to their business. The dashboard provides a consolidated view of AI-related security risks and leverages Security Copilot’s AI-powered insights to help find the most critical risks within an environment. For example, Security Copilot natural language interaction improves agent discovery and categorization, helping leaders identify unmanaged and shadow AI agents to enhance security posture. Furthermore, Security Copilot allows leaders to investigate AI risks and agent activities through prompt-based exploration, putting them in the driver’s seat for additional risk investigation. Drive Risk Mitigation By streamlining risk mitigation recommendations and automated task delegation, organizations can significantly improve the efficiency of their AI risk management processes. This approach can reduce the potential hidden AI risk and accelerate compliance efforts, helping to ensure that risk mitigation is timely and accurate. To address this, the Security Dashboard for AI evaluates how organizations put Microsoft’s AI security features into practice and offers tailored suggestions to strengthen AI security posture. It leverages Microsoft’s productivity tools for immediate action within the practitioner portal, making it easy for administrators to delegate recommendation tasks to designated users. With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, the Security Dashboard for AI is included with eligible Microsoft security products customers already use. If an organization is already using Microsoft security products to secure AI, they are already a Security Dashboard for AI customer. Getting Started Existing Microsoft Security customers can start using Security Dashboard for AI today. It is included when a customer has the Microsoft Security products—Defender, Entra and Purview—with no additional licensing required. To begin using the Security Dashboard for AI, visit http://ai.security.microsoft.com or access the dashboard from the Defender, Entra or Purview portals. Learn more about the Security Dashboard for AI at Microsoft Security MS Learn. 1AuditBoard & Ascend2 Research. The Connected Risk Report: Uniting Teams and Insights to Drive Organizational Resilience. AuditBoard, October 2024. 2Microsoft. 2026 Data Security Index: Unifying Data Protection and AI Innovation. Microsoft Security, 2026The Changing Role of Low-Fidelity (LoFi) Signals in the AI Era
Introduction Low-fidelity signals—heuristics that are cheap to compute but often ambiguous—have traditionally been viewed as a necessary annoyance in security operations. In high-volume pipelines, even a modest false-positive rate can translate into operational disruption: unnecessary blocks, costly recoveries, customer frustration, and analyst burnout from constant triage. In the supply chain scanning service, operated by the Trust and Security Services group in Microsoft, LoFi signals include URL and certificate reputation, obfuscation and packer detection, multiple YARA rule families, high-impact API usage (for example, TerminateProcess), and vulnerability detections. Any one of these may be noisy—or may correctly flag perfectly legitimate behavior. The key shift in the AI era is to stop treating LoFi hits as verdicts and start using them as decision points: triggers for deeper, contextual analysis. Two case studies: LoFi signals as routing, not verdicts Case study 1: URL reputation + LLMs—turning noisy signals into zero-day detections Our supply-chain scanning pipeline processes billions of files each day across public package registries. About 150 million files are routed through a URL reputation stage that extracts embedded URLs and evaluates them using threat intelligence plus heuristic rules. At this scale, small error rates become unmanageable: “a little noisy” turns into tens of thousands of daily alerts. Before: Signal overload Heuristic-only URL reputation produced roughly 40,000 blocking detections per day. Although many were genuine threats, the volume made it difficult to distinguish confirmed malware from false positives with confidence. Multiple heuristic layers provided partial signals, but none reliably produced a high-confidence verdict. As a result, analysts spent substantial time triaging files and tuning detection logic, weighing stricter blocking against the risk of disrupting legitimate packages and missing true malware. After: LLM-assisted signal refinement Adding LLM-based contextual analysis on top of URL reputation changed the signal-to-noise ratio. Instead of judging a URL in isolation, the model evaluates how it is used in surrounding code—an install script versus a documentation link, an obfuscated payload download versus a legitimate API call. Outcome: ~2,000× reduction in alerts—down to about 20 high-confidence blocking detections per day—saving substantial analyst time. More importantly, the remaining alerts skew toward true zero-days that other engines in the pipeline were missing. Case study 2: Windows Device Driver scanning pipelines—scaling LoFi signals into actionable detections Beyond supply-chain package scanning, LoFi-driven routing patterns also show up in third-party device driver scanning used for the Windows certification program and post publishing rescan workflows. The pipeline operates at high volume under strict performance and reliability constraints, making “scan everything deeply” unrealistic. The device driver pipeline receives about 70,000 submissions per month (January 2026 reference). From these submissions, roughly 1 million individual files are extracted and scanned. At this scale, even moderately noisy heuristics become unmanageable if treated as high-confidence detections. Before: high-volume, low-confidence heuristics Several LoFi heuristic detectors (primarily YARA rule-based) run in audit (aka telemetry-only) mode in the driver pipeline, including: Presence of network routing/manipulation (for example, network filter drivers): ~19,000 files/month Use of a process-termination API by a driver: ~5,000 files/month Obfuscated or packed driver: ~500 files/month These detectors are fast and inexpensive, but inherently imprecise. Many flagged files reflect legitimate driver behavior (packing, process termination, filtering logic), so turning every hit into enforcement would create an unacceptable volume of false positives. Without refinement, LoFi hits function best as indicators of potential risk—not actionable verdicts. After: selective escalation and targeted analysis Instead of treating every LoFi hit equally, the pipeline escalates only the top 4% of results for deeper inspection. Those samples get additional correlation and malware analyst review, which enables the creation of concrete, high-confidence signatures that can be safely enforced at scale. With this targeted escalation model: An average of ~5 new blocking detections are added per month Each detection typically identifies 10–100 malicious files Confirmed malware is blocked without broadly impacting legitimate driver submissions This approach preserves throughput while focusing scarce expert time on the most suspicious artifacts. In other words, LoFi signals stop being “detections” and become efficient filters that route the right samples into high-cost analysis—where you can then generate durable, high-confidence blocking rules. Key takeaways LoFi is a routing layer. In AI era pipelines, the goal is not to make every cheap heuristic perfectly precise—it is to use it to decide where to spend expensive compute and analyst time. Context beats indicators. LLMs can turn ambiguous URL signals into high-confidence decisions by reasoning about usage and intent, not just matching patterns. Escalate a small fraction, learn continuously. Selecting the top few percent for deeper analysis keeps throughput high and creates a feedback loop that produces enforceable signatures. Measure success by outcomes. The win is reduced alert volume and improved catch quality (for example, zero-days and durable blocking rules) rather than “more detections.” Conclusion As threat actors move faster and zero-days become more common, security systems have to make better decisions under tighter latency and cost constraints. The answer is not to replace LoFi signals with AI everywhere; it is to combine them. Cheap heuristics can cover the full surface area, while AI (and human expertise) is reserved for the small subset of events that truly deserve deeper reasoning. Both case studies illustrate the same pattern. In supply-chain scanning, LLMs transformed a 40,000-per-day alert stream into ~20 high-confidence blocks—surfacing zero-days that were previously lost in the noise. In device driver scanning, selective escalation of the top LoFi hits converts “interesting but unenforceable” heuristics into a steady stream of high-confidence blocking signatures. In practice, the most scalable security posture is a tiered one: LoFi for breadth, AI for context, and analysts for the hardest calls.Understanding and mitigating security risks in MCP implementations
Introducing any new technology can introduce new security challenges or exacerbate existing security risks. In this blog post, we’re going to look at some of the security risks that could be introduced to your environment when using Model Context Protocol (MCP), and what controls you can put in place to mitigate them. MCP is a framework that enables seamless integration between LLM applications and various tools and data sources. MCP defines: A standardized way for AI models to request external actions through a consistent API Structured formats for how data should be passed to and from AI systems Protocols for how AI requests are processed, executed, and returned MCP allows different AI systems to use a common set of tools and patterns, ensuring consistent behavior when AI models interact with external systems. MCP architecture MCP follows a client-server architecture that allows AI models to interact with external tools efficiently. Here’s how it works: MCP Host – The AI model (e.g., Azure OpenAI GPT) requesting data or actions. MCP Client – An intermediary service that forwards the AI model's requests to MCP servers. MCP Server – Lightweight applications that expose specific capabilities (APIs, databases, files, etc.). Data Sources – Various backend systems, including local storage, cloud databases, and external APIs. MCP security controls Any system which has access to important resources has implied security challenges. Security challenges can generally be addressed through correct application of fundamental security controls and concepts. As MCP is only newly defined, the specification is changing very rapidly and as the protocol evolves. Eventually the security controls within it will mature, enabling a better integration with enterprise and established security architectures and best practices. Research published in the Microsoft Digital Defense Report states that 98% of reported breaches would be prevented by robust security hygiene and the best protection against any kind of breach is to get your baseline security hygiene, secure coding best practices and supply chain security right – those tried and tested practices that we already know about still make the most impact in reducing security risk. Let's look at some of the ways that you can start to address security risks when adopting MCP. MCP server authentication (if your MCP implementation was before 26th April 2025) Problem statement: The original MCP specification assumed that developers would write their own authentication server. This requires knowledge of OAuth and related security constraints. MCP servers acted as OAuth 2.0 Authorization Servers, managing the required user authentication directly rather than delegating it to an external service such as Microsoft Entra ID. As of 26 April 2025, an update to the MCP specification allows for MCP servers to delegate user authentication to an external service. Risks: Misconfigured authorization logic in the MCP server can lead to sensitive data exposure and incorrectly applied access controls. OAuth token theft on the local MCP server. If stolen, the token can then be used to impersonate the MCP server and access resources and data from the service that the OAuth token is for. Mitigating controls: Thoroughly review your MCP server authorization logic, here some posts discussing this in more detail - Azure API Management Your Auth Gateway For MCP Servers | Microsoft Community Hub and Using Microsoft Entra ID To Authenticate With MCP Servers Via Sessions · Den Delimarsky Implement best practices for token validation and lifetime Use secure token storage and encrypt tokens Excessive permissions for MCP servers Problem statement: MCP servers may have been granted excessive permissions to the service/resource they are accessing. For example, an MCP server that is part of an AI sales application connecting to an enterprise data store should have access scoped to the sales data and not allowed to access all the files in the store. Referencing back to the principle of least privilege (one of the oldest security principles), no resource should have permissions in excess of what is required for it to execute the tasks it was intended for. AI presents an increased challenge in this space because to enable it to be flexible, it can be challenging to define the exact permissions required. Risks: Granting excessive permissions can allow for exfiltration or amending data that the MCP server was not intended to be able to access. This could also be a privacy issue if the data is personally identifiable information (PII). Mitigating controls: Clearly define the permissions that the MCP server has to access the resource/service it connects to. These permissions should be the minimum required for the MCP server to access the tool or data it is connecting to. Indirect prompt injection attacks Problem statement: Researchers have shown that the Model Context Protocol (MCP) is vulnerable to a subset of Indirect Prompt Injection attacks known as Tool Poisoning Attacks. Tool poisoning is a scenario where an attacker embeds malicious instructions within the descriptions of MCP tools. These instructions are invisible to users but can be interpreted by the AI model and its underlying systems, leading to unintended actions that could ultimately lead to harmful outcomes. Risks: Unintended AI actions present a variety of security risks that include data exfiltration and privacy breaches. Mitigating controls: Implement AI prompt shields: in Azure AI Foundry, you can follow these steps to implement AI prompt shields. Implement robust supply chain security: you can read more about how Microsoft implements supply chain security internally here. Established security best practices that will uplift your MCP implementation’s security posture Any MCP implementation inherits the existing security posture of your organization's environment that it is built upon, so when considering the security of MCP as a component of your overall AI systems it is recommended that you look at uplifting your overall existing security posture. The following established security controls are especially pertinent: Secure coding best practices in your AI application - protect against the OWASP Top 10, the OWASP Top 10 for LLMs, use of secure vaults for secrets and tokens, implementing end-to-end secure communications between all application components, etc. Server hardening – use MFA where possible, keep patching up to date, integrate the server with a third party identity provider for access, etc. Keep devices, infrastructure and applications up to date with patches Security monitoring – implementing logging and monitoring of an AI application (including the MCP client/servers) and sending those logs to a central SIEM for detection of anomalous activities Zero trust architecture – isolating components via network and identity controls in a logical manner to minimize lateral movement if an AI application were compromised. Conclusion MCP is a promising development in the AI space that enables rich data and context access. As developers embrace this new approach to integrating their organization's APIs and connectors into LLMs, they need to be aware of security risks and how to implement controls to reduce those risks. There are mitigating security controls that can be put in place to reduce the risks inherent in the current specification, but as the protocol develops expect that some of the risks will reduce or disappear entirely. We encourage you to contribute to and suggest security related MCP RFCs to make this protocol even better! With thanks to OrinThomas, dasithwijes, dendeli and Peter Marcu for their inputs and collaboration on this post.Accelerate Your Security Copilot Readiness with Our Global Technical Workshop Series
The Security Copilot team is delivering virtual hands-on technical workshops designed for technical practitioners who want to deepen their AI for Security expertise with Microsoft Entra, Intune, Microsoft Purview, and Microsoft Threat Protection. These workshops will help you onboard and configure Security Copilot and deepen your knowledge on agents. These free workshops are delivered year-round and available in multiple time zones. What You’ll Learn Our workshop series combines scenario-based instruction, live demos, hands-on exercises, and expert Q&A to help you operationalize Security Copilot across your security stack. These sessions are all moderated by experts from Microsoft’s engineering teams and are aligned with the latest Security Copilot capabilities. Every session delivers 100% technical content, designed to accelerate real-world Security Copilot adoption. Who Should Attend These workshops are ideal for: Security Architects & Engineers SOC Analysts Identity & Access Management Engineers Endpoint & Device Admins Compliance & Risk Practitioners Partner Technical Consultants Customer technical teams adopting AI powered defense Register now for these upcoming Security Copilot Virtual Workshops Start building Security Copilot skills—choose the product area and time zone that works best for you. Please take note of pre-requisites for each workshop in the registration page Security Copilot Virtual Workshop: Copilot in Defender North America time zone April 1, 2026 at 8:00-9:30 Am (PST) - register here April 29, 2026 at 8:00-9:30 AM (PST) - register here May 27, 2026 at 8:00-9:30 AM (PST) - register here June 24, 2026 at 8:00-9:30 Am (PST) - register here Asia Pacific time zone April 2, 2026 - register here April 30, 2026 - register here May 27, 2026 - register here June 24, 2026 - register here Security Copilot Virtual Workshop: Copilot in Entra North America time zone March 25, 2026 at 8:00 - 9:30 AM (PST) - register here April 22, 2026 at 8:00-9:30 AM (PST) - register here May 20, 2026 at 8:00-9:30 AM (PST) - register here June 17, 2026 at 8:00-9:30 AM (PST) - register here Asia Pacific time zone March 26, 2026 - register here April 23, 2026 - register here May 21, 2026 - register here Security Copilot Virtual Workshop: Copilot in Intune North America time zone March 11, 2026 at 8:00-9:30 AM (PST) - register here April 8, 2026 at 8:00-9:30 AM (PST) - register here May 6, 2026 at 8:00-9:30 AM (PST) - register here Asia Pacific time zone March 12, 2026 - register here April 9, 2026 - register here May 7, 2026 - register here Security Copilot Virtual Workshop: Copilot in Purview North America time zone March 19, 2026 8:00 - 9:30 AM (PST) - register here April 15, 2026 at 8:00-9:30 AM (PST) - register here May 13, 2026 at 8:00-9:30 AM (PST) - register here June 10, 2026 at 8:00-9:30 AM (PST) - register here Asia Pacific time zone March 19, 2026 - register here April 16, 2026 - register here May 14, 2026 - register here June 11, 2026 - register here Learn and Engage with the Microsoft Security Community Log in and follow this Microsoft Security Community Blog and post/ interact in the Microsoft Security Community discussion spaces. Follow = Click the heart in the upper right when you're logged in 🤍 Join the Microsoft Security Community and be notified of upcoming events, product feedback surveys, and more. Get early access to Microsoft Security products and provide feedback to engineers by joining the Microsoft Security Advisors.. Learn about the Microsoft MVP Program. Join the Microsoft Security Community LinkedIn and the Microsoft Entra Community LinkedIn