Security as the core primitive - Securing AI agents and apps
This week at Microsoft Ignite, we shared our vision for Microsoft security: in the agentic era, security must be ambient and autonomous, like the AI it protects. It must be woven into and around everything we build—from silicon to OS, to agents, apps, data, platforms, and clouds—and throughout everything we do. In this blog, we dive deeper into many of the new innovations we are introducing this week to secure AI agents and apps.

As I spend time with our customers and partners, four consistent themes have emerged as the core challenges of securing AI workloads: preventing agent sprawl and uncontrolled access to resources, protecting against data oversharing and data leaks, defending against new AI threats and vulnerabilities, and adhering to evolving regulations. Addressing these challenges holistically requires a coordinated effort across IT, developers, and security leaders, not just within security teams. To enable this, we are introducing several new innovations: Microsoft Agent 365 for IT, Foundry Control Plane in Microsoft Foundry for developers, and the Security Dashboard for AI for security leaders. In addition, we are releasing several new purpose-built capabilities to protect and govern AI apps and agents across Microsoft Defender, Microsoft Entra, and Microsoft Purview.

Observability at every layer of the stack

To facilitate the organization-wide effort it takes to secure and govern AI agents and apps, IT, developers, and security leaders need observability (security, management, and monitoring) at every level.

IT teams need to enable the development and deployment of any agent in their environment. To ensure the responsible and secure deployment of agents into an organization, IT needs a unified agent registry, the ability to assign an identity to every agent, and the means to manage each agent's access to data and resources and its entire lifecycle. In addition, IT needs to be able to assign access to common productivity and collaboration tools, such as email and file storage, and to observe the entire agent estate for risks such as over-permissioned agents.

Development teams need to build and test agents, apply security and compliance controls by default, and ensure AI models are evaluated for safety guardrails and security vulnerabilities. Post deployment, development teams must observe agents to ensure they are staying on task, accessing applications and data sources appropriately, and operating within their cost and performance expectations.

Security and compliance teams must ensure the overall security of their AI estate, including their AI infrastructure, platforms, data, apps, and agents. They need comprehensive visibility into all their security risks, including agent sprawl and resource access, data oversharing and leaks, AI threats and vulnerabilities, and compliance with global regulations. They want to address these risks by extending the security investments they already know and use, rather than adopting siloed or bolt-on tools. These teams can be most effective in delivering trustworthy AI to their organizations if security is natively integrated into the tools and platforms they use every day, and if those tools and platforms share consistent security primitives such as agent identities from Entra; data security and compliance controls from Purview; and security posture, detections, and protections from Defender.
With the new capabilities being released today, we are delivering observability at every layer of the AI stack, meeting IT, developers, and security teams where they are, in the tools they already use, so they can innovate with confidence.

For IT Teams - Introducing Microsoft Agent 365, the control plane for agents, now in preview

The best infrastructure for managing your agents is the one you already use to manage your users. With Agent 365, organizations can extend familiar tools and policies to confidently deploy and secure agents, without reinventing the wheel. By using the same trusted Microsoft 365 infrastructure, productivity apps, and protections, organizations can now apply consistent and familiar governance and security controls that are purpose-built to protect against agent-specific threats and risks.

Figure: Management and governance of agents across organizations

Microsoft Agent 365 delivers unified agent Registry, Access Control, Visualization, Interoperability, and Security capabilities for your organization. These capabilities work together to help organizations manage agents and drive business value.

- The Registry, powered by Entra, provides a complete and unified inventory of all the agents deployed and used in your organization, including both Microsoft and third-party agents.
- Access Control allows you to limit the access privileges of your agents to only the resources they need and protect their access to resources in real time.
- Visualization gives organizations the ability to see what matters most and gain insights through a unified dashboard, advanced analytics, and role-based reporting.
- Interoperability allows agents to access organizational data through Work IQ for added context, and to integrate with Microsoft 365 apps such as Outlook, Word, and Excel so they can create and collaborate alongside users.
- Security enables the proactive detection of vulnerabilities and misconfigurations, protects against common attacks such as prompt injections, prevents agents from processing or leaking sensitive data, and gives organizations the ability to audit agent interactions, assess compliance readiness and policy violations, and recommend controls for evolving regulatory requirements.

Microsoft Agent 365 also includes the Agent 365 SDK, part of Microsoft Agent Framework, which empowers developers and ISVs to build agents on their own AI stack. The SDK enables agents to automatically inherit Microsoft's security and governance protections, such as identity controls, data security policies, and compliance capabilities, without the need for custom integration. For more details on Agent 365, read the blog here.

For Developers - Introducing Microsoft Foundry Control Plane to observe, secure, and manage agents, now in preview

Developers are moving fast to bring agents into production, but operating them at scale introduces new challenges and responsibilities. Agents can access tools, take actions, and make decisions in real time, which means development teams must ensure that every agent behaves safely, securely, and consistently. Today, developers need to work across multiple disparate tools to get a holistic picture of the cybersecurity and safety risks their agents may carry. Once they understand the risk, they need a unified and simplified way to monitor and manage their entire agent fleet and apply controls and guardrails as needed. Microsoft Foundry provides a unified platform for developers to build, evaluate, and deploy AI apps and agents in a responsible way.
Today we are excited to announce that Foundry Control Plane is available in preview. It enables developers to observe, secure, and manage their agent fleets with built-in security and centralized governance controls. With this unified approach, developers can now identify risks and correlate disparate signals across their models, agents, and tools; enforce consistent policies and quality gates; and continuously monitor task adherence and runtime risks.

Foundry Control Plane is deeply integrated with Microsoft's security portfolio to provide a secure-by-design foundation for developers. With Microsoft Entra, developers can ensure an agent identity (Agent ID) and access controls are built into every agent, mitigating the risk of unmanaged agents and over-permissioned resources. With Microsoft Defender built in, developers gain contextualized alerts and posture recommendations for agents directly within the Foundry Control Plane. This integration proactively prevents configuration and access risks, while also defending agents from runtime threats in real time. Microsoft Purview's native integration into Foundry Control Plane makes it easy to enable data security and compliance for every Foundry-built application or agent. Purview can discover data security and compliance risks and apply policies that block user prompts and AI responses that would violate safety or compliance requirements. In addition, agent interactions can be logged and searched for compliance and legal audits. This integration of shared security capabilities, including identity and access, data security and compliance, and threat protection and posture, ensures that security is not an afterthought; it is embedded at every stage of the agent lifecycle, enabling you to start secure and stay secure. For more details, read the blog.

For Security Teams - Introducing Security Dashboard for AI - unified risk visibility for CISOs and AI risk leaders, coming soon

AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions, across their entire AI estate, spanning AI platforms, apps, and agents. 90% of security professionals, including CISOs, report that their responsibilities have expanded to include data governance and AI oversight within the past year.1 At the same time, 86% of risk managers say disconnected data and systems lead to duplicated efforts and gaps in risk coverage.2

To address these needs, we are excited to introduce the Security Dashboard for AI: a unified dashboard that aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview. It allows CISOs and AI risk leaders to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. For example, you can see your full AI inventory and get visibility into a quarantined agent flagged for high data risk due to oversharing sensitive information in Purview. The dashboard then correlates that signal with identity insights from Entra and threat protection alerts from Defender to provide a complete picture of exposure. From there, you can delegate tasks to the appropriate teams to enforce policies and remediate issues quickly.
With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, there's nothing new to buy. If you're already using Microsoft security products to secure AI, you're already a Security Dashboard for AI customer.

Figure 5: Security Dashboard for AI provides CISOs and AI risk leaders with a unified view of their AI risk by bringing together their AI inventory, AI risk, and security recommendations to strengthen overall posture

Together, these innovations deliver observability and security across IT, development, and security teams, powered by Microsoft's shared security capabilities. With Microsoft Agent 365, IT teams can manage and secure agents alongside users. Foundry Control Plane gives developers unified governance and lifecycle controls for agent fleets. Security Dashboard for AI provides CISOs and AI risk leaders with a consolidated view of AI risks across platforms, apps, and agents.

Added innovation to secure and govern your AI workloads

In addition to the IT, developer, and security leader-focused innovations outlined above, we continue to accelerate our pace of innovation in Microsoft Entra, Microsoft Purview, and Microsoft Defender to address the most pressing needs for securing and governing your AI workloads. These needs are:

- Manage agent sprawl and resource access, e.g., managing agent identity, access to resources, and permissions lifecycle at scale
- Prevent data oversharing and leaks, e.g., protecting sensitive information shared in prompts, responses, and agent interactions
- Defend against shadow AI, new threats, and vulnerabilities, e.g., managing unsanctioned applications, preventing prompt injection attacks, and detecting AI supply chain vulnerabilities
- Enable AI governance for regulatory compliance, e.g., ensuring AI development, operations, and usage comply with evolving global regulations and frameworks

Manage agent sprawl and resource access

76% of business leaders expect employees to manage agents within the next 2–3 years.3 Widespread adoption of agents is driving the need for visibility and control, which includes the need for a unified registry, agent identities, lifecycle governance, and secure access to resources. Today, Microsoft Entra provides robust identity protection and secure access for applications and users. However, organizations lack a unified way to manage, govern, and protect agents in the same way they manage their users. Organizations need a purpose-built identity and access framework for agents.

Introducing Microsoft Entra Agent ID, now in preview

Microsoft Entra Agent ID offers enterprise-grade capabilities that enable organizations to prevent agent sprawl and protect agent identities and their access to resources. These new purpose-built capabilities enable organizations to:

- Register and manage agents: Get a complete inventory of the agent fleet and ensure all new agents are created with an identity built in and are automatically protected by organization policies to accelerate adoption.
- Govern agent identities and lifecycle: Keep the agent fleet under control with lifecycle management and IT-defined guardrails for both agents and the people who create and manage them.
- Protect agent access to resources: Reduce the risk of breaches, block risky agents, and prevent agent access to malicious resources with conditional access and traffic inspection.

Agents built in Microsoft Copilot Studio, Microsoft Foundry, and Security Copilot get an Entra Agent ID built in at creation. Developers can also adopt Entra Agent ID for agents they build through Microsoft Agent Framework, the Microsoft Agent 365 SDK, or the Microsoft Entra Agent ID SDK. Read the Microsoft Entra blog to learn more.
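To make the model concrete: an agent with its own Entra identity authenticates as itself, so conditional access and audit trails apply to the agent rather than to a borrowed user account. The preview SDKs above are not shown here; purely as an illustration, and assuming the agent identity is surfaced like any other Entra workload identity with a client ID and secret, token acquisition with the Azure.Identity library might look like this (all values are hypothetical placeholders):

```csharp
using Azure.Core;
using Azure.Identity;

// Hypothetical placeholders: in practice these would come from the agent's
// Entra Agent ID registration and a secret store, never hard-coded.
const string tenantId      = "<tenant-id>";
const string agentClientId = "<agent-client-id>";
const string agentSecret   = "<agent-client-secret>";

// The agent authenticates as itself, so Entra policies (conditional access,
// lifecycle guardrails) attach to the agent identity, not to a human user.
var credential = new ClientSecretCredential(tenantId, agentClientId, agentSecret);

AccessToken token = await credential.GetTokenAsync(
    new TokenRequestContext(new[] { "https://graph.microsoft.com/.default" }));

Console.WriteLine($"Agent token acquired; expires {token.ExpiresOn:u}");
```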
Prevent data oversharing and leaks

Data security is more complex than ever. Information Security Media Group (ISMG) reports that 80% of leaders cite leakage of sensitive data as their top concern.4 In addition to the data security and compliance risks of generative AI (GenAI) apps, agents introduce new data risks such as unsupervised data access, highlighting the need to protect all types of corporate data, whether it is accessed by employees or agents. To mitigate these risks, we are introducing new Microsoft Purview data security and compliance capabilities for Microsoft 365 Copilot and for agents and AI apps built with Copilot Studio and Microsoft Foundry, providing unified protection, visibility, and control for users, AI apps, and agents.

New Microsoft Purview controls safeguard Microsoft 365 Copilot with real-time protection and bulk remediation of oversharing risks

Microsoft Purview and Microsoft 365 Copilot deliver a fully integrated solution for protecting sensitive data in AI workflows. Based on ongoing customer feedback, we're introducing new capabilities that deliver real-time protection for sensitive data in M365 Copilot and accelerated remediation of oversharing risks:

- Data risk assessments: Previously, admins could monitor oversharing risks such as SharePoint sites with unprotected sensitive data. Now, they can perform item-level investigations and bulk remediation for overshared files in SharePoint and OneDrive to quickly reduce oversharing exposure.
- Data Loss Prevention (DLP) for M365 Copilot: DLP previously excluded files with sensitivity labels from Copilot processing. Now in preview, DLP also prevents prompts that include sensitive data from being processed in M365 Copilot, Copilot Chat, and Copilot agents, and prevents Copilot from using sensitive data in prompts for web grounding.
- Priority cleanup for M365 Copilot assets: Many organizations have org-wide policies to retain or delete data. Priority cleanup, now generally available, lets admins delete assets that are frequently processed by Copilot, such as meeting transcripts and recordings, on a schedule independent of the org-wide policies while maintaining regulatory compliance.
- On-demand classification for meeting transcripts: Purview can now detect sensitive information in meeting transcripts on demand. This enables data security admins to apply DLP policies and enforce Priority cleanup based on the sensitive information detected.

Read the full Data Security blog to learn more.

Introducing new Microsoft Purview data security capabilities for agents and apps built with Copilot Studio and Microsoft Foundry, now in preview

Microsoft Purview now extends the same data security and compliance coverage for users and Copilots to agents and apps. These new capabilities are:

- Enhanced Data Security Posture Management: A centralized DSPM dashboard that provides observability, risk assessment, and guided remediation across users, AI apps, and agents.
- Insider Risk Management (IRM) for Agents: Uniquely designed for agents, using dedicated behavioral analytics. Purview dynamically assigns risk levels to agents based on their risky handling of sensitive data and enables admins to apply conditional policies based on that risk level.
- Sensitive data protection with Azure AI Search: Azure AI Search enables fast, AI-driven retrieval across large document collections, essential for building AI apps. When apps or agents use Azure AI Search to index or retrieve data, Purview sensitivity labels are preserved in the search index, ensuring that any sensitive information remains protected under the organization's data security and compliance policies.

For more information on preventing data oversharing and data leaks, learn how Purview protects and governs agents in the Data Security and Compliance for Agents blog.

Defend against shadow AI, new threats, and vulnerabilities

AI workloads are subject to new AI-specific threats like prompt injection attacks, model poisoning, and data exfiltration of AI-generated content. Security admins and SOC analysts perform familiar tasks when securing agents, but the attack methods and surfaces differ significantly from those of other assets. To help customers defend against these novel attacks, we are introducing new capabilities in Microsoft Defender that deliver end-to-end protection, from security posture management to runtime defense.

Introducing Security Posture Management for agents, now in preview

As organizations adopt AI agents to automate critical workflows, those agents become high-value targets and potential points of compromise, creating a critical need to ensure agents are hardened, compliant, and resilient by preventing misconfigurations and safeguarding against adversarial manipulation. Security Posture Management for agents in Microsoft Defender now provides an agent inventory for security teams across Microsoft Foundry and Copilot Studio agents. Here, analysts can assess the overall security posture of an agent, easily implement security recommendations, and identify vulnerabilities such as misconfigurations and excessive permissions, all aligned to the MITRE ATT&CK framework. Additionally, the new agent attack path analysis visualizes how an agent's weak security posture can create broader organizational risk, so you can quickly limit exposure and prevent lateral movement.

Introducing Threat Protection for agents, now in preview

Attack techniques and attack surfaces for agents are fundamentally different from other assets in your environment. That's why Defender is delivering purpose-built protections and detections to help defend against them. Defender is introducing runtime protection for Copilot Studio agents that automatically blocks prompt injection attacks in real time. In addition, we are announcing agent-specific threat detections for Copilot Studio and Microsoft Foundry agents, coming soon. Defender automatically correlates these alerts with Microsoft's industry-leading threat intelligence and cross-domain security signals to deliver richer, contextualized alerts and security incident views for the SOC analyst. Defender's risk and threat signals are natively integrated into the new Microsoft Foundry Control Plane, giving development teams full observability and the ability to act directly from within their familiar environment.
Finally, security analysts will be able to hunt across all agent telemetry in the Advanced Hunting experience in Defender, and the new Agent 365 SDK extends Defender's visibility and hunting capabilities to third-party agents, starting with Genspark and Kasisto, giving security teams even more coverage across their AI landscape. To learn more about how you can harden the security posture of your agents and defend against threats, read the Microsoft Defender blog.

Enable AI governance for regulatory compliance

Global AI regulations and frameworks like the EU AI Act and the NIST AI RMF are evolving rapidly; yet, according to ISMG, 55% of leaders report lacking clarity on current and future AI regulatory requirements.5 As enterprises adopt AI, they must ensure that their AI innovation aligns with global regulations and standards to avoid costly compliance gaps.

Introducing new Microsoft Purview Compliance Manager capabilities to stay ahead of evolving AI regulations, now in preview

Today, Purview Compliance Manager provides over 300 pre-built assessments for common industry, regional, and global standards and regulations. However, the pace of change for new AI regulations requires controls to be continuously re-evaluated and updated so that organizations can adapt to ongoing regulatory changes and stay compliant. To address this need, Compliance Manager now includes AI-powered regulatory templates, which enable real-time ingestion and analysis of global regulatory documents, allowing compliance teams to quickly adapt to changes as they happen. As regulations evolve, the updated regulatory documents can be uploaded to Compliance Manager, and the new requirements are automatically mapped to applicable recommended actions to implement controls across Microsoft Defender, Microsoft Entra, Microsoft Purview, Microsoft 365, and Microsoft Foundry. Automated actions by Compliance Manager further streamline governance, reduce manual workload, and strengthen regulatory accountability.

Introducing expanded Microsoft Purview compliance capabilities for agents and AI apps, now in preview

Microsoft Purview now extends its compliance capabilities across agent-generated interactions, ensuring responsible use and regulatory alignment as AI becomes deeply embedded across business processes. New capabilities include expanded coverage for:

- Audit: Surface agent interactions, lifecycle events, and data usage with Purview Audit. Unified audit logs across user and agent activities, paired with traceability for every agent using an Entra Agent ID, support investigation, anomaly detection, and regulatory reporting.
- Communication Compliance: Detect prompts sent to agents and agent-generated responses containing inappropriate, unethical, or risky language, including attempts to manipulate agents into bypassing policies, generating risky content, or producing noncompliant outputs. When issues arise, data security admins get full context, including the prompt, the agent's output, and relevant metadata, so they can investigate and take corrective action.
- Data Lifecycle Management: Apply retention and deletion policies to agent-generated content and communication flows to automate lifecycle controls and reduce regulatory risk.

Read about Microsoft Purview data security for agents to learn more.

Finally, we are extending our data security, threat protection, and identity access capabilities to third-party apps and agents via the network.
Advancing Microsoft Entra Internet Access Secure Web + AI Gateway - extending runtime protections to the network, now in preview

Microsoft Entra Internet Access, part of the Microsoft Entra Suite, has new capabilities to secure access to and usage of GenAI at the network level, marking a transition from Secure Web Gateway to Secure Web and AI Gateway. The new capabilities include:

- Prompt injection protection, which blocks malicious prompts in real time by extending Azure AI Prompt Shields to the network layer.
- Network file filtering, which extends Microsoft Purview to inspect files in transit and prevents regulated or confidential data from being uploaded to unsanctioned AI services.
- Shadow AI detection, which provides visibility into unsanctioned AI applications through Cloud Application Analytics and Defender for Cloud Apps risk scoring, empowering security teams to monitor usage trends, apply Conditional Access, or block high-risk apps instantly.
- Unsanctioned MCP server blocking, which prevents access to MCP servers from unauthorized agents.

With these controls, you can accelerate GenAI adoption while maintaining compliance and reducing risk, so employees can experiment with new AI tools safely. Read the Microsoft Entra blog to learn more.

As AI transforms the enterprise, security must evolve to meet new challenges—spanning agent sprawl, data protection, emerging threats, and regulatory compliance. Our approach is to empower IT, developers, and security leaders with purpose-built innovations like Agent 365, Foundry Control Plane, and the Security Dashboard for AI. These solutions bring observability, governance, and protection to every layer of the AI stack, leveraging familiar tools and integrated controls across Microsoft Defender, Microsoft Entra, and Microsoft Purview. The future of security is ambient, autonomous, and deeply woven into the fabric of how we build, deploy, and govern AI systems.

Explore additional resources

- Learn more about Security for AI solutions on our webpage
- Learn more about Microsoft Agent 365
- Learn more about Microsoft Entra Agent ID
- Get started with Microsoft 365 Copilot
- Get started with Microsoft Copilot Studio
- Get started with Microsoft Foundry
- Get started with Microsoft Defender for Cloud
- Get started with Microsoft Entra
- Get started with Microsoft Purview
- Get started with Microsoft Purview Compliance Manager
- Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial

1 Bedrock Security, 2025 Data Security Confidence Index, published March 17, 2025.
2 AuditBoard & Ascend2, Connected Risk Report 2024; as cited by MIT Sloan Management Review, Spring 2025.
3 KPMG AI Quarterly Pulse Survey, Q3 2025, September 2025; n = 130 U.S.-based C-suite and business leaders representing organizations with annual revenue of $1 billion or more.
4 ISMG, First Annual Generative AI Study: Business Rewards vs. Security Risks, Q3 2023; N = 400.
5 ISMG, First Annual Generative AI Study: Business Rewards vs. Security Risks, Q3 2023; N = 400.

Transforming Security Analysis into a Repeatable, Auditable, and Agentic Workflow
Author(s): Animesh Jain, Vinay Yadav

Shaped by investigations into the strategic question of what it takes for Windows to achieve world-leading security—and the practical engineering needed to explore agentic workflows at scale and their interfaces.

Our work in Windows Servicing & Delivery (WSD) is shaped by two guiding prompts from leadership: "What does it take for Windows to achieve world-leading security?" and "How do we responsibly integrate AI into systems as large and high-churn as Windows?". Reasoning models open new possibilities on both fronts. As we continue experimenting, one issue repeatedly surfaces as the bottleneck for scalable security assurance: variant vulnerabilities. They are subtle, recurring, and easy to miss—making them an ideal proving ground for the enterprise-grade workflow we present here.

Security Analysis at Windows Scale

Security analysis shouldn't be an afterthought—it should be a continuous, auditable, and intelligence-driven process built directly into the engineering workflow. This work introduces an agentic security analysis pipeline that uses reasoning models and tool-based agents to detect variant vulnerabilities across large, fast-changing codebases. By combining automation with explainability, it transforms security validation from a manual, point-in-time task into a repeatable and trustworthy part of every build.

Why are variants the hard part?

Security flaws rarely occur in isolation. Once a vulnerability is fixed, its logical or structural pattern often reappears elsewhere in the codebase—hidden behind different variables, layers, or call paths. These recurring patterns are variants—the quiet echoes of known issues that can persist across millions of lines of code. Finding them manually is slow, repetitive, and incomplete. As engineering velocity increases, so does the likelihood of variant drift—the same vulnerability class re-emerging in a slightly altered form. Each missed variant carries a downstream cost: regression, re-servicing, or, in the worst cases, re-exploitation.

Modern systems like Windows are too large, too interconnected, and ship too frequently for manual vulnerability discovery to keep pace. Traditional static analyzers and deterministic class-based scanners struggle to generalize these patterns or create too much noise, while targeted fuzzing campaigns often fail to trigger the nuanced runtime conditions that expose them. To stay ahead, automation must evolve. We need systems that reason—not just scan—systems capable of understanding relationships between code regions and applying logical analogies instead of brute-force enumeration.

Reasoning Models: A Turning Point in Security Research

Recent advances in AI reasoning have demonstrated that large language models can uncover vulnerabilities previously missed by deterministic tools. For example, Google's Big Sleep agent surfaced an exploitable SQLite flaw (CVE-2025-6965) that bypassed traditional fuzzers due to configuration-sensitive logic. Similarly, an o-series reasoning model helped identify a critical Linux SMB logoff use-after-free (CVE-2025-37899), proving that reasoning-driven automation can detect complex, context-dependent flaws in mature kernel code. These breakthroughs show what's possible when systems can form, test, and refine hypotheses about software behavior. The challenge now is scaling that intelligence into repeatable, auditable, enterprise-grade workflows—where every result is traceable, reviewable, and integrated into the developer's daily workflow.
A Framework for Agentic Security Analysis

To address this challenge, we've developed an agentic security analysis framework that applies reasoning models within a structured, enterprise-grade workflow pattern. It combines large language model agents, specialized analysis tools, and structured artifact generation to make vulnerability discovery continuous, explainable, and auditable. It is interfaced as a first-class Azure DevOps (ADO) pipeline and can be integrated natively into enterprise CI/CD processes. For security analysis, it continuously reasons over large, evolving codebases to identify and validate variant vulnerabilities earlier in the release cycle. Together, these components form a repeatable workflow that helps surface variant patterns with greater consistency and clarity.

Core Technical Pillars

Scale – Autonomous Code Reasoning: Long-context models extend analysis across massive, evolving codebases. They infer analogies, relationships, and behavioral patterns between code regions, enabling scalable reasoning that adapts as systems grow.

Tool–Agent Collaboration: Specialized agents coordinate to perform semantic search, graph traversal, and both static and dynamic interpretation. This distributed reasoning approach ensures resilience and precision across diverse enterprise environments.

Structured Artifact Generation: Every step produces versioned, auditable artifacts that document the reasoning process. These artifacts help provide reproducibility, compliance, and transparency—critical for enterprise governance and regulated industries.

Together, these pillars enable scalable, explainable, and repeatable vulnerability discovery across large software ecosystems such as Windows. Every stage—from reasoning to validation—is logged and traceable, designed to make each discovery reproducible and reviewable.

Inside the framework

Agent-Led, Human-Reviewed: The system is agent-led from start to finish and human-reviewed only at decision boundaries. Agents form hypotheses from recent fixes or vulnerability classes, test them against context, perform validation passes, and generate evidence-backed reports for reviewer confirmation. The workflow mirrors how seasoned security engineers operate—only faster and continuously. Agents run tasks based on templatized prompts.

Tool Specialists as Agents: Each analytical tool functions as a domain-specific agent—performing semantic search, file inspection, or function-graph traversal. These agents collaborate through structured orchestration, maintaining specialization without sacrificing coherence.

Agentic Patterns and Orchestration: The framework employs reusable reasoning patterns—reflective reasoning, actor–validator loops, and parallel tool dialogues—for accuracy and scale. A central conductor agent governs task coordination, context flow, and artifact persistence across runs. A minimal sketch of the actor–validator loop follows.
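To make the actor–validator pattern concrete, here is a minimal, self-contained C# sketch. The types and the artifact-persistence step are hypothetical stand-ins for the framework's internals (which are not public); only the loop structure reflects the pattern described above: an actor proposes variant hypotheses, a validator critiques them, and every round is persisted as an auditable artifact pair.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-ins for the framework's internal types.
record Hypothesis(string Description, string Evidence);
record Critique(bool Confirmed, string Reasoning);

class VariantHunt
{
    // In the real system these delegates would wrap reasoning-model calls
    // driven by templatized prompts.
    public Func<string, Hypothesis> Actor { get; init; } = _ => throw new NotImplementedException();
    public Func<Hypothesis, Critique> Validator { get; init; } = _ => throw new NotImplementedException();

    public List<Hypothesis> Run(string seedFixContext, int maxRounds = 3)
    {
        var confirmed = new List<Hypothesis>();
        for (int round = 0; round < maxRounds; round++)
        {
            Hypothesis candidate = Actor(seedFixContext);  // actor drafts a candidate variant
            Critique critique = Validator(candidate);      // validator argues for or against it

            // Each round yields an auditable artifact pair: an analysis note
            // (the hypothesis) and a critique note (the counter-argument).
            PersistArtifacts(round, candidate, critique);

            if (critique.Confirmed)
                confirmed.Add(candidate);                  // feeds the synthesis report
        }
        return confirmed;
    }

    static void PersistArtifacts(int round, Hypothesis h, Critique c) =>
        Console.WriteLine($"round {round}: {h.Description} -> confirmed={c.Confirmed} ({c.Reasoning})");
}
```

In the production pipeline, the equivalents of Actor and Validator are model calls, and PersistArtifacts writes the versioned analysis and critique notes described next rather than console output.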
Auditability Through Artifacts: Every investigation yields a transparent chain of artifacts:

- Analysis Notes – summarize candidate issues
- Critique Notes – document reasoning and counter-evidence
- Synthesis Reports – provide developer-ready summaries, diffs, call graphs, and exploitability insights
- Agentic Conversation Logs – full conversation logs so developers can retrace the reasoning and gather more context

This structure makes each discovery fully traceable and auditable.

CI/CD-Native Integration: The interface operates as a first-class Azure DevOps pipeline, attachable to pull requests, nightly builds, or release triggers. Each run publishes versioned artifacts and validation notes directly into the developer workflow—making reasoning-driven security a seamless part of software delivery.

What It Can Do Today

- Seeded Variant Hunts: Start from a recent fix or known pattern to enumerate analogous cases, analyze helper functions, and test reachability.
- Evidence-First Reporting: Every finding includes reproducible evidence—code snippets, diffs, and caller graphs—delivered within the PR or work item.
- Scalable Coverage: Runs across servicing branches, producing consistent and auditable validation artifacts.
- Improved Precision: A reasoning-based validation pass has significantly reduced false positives in internal testing.

Case Study: CVE-2025-55325

During a sweep of "*_DEFAULTS" deserializers, the agentic pipeline independently identified GetPoolDefaults trusting a user-controlled size field and copying that many bytes from a caller buffer. The missing runtime bounds check—guarded only by an assertion in debug builds—enabled a potential read access violation and information disclosure. The mitigation mirrored a hardened sibling helper: enforcing runtime bounds on Size versus BytesAvailable/Version before allocation and copy. The finding was later validated by the servicing teams, confirming it matched an issue already under active investigation—illustrating how the automated reasoning process can independently surface real-world vulnerabilities that align with expert analysis.

Beyond Variant Analysis

The underlying architecture of this framework extends naturally beyond variant detection:

- Net-new vulnerability discovery through cross-binary pattern matching
- Model-assisted fuzzing and static analysis orchestrated through CI/CD integration
- Regression detection via historical code comparisons
- Security Development Lifecycle (SDL) enforcement and reproducibility checks

These capabilities open the door to applying reasoning-driven workflows across a broader range of security and validation tasks.

The Road Ahead

Looking ahead, this trajectory naturally leads toward autonomous cybersecurity pipelines powered by reasoning agents that apply reflective analysis, validation loops, and structured tool interactions to complex codebases. By structuring each step as an auditable artifact, the approach supports security and validation analysis that is both explainable and repeatable. These agents could help validate security posture, analyze historical and real-time signals, and detect anomalous patterns early in the lifecycle.
References

- Google Cloud Blog – Big Sleep and AI-Assisted Vulnerability Discovery: "A summer of security: empowering cyber defenders with AI." https://blog.google/technology/safety-security/cybersecurity-updates-summer-2025
- The Hacker News – Google AI 'Big Sleep' Stops Exploitation of Critical SQLite Flaw. https://thehackernews.com/2025/07/google-ai-big-sleep-stops-exploitation.html
- NIST National Vulnerability Database – CVE-2025-6965 (SQLite). https://nvd.nist.gov/vuln/detail/CVE-2025-6965
- Sean Heelan – "Reasoning Models and the ksmbd Use-After-Free." https://simonwillison.net/2025/May/24/sean-heelan
- The Cyber Express – AI Finds CVE-2025-37899 Zero-Day in Linux SMB Kernel. https://thecyberexpress.com/cve-2025-37899-zero-day-in-linux-smb-kernel
- NIST National Vulnerability Database – CVE-2025-37899 (Linux SMB Use-After-Free). https://nvd.nist.gov/vuln/detail/CVE-2025-37899
- NIST National Vulnerability Database – CVE-2025-55325 (Windows Storage Management Provider Buffer Over-read). https://nvd.nist.gov/vuln/detail/CVE-2025-55325
- Microsoft Security Response Center – Vulnerability Details for CVE-2025-55325. https://msrc.microsoft.com/update-guide/vulnerability/CVE-2025-55325

Announcing New Microsoft Purview Capabilities to Protect GenAI Agents
As organizations accelerate their adoption of agentic AI, a new and urgent challenge is emerging: how to protect the rapidly growing number of agents—first-party, third-party, and custom-built—created and deployed across the enterprise. At Microsoft Ignite, we're introducing major advancements in Microsoft Purview to support our customers in securing all their agents, wherever they operate.

The New Reality: Agents Everywhere, Data Risks Amplified

For many organizations, the pace of agent creation is outstripping traditional oversight. Developers, business units, and other information workers can spin up agents to automate tasks, analyze data, or interact with enterprise systems. This proliferation brings tremendous opportunity, but also a new level of risk. New agents can access sensitive information, trigger cascading actions of other agents, and operate outside direct human supervision. The anxiety is real: how do you protect every agent, even those you didn't know existed?

Data risks are especially critical in this new landscape. Agents can process and share sensitive information at scale, interact with external systems, and invoke other agents or large language models, multiplying the complexity and potential for data exposure. Unlike traditional apps, agents are dynamic, autonomous, and often invisible to standard security controls today. The risk surface expands with every new agent, making comprehensive data protection not just a technical requirement, but a business imperative.

Purview for Agent 365: Protections for a more complex agent world

This week, we are announcing Agent 365 (A365) as the control plane for agents, enabling organizations to confidently manage, secure, and govern AI agents—first-party, third-party, and custom-built—across the enterprise. With A365, teams can extend familiar Microsoft 365 tools and policies to agents, ensuring unified inventory, robust access controls, lifecycle management, and deep integration with productivity apps. That's why we're extending Microsoft Purview protections to A365, bringing enterprise-grade security, compliance, and risk management to every agent. Here's what we're introducing to make this possible:

- AI Observability in Data Security Posture Management: Organizations gain visibility, risk assessment, and guided remediation for agents across Microsoft environments. Note: While third-party agents are included in the inventory, assigned risk levels, risk patterns, and guided remediation currently apply to M365 Copilot agents, Copilot Studio, and Microsoft Foundry agents.

Figure: AI Observability in Data Security Posture Management

- Agentic Risk in Insider Risk Management: New agent-specific risk indicators and behavioral analytics enable precise detection and policy enforcement. For example, organizations can now identify risky agent behaviors, such as unauthorized data access or unusual activity patterns, and take targeted action to mitigate threats.
- Data Loss Prevention (DLP) and Information Protection controls extended to agent actions: Purview DLP and Information Protection policies now extend to agents that operate autonomously, allowing these agents to inherit the same protections and organizational policies as users. For example, these built-in controls ensure AI agents don't access or share sensitive data when working with M365 data within apps, whether that means blocking agent access to labeled files or preventing agents from sending external emails and Teams messages that contain sensitive data.
- Expanded governance via Communication Compliance, Audit, Data Lifecycle Management, and eDiscovery: Organizations benefit from expanded proactive detection, secure retention, and policy-based governance for interactions between humans and agents.

By including these protections in A365, organizations can apply Purview's enterprise-grade security, compliance, and risk controls to every agent—making it simpler and safer for customers to deploy agents at scale. Learn more about the Agent 365 announcement.

Extending Purview Controls for All Agents

Not all agents in an organization will run under an A365 license, yet every agent still requires strong data security and compliance controls. For that reason, we are also adding the following Purview capabilities:

- Inline data protection for prompts and responses: Expanded DLP capabilities prevent the sharing of sensitive data or files between users and third-party agents (such as ChatGPT or Google Gemini in Agent mode) through inline data protection for the browser and network.
- Purview SDK embedded in the Agent Framework SDK: Enables developers to seamlessly integrate enterprise-grade security, compliance, and governance into the AI agents they build. This integration enables automatic classification and protection of sensitive data, prevents data leaks and oversharing, and provides visibility and control for regulatory compliance—empowering organizations to confidently and securely adopt AI agents in complex environments.

Embedding Security into the Foundry Development Pipeline

We are also adding several Purview capabilities specifically available in Foundry:

- Purview integration with Foundry: Purview is now enabled within Foundry, allowing Foundry admins to activate Microsoft Purview on their subscription. Once enabled, interaction data from all apps and agents flows into Purview for centralized compliance, governance, and posture management of AI data.
- Azure AI Search honors Purview labels and policies: Azure AI Search now ingests Microsoft Purview sensitivity labels and enforces the corresponding protection policies through built-in indexers (SharePoint, OneLake, Azure Blob, ADLS Gen2). This enables secure, policy-aligned search over enterprise data, enabling agentic RAG scenarios where only authorized documents are returned or sent to LLMs, preventing data oversharing and aligning with enterprise data protection standards (see the retrieval sketch after this list).
- Communication Compliance for Foundry: New policies extend Communication Compliance capabilities to Foundry, allowing security admins to set organization-wide Communication Compliance policies for acceptable communication in interactions with Foundry-built apps and agents, supported by Microsoft's Responsible AI Standard. In Foundry Control Plane, Foundry admins will be able to view any deviations from this policy. In addition, Purview admins will be able to review potentially risky AI interactions in Communication Compliance, enabling them to decide on appropriate next steps.

Figure: Communication Compliance provides visibility into potentially unethical or harmful agent interactions

- Automated AI Compliance Assessments: The new integration between Microsoft Purview Compliance Manager and Foundry delivers automated, real-time compliance for AI solutions. Organizations can quickly assess agents against global standards like the EU AI Act, NIST AI RMF, and ISO/IEC, with one-click assessments and live monitoring of critical controls such as fairness, safety, and transparency. This streamlined approach eliminates manual mapping, surfaces actionable insights, and helps AI systems remain audit-ready as they evolve.
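The label-aware enforcement described above happens inside the Azure AI Search service once Purview integration is enabled. Purely as an illustration of the retrieval pattern it enables (security-trimmed agentic RAG), here is a minimal sketch using the Azure.Search.Documents SDK. The endpoint, index, and sensitivity_label field are hypothetical, and a real deployment would rely on the service-enforced policies rather than a hand-written filter:

```csharp
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

// Hypothetical endpoint, index, and field names, for illustration only.
var searchClient = new SearchClient(
    new Uri("https://contoso-search.search.windows.net"),
    "agent-knowledge-index",
    new AzureKeyCredential(Environment.GetEnvironmentVariable("SEARCH_API_KEY")!));

// Trim results before they ever reach the model: only return documents whose
// (hypothetical) sensitivity_label field falls within the agent's clearance.
var options = new SearchOptions
{
    Filter = "sensitivity_label ne 'Highly Confidential'",
    Size = 5
};

SearchResults<SearchDocument> results =
    await searchClient.SearchAsync<SearchDocument>("quarterly revenue summary", options);

await foreach (SearchResult<SearchDocument> hit in results.GetResultsAsync())
{
    // Only policy-compliant documents are passed on as grounding context.
    Console.WriteLine(hit.Document["title"]);
}
```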
Strengthening Trust in Microsoft 365 Copilot

And we're not stopping there. We're continuing to expand Purview's protections for Microsoft 365 Copilot to help organizations provide real-time protection for sensitive data in Copilot and accelerate remediation of oversharing risks. New enhancements include:

- Item-level oversharing investigation and remediation: Data security admins can now use data risk assessments in DSPM to analyze user sharing links in SharePoint and OneDrive and take bulk actions such as applying sensitivity labels to shared files, requesting that the site owner review sharing links, or disabling the links entirely. These enhancements streamline risk management, reduce exposure, and give organizations greater control over sensitive data at scale.
- Expanding DLP for Microsoft 365 Copilot to safeguard sensitive prompts and prevent data leakage: This new real-time control, applicable to M365 Copilot, Copilot Chat, and Copilot agents, helps prevent data leaks and oversharing by detecting and blocking prompts that contain sensitive data, based on sensitive information types (SITs). By blocking the prompt, it also prevents sensitive data from being used for grounding in Microsoft 365 or the web. This expands on the existing capability to prevent sensitive files and emails from being accessed by Copilot based on sensitivity labels.

Data security and compliance admins need stronger controls for Copilot-related assets like Teams meeting recordings and transcripts. They want to identify recordings with sensitive data and delete them to reduce risk and free up storage. We are announcing two new capabilities to help:

- Priority cleanup for M365 Copilot assets: Enables admins to override existing retention policies and compliantly delete files, such as meeting recordings and transcripts created to support Copilot use. Priority cleanup is now generally available in Purview Data Lifecycle Management.
- On-demand classification now extends to meeting transcripts: Information Protection automatically classifies files when they're created, accessed, or modified, identifying sensitive information in real time. On-demand classification brings the same discovery and classification to data at rest without requiring user interaction, and we've now added meeting transcripts to that coverage. Once the sensitive data in meeting transcripts is discovered and classified, admins can apply DLP or Data Lifecycle Management (DLM) policies to protect it from being shared or exposed unintentionally.

Honoring Purview data security controls in Copilot Mode in Edge for Business: Microsoft Edge for Business now features Copilot Mode to help users accelerate their productivity through AI-assisted browsing. Copilot Mode honors existing Purview data protections, such as preventing summarization of sensitive content open in the browser. Additionally, Agent Mode can be enabled for multi-step agent workflows in the browser. These agentic workflows honor the user's existing DLP protections, such as endpoint DLP policies that prevent pasting sensitive data into sensitive service domains.

Collectively, these capabilities reinforce Purview as the enterprise standard for securing AI-powered productivity. They give organizations the protections they need to scale Copilot usage with confidence and control.
Empowering Secure Agentic AI Adoption

As agents become integral to enterprise operations, Purview's expanded protections empower organizations to safely embrace agentic AI—maintaining control, trust, and accountability at every step. With unified data security and compliance, organizations can observe and assess agent risk, prevent oversharing and leakage, detect risky agent behavior, and take decisive control to turn agentic AI into a trusted engine for growth. To learn more about Agent 365, visit the Agent 365 website.

Unlocking the Power of Microsoft Purview for ChatGPT Enterprise
In today's rapidly evolving technology landscape, data security and compliance are key. Microsoft Purview offers a robust solution for managing and securing interactions with AI-based solutions. This integration not only enhances data governance but also ensures that sensitive information is handled with the appropriate controls. Let's dive into the benefits of this integration and outline the steps to integrate with ChatGPT Enterprise specifically. The integration works for Entra-connected users in the ChatGPT workspace; if you have needs that go beyond this, please tell us why and how it impacts you.

Important update 1: Effective May 1, these capabilities require you to enable pay-as-you-go billing in your organization.

Important update 2: From May 19, you are required to create a collection policy to ingest ChatGPT Enterprise information. In DSPM for AI you will find this one-click process.

Benefits of Integrating ChatGPT Enterprise with Microsoft Purview

- Enhanced Data Security: By integrating ChatGPT Enterprise with Microsoft Purview, organizations can ensure that interactions are securely captured and stored within their Microsoft 365 tenant. This includes user text prompts and AI app text responses, providing a comprehensive record of communications.
- Compliance and Governance: Microsoft Purview offers a range of compliance solutions, including Insider Risk Management, eDiscovery, Communication Compliance, and Data Lifecycle & Records Management. These tools help organizations meet regulatory requirements and manage data effectively.
- Customizable Detection: The integration supports both built-in and custom classifiers for sensitive information, which can be tailored to the specific needs of the organization. This helps ensure that sensitive data is identified and protected. The audit data streams into Advanced Hunting and the Unified Audit events, which can generate visualizations of trends and other insights.
- Seamless Integration: The ChatGPT Enterprise integration uses the Purview API to push data into compliant storage, ensuring that external data sources cannot access and push data directly. This provides an additional layer of security and control.

Step-by-Step Guide to Setting Up the Integration

1. Get the Object ID for the Purview account in your tenant:

- Go to portal.azure.com and search for "Microsoft Purview" in the search bar.
- Click on "Microsoft Purview accounts" from the search results.
- Select the Purview account you are using and copy the account name.
- Go to portal.azure.com and search for "Enterprise" in the search bar. Click on Enterprise applications.
- Remove the filter for Enterprise Applications. Select All applications under Manage, search for the account name, and copy the Object ID.

2. Assign Graph API roles to your managed identity application:

Assign Purview API roles to your managed identity application by connecting to Microsoft Graph using Cloud Shell in the Azure portal.

- Open a PowerShell window in portal.azure.com and run Connect-MgGraph. Authenticate and sign in to your account.
- Run the following cmdlet to get the ServicePrincipal ID for the Purview API app in your organization:

```powershell
(Get-MgServicePrincipal -Filter "AppId eq '9ec59623-ce40-4dc8-a635-ed0275b5d58a'").id
```

- The role assignment below grants Purview.ProcessConversationMessages.All to the Microsoft Purview account, allowing classification processing. Update the ObjectId to the one retrieved in step 1; it appears in both the command and the body parameter.
- Update the ResourceId to the ServicePrincipal ID retrieved in the last step.

```powershell
$bodyParam = @{
    "PrincipalId" = "{ObjectId}"
    "ResourceId"  = "{ResourceId}"
    "AppRoleId"   = "a4543e1f-6e5d-4ec9-a54a-f3b8c156163f"
}
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam
```

We also need to add permission for the application to read user accounts, so that ChatGPT Enterprise users can be correctly mapped to Entra accounts. First, run the following command to get the ServicePrincipal ID for the Graph app in your organization:

```powershell
(Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'").id
```

The following step adds the User.Read.All permission to the Purview application. Update the ObjectId with the one retrieved in step 1 and the ResourceId with the ServicePrincipal ID retrieved in the last step.

```powershell
$bodyParam = @{
    "PrincipalId" = "{ObjectId}"
    "ResourceId"  = "{ResourceId}"
    "AppRoleId"   = "df021288-bdef-4463-88db-98f22de89214"
}
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam
```

3. Store the ChatGPT Enterprise API key in Key Vault:

The steps for setting up Key Vault integration for Data Map can be found here: Create and manage credentials for scans in the Microsoft Purview Data Map | Microsoft Learn.

4. Integrate the ChatGPT Enterprise workspace with Purview:

Create a new data source in Purview Data Map that connects to the ChatGPT Enterprise workspace.

- Go to purview.microsoft.com and select Data Map (search for it if you do not see it on the first screen).
- Select Data sources, then select Register.
- Search for ChatGPT Enterprise and select it. Provide your ChatGPT Enterprise ID.
- Create the first scan by selecting Table view and filtering on ChatGPT.
- Add your Key Vault credentials to the scan.
- Test the connection and, once complete, click Continue.
- Review the summary screen and, if everything is in order, click Save and run.

Validate progress by clicking on the scan name; this expands all the runs for that scan. Completion of the first full scan may take an extended period; depending on size, it may take more than 24 hours. When the scan completes, you can start using the DSPM for AI experience to review interactions with ChatGPT Enterprise. The mapping to users is based on the ChatGPT Enterprise connection to Entra, with prompts and responses stored in the user's mailbox.

5. Review and monitor data:

Please see this article for required permissions and guidance around Microsoft Purview Data Security Posture Management (DSPM) for AI: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn. Use Purview DSPM for AI analytics and Activity Explorer to review interactions and classifications. You can expand on prompts and responses in ChatGPT Enterprise.

6. Microsoft Purview Communication Compliance:

Communication Compliance (hereafter CC) is a feature of Microsoft Purview that allows you to monitor and detect inappropriate or risky interactions with ChatGPT Enterprise. You can monitor and detect requests and responses that are inappropriate based on ML models, regular sensitive information types, and other classifiers in Purview. This can help you identify jailbreak and prompt injection attacks and flag them to IRM and for case management.
Detailed steps to configure CC policies and supported configurations can be found here.

7. Microsoft Purview Insider Risk Management:

We believe that Microsoft Purview Insider Risk Management (hereafter IRM) can serve a key role in protecting your AI workloads long term. With its adaptive protection capabilities, IRM dynamically adjusts user access based on evolving risk levels. In the event of heightened risk, IRM can enforce Data Loss Prevention (DLP) policies on sensitive content, apply tailored Entra Conditional Access policies, and initiate other necessary actions to effectively mitigate potential risks. This strategic approach helps you apply more stringent policies where they matter most, avoiding a boil-the-ocean approach, so your team can get started with AI. To get started, use the signals available to you, including CC signals, to raise IRM tickets and enforce adaptive protection. You should create your own custom IRM policy for this, and include Defender signals as well. Based on elevated risk, you may choose to block users from accessing certain assets such as ChatGPT Enterprise. Please see this article for more detail: Block access for users with elevated insider risk - Microsoft Entra ID | Microsoft Learn.

8. eDiscovery:

eDiscovery of AI interactions is crucial for legal compliance, transparency, accountability, risk management, and data privacy protection. Many industries must preserve and discover electronic communications and interactions to meet regulatory requirements. Including AI interactions in eDiscovery ensures organizations comply with these obligations and preserves relevant evidence for litigation. This process also helps maintain trust by enabling the review of AI decisions and actions, demonstrating due diligence to regulators. See Microsoft Purview eDiscovery solutions | Microsoft Learn.

9. Data Lifecycle Management:

Microsoft Purview offers robust solutions to manage AI data from creation to deletion, including classification, retention, and secure disposal. This ensures that AI interactions are preserved and retrievable for audits, litigation, and compliance purposes. Please see this article for more information: Automatically retain or delete content by using retention policies | Microsoft Learn.

Closing

By following these steps, organizations can leverage the full potential of Microsoft Purview to enhance the security and compliance of their ChatGPT Enterprise interactions. This integration not only provides peace of mind but also empowers organizations to manage their data more effectively. Some of the features listed are still in preview and not yet fully integrated; please reach out to us if you have any questions or additional requirements.

Secure Model Context Protocol (MCP) Implementation with Azure and Local Servers
Introduction

The Model Context Protocol (MCP) enables AI systems to interact with external data sources and tools through a standardized interface. While powerful, MCP can introduce security risks in enterprise environments. This tutorial shows you how to implement MCP securely using local servers, Azure OpenAI with APIM, and proper authentication.

Understanding MCP's Security Risks

There are a couple of key security concerns to consider before implementing MCP:

- Data exfiltration: external MCP servers could expose sensitive data.
- Unauthorized access: third-party services become potential security risks.
- Loss of control: it is unknown how external services handle your data.
- Compliance issues: external dependencies make it difficult to meet regulatory requirements.

The solution? Keep everything local and controlled.

Secure Architecture

Before we dive into implementation, let's take a look at the overall architecture of our secure MCP solution. It consists of three key components working together:

- Local MCP server: your custom tools run entirely within your local environment, reducing external exposure risks.
- Azure OpenAI + APIM gateway: all AI requests are routed through Azure API Management with Microsoft Entra ID authentication, providing enterprise-grade security controls and compliance.
- Authenticated proxy: a lightweight proxy service handles token management and request forwarding, ensuring seamless integration.

One of the key benefits of this architecture is that no API key is required. Traditional implementations often require storing OpenAI API keys in configuration files, environment variables, or secrets-management systems, creating potential security vulnerabilities. This approach uses Azure managed identity for backend authentication and Azure CLI credentials for client authentication, meaning no sensitive API keys are ever stored, logged, or exposed in your codebase.

For additional security, APIM and Azure OpenAI resources can be configured with IP restrictions or network rules to accept traffic only from certain sources. These configurations are available for most Azure resources and provide an additional layer of network-level security. This security-forward approach gives you the full power of MCP's tool integration capabilities while keeping your implementation completely under your control.

How to Implement MCP Securely

1. Local MCP Server Implementation

Building the MCP server: let's start by creating a simple MCP server in .NET Core.

1. Create a web application:

```
dotnet new web -n McpServer
```

2. Add the MCP packages:

```
dotnet add package ModelContextProtocol --prerelease
dotnet add package ModelContextProtocol.AspNetCore --prerelease
```

3. Configure Program.cs:

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMcpServer()
    .WithHttpTransport()
    .WithToolsFromAssembly();

var app = builder.Build();

app.MapMcp();
app.Run();
```

WithToolsFromAssembly() automatically discovers and registers tools from the current assembly. Look into the C# SDK for other ways to register tools for your use case.

4. Define tools. Now we can define some tools that our MCP server can expose.
Here is a simple example of tools that echo input back to the client:

```csharp
using System.ComponentModel;
using System.Linq;
using ModelContextProtocol.Server;

namespace Tools;

[McpServerToolType]
public static class EchoTool
{
    [McpServerTool]
    [Description("Echoes the input text back to the client in all capital letters.")]
    public static string EchoCaps(string input)
    {
        return input.ToUpperInvariant();
    }

    [McpServerTool]
    [Description("Echoes the input text back to the client in reverse.")]
    public static string ReverseEcho(string input)
    {
        return new string(input.Reverse().ToArray());
    }
}
```

The key components of MCP tools are the McpServerToolType class attribute, which indicates that the class contains MCP tools, and the McpServerTool method attribute with a description that explains what each tool does.

Alternative: STDIO Transport

If you want to use STDIO transport instead of SSE (implemented here), check out this guide: Build a Model Context Protocol (MCP) Server in C#
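Before configuring a full client, you can optionally sanity-check the server with the MCP Python SDK. The following is a minimal sketch, not part of the original walkthrough; it assumes the server is listening at http://localhost:5000/sse (adjust to your Kestrel port) and that the tool argument is named input, matching the C# parameter name above.

```python
# Minimal sketch (assumptions noted above): list the server's tools and invoke
# EchoCaps over the SSE transport using the MCP Python SDK ("mcp" package).
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    # Open the SSE transport to the local MCP server
    async with sse_client("http://localhost:5000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools registered via WithToolsFromAssembly()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

            # Call one of the echo tools defined above
            result = await session.call_tool("EchoCaps", {"input": "hello world"})
            print(result.content)


asyncio.run(main())
```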
2. Create an MCP Client with Cline

Now that we have our MCP server set up with tools, we need a client that can discover and invoke those tools. For this implementation, we'll use Cline as our MCP client, configured to work through our secure Azure infrastructure.

1. Install the Cline VS Code extension

Install the Cline extension in VS Code.

2. Deploy a secure Azure OpenAI endpoint with APIM

Instead of connecting Cline directly to external AI services (which could expose the secure implementation to external bad actors), we route through Azure API Management (APIM) for enterprise security. With this implementation, all requests go through Microsoft Entra ID, and managed identity is used for all authentication.

Quick setup: deploy the Azure OpenAI with APIM solution, and ensure your Azure OpenAI resources are configured to allow your APIM's managed identity to make calls. The APIM policy below uses managed identity authentication to connect to Azure OpenAI backends; refer to the Azure OpenAI documentation on managed identity authentication for detailed setup instructions.

3. Configure the APIM policy

After deploying APIM, configure the following policy to enable Azure AD token validation, managed identity authentication, and load balancing across multiple OpenAI backends:

```xml
<!-- Azure API Management Policy for OpenAI Endpoint -->
<!-- Implements Azure AD token validation, managed identity authentication -->
<!-- Supports round-robin load balancing across multiple OpenAI backends -->
<!-- Requests with 'gpt-5' in the URL are routed to a single backend -->
<!-- The client application ID '04b07795-8ddb-461a-bbee-02f9e1bf7b46' is the official Azure CLI app registration -->
<!-- This policy allows requests authenticated by Azure CLI (az login) when the required claims are present -->
<policies>
  <inbound>
    <!-- IP allow list fragment (external fragment for client IP restrictions) -->
    <include-fragment fragment-id="YourCompany-IPAllowList" />
    <!-- Azure AD token validation for the Azure CLI app ID -->
    <validate-azure-ad-token tenant-id="YOUR-TENANT-ID-HERE" header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
      <client-application-ids>
        <application-id>04b07795-8ddb-461a-bbee-02f9e1bf7b46</application-id>
      </client-application-ids>
      <audiences>
        <audience>api://YOUR-API-AUDIENCE-ID-HERE</audience>
      </audiences>
      <required-claims>
        <claim name="roles" match="any">
          <value>YourApp.User</value>
        </claim>
      </required-claims>
    </validate-azure-ad-token>
    <!-- Acquire a managed identity access token for backend authentication -->
    <authentication-managed-identity resource="https://cognitiveservices.azure.com" output-token-variable-name="managed-id-access-token" ignore-error="false" />
    <!-- Set the Authorization header for the backend using the managed identity token -->
    <set-header name="Authorization" exists-action="override">
      <value>@("Bearer " + (string)context.Variables["managed-id-access-token"])</value>
    </set-header>
    <!-- Check if the URL contains 'gpt-5' and set the backend accordingly -->
    <choose>
      <when condition="@(context.Request.Url.Path.ToLower().Contains("gpt-5"))">
        <set-variable name="selected-backend-url" value="https://your-region1-oai.openai.azure.com/openai" />
      </when>
      <otherwise>
        <cache-lookup-value key="backend-counter" variable-name="backend-counter" />
        <choose>
          <when condition="@(context.Variables.ContainsKey("backend-counter") == false)">
            <set-variable name="backend-counter" value="@(0)" />
          </when>
        </choose>
        <set-variable name="current-backend-index" value="@((int)context.Variables["backend-counter"] % 7)" />
        <choose>
          <when condition="@((int)context.Variables["current-backend-index"] == 0)">
            <set-variable name="selected-backend-url" value="https://your-region1-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 1)">
            <set-variable name="selected-backend-url" value="https://your-region2-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 2)">
            <set-variable name="selected-backend-url" value="https://your-region3-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 3)">
            <set-variable name="selected-backend-url" value="https://your-region4-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 4)">
            <set-variable name="selected-backend-url" value="https://your-region5-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 5)">
            <set-variable name="selected-backend-url" value="https://your-region6-oai.openai.azure.com/openai" />
          </when>
          <when condition="@((int)context.Variables["current-backend-index"] == 6)">
            <set-variable name="selected-backend-url" value="https://your-region7-oai.openai.azure.com/openai" />
          </when>
        </choose>
        <set-variable name="next-counter" value="@(((int)context.Variables["backend-counter"] + 1) % 1000)" />
        <cache-store-value key="backend-counter" value="@((int)context.Variables["next-counter"])" duration="300" />
      </otherwise>
    </choose>
    <!-- Always set the backend service using the selected-backend-url variable -->
    <set-backend-service base-url="@((string)context.Variables["selected-backend-url"])" />
    <!-- Inherit any base policies defined outside this section -->
    <base />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
  <on-error>
    <base />
  </on-error>
</policies>
```

This policy creates a secure gateway that validates Azure AD tokens from your local Azure CLI session, then uses APIM's managed identity to authenticate with Azure OpenAI backends, eliminating the need for API keys.
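Once the policy is saved, you can verify the gateway end to end before wiring up Cline. The sketch below is illustrative and assumes the audience configured in the policy above; the gateway URL, API route, deployment name, and api-version are placeholders that must match your APIM and Azure OpenAI configuration.

```python
# Hedged sketch: smoke-test the APIM gateway with a token from your local
# Azure CLI session (run `az login` first). Placeholders must match your setup.
import requests
from azure.identity import AzureCliCredential

# Request a token for the custom audience configured in the APIM policy
token = AzureCliCredential().get_token("api://YOUR-API-AUDIENCE-ID-HERE/.default").token

resp = requests.post(
    "https://<your-apim-name>.azure-api.net/<api-suffix>/deployments/<deployment>/chat/completions",
    params={"api-version": "<your-api-version>"},
    headers={"Authorization": f"Bearer {token}"},
    json={"messages": [{"role": "user", "content": "ping"}]},
    timeout=60,
)
print(resp.status_code)
print(resp.text[:500])
```

A 200 response confirms token validation, the managed identity handoff, and backend routing are all working; a 401 usually means the audience or role claim does not match the policy.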
It automatically load-balances requests across multiple Azure OpenAI regions using round-robin distribution for optimal performance.

4. Create an Azure APIM proxy for Cline

This FastAPI-based proxy forwards OpenAI-compatible API requests from Cline through APIM using Azure AD authentication via Azure CLI credentials, eliminating the need to store or manage OpenAI API keys.

Prerequisites:

- Python 3.8 or higher
- Azure CLI (ensure az login has been run at least once)
- The user running the proxy script must have the appropriate Azure AD roles and permissions. The script uses Azure CLI credentials to obtain bearer tokens, so your account needs the correct roles assigned and access to the target API audience configured in the APIM policy above.

Quick setup for the proxy. Create this requirements.txt:

```
fastapi
uvicorn
requests
azure-identity
```

Create this Python script for the proxy, azure_proxy.py:

```python
import os
import time

import requests
import uvicorn
from azure.identity import AzureCliCredential
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from requests.adapters import HTTPAdapter

# CONFIGURATION
APIM_BASE_URL = "<APIM BASE URL HERE>"
AZURE_SCOPE = "<AZURE SCOPE HERE>"
PORT = int(os.environ.get("PORT", 8080))

app = FastAPI()
credential = AzureCliCredential()

# Use a single requests.Session for connection pooling
session = requests.Session()
session.mount("https://", HTTPAdapter(pool_connections=100, pool_maxsize=100))

_cached_token = None
_cached_expiry = 0


def get_bearer_token(scope: str) -> str:
    """Get an access token using AzureCliCredential, refreshing when expiry is within 30 seconds."""
    global _cached_token, _cached_expiry
    now = int(time.time())
    if _cached_token and (_cached_expiry - now > 30):
        return _cached_token
    try:
        token_obj = credential.get_token(scope)
        _cached_token = token_obj.token
        _cached_expiry = token_obj.expires_on
        return _cached_token
    except Exception as e:
        raise RuntimeError(f"Could not get Azure access token: {e}")


@app.api_route("/{path:path}", methods=["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS"])
async def proxy(request: Request, path: str):
    # Assemble the destination URL (preserve trailing-slash logic)
    dest_url = f"{APIM_BASE_URL.rstrip('/')}/{path}".rstrip("/")
    if request.url.query:
        dest_url += "?" + request.url.query

    # Get the bearer token
    bearer_token = get_bearer_token(AZURE_SCOPE)

    # Prepare headers (copy all, overwrite Authorization)
    headers = dict(request.headers)
    headers["Authorization"] = f"Bearer {bearer_token}"
    headers.pop("host", None)

    # Read the body
    body = await request.body()

    # Send the request to APIM using the pooled session
    resp = session.request(
        method=request.method,
        url=dest_url,
        headers=headers,
        data=body if body else None,
        stream=True,
    )

    # Stream the response back to the client
    return StreamingResponse(
        resp.raw,
        status_code=resp.status_code,
        headers={k: v for k, v in resp.headers.items() if k.lower() != "transfer-encoding"},
    )


if __name__ == "__main__":
    # Bind the app to 127.0.0.1 to avoid firewall prompts
    uvicorn.run(app, host="127.0.0.1", port=PORT)
```

Run the setup:

```
pip install -r requirements.txt
az login   # Authenticate with Azure
python azure_proxy.py
```
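With the proxy running, you can exercise it like any OpenAI-compatible endpoint. This sketch is an illustration only; the path segments are placeholders that must match your APIM route and Azure OpenAI deployment.

```python
# Hedged sketch: confirm the local proxy forwards requests and injects the
# Azure AD token. No Authorization header is needed on the client side.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/deployments/<deployment>/chat/completions",
    params={"api-version": "<your-api-version>"},
    json={"messages": [{"role": "user", "content": "Hello from the proxy"}]},
    timeout=60,
)
print(resp.status_code, resp.text[:200])
```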
Configure Cline to use the proxy, using the OpenAI Compatible API provider:

- Base URL: http://localhost:8080
- API Key: <any random string>
- Model ID: <your Azure OpenAI deployment name>
- API Version: <your Azure OpenAI deployment version>

The API key field is required by Cline but unused in our implementation; any random string works, since authentication happens via Azure AD.

5. Configure Cline to listen to your MCP server

Now that we have both our MCP server running and Cline configured with secure OpenAI access, the final step is connecting them together. To enable Cline to discover and use your custom tools, navigate to your installed MCP servers in Cline, select Configure MCP Servers, and add the configuration for your server:

```json
{
  "mcpServers": {
    "mcp-tools": {
      "autoApprove": [
        "EchoCaps",
        "ReverseEcho"
      ],
      "disabled": false,
      "timeout": 60,
      "type": "sse",
      "url": "http://<your localhost url>/sse"
    }
  }
}
```

Now you can use Cline's chat interface to interact with your secure MCP tools. Try asking Cline to use your custom tools, for example, "Can you echo 'Hello World' in capital letters?", and watch as it calls your local MCP server through the infrastructure you've built.

Conclusion

There you have it: a secure implementation of MCP that can be tailored to your specific use case. This approach gives you the power of MCP while maintaining enterprise security. You get:

- AI capabilities through secure Azure infrastructure.
- Custom tools that never leave your environment.
- A standard MCP interface for easy integration.
- Complete control over your data and tools.

The key is keeping MCP servers local while routing AI requests through your secure Azure infrastructure. This way, you gain MCP's benefits without compromising security.

Disclaimer

While this tutorial provides a secure foundation for MCP implementation, organizations are responsible for configuring their Azure resources according to their specific security requirements and compliance standards. Ensure proper review of network rules, access policies, and authentication configurations before deploying to production environments.
Resources

MCP SDKs and tools:
- MCP C# SDK
- MCP Python SDK
- Cline SDK
- Cline User Guide
- Azure OpenAI with APIM

Azure API Management network security:
- Azure API Management - restrict caller IPs
- Azure API Management with an Azure virtual network
- Set up inbound private endpoint for Azure API Management

Azure OpenAI and AI services network security:
- Configure Virtual Networks for Azure AI services
- Securing Azure OpenAI inside a virtual network with private endpoints
- Add an Azure OpenAI network security perimeter
- az cognitiveservices account network-rule

Cyber Dial Agent: Protecting Your Custom Copilot Agents
Introducing the Cyber Dial Agent: a browser add-on and agent that streamlines security investigations by providing analysts with a unified, menu-driven interface to quickly access relevant pages in Microsoft Defender, Purview, and Defender for Cloud. This tool eliminates the need for manual searches across multiple portals, reducing investigation time and minimizing context switching for both technical and non-technical users.

Visit the full article "Safeguard & Protect Your Custom Copilot Agents (Cyber Dial Agent)" in the Microsoft Purview Community Blog. You'll find detailed, visual, step-by-step guides on all of the following:

- Importing the agent, built as a Microsoft Copilot Studio solution, into another tenant and publishing it afterward
- Adding the browser add-on solution in Microsoft Edge (or any modern browser)
- Using Purview DSPM for AI to secure Copilot Studio agents (the Cyber Dial custom agent)

Read the full article by Hesham_Saad.

Securing Microsoft M365 Copilot and AI with Microsoft's Suite of Security Products - Part 1
Microsoft 365 Copilot and AI applications created in Azure AI Foundry are transforming productivity, but they also introduce new security challenges for businesses. Organizations embracing these AI capabilities must guard against risks such as data leaks, novel AI-driven threats (e.g., prompt injection attacks), and compliance violations. Microsoft offers a comprehensive suite of products to help secure and govern AI solutions. This multipart guide provides a detailed roadmap for using Microsoft's security services together to protect AI deployments and Copilot integrations in an enterprise environment.

Overview of Microsoft Security Solutions for AI and Copilot

Microsoft's security portfolio spans identity, devices, cloud apps, data, and threat management, all crucial for securing AI systems. Key products include Microsoft Entra (identity and access), Microsoft Defender XDR (a unified enterprise defense suite that natively coordinates detection, prevention, investigation, and response across endpoints, identities, email, and applications), Microsoft Purview (data security, compliance, and governance), Microsoft Sentinel (cloud-native SIEM/SOAR for monitoring and response), and Microsoft Intune (device management), among others. These solutions are designed to integrate, forming an AI-first, unified security platform that greatly reduces the complexity of implementing a cohesive Zero Trust strategy across your enterprise and AI ecosystem. The summary below lists the main product categories and their roles in securing AI applications and Copilot.

Security Area: Identity and Access Management
Microsoft Products: Microsoft Entra and Entra Suite (Entra ID Protection, Entra Conditional Access, Entra Internet Access, Entra Private Access, Entra ID Governance)
Role: Verify and control access to AI systems. Enforce strong authentication and least privilege for users, admins, and AI service identities. Conditional Access policies (including new AI app controls) restrict who can use specific AI applications.

Security Area: Endpoint & Device Security
Microsoft Products: Microsoft Defender for Endpoint, Microsoft Intune
Role: Secure user devices that interact with AI. Defender for Endpoint provides EDR (endpoint detection and response) to help block malware or exploits while also identifying high-risk devices. Intune helps ensure only managed, compliant devices can access corporate AI apps, aligning with a Zero Trust strategy.

Security Area: Cloud & Application Security
Microsoft Products: Microsoft Defender for Cloud (CSPM/CWPP), Defender for Cloud Apps (CASB/SSPM), Azure Network Security (Azure Firewall, WAF)
Role: Protect AI infrastructure and cloud workloads (IaaS/SaaS). Defender for Cloud continuously assesses the security posture of AI services (VMs, containers, Azure OpenAI instances) and detects misconfigurations or vulnerabilities. It now provides AI security posture management across multi-cloud AI environments (Azure, AWS, Google) and multiple model types. Defender for Cloud Apps monitors and controls SaaS AI app usage to combat "shadow AI" (unsanctioned AI tools). Azure Firewall and WAF guard AI APIs and web front ends against network threats, with new Copilot-powered features to analyze traffic and logs.
Security Area: Threat Detection & Response
Microsoft Products: Microsoft Defender XDR, Microsoft Sentinel (SIEM/SOAR), Microsoft Security Copilot
Role: Detect and respond to threats. Microsoft's Defender XDR suite provides a single pane of glass for security operations teams to detect, investigate, and respond to threats, correlating signals from endpoints, identities, cloud apps, and email. Microsoft Sentinel enhances these capabilities by aggregating and correlating signals from third-party, non-Microsoft products with Defender XDR data to alert on suspicious activities across the environment. Security Copilot (an AI assistant for SOC teams) further accelerates incident analysis and response using generative AI, helping defenders investigate incidents or automate threat hunting.

Security Area: Data Security & Compliance
Microsoft Products: Microsoft Purview (Information Protection, Data Loss Prevention, Insider Risk, Compliance Manager, DSPM for AI), SharePoint Advanced Management
Role: Protect sensitive data used or produced by AI. Purview enables classification and sensitivity labeling of data so that confidential information is handled properly by AI. Purview Data Loss Prevention (DLP) enforces policies to prevent sensitive data leaks; for example, new Purview DLP controls for Edge for Business can block users from typing or pasting sensitive data into generative AI apps like ChatGPT or Copilot Chat. Purview Insider Risk Management can detect anomalous data extraction via AI tools. Purview Compliance Manager and Audit help ensure AI usage complies with regulations (e.g., GDPR, HIPAA) and provide audit logs of AI interactions.

Security Area: AI Application Safety
Microsoft Products: Azure AI Content Safety (content filtering), Responsible AI controls (Prompt flow, OpenAI policy)
Role: Ensure AI output and usage remain safe and within policy. Azure AI Content Safety provides AI-driven content filters and "prompt shields" to block malicious or inappropriate prompts and outputs in real time. Microsoft's Responsible AI framework and tools (such as evaluations in Azure AI Studio to simulate adversarial prompts) further help developers build AI systems that adhere to safety and ethical standards. Meanwhile, M365 Copilot has built-in safeguards: it respects all your existing Microsoft 365 security, privacy, and compliance controls by design.

How the Pieces Work Together

Imagine a user at a company using Microsoft 365 Copilot to query internal documents. Entra ID first verifies the user is who they claim to be (with MFA) and that their device is in a compliant state. When the user prompts Copilot, Copilot checks the user's permissions and retrieves only data they are authorized to see. The prompt and the AI's generated answer are then checked against Microsoft Purview's DLP, Insider Risk, DSPM, and compliance policies; if the user's query or the response would expose, say, credit card numbers or other sensitive information, the system can block or redact it. Meanwhile, Defender XDR's extended detection and response capabilities work in the background: Defender for Cloud Apps logs that the user accessed an approved third-party AI service, Sentinel correlates this with any unusual behavior (like data exfiltration after running the prompt), an alert is triggered, and the user is either blocked or, if allowed, forced to label and encrypt the data before sending it externally. In short, each security layer (identity, data, device, cloud, monitoring) plays an important part in securing this AI-driven scenario.
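To make the identity layer of this scenario concrete, here is a minimal sketch, not a definitive implementation, of creating a Conditional Access policy through the Microsoft Graph API that requires a compliant device for a specific AI application. Assumptions: TOKEN holds a Graph token with the Policy.ReadWrite.ConditionalAccess permission, and APP_ID is a placeholder for the client ID of the AI app you want to gate.

```python
# Hedged sketch: create a report-only Conditional Access policy via Microsoft
# Graph that requires compliant devices for one AI application.
import requests

TOKEN = "<graph-access-token>"   # needs Policy.ReadWrite.ConditionalAccess
APP_ID = "<ai-app-client-id>"    # placeholder: the app to protect

policy = {
    "displayName": "Require compliant device for AI app",
    # Report-only mode lets you observe impact before enforcing
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "clientAppTypes": ["all"],
        "applications": {"includeApplications": [APP_ID]},
        "users": {"includeUsers": ["All"]},
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["compliantDevice"],
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```

Starting in report-only mode is a deliberate choice: it surfaces which sign-ins the policy would have blocked without disrupting users while you tune it.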
Stay Tuned for Part 2 of This Multipart Series

In the following articles, we break down how to configure and use the tools summarized in this article, starting with identity and access management. We will also highlight best practices (like Microsoft's recommended Prepare -> Discover -> Protect -> Govern approach for AI security) and cover recent product enhancements that assist in securing AI.

From Traditional Security to AI-Driven Cyber Resilience: Microsoft's Approach to Securing AI
By Chirag Mehta, Vice President and Principal Analyst - Constellation Research

AI is changing the way organizations work. It helps teams write code, detect fraud, automate workflows, and make complex decisions faster than ever before. But as AI adoption increases, so do the risks, many of which traditional security tools were not designed to address. Cybersecurity leaders are starting to see that AI security is not just another layer of defense. It is becoming essential to building trust, ensuring resilience, and maintaining business continuity.

Earlier this year, after many conversations with CISOs and CIOs, I saw a clear need to bring more attention to this topic. That led to my report on AI Security, which explores how AI-specific vulnerabilities differ from traditional cybersecurity risks and why securing AI systems calls for a more intentional approach.

Why AI Changes the Security Landscape

AI systems do not behave like traditional software. They learn from data instead of following predefined logic. This makes them powerful, but also vulnerable. For example, an AI model can:

- Misinterpret input in ways that humans cannot easily detect
- Be tricked into producing harmful or unintended responses through crafted prompts
- Leak sensitive training data in its outputs
- Take actions that go against business policies or legal requirements

These are not coding flaws. They are risks that originate from how AI systems process information and act on it. These risks become more serious with agentic AI. These systems act on behalf of humans, interact with other software, and sometimes with other AI agents. They can make decisions, initiate actions, and change configurations. If one is compromised, the consequences can spread quickly.

A key challenge is that many organizations still rely on traditional defenses to secure AI systems. While those tools remain necessary, they are no longer enough. AI introduces new risks across every layer of the stack, including data, networks, endpoints, applications, and cloud infrastructure. As I explained in my report, the security focus must shift from defending the perimeter to governing the behavior of AI systems, the data they use, and the decisions they make.

The Shift Toward AI-Aware Cyber Resilience

Cyber resilience is the ability to withstand, adapt to, and recover from attacks. Meeting that standard today requires understanding how AI is developed, deployed, and used by employees, customers, and partners. To get there, organizations must answer questions such as:

- Where is our sensitive data going, and is it being used safely to train models?
- What non-human identities, such as AI agents, are accessing systems and data?
- Can we detect when an AI system is being misused or manipulated?
- Are we in compliance with new AI regulations and data usage rules?

Let's look at how Microsoft has evolved its mature security portfolio to help protect AI workloads and support this shift toward resilience.

Microsoft's Approach to Secure AI

Microsoft has taken a holistic and integrated approach to AI security. Rather than creating entirely new tools, it is extending existing products already used by millions to support AI workloads. These features span identity, data, endpoint, and cloud protection.

1. Microsoft Defender: Treating AI Workloads as Endpoints

AI models and applications are emerging as a new class of infrastructure that needs visibility and protection.
- Defender for Cloud secures AI workloads across Azure and other cloud platforms such as AWS and GCP by monitoring model deployments and detecting vulnerabilities.
- Defender for Cloud Apps extends protection to AI-enabled apps running at the edge.
- Defender for APIs supports AI systems that use APIs, which are often exposed to risks such as prompt injection or model manipulation.

Additionally, Microsoft has launched tools to support AI red-teaming, content safety, and continuous evaluation, ensuring agents operate safely and as intended. This helps teams identify and remediate risks such as jailbreaks or prompt injection before models are deployed.

2. Microsoft Entra: Managing Non-Human Identities

As organizations roll out more AI agents and copilots, non-human identities are becoming more common. These digital identities need strong oversight.

- Microsoft Entra helps create and manage identities for AI agents.
- Conditional Access ensures AI agents only access the resources they need, based on real-time signals and context.
- Privileged Identity Management manages, controls, and monitors AI agents' access to important resources within an organization.

3. Microsoft Purview: Securing Data Used in AI

Purview plays an important role in securing both the data that powers AI apps and agents and the data they generate through interactions.

- Data discovery and classification helps label sensitive information and track its use.
- Data Loss Prevention policies help prevent leaks or misuse of data in tools such as Copilot or agents built in Azure AI Foundry.
- Insider Risk Management alerts security teams when employees feed sensitive data into AI systems without approval.

Purview also helps organizations meet transparency and compliance requirements as regulations like the EU AI Act take effect, extending the policies they already use today to AI workloads without requiring separate configurations.

Securing AI Is Now a Strategic Priority

AI is evolving quickly, and the risks are evolving with it. Traditional tools still matter, but they were not built for systems that learn, adapt, and act independently. They also were not designed for the pace and development approaches AI requires, where securing from the first line of code is critical to staying protected at scale. Microsoft is adapting its security portfolio to meet this shift. By strengthening identity, data, and endpoint protections, it is helping customers build a more resilient foundation. Whether you are launching your first AI-powered tool or managing dozens of agents across your organization, the priority is clear: secure your AI systems before they become a point of weakness. You can read more in my AI Security report and learn how Microsoft is supporting these efforts across its security portfolio.

Using Copilot in Fabric with Confidence: Data Security, Compliance & Governance with DSPM for AI
Introduction

As organizations embrace AI to drive innovation and productivity, ensuring data security, compliance, and governance becomes paramount. Copilot in Microsoft Fabric offers powerful AI-driven insights, but without proper oversight, users can misuse Copilot to expose sensitive data or violate regulatory requirements. Enter Microsoft Purview's Data Security Posture Management (DSPM) for AI: a unified solution that empowers enterprises to monitor, protect, and govern AI interactions across Microsoft and third-party platforms.

We are excited to announce the general availability of Microsoft Purview capabilities for Copilot in Fabric, starting with Copilot in Power BI. This blog explores how Purview DSPM for AI integrates with Copilot in Fabric to deliver robust data protection and governance, and provides a step-by-step guide to enable the integration.

Capabilities of Purview DSPM for AI

As organizations adopt AI, implementing data controls and a Zero Trust approach is crucial to mitigate risks like data oversharing, data leakage, and potentially non-compliant usage of AI. By combining Microsoft Purview and Copilot for Power BI, users can:

- Discover data risks, such as sensitive data in user prompts and responses, in Activity Explorer, and receive recommended actions in their Microsoft Purview DSPM for AI Reports to reduce these risks. If you find Copilot in Fabric actions in DSPM for AI Activity Explorer or Reports that appear inappropriate or malicious, you can look for further information in Insider Risk Management (IRM), through an eDiscovery case, in Communication Compliance (CC), or in Data Lifecycle Management (DLM).
- Identify risky AI usage with Microsoft Purview Insider Risk Management, for example investigating an inadvertent user who neglected security best practices and shared sensitive data in AI.
- Govern AI usage with Microsoft Purview Audit, Microsoft Purview eDiscovery, retention policies, and detection of non-compliant or unethical AI usage with Purview Communication Compliance:
  - Purview Audit provides a detailed log of user and admin activity within Copilot in Fabric, enabling organizations to track access, monitor usage patterns, and support forensic investigations (a programmatic sketch of querying these logs follows this list).
  - Purview eDiscovery enables legal and investigative teams to identify, collect, and review Copilot in Fabric interactions as part of case workflows, supporting defensible investigations.
  - Communication Compliance helps detect potential policy violations or risky behavior in administrator interactions, enabling proactive monitoring and remediation for Copilot in Fabric.
  - Data Lifecycle Management allows teams to automate the retention, deletion, and classification of Copilot in Fabric data, reducing storage costs and minimizing risk from outdated or unnecessary information.
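As an illustration of the Purview Audit point above, the sketch below starts an audit-log query through the Microsoft Graph Audit Log Query API. Treat it as an assumption-laden sketch rather than a supported recipe: the permission name, the record-type value, and the exact property names should be verified against the current Graph documentation for your workload.

```python
# Hedged sketch: start an asynchronous audit-log query for Copilot interaction
# records via Microsoft Graph. The record-type value and property names are
# assumptions; verify them against current Graph documentation.
import requests

TOKEN = "<graph-access-token>"  # assumed permission: AuditLogsQuery.Read.All

query = {
    "displayName": "Copilot interactions",
    "filterStartDateTime": "2025-01-01T00:00:00Z",
    "filterEndDateTime": "2025-01-31T00:00:00Z",
    "recordTypeFilters": ["copilotInteraction"],  # assumed record type
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/security/auditLog/queries",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=query,
)
resp.raise_for_status()
# The query runs asynchronously; poll its id until the status is 'succeeded',
# then page through its records collection.
print("Query id:", resp.json()["id"])
```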
Steps to Enable the Integration

To use DSPM for AI from the Microsoft Purview portal, you must meet the prerequisites, including activating Purview Audit, which requires the Entra Compliance Administrator or Entra Global Administrator role. More details on the DSPM prerequisites can be found here: Considerations for deploying Microsoft Purview Data Security Posture Management (DSPM) for AI | Microsoft Learn

To enable Purview DSPM for AI for Copilot for Power BI:

Step 1: Enable DSPM for AI policies
- Navigate to Microsoft Purview DSPM for AI.
- Enable the one-click policy "DSPM for AI – Capture interactions for Copilot experiences".
- Optionally enable additional policies: "Detect risky AI usage" and "Detect unethical behavior in AI apps". These policies can be configured in the Microsoft Purview DSPM for AI portal and tailored to your organization's risk profile.

Step 2: Monitor and act
- Use DSPM for AI Reports and Activity Explorer to monitor AI interactions.
- Apply IRM, DLM, CC, and eDiscovery actions as needed.

Purview Roles and Permissions Needed by Users

To manage and operate DSPM for AI effectively, assign the following roles:

- Purview Compliance Administrator: full access to configure policies and DSPM for AI setup.
- Purview Security Reader: view reports, dashboards, policies, and AI activity.
- Content Explorer Content Viewer: additional permission to view the actual prompts and responses, on top of the permissions above.

More details on Purview DSPM for AI roles and permissions can be found here: Permissions for Microsoft Purview Data Security Posture Management for AI | Microsoft Learn

Purview Costs

Microsoft Purview now offers a combination of entitlement-based (per-user-per-month) and pay-as-you-go (PAYG) pricing models. The PAYG model applies to a broader set of Purview capabilities, including Insider Risk Management, Communication Compliance, eDiscovery, and other data security and governance solutions, based on Copilot for Power BI usage volume or complexity. Purview Audit logging of Copilot for Power BI activity remains included at no additional cost as part of Microsoft 365 E5 licensing. This flexible pricing structure ensures that organizations only pay for what they use as data flows through AI models, networks, and applications. For further details, please refer to this blog: New Purview pricing options for protecting AI apps and agents | Microsoft Community Hub

Conclusion

Microsoft Purview DSPM for AI is a game-changer for organizations looking to adopt AI responsibly. By integrating with Copilot in Fabric, it provides a comprehensive framework to discover, protect, and govern AI interactions, ensuring compliance, reducing risk, and enabling secure innovation. Whether you're a Fabric admin, a compliance admin, or a security admin, enabling this integration is a strategic step toward building a secure, AI-ready enterprise.

Additional resources

- Use Microsoft Purview to manage data security & compliance for Microsoft Copilot in Fabric | Microsoft Learn
- How to deploy Microsoft Purview DSPM for AI to secure your AI apps
- Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview Data Security Posture Management (DSPM) for AI | Microsoft Learn
- Learn about Microsoft Purview billing models | Microsoft Learn

Graph RAG for Security: Insights from a Microsoft Intern
As a software engineering intern at Microsoft Security, I had the exciting opportunity to explore how Graph Retrieval-Augmented Generation (Graph RAG) can enhance data security investigations. This blog post shares my learning journey and insights from working with this evolving technology.