Securing AI
Secure and govern AI apps and agents with Microsoft Purview
The Microsoft Purview family is here to help you secure and govern data across third-party IaaS and SaaS and multi-platform data environments, while helping you meet the compliance requirements you may be subject to. Purview brings simplicity with a comprehensive set of solutions built on a platform of shared capabilities that helps keep your most important asset, data, safe. With the introduction of AI technology, Purview has also expanded its data coverage to include discovering, protecting, and governing the interactions of AI apps and agents, such as Microsoft Copilots like Microsoft 365 Copilot and Security Copilot, enterprise-built AI apps like ChatGPT Enterprise, and consumer AI apps like DeepSeek accessed through the browser. To help you view and investigate interactions with all of those AI apps, and to create and manage policies to secure and govern them in one centralized place, we have launched Microsoft Purview Data Security Posture Management (DSPM) for AI. You can learn more about DSPM for AI, with short video walkthroughs, here: Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn

Purview capabilities for AI apps and agents

To understand the current set of capabilities within Purview to discover, protect, and govern various AI apps and agents, please refer to our Learn doc, which includes a quick reference guide for the capabilities available today: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn. Note that DLP for Copilot and sensitivity label adherence are currently designed to protect content in Microsoft 365. As a result, Security Copilot and Copilot in Fabric, along with Copilot Studio custom agents that do not use Microsoft 365 as a content source, do not have these features available. Please see the list of AI sites supported by Microsoft Purview DSPM for AI here.

Conclusion

Microsoft Purview can help you discover, protect, and govern the prompts and responses from AI applications in Microsoft Copilot experiences, enterprise AI apps, and other AI apps through its data security and data compliance solutions, while allowing you to view, investigate, and manage interactions in one centralized place in DSPM for AI.
Follow-up reading

Check out the deployment guides for DSPM for AI:
- How to deploy DSPM for AI - https://aka.ms/DSPMforAI/deploy
- How to use DSPM for AI data risk assessment to address oversharing - https://aka.ms/dspmforai/oversharing
- Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing

Explore the Purview SDK:
- Microsoft Purview SDK Public Preview | Microsoft Community Hub (blog)
- Microsoft Purview documentation - purview-sdk | Microsoft Learn
- Build secure and compliant AI applications with Microsoft Purview (video)

References for DSPM for AI:
- Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
- Block Users From Sharing Sensitive Information to Unmanaged AI Apps Via Edge on Managed Devices (preview) | Microsoft Learn, as part of Scenario 7 of Create and deploy a data loss prevention policy | Microsoft Learn
- Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
- Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
- Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
- Downloadable whitepaper: Data Security for AI Adoption | Microsoft

Explore the roadmap for DSPM for AI:
- Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365

Understanding and mitigating security risks in MCP implementations
Introducing any new technology can introduce new security challenges or exacerbate existing security risks. In this blog post, we look at some of the security risks that could be introduced to your environment when using Model Context Protocol (MCP), and the controls you can put in place to mitigate them.

MCP is a framework that enables seamless integration between LLM applications and various tools and data sources. MCP defines:
- A standardized way for AI models to request external actions through a consistent API
- Structured formats for how data should be passed to and from AI systems
- Protocols for how AI requests are processed, executed, and returned

MCP allows different AI systems to use a common set of tools and patterns, ensuring consistent behavior when AI models interact with external systems.

MCP architecture

MCP follows a client-server architecture that allows AI models to interact with external tools efficiently. Here's how it works:
- MCP Host – The AI model (e.g., Azure OpenAI GPT) requesting data or actions.
- MCP Client – An intermediary service that forwards the AI model's requests to MCP servers.
- MCP Server – Lightweight applications that expose specific capabilities (APIs, databases, files, etc.).
- Data Sources – Various backend systems, including local storage, cloud databases, and external APIs.
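To make the server role concrete, here is a minimal sketch of an MCP server built with the FastMCP helper from the official MCP Python SDK. The sales-data scenario and tool name are illustrative assumptions, not part of the protocol:

```python
# Minimal MCP server sketch using the official MCP Python SDK (pip install mcp).
# The "sales" scenario and the tool below are illustrative assumptions.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sales-data")  # server name advertised to MCP clients

@mcp.tool()
def get_regional_sales(region: str) -> str:
    """Return a sales summary for the given region."""
    # In a real server this would query a scoped, least-privilege data source.
    return f"Sales summary for {region}: ..."

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP host/client can discover and call it.
    mcp.run()
```

An MCP host (for example, an LLM application) discovers the get_regional_sales tool through its MCP client and calls it on the user's behalf; the rest of this post is about constraining what such a tool is allowed to do.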
MCP security controls

Any system that has access to important resources has implied security challenges. Security challenges can generally be addressed through correct application of fundamental security controls and concepts. Because MCP is so newly defined, the specification is changing rapidly as the protocol evolves. Eventually the security controls within it will mature, enabling better integration with enterprise and established security architectures and best practices. Research published in the Microsoft Digital Defense Report states that 98% of reported breaches would be prevented by robust security hygiene, and the best protection against any kind of breach is to get your baseline security hygiene, secure coding best practices, and supply chain security right – those tried and tested practices that we already know about still make the most impact in reducing security risk. Let's look at some of the ways that you can start to address security risks when adopting MCP.

MCP server authentication (if your MCP implementation was before 26th April 2025)

Problem statement: The original MCP specification assumed that developers would write their own authentication server. This requires knowledge of OAuth and related security constraints. MCP servers acted as OAuth 2.0 Authorization Servers, managing the required user authentication directly rather than delegating it to an external service such as Microsoft Entra ID. As of 26 April 2025, an update to the MCP specification allows MCP servers to delegate user authentication to an external service.

Risks:
- Misconfigured authorization logic in the MCP server can lead to sensitive data exposure and incorrectly applied access controls.
- OAuth token theft on the local MCP server. If stolen, the token can be used to impersonate the MCP server and access resources and data from the service that the OAuth token is for.

Mitigating controls:
- Thoroughly review your MCP server authorization logic; here are some posts discussing this in more detail: Azure API Management Your Auth Gateway For MCP Servers | Microsoft Community Hub and Using Microsoft Entra ID To Authenticate With MCP Servers Via Sessions · Den Delimarsky
- Implement best practices for token validation and lifetime
- Use secure token storage and encrypt tokens
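As an illustration of the token validation point above, the sketch below checks a bearer token's signature, issuer, audience, and expiry before an MCP server acts on a request. It uses the PyJWT library; the tenant ID, audience, and JWKS URL are placeholder assumptions you would replace with your identity provider's values:

```python
# Hedged sketch: validate an inbound OAuth access token before serving a request.
# Issuer, audience, and JWKS URL are placeholders for your identity provider.
import jwt
from jwt import PyJWKClient

ISSUER = "https://login.microsoftonline.com/<tenant-id>/v2.0"              # placeholder
AUDIENCE = "api://my-mcp-server"                                           # placeholder
JWKS_URL = "https://login.microsoftonline.com/<tenant-id>/discovery/v2.0/keys"

jwks_client = PyJWKClient(JWKS_URL)

def validate_token(token: str) -> dict:
    """Return the token's claims if valid; raise a jwt exception otherwise."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    # decode() verifies the signature, expiry (exp), issuer, and audience in one call.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )
```

Rejecting requests whose tokens fail any of these checks, and keeping token lifetimes short, limits the window in which a stolen token is useful.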
Excessive permissions for MCP servers

Problem statement: MCP servers may have been granted excessive permissions to the service/resource they are accessing. For example, an MCP server that is part of an AI sales application connecting to an enterprise data store should have access scoped to the sales data and not be allowed to access all the files in the store. Referencing back to the principle of least privilege (one of the oldest security principles), no resource should have permissions in excess of what is required for it to execute the tasks it was intended for. AI presents an increased challenge in this space because, to keep it flexible, it can be difficult to define the exact permissions required.

Risks: Granting excessive permissions can allow exfiltration or amendment of data that the MCP server was not intended to be able to access. This could also be a privacy issue if the data is personally identifiable information (PII).

Mitigating controls: Clearly define the permissions that the MCP server has to access the resource/service it connects to. These permissions should be the minimum required for the MCP server to access the tool or data it is connecting to.

Indirect prompt injection attacks

Problem statement: Researchers have shown that the Model Context Protocol (MCP) is vulnerable to a subset of indirect prompt injection attacks known as tool poisoning attacks. Tool poisoning is a scenario where an attacker embeds malicious instructions within the descriptions of MCP tools. These instructions are invisible to users but can be interpreted by the AI model and its underlying systems, leading to unintended actions that could ultimately lead to harmful outcomes.

Risks: Unintended AI actions present a variety of security risks that include data exfiltration and privacy breaches.

Mitigating controls:
- Implement AI prompt shields: in Azure AI Foundry, you can follow these steps to implement AI prompt shields.
- Implement robust supply chain security: you can read more about how Microsoft implements supply chain security internally here.

Established security best practices that will uplift your MCP implementation's security posture

Any MCP implementation inherits the existing security posture of your organization's environment that it is built upon, so when considering the security of MCP as a component of your overall AI systems, it is recommended that you look at uplifting your overall existing security posture. The following established security controls are especially pertinent:
- Secure coding best practices in your AI application – protect against the OWASP Top 10 and the OWASP Top 10 for LLMs, use secure vaults for secrets and tokens, implement end-to-end secure communications between all application components, etc.
- Server hardening – use MFA where possible, keep patching up to date, integrate the server with a third-party identity provider for access, etc.
- Keep devices, infrastructure, and applications up to date with patches
- Security monitoring – implement logging and monitoring of the AI application (including the MCP client/servers) and send those logs to a central SIEM for detection of anomalous activities
- Zero trust architecture – isolate components via network and identity controls in a logical manner to minimize lateral movement if an AI application were compromised

Conclusion

MCP is a promising development in the AI space that enables rich data and context access. As developers embrace this new approach to integrating their organization's APIs and connectors into LLMs, they need to be aware of the security risks and how to implement controls to reduce those risks. There are mitigating security controls that can be put in place to reduce the risks inherent in the current specification, but as the protocol develops, expect that some of the risks will reduce or disappear entirely. We encourage you to contribute to and suggest security-related MCP RFCs to make this protocol even better!

With thanks to OrinThomas, dasithwijes, dendeli and Peter Marcu for their inputs and collaboration on this post.

How to deploy Microsoft Purview DSPM for AI to secure your AI apps
Microsoft Purview Data Security Posture Management (DSPM) for AI is designed to enhance data security for the following AI applications:
- Microsoft Copilot experiences, including Microsoft 365 Copilot.
- Enterprise AI apps, including ChatGPT Enterprise integration.
- Other AI apps, including all other AI applications, like ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini, accessed through the browser.

In this blog, we will dive into the different policies and reporting we have to discover, protect, and govern these three types of AI applications.

Prerequisites

Please refer to the prerequisites for DSPM for AI in the Microsoft Learn docs.

Log in to the Purview portal

To begin, log into the Microsoft Purview portal with your admin credentials: in the Microsoft Purview portal, go to the Home page and find DSPM for AI under Solutions.

1. Securing Microsoft 365 Copilot

Be sure to check out our blog on how to use the DSPM for AI data assessment report to help you address oversharing concerns when you deploy Microsoft 365 Copilot.

Discover potential data security risks in Microsoft 365 Copilot interactions

In the Overview tab of DSPM for AI, start with the tasks in “Get Started” and activate Purview Audit, if you have not yet activated it in your tenant, to get insights into user interactions with Microsoft Copilot experiences. In the Recommendations tab, review the recommendations that are under “Not Started” and create the following data discovery policy by clicking into it:
- Detect risky interactions in AI apps - This public preview Purview Insider Risk Management policy helps calculate user risk by detecting risky prompts and responses in Microsoft 365 Copilot experiences. Click here to learn more about the risky AI usage policy.

With the policies to discover sensitive information in Microsoft Copilot experiences in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter to Microsoft Copilot experiences, and review the following reports:
- Total interactions over time (Microsoft Copilot)
- Sensitive interactions per AI app
- Top unethical AI interactions
- Top sensitivity labels referenced in Microsoft 365 Copilot
- Insider risk severity
- Insider risk severity per AI app
- Potential risky AI usage

Protect sensitive data in Microsoft 365 Copilot interactions

From the Reports tab, click on “View details” for each of the report graphs to view detailed activities in the Activity Explorer. Using the available filters, filter the results to view activities from Microsoft Copilot experiences based on Activity type, AI app category, App type, Scope (which supports administrative units for DSPM for AI), and more. Then drill down into each activity to view details, including the capability to view prompts and responses with the right permissions. To protect the sensitive data in interactions for Microsoft 365 Copilot, review the Not Started policies in the Recommendations tab and create these policies:
- Information Protection Policy for Sensitivity Labels - This option creates default sensitivity labels and sensitivity label policies. If you've already configured sensitivity labels and their policies, this configuration is skipped.
- Protect sensitive data referenced in Microsoft 365 Copilot - This guides you through the process of creating a Purview Data Loss Prevention (DLP) policy to restrict the processing of content with specific sensitivity labels in Copilot interactions. Click here to learn more about Data Loss Prevention for Microsoft 365 Copilot.
- Protect sensitive data referenced in Copilot responses - Sensitivity labels help protect files by controlling user access to data. Microsoft 365 Copilot honors sensitivity labels on files and only shows users files they already have access to in prompts and responses. Use Data assessments to identify potential oversharing risks, including unlabeled files. Stay tuned for an upcoming blog post on using DSPM for AI data assessments!
- Use Copilot to improve your data security posture - Data Security Posture Management combines deep insights with Security Copilot capabilities to help you identify and address security risks in your org.

Once you have created policies from the Recommendations tab, you can go to the Policies tab to review and manage all the policies you have created across your organization to discover and safeguard AI activity in one centralized place, as well as edit the policies or investigate alerts associated with those policies in the solution. Note that additional policies not created from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies that secure and govern AI apps.

Govern the prompts and responses in Microsoft 365 Copilot interactions

Understand and comply with AI regulations by selecting “Guided assistance to AI regulations” in the Recommendations tab and walking through the “Actions to take”. From the Recommendations tab, create a Control unethical behavior in AI Purview Communication Compliance policy to detect sensitive information in prompts and responses and address potentially unethical behavior in Microsoft Copilot experiences and ChatGPT Enterprise. This policy covers all users and groups in your organization. To retain and/or delete Microsoft 365 Copilot prompts and responses, set up a Data Lifecycle policy by navigating to Microsoft Purview Data Lifecycle Management and finding Retention Policies under the Policies header. You can also preserve, collect, analyze, review, and export Microsoft 365 Copilot interactions by creating an eDiscovery case.
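Because audited Copilot interactions land in the unified audit log, they can also be retrieved programmatically. Below is a hedged sketch using the Microsoft Graph audit log query API; the endpoint shape, the copilotInteraction record type, and the record fields are assumptions based on the documented API (some tenants may need the beta endpoint), and token acquisition is assumed to happen elsewhere:

```python
# Hedged sketch: pull Copilot interaction events from the unified audit log via
# Microsoft Graph's audit log query API. Endpoint version, filters, and record
# fields are assumptions to verify against current Graph documentation.
import time
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder token

# 1) Create an asynchronous audit log query scoped to Copilot interactions.
query = requests.post(
    f"{GRAPH}/security/auditLog/queries",
    headers=HEADERS,
    json={
        "displayName": "Copilot interactions - one week",
        "filterStartDateTime": "2025-01-01T00:00:00Z",
        "filterEndDateTime": "2025-01-08T00:00:00Z",
        "recordTypeFilters": ["copilotInteraction"],  # assumed enum value
    },
).json()

# 2) Poll until the query finishes, then page through the matching records.
query_id = query["id"]
while requests.get(f"{GRAPH}/security/auditLog/queries/{query_id}",
                   headers=HEADERS).json()["status"] != "succeeded":
    time.sleep(30)

records = requests.get(
    f"{GRAPH}/security/auditLog/queries/{query_id}/records", headers=HEADERS
).json()
for record in records.get("value", []):
    print(record.get("userPrincipalName"), record.get("operation"))
```

This is the same data surfaced in DSPM for AI reports, so a programmatic pull like this is mainly useful for feeding downstream tooling.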
2. Securing Enterprise AI apps

Please refer to this blog, Unlocking the Power of Microsoft Purview for ChatGPT Enterprise | Microsoft Community Hub, for detailed information on how to integrate with ChatGPT Enterprise and the Purview solutions it currently supports through Purview Communication Compliance, Insider Risk Management, eDiscovery, and Data Lifecycle Management. You can also learn more about the feature through our public documentation.

3. Securing other AI apps

Microsoft Purview DSPM for AI currently supports the following list of AI sites. Be sure to also check out our blog on the new Microsoft Purview data security controls for the browser and network to secure other AI apps.

Discover potential data security risks in prompts sent to other AI apps

In the Overview tab of DSPM for AI, go through these steps in “Get Started” to discover potential data security risks in other AI interactions:

Install the Microsoft Purview browser extension
- For Windows users: The Purview extension is not necessary for the enforcement of data loss prevention on the Edge browser, but it is required for Chrome to detect sensitive info pasted or uploaded to AI sites. The extension is also required to detect browsing to other AI sites through an Insider Risk Management policy for both the Edge and Chrome browsers. Therefore, the Purview browser extension is required for both Edge and Chrome on Windows.
- For macOS users: The Purview extension is not necessary for the enforcement of data loss prevention on macOS devices, and browsing to other AI sites through Purview Insider Risk Management is currently not supported on macOS; therefore, no Purview browser extension is required for macOS.

Extend your insights for data discovery – this one-click collection policy will set up three separate Purview detection policies for other AI apps:
- Detect sensitive info shared in AI prompts in Edge – a Purview collection policy that detects prompts sent to ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini in Microsoft Edge and discovers sensitive information shared in prompt contents. This policy covers all users and groups in your organization in audit mode only.
- Detect when users visit AI sites – a Purview Insider Risk Management policy that detects when users use a browser to visit AI sites.
- Detect sensitive info pasted or uploaded to AI sites – a Purview Endpoint Data Loss Prevention (eDLP) policy that discovers sensitive content pasted or uploaded in Microsoft Edge, Chrome, and Firefox to AI sites. This policy covers all users and groups in your org in audit mode only.

With the policies to discover sensitive information in other AI apps in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter by Other AI apps, and review the following reports:
- Total interactions over time (other AI apps)
- Total visits (other AI apps)
- Sensitive interactions per AI app
- Insider risk severity
- Insider risk severity per AI app
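The detection policies above hinge on classifying prompt text against sensitive information types. To get a feel for what that classification step looks like, here is an illustrative sketch using Microsoft's open-source Presidio analyzer, which is a separate tool, not part of Purview, and is shown only to make the concept concrete (it assumes the presidio-analyzer package and a spaCy English model are installed):

```python
# Illustrative only: detect sensitive info in a prompt with Microsoft Presidio.
# Purview uses its own sensitive information types; this just shows the idea.
# Setup assumption: pip install presidio-analyzer, plus a spaCy en model.
from presidio_analyzer import AnalyzerEngine

analyzer = AnalyzerEngine()
prompt = "Summarize this: card 4111 1111 1111 1111, contact jane@contoso.com"

results = analyzer.analyze(
    text=prompt,
    entities=["CREDIT_CARD", "EMAIL_ADDRESS"],  # a small subset of detectors
    language="en",
)
for finding in results:
    # Each finding reports the entity type, character span, and confidence.
    print(finding.entity_type, prompt[finding.start:finding.end], finding.score)
```

In Purview, this matching happens inside the DLP and collection policies themselves; the sketch simply shows why audit-mode runs are useful for tuning which sensitive information types fire before turning on enforcement.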
Protect sensitive info shared with other AI apps

From the Reports tab, click on “View details” for each of the report graphs to view detailed activities in the Activity Explorer. Using the available filters, filter the results to view activities based on Activity type, AI app category, App type, Scope (which supports administrative units for DSPM for AI), and more. To protect the sensitive data in interactions for other AI apps, review the Not Started policies in the Recommendations tab and create these policies:

Fortify your data security – this will create three policies to manage your data security risks with other AI apps:
1) Block elevated risk users from pasting or uploading sensitive info on AI sites – this will create a Microsoft Purview endpoint data loss prevention (eDLP) policy that uses adaptive protection to give a warn-with-override to elevated risk users attempting to paste or upload sensitive information to other AI apps in Edge, Chrome, and Firefox. This policy covers all users and groups in your org in test mode. Learn more about adaptive protection in data loss prevention.
2) Block elevated risk users from submitting prompts to AI apps in Microsoft Edge – this will create a Microsoft Purview browser data loss prevention (DLP) policy that uses adaptive protection to block elevated, moderate, and minor risk users attempting to put information into other AI apps using Microsoft Edge. This integration is built into Microsoft Edge. Learn more about adaptive protection in data loss prevention.
3) Block sensitive info from being sent to AI apps in Microsoft Edge – this will create a Microsoft Purview browser data loss prevention (DLP) policy that detects a selection of common sensitive information types inline and blocks prompts from being sent to AI apps while using Microsoft Edge. This integration is built into Microsoft Edge.

Once you have created policies from the Recommendations tab, you can go to the Policies tab to review and manage all the policies you have created across your organization to discover and safeguard AI activity in one centralized place, as well as edit the policies or investigate alerts associated with those policies in the solution. Note that additional policies not created from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies that secure and govern AI apps.

Conclusion

Microsoft Purview DSPM for AI can help you discover, protect, and govern the interactions from AI applications in Microsoft Copilot experiences, enterprise AI apps, and other AI apps. We recommend that you review the Reports in DSPM for AI routinely to discover any new interactions that may be of concern, and that you create policies to secure and govern those interactions as necessary. We also recommend that you utilize the Activity Explorer in DSPM for AI to review events generated as users interact with AI, including the capability to view prompts and responses with the right permissions. We will continue to update this blog with new features that become available in DSPM for AI, so be sure to bookmark this page!

Follow-up reading
- Check out this blog on the details of each recommended policy in DSPM for AI: Microsoft Purview – Data Security Posture Management (DSPM) for AI | Microsoft Community Hub
- Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing
- Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
- Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
- Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
- Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
- Downloadable whitepaper: Data Security for AI Adoption | Microsoft
- Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365

Graph RAG for Security: Insights from a Microsoft Intern
As a software engineering intern at Microsoft Security, I had the exciting opportunity to explore how Graph Retrieval-Augmented Generation (Graph RAG) can enhance data security investigations. This blog post shares my learning journey and insights from working with this evolving technology.

Microsoft Copilot Studio vs. Microsoft Foundry: Building AI Agents and Apps
Microsoft Copilot Studio and Microsoft Foundry (often referred to as Azure AI Foundry) are two key platforms in Microsoft's AI ecosystem that allow organizations to create custom AI agents and AI-enabled applications. While both share the goal of enabling businesses to build intelligent, task-oriented “copilot” solutions, they are designed for different audiences and use cases. To help you decide which path suits your organization, this blog provides an educational comparison of Copilot Studio vs. Azure AI Foundry, focusing on their unique strengths, feature parity and differences, and key criteria like control requirements, preferences, and integration needs. By understanding these factors, technical decision-makers, developers, IT admins, and business leaders can confidently select the right platform, or even a hybrid approach, for their AI agent projects.

Copilot Studio and Azure AI Foundry: At a Glance

Copilot Studio is designed for business teams, pro-makers, and IT admins who want a managed, low-code SaaS environment with plug-and-play integrations. Microsoft Foundry is built for professional developers who need fine-grained control, customization, and integration into their existing application and cloud infrastructure. And the good news? Organizations often use both, and they work together beautifully.

Feature Parity and Key Differences

While both platforms can achieve similar outcomes, they do so via different means. Here is a high-level comparison of Copilot Studio (SaaS, low-code) and Microsoft (Azure) AI Foundry (PaaS, pro-code), factor by factor:

Target Users & Skills
- Copilot Studio: Business domain experts, IT pros, and “pro-makers” comfortable with low-code tools. Little to no coding is required for building agents. Ideal for quick solutions within business units.
- Foundry: Professional developers, software engineers, and data scientists with coding/DevOps expertise. Deep programming skills are needed for custom code, DevOps, and advanced AI scenarios. Suited for complex, large-scale AI projects.

Platform Model
- Copilot Studio: Software-as-a-Service, fully managed by Microsoft. Agents and tools are built and run in Microsoft's cloud (M365/Copilot service) with no infrastructure to manage. Simplified provisioning, automatic updates, and built-in compliance with the Microsoft 365 environment.
- Foundry: Platform-as-a-Service that runs in your Azure subscription. You deploy and manage the agent's infrastructure (e.g., Azure compute, networking, storage) in your cloud. Offers full control over environment, updates, and data residency.

Integration & Data
- Copilot Studio: Out-of-box connectors and data integrations for Microsoft 365 (SharePoint, Outlook, Teams) and third-party SaaS via Power Platform connectors. Easy integration with business systems without coding, ideal for leveraging existing M365 and Power Platform assets. Data remains in Microsoft's cloud (with M365 compliance and Purview governance) by default.
- Foundry: Deep custom integration with any system or data source via code. Natively works with Azure services (Azure SQL, Cosmos DB, Functions, Kubernetes, Service Bus, etc.) and can connect to on-prem or multi-cloud resources via custom connectors. Suitable when data/code must stay in your network or cloud for compliance or performance reasons.

Development Experience
- Copilot Studio: Low-code, UI-driven development. Build agents with visual designers and prompt editors. No-code orchestration through Topics (conversational flows) and Agent Flows (Power Automate).
A rich library of pre-built components (tools/capabilities) is auto-managed and continuously improved by Microsoft (e.g., Copilot connectors for M365, built-in tool evaluations). Emphasizes speed and simplicity over granular control.
- Foundry: Code-first development. Offers a web-based studio plus extensive SDKs, CLI, and VS Code integration for coding agents and custom tools. Supports full DevOps: you can use GitHub/Azure DevOps for CI/CD, custom testing, and version control, and integrate with your existing software development toolchain. Provides maximum flexibility to define bespoke logic, but requires more time and skill, sacrificing immediate simplicity for long-term extensibility.

Control & Governance
- Copilot Studio: Managed environment with minimal configuration needed. Governance is handled via Microsoft's standard M365 admin centers (e.g., Admin Center, Entra ID, Microsoft Purview, Defender) for identity, access, auditing, and compliance across copilots. Updates and performance optimizations (e.g., tool improvements) are applied automatically by Microsoft. There is limited need (or ability) to tweak infrastructure or model behavior under the hood, which fits organizations that want Microsoft to manage the heavy lifting.
- Foundry: Microsoft Foundry provides a pro-code, Azure-native environment for teams that need full control over the agent runtime, integrations, and development workflow. Full-stack control: you manage how and where agents run. Customizable governance using Azure's security and monitoring tools: Azure AD (identity/RBAC), Key Vault, network security (private endpoints, VNETs), plus integrated logging and telemetry via Azure Monitor, App Insights, etc. Foundry includes a developer control plane for observing, debugging, and evaluating agents during development and runtime. This is ideal for organizations requiring fine-grained control, custom compliance configurations, and rigorous LLMOps practices.

Deployment Channels
- Copilot Studio: One-click publishing to Microsoft 365 experiences (Teams, Outlook), web chat, SharePoint, email, and more, thanks to native support for multiple channels in Copilot Studio. Everything runs in the cloud; you don't worry about hosting the bot.
- Foundry: Flexible deployment options. Foundry agents can be exposed via APIs or the Activity Protocol and integrated into apps or custom channels using the M365 Agents SDK. Foundry also supports deploying agents as web apps, containers, Azure Functions, or even private endpoints for internal use, giving teams freedom to run agents wherever needed (with more setup).

Control and customization

Copilot Studio trades fine-grained control for simplicity and speed. It abstracts away infrastructure and handles many optimizations for you, which accelerates development but limits how deeply you can tweak the agent's behavior. Azure Foundry, by contrast, gives you extensive control over the agent's architecture, tools, and environment, at the cost of more complex setup and effort. Consider your project's needs: does it demand custom code, specialized model tuning, or on-premises data? If yes, Foundry provides the necessary flexibility.
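To give a feel for the pro-code side of this trade-off, here is a rough sketch of defining and running a Foundry agent with the azure-ai-projects Python SDK. The SDK has changed across preview releases, so treat the endpoint, model deployment name, and exact method names as assumptions to verify against the current documentation:

```python
# Hedged sketch of the pro-code path: create and run an agent in Microsoft
# (Azure AI) Foundry via the azure-ai-projects SDK. The endpoint, the model
# deployment name, and method names are assumptions; preview SDKs change often.
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project = AIProjectClient(
    endpoint="https://<your-resource>.services.ai.azure.com/api/projects/<project>",
    credential=DefaultAzureCredential(),  # identity/RBAC rather than API keys
)

# Define the agent in code: model, name, and instructions (tools could be added).
agent = project.agents.create_agent(
    model="gpt-4o",  # placeholder model deployment name
    name="sales-assistant",
    instructions="Answer questions using only approved sales data.",
)

# Run a single exchange on a new conversation thread.
thread = project.agents.threads.create()
project.agents.messages.create(
    thread_id=thread.id, role="user", content="Summarize Q3 pipeline risks."
)
run = project.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent.id)
for message in project.agents.messages.list(thread_id=thread.id):
    print(message.role, message.content)
```

In Copilot Studio, the equivalent agent would be assembled in the visual designer instead, with Microsoft managing the runtime on your behalf.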
Common Scenarios

Copilot Studio:
· HR or Finance teams building departmental AI assistants
· Sales operations automating workflows and knowledge retrieval
· Fusion teams starting quickly without developer-heavy resources
Copilot Studio gives teams a powerful way to build agents quickly without needing to set up compute, networking, identity, or a DevOps pipeline.

Microsoft Foundry:
· Embedding agents into production SaaS apps
· Teams using professional developer frameworks (Semantic Kernel, LangChain, AutoGen, etc.)
· Building multi-agent architectures with complex toolchains
· Integration with existing app code or multi-cloud architecture is required
· Full observability, versioning, instrumentation, or custom DevOps is needed
Foundry is ideal for software engineering teams who need configurability, extensibility, and industrial-grade DevOps.

Benefits of Combined Use: Embracing a Hybrid Approach

One important insight is that Copilot Studio and Foundry are not mutually exclusive. In fact, Microsoft designed them to be interoperable so that organizations can use both in tandem for different parts of a solution. This is especially relevant for large projects or “fusion teams” that include both low-code creators and pro developers. The pattern many enterprises land on:
· Developers build specialized tools and agents in Foundry
· Makers assemble the user-facing workflow experience in Copilot Studio
· Agents collaborate via agent-to-agent patterns (including A2A, where applicable)

Using both platforms together unlocks the best of both worlds:
· Seamless user experience: Copilot Studio provides a polished, user-friendly interface for end users, while Azure AI Foundry handles complex backend logic and data processing.
· Advanced AI capabilities: Leverage Azure AI Foundry's extensive model library and orchestration features to build sophisticated agents that can reason, learn, and adapt.
· Scalability and flexibility: Azure AI Foundry's cloud-native architecture ensures scalability for high-demand scenarios, while Copilot Studio's low-code approach accelerates development cycles.

For customers who don't want to decide up front, Microsoft introduced a unified approach for scaling agent initiatives: the Microsoft Agent Pre-Purchase Plan (P3), part of the broader Agent Factory story, designed to reduce procurement friction across both platforms.

Security & Compliance using Microsoft Purview

Microsoft Copilot Studio: Microsoft Purview extends enterprise-grade security and compliance to agents built with Microsoft Copilot Studio by bringing AI interaction governance into the same control plane you use for the rest of Microsoft 365. With Purview, you can apply DSPM for AI insights, auditing, and data classification to Copilot Studio prompts and responses, and use familiar compliance capabilities like sensitivity labels, DLP, Insider Risk Management, Communication Compliance, eDiscovery, and Data Lifecycle Management to reduce oversharing risk and support investigations. For agents published to non-Microsoft channels, Purview management can require pay-as-you-go billing while still using the same Purview policies and reporting workflows teams already rely on.

Microsoft Foundry: Microsoft Purview integrates with Microsoft Foundry to help organizations secure and govern AI interactions (prompts, responses, and related metadata) using Microsoft's unified data security and compliance capabilities.
Once enabled through the Foundry Control Plane or through Microsoft Defender for Cloud in the Azure portal, Purview can provide DSPM for AI posture insights plus auditing, data classification, sensitivity labels, and enforcement-oriented controls like DLP, along with downstream compliance workflows such as Insider Risk Management, Communication Compliance, eDiscovery, and Data Lifecycle Management. This lets security and compliance teams apply consistent policies across AI apps and agents in Foundry, while gaining visibility and governance through the same Purview portal and reports used across the enterprise.

Conclusion

When it comes to Copilot Studio vs. Azure AI Foundry, there is no universally “best” choice; the ideal platform depends on your team's composition and project requirements. Copilot Studio excels at enabling functional business teams and IT pros to build AI assistants quickly in a managed, compliant environment with minimal coding. Azure AI Foundry shines for developer-centric projects that need maximal flexibility, custom code, and deep integration with enterprise systems. The key is to identify what level of control, speed, and skill your scenario calls for. Use both together to build end-to-end intelligent systems that combine ease of use with powerful backend intelligence. By thoughtfully aligning the platform to your team's strengths and needs, you can minimize friction and maximize momentum on your AI agent journey, delivering custom copilot solutions that are both quick to market and built for the long haul.

Resources to explore
- Copilot Studio Overview
- Microsoft Foundry
- Use Microsoft Purview to manage data security & compliance for Microsoft Copilot Studio
- Use Microsoft Purview to manage data security & compliance for Microsoft Foundry
- Optimize Microsoft Foundry and Copilot Credit costs with Microsoft Agent pre-purchase plan
- Accelerate Innovation with Microsoft Agent Factory

Securing Microsoft M365 Copilot and AI with Microsoft's Suite of Security Products - Part 1
Microsoft 365 Copilot and AI applications created in Azure AI Foundry are transforming productivity, but they also introduce new security challenges for businesses. Organizations embracing these AI capabilities must guard against risks such as data leaks, novel AI-driven threats (e.g., prompt injection attacks), and compliance violations. Microsoft offers a comprehensive suite of products to help secure and govern AI solutions. This multipart guide provides a detailed roadmap for using Microsoft's security services together to protect AI deployments and Copilot integrations in an enterprise environment.

Overview of Microsoft Security Solutions for AI and Copilot

Microsoft's security portfolio spans identity, devices, cloud apps, data, and threat management, all crucial for securing AI systems. Key products include Microsoft Entra (identity and access), Microsoft Defender XDR (a unified enterprise defense suite that natively coordinates detection, prevention, investigation, and response across endpoints, identities, email, and applications), Microsoft Purview (data security, compliance, and governance), Microsoft Sentinel (cloud-native SIEM/SOAR for monitoring and response), and Microsoft Intune (device management), among others. These solutions are designed to integrate, forming an AI-first, unified security platform that greatly reduces the complexity of implementing a cohesive Zero Trust strategy across your enterprise and AI ecosystem. The summary below covers the main product categories and their roles in securing AI applications and Copilot:

Identity and Access Management – Microsoft Entra and Entra Suite (Entra ID Protection, Entra Conditional Access, Entra Internet Access, Entra Private Access, Entra ID Governance): Verify and control access to AI systems. Enforce strong authentication and least privilege for users, admins, and AI service identities. Conditional Access policies (including new AI app controls) restrict who can use specific AI applications.

Endpoint & Device Security – Microsoft Defender for Endpoint, Microsoft Intune: Secure user devices that interact with AI. Defender for Endpoint provides EDR (Endpoint Detection & Response) to help block malware or exploits while also identifying devices that may be high risk. Intune helps ensure only managed, compliant devices can access corporate AI apps, aligning with a Zero Trust strategy.

Cloud & Application Security – Microsoft Defender for Cloud (CSPM/CWPP), Defender for Cloud Apps (CASB/SSPM), Azure Network Security (Azure Firewall, WAF): Protect AI infrastructure and cloud workloads (IaaS/SaaS). Defender for Cloud continuously assesses the security posture of AI services (VMs, containers, Azure OpenAI instances) and detects misconfigurations or vulnerabilities. It now provides AI security posture management across multi-cloud AI environments (Azure, AWS, Google) and even multiple model types. Defender for Cloud Apps monitors and controls SaaS AI app usage to combat “shadow AI” (unsanctioned AI tools). Azure Firewall and WAF guard AI APIs and web front-ends against network threats, with new Copilot-powered features to analyze traffic and logs.

Threat Detection & Response – Microsoft Defender XDR, Microsoft Sentinel (SIEM/SOAR), Microsoft Security Copilot: Detect and respond to threats.
Microsoft's Defender XDR suite provides a single pane of glass for security operations teams to detect, investigate, and respond to threats, correlating signals from endpoints, identities, cloud apps, and email. Microsoft Sentinel enhances these capabilities by aggregating and correlating signals from third-party, non-Microsoft products with Defender XDR data to alert on suspicious activities across the environment. Security Copilot (an AI assistant for SOC teams) further accelerates incident analysis and response using generative AI, helping defenders investigate incidents or automate threat hunting.

Data Security & Compliance – Microsoft Purview (Information Protection, Data Loss Prevention, Insider Risk, Compliance Manager, DSPM for AI), SharePoint Advanced Management: Protect sensitive data used or produced by AI. Purview enables classification and sensitivity labeling of data so that confidential information is handled properly by AI. Purview Data Loss Prevention (DLP) enforces policies to prevent sensitive data leaks; for example, new Purview DLP controls for Edge for Business can block users from typing or pasting sensitive data into generative AI apps like ChatGPT or Copilot Chat. Purview Insider Risk Management can detect anomalous data extraction via AI tools. Purview Compliance Manager and Audit help ensure AI usage complies with regulations (e.g., GDPR, HIPAA) and provide audit logs of AI interactions.

AI Application Safety – Azure AI Content Safety (content filtering), Responsible AI controls (Prompt flow, OpenAI policy): Ensure AI output and usage remain safe and within policy. Azure AI Content Safety provides AI-driven content filters and “prompt shields” to block malicious or inappropriate prompts and outputs in real time. Microsoft's Responsible AI framework and tools (such as evaluations in Azure AI Studio to simulate adversarial prompts) further help developers build AI systems that adhere to safety and ethical standards. Meanwhile, M365 Copilot has built-in safeguards: it respects all your existing Microsoft 365 security, privacy, and compliance controls by design!

How the Pieces Work Together

Imagine a user at a company using Microsoft 365 Copilot to query internal documents. Entra ID first ensures the user is who they claim to be (with MFA) and that their device is in a compliant state. When the user prompts Copilot, Copilot checks the user's permissions and will only retrieve data they are authorized to see. The prompt and the AI's generated answer are then checked against Microsoft Purview's DLP, Insider Risk, DSPM, and compliance policies; if the user's query or the response would expose, say, credit card numbers or other sensitive information, the system can block or redact it. Meanwhile, Defender XDR's extended detection and response capabilities are working in the background: Defender for Cloud Apps logs that the user accessed an approved third-party AI service, Sentinel correlates this with any unusual behavior (like data exfiltration after running the prompt), an alert is triggered, and the user is either blocked or, if allowed, forced to label and encrypt the data before sending it externally. In short, each security layer – identity, data, device, cloud, monitoring – plays an important part in securing this AI-driven scenario.
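As a hedged illustration of the monitoring layer in that scenario, the sketch below runs a KQL hunt against a Sentinel-enabled Log Analytics workspace using the azure-monitor-query Python SDK. The workspace ID is a placeholder, and the CloudAppEvents table and its columns assume the Defender XDR data connector is streaming into the workspace:

```python
# Hedged sketch: hunt for recent activity against AI apps from Sentinel's
# Log Analytics workspace. The workspace ID is a placeholder; CloudAppEvents
# and its columns assume the Defender XDR data connector is enabled.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

KQL = """
CloudAppEvents
| where Timestamp > ago(1d)
| where Application has_any ("ChatGPT", "Copilot", "Gemini")
| summarize Events = count() by AccountDisplayName, Application, ActionType
| order by Events desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<workspace-guid>",  # placeholder
    query=KQL,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```

In practice, a detection like this would be wrapped in a Sentinel analytics rule rather than run ad hoc, so its alerts flow into the same incident queue that Defender XDR populates.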
Stay tuned for Part 2 of this multi-part series

In the following articles, we break down how to configure and use the tools summarized in this article, starting with Identity and Access Management. We will also highlight best practices (like Microsoft's recommended Prepare -> Discover -> Protect -> Govern approach for AI security) and include recent product enhancements that assist in securing AI.