security copilot
22 Topics“Build Your Own” M365 Copilot DPIA templates for public sector and enterprise organizations
In April, Microsoft launched our “Build Your Own” Data Protection Impact Assessment templates for Office 365 as part of our commitment to helping our customers embrace new cutting-edge technologies while providing the information they need to continue to meet their compliance obligations. Today, we’re excited to expand that commitment into the era of AI, as we share our new “Build Your Own” Data Protection Impact Assessment templates for Microsoft’s AI-powered productivity service, Microsoft 365 Copilot. These “Build Your Own” DPIAs—for both public sector and enterprise customers—are customizable and illustrative template guides that Microsoft has produced, with references to our Product Terms, Data Protection Addendum (“DPA”), and Microsoft’s extensive documentation for M365 Copilot. They are designed to help public sector organizations systematically identify, assess, and address potential data protection risks, making it easier to evaluate compliance with the GDPR. As AI technologies rapidly evolve, and uses of that technology with it, we recognize that compliance tools like the “Build Your Own” DPIA will need to evolve, too. As such, we are committed to continually refining and improving the document, including based on customer feedback, with the goal of helping make our customers’ AI transformation compliance journey as friction-free as possible. Download the templates here: “Build Your Own” M365 Copilot Data Protection Impact Assessment for the Public Sector “Build Your Own” M365 Copilot Data Protection Impact Assessment for Enterprise CustomersRethinking Data Security and Governance in the Era of AI
The era of AI is reshaping industries, enabling unprecedented innovations, and presenting new opportunities for organizations worldwide. But as organizations accelerate AI adoption, many are focused on a growing concern: their current data security and governance practices are not effectively built for the fast-paced AI innovation and ever-evolving regulatory landscape. At Microsoft, we recognize the critical need for an integrated approach to address these risks. In our latest findings, Top 3 Challenges in Securing and Governing Data for the Era of AI, we uncovered critical gaps in how organizations manage data risk. The findings exemplify the current challenges: 91% of leaders are not prepared to manage risks posed by AI 1 and 85% feel unprepared to comply with AI regulations 2 . These gaps not only increase non-compliance but also put innovation at risk. Microsoft Purview has the tools to tackle these challenges head on, helping organizations move to an approach that protects data, meets compliance regulations, and enables trusted AI transformation. We invite you to take this opportunity to evaluate your current practices, platforms, and responsibilities, and to understand how to best secure and govern your organization for growing data risks in the era of AI. Platform fragmentation continues to weaken security outcomes Organizations often rely on fragmented tools across security, compliance, and data teams, leading to a lack of unified visibility and insufficient data hygiene. Our findings reveal the effects of fragmented platforms, leading to duplicated data, inconsistent classification, redundant alerts, and siloed investigations, which ultimately is causing data exposure incidents related to AI to be on the rise 3 . Microsoft Purview offers centralized visibility across your organization’s data estate. This allows teams to break down silos, streamline workflows, and mitigate data leakage and oversharing. With Microsoft Purview, capabilities like data health management and data security posture management are designed to enhance collaboration and deliver enriched insights across your organization to help further protect your data and mitigate risks faster. Microsoft Purview offers the following: Unified insights across your data estate, breaking down silos between security, compliance, and data teams. Microsoft Purview Data Security Posture Management (DSPM) for AI helps organizations gain unified visibility into GenAI usage across users, data, and apps to address the heightened risk of sensitive data exposure from AI. Built-in capabilities like classification, labeling, data loss prevention, and insider risk insights in one platform. In addition, newly launched solutions like Microsoft Purview Data Security Investigations accelerate investigations with AI-powered deep content analysis, which helps data security teams quickly identify and mitigate sensitive data and security risks within impacted data. Organizations like Kern County historically relied on many fragmented systems but adopted Microsoft Purview to unify their organization’s approach to data protection in preparation for increasing risks associated with deploying GenAI. “We have reduced risk exposure, [Microsoft] Purview helped us go from reaction to readiness. We are catching issues proactively instead of retroactively scrambling to contain them.” – Aaron Nance, Deputy Chief Information Security Officer, Kern County Evolving regulations require continuous compliance AI-driven innovation is creating a surge in regulations, resulting in over 200 daily updates across more than 900 regulatory agencies 4 , as highlighted in our research. Compliance has become increasingly difficult, with organizations struggling to avoid fines and comply with varying requirements across regions. To navigate these challenges effectively, security leaders’ responsibilities are expanding to include oversight across governance and compliance, including oversight of traditional data catalog and governance solutions led by the central data office. Leaders also cite the need for regulation and audit readiness. Microsoft Purview enables compliance and governance by: Streamlining compliance with Microsoft Purview Compliance Manager templates, step-by-step guidance, and insights for region and industry-specific regulations, including GDPR, HIPAA, and AI-specific regulation like the EU AI Act. Supporting legal matters such as forensic and internal investigations with audit trail records in Microsoft Purview eDiscovery and Audit. Activating and governing data for trustworthy analytics and AI with Microsoft Purview Unified Catalog, which enables visibility across your data estate and data confidence via data quality, data lineage, and curation capabilities for federated governance. Microsoft Purview’s suite of capabilities provides visibility and accountability, enabling security leaders to meet stringent compliance demands while advancing AI initiatives with confidence. Organizations need a unified approach to secure and govern data Organizations are calling for an integrated platform to address data security, governance, and compliance collectively. Our research shows that 95% of leaders agree that unifying teams and tools is a top priority 5 and 90% plan to adopt a unified solution to mitigate data related risks and maximize impact 6 . Integration isn't just about convenience, it’s about enabling innovation with trusted data protection. Microsoft Purview enables a shared responsibility model, allowing individual business units to own their data while giving central teams oversight and policy control. As organizations adopt a unified platform approach, our findings reveal the upside potential not only being reduced risk but also cost savings. With AI-powered copilots such as Security Copilot in Microsoft Purview, data protection tasks are simplified with natural-language guidance, especially for under resourced teams. Accelerating AI transformation with Microsoft Purview Microsoft Purview helps security, compliance, and governance teams navigate the complexities of AI innovation while implementing effective data protection and governance strategies. Microsoft partner EY highlights the results they are seeing: “We are seeing 25%–30% time savings when we build secure features using [Microsoft] Purview SDK. What was once fragmented is now centralized. With [Microsoft] Purview, everything comes together on one platform, giving a unified foundation to innovate and move forward with confidence.” – Prashant Garg, Partner of Data and AI, EY We invite you to explore how you can propel your organization toward a more secure future by reading the full research paper at https://aka.ms/SecureAndGovernPaper. Visit our website to learn more about Microsoft Purview. 1 Forbes, Only 9% Of Surveyed Companies Are Ready To Manage Risks Posed By AI, 2023 2 SAP LeanIX, AI Survey Results, 2024 3 Microsoft, Data Security Index Report, 2024 4 Forbes, Cost of Compliance, Thomson Reuters, 2021 5 Microsoft, Audience Research, 2024 6 Microsoft, Customer Requirements Research, 2024Enterprise-grade controls for AI apps and agents built with Azure AI Foundry and Copilot Studio
AI innovation is moving faster than ever, and more AI projects are moving beyond experimentation into deployment, to drive tangible business impact. As organizations accelerate innovation with custom AI applications and agents, new risks emerge across the software development lifecycle and AI stack related to data oversharing and leaks, new vulnerabilities and threats, and non-compliance with stringent regulatory requirements Through 2025, poisoning of software supply chains and infrastructure technology stacks will constitute more than 70% of malicious attacks against AI used in the enterprise 1 , highlighting potential threats that originate early in development. Today, the average cost of a data breach is $4.88 million, but when security issues are caught early in the development process, that number drops dramatically to just $80 per incident 2 . The message is very clear; security can’t be an afterthought anymore. It must be a team sport across the organization, embedded from the start and throughout the development lifecycle. That's why developers and security teams should align on processes and tools that bring security into every stage of the AI development lifecycle and give security practitioners visibility into and the ability to mitigate risks. To address these growing challenges and help customers secure and govern their AI workloads across development and security teams, we are: Enabling Azure AI Foundry and Microsoft Copilot Studio to provide best-in-class foundational capabilities to secure and govern AI workloads Deeply integrating and embedding industry-leading capabilities from Microsoft Purview, Microsoft Defender, and Microsoft Entra into Azure AI Foundry and Microsoft Copilot Studio This week, 3,000 developers are gathering in Seattle for the annual Microsoft Build conference, with many more tuning in online, to learn practical skills for accelerating their AI apps and agents' innovation. To support their AI innovation journey, today we are excited to announce several new capabilities to help developers and organizations secure and govern AI apps and agents. New Azure AI Foundry foundational capabilities to secure and govern AI workloads Azure AI Foundry enhancements for AI security and safety With 70,000 customers, 100 trillion tokens processed this quarter, and 2 billion enterprise search queries each day, Azure AI Foundry has grown beyond just an application layer—it's now a comprehensive platform for building agents that can plan, take action, and continuously learn to drive real business outcomes. To help organizations build and deploy AI with confidence, we’re introducing new security and safety capabilities and insights for developers in Azure AI Foundry Introducing Spotlighting to detect and block prompt injection attacks in real time As AI systems increasingly rely on external data sources, a new class of threats has emerged. Indirect prompt injection attacks embed hidden instructions in documents, emails, and web content, tricking models into taking unauthorized actions without any direct user input. These attacks are difficult to detect and hard to prevent using traditional filters alone. To address this, Azure AI Content Safety is introducing Spotlighting, now available in preview. Spotlighting strengthens the Prompt Shields guardrail by improving its ability to detect and handle potential indirect prompt injections, where hidden adversarial instructions are embedded in external content. This new capability helps prevent the model from inadvertently acting on malicious prompts that are not directly visible to the user. Enable Spotlighting in Azure AI Content Safety to detect potential indirect prompt injection attacks New capabilities for task adherence evaluation and task adherence mitigation to ensure agents remain within scope As developers build more capable agents, organizations face growing pressure to help confirm those agents act within defined instructions and policy boundaries. Even small deviations can lead to tool misuse, broken workflows, or risks like unintended exposure of sensitive data. To solve this, Azure AI Foundry now includes task adherence for agents, now in preview and powered by two components: a real-time evaluation and a new control within Azure AI Content Safety. At the core is a real-time task adherence evaluation API, part of Azure AI Content Safety. This API assesses whether an agent’s behavior is aligned with its assigned task by analyzing the user’s query, system instructions, planned tool calls, and the agent’s response. The evaluation framework is built on Microsoft’s Agent Evaluators, which measure intent resolution, tool selection accuracy, completeness of response, and overall alignment to the original request. Developers can run this scoring logic locally using the Task Adherence Evaluator in the Azure AI Evaluation SDK, with a five-point scale that ranges from fully nonadherent to fully adherent. This gives teams a flexible and transparent way to inspect task-level behavior before it causes downstream issues. Task adherence is enforced through a new control in Azure AI Content Safety. If an agent goes off-task, the control can block tool use, pause execution, or trigger human review. In Azure AI Agent Service, it is available as an opt-in feature and runs automatically. Combined with real-time evaluation, this control helps to ensure that agents stay on task, follow instructions, and operate according to enterprise policies. Learn more about Prompt Shields in Azure AI Content Safety. Azure AI Foundry continuous evaluation and monitoring of agentic systems Maintaining high performance and compliance for AI agents after deployment is a growing challenge. Without ongoing oversight, issues like performance degradation, safety risks, or unintentional misuse of resources can slip through unnoticed. To address this, Azure AI Foundry introduces continuous evaluation and monitoring of agentic systems, now in preview, provides a single pane of glass dashboard to track key metrics such as performance, quality, safety, and resource usage in real time. Continuous evaluation runs quality and safety evaluations at a sampled rate of production usage with results made available in the Azure AI Foundry Monitoring dashboard and published to Application Insights. Developers can set alerts to detect drift or regressions and use Azure Monitor to gain full-stack visibility into their AI systems. For example, an organization using an AI agent to assist with customer-facing tasks can monitor groundedness and detect a decline in quality when the agent begins referencing irrelevant information, helping teams to act before it potentially negatively affects trust of users. Azure AI Foundry evaluation integrations with Microsoft Purview Compliance Manager, Credo AI, and Saidot for streamlined compliance AI regulations and standards introduce new requirements for transparency, documentation, and risk management for high-risk AI systems. As developers build AI applications and agents, they may need guidance and tools to help them evaluate risks based on these requirements and seamlessly share control and evaluation insights with compliance and risk teams. Today, we are announcing previews for Azure AI Foundry evaluation tool’s integration with a compliance management solution, Microsoft Purview Compliance Manager, and AI governance solutions, Credo AI and Saidot. These integrations help define risk parameters, run suggested compliance evaluations, and collect evidence for control testing and auditing. For example, for a developer who’s building an AI agent in Europe may be required by their compliance team to complete a Data Protection Impact Assets (DPIA) and Algorithmic Impact Assessment (AIA) to meet internal risk management and technical documentation requirements aligned with emerging AI governance standards and best practices. Based on Purview Compliance Manager’s step-by-step guidance on controls implementation and testing, the compliance teams can evaluate risks such as potential bias, cybersecurity vulnerabilities, or lack of transparency in model behavior. Once the evaluation is conducted in Azure AI Foundry, the developer can obtain a report with documented risk, mitigation, and residual risk for compliance teams to upload to Compliance Manager to support audits and provide evidence to regulators or external stakeholders. Assess controls for Azure AI Foundry against emerging AI governance standards Learn more about Purview Compliance Manager. Learn more about the integration with Credo AI and Saidot in this blogpost. Leading Microsoft Entra, Defender and Purview value extended to Azure AI Foundry and Microsoft Copilot Studio Introducing Microsoft Entra Agent ID to help address agent sprawl and manage agent identity Organizations are rapidly building their own AI agents, leading to agent sprawl and a lack of centralized visibility and management. Security teams often struggle to keep up, unable to see which agents exist and whether they introduce security or compliance risks. Without proper oversight, agent sprawl increases the attack surface and makes it harder to manage these non-human identities. To address this challenge, we’re announcing the public preview of Microsoft Entra Agent ID, a new capability in the Microsoft Entra admin center that gives security admins visibility and control over AI agents built with Copilot Studio and Azure AI Foundry. With Microsoft Entra Agent ID, an agent created through Copilot Studio or Azure AI Foundry is automatically assigned an identity with no additional work required from the developers building them. This is the first step in a broader initiative to manage and protect non-human identities as organizations continue to build AI agents. : Security and identity admins can gain visibility into AI agents built in Copilot Studio and Azure AI Foundry in the Microsoft Entra Admin Center This new capability lays the foundation for more advanced capabilities coming soon to Microsoft Entra. We also know that no one can do it alone. Security has always been a team sport, and that’s especially true as we enter this new era of protecting AI agents and their identities. We’re energized by the momentum across the industry; two weeks ago, we announced support for the Agent-to-Agent (A2A) protocol and began collaborating with partners to shape the future of AI identity workflows. Today, we’re also excited to announce new partnerships with ServiceNow and Workday. As part of this, we’ll integrate Microsoft Entra Agent ID with the ServiceNow AI Platform and the Workday Agent System of Record. This will allow for automated provisioning of identities for future digital employees. Learn more about Microsoft Entra Agent ID. Microsoft Defender security alerts and recommendations now available in Azure AI Foundry As more AI applications are deployed to production, organizations need to predict and prevent potential AI threats with natively integrated security controls backed by industry-leading Gen AI and threat intelligence for AI deployments. Developers need critical signals from security teams to effectively mitigate security risks related to their AI deployments. When these critical signals live in separate systems outside the developer experience, this can create delays in mitigation, leaving opportunities for AI apps and agents to become liabilities and exposing organizations to various threats and compliance violations. Now in preview, Microsoft Defender for Cloud integrates AI security posture management recommendations and runtime threat protection alerts directly into the Azure AI Foundry portal. These capabilities, previously announced as part of the broader Microsoft Defender for Cloud solution, are extended natively into Azure AI Foundry enabling developers to access alerts and recommendations without leaving their workflows. This provides real-time visibility into security risks, misconfigurations, and active threats targeting their AI applications on specific Azure AI projects, without needing to switch tools or wait on security teams to provide details. Security insights from Microsoft Defender for Cloud help developers identify and respond to threats like jailbreak attacks, sensitive data leakage, and misuse of system resources. These insights include: AI security posture recommendations that identify misconfigurations and vulnerabilities in AI services and provide best practices to reduce risk Threat protection alerts for AI services that notify developers of active threats and provide guidance for mitigation, across more than 15 detection types For example, a developer building an AI-powered agent can receive security recommendations suggesting the use of Azure Private Link for Azure AI Services resources. This reduces the risk of data leakage by handling the connectivity between consumers and services over the Azure backbone network. Each recommendation includes actionable remediation steps, helping teams identify and mitigate risks in both pre- and post-deployment phases. This helps to reduce risks without slowing down innovation. : Developers can view security alerts on the Risks + alerts page in Azure AI Foundry : Developers can view recommendations on the Guardrails + controls page in Azure AI Foundry This integration is currently in preview and will be generally available in June 2025 in Azure AI Foundry. Learn more about protecting AI services with Microsoft Defender for Cloud. Microsoft Purview capabilities extended to secure and govern data in custom-built AI apps and agents Data oversharing and leakage are among the top concerns for AI adoption, and central to many regulatory requirements. For organizations to confidently deploy AI applications and agents, both low code and pro code developers need a seamless way to embed security and compliance controls into their AI creations. Without simple, developer-friendly solutions, security gaps can quickly become blockers, delaying deployment and increasing risks as applications move from development to production. Today, Purview is extending its enterprise-grade data security and compliance capabilities, making it easier for both low code and pro code developers to integrate data security and compliance into their AI applications and agents, regardless of which tools or platforms they use. For example, with this update, Microsoft Purview DSPM for AI becomes the one place data security teams can see all the data risk insights across Microsoft Copilots, agents built in Agent Builder and Copilot Studio, and custom AI apps and agents built in Azure AI Foundry and other platforms. Admins can easily drill into security and compliance insights for specific AI apps or agents, making it easier to investigate and take action on potential risks. : Data security admins can now find data security and compliance insights across Microsoft Copilots, agents built with Agent Builder and Copilot Studio, and custom AI apps and agents in Microsoft Purview DSPM for AI In the following sections, we will provide more details about the updates to Purview capabilities in various AI workloads. 1. Microsoft Purview data security and compliance controls can be extended to any custom-built AI application and agent via the new Purview SDK or the native Purview integration with Azure AI Foundry. The new capabilities make it easy and effortless for security teams to bring the same enterprise-grade data security compliance controls available today for Microsoft 365 Copilot to custom AI applications and agents, so organizations can: Discover data security risks, such as sensitive data in user prompts, and data compliance risks, such as harmful content, and get recommended actions to mitigate risks proactively in Microsoft Purview Data Security Posture Management (DSPM) for AI. Protect sensitive data against data leakage and insider risks with Microsoft Purview data security policies. Govern AI interactions with Audit, Data Lifecycle Management, eDiscovery, and Communication Compliance. Microsoft Purview SDK Microsoft Purview now offers Purview SDK, a set of REST APIs, documentation, and code samples, currently in preview, enabling developers to integrate Purview's data security and compliance capabilities into AI applications or agents within any integrated development environment (IDE). : By embedding Purview APIs into the IDE, developers help enable their AI apps to be secured and governed at runtime For example, a developer building an AI agent using an AWS model can use the Purview SDK to enable their AI app to automatically identify and block sensitive data entered by users before it’s exposed to the model, while also providing security teams with valuable signals that support compliance. With Purview SDK, startups, ISVs, and partners can now embed Purview industry-leading capabilities directly into their AI software solutions, making these solutions Purview aware and easier for their customers to secure and govern data in their AI solutions. For example, Infosys Vice President and Delivery Head of Cyber Security Practice, Ashish Adhvaryu indicates, “Infosys Cyber Next platform integrates Microsoft Purview to provide enhanced AI security capabilities. Our solution, the Cyber Next AI assistant (Cyber Advisor) for the SOC analyst, leverages Purview SDK to drive proactive threat mitigation with real-time monitoring and auditing capabilities. This integration provides holistic AI-assisted protection, enhancing cybersecurity posture." Microsoft partner EY (previously known as Ernst and Young) has also leveraged the new Purview SDK to embed Purview value into their GenAI initiatives. “We’re not just building AI tools, we are creating Agentic solutions where trust, security, and transparency are present from the start, supported by the policy controls provided through the Purview SDK. We’re seeing 25 to 30 percent time savings when we build secure features using the Purview SDK,” noted Sumanta Kar, Partner, Innovation and Emerging Tech at EY. Learn more about the Purview SDK. Microsoft Purview integrates natively with Azure AI Foundry Organizations are developing an average of 14 custom AI applications. The rapid pace of AI innovation may leave security teams unaware of potential data security and compliance risks within their environments. With the update announced today, Azure AI Foundry signals are now directly integrated with Purview Data Security Posture Management for AI, Insider Risk Management, and data compliance controls, minimizing the need for additional development work. For example, for AI applications and agents built with Azure AI Foundry models, data security teams can gain visibility into AI usage and data risks in Purview DSPM for AI, with no additional work from developers. Data security teams can also detect, investigate, and respond to both malicious and inadvertent user activities, such as a departing employee leveraging an AI agent to retrieve an anomalous amount of sensitive data, with Microsoft Purview Insider Risk Management (IRM) policies. Lastly, user prompts and AI responses in Azure AI apps and agents can now be ingested into Purview compliance tools as mentioned above. Learn more about Microsoft Purview for Azure AI Foundry. 2. Purview data protections extended to Copilot Studio agents grounded in Microsoft Dataverse data Coming to preview in June, Purview Information Protection extends auto-labeling and label inheritance coverage to Dataverse to help prevent oversharing and data leaks. Information Protection makes it easier for organizations to automatically classify and protect sensitive data at scale. A common challenge is that sensitive data often lands in Dataverse from various sources without consistent labeling or protection. The rapid adoption of agents built using Copilot Studio and grounding data from Dataverse increases the risk of data oversharing and leakage if data is not properly protected. With auto-labeling, data stored in Dataverse tables can be automatically labeled based on policies set in Microsoft Purview, regardless of its source. This reduces the need for manual labeling effort and protects sensitive information from the moment it enters Dataverse. With label inheritance, AI agent responses grounded in Dataverse data will automatically carry and honor the source data’s sensitivity label. If a response pulls from multiple tables with different labels, the most restrictive label is applied to ensure consistent protection. For example, a financial advisor building an agent in Copilot Studio might connect multiple Dataverse tables, some labeled as “General” and others as “Highly Confidential.” If a response pulls from both, it will inherit the most restrictive label, in this case, "Highly Confidential,” to prevent unauthorized access and ensure appropriate protections are applied across both maker and users of the agent. Together, auto-labeling and label inheritance in Dataverse support a more secure, automated foundation for AI. : Sensitivity labels will be automatically applied to data in Dataverse : AI-generated responses will inherit and honor the source data’s sensitivity labels Learn more about protecting Dataverse data with Microsoft Purview. 3. Purview DSPM for AI can now provide visibility into unauthenticated interactions with Copilot Studio agents As organizations increasingly use Microsoft Copilot Studio to deploy AI agents for frontline customer interactions, gaining visibility into unauthenticated user interactions and proactively mitigating risks becomes increasingly critical. Building on existing Purview and Copilot Studio integrations, we’ve extended DSPM for AI and Audit in Copilot Studio to provide visibility into unauthenticated interactions, now in preview. This gives organizations a more comprehensive view of AI-related data security risks across authenticated and unauthenticated users. For example, a healthcare provider hosting an external, customer-facing agent assistant must be able to detect and respond to attempts by unauthenticated users to access sensitive patient data. With these new capabilities in DSPM for AI, data security teams can now identify these interactions, assess potential exposure of sensitive data, and act accordingly. Additionally, integration with Purview Audit provides teams with seamless access to information needed for audit requirements. : Gain visibility into all AI interactions, including those from unauthenticated users Learn more about Purview for Copilot Studio. 4. Purview Data Loss Prevention extended to more Microsoft 365 agent scenarios To help organizations prevent data oversharing through AI, at Ignite 2024, we announced that data security admins could prevent Microsoft 365 Copilot from using certain labeled documents as grounding data to generate summaries or responses. Now in preview, this control also extends to agents published in Microsoft 365 Copilot that are grounded by Microsoft 365 data, including pre-built Microsoft 365 agents, agents built with the Agent Builder, and agents built with Copilot Studio. This helps ensure that files containing sensitive content are used appropriately by AI agents. For example, confidential legal documents with highly specific language that could lead to improper guidance if summarized by an AI agent, or "Internal only” documents that shouldn’t be used to generate content that can be shared outside of the organization. : Extend data loss prevention (DLP) policies to Microsoft 365 Copilot agents to protect sensitive data Learn more about Data Loss Prevention for Microsoft 365 Copilot and agents. The data protection capabilities we are extending to agents in Agent Builder and Copilot Studio demonstrate our continued investment in strengthening the Security and Governance pillar of the Copilot Control System (CSS). CCS provides integrated controls to help IT and security teams secure, manage, and monitor Copilot and agents across Microsoft 365, spanning governance, management, and reporting. Learn more here. Explore additional resources As developers and security teams continue to secure AI throughout its lifecycle, it’s important to stay ahead of emerging risks and ensure protection. Microsoft Security provides a range of tools and resources to help you proactively secure AI models, apps, and agents from code to runtime. Explore the following resources to deepen your understanding and strengthen your approach to AI security: Learn more about Security for AI solutions on our webpage Learn more about Microsoft Purview SDK Get started with Azure AI Foundry Get started with Microsoft Entra Get started with Microsoft Purview Get started with Microsoft Defender for Cloud Get started with Microsoft 365 Copilot Get started with Copilot Studio Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial 1 Predicts 2025: Navigating Imminent AI Turbulence for Cybersecurity, Jeremy D'Hoinne, Akif Khan, Manuel Acosta, Avivah Litan, Deepak Seth, Bart Willemsen, 10 February 2025 2 IBM. "Cost of a Data Breach 2024: Financial Industry." IBM Think, 13 Aug. 2024, https://www.ibm.com/think/insights/cost-of-a-data-breach-2024-financial-industry; Cser, Tamas. "The Cost of Finding Bugs Later in the SDLC." Functionize, 5 Jan. 2023, https://www.functionize.com/blog/the-cost-of-finding-bugs-later-in-the-sdlcBlog Series: Charting Your Path to Cyber Resiliency
"Cyber resilience is more than just a buzzword in the security industry; it is an essential approach to safeguarding digital assets in an era where cyber threats are not a matter of ‘if’ but ‘when’." - World Economic Forum, 2024 Cyber resiliency describes an organization’s ability to anticipate, withstand, respond and recover from adverse conditions caused by cyberattacks. Destructive cyberattacks such as ransomware can be highly impactful to business operations and profitability. With its emphasis on protecting our companies’ most critical business functions, cyber resiliency enhances the reputation of the Cybersecurity function - it can even help us achieve that most elusive goal of demonstrating our value to the business. In Part 1 and Part 2 of this series we examined the origins of cyber resiliency and Microsoft’s approach to helping our clients become more cyber resilient. As we learned in Part 1, Microsoft has identified 24 key issues that organizations should strategically target to enhance their cyber resilience. These key issues are grouped into the following categories: Low maturity security operations Insecure configuration of identity provider Insufficient privilege access and lateral movement controls No Multi-factor Authentication Lack of information protection control Limited adoption of modern security frameworks Let’s look at how Security Copilot can help, starting with the issue of Low maturity security operations. Security Operations Since its official release in April 2024, we’ve seen many Microsoft clients benefit from Security Copilot’s capabilities to address cyber resiliency issues in the category of Low maturity SOC Operations. For example, through its built-in integration with the Microsoft Defender XDR suite, Security Copilot features such as incident summaries, KQL Query Assistant and guided response can help with these components of the control: Skill gaps across security operations Limited use of endpoint detection and response Gaps in security monitoring and integration Even customers choosing not to use the full Defender XDR suite also benefit from Copilot’s abilities to help them reverse engineer malware and generate scripts. And organizations with limited or no SIEM/SOAR capabilities can also take advantage of Security Copilot’s easy integration with Microsoft Sentinel to accelerate SIEM/SOAR adoption. Security Copilot also assists with the issue of Ineffective SOC processes and operating model in 2 key ways: Reporting and Threat Intelligence. Reporting Security Copilot customers love the tool’s ability to quickly generate comprehensive incident reports geared to a variety of audiences, both technical and executive. Microsoft Defender for Threat Intelligence Integration Cyber Threat Intelligence (CTI) plays an important role in cyber resilience. NIST notes that an organization’s cyber resiliency decreases as the threat environment changes and new threat actors, techniques and vulnerabilities are introduced. Yet we often see customers not using threat intelligence effectively or worse, not using it at all. Within the M365 Portal, the embedded Security Copilot experience features incident summaries that are automatically enriched with threat intelligence from the full version of Microsoft Defender for Threat Intelligence. In both the embedded and standalone experiences, Security Copilot enables SOC analysts to use natural language to learn more about the threats and threat actors affecting their company and industry, get information about specific IOCs, and perform vulnerability impact assessments. Not sure how to start using threat intelligence? That’s OK, Security Copilot’s got you covered with suggested prompts like these in the standalone portal: Keep in mind, though, that Security Copilot is not just for SOC Operations– in fact, one of the key mistakes we’ve seen customers make in Security Copilot proof-of-concepts has been in failing to involve Security teams outside the SOC. Simply put, if your organization is just using Security Copilot in the SOC, you’re significantly limiting its impact on your overall cyber resilience. So let’s look next at what else it can do through integrations with identity management, data protection, and cloud platforms. Identity Management According to the Verizon Data Breach Investigations Report (DBIR), most breaches start with stolen credentials. This is reflected in Microsoft’s cyber resilience guidance where 3 of the key issue categories are identity-based. Security Copilot aids with identifying gaps in Entra configuration, both in the Entra Admin center and the Security Copilot standalone experience. Core capabilities include: Troubleshooting a user’s sign-in failures Providing user account details and authentication methods Exploring audit log events for a particular user, group, or application Enumerating Entra ID roles and group memberships In this case I’m troubleshooting a recent failed sign on attempt by a user. Security Copilot gives me the details of the sign-in and tells me in plain language the reason for the failure, along with the applicable conditional access policy, and the remediation steps to take: Security and Identity pros whose organizations already use Microsoft’s Workload Identities feature can also take advantage of Security Copilot’s abilities to investigate risky Entra ID applications. Security Copilot’s reach even extends to protection of Active Directory on-premises through its integration with Microsoft's Unified Security Operations Platform, which can include Defender for Identity alerts, as well as Windows Security Events collected by Microsoft Sentinel. Data Protection and Vulnerability Management The cyber resilience category Lack of Information Control covers a diverse set of components, including ineffective data loss prevention controls and lack of patch and vulnerability management. Security Copilot integrations support various teams across the organization in areas such as: Data Protection Security Copilot has a powerful integration with Microsoft Purview Data Security Posture Management (DSPM), a centralized data security management tool that includes signals from Microsoft Purview Information Protection, Data Loss Prevention, and Insider Risk Management. Just some of the many goals of this integration are: Helping Security teams conduct deeper investigations into data security incidents Enabling DLP admins to better identify gaps in DLP policy coverage Identifying devices involved in data exfiltration activities Assisting with insider risk management investigations Vulnerability Management As SANS notes, “The quantity of outstanding vulnerabilities for most large organizations is overwhelming, and all organizations struggle to keep up with the never-ending onslaught of new vulnerabilities in their infrastructure and applications.” Security Copilot works with Microsoft Defender External Attack Surface Management (Defender EASM) to help address this challenge. Defender EASM helps identify public-facing assets such as domains and hosts to map your organization’s external attack surface, discover unknown issues, and minimize risk. Security Copilot’s integration with EASM helps teams identify public-facing assets with high-priority CVEs and CVSS scores and find issues like expired domains, expired SSL certificates and SHA1 certificates. If you’re not currently using Defender EASM, it offers a free 30-day trial. (In fact, many customers have been so impressed with EASM and its Security Copilot integration during their trials, they’ve gone ahead and made it a permanent part of their cyber resilience strategy). Finally, note that both Purview DSPM and Defender EASM have multi-cloud capabilities. When used in combination with Security Copilot, they can greatly assist IT and Security teams with limited security experience in more than 1 cloud. Cloud Platforms Finally, in the cyber resilience category Limited adoption of modern security frameworks, Security Copilot helps address the issue of insecure design and configuration across cloud platforms via integrations with Azure Firewall and Azure WAF. Security Copilot features include identifying malicious traffic, searching for a given IDPS signature across all Azure Firewalls in the environment, and generating recommendations to improve the overall security of your deployments. Security Copilot can also help analyze Azure Web Application Firewall (WAF) logs to provide context for: Most frequently triggered rules Malicious IP addresses identified Blocked SQL injection (SQLi) and Cross-site Scripting (XSS) requests Microsoft Copilot for Security integration is available for both Azure WAF on both Azure Application Gateway and Azure WAF on Azure Front Door. Conclusion As we've seen throughout this series, Microsoft provides practical and tactical guidance to help our customers enhance their cyber resiliency to sophisticated and destructive cyberattacks that impact critical business operations. Security Copilot offers new capabilities to help build cyber resiliency in diverse and challenging areas such as: Vulnerability management Data security Multi-cloud management Security operations Identity protection In Building Secure, Resilient Architectures for Cyber Mission Assurance, MITRE emphasizes that “game-changing technologies, techniques, and strategies can make transformational improvements in the resilience of our critical systems.” It's clear that Security Copilot is already one of those game-changers and, with the recent announcement of Security Copilot agents, charting your path to cyber resilience just got a lot more exciting.Announcing Alert Triage Agents in Microsoft Purview, powered by Security Copilot
Powered by Security Copilot, the Alert Triage Agents will help organizations prioritize the most critical risks faster. By triaging alerts based on parameters provided by the organization and fine-tuning the process through feedback provided by the data security teams, the agent will upskill teams, provide faster investigation, and enable more assertive data risk mitigation.Integrating API data into Microsoft Security Copilot using custom logs and KQL plugins
Microsoft Security Copilot (Copilot) is a generative Artificial Intelligence (AI) system for cybersecurity use cases. Copilot is not a monolithic system but is an ecosystem running on a platform that allows data requests from multiple sources using a unique plugin mechanism. Currently, it supports API, GPT, and KQL-based plugins, API and KQL-based plugins can be used to pull external data into Security Copilot. In this blog post we will discuss how both methods can be combined and how we can get data only available through the API to Security Copilot, while benefiting from KQL plugin simplicity. KQL vs. API plugins KQL-based plugins can gather insights from Microsoft Sentinel workspaces, M365 Defender XDR, and Azure Data Explorer clusters. Such plugins do not require any development skills beyond ability to write KQL queries, do not require any additional authentication mechanism and can be easily extended with new features/queries as needed. API plugins give Copilot the capability to pull data from any external data source if it supports REST API, e.g. allowing Copilot to make Graph API calls. KQL and API plugins each have their specific use cases. The KQL option is often chosen for its simplicity and due to certain limitations associated with API plugins: API plugin request body schemas are limited to a depth of 1, which means they cannot handle deeply nested data structures. Output from APIs often needs to be parsed before Security Copilot can ingest it, as Security Copilot, like all other large language models (LLMs) based applications, has limits on how much information it can process at once, known as a "token limit". API must be publicly available for Copilot to access it, which means API endpoint must be properly secured and authentication method must be supported by Copilot. The best of both worlds A possible solution is to integrate data available only through the API into the Log Analytics workspace, allowing for subsequent querying via KQL. The solution consists of two parts: Logic App to query API data and send it to Log Analytics (Sentinel) workspace. Custom KQL plugin for Security Copilot to query custom tables. As an example, we will build a solution that allows querying Defender XDR Secure Score historical data, which is currently only available through Graph API. Create Logic App to store data retrieved via API in Log Analytics workspace We will start with building a simple Logic App to get API data and send it to Log Analytics. While we use Secure Score data, as an example, the same method can be used for any other data that does not change often and suitable for KQL table storage. The Logic App will do the following: Logic App is triggered once a day, in accordance with Secure Score update schedule (once in 24 hours). It gets the latest Secure Score via HTTP call to Graph API: HTTP GET to https://graph.microsoft.com/v1.0/security/secureScores?$top=1: Graph API call is authenticated via Managed Identity: Managed Identity will require SecurityEvents.Read.All permission to get access to Secure Score data: Connect-AzAccount $GraphAppId = "00000003-0000-0000-c000-000000000000" $NameOfMSI = "LOGIC_APP_NAME” $Permission = "SecurityEvents.Read.All" $GraphServicePrincipal = Get-AzADServicePrincipal -AppId $GraphAppId $AppRole = $GraphServicePrincipal.AppRole | Where-Object { $_.Value -eq $Permission -and $_.Origin -contains "Application" } New-AzADServicePrincipalAppRoleAssignment ` -ServicePrincipalDisplayName $NameOfMSI ` -ResourceDisplayName $GraphServicePrincipal.DisplayName ` -AppRoleId $AppRole.Id Received Secure Score data is sent to Log Analytics workspace using built-in Azure Log Analytics Data Collector connector. For convenience we will split data returned by Secure Score API into two categories to store them in different custom log tables: overall Secure Score values and specific values for each security control. Managed Identity assigned to the Logic App will have to be granted Log Analytics Contributor role on the chosen Log Analytics workspace: As a result of running Logic App for the first time, two custom logs will be generated in the selected Log Analytics workspace. SecureScoreControlsXDR_CL log will contain all information about specific controls: SecureScoreXDR_CL will contain just one entry per day, but it is handy when it comes to tracking Secure Score changes in the organization. Create custom KQL Plugin for Security Copilot Now when we have our data conveniently stored in Log Analytics workspace, we can proceed to creation of custom KQL plugin. Below is an example of such plugin with some basic skills, but thanks to simplicity of KQL plugins, it can easily be extended and adjusted to ones needs: Descriptor: Name: SecureScoreXDRPlugin DisplayName: Defender XDR Secure Score plugin Description: Skills to query and track Microsoft Defender XDR Secure Score SkillGroups: - Format: KQL Skills: - Name: GetSecureScoreXDR DisplayName: Get Defender XDR Secure Score for specific date Description: Queries Defender XDR Secure Score current status for Apps, Identity, Devices, Data and Total ExamplePrompts: - 'Get Defender Secure Score for today' - 'Get Secure Score for 2022-01-01' - 'What is the current Secure Score' - 'What was Secure Score 7 days ago' Inputs: - Name: date Description: The date to query the Secure Score for Required: true Settings: Target: Defender Template: |- let specifieddate = todatetime('{{date}}'); SecureScoreControlsXDR_CL | where TimeGenerated between (startofday(specifieddate) .. endofday(specifieddate)) | summarize IdentityScore = sumif(score_d, controlCategory_s == "Identity"), AppsScore = sumif(score_d, controlCategory_s == "Apps"), DeviceScore = sumif(score_d, controlCategory_s == "Device"), DataScore = sumif(score_d, controlCategory_s == "Data") by bin(TimeGenerated, 1d) | extend TotalScore = (IdentityScore + AppsScore + DeviceScore + DataScore) - Name: GetSecureScoreXDRChanges DisplayName: Get Defender XDR Secure Score controls changes for the past 7 days Description: Queries Defender XDR Secure Score and shows changes during the past 7 days ExamplePrompts: - 'How did secure score change in the past week' - 'What are secure score controls changes' - 'Show recent changes across secure score controls' - 'Show secure score changes for the past 7 days' Inputs: - Name: date Description: The date to query the Secure Score for Required: true Settings: Target: Defender Template: |- let specifieddate = todatetime('{{date}}'); let Controls = SecureScoreControlsXDR_CL | project TimeGenerated, RecommendationCategory=controlCategory_s, ControlName=controlName_s, Recommendation=description_s, ImplementationStatus=implementationStatus_s, ControlScore = score_d | where TimeGenerated >= specifieddate; Controls | summarize distinctScoreCount = count_distinct(ControlScore) by ControlName | where distinctScoreCount > 1 | join kind=inner ( Controls ) on ControlName | summarize TimeGenerated = max(TimeGenerated) by ControlName | join kind=inner ( Controls) on TimeGenerated, ControlName | project TimeGenerated, ControlName, RecommendationCategory, Recommendation, ImplementationStatus, ControlScore Now we need to save text above to YAML file and add it as custom KQL plugin: Once plugin is deployed, we can query and track Secure Score data using Security Copilot: Conclusion Storing non-log data within a Log Analytics workspace is an established practice. This method has been used to allow security analysts easy access to supplementary data via KQL, facilitating its use in KQL queries for detection enrichment purposes. As illustrated in the scenario above, we can still generate alerts based on this data, such as notifications for declining Secure Scores. Additionally, this approach now enables further AI-powered Security Copilot scenarios.The security benefits of structuring your Azure OpenAI calls – The System Role
In the rapidly evolving landscape of GenAI usage by companies, ensuring the security and integrity of interactions is paramount. A key aspect is managing the different conversational roles—namely system, user, and assistant. By clearly defining and separating these roles, you can maintain clarity and context while enhancing security. In this blog post, we explore the benefits of structuring your Azure OpenAI calls properly, focusing especially on the system prompt. A misconfigured system prompt can create a potential security risk for your application, and we’ll explain why and how to avoid it. The Different Roles in an AI-Based Chat Application Any AI chat application, regardless of the domain, is based on the interaction between two primary players, the user and the assistant. The user provides input or queries. The assistant generates contextually appropriate and coherent responses. Another important but sometimes overlooked player is the designer or developer of the application. This individual determines the purpose, flow, and tone of the application. Usually, this player is referred to as the system. The system provides the initial instructions and behavioral guidelines for the model. Microsoft Defender for Cloud’s researchers identified emerging anti-pattern Microsoft Defender for Cloud (MDC) offers security posture management and threat detection capabilities across clouds and has recently released a new set of features to help organizations build secure enterprise-ready gen-AI apps in the cloud, helping them build securely and stay secure. MDC’s research experts continuously track the development patterns to enhance the offering but also to promote secure practices to their customers and the wider tech community. They are also primary contributors to the OWASP Top 10 threats for LLM (Idan Hen, research team manager). Recently, MDC's research experts identified a common anti-pattern in AI application development is emerging – appending the system to the user prompt. Mixing these sections is easy and tempting – developers often use it because it’s slightly faster while building and also allows them to maintain context through long conversations. But this practice is harmful – it introduces detrimental security risks that could easily result in 'game over' – exposing sensitive data, getting your computer abused, or making your system vulnerable to Jailbreak attacks. Diving deeper: How system prompts evaluation keeps your application secure Separate system, user and assistant prompts with Azure OpenAI ChatCompletion API Azure OpenAI Service's Chat Completion API is a powerful tool designed to facilitate rich and interactive conversational experiences. Leveraging the capabilities of advanced language models, this API enables developers to create human-like chat interactions within their applications. By structuring conversations with distinct roles—system, user, and assistant—the API ensures clarity and context throughout the dialogue: [{"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request]}, {"role": "assistant", "content": [Model’s response] } ] This structured interaction model allows for enhanced user engagement across various use cases such as customer support, virtual assistants, and interactive storytelling. By understanding and predicting the flow of conversation, the Chat Completion API helps create not only natural and engaging user experiences but securer applications, driving innovation in communication technology. Anti-pattern explained When developers append their instructions to the user prompt. The model receives single input composed by two different sources: developer and user: {"role": "user", "content”: [Developer’s instructions] + [User’s request]} {"role": "assistant", "content": [Model’s response] } When developer instructions are mingled with user input, detection and content filtering systems often struggle to distinguish between the two. Anti-pattern resulting in less secured application This blurring of input roles can facilitate easier manipulation through both direct and indirect prompt injections, thereby increasing the risk of misuse and harmful content not being detected properly by security and safety systems. Developer instructions frequently contain security-related content, such as forbidden requests and responses, as well as lists of do's and don'ts. If these instructions are not conveyed using the system role, this important method for restricting model usage becomes less effective. Additionally, customers have reported that protection systems may misinterpret these instructions as malicious behavior, leading to a high rate of false positive alerts and the unwarranted blocking of benign content. In one case, a customer described forbidden behavior and appended it to the user role. The threat detection system then flagged it as malicious user activity. Moreover, developer instructions may contain private content and information related to the application's inner workings, such as available data sources and tools, their descriptions, and legitimate and illegitimate operations. Although it is not recommended, these instructions may also include information about the logged-in user, connected data sources and information related to the application's operation. Content within the system role enjoys higher privacy; a model can be instructed not to reveal it to the user, and a system prompt leak is considered a security vulnerability. When developer instructions are inserted together with user instructions, the probability of a system prompt leak is much higher, thereby putting our application at risk. Why do developers mingle their instructions with user input? In many cases, recurring instructions improve the overall user experience. During lengthy interactions, the model tends to forget earlier conversations, including the developer instructions provided in the system role. For example, a model instructed to role-play in an English teaching application or act as a medical assistant in a hospital support application may forget its assigned role by the end of the conversation. This can lead to poor user experience and potential confusion. To mitigate this issue, it is crucial to find methods to remind the model of its role and instructions throughout the interaction. One incorrect approach is to append the developer's instructions to user input by adding them to the User role. Although it keeps developers’ instructions fresh in the model's 'memory,' this practice can significantly impact security, as we saw earlier. Enjoy both user experience and secured application To enjoy both quality detection and filtering capabilities along with a maximal user experience throughout the entire conversation, one option is to refeed developer instructions using the system role several times as the conversation continues: {"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request 1]} {"role": "assistant", "content": [Model’s response 1] } {"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request 2]} {"role": "assistant", "content": [Model’s response 2] } By doing so, we achieve the best of both worlds: maintaining the best practice of separating developer instructions from user requests using the Chat Completion API, while keeping the instructions fresh in the model's memory. This approach ensures that detection and filtering systems function effectively, our instructions get the model's full attention, and our system prompt remains secure, all without compromising the user experience. To further enhance the protection of your AI applications and maximize detection and filtering capabilities, it is recommended to provide contextual information regarding the end user and the relevant application. Additionally, it is crucial to identify and mark various input sources and involved entities, such as grounding data, tools, and plugins. By doing so, our system can achieve a higher level of accuracy and efficacy in safeguarding your AI application. In our upcoming blog post, we will delve deeper into these critical aspects, offering detailed insights and strategies to further optimize the protection of your AI applications. Start secure and stay secure when building Gen-AI apps with Microsoft Defender for Cloud Structuring your prompts securely is the best-practice when designing chatbots. There are other lines of defense that must be put in place to fully secure your environment. Sign up and Enable the new Defender for cloud threat protection for AI for active threat detection (preview). Enable posture management to cover all your cloud security risks, including new AI posture features. Further Reading Microsoft Defender for Cloud (MDC). AI protection using MDC. Chat Completion API. Security challenges related to GenAI. How to craft effective System Prompt. The role of System Prompt in Chat Completion API. Responsible AI practices for Azure OpenAI models. Asaf Harari, Data Scientist, Microsoft Threat Protection Research. Shiran Horev, Principal Product Manager, Microsoft Defender for Cloud. Slava Reznitsky, Principal Architect, Microsoft Defender for Cloud.