Introducing the New Microsoft Security Community Home!
We are excited to introduce the new home of the Microsoft Security Community! At aka.ms/securitycommunity, you can explore upcoming events, access technical content, and find new ways to connect with Microsoft experts and peers across the security ecosystem.

Safeguarding Sensitive Data in Microsoft 365 Copilot Interactions: DLP for Microsoft 365 Copilot
Microsoft 365 Copilot is redefining how organizations work, bringing the power of generative AI directly into our secure productivity tools. As Copilot adoption accelerates, we've heard that you want more control over how your sensitive data can be used in interactions with Copilot. At Ignite 2025, Microsoft announced a major enhancement: Microsoft Purview Data Loss Prevention for Microsoft 365 Copilot to safeguard Microsoft 365 Copilot and Copilot Chat prompts, now entering general availability. Even better, this capability is included for all users of Microsoft 365 Copilot and Copilot Chat.

Why DLP for Copilot Prompts Is a Game-Changer
As organizations adopt Copilot, their ways of sharing, creating, and interacting with data expand. With just a prompt, users can have Copilot summarize documents, analyze spreadsheets, or help brainstorm presentations. However, this raises an important question: what if the prompt includes sensitive information, such as project code names, financial account numbers, health records, or other sensitive data? Over the last two years, Microsoft has been building a set of Data Loss Prevention (DLP) controls specifically designed for Copilot. Below is a quick overview of these related capabilities, ranging from already available to newly in preview, before we dive deep into today's GA announcement.

Prevent Copilot processing of files and emails based on sensitivity labels
In November 2024, Microsoft introduced the ability to create a DLP policy that restricts Microsoft 365 Copilot and Copilot Chat from processing sensitive files and emails, using sensitivity labels applied to grounding data. This capability gives you control over whether content with the sensitivity labels you specify is restricted from being used in Microsoft 365 Copilot and Copilot Chat to generate summaries and responses.
Prevent web searches for prompts containing Sensitive Information Types (SITs)
The latest feature entering public preview is DLP for Microsoft 365 Copilot and Copilot Chat to prevent web searches for prompts containing sensitive data. This real-time control helps organizations mitigate data leakage and oversharing risks by preventing Microsoft 365 Copilot and agents from using sensitive data for external web searches. If a sensitive information type (SIT) is detected in a user prompt, Copilot can still leverage your enterprise data to form a response without sending the sensitive data to external search engines for web grounding. This capability extends to Microsoft 365 Copilot and to agents built in Copilot Studio that are published to Microsoft 365 Copilot.

DLP to Safeguard Copilot Prompts with Sensitive Information Types (SITs)
The rest of this blog focuses on a key addition to this capability set: DLP for Microsoft 365 Copilot and Copilot Chat prompts to prevent processing of prompts containing sensitive information, now entering general availability. Unlike the web search capability above, which prevents sensitive data from being sent externally during a web query, this capability evaluates the user's text input directly, before processing occurs, to determine whether both enterprise-data and web grounding can proceed. This feature uses Sensitive Information Types (SITs) as a condition within a Purview DLP policy to assess whether a user prompt sent to Copilot contains sensitive data, even if the data is unlabeled. With DLP for Copilot prompts, a user's text input is scanned in real time for SITs, whether built-in (such as Social Security numbers or credit card numbers) or custom-defined by your organization (such as confidential terms or project names). If a text prompt contains one of the SITs you specify, Copilot restricts processing, halts any Graph or web grounding, and displays a clear message to the end user that the request cannot be completed.
A user enters a prompt in Microsoft 365 Copilot Chat containing sensitive information.

How DLP for Copilot Protects Prompts: Real-Time, Intelligent Protection
The new DLP capability integrates seamlessly with Microsoft Purview, leveraging its powerful data classification and detection engine for sensitive information types. Here's how it works:
Input: When a user submits a prompt, Copilot checks the prompt for sensitive information using built-in or organization-defined sensitive information types (SITs).
Immediate action: If a SIT is detected, Copilot restricts the prompt from being processed. No AI response is generated, and no data is sent for Graph or web grounding.
Output: Users receive a clear notification that their request cannot be completed due to company policies.
This real-time protection ensures that sensitive data is not leaked or overshared, even as users explore new ways to work with AI.

Setting Up DLP for Copilot Prompts: Data Security Admin Experience
The easiest way to get started is through the new Microsoft Purview Data Security Posture Management (DSPM) portal, which provides a guided, one-click setup experience:
1. In Purview, go to Solutions > DSPM (preview).
2. Select the "Prevent data exposure in Microsoft 365 Copilot and Microsoft Copilot interactions" objective.
3. Follow the guided workflow and apply the recommended one-click DLP policy. The policy starts in simulation mode so you can review activity before enforcing it.
Alternatively, you can configure and customize this policy directly from the Policies page of the Purview DLP portal, or enable it from the Microsoft 365 Admin Center. From there, you can view the remediation plan, review policy details, and create a custom policy in DLP simulation mode to protect sensitive data referenced in Microsoft 365 Copilot and Microsoft Copilot, adjusting the confidence level and instance count as needed.
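The runtime decision described above can be sketched conceptually. This is a simplified illustration, not Purview's implementation: the regex patterns stand in for real SITs (which use validated patterns, keywords, checksums, and confidence levels), and all names are hypothetical.

```python
import re

# Simplified stand-ins for Purview sensitive information types (SITs).
# Real SITs combine patterns, keywords, checksums, and confidence levels.
SIT_PATTERNS = {
    "U.S. Social Security Number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit Card Number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "Project Code Name (custom SIT)": re.compile(r"\bProject Aurora\b", re.I),
}

def evaluate_prompt(prompt: str) -> dict:
    """Mimic the policy decision: detect SITs, then block all grounding if any match."""
    detected = [name for name, pattern in SIT_PATTERNS.items()
                if pattern.search(prompt)]
    if detected:
        return {"allowed": False, "detected_sits": detected,
                "user_message": "Your request can't be completed because it "
                                "contains sensitive information."}
    return {"allowed": True, "detected_sits": [], "user_message": None}

decision = evaluate_prompt("Summarize the financials for Project Aurora")
```

Because the scan happens before any processing, a blocked prompt never reaches Graph or web grounding; the user only sees the notification message.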
Practical Scenarios: Protecting What Matters Most
Protect PII, financial data, and intellectual property: Financial institutions can block prompts containing deal terms, account numbers, or other sensitive data, preventing leaks through AI interactions. Similarly, healthcare organizations can safeguard patient information, and manufacturers can secure intellectual property and trade secrets from exposure, along with many other practical use cases. Once a prompt is detected and blocked, both Microsoft Graph grounding and Bing web grounding are restricted.
Safeguard sensitive non-public information: Imagine an organization involved in a confidential merger. By using DLP for Copilot prompts, administrators can set up a custom SIT that includes the project's code name. If a user asks Copilot about the merger using the project's code name, the request will be blocked, keeping sensitive information secure and protected.

Visibility into DLP for Microsoft 365 Copilot Prompts
When a user's prompt triggers a DLP policy, notifications and alerts are surfaced directly in the Microsoft Purview and Defender portals for security administrators. These alerts provide detailed information about which policy was activated, the type of sensitive information detected, and the context of the attempted Copilot interaction. Using these alert queues in Purview and Defender XDR, administrators can efficiently track policy activity, investigate potential incidents, and refine DLP rules to better align with organizational needs. The ability to review historical alerts and track ongoing enforcement empowers admins to maintain strong data security and proactively safeguard sensitive information.
Defender XDR portal investigation of a prompt-based DLP incident.

Takeaways
The introduction of this latest enhancement to DLP for Copilot represents a key advancement in secure Copilot deployment and adoption.
By empowering organizations to block sensitive data at the prompt level, Microsoft is helping customers unlock the full potential of Copilot without compromising security or compliance. This innovation reflects Microsoft's commitment to responsible AI, continuous improvement, and customer-driven development. As Copilot evolves, so will the tools to protect your data, ensuring that productivity and security go hand in hand. For more details, stay tuned for updates to the Product Roadmap and Learn documentation.
Learn about using DLP to protect interactions with Microsoft 365 Copilot and Copilot Chat
Learn about the default DLP policy for the Microsoft 365 Copilot location | Microsoft Learn
Permissions to create or edit a DLP policy to safeguard Microsoft 365 Copilot and Copilot Chat
Learn about the new Microsoft Purview Data Security Posture Management (DSPM) | Microsoft Learn
Roadmap Item: DLP for Microsoft 365 Copilot to safeguard prompts
Roadmap Item: DLP to safeguard web search in Microsoft 365 Copilot

Why UK Enterprise Cybersecurity Is Failing in 2026 (And What Leaders Must Change)
Enterprise cybersecurity in large organisations has always been an asymmetric game. But with the rise of AI-enabled cyber attacks, that imbalance has widened dramatically, particularly for UK and EMEA enterprises operating complex cloud, SaaS, and identity-driven environments. Microsoft Threat Intelligence and Microsoft Defender Security Research have publicly reported a clear shift in how attackers operate: AI is now embedded across the entire attack lifecycle. Threat actors use AI to accelerate reconnaissance, generate highly targeted phishing at scale, automate infrastructure, and adapt tactics in real time, dramatically reducing the time required to move from initial access to business impact. In recent months, Microsoft has documented AI-enabled phishing campaigns abusing legitimate authentication mechanisms, including OAuth and device-code flows, to compromise enterprise accounts at scale. These attacks rely on automation, dynamic code generation, and highly personalised lures, not on exploiting traditional vulnerabilities or stealing passwords.

The Reality Gap: Adaptive Attackers vs. Static Enterprise Defences
Meanwhile, many UK enterprises still rely on legacy cybersecurity controls designed for a very different threat model, one rooted in a far more predictable world. This creates a dangerous "Resilience Gap." Here is why your current stack is failing, and the C-suite strategy required to fix it.

1. The Failure of Traditional Antivirus in the AI Era
Traditional antivirus (AV) relies on static signatures and hashes. It assumes malicious code remains identical across different targets. AI has rendered this assumption obsolete. Modern malware now uses automated mutation to generate unique code variants at execution time, and adapts behaviour based on its environment. Microsoft Threat Intelligence has observed threat actors using AI-assisted tooling to rapidly rewrite payload components, ensuring that every deployment looks subtly different.
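A toy illustration of why hash-based signatures break under mutation (the byte strings are hypothetical placeholders, not any real malware): flipping a single byte at build time yields a completely different hash, so a signature recorded for one variant never matches the next.

```python
import hashlib

# Hypothetical payload bytes; each "deployment" differs by a trivial mutation.
variant_a = b"payload-core" + b"\x00"
variant_b = b"payload-core" + b"\x01"  # one byte changed at build time

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

# A signature database keyed on variant A's hash never matches variant B,
# even though the two variants behave identically.
signature_db = {sig_a}
detected = sig_b in signature_db  # False: the mutated variant slips through
```

Behavioural detection sidesteps this by keying on what the code does at runtime rather than on the bytes it ships with.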
In this model, there is no reliable signature to detect. By the time a pattern exists, the attacker has already moved on. Signature-based detection is not just slow; it is structurally misaligned with AI-driven attacks.
The Risk: If your security relies on "recognising" a threat, you are already breached. By the time a signature exists, the attacker has evolved.
The C-Suite Pivot: Shift investment from artifact detection to EDR/XDR (Extended Detection and Response). Prioritise behavioural analytics and machine learning models that identify intent rather than file names.

2. Why Perimeter Firewalls Fail in a Cloud-First World
Many UK enterprises still rely on firewalls enforcing static allow/deny rules based on IP addresses and ports. This model worked when applications were predictable and networks clearly segmented. Today, enterprise traffic is encrypted, cloud-hosted, API-driven, and deeply integrated with SaaS and identity services. AI-assisted phishing campaigns abusing OAuth and device-code flows demonstrate this clearly. From a network perspective, everything looks legitimate: HTTPS traffic to trusted identity providers, no suspicious port, no malicious domain. Yet the attacker successfully compromises identity.
The Risk: Traditional firewalls are "blind" to identity-based breaches in cloud environments.
The C-Suite Pivot: Move to identity-first security. Treat identity as the new control plane, integrating signals like user risk, device health, and geolocation into every access decision.

3. The Critical Weakness of Single-Factor Authentication
Despite clear NCSC guidance, single-factor passwords remain a common vulnerability in legacy applications and VPNs. AI-driven credential abuse has changed the economics of these attacks. Threat actors now deploy adaptive phishing campaigns that evolve in real time. Microsoft has observed attackers using AI to hyper-target high-value UK identities, specifically CEOs, Finance Directors, and Procurement leads.
The Risk: Static passwords are now the primary weak link in UK supply chain security.
The C-Suite Pivot: Mandate phishing-resistant MFA (passkeys or hardware security keys). Implement Conditional Access policies that evaluate risk dynamically at the moment of access, not just at login.
Legacy Security vs. AI-Era Reality

4. The Inherent Risk of VPN-Centric Security
VPNs were built on a flawed assumption: that anyone "inside" the network is trustworthy. In 2026, this logic is a liability. AI-assisted attackers now use automation to map internal networks and identify escalation paths the moment they gain VPN access. Furthermore, Microsoft has tracked nation-state actors using AI to create synthetic employee identities, complete with fake resumes and deepfake communication. In these scenarios, VPN access isn't "hacked"; it is legitimately granted to a fraudster.
The Risk: A compromised VPN gives an attacker the "keys to the kingdom."
The C-Suite Pivot: Transition to Zero Trust Architecture (ZTA). Access must be explicit, scoped to the specific application, and continuously re-evaluated using behavioural signals.

5. Data: The High-Velocity Target
Sensitive data sitting unencrypted in legacy databases or backups is a ticking time bomb. In the AI era, data discovery is no longer a slow, manual process for an attacker. Attackers now use AI to instantly analyse your directory structures, classify your files, and prioritise high-value data for theft. Unencrypted data significantly increases your "blast radius," turning a containable incident into a catastrophic board-level crisis.
The Risk: Beyond the technical breach, unencrypted data leads to massive UK GDPR fines and irreparable brand damage.
The C-Suite Pivot: Adopt data-centric security. Implement encryption by default, classify data with sensitivity labels, and start board-level discussions on post-quantum cryptography (PQC) to future-proof your most sensitive assets.

6. The Failure of Static IDS
Traditional Intrusion Detection Systems (IDS) rely on known indicators of compromise, assuming attackers reuse the same tools and techniques. AI-driven attacks deliberately break that assumption. Threat actors are now using large language models (LLMs) to weaponize newly disclosed vulnerabilities within hours. While your team waits for a "known pattern" to reach your system, the attacker is already using a custom, AI-generated exploit.
The Risk: Your team is defending against yesterday's news while the attacker is moving at machine speed.
The C-Suite Pivot: Invest in adaptive threat detection. Move toward graph-based XDR platforms that correlate signals across email, endpoint, and cloud to automate investigation and response before the damage spreads.
From Static Security to Continuous Security

Closing Thought: Security Is a Journey, Not a Destination
For UK enterprises, the shift toward adaptive cybersecurity is no longer optional; it is increasingly driven by regulatory expectation, board oversight, and accountability for operational resilience. Recent UK cyber resilience reforms and evolving regulatory frameworks signal a clear direction of travel: cybersecurity is now a board-level responsibility, not a back-office technical concern. Directors and executive leaders are expected to demonstrate effective governance, risk ownership, and preparedness for cyber disruption, particularly as AI reshapes the threat landscape. AI is not a future cybersecurity problem. It is a current force multiplier for attackers, exposing the limits of legacy enterprise security architectures faster than many organisations are willing to admit. The uncomfortable truth for boards in 2026 is that no enterprise is 100% secure. Intrusions are inevitable. Credentials will be compromised. Controls will be tested. The difference between a resilient enterprise and a vulnerable one is not the absence of incidents, but how risk is managed when they occur.
In mature organisations, this means assuming breach and designing for containment:
- Access controls that limit blast radius
- Least privilege and Conditional Access restricting attackers to the smallest possible scope if an identity is compromised
- Data-centric security using automated classification and encryption, ensuring that even when access is misused, sensitive data cannot be freely exfiltrated
As a Senior Enterprise Cybersecurity Architect, I see this moment as a unique opportunity. AI adoption does not have to repeat the mistakes of earlier technology waves, where innovation moved fast and security followed years later. We now have a rare chance to embed security from day one, designing identity controls, data boundaries, automated monitoring, and governance before AI systems become business-critical. When security is built in upfront, enterprises don't just reduce risk; they gain the confidence to move faster and unlock AI's value safely. Security is no longer a "department". In the age of AI, it is a continuous business function, essential to preserving trust and maintaining operational continuity as attackers move at machine speed.

References:
Inside an AI-enabled device code phishing campaign | Microsoft Security Blog
AI as tradecraft: How threat actors operationalize AI | Microsoft Security Blog
Detecting and analyzing prompt abuse in AI tools | Microsoft Security Blog
Post-Quantum Cryptography | CSRC
Microsoft Digital Defense Report 2025 | Microsoft
https://www.ncsc.gov.uk/news/government-adopt-passkey-technology-digital-services

Credential Exposure Risk & Response Workbook
How to set up the Workbook
Use the steps outlined in the Identify and Remediate Credentials article to put the right rules in place to start capturing credential data. You may choose to use custom regex patterns or more specific SITs that align with your scenario. Once that is done, this workbook will help you. The workbook transforms credential leakage detection into a measurable, executive-ready capability.
End-to-end situational awareness: Correlates alerts across workloads, departments, credential types, and users to surface material exposure quickly.
Actionable triage & forensics: Drill from trends to the artifact (message/file/URL), accelerating containment and root-cause analysis.
Risk-aligned decisions: Quantifies exposure and response performance (creation vs. resolution trends) to guide investment and policy changes.
Audit-ready governance: Captures decisions, timelines, and outcomes for PCI/PII controls, identity hygiene, and secrets management.

Prerequisites
License requirements for Microsoft Purview Information Protection depend on the scenarios and features you use. To understand your licensing requirements and options, see the Information Protection sections of the Microsoft 365 guidance for security & compliance and the related PDF download for feature-level licensing requirements. Before you start, make sure Endpoint DLP is enabled so that all endpoint interaction with sensitive content is included in audit logging. For Microsoft 365 SharePoint, OneDrive, Exchange, and Teams, you can enable policies that generate events, but not incidents, for important sensitive information types. Install Power BI Desktop to make use of the templates: Downloads - Microsoft Power BI

Step-by-step guided walkthrough
In this guide, we provide high-level steps to get started with the new tooling.
1. Get the latest version of the report that you are interested in.
In this case, we will show the Board report.
2. Open the report. If Power BI Desktop is installed, it should look like this:
3. Authenticate with https://api.security.microsoft.com: select Organizational account, sign in, and then click Connect.
4. Authenticate with https://api.security.microsoft.com/api/advancedhunting: select Organizational account, sign in, and then click Connect.

What the Workbook Delivers
The workbook turns a credential-protection program into something measurable. Combined with customers' outcome-based metrics (operational risk, control risk, end-user impact), it enables an executive-level, data-driven narrative for investment and policy decisions.

Troubleshooting tips
If you receive a (400): Bad request error, you likely do not have the necessary endpoint tables in Advanced Hunting. These errors may also appear if empty values are passed from the left-hand side of the KQL queries.

Detection trend
Apply filtering to this view based on the DLP policies that monitor credentials.
Trend Analysis Over Time: Displays daily detection counts, helping identify spikes in credential leakage activity and enabling proactive investigation.
Workload and Credential Type Breakdown: Shows which workloads (e.g., Endpoint, Exchange, OneDrive) and credential types are most affected, guiding targeted security measures.
Detection Source Visibility: Highlights which security tools (Sentinel, Cloud App Security, Defender) are catching leaks, ensuring monitoring coverage and identifying gaps.
Detailed Credential Exposure: Lists exposed credentials for quick validation and remediation, reducing the risk of misuse or compromise. (This part depends on the AI component.)
Supports Incident Response: Enables rapid triage by correlating detection trends with specific credentials and sources, improving response times.
Compliance and Audit Readiness: Provides clear evidence of credential monitoring and leakage detection for regulatory and governance reporting.

Credential incident trends
Lifecycle Tracking of Credential Alerts: Visualizes creation and resolution trends over time, helping teams measure response efficiency and identify periods of heightened risk.
Workload and Credential Type Breakdown: Shows which workloads (Endpoint, Exchange, OneDrive) and credential types are most impacted, enabling targeted mitigation strategies.
Incident Type Analysis: Highlights the distribution of alerts by category (e.g., CredRisk, Agent), supporting prioritization of critical incidents.
Detailed Alert Context: Provides message IDs and associated credentials for precise investigation and remediation, reducing time to contain threats.
Performance and SLA Monitoring: Tracks resolution timelines to ensure compliance with internal security SLAs and regulatory requirements.
Audit and Governance Support: Offers clear evidence of alert handling and closure, strengthening accountability and reporting.

Content view
Workload-Level Risk Visibility: Highlights which workloads (e.g., SharePoint, Endpoint) have the highest credential exposure, enabling targeted security hardening.
Departmental Risk Breakdown: Shows which departments (Security, Logistics, Sales) are most impacted, helping prioritise remediation for critical business areas.
Credential Type Analysis: Identifies exposed credential types such as API keys, shared access keys, and tokens, guiding policy enforcement and rotation strategies.
User and Document Correlation: Links exposed credentials to specific users and documents, supporting rapid investigation and containment of leaks.
Comprehensive Drill-Down: Enables navigation from department → credential type → user → document for precise root cause analysis.
Governance and Compliance Support: Provides auditable evidence of credential exposure across workloads and departments, strengthening regulatory reporting.
For endpoint, this view is an excellent way to catch applications that do not treat secrets safely and expose them in temporary files.

Force-directed graph
Visual Alert Correlation: Displays a force-directed graph linking users to alert categories, making it easy to identify patterns and clusters of credential-related risks.
High-Risk User Identification: Highlights users with multiple or severe alerts, enabling prioritisation for investigation and remediation.
Credential Type and Department Context: Shows which credential types and departments are most associated with alerts, supporting targeted security measures.
Alert Severity and Details: Provides a detailed table of alerts with severity and category, helping analysts quickly assess impact and urgency.
Improved Threat Hunting: Enables analysts to trace relationships between users, alert types, and credential exposure for deeper root cause analysis.
Compliance and Reporting: Offers clear evidence of monitoring and categorisation of credential-related alerts for governance and audit purposes.

Security incidents correlated to credential leakage
Focused on Credential Leakage: Provides a dedicated view of alerts related to exposed credentials, enabling quick detection and response.
Role-Based Risk Analysis: Breaks down incidents by department and role, helping prioritise remediation for high-risk groups such as developers and security teams.
User-Level Investigation: Allows drill-down to individual users involved in credential-related alerts for rapid containment and corrective action.
Credential Type Insights: Highlights which types of credentials (e.g., API keys, passwords) are most vulnerable, guiding policy improvements and rotation strategies.
Alert Source Correlation: Displays which security tools (Sentinel, MCAS, Defender) are detecting leaks, ensuring coverage and identifying monitoring gaps.
Compliance and Governance Support: Offers auditable evidence of credential monitoring, supporting regulatory and internal security requirements.

App and Network correlated to credential leakage
For network detection, adjust the query in production to remove standard applications if they are too noisy. We have seen cases where Word and other commonly used applications make calls using FTP services, for example, while other applications may add too much noise.
Token Detection Event Traceability: Shows detected token credential events linked directly to individual user IDs and device IDs for investigation.
Application Usage Context: Identifies the application associated with the detected activity (for example, ms-teams.exe).
External URL Association: Displays the remote URL connected to the token detection event.
Remote IP Visibility: Lists the remote IP addresses associated with the activity.
Entity-Level Correlation: Links UserId, DeviceId, application, remote URL, and remote IP within a single event flow. You can also select the port used or how apps are linked.
Detection Count Aggregation: Summarises the number of credential events tied to each correlated entity path.

Turn detection into decisions. Deploy the workbook today to get measurable insights, accelerate triage, and deliver audit-ready governance.
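The entity-level correlation and detection-count aggregation the workbook performs can be sketched in miniature. The event records and field names below are illustrative stand-ins, not the workbook's actual Advanced Hunting schema.

```python
from collections import Counter

# Illustrative detection events; field names are hypothetical stand-ins,
# not the workbook's actual Advanced Hunting schema.
events = [
    {"user": "alice", "device": "LT-01", "app": "ms-teams.exe", "cred_type": "Token"},
    {"user": "alice", "device": "LT-01", "app": "ms-teams.exe", "cred_type": "Token"},
    {"user": "bob",   "device": "WS-07", "app": "winword.exe",  "cred_type": "API key"},
]

# Entity-level correlation: count events per (user, device, app, credential type) path.
path_counts = Counter(
    (e["user"], e["device"], e["app"], e["cred_type"]) for e in events
)

# Credential-type breakdown for the workload/credential-type views.
by_cred_type = Counter(e["cred_type"] for e in events)
```

Grouping on the full entity path is what lets an analyst drill from an aggregate spike down to the specific user, device, and application that produced it.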
Start driving risk-aligned investment and policy changes with confidence. The PBI report is located here. Based on what you identify, you may use tools such as Data Security Investigations to go deeper. We are also working on surfacing the AI triaging in a context that will enrich the DLP analyst experience.

Post-Quantum Cryptography APIs Now Generally Available on Microsoft Platforms
Introduction
We are excited to announce a significant leap forward in security: Post-Quantum Cryptography (PQC) algorithms are now generally available in Windows Server 2025, Windows 11 clients (24H2, 25H2), and .NET 10. This major milestone is part of Microsoft's ongoing commitment to help organizations stay ahead of evolving cybersecurity threats and prepare for the era of quantum computing. This announcement aligns with the broader strategy of Microsoft's Quantum Safe Program (QSP), as highlighted in this blog post, which outlines the company's comprehensive roadmap for PQC readiness. The general availability of PQC algorithms in Windows Server 2025, Windows 11, and .NET 10 represents a significant initial step within the 'Foundational security components' phase of this initiative, with further milestones and enhancements planned to bolster security in the years ahead.

PQC Algorithms Now GA in Windows Server 2025 and Windows 11 Client
In May this year, we brought PQC to Windows Insiders. With the November update of Windows, we're bringing ML-KEM and ML-DSA to Windows Server 2025 and Windows 11 client via updates to the Cryptography API: Next Generation (CNG) libraries and certificate functions. Developers now have access to ML-KEM for scenarios requiring key encapsulation or key exchange, enhancing preparedness against the "harvest now, decrypt later" threat. Additionally, developers can adopt ML-DSA for scenarios involving identity verification, integrity checks, or digital-signature-based authentication. These updates represent a step towards enabling systems to safeguard sensitive data from both current and anticipated cryptographic challenges.
Enhanced Security: PQC algorithms provide resilience against potential quantum-based attacks, which are expected to render many traditional cryptographic schemes obsolete.
Seamless Integration: The PQC enhancements are integrated directly into the Windows cryptographic infrastructure, allowing for easy deployment and management.
Enterprise-Ready: These features have been extensively tested to meet the performance and reliability needs of enterprise environments.
Visit our crypto developer pages for ML-KEM and ML-DSA to learn more and get started.

General Availability of PQC in .NET 10
In addition to the Windows platform enhancements, we are thrilled to announce the general availability of PQC support in .NET 10. Developers can now build and deploy applications that use PQC algorithms, enabling robust data protection in the quantum era.
Developer Empowerment: .NET 10 integrates PQC options within its cryptographic APIs, making it simple for developers to modernize their security posture.
Cross-Platform Support: Build secure applications for Windows or Linux using the same PQC-enabled framework.
Future-Proofing: Adopt the latest cryptographic standards with minimal code changes and broad compatibility.
Learn more about these changes here, and check out .NET 10 to get started.

Coming Soon: PQC in Active Directory Certificate Services (ADCS)
Looking ahead, we are pleased to share that general availability of PQC capabilities in Active Directory Certificate Services (ADCS) is targeted for early 2026. This forthcoming update will further strengthen the foundation of your organization's identity and certificate management infrastructure.
Comprehensive Coverage: PQC support in ADCS will enable issuance and management of certificates using PQC algorithms.
Easy Migration: Detailed guidance and configuration examples will be provided to help organizations transition their PKI environments to PQC.
Long-Term Security: Protect identities, devices, and communications well into the quantum era with minimal disruption.
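The keygen/encapsulate/decapsulate pattern that a KEM such as ML-KEM exposes can be sketched abstractly. The toy below uses a classical Diffie-Hellman-style construction with deliberately tiny demo parameters; it is NOT ML-KEM and is not secure, but it mirrors the three-call data flow a developer codes against (consult the linked ML-KEM documentation for the real API surface).

```python
import hashlib
import secrets

# Toy KEM illustrating the keygen/encapsulate/decapsulate API shape.
# Demo-only parameters; this is NOT ML-KEM and NOT secure.
P = 2**127 - 1   # toy prime modulus (demo only)
G = 3            # toy generator (demo only)

def keygen():
    """Recipient: generate a decapsulation (secret) key and an encapsulation (public) key."""
    sk = secrets.randbelow(P - 2) + 1
    pk = pow(G, sk, P)
    return sk, pk

def encapsulate(pk):
    """Sender: derive a fresh shared secret plus a ciphertext to transmit."""
    r = secrets.randbelow(P - 2) + 1
    ciphertext = pow(G, r, P)
    shared_secret = hashlib.sha256(str(pow(pk, r, P)).encode()).digest()
    return ciphertext, shared_secret

def decapsulate(sk, ciphertext):
    """Recipient: recover the same shared secret from the ciphertext."""
    return hashlib.sha256(str(pow(ciphertext, sk, P)).encode()).digest()

sk, pk = keygen()
ct, sender_secret = encapsulate(pk)
recipient_secret = decapsulate(sk, ct)  # equals sender_secret
```

Only the ciphertext travels over the wire; both parties end up with the same secret, which is what makes KEMs a drop-in building block for protecting data in transit against "harvest now, decrypt later" adversaries.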
What Lies Ahead: Upcoming Developments and Challenges

As cryptographic standards advance, SymCrypt will continue to incorporate additional quantum-resistant algorithms to maintain its leadership in security innovation. The development of PQC support for securing TLS is proceeding in alignment with IETF standards, aiming to provide strong protection for data in transit. In addition, Microsoft is preparing other essential domains—including firmware and software signing, identity, authentication, network security, and data protection—to be PQC-ready. In collaboration with ecosystem partners, these initiatives extend the reach of quantum-safe security throughout the broader ecosystem.

Because PQC algorithms are still relatively new, it is important for organizations to build in "crypto agility," allowing systems to adapt as standards evolve. Microsoft advises customers to begin planning their transition to PQC by integrating new algorithms and adopting solutions that support both current and future cryptographic needs. In some cases, this means deploying PQC in hybrid or composite modes—combining a post-quantum algorithm with a traditional one such as RSA or ECDHE. Other situations may call for enabling pure PQC algorithms while maintaining compatibility with existing standards. Over time, as quantum technologies mature, we may see a shift toward PQC only.

PQC algorithms may require increased computational resources, making ongoing optimization and hardware acceleration necessary to achieve an effective balance between security and performance. The transition to PQC also involves updating cryptographic infrastructure, maintaining compatibility with legacy systems, and coordinating among developers, hardware manufacturers, and service providers. Education and awareness are likewise important for broad adoption and compliance.
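To make the hybrid mode concrete: the idea is that two shared secrets are established independently (one classical, such as ECDHE, and one post-quantum, such as ML-KEM) and then combined into a single session key, so an attacker must break both algorithms to recover it. The sketch below mocks the two input secrets with random bytes and combines them with a minimal HKDF-SHA256 (RFC 5869) built from the standard library; real protocols each define their own exact combiner, so treat this purely as an illustration of the principle.

```python
# Sketch of hybrid key derivation: combine a classical shared secret with
# a PQC shared secret via HKDF so the session key survives a break of
# either algorithm. Input secrets are mocked with random bytes.
import hashlib
import hmac
import secrets

def hkdf(ikm, salt, info, length=32):
    """Minimal HKDF-SHA256 (RFC 5869): extract, then expand."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()      # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                # expand step
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Stand-ins for the two independently negotiated shared secrets.
classical_ss = secrets.token_bytes(32)   # e.g., from an ECDHE exchange
pqc_ss = secrets.token_bytes(32)         # e.g., from an ML-KEM encapsulation

# Concatenate and derive: recovering the session key requires BOTH inputs.
session_key = hkdf(classical_ss + pqc_ss,
                   salt=b"\x00" * 32,
                   info=b"hybrid-key-exchange-demo")
assert len(session_key) == 32
```

This is also why hybrid modes are a practical on-ramp for crypto agility: the classical component preserves interoperability with existing peers while the PQC component is phased in.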
Next Steps and Resources

We encourage IT administrators, developers, and security professionals to begin leveraging PQC features in Windows Server 2025, Windows 11, and .NET 10, and to prepare for the upcoming enhancements in ADCS. Detailed documentation and best practices are available here:

- Using ML-KEM with CNG for Key Exchange
- Using ML-DSA with CNG for Digital Signatures
- What's new in .NET libraries for .NET 10

Conclusion

Microsoft is committed to helping customers secure their environments against the threats of today and tomorrow. The general availability of PQC algorithms across our platforms marks a new era of cybersecurity resilience. We look forward to partnering with you on this journey and enabling a safer, quantum-ready future.

Securing the present, innovating for the future

Security is a shared responsibility. Through collaboration across hardware and software ecosystems, we can build more resilient systems that are secure by design and by default, from Windows to the cloud, enabling trust at every layer of the digital experience. The updated Windows Security book and Windows Server Security book are available to help you understand how to stay secure with Windows.

Learn more about Windows 11, Windows Server, and Copilot+ PCs. To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

Registration Open: Community-Led Purview Lightning Talks
Get ready for an electrifying event! The Microsoft Security Community proudly presents Purview Lightning Talks: an action-packed series featuring your fellow Microsoft users, partners, and passionate Microsoft Security community members of all sorts. Each 3-12 minute talk cuts straight to the chase, delivering expert insights, real-world use cases, and even a few game-changing tips and tricks. Don't miss this opportunity to learn, connect, and be inspired! Secure your spot now for the big day: April 30th at 8am Redmond time. 💙

See agenda details below and follow this blog post (sign in and click the "follow" heart in the upper right) to receive notifications. We have more speaker details and community connection information coming soon!

AGENDA

- The Day Offboarding Exposed Infinite Retention - Nikki Chapple (nikkichapple): A real-world discovery of orphaned OneDrives and retention debt caused by retain-only policies, and how Adaptive Scopes help prevent it. Topic: Data Lifecycle Management
- Securing Data in the Age of AI - Julio Cesar Goncalves Vasconcelos: How Microsoft Purview enables organizations to accelerate AI adoption while maintaining security, compliance, and transparency. Topic: Purview for AI
- What's In My Compliance Manager Toolbox - Jerrad Dahlager (j-dahl7): A practical walkthrough of using Compliance Manager to map controls, track improvements, and simplify multi-framework compliance. Topic: Compliance Manager
- Why You Should Create Your Own Sensitive Information Types (SITs) - Niels Jakobsen (Niels_Jakobsen): An in-depth analysis of why built-in SITs are not one-size-fits-all, and how to tailor them for real enterprise needs. Topic: Information Protection
- Beyond eDiscovery – Purview DSI for Security Investigation - Susantha Silva: How to turn DLP alerts and Insider Risk signals into structured data investigations without jumping between portals. Topic: Data Security (DSI)
- Four Labels Max for Daily Use: Which Ones & Why? - Romain Dalle (Romain DALLE): A minimalist sensitivity labeling baseline designed for real-world adoption and usability. Topic: Information Protection
- Elevating Purview DLP with a Real-World Use Case - Victor Wingsing (vicwingsing): Hardening Purview DLP beyond default configurations to close real-world data loss gaps. Topic: Data Loss Prevention (DLP)
- Stop, Think, Protect: Data Security in Real Life with Purview - Oliver Sahlmann: A traffic-light approach showing how simple labels and DLP policies still deliver meaningful protection. Topic: Data Security
- The Purview Label Engine: Automated Classification & Documentation - Michael Kirst Neshva (MichaelKirst1970): A scalable framework for rolling out Microsoft Purview labels across global, multilingual enterprises. Topic: Information Protection
- Data-Driven Endpoint DLP with Advanced Hunting - Tatu Seppälä (tatuseppala): Using KQL queries and usage patterns to refine endpoint DLP policies based on real behavior. Topic: Data Loss Prevention (DLP)
- Improving Discovery, Trust, and Reuse of Analytics with Purview Data Products - Craig Wyndowe (CraigWyndowe): How Purview Governance Domains and Data Products create a trusted, reusable analytics ecosystem. Topic: Data Governance
- From Zero to First Signal: Insider Risk Management Prerequisites That Matter - Sathish Veerapandian: A focused look at the configurations required for Insider Risk Management to actually generate alerts. Topic: Insider Risk Management
- The Purview Hack No One Talks About: Container Sensitivity Labels - Nikki Chapple (nikkichapple): How container sensitivity labels instantly fix oversharing for Teams, Groups, and SharePoint sites. Topic: Information Protection
- Using Purview to Prevent Oversharing with AI Services - Viktor Hedberg (headburgh): How Information Protection and DLP prevent Copilot and AI services from exposing sensitive data. Topic: Information Protection & DLP
- How I Helped Customers Understand Their AI Usage (and Protect Data) - Bram de Jager: Exposing risky AI usage patterns and protecting sensitive data entered into public AI tools. Topic: Data Security Posture Management for AI
- Bulk Sensitivity Label Removal with Microsoft Purview Information Protection (MPIP) - Zak Hepler: A practical demo on safely removing sensitivity labels at scale from SharePoint libraries. Topic: Information Protection
- Does M365 Support eDiscovery? (Mythbusting) - Julian Kusenberg (Leprechaun91): A myth-busting session separating perception from reality in Microsoft 365 eDiscovery. Topic: eDiscovery

Part 3: DSPM for AI: Governing Data Risk in an Agent‑Driven Enterprise
Why Agent Security Alone Is Not Enough

Foundry‑level controls are designed to prevent unsafe behavior and bound autonomy at runtime. But even the strongest preventive controls cannot answer key governance questions on their own:

- Where is sensitive data being used in AI prompts and responses?
- Which agents are interacting with high‑risk data—and how often?
- Are agents oversharing, drifting from expected behavior, or creating compliance exposure over time?
- How do we demonstrate control, auditability, and accountability for AI systems to regulators and leadership?

These are not theoretical concerns. With agents acting continuously and autonomously, risk no longer shows up as a single event—it shows up as patterns, trends, and posture. DSPM for AI exists to make those patterns visible.

At its core, DSPM for AI provides a centralized, risk‑centric view of how data is used, exposed, and governed across AI applications and agents. It shifts the conversation from individual incidents to organizational posture. DSPM for AI answers a simple but critical question: "Given how our AI systems are actually being used, what is our current data risk—and where should we intervene?"

Unlike traditional DSPM, DSPM for AI expands visibility into:

- Prompts and responses
- Agent interactions with enterprise data
- Oversharing patterns
- Agent‑driven risk signals
- Trends across first‑party and third‑party AI usage

What DSPM for AI Brings into Focus

1. AI Interaction Visibility

DSPM for AI treats AI prompts, responses, and agent activity as first‑class security telemetry. This allows security teams to see:

- Sensitive data being submitted to AI systems
- High‑risk interactions involving regulated information
- Repeated exposure patterns rather than one‑off events

In short, AI conversations become auditable security signals, not blind spots.

2. Oversharing and Exposure Risk

One of the most common AI risks is unintentional oversharing—especially when agents retrieve or combine data across systems. DSPM for AI makes it possible to:

- Identify where sensitive data exists but is poorly labeled
- Detect when unlabeled or over‑shared data is being accessed via AI
- Prioritize remediation based on actual usage, not static classification

This ties directly back to the Sensitive Data Leakage patterns discussed earlier—but at an organizational scale.

3. Agent‑Level Risk Context

DSPM for AI extends posture management beyond users to agents themselves. Security teams can:

- Inventory agents operating in the environment
- View agent activity trends
- Identify agents exhibiting higher‑risk behavior patterns

This enables a powerful shift: agents can be assessed, reviewed, and governed just like digital workers.

4. Bridging Security, Compliance, and Audit

DSPM for AI connects operational security with governance outcomes. Through integration with audit logs, retention, and compliance workflows, organizations gain:

- Evidence for investigations and regulatory inquiries
- Consistent compliance posture across human and agent activity
- A defensible, repeatable governance model for AI systems

This is where AI risk becomes explainable, reportable, and manageable—not just prevented.

How DSPM for AI Complements Azure AI Foundry

If Azure AI Foundry provides the control plane that enforces safe agent behavior, DSPM for AI provides the visibility plane that measures how that behavior translates into risk over time. Think of it this way:

- Foundry controls prevent and constrain
- DSPM for AI observes, measures, and prioritizes
- Together, they enable continuous governance

Without DSPM, security teams are left guessing whether controls are effective at scale. With DSPM, risk becomes quantifiable and actionable.

Why This Matters for Security Leaders

For security leaders, agentic AI introduces a familiar challenge in an unfamiliar form:

- Risk is non‑deterministic
- Behavior changes over time
- Impact can span multiple systems instantly

DSPM for AI gives leaders the ability to:

- Monitor AI risk like any other enterprise workload
- Prioritize remediation where it matters most
- Move from reactive investigations to proactive governance

This is not about slowing innovation—it's about making AI adoption defensible.

Closing: From Secure Agents to Governed AI

Securing agents is necessary—but it is not sufficient on its own. As AI systems increasingly act on behalf of the organization, governance must shift from individual controls to continuous posture management. DSPM for AI provides the missing link between prevention and accountability, turning fragmented AI activity into a coherent risk narrative.

Together, Azure AI Foundry and DSPM for AI enable organizations to not only build and deploy agents safely, but to operate AI systems with clarity, confidence, and control at scale. In the agentic era, security prevents incidents—but governance determines trust.

Part 2: Securing AI Agents with Azure AI Foundry: From Abuse Patterns to Lifecycle Controls
Every agent abuse pattern we've explored points to a specific control gap, not a theoretical flaw. Across all patterns, one theme consistently emerges: agents behave logically according to how they are configured. When failures occur, it's rarely because the model "got it wrong"—it's because the surrounding system granted too much freedom, trust, or persistence without adequate guardrails.

This is exactly the problem Azure AI Foundry is designed to address. Rather than treating security as an add‑on, Foundry embeds controls directly into the agent platform, ensuring protection does not rely on custom glue code or fragmented tools. Effective agent security, therefore, is not concentrated in a single layer—it is enforced end‑to‑end across the agent lifecycle. In practice, Foundry delivers controls across all of the critical dimensions where agent abuse occurs:

- Instructions — governing what the agent is intended to do, with built‑in protections for prompts, prompt injection, and task adherence
- Identity — treating agents as first‑class identities, enforcing least privilege and accountability from day one
- Tools — constraining which tools agents can invoke, under what conditions, and with what approvals
- Data — extending enterprise data security, classification, and DLP controls directly to agent interactions
- Runtime behavior — providing continuous observability, detection, and evaluation of what agents are actually doing in production

Because these controls are natively integrated, Foundry enables teams to secure agents without redesigning their architecture around security after the fact. With that context, let's map each agent abuse pattern to the specific Foundry controls that help prevent it, detect it early, or limit its impact in real‑world deployments.
Jailbreaks → Instruction & Runtime Protection in Azure AI Foundry

The Risk Recap

Jailbreaks attempt to override system or developer instructions by exploiting language ambiguity, instruction hierarchy, and the model's default helpfulness. For agents, this risk escalates quickly—from unsafe outputs to unauthorized real‑world actions—once tools and identities are involved.

How Azure AI Foundry Addresses This

Azure AI Foundry implements jailbreak protection before execution and at runtime, ensuring malicious intent is intercepted early and contained if it reappears later in the workflow. Foundry capabilities applied:

- Prompt Shields (Azure AI Content Safety) to detect and block direct jailbreak attempts at input
- Spotlighting to reduce the influence of adversarial or instruction‑override prompts
- Runtime detection and alerting (via built‑in observability and Defender integration) to surface attacker intent and suspicious prompts
- Least‑privilege agent identity (Entra integration) to ensure that even successful linguistic manipulation cannot translate into unauthorized actions
- Continuous evaluation and red‑teaming built into the agent lifecycle to validate resilience before deployment

Core takeaway: In Foundry, jailbreak protection is not limited to prompt design—it is enforced across instruction handling, identity, and runtime execution.

Prompt Injection → Context & Task Integrity in Azure AI Foundry

The Risk Recap

Prompt injection alters what the agent believes its instructions are—often indirectly, through documents, emails, or RAG data sources. For agents, indirect prompt injection (XPIA) is especially dangerous because it is invisible to users and can quietly redirect agent behavior.

How Azure AI Foundry Addresses This

Foundry treats prompt trust and task integrity as first‑class security concerns, not just input filtering problems. Foundry capabilities applied:

- Prompt Shields with Spotlighting to neutralize hidden or embedded instructions from untrusted content
- Task Adherence Controls to continuously verify that the agent remains aligned to its approved goal or workflow
- Runtime detection to identify context manipulation and instruction smuggling as it occurs—before tools are invoked

Core takeaway: Azure AI Foundry protects not just prompts, but the integrity of agent context and intent throughout execution.

Memory Poisoning → Memory Governance & Observability in Azure AI Foundry

The Risk Recap

Memory poisoning persists across sessions and workflows. Once malicious or misleading information is written into memory, agents continue to act on it—often silently—making memory a long‑term attack surface.

How Azure AI Foundry Addresses This

Foundry treats agent memory as a governed state, not an unrestricted persistence layer. Foundry capabilities applied:

- Controlled memory persistence to limit what information can be written and retained
- Built‑in observability and tracing to monitor behavioral drift across interactions and over time
- Task adherence over time to detect delayed‑trigger abuse and gradual deviation from intended goals
- Red‑team evaluation workflows that simulate memory‑based abuse scenarios before agents reach production

Core takeaway: In Azure AI Foundry, memory is governed, observable, and testable—preventing attackers from gaining persistence through long‑lived agent state.

Excessive Autonomy → Identity, Tool & Approval Guardrails in Azure AI Foundry

The Risk Recap

Excessive autonomy occurs when agents are over‑empowered—too many tools, too many permissions, too little oversight. The agent may function "correctly," but the blast radius grows exponentially.

How Azure AI Foundry Addresses This

Foundry is designed to constrain autonomy without breaking productivity by enforcing boundaries at identity, tool, and workflow levels. Foundry capabilities applied:

- Agent identity as a first‑class identity with least‑privilege enforcement from creation
- Tool guardrails to explicitly define which tools an agent can invoke, and under what conditions
- Approval and checkpointing controls to introduce human‑in‑the‑loop enforcement for high‑impact actions
- Runtime tool monitoring to detect anomalous or risky behavior across integrated systems

Core takeaway: Azure AI Foundry ensures that autonomy is intentional, bounded, and accountable—not accidental or unchecked.

Sensitive Data Leakage → Integrated Data Security & Governance in Azure AI Foundry

The Risk Recap

Sensitive data leakage is often unintentional and difficult to detect after the fact. Agents can expose data through responses, memory, logs, or tool outputs while behaving "helpfully."

How Azure AI Foundry Addresses This

Foundry extends enterprise‑grade data security directly into agent workflows, rather than treating agents as exceptions. Foundry capabilities applied:

- Output content filtering to detect and redact sensitive data before responses are returned
- Microsoft Purview integration to enforce classification, labeling, DLP, auditing, and compliance policies on agent interactions
- Runtime exfiltration detection to identify risky access or transfer patterns as they happen
- End‑to‑end observability and lineage to trace exactly where sensitive data was accessed, used, or leaked

Core takeaway: In Azure AI Foundry, agents inherit the same data security and governance expectations as humans and applications—by default.

Closing: Governing Agent Risk at Enterprise Scale

The patterns outlined in this post point to a critical shift in how organizations must think about AI risk. As agents gain the ability to act autonomously, retain state, and operate continuously across systems, risk becomes systemic, fast‑moving, and inherently scalable. In this environment, isolated safeguards or one‑time reviews are no longer sufficient.
Azure AI Foundry addresses this challenge by embedding security controls across the entire agent lifecycle—from how agents are designed and authorized, to how they behave in production, to how their actions are continuously monitored and evaluated over time. This lifecycle‑integrated approach ensures that autonomy is paired with visibility, enforceable boundaries, and accountability by design.

For security and risk leaders, the question is no longer whether agents can be deployed safely in a controlled pilot. The real test is whether they can be operated predictably, transparently, and at scale as they become part of critical business workflows. As you evaluate or expand agentic AI in your organization:

- Inventory and classify your agents as you would any other enterprise workload
- Treat agents as identities, enforcing least privilege and clear accountability
- Align controls to the full lifecycle, not just prompts or outputs
- Demand continuous visibility and evaluation, not point‑in‑time assurances

Agents will increasingly act on behalf of the business. Ensuring they do so safely requires governance that moves at the same speed as autonomy. In an agent‑driven enterprise, trust isn't assumed—it is continuously enforced.