Security Dashboard for AI: 3 Ways CISOs Drive Impact Today
AI is reshaping the enterprise and, with it, the threat landscape. Today's organizations face new threats from AI agents that modify configurations, execute workflows, and access data without direct human oversight. As a result, the gap between AI adoption and AI governance is widening, and CISOs face growing challenges to maintain visibility, control, and compliance across an increasingly complex ecosystem.

As AI becomes embedded across the enterprise, CISOs face four key challenges:

- Scale without visibility: Over 75% of enterprises surveyed by PwC report they are already adopting AI agents.¹ At the same time, over 80% of security teams surveyed report visibility gaps into the applications and AI agents created within their organization.² Rapid AI proliferation and evolving regulations make unified visibility across AI platforms, apps, and agents critical for CISOs.
- Fragmentation: Organizations rely on multiple siloed tools for AI asset visibility, making oversight fragmented and inefficient. According to Gartner's 2024 survey of 162 enterprises, organizations use 45 cybersecurity tools on average.
- Expanding AI risk: AI proliferation is rapidly increasing the attack and risk surface, driven by the surge of AI-generated identities. By 2027, 4 out of 5 organizations will face phishing attacks powered by AI-generated synthetic identities, according to IDC.³ This makes it harder for CISOs to track emerging threats, unmanaged assets, and shifting risk patterns.
- Overload: Alert fatigue is now a top challenge, with organizations receiving an average of 2,992 security alerts daily, of which 63% go unaddressed.⁴ Increasing AI risk without a way to prioritize what matters most compounds the pressure on CISOs.

In conversations between Microsoft and CISOs, one common need emerged: a single place to view integrated AI risk across the enterprise.
To address these growing challenges, we are excited to provide CISOs with the Security Dashboard for AI, which recently became generally available. This dashboard aggregates posture and real-time risk signals from Microsoft Defender, Entra, and Purview into one unified, executive-level view of AI posture, risk, and inventory across agents, apps, and platforms. The Security Dashboard for AI helps CISOs:

- Gain unified AI risk visibility: Discover AI agents and applications and continuously monitor posture across the environment.
- Prioritize critical risks: Correlate signals across identity, data, and threat protection to surface the most urgent issues.
- Drive risk mitigation: Investigate activity and take action to help reduce exposure across the AI ecosystem.

The dashboard aggregates and surfaces AI risks from across Microsoft Defender, Entra, and Purview, including Microsoft 365 Copilot, Microsoft Copilot Studio agents, and Microsoft Foundry applications and agents, as well as cross-platform AI risks via Microsoft network-based or SDK-enabled integrations and MCP servers. This supports comprehensive visibility and control, regardless of where applications and agents are built. As you activate Microsoft Security for AI capabilities, you gain richer visibility into different aspects of your AI risk posture.

Figure 1: Security Dashboard for AI in browser

Getting Started with the Security Dashboard for AI

The Security Dashboard for AI is provided at no additional cost to customers already using Defender, Entra, and/or Purview to protect their AI innovation. Based on how early adopter CISOs are using the dashboard, here are three ways you can start leveraging it today.

1. Manage Daily AI Risk

Beyond reporting, you must stay hands-on with AI risks: scanning for emerging issues, verifying asset governance, and delegating remediations.
The Security Dashboard for AI consolidates daily operations into a single pane of glass, surfacing critical alerts, unmanaged assets, and emerging risks. Use the dashboard as a daily AI risk radar, enabling rapid triage and ensuring you focus on the most urgent threats.

- Scan and triage daily AI risk: Start each day by identifying and prioritizing the highest-risk AI exposures. Risks are ranked by the severity reported by the underlying security tools, helping you focus on the most critical exposures.
- Track AI asset inventory and monitor agent sprawl: Use the Inventory page to gain comprehensive visibility into all AI assets. Identify newly registered assets to mitigate the risk of shadow or unmanaged IT, and surface inactive agents to proactively monitor and control agent sprawl.
- Delegate tasks for remediation: Move from insight to action by delegating tasks to your security team with simple click-based delegation. Delegation routes ownership via email or Microsoft Teams, with notifications, due dates, and ownership tracking. Delegate actions to specific roles, such as Global Admin and AI Administrator, without granting full access to the underlying tools.

Figure 2: Security Dashboard for AI risk page

2. Guide Briefings with Security Teams

You need up-to-date intelligence to guide conversations with security teams about what is happening across the AI estate. The Security Dashboard for AI helps you anchor discussions in specific risks, trends, and ownership gaps surfaced in the data. The dashboard becomes a conversation driver, helping you ask the right questions about risk and security posture so that you and your team triage the right priorities. Because the dashboard consolidates signals from Defender, Entra, and Purview, the CISO and security teams operate from the same facts, enabling more outcome-driven discussions and faster prioritization, so you can shift conversations from status updates to targeted action planning.
- Prioritize top AI risks: Use the dashboard to help prioritize the AI risks that matter most. In preparation for team meetings, use Microsoft Security Copilot to explore AI risks, agent activity, and security recommendations via prompts to strengthen your AI security posture. With your team, take a closer look at risk vectors like data leakage, oversharing, and unethical behavior, and discuss what actions need to be taken.
- Review security recommendations: Create a routine with your security team to review the recommended Microsoft security actions and track your progress over time. Across regular team check-ins, review what has been addressed, what remains open, and which actions require follow-up so you are prepared to respond to regulatory, audit, or executive questions with up-to-date metrics.

Figure 3: Security Dashboard for AI inventory page
Figure 4: Security Dashboard for AI delegation

3. Executive Reporting

Reporting to the board on AI security posture has historically meant weeks of manual data gathering across multiple tools. The Security Dashboard for AI streamlines data collection with a single source of truth for AI risk, enabling confident, data-backed insights for your board presentations and conversations. Early adopters confirm the value and are using it for quarterly executive briefings.

- Prepare for board discussions: Use the dashboard to get the right insights at the right altitude as you prepare for discussions with your board. The Overview page aggregates identity, data security, and threat protection signals from Defender, Entra, and Purview into an AI risk scorecard with risk factors. The embedded Security Copilot AI-powered insights provide suggested prompts with risk assessments, summaries, and recommendations to help you prioritize what matters most.
- Extend observability to executive stakeholders: Route AI risk follow-ups to the appropriate security, identity, or governance owners using Microsoft Teams or email. Distribute visibility across GRC, AI governance, and IT leaders while maintaining executive-level oversight.

Figure 5: Security Dashboard for AI Copilot prompt gallery

Next Steps

The Security Dashboard for AI helps CISOs manage AI risk faster, more confidently, and more collaboratively with their teams. Defender, Entra, and Purview signals are surfaced in a single pane of glass, providing observability across your AI estate. Drive faster triage, use data to support board-level discussions about AI risk, and enable coordinated action with integrated insights, recommendations, and delegation to help accelerate remediation across existing security workflows.

The Security Dashboard for AI is generally available now. If your organization uses Microsoft Defender, Entra, and/or Purview, you already have access; no additional licensing is required. Visit ai.security.microsoft.com to access the dashboard directly, or navigate to it from the Defender, Entra, or Purview portals. Learn more about the Security Dashboard for AI on the MS Learn page and the Security Dashboard for AI Security Blog. Discover new features in the Security Dashboard for AI, such as the Security Reader role, the new delegation flow, and the new identity risk section, here.

¹ AI agent survey. PwC, May 2025
² Security Teams Taking on Expanded AI Data Responsibilities. Bedrock Data, March 2025
³ IDC FutureScape: Worldwide Security and Trust 2026 Predictions. IDC, November 2025
⁴ 2026 State of Threat Detection and Response Report. Vectra AI, February 2026

Introducing the Azure Resource Manager MCP Server!
We're super excited to announce the public preview of the Azure Resource Manager MCP Server! This is a remote MCP server that gives AI agents first-class access to Azure infrastructure operations through Azure Resource Manager (ARM). AI agents can now be equipped with tools to generate, validate, and execute Azure Resource Graph (ARG) queries, and with tools to deploy and manage ARM template deployments. At its core, this server is built to help AI agents interact with Azure resources seamlessly.

What this means for you

- Ask natural language questions about your Azure estate and get real-time, accurate answers backed by an ARG query
- Deploy and manage infrastructure easily by having AI deploy ARM templates for you
- Monitor deployment status and catch issues before they escalate
- Build more advanced AI agents that understand your Azure environment

What You Can Do Today

Generate, Validate, and Execute Azure Resource Graph Queries from Natural Language

No need to struggle with writing KQL from scratch! Describe what you need, and the MCP server tool generates Azure Resource Graph queries that match your intent. You ask an AI agent: "Find all virtual machines in my subscription that don't have managed disks." It uses the tool and returns a ready-to-execute ARG query, with no manual KQL writing.

Deploy, Monitor, and Cancel ARM Template Deployments

Pass an ARM template, and the MCP server kicks off the deployment targeted to an existing resource group scope. Monitor the deployment by checking its status, and even cancel it if you decide it's not doing what you need it to.

Here is the complete list of tools available in this preview:

- generate_query
- validate_query
- execute_query
- create_template_deployment
- get_arm_template_deployment_status
- cancel_arm_template_deployment

Real-World Scenarios

Infrastructure Compliance Audit

"Show me all resources created in the last 30 days that don't have required tags."
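To give a feel for what the generated query could look like, here is a hand-written ARG query sketch for a tag-compliance ask like the one above (not actual tool output; `CostCenter` is an illustrative required tag name, and ARG does not expose creation time uniformly across resource types, so this sketch checks only the tag condition):

```kusto
// Azure Resource Graph query (KQL): list resources missing a required tag.
// 'CostCenter' is an assumed tag name for illustration only.
Resources
| where isnull(tags['CostCenter'])
| project name, type, resourceGroup, subscriptionId
```

In this preview, the validate_query and execute_query tools would then check and run a query of this shape against your subscription; you could also run it yourself via the Azure Resource Graph Explorer in the portal.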
- The MCP server generates and executes the query, returning resources that need remediation. Your team can then fix them programmatically or through Copilot.

Rapid Infrastructure Provisioning

"Using this ARM template <path to template>, deploy a secure storage account with HTTPS-only access, private endpoints, and Standard_LRS replication to my production resource group."

- This takes an existing ARM template and deploys it to a resource group scope.

Policy Compliance Check

"Check if all resources in my subscription comply with the latest policy applied to it."

- The MCP server generates and executes the query, returning resources that are non-compliant. Your team can then take corrective action programmatically or through Copilot.

Building Agents with the Azure Resource Manager MCP Server

The MCP server's tools can be integrated into custom agents you build with GitHub Copilot. This means you can create custom agents that automatically check compliance, track changes in a scope, or ensure all resources have a particular tag applied!

Getting Started

Prerequisites

- VS Code installed
- Valid Azure account with appropriate permissions
- GitHub Copilot subscription

Installation

Install the MCP server:

1. Open https://aka.ms/JoinAzMgmtMCP
2. VS Code launches automatically
3. Click Install under Azure Resource Manager MCP Server
4. Sign in with your Azure credentials
5. If you hit any authentication issues, see the Troubleshooting Guide in our repo

Check that the tools are enabled in Chat:

1. Open Chat in VS Code (View > Chat)
2. Click Configure Tools
3. Ensure the six Azure Resource Manager MCP Server tools are enabled

Start using it:

1. Ask Copilot a question about your Azure resources or infrastructure needs
2. The MCP server handles the rest

Governance & Security

The Azure Resource Manager MCP Server respects your Azure permissions and governance policies. All operations run in the context of your signed-in user. Additionally, you can apply Azure Policies to prevent deployments via the MCP Server.
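To make the rapid-provisioning scenario above concrete, here is a minimal ARM template sketch of the kind you could hand to create_template_deployment (illustrative only: the account name and API version are assumptions, and a production template would also define the private endpoint resources the prompt mentions):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2023-01-01",
      "name": "prodsecurestorage01",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2",
      "properties": {
        "supportsHttpsTrafficOnly": true,
        "minimumTlsVersion": "TLS1_2",
        "publicNetworkAccess": "Disabled"
      }
    }
  ]
}
```

Because create_template_deployment targets an existing resource group scope, get_arm_template_deployment_status and cancel_arm_template_deployment would then operate on the deployment this template produces.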
Find more details in the README of our documentation repo.

What's Next?

We are actively expanding the capabilities of the Azure Resource Manager MCP Server! The server will expand to include:

- Additional ARM API capabilities
- Enhanced query generation and optimization
- Support for additional MCP clients beyond VS Code; next up: Claude

Feedback

We want to hear from you. Try the public preview and share your feedback. Found a bug? Have a feature request? Open an issue on GitHub at https://aka.ms/ARMMCPIssue

Resources

- 📖 Full Documentation – Complete setup and usage guide
- 🔗 Install Now – Get started with the public preview
- 🐛 Report Issues – Share feedback and bugs
- ❓ FAQ – Common questions answered
- 🛠️ Troubleshooting – Resolve common issues

Try It Today

The Azure Resource Manager MCP Server public preview is available now. Visit https://aka.ms/JoinARMMCP to install and start automating your Azure infrastructure with AI. What agents will you build with these tools? We can't wait to see how you'll use this.

Steven Bucher
PM on Azure Resource Manager and Azure Governance

Security Dashboard for AI - Now Generally Available
AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions, across their entire AI estate, spanning AI platforms, apps, and agents. 53% of security professionals say their current AI risk management needs improvement, presenting an opportunity to better identify, assess, and manage risk.¹ At the same time, 86% of leaders prefer integrated platforms over fragmented tools, citing better visibility, fewer alerts, and improved efficiency.²

To address these needs, we are excited to announce that the Security Dashboard for AI, previously announced at Microsoft Ignite, is now generally available. This unified dashboard aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview, enabling users to see left-to-right across purpose-built security tools from within a single pane of glass. The dashboard equips CISOs and AI risk leaders with a governance tool to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. Security teams can continue using the tools they trust, while security leaders are empowered to govern and collaborate effectively.

Gain Unified AI Risk Visibility

Consolidating risk signals from across purpose-built tools can simplify AI asset visibility and oversight, increase security teams' efficiency, and reduce the opportunity for human error. The Security Dashboard for AI provides leaders with unified AI risk visibility by aggregating security, identity, and data risk across Defender, Entra, and Purview into a single interactive dashboard experience.
The Overview tab of the dashboard provides an AI risk scorecard, giving immediate visibility into where there may be risks for security teams to address. It also assesses an organization's implementation of Microsoft Security for AI capabilities and provides recommendations for improving AI security posture. The dashboard also features an AI inventory with comprehensive views to support AI asset discovery, risk assessment, and remediation actions, for broad coverage of AI agents, models, MCP servers, and applications.

The dashboard provides coverage for all Microsoft AI solutions supported by Entra, Defender, and Purview (including Microsoft 365 Copilot, Microsoft Copilot Studio agents, and Microsoft Foundry applications and agents), as well as third-party AI models, applications, and agents, such as Google Gemini, OpenAI ChatGPT, and MCP servers. This supports comprehensive visibility and control, regardless of where applications and agents are built.

Prioritize Critical Risk with Security Copilot's AI-Powered Insights

Risk leaders must do more than recognize existing risks; they also need to determine which ones pose the greatest threat to their business. The dashboard provides a consolidated view of AI-related security risks and leverages Security Copilot's AI-powered insights to help find the most critical risks within an environment. For example, Security Copilot's natural language interaction improves agent discovery and categorization, helping leaders identify unmanaged and shadow AI agents to enhance security posture. Furthermore, Security Copilot allows leaders to investigate AI risks and agent activities through prompt-based exploration, putting them in the driver's seat for deeper risk investigation.

Drive Risk Mitigation

By streamlining risk mitigation recommendations and automating task delegation, organizations can significantly improve the efficiency of their AI risk management processes.
This approach can reduce hidden AI risk and accelerate compliance efforts, helping to ensure that risk mitigation is timely and accurate. To address this, the Security Dashboard for AI evaluates how organizations put Microsoft's AI security features into practice and offers tailored suggestions to strengthen AI security posture. It leverages Microsoft's productivity tools for immediate action within the practitioner portal, making it easy for administrators to delegate recommended tasks to designated users.

With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms, eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, the Security Dashboard for AI is included with the eligible Microsoft security products customers already use. If an organization is already using Microsoft security products to secure AI, it is already a Security Dashboard for AI customer.

Getting Started

Existing Microsoft Security customers can start using the Security Dashboard for AI today. It is included when a customer has the Microsoft Security products (Defender, Entra, and Purview), with no additional licensing required. To begin using the Security Dashboard for AI, visit http://ai.security.microsoft.com or access the dashboard from the Defender, Entra, or Purview portals. Learn more about the Security Dashboard for AI at Microsoft Security MS Learn.

¹ AuditBoard & Ascend2 Research. The Connected Risk Report: Uniting Teams and Insights to Drive Organizational Resilience. AuditBoard, October 2024.
² Microsoft. 2026 Data Security Index: Unifying Data Protection and AI Innovation. Microsoft Security, 2026.

The Agent Era Has Already Arrived in Healthcare. Are You Ready to Govern It?
Start here. Answer honestly.

Right now, how many AI agents are running inside your organization? Who built them? Which patient data, claims information, or proprietary research are they configured to access?

If your CISO walked into your office tomorrow and asked for a complete inventory of every agent in your enterprise, including each one's owner, the systems it is permitted to access, and the policies that govern how it operates, could you produce that inventory before lunch?

When the analyst who built that clinical summarization agent moves to a new role next quarter, what happens to the agent? Does its access continue? Does anyone notice?

If a regulator opened an audit tomorrow, could you prove that every AI agent operating in your environment is subject to the same lifecycle controls, identity standards, and data protection policies you apply to your human workforce? Could you disable a compromised agent enterprise-wide with a single click, the same way you would revoke a lost access credential?

If those questions made you hesitate, you are not alone. Almost no healthcare or life sciences organization can answer them confidently today. And that gap is exactly where the next decade of risk, and the next decade of competitive advantage, will be decided.

The quiet crisis nobody talks about yet

Healthcare and life sciences leaders are caught in a paradox. You need AI to survive the operational pressures squeezing your organization from every direction. Physician burnout is at crisis levels, with 45.2% of US physicians reporting symptoms in recent Mayo Clinic research. Revenue cycle complexity continues to climb, and McKinsey now estimates that the cost to collect consumes 30 to 60 percent of net patient revenue at many provider organizations. Prior authorization backlogs delay care. Clinical trial timelines stretch into years. Documentation burden eats hours that belong to patients.

So you started piloting Microsoft 365 Copilot.
You experimented with agents in Copilot Studio. Maybe a clinical team built an agent to draft discharge summaries. A revenue cycle group spun up an agent to triage denials. A medical affairs team built one to comb through literature. Each one delivered value. Each one was approved on its own merits.

And then a quiet thing happened. You lost track of how many agents you have.

According to KPMG's AI Quarterly Pulse Survey, 88 percent of organizations are now exploring or piloting AI agents. IDC projects that 1.3 billion agents will be in operation by 2028. Inside your own walls, the number is climbing fast. Each new agent is a digital identity that authenticates into your environment, accesses your data, and executes work on behalf of your business. Most have no formal owner. Most have no documented access scope. Most have no decommissioning plan. Most have never been reviewed by Compliance.

Microsoft's 2024 Data Security Index found that 84 percent of organizations lack confidence in their AI data security posture, and 40 percent have already experienced an AI-related data security incident. That is not a future problem. That is a now problem.

If shadow IT was the defining governance challenge of the last decade, agent sprawl is the defining challenge of this one. And in healthcare and life sciences, where ePHI, member PII, and proprietary clinical trial data are at stake, the consequences are not theoretical. They are existential.

The reframe that changes everything

Here is the counterintuitive truth that separates HLS organizations that scale AI from those stuck in pilot purgatory. Governance is not the brake on AI adoption. Governance is the accelerator.

When security, identity, and agent oversight are engineered in from day one, your teams stop tiptoeing. They build with confidence because the guardrails are real. They expand into clinical use cases because Compliance trusts the foundation.
They scale wall-to-wall because IT can prove every agent is accounted for. The organizations that lead with trust end up moving faster in the long run, not slower. This is the bet behind Microsoft Agent 365 and Microsoft 365 E7.

What Agent 365 and Microsoft 365 E7 actually are

Microsoft 365 E7, announced March 6, 2026 and now generally available, is the Frontier Suite. It is Microsoft's answer to a single question that every healthcare CIO, CISO, and COO is wrestling with: how do you run AI safely, at scale, across an entire organization? E7 is not another SKU on top of your existing stack. It is one cohesive platform that brings together four essential capabilities:

- Microsoft 365 E5 for your enterprise productivity, collaboration, and security foundation, including Microsoft Defender, Microsoft Purview, and Microsoft Intune.
- Microsoft 365 Copilot for AI grounded in your organizational data through Work IQ, embedded in the flow of work for clinicians, researchers, operations teams, and administrators.
- Microsoft Entra Suite for identity governance, Conditional Access, and Zero Trust network access, extended consistently across users, applications, and AI agents.
- Microsoft Agent 365 as the centralized control plane to observe, govern, and secure every AI agent, whether built by Microsoft, your internal teams, or external partners.

Agent 365 is also available as a standalone capability. But the magic happens when it works alongside the rest of E7, because that is where AI, identity, security, and governance stop being separate disciplines and become one operating system for the agentic era.

The mental model that unlocks everything: agents are first-class digital identities

Here is the simplest way to understand what Agent 365 does. Microsoft 365 governs your enterprise identities. Agent 365 governs your agent identities. The same control plane disciplines apply to both.
Think about the rigor you apply to any privileged identity in your environment, whether a service account, an API integration, or a third-party application connector. You issue it a unique identity in Microsoft Entra. You assign a human owner who is accountable. You scope its access to least privilege. You apply DLP, sensitivity labels, and Conditional Access. You monitor for anomalous behavior. You have a documented decommissioning path. Identities that no one watches over become identities that get exploited.

Now ask yourself how the last AI agent in your environment was created. The honest answer at most organizations: someone opened Copilot Studio, pointed it at a SharePoint library of clinical protocols, gave it a name, and moved on. No documented owner. No access review. No retirement plan. Compliance was never consulted. You would never stand up a privileged service account that way. Yet that is exactly how most organizations are standing up the fastest-growing class of digital identities in their environment.

Agent 365 closes that gap by extending the identity, security, and lifecycle controls you already trust for users and applications so they apply with the same rigor to AI agents. Every agent receives a unique Entra Agent ID, a first-class identity in Azure AD with the same governance primitives as any other privileged identity. Every agent has a designated human owner who is accountable for its scope and behavior. Access is granted explicitly through Conditional Access and policy templates, so each agent operates only against the resources its purpose requires. Microsoft Purview DLP and sensitivity labels govern which data the agent is permitted to read, generate, or share. Microsoft Defender monitors agent activity for anomalies and surfaces alerts the same way it does for any other identity-driven risk.
Lifecycle rules flag or auto-retire agents that are dormant, orphaned, or risky, eliminating the unowned automations that quietly accumulate in every enterprise.

This is not metaphor. It is the actual architecture. The fastest path to governing agents is to extend the identity infrastructure you already trust.

The three pillars of Agent 365: Observe, Govern, Secure

Pillar 1: Observe. Know what is actually happening.

You cannot govern what you cannot see. The first job of Agent 365 is to give you complete, continuous visibility into every AI agent operating in your environment.

The Agent Registry is the single authoritative inventory of every agent, whether built by Microsoft, custom developed by your team, deployed by a partner, or discovered as a shadow agent operating without oversight. Each entry shows the owner, purpose, capabilities, lifecycle status, and business context. Agent Analytics tracks adoption, quality, performance, and business impact. Agent Map visualizes how agents connect with other agents, people, tools, and data sources, surfacing dependencies and risk concentrations you would never spot in a spreadsheet. Real-time monitoring flows directly into Microsoft Defender, so unusual agent behavior generates alerts the same way unusual user behavior does today.

For a health system CISO, that means finally being able to answer the question: which agents are touching ePHI, and is every one of them authorized? For a life sciences compliance officer, it means audit-ready visibility into every AI system operating across R&D, regulatory affairs, and commercial. For a payer operations leader, it means knowing which claims processing agents are actually delivering accuracy and throughput, and which are quietly underperforming.

Pillar 2: Govern. Set the rules. Control the lifecycle.

Visibility is the start. Control is what turns visibility into outcomes.
Agent 365 ensures that every agent is approved, compliant, and accountable from creation through retirement. IT-led onboarding workflows make sure each agent launches with the right identity, access, and ownership before it ever touches data. Policy templates enforce data handling, permission, and usage rules consistently from day one through Defender, Entra, and Purview. Rules-based agent management gives admins an automated "if this, then that" interface: if an agent is unused for 90 days, auto-retire it; if an agent is flagged as risky, block it and alert the security operations team. No human in the loop is required for the routine cases, with full alerting and override for the exceptions. Ownership enforcement requires every agent to have a designated human owner. When that owner leaves the organization, the platform flags the orphaned agent for bulk reassignment, so nothing operates without clear accountability. The Tools Gateway brokers and audits tool access for agents, enabling least privilege at the action level, not just the identity level.

For HLS specifically, that translates to outcomes you can take to your board. A hospital CIO can ensure any agent touching Epic or Cerner goes through standardized approval. A pharma IT director can enforce that clinical trial matching agents only touch de-identified data unless elevated permissions are explicitly granted and documented. A payer compliance team can automatically retire agents tied to a completed open enrollment campaign instead of letting them silently expand the attack surface.

Pillar 3: Secure. Protect agents and data with the stack you already trust.

The final pillar is what makes Agent 365 production-grade for healthcare and life sciences. Security and compliance are not bolted on. They are the same proven Microsoft security stack you already run for your users, extended natively to agents.
Microsoft Purview, your data security and compliance backbone:

- Data Security Posture Management for AI gives visibility into how agents interact with sensitive data and detects risky usage patterns.
- Data Loss Prevention stops agents from accessing or processing files labeled Highly Confidential, even when a user prompts them to.
- Sensitivity labels are inherited automatically by agent outputs, governing how data is viewed, extracted, or shared downstream.
- Insider Risk Management detects risky behavior by users interacting with agents, such as unusual prompt patterns or excessive access to sensitive data.
- Communication Compliance monitors AI-driven interactions for regulatory or ethical violations and unauthorized disclosures.
- eDiscovery and Audit log every agent interaction, giving legal, compliance, and IT teams the transparency required for HIPAA, GDPR, and FDA 21 CFR Part 11.
- Oversharing Assessments run weekly checks for sensitive data exposure across SharePoint sites and agent access patterns.

Microsoft Entra, your identity control plane:

- Entra Agent ID gives every agent a unique identity in Azure AD, so Conditional Access, role-based access, and risk-based policies apply individually.
- Conditional Access for agents enforces policies like "only allow this prior authorization agent to access claims data from approved devices and locations during business hours."
- Identity Governance provides access packages for agents with reduced-scope permissions and least-privilege defaults.
- Block at Scale lets you instantly disable all high-risk agents from Entra in a single action.

Microsoft Defender, your threat protection layer:

- Security Posture Management identifies and remediates agent misconfigurations, such as agents running with no authentication.
- Threat Detection and Blocking monitors suspicious agent activity, generates alerts, and blocks unauthorized tool invocations.
Threat Investigation and Hunting collects unified agent observability logs so SOC teams can forensically trace every action an agent took. A One-Click Kill Switch instantly disables any agent and surfaces the complete audit trail of every action it took before being stopped.

For a hospital security operations team, that means the same DLP policies protecting patient records in email and Teams now protect agents that summarize clinical notes. For a life sciences data protection officer, it means agents accessing proprietary compound data respect the same sensitivity labels as human researchers. For a payer CISO, it means an anomalous claims agent can be killed in seconds, with a complete forensic record of every member record it touched.

Why this only works as an integrated platform

Individual capabilities are useful. Integration is what makes them transformative. Here is the contrast HLS leaders feel today versus what changes the moment E7 lights up.

Without an integrated platform, you operate with: fragmented tools for identity, security, compliance, and AI, each with its own console and its own gaps; no centralized agent inventory, forcing your IT and security teams to track bots and automations in spreadsheets; inconsistent policy enforcement across agents, creating compliance gaps every audit team will eventually find; blind spots where agents access data, invoke tools, or interact with other agents without any oversight; and manual triage when an incident hits, because nothing connects user identity, agent identity, and data classification in one view.

With Microsoft 365 E7, you gain a Unified Agent Registry providing a single source of truth for every agent, whether Microsoft-built, custom-developed, partner-deployed, or shadow-discovered, and Entra Agent ID giving each agent a unique identity, so Conditional Access, role-based access, and risk-based policies apply at the individual agent level.
Full lifecycle governance brings standardized onboarding, periodic review, ownership transfers, auto-retirement of dormant agents, and structured offboarding. Policy by design means Purview DLP, sensitivity labels, and compliance rules extend to all agent interactions through pre-built templates applied consistently from day one. One-click disable instantly freezes any agent, with Defender threat detection extended to agents and full audit trails for forensic investigation. Expanded threat coverage addresses agent sprawl, overprivileged access, tool misuse, misconfiguration, and inter-agent risk patterns no legacy tool was designed to see. A shared registry and controls let IT, Security, and Compliance reference the same authoritative inventory across Defender, Entra, and Purview, eliminating the silos that slow incident response.

This is the reason E7 exists as a platform, not a bundle. AI, identity, security, and governance stop being separate disciplines and start operating as one system.

What this is actually worth: the Forrester numbers

Microsoft commissioned Forrester to conduct a Total Economic Impact study of Microsoft 365 Copilot, published in March 2025. The composite organization in that study, modeled on real customer interviews, achieved: a 132 percent three-year ROI with payback in under one year; 9 hours saved per Copilot user per month through automation of routine work like drafting, summarizing, and analysis; up to 2.6 percent top-line revenue lift through better-qualified opportunities, improved win rates, and stronger retention in customer-facing teams; and a 25 percent acceleration in new-employee onboarding as new hires ramp faster on summarized institutional knowledge.

Those are the verified numbers. The bigger story for HLS is what they look like when applied to clinical, claims, and research workflows, where every reclaimed hour is an hour that goes back to patients, members, or science.
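The per-user figure compounds quickly at enterprise scale. A quick back-of-the-envelope check of the 9-hours-per-user number, using a 10,000-employee headcount and an assumed 2,000-hour work year for the FTE conversion (both illustrative assumptions, not Forrester figures):

```python
# Forrester TEI figure: 9 hours saved per Copilot user per month.
users = 10_000            # assumed health-system headcount for illustration
hours_per_user_month = 9
annual_hours = users * hours_per_user_month * 12
print(annual_hours)       # 1080000 -> "more than one million reclaimed hours a year"

# Rough FTE equivalent, assuming ~2,000 worked hours per clinician per year.
fte_equivalent = annual_hours / 2_000
print(fte_equivalent)     # 540.0 -> "hundreds of full-time clinicians"
```

The arithmetic matches the provider example later in this piece: at 10,000 users, nine hours per user per month crosses the one-million-hour mark annually.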
AI is already defending AI

The same agentic capabilities transforming clinical and operational workflows are now embedded in your security stack. Microsoft Security Copilot agents work alongside human analysts inside Defender, Entra, Purview, and Intune, accelerating threat response and absorbing the manual load that today drowns most security operations teams. Independent benchmarks back the impact: in a randomized study of 162 admins published in 2025, the Conditional Access Optimization Agent in Microsoft Entra completed configuration tasks 43 percent faster and produced 48 percent more accurate Conditional Access policies than admins working without it. Security triage, alert investigation, and identity hygiene are following the same trajectory. For HLS security teams already stretched thin, that is hours reclaimed every week to focus on the threats that actually matter, with the same Agent 365 governance applying to the security agents themselves. The defenders are governed by the same rules as the workforce they defend.

How HLS organizations are putting Agent 365 to work

Here is how the value shows up across the three biggest HLS segments.

For providers: reclaiming time for care

The challenge: clinicians spend more time on documentation than on patients. Care coordination is fragmented. Burnout is gutting retention.

The strategy: deploy agents that absorb administrative load while Agent 365 ensures every one of them respects ePHI boundaries. Clinical documentation agents integrated with Microsoft Dragon Copilot structure dictation against EHR requirements, apply billing codes, and flag missing elements before submission. Care coordination agents generate care plans, allocate tasks, and surface relevant patient context during multidisciplinary rounds, optimized for HL7 FHIR interoperability. Patient intake and scheduling agents built in Copilot Studio handle appointment booking, reminders, eligibility verification, and referral management.
Handoff and shift summary agents pull from multiple systems to generate complete handoff summaries for nurses and physicians transitioning between shifts, reducing the communication gaps that drive adverse events.

The aha moment: applied across a 10,000-employee health system, nine hours per user per month is more than one million reclaimed hours a year. That is the equivalent of hundreds of full-time clinicians returned to direct patient care, with every agent governed under the same Conditional Access and DLP policies your IT team already manages today.

For payers: transforming revenue cycle and member experience

The challenge: prior auth backlogs delay care. Denial rates climb. Member services teams drown in volume.

The strategy: agentic AI rewires the most expensive, most manual workflows in your operation while Agent 365 keeps every agent inside the lines on member PII. Prior authorization agents autonomously gather clinical documentation, cross-reference medical policy, determine approval criteria, and route decisions, accelerating turnaround from days to hours. Claims processing agents automate billing and denial management; with cost to collect running 30 to 60 percent of net patient revenue at many organizations, even modest automation produces material margin recovery. Denial resolution and appeals agents analyze denial patterns, surface root causes, generate appeal documentation, and track success rates over time, turning a cost center into a continuous improvement engine. Member services agents integrated with Microsoft 365 Copilot Chat handle benefits inquiries, claims status, and self-service triage, deflecting call volume and improving first-contact resolution. Fraud detection and risk adjustment agents scan claims data for anomalies and optimize coding accuracy for Medicare Advantage and ACA populations.
The aha moment: a payer CISO can disable an anomalous prior auth agent in one click and produce a complete forensic record of every member record it accessed, while Compliance simultaneously confirms the agent never violated DLP. That is regulatory readiness that legacy automation cannot deliver.

For life sciences and pharma: accelerating discovery and commercialization

The challenge: clinical trials take years. Regulatory submissions consume teams. Medical affairs cannot keep up with literature volume.

The strategy: orchestrate agents across R&D, regulatory, medical, and commercial, with Agent 365 enforcing the data classification rules that proprietary IP and clinical data demand. Clinical trial matching agents scan patient profiles and eligibility criteria to surface trial opportunities, accelerating recruitment. Regulatory document preparation agents assemble submissions, cross-reference data across modules, and ensure consistency in FDA, EMA, and global filings. Medical research and literature review agents powered by Microsoft GraphRAG retrieve research-backed insights with verified source references, giving medical science liaisons trustworthy synthesis on demand. Pharmacovigilance agents monitor safety databases, flag potential adverse events, and generate timely case reports. Commercial insights and launch planning agents synthesize market data, payer policy, and HCP sentiment for sharper launch and field strategy.

The aha moment: cutting even three months off a regulatory cycle on a single high-revenue product can mean tens of millions in additional sales, while Purview sensitivity labels guarantee every agent accessing proprietary compound data respects the same data classification as your senior researchers.

A phased path that actually works in regulated industries

In regulated industries, a big-bang AI rollout is a recipe for incidents.
The HLS organizations getting this right are following a five-phase pattern that builds expertise and validates governance before scale.

Establish. Form a cross-functional champion team across IT, Compliance, Clinical Operations, and Research. Define what risks you are mitigating and what outcomes you are unlocking. Inventory the agents already in flight.

Configure. Stand up identity, DLP, and policy templates in Microsoft 365 Admin Center, Power Platform Admin Center, and Microsoft Purview. Enforce that any agent handling PHI runs in a secure environment with audit logging on by default.

Pilot. Choose a small group of makers in a controlled environment. Start with non-critical workflows like internal reporting or scheduling before moving to clinical or member-facing use cases. Run weekly reviews with Compliance and Security.

Empower. Launch role-specific training for clinicians, researchers, makers, and IT. Stand up a Center of Excellence to provide templates, best practices, and reusable patterns. Promote success stories internally to build momentum.

Scale. Expand agent development across departments with governance as a guardrail, not a gate. Use pay-as-you-go metering to track usage and optimize licensing. Refine policies continuously based on Purview signals and audit results.

The strategic insight: organizations that lead with governance reach scale faster than those that lead with experimentation. Trust is the unlock, not the obstacle.

Governance is a team sport

Here is the pattern we see again and again. The HLS organizations that succeed with AI at scale are not the ones with the smartest IT shop or the boldest Compliance officer. They are the ones whose IT, Security, Compliance, Clinical, Research, and Operations leaders sit at the same table on agent strategy from week one. Agent 365 was designed for that table. The Agent Registry is the shared truth. Purview policies satisfy your Compliance officer. Entra controls reassure your CISO.
The lifecycle workflows give your CIO confidence. The clinical and research outcomes give your COO and Chief Medical Officer the business case. Everyone gets the view they need from the same single source.

Stand up an agent governance council. Meet every two weeks. Use the Agent Registry as your standing agenda. Make decisions in plain sight. The organizations that do this consistently outperform on both speed and safety. The ones that try to keep AI inside a single function fall behind on both.

Who contributes what

Think back to the mental model. You would never let a single function authorize, configure, and oversee a new privileged system on its own, not when it touches ePHI, claims, or proprietary research. Security, IT, Compliance, Clinical, and the relevant business owner all weigh in because the stakes are too high for any one seat to carry alone. Agent governance demands the same multidisciplinary scrutiny, and the council is where that happens. Each seat brings something the others cannot.

CIO. Owns the agent strategy and the platform investment. Translates board-level AI ambition into an operating model the rest of the organization can execute against.

CISO and Security Operations. Define agent identity standards, Conditional Access policies, and incident response playbooks. Without this seat, an anomalous agent touching ePHI becomes a breach instead of a contained event.

Chief Compliance Officer and Privacy. Translate HIPAA, GDPR, FDA 21 CFR Part 11, and state regulations into Purview policies and audit requirements. This is the seat that keeps you out of an OCR investigation or a 483 letter.

Chief Medical Officer and Clinical Operations. Validate that clinical agents are safe, accurate, and aligned with care standards. Own the clinical risk review for any agent that touches patient care, the same way you would for a new clinical protocol.

Chief Research Officer or Head of R&D.
Govern how agents interact with proprietary trial data, compound libraries, and scientific IP. The seat that protects the next decade of pipeline value.

COO and Revenue Cycle Leadership. Prioritize the operational workflows where agents will move the needle on cost to collect, denial rates, and throughput, and own the business outcomes that justify the investment.

Center of Excellence Lead. Maintains templates, reusable patterns, and maker enablement. Turns every council decision into a guardrail builders can actually use the next morning.

Frontline champions. Clinicians, claims specialists, and researchers who pilot, give feedback, and carry credibility back to their peers. The seat that decides whether agents get adopted or quietly ignored.

When every one of these voices is in the room, your governance council operates like a tumor board for AI. Different lenses, one shared decision, full accountability. That is how regulated industries make complex calls safely, and it is exactly the muscle Agent 365 was built to support.

Seven questions to bring to your next leadership meeting

If you want to know whether your organization is ready, run through these together. The places you hesitate are exactly where Agent 365 and E7 deliver the most value.

Visibility. Do you know which AI agents, bots, and automations are running in your environment today, who built them, what they have access to, and whether they are still needed?

Control. If someone on your team builds a new AI agent tomorrow, what is the actual process to make sure it is approved and secured? Or could they deploy it with wide-open access?

Security. What prevents an AI agent from reading or transmitting patient data it should not? Do you have a way to detect and stop a rogue or compromised agent?

Accountability. Who owns the outputs of an AI agent's actions? What is the offboarding process when the agent or its creator leaves?

Scale.
Six months from now, you may have a hundred agents deployed across departments. Are your oversight and compliance structures ready for that volume?

Cross-functional alignment. How are your IT, Security, and Compliance teams partnering on AI today? Governance is a team sport.

Data readiness. How confident are you that your data estate is clean, labeled, and governed well enough for AI to surface accurate answers and not outdated or conflicting information?

If you hesitated on even one of those, you have just identified where Agent 365 and Microsoft 365 E7 will pay for themselves the fastest.

The path forward

Here is the honest truth. The healthcare and life sciences organizations that lead in the next decade will not be the ones that adopted AI first. They will be the ones that adopted AI safely, compliantly, and at scale, with intelligence and trust woven into every layer. Microsoft Agent 365 and Microsoft 365 E7 give you the only integrated platform that brings AI, identity, security, and governance into one cohesive system, running in the flow of work you already use. This is not about adding another tool to your stack. It is about extending the investments you have already made in Microsoft 365, Entra, Defender, and Purview to cover the fastest-growing class of digital identities in your environment. The agent era has already arrived. The question is whether you will govern it with confidence or chase it with anxiety. We would love to help you lead.

Take the next step

Explore Microsoft Agent 365: The Control Plane for Agents
Microsoft Entra Agent ID: aka.ms/EntraAgentID
Learn more about Microsoft 365 E7, the Frontier Suite: Introducing Microsoft 365 E7
See Microsoft 365 Copilot in action: Microsoft 365 Copilot
Read the Forrester TEI study: The Total Economic Impact of Microsoft 365 Copilot

Collecting Microsoft 365 Copilot Data with Microsoft Purview eDiscovery
Copilot Data Collection Reference Table

Copilot Prompts (user questions sent to M365 Copilot)
Storage location: Exchange Online, in a hidden folder in the user's mailbox. Compliance copies are stored similarly to Teams chats, but with unique item classes.
Item class: IPM.SkypeTeams.Message.Copilot.<AppName> (e.g., .Word, .Excel, .Outlook, .BizChat). Additional AI-related classes may also apply: IPM.SkypeTeams.Message.ConnectedAIApp*, IPM.SkypeTeams.Message.CloudAIApp*, IPM.SkypeTeams.Message.TeamCopilot*, IPM.SkypeTeams.TeamCopilot*
Collection strategy:
1. Add the user's Exchange mailbox as a data source to the search.
2. In the condition builder, optionally filter the search to only return Copilot prompts by adding a condition of "Item class contains any of Copilot activity". This automatically applies all relevant M365 Copilot item classes as a condition of the search.
3. Add any further conditions, such as date range or keywords, to narrow results as required.
You can also use the Item Class condition to exclude M365 Copilot interactions from your collections when targeting a user's mailbox.
Notes: Additional item classes may be added; the item class condition will be updated accordingly.

Copilot Responses (AI-generated answers)
Storage location: Exchange Online, in the same hidden folder in the user's mailbox as prompts.
Item class: The same IPM.SkypeTeams.Message.Copilot.<AppName> pattern as prompts.
Collection strategy: The same collection strategy used for prompts.

Copilot Memories (personalized saved information Copilot "remembers")
Storage location: Exchange Online, in a hidden CopilotMemory subfolder within the user's mailbox contacts. Stored as contact entries, separate from prompts and responses.
Item class: IPM.Contact. Each memory item appears as a contact card within Exchange, which is distinct from the message-based item classes used for prompts and responses.
Collection strategy:
1. Add the user's Exchange mailbox as a data source to the search.
2. In the condition builder, optionally filter the search to only return contacts by adding a condition of "Item class contains any of Contacts".
Notes: Copilot memories will not be preserved under a legal hold or retention policy. This search will return both Copilot memories stored in contacts and traditional contacts from the user's Exchange mailbox.

Copilot Pages (AI-generated, user-editable documents)
Storage location: SharePoint Online, in a user-owned SharePoint Embedded container (shared with Loop workspace content and Copilot Notebooks). File format is .page. Not stored in the user's mailbox.
Item class: N/A. These are SharePoint files (not Exchange items), so no item class applies. Identify them in search results by the .page file extension.
Collection strategy:
1. Add the custodian's SharePoint Embedded site URL as a data source to the search. Alternatively, tenant-wide searches of all SPO sites will include all SharePoint Embedded containers.
2. Optionally use the condition builder with conditions such as date range, keywords, or file type to further filter results.

Facilitator agent interactions in a Teams meeting chat
Storage location: Exchange Online, in a hidden folder in all meeting attendees' mailboxes. Compliance copies are stored as Teams chats.
Item class: IPM.SkypeTeams.Message
Collection strategy:
1. Add the user's Exchange mailbox as a data source to the search.
2. In the condition builder, optionally filter the search to only return these interactions by adding a condition of "Item class contains any of Instant messages".
3. Add any further conditions, such as date range or keywords, to narrow results as required.

Facilitator agent meeting notes (Loop)
Storage location: SharePoint Online. Facilitator meeting notes are stored as a .loop file in a OneDrive folder titled Meetings, belonging to the user who initiated Facilitator in Teams.
Item class: N/A. These are SharePoint files (not Exchange items), so no item class applies. Identify them in search results by the .loop file extension.
Collection strategy:
1. Add the user's OneDrive URL as a data source to the search.
2. In the condition builder, optionally filter the search to only return Loop files by adding a condition of "File type equals any of loop".
3. Add any further conditions, such as date range or keywords, to narrow results as required.
Notes: With eDiscovery premium-enabled cases, you can follow the standard workflow for collecting Teams meeting messages and select to include cloud attachments in your collection. This will automatically pull any Facilitator agent meeting notes into the export or review set.

Facilitator-created Word/Loop documents
Storage location: SharePoint Online. When the Facilitator agent is asked to create a Word or Loop document during a meeting, it is stored in the requester's OneDrive in a folder called
Item class: N/A. These are SharePoint files (not Exchange items), so no item class applies. Identify them in search results by the .loop file extension.
Collection strategy:
1. Add the user's OneDrive URL as a data source to the search.
2. In the condition builder, optionally filter the search to only return Loop and Word files by adding a condition of "File type equals any of loop, docx".
3. Add any further conditions, such as date range or keywords, to narrow results as required.
Notes: With eDiscovery premium-enabled cases, you can follow the standard workflow for collecting Teams meeting messages and select to include cloud attachments in your collection. This will automatically pull any Facilitator-generated Loop or Word documents into the export or review set.

Facilitator-generated and assigned tasks
Storage location: Exchange Online. When the Facilitator agent creates and assigns a task to an individual, it is created as a To Do item in the assigned individual's Exchange mailbox.
Item class: IPM.Task
Collection strategy:
1. Add the user's Exchange mailbox as a data source to the search.
2. In the condition builder, optionally filter the search to only return tasks by adding a condition of "Item class contains any of Tasks".
3. Add any further conditions, such as date range or keywords, to narrow results as required.

Application-Specific Item Classes for Prompts & Responses

For more granular filtering by Copilot application, the following item class values can be used in KQL queries:

Microsoft Copilot Chat (BizChat / Teams): IPM.SkypeTeams.Message.Copilot.BizChat
Copilot in Excel: IPM.SkypeTeams.Message.Copilot.Excel
Copilot in Loop: IPM.SkypeTeams.Message.Copilot.Loop
Copilot in Outlook: IPM.SkypeTeams.Message.Copilot.Outlook
Copilot in PowerPoint: IPM.SkypeTeams.Message.Copilot.PowerPoint
Copilot in Teams: IPM.SkypeTeams.Message.Copilot.Teams
Copilot in Whiteboard: IPM.SkypeTeams.Message.Copilot.Whiteboard
Copilot in Word: IPM.SkypeTeams.Message.Copilot.Word

To target all Copilot applications at once, use the wildcard query ItemClass:IPM.SkypeTeams.Message.Copilot.*. For a wider list of AI data sources, see: https://learn.microsoft.com/en-us/purview/edisc-search-copilot-data#data-sources-for-ai-data

Important Notes for eDiscovery Practitioners

Excluding Copilot Data from Broader Searches
Because Copilot prompts and responses reside in the same Exchange mailbox as emails and Teams chats, they will appear in broad mailbox searches unless explicitly filtered out. To exclude Copilot items, use the condition "Item Class Contains none of Copilot activity" in the condition builder, or add (-ItemClass:IPM.SkypeTeams.Message.Copilot.*) in KQL. Some eDiscovery managers run separate searches, one for Copilot data and one for other communications, to keep collections distinct.

Copilot Memories: Retention & Hold Limitations
Purview retention policies and eDiscovery holds do not currently apply to Copilot memory items. Memory items remain until a user deletes them or an admin explicitly removes them via eDiscovery or the Graph API. Additionally, deleting a Copilot prompt and response does not delete any memory derived from that conversation.
Memories must be removed separately if required.

Copilot Pages: Do Not Treat Like Prompts/Responses
Copilot Pages are not stored in Exchange mailboxes, so searching only a custodian's mailbox will not return Copilot Pages. Treat Copilot Pages the same way you treat SharePoint content in your existing eDiscovery workflow. For collections, keyword searches will generate hits on text content within the .page file if either the SharePoint Embedded URL is included in the search or the search is a tenant-wide search of all SharePoint sites. Be aware that full-text search within .page files in Purview eDiscovery review sets is not currently available; instead, use filters such as Subject/Title or Native File Type to locate Copilot Pages in your review set and review the content. When an eDiscovery hold is placed on a custodian's mailbox, it does not automatically extend to the SharePoint Embedded site where the Copilot Pages are stored. Ensure the hold policy includes the URL for the user-owned SharePoint Embedded site that contains the Copilot Page(s) that must be preserved.

Audit Logs vs. eDiscovery for Copilot Content
Audit logs record that a Copilot interaction occurred (time, user, workload context) but do not include the actual prompt or response text. To retrieve the substance of Copilot interactions, use Purview eDiscovery searches against the mailbox.

Copilot Prompts and Responses: HTML Transcription
Copilot prompts and responses are stored as individual messages within the user's mailbox. When collecting Copilot interactions, enabling the "Organize conversations into HTML transcripts" premium option will convert these individual messages into HTML transcripts, making for easier review and linkage between the user's original prompt and the Copilot responses.
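To make the item-class queries discussed above concrete, a small helper can assemble the documented include and exclude KQL clauses from the per-application item class values. The helper function itself is an illustrative assumption, not part of Purview; only the item class strings and query shapes come from the reference material above.

```python
# Item class values documented for M365 Copilot prompts and responses
# (subset shown; the full per-application list is in the reference table).
COPILOT_ITEM_CLASSES = {
    "BizChat": "IPM.SkypeTeams.Message.Copilot.BizChat",
    "Excel": "IPM.SkypeTeams.Message.Copilot.Excel",
    "Word": "IPM.SkypeTeams.Message.Copilot.Word",
}

WILDCARD = "IPM.SkypeTeams.Message.Copilot.*"

def copilot_kql(apps=None, exclude=False):
    """Build a KQL ItemClass clause for the given Copilot apps.

    With no apps, fall back to the documented wildcard; with exclude=True,
    emit the negated form used to keep Copilot items out of broad searches.
    """
    if apps:
        body = " OR ".join(f"ItemClass:{COPILOT_ITEM_CLASSES[a]}" for a in apps)
        clause = f"({body})"
    else:
        clause = f"ItemClass:{WILDCARD}"
    return f"(-{clause})" if exclude else clause

print(copilot_kql())                   # ItemClass:IPM.SkypeTeams.Message.Copilot.*
print(copilot_kql(exclude=True))       # (-ItemClass:IPM.SkypeTeams.Message.Copilot.*)
print(copilot_kql(["Word", "Excel"]))
```

The `exclude=True` form reproduces the documented `(-ItemClass:IPM.SkypeTeams.Message.Copilot.*)` pattern for keeping Copilot items out of broad mailbox collections.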
Copilot Prompts and Responses: Contextual Prompts and Responses
When using the Keywords condition as part of your collection in eDiscovery, only items that match the keywords in the query are returned, which means you may capture only part of a Copilot interaction. If using keywords in your collection query, you can enable the "Include full conversation for Copilot, Teams and Viva Engage messages" premium option. This will include in the export or review set any prompts or responses from the Copilot interaction within a 12-hour window before and after each responsive item, so you can see the full context of the prompt or response that was responsive to the search.

Collecting Referenced Documents (Cloud Attachments)
Copilot responses may reference or summarize SharePoint/OneDrive files. When collecting Copilot interactions, enabling the "Access links (cloud attachments) in messages" premium option will additionally collect the files referenced in the prompt or response and include them in the export package. This provides full evidentiary context but can significantly increase export size and processing time, so consider whether collecting these artifacts is relevant to the investigation. If so, look to use additional conditions such as date range to manage volumes, or reduce the number of custodians in the collection.

Facilitator Agent in Microsoft Teams Meetings
The Facilitator agent in Microsoft Teams is an AI-powered assistant (included with Microsoft 365 Copilot) that enhances meeting productivity by generating real-time notes, summarizing key decisions, and managing action items. It acts as an active participant, allowing for collaborative editing of notes and answering chat questions during calls. Because the Facilitator works within the context of Microsoft Teams meetings (scheduled private meetings only), your existing workflows for collecting Microsoft Teams meeting chats should be used.
In addition, enabling the "Access links (cloud attachments) in messages" premium setting will automatically collect any meeting notes (Loop) or Loop and Word documents created by the Facilitator agent.

Copilot Retention Reference Table

Each entry maps a data type to the Microsoft Purview retention policy location/scope that covers it:

Copilot prompts and responses: Microsoft Copilot experiences
Copilot Memories (personalized saved information Copilot "remembers"): Not supported
Copilot Pages (AI-generated, user-editable documents): SharePoint classic and communications sites (static scopes only)
Facilitator interactions in a Teams meeting: Teams chats
Facilitator meeting notes (Loop): OneDrive accounts
Facilitator-created Word/Loop documents: OneDrive accounts
Facilitator-generated and assigned tasks: Exchange mailboxes (tasks with end dates only)
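For teams that script their collection planning, the retention mapping above can be captured as a simple lookup. The dictionary below mirrors the documented mapping but is purely illustrative; it is not a Purview API.

```python
# Illustrative lookup built from the retention reference table above.
# None marks data types with no supported retention policy location.
RETENTION_LOCATIONS = {
    "Copilot prompts and responses": "Microsoft Copilot experiences",
    "Copilot Memories": None,  # retention not supported
    "Copilot Pages": "SharePoint classic and communications sites (static scopes only)",
    "Facilitator interactions in a Teams meeting": "Teams chats",
    "Facilitator meeting notes (Loop)": "OneDrive accounts",
    "Facilitator-created Word/Loop documents": "OneDrive accounts",
    "Facilitator-generated and assigned tasks": "Exchange mailboxes (tasks with end dates only)",
}

def retention_location(data_type: str) -> str:
    """Return the documented retention policy location, or 'Not supported'."""
    loc = RETENTION_LOCATIONS.get(data_type)
    return loc if loc else "Not supported"

print(retention_location("Copilot Memories"))               # Not supported
print(retention_location("Facilitator meeting notes (Loop)"))  # OneDrive accounts
```

A lookup like this can, for example, flag custodial data types (such as Copilot Memories) that need manual preservation steps because no retention policy applies.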
Get ready for an electrifying event! The Microsoft Security Community proudly presents Purview Lightning Talks: an action-packed series featuring your fellow Microsoft users, partners, and passionate Microsoft Security community members of all sorts. Each 3-12 minute talk cuts straight to the chase, delivering expert insights, real-world use cases, and even a few game-changing tips and tricks. Don't miss this opportunity to learn, connect, and be inspired! Secure your spot now for the big day: April 30th at 8am Redmond Time. See agenda details below and follow this blog post (sign in and click the "follow" heart in the upper right) to receive notifications.

❗UPDATE❗ This event is expected to last around 2 hours and 15 minutes, due to the incredible number of community sessions that were submitted! 💖 Please see the timing table below, broken out into sections of four talks each, and plan to arrive 10 minutes before the section that interests you, OR stay for the whole time! Speakers will be available in the chat to answer your questions; please ask your questions during their session. Spillover Q&A forum links will also be shared. The full session recording will be indexed and posted to the Microsoft Security Community YouTube channel within 24 hours after the event. Bookmark this page or follow this blog post for updates!

Agenda Legend
↩️ Data Lifecycle Management
🔐 Information Protection
🚫 Data Loss Prevention (DLP)
🦾 Data Security Posture Management (DSPM) for AI
🤖 Purview for AI
👁️ Insider Risk Management (IRM)
🔍 eDiscovery
📊 Governance
🗒️ Compliance Manager
🛡️ Data Security

All times are listed in US Pacific/Redmond Time. Session lengths are rounded to the nearest minute.
AGENDA

Section 1 - approximately 8:00 am - 8:43 am

↩️ The Day Offboarding Exposed Infinite Retention — Nikki Chapple
Length: 10 minutes | Topic: Data Lifecycle Management
A routine Purview request led to an unexpected discovery: more than 9,000 orphaned OneDrives and thousands of inactive mailboxes still storing content long after employees had left. This talk explains how a retain-only policy created hidden retention debt and how Adaptive Scopes can help organisations separate active users from leavers to avoid similar pitfalls.

🔐 The Purview Label Engine: Automated Classification, Translation, and co-Documentation for Enterprise Tenants — Michael Kirst-Neshva
Length: 12 minutes | Topic: Information Protection
Global enterprises face the challenge of implementing uniform data protection standards across borders and languages. In this talk, I'll present a framework that makes Microsoft Purview labels truly scalable. Discover how to roll out parent and child label logics automatically, manage priorities with a single click, and generate instant compliance documentation for every business unit.

🗒️ What's In My Compliance Manager Toolbox: A Cloud Security Architect's Perspective — Jerrad Dahlager
Length: 8 minutes | Topic: Compliance Manager
A practical walkthrough of how I use Compliance Manager across real client engagements to map controls, track improvement actions, and simplify multi-framework compliance. No theory, just what works in the field.

🛡️ Stop, Think, Protect: Data Security in Real Life with Purview — Oliver Sahlmann
Length: 8 minutes | Topic: Data Security
With simple labels and matching DLP policies, Purview offers a practical and accessible way to approach data security. This lightning talk uses a real-life traffic light concept to show how a low barrier to adoption can still drive meaningful protection and awareness.
Section 2 - approximately 8:44 am - 9:15 am

🔐 Using Purview to Prevent Oversharing with AI Services — Viktor Hedberg
Length: 10 minutes | Topic: Information Protection
In this day and age, AI is the big thing. However, Copilot has access to everything you can access, including potentially sensitive data. In this session we will look at how to prevent Copilot from accessing highly sensitive data, using Information Protection.

🦾 How I Helped My Customers Understand Their AI Usage (and Protect Their Sensitive Data) — Bram de Jager
Length: 5 minutes | Topic: Data Security Posture Management (DSPM) for AI
As AI tools explode across the web, many organizations still have no idea what's actually happening in the browser, where employees type prompts, paste sensitive data, or visit public AI sites outside corporate governance. In this lightning talk, I'll share how I helped customers shine a light on this issue. We'll explore how Purview Data Security Posture Management (DSPM) can reveal which AI tools employees use, what types of data they input, and where sensitive information may leak through prompts. I'll walk through a real customer scenario where we detected risky AI usage patterns, such as employees pasting confidential documents into public chatbots.

🔐 Four Labels Max for Daily Use: Which Ones & Why? — Romain Dalle
Length: 8 minutes | Topic: Information Protection
Sensitivity labels are one of the most critical parts of a Purview risk and compliance deployment, if not the most critical, because they directly shape how end users and business units are allowed, or restricted, to share their business data, internally and externally, on a daily basis. Labels have no option but to be precise, meaningful, and balanced in terms of embedded data security. Setting the right taxonomy is core to success, and it is everything but a one-time project.
🚫 Data-Driven Endpoint DLP Solution with Advanced Hunting — Tatu Seppälä
Length: 8 minutes | Topic: Data Loss Prevention (DLP)
This lightning talk shows you how to use KQL queries in advanced hunting to easily build initial sensitive service domain groups for authorized and unauthorized domains, based on your organization's usage patterns. The same approach can be used for numerous other solution refinement and design purposes.

Section 3 - approximately 9:16 am - 9:46 am

🔐 The Purview Hack No One Talks About: Container Sensitivity Labels That Fix Oversharing Fast — Nikki Chapple
Length: 10 minutes | Topic: Information Protection
Most organizations tackle oversharing with manual fixes, but the fastest solution is often overlooked. In this lightning talk, I show how container sensitivity labels automatically apply the right sharing and collaboration controls, ensuring every new Group, Team, or SharePoint site starts secure by default.

🔍 Does M365 Support eDiscovery? — Julian Kusenberg
Length: 11 minutes | Topic: eDiscovery
A myth-busting session that separates perception from reality when it comes to Microsoft 365 eDiscovery capabilities.

📊 Improving Discovery, Trust, and Reuse of Analytics with Purview Data Products — Craig Wyndowe
Length: 5 minutes | Topic: Governance
This talk shows how bringing Power BI and Fabric assets into Microsoft Purview Governance Domains and Data Products creates a single, trusted view of enterprise analytics. By connecting reports, semantic models, and underlying data with shared metadata, ownership, and business context, organizations can make existing assets easy to discover and safe to reuse.

🔐 Why You Should Create Your Own Sensitive Information Types (SITs) — Niels Jakobsen
Length: 5 minutes | Topic: Information Protection
An in-depth analysis of why Microsoft SITs are not one-size-fits-all, and how to create your own using what Microsoft has already built for you.
Section 4 - approximately 9:47 am - 10:30 am

👁️ From Zero to First Signal: Insider Risk Management Prerequisites That Actually Matter — Sathish Veerapandian
Length: 8 minutes | Topic: Insider Risk Management (IRM)
A focused live demo showing the real-world prerequisites required for Microsoft Purview Insider Risk Management to work effectively. This session highlights the critical Entra ID, Intune, Microsoft Defender for Endpoint, and Purview DLP configurations that must be in place before creating IRM policies.

🤖 Securing Data in the Age of AI — Júlio César Gonçalves Vasconcelos
Length: 11 minutes | Topic: Purview for AI
AI will transform business as we know it, but without proper governance, it can introduce serious risks. We'll show you how Microsoft Purview enables organizations to accelerate AI adoption while maintaining security, compliance, and transparency.

🔍 Beyond eDiscovery: Purview DSI for Security Investigation — Susantha Silva
Length: 11 minutes | Topic: eDiscovery
Most people hear "Microsoft Purview" and immediately think compliance, eDiscovery, or legal holds. But this session highlights Data Security Investigations (DSI), showing how DSI lets you take a DLP alert or insider risk signal and turn it into a structured investigation.

🚫 Elevating Purview DLP with a Real-World Use Case — Victor Wingsing
Length: 14 minutes | Topic: Data Loss Prevention (DLP)
Learn how I hardened Microsoft Purview DLP beyond out-of-the-box defaults: closing real-world data loss gaps, tuning policies to actual user behavior, and turning noisy alerts into protection that really blocks exfiltration.

Quick Closing / Resource Sharing

Protect and govern every tenant with Microsoft Entra Tenant Governance
As organizations scale, tenant sprawl becomes inevitable. Legacy test tenants, employee-created environments, and forgotten tenants create blind spots for security and identity teams. Get to know Microsoft Entra Tenant Governance, a new Entra capability that provides centralized visibility and control across multi-tenant environments. We'll cover how Tenant Governance enables tenant discovery, secure governance relationships, configuration monitoring, and governed tenant creation from day one. You'll see how organizations can apply consistent security baselines, detect configuration drift, and reduce operational overhead, all while maintaining autonomy across teams. Walk away with a clear framework for bringing order, visibility, and governance to your multi-tenant identity landscape.

How do I participate? Registration is not required. Add this event to your calendar, then sign in to the Tech Community and select Attend to receive reminders. Post your questions in advance, or at any time during the live broadcast.

Safeguarding Sensitive Data in Microsoft 365 Copilot Interactions: DLP for Microsoft 365 Copilot
Microsoft 365 Copilot is redefining how organizations work, bringing the power of generative AI directly into our secure productivity tools. As Copilot adoption accelerates, we've heard that you want more control over how your sensitive data can be used in interactions with Copilot. At Ignite 2025, Microsoft announced a major enhancement: Microsoft Purview Data Loss Prevention for Microsoft 365 Copilot to safeguard Microsoft 365 Copilot and Copilot Chat prompts, now entering General Availability. Even better, this capability is included for all users of Microsoft 365 Copilot and Copilot Chat.

Why DLP for Copilot Prompts Is a Game-Changer
As organizations adopt Copilot, their ways of sharing, creating, and interacting with data expand. With just a prompt, users can have Copilot summarize documents, analyze spreadsheets, or help brainstorm presentations. However, this raises an important question: what if the prompt includes sensitive information, like project code names, financial account numbers, health records, or other sensitive data?

Over the last two years, Microsoft has been building a set of Data Loss Prevention (DLP) controls specifically designed for Copilot. Below is a quick overview of these related capabilities — ranging from already available to newly in preview — before we dive deep into today's GA announcement:

Prevent Copilot processing of files & emails based on sensitivity labels
In November 2024, Microsoft introduced the ability to create a DLP policy that restricts Microsoft 365 Copilot and Copilot Chat from processing sensitive files and emails, using the sensitivity labels applied to grounding data. This capability gives you control over whether content with the sensitivity labels you specify is restricted from being used in Microsoft 365 Copilot and Copilot Chat to generate summaries and responses.
Prevent web searches for prompts containing Sensitive Information Types (SITs)
The latest feature entering Public Preview is DLP for Microsoft 365 Copilot and Copilot Chat to prevent web searches for prompts containing sensitive data. This real-time control helps organizations mitigate data leakage and oversharing risks by preventing Microsoft 365 Copilot and agents from using sensitive data for external web searches. If a sensitive information type (SIT) is detected in a user prompt, Copilot can still leverage your enterprise data to form a response without sending the sensitive data to external search engines for web grounding. This capability extends to Microsoft 365 Copilot and to agents built in Copilot Studio that are published to Microsoft 365 Copilot.

DLP to Safeguard Copilot Prompts with Sensitive Information Types (SITs)
The rest of this blog focuses on a key addition to this capability set: DLP for Microsoft 365 Copilot and Copilot Chat prompts to prevent processing of prompts containing sensitive information, now entering General Availability. Unlike the web search capability above, which prevents sensitive data from being sent externally during a web query, this capability evaluates the user's text input directly, before processing occurs, to determine whether grounding against both enterprise data and the web can proceed.

This feature uses Sensitive Information Types (SITs) as a condition within a Purview DLP policy to assess whether a user prompt sent to Copilot contains sensitive data, even if the data is unlabeled. With DLP for Copilot prompts, a user's text input is scanned in real time for SITs, whether built-in (like Social Security numbers, credit card numbers, etc.) or custom-defined by your organization (such as confidential terms or project names). If a text prompt contains one of the SITs you specify, Copilot restricts processing, halts any Graph or web grounding, and displays a clear message to the end user that the request cannot be completed.
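Conceptually, the prompt check described above works like a real-time pattern scan that runs before any grounding or response generation. The sketch below is purely illustrative, not Purview's actual classification engine: real SITs combine patterns, keywords, checksums, and confidence levels, and the two patterns shown here (a simplified US SSN format and a hypothetical "Project Falcon" custom term) are assumptions for demonstration only.

```python
import re

# Illustrative stand-ins for Sensitive Information Types (SITs).
# Real Purview SITs are far richer; these toy regexes only sketch the idea.
SIT_PATTERNS = {
    "U.S. Social Security Number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # Hypothetical custom SIT for a confidential project code name:
    "Project code name (custom)": re.compile(r"\bProject\s+Falcon\b", re.IGNORECASE),
}

def evaluate_prompt(prompt: str) -> dict:
    """Scan a prompt before processing; if any SIT matches, block it.

    A block means no AI response is generated and no Graph or web
    grounding occurs; the user sees a policy notification instead.
    """
    detected = [name for name, pattern in SIT_PATTERNS.items() if pattern.search(prompt)]
    if detected:
        return {
            "allowed": False,
            "detected": detected,
            "message": "Your request can't be completed due to your organization's policies.",
        }
    return {"allowed": True, "detected": [], "message": None}

print(evaluate_prompt("Summarize the latest status of Project Falcon"))
print(evaluate_prompt("Draft a thank-you note to the team"))
```

The key design point this mirrors is that the decision happens on the raw text input itself, before processing, so even unlabeled data in a prompt can trigger the policy.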
Figure: A user enters a prompt in Microsoft 365 Copilot Chat containing sensitive information.

How DLP for Copilot Protects Prompts: Real-Time, Intelligent Protection
The new DLP capability integrates seamlessly with Microsoft Purview, leveraging its powerful data classification and detection engine for sensitive information types. Here's how it works:
Input: When a user submits a prompt, Copilot checks the prompt for sensitive information using built-in or organization-defined sensitive information types (SITs).
Immediate Action: If a SIT is detected, Copilot restricts the prompt from being processed. No AI response is generated, and no data is sent for Graph or web grounding.
Output: Users receive a clear notification that their request cannot be completed due to company policies.
This real-time protection ensures that sensitive data is not leaked or overshared, even as users explore new ways to work with AI.

Setting Up DLP for Copilot Prompts: Data Security Admin Experience
The easiest way to get started is through the new Microsoft Purview Data Security Posture Management (DSPM) portal, which provides a guided, one-click setup experience:
1. In Purview, go to Solutions > DSPM (preview).
2. Select the "Prevent data exposure in Microsoft 365 Copilot and Microsoft Copilot interactions" objective.
3. Follow the guided workflow and apply the recommended one-click DLP policy. The policy starts in simulation mode so you can review activity before enforcing it.
Alternatively, you can configure and customize this policy directly from the Policies page in the Purview DLP portal, or enable it from the Microsoft 365 Admin Center.

Figures: View the remediation plan; view policy details and review; then create a custom policy in DLP simulation mode to protect sensitive data referenced in Microsoft 365 Copilot and Microsoft Copilot, adjusting the confidence level and instance count.
Practical Scenarios: Protecting What Matters Most
Protect PII, financial data, and intellectual property: Financial institutions can block prompts containing deal terms, account numbers, or other sensitive data, preventing leaks through AI interactions. Similarly, healthcare organizations can safeguard patient information, and manufacturers can secure intellectual property and trade secrets from exposure, along with many other practical use cases. Once the prompt is detected and blocked, both Microsoft Graph grounding and Bing web grounding are restricted.
Safeguard sensitive non-public information: Imagine an organization involved in a confidential merger. By using DLP for Copilot prompts, administrators can set up a custom SIT that includes the project's code name. If a user asks Copilot about the merger using the project's code name, their request will be blocked, keeping sensitive information secure and protected.

Visibility into DLP for M365 Copilot Prompts
When a user's prompt triggers a DLP policy, notifications and alerts are surfaced directly in the Microsoft Purview and Defender portals for security administrators. These alerts provide detailed information about which policy was activated, the type of sensitive information detected, and the context of the attempted Copilot interaction. Using these alert queues in Purview and Defender XDR, administrators can efficiently track policy activity, investigate potential incidents, and refine DLP rules to better align with organizational needs. The ability to review historical alerts and track ongoing enforcement empowers admins to maintain strong data security and proactively safeguard sensitive information.

Figure: Defender XDR portal investigation of a prompt DLP-based incident.

Takeaways
The introduction of this latest enhancement to DLP for Copilot represents a key advancement in secure Copilot deployment and adoption.
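The triage loop described above, reviewing alert queues and spotting which policies and SITs fire most often, amounts to a simple roll-up over alert records. The sketch below is a hypothetical illustration; the record fields (`policy`, `sit`, `severity`, `time`) are assumptions for demonstration and do not reflect the actual Purview or Defender XDR alert schema.

```python
from collections import Counter
from datetime import datetime

# Hypothetical alert records, loosely modeled on what a DLP alert queue surfaces.
alerts = [
    {"policy": "Block sensitive prompts in Copilot", "sit": "Credit Card Number",
     "severity": "high", "time": datetime(2025, 11, 20, 9, 15)},
    {"policy": "Block sensitive prompts in Copilot", "sit": "Project code name",
     "severity": "medium", "time": datetime(2025, 11, 20, 10, 2)},
    {"policy": "Block sensitive prompts in Copilot", "sit": "Credit Card Number",
     "severity": "high", "time": datetime(2025, 11, 21, 14, 40)},
]

def summarize(alerts: list) -> dict:
    """Roll up alert volume by detected SIT and by severity for a quick triage view."""
    by_sit = Counter(a["sit"] for a in alerts)
    by_severity = Counter(a["severity"] for a in alerts)
    return {"total": len(alerts), "by_sit": dict(by_sit), "by_severity": dict(by_severity)}

print(summarize(alerts))
```

A roll-up like this makes it obvious which SITs generate the most blocks, which is exactly the signal an admin needs when deciding whether to tune a policy's confidence level or instance count.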
By empowering organizations to block sensitive data at the prompt level, Microsoft is helping customers unlock the full potential of Copilot without compromising security or compliance. This innovation reflects Microsoft's commitment to responsible AI, continuous improvement, and customer-driven development. As Copilot evolves, so will the tools to protect your data, ensuring that productivity and security go hand in hand. For more details, stay tuned for updates to the Product Roadmap and Learn documentation.

- Learn about using DLP to protect interactions with Microsoft 365 Copilot and Copilot Chat
- Learn about the default DLP policy for the Microsoft 365 Copilot location | Microsoft Learn
- Permissions to create or edit a DLP policy to safeguard Microsoft 365 Copilot and Copilot Chat
- Learn about the new Microsoft Purview Data Security Posture Management (DSPM) | Microsoft Learn
- Roadmap Item: DLP for Microsoft 365 Copilot to safeguard prompts
- Roadmap Item: DLP to safeguard web search in Microsoft 365 Copilot

Why UK Enterprise Cybersecurity Is Failing in 2026 (And What Leaders Must Change)
Enterprise cybersecurity in large organisations has always been an asymmetric game. But with the rise of AI-enabled cyber attacks, that imbalance has widened dramatically - particularly for UK and EMEA enterprises operating complex cloud, SaaS, and identity-driven environments. Microsoft Threat Intelligence and Microsoft Defender Security Research have publicly reported a clear shift in how attackers operate: AI is now embedded across the entire attack lifecycle. Threat actors use AI to accelerate reconnaissance, generate highly targeted phishing at scale, automate infrastructure, and adapt tactics in real time - dramatically reducing the time required to move from initial access to business impact. In recent months, Microsoft has documented AI-enabled phishing campaigns abusing legitimate authentication mechanisms, including OAuth and device-code flows, to compromise enterprise accounts at scale. These attacks rely on automation, dynamic code generation, and highly personalised lures - not on exploiting traditional vulnerabilities or stealing passwords.

The Reality Gap: Adaptive Attackers vs. Static Enterprise Defences
Meanwhile, many UK enterprises still rely on legacy cybersecurity controls designed for a very different threat model - one rooted in a far more predictable world. This creates a dangerous "Resilience Gap." Here is why your current stack is failing - and the C-Suite strategy required to fix it.

1. The Failure of Traditional Antivirus in the AI Era
Traditional antivirus (AV) relies on static signatures and hashes. It assumes malicious code remains identical across different targets. AI has rendered this assumption obsolete. Modern malware now uses automated mutation to generate unique code variants at execution time, and adapts behaviour based on its environment. Microsoft Threat Intelligence has observed threat actors using AI-assisted tooling to rapidly rewrite payload components, ensuring that every deployment looks subtly different.
In this model, there is no reliable signature to detect. By the time a pattern exists, the attacker has already moved on. Signature-based detection is not just slow - it is structurally misaligned with AI-driven attacks.
The Risk: If your security relies on "recognising" a threat, you are already breached. By the time a signature exists, the attacker has evolved.
The C-Suite Pivot: Shift investment from artifact detection to EDR/XDR (Endpoint and Extended Detection and Response). We must prioritise behavioural analytics and machine-learning models that identify intent rather than file names.

2. Why Perimeter Firewalls Fail in a Cloud-First World
Many UK enterprises still rely on firewalls enforcing static allow/deny rules based on IP addresses and ports. This model worked when applications were predictable and networks clearly segmented. Today, enterprise traffic is encrypted, cloud-hosted, API-driven, and deeply integrated with SaaS and identity services. AI-assisted phishing campaigns abusing OAuth and device-code flows demonstrate this clearly. From a network perspective, everything looks legitimate: HTTPS traffic to trusted identity providers. No suspicious port. No malicious domain. Yet the attacker successfully compromises identity.
The Risk: Traditional firewalls are "blind" to identity-based breaches in cloud environments.
The C-Suite Pivot: Move to Identity-First Security. Treat identity as the new control plane, integrating signals like user risk, device health, and geolocation into every access decision.

3. The Critical Weakness of Single-Factor Authentication
Despite clear NCSC guidance, single-factor passwords remain a common vulnerability in legacy applications and VPNs. AI-driven credential abuse has changed the economics of these attacks. Threat actors now deploy adaptive phishing campaigns that evolve in real time. Microsoft has observed attackers using AI to hyper-target high-value UK identities - specifically CEOs, Finance Directors, and Procurement leads.
The Risk: Static passwords are now the primary weak link in UK supply chain security.
The C-Suite Pivot: Mandate phishing-resistant MFA (passkeys or hardware security keys). Implement Conditional Access policies that evaluate risk dynamically at the moment of access, not just at login.

Legacy Security vs. AI-Era Reality

4. The Inherent Risk of VPN-Centric Security
VPNs were built on a flawed assumption: that anyone "inside" the network is trustworthy. In 2026, this logic is a liability. AI-assisted attackers now use automation to map internal networks and identify escalation paths the moment they gain VPN access. Furthermore, Microsoft has tracked nation-state actors using AI to create synthetic employee identities - complete with fake resumes and deepfake communication. In these scenarios, VPN access isn't "hacked"; it is legally granted to a fraudster.
The Risk: A compromised VPN gives an attacker the "keys to the kingdom."
The C-Suite Pivot: Transition to Zero Trust Architecture (ZTA). Access must be explicit, scoped to the specific application, and continuously re-evaluated using behavioural signals.

5. Data: The High-Velocity Target
Sensitive data sitting unencrypted in legacy databases or backups is a ticking time bomb. In the AI era, data discovery is no longer a slow, manual process for a hacker. Attackers now use AI to instantly analyse your directory structures, classify your files, and prioritise high-value data for theft. Unencrypted data significantly increases your "blast radius," turning a containable incident into a catastrophic board-level crisis.
The Risk: Beyond the technical breach, unencrypted data leads to massive UK GDPR fines and irreparable brand damage.
The C-Suite Pivot: Adopt Data-Centric Security. Encrypt by default, classify data and apply sensitivity labels, and start board-level discussions about post-quantum cryptography (PQC) to future-proof your most sensitive assets.

6. The Failure of Static IDS
Traditional Intrusion Detection Systems (IDS) rely on known indicators of compromise, assuming attackers reuse the same tools and techniques. AI-driven attacks deliberately avoid that assumption. Threat actors are now using Large Language Models (LLMs) to weaponize newly disclosed vulnerabilities within hours. While your team waits for a "known pattern" to be updated in your system, the attacker is already using a custom, AI-generated exploit.
The Risk: Your team is defending against yesterday's news while the attacker is moving at machine speed.
The C-Suite Pivot: Invest in Adaptive Threat Detection. Move toward graph-based XDR platforms that correlate signals across email, endpoint, and cloud to automate investigation and response before the damage spreads.

From Static Security to Continuous Security

Closing Thought: Security Is a Journey, Not a Destination
For UK enterprises, the shift toward adaptive cybersecurity is no longer optional - it is increasingly driven by regulatory expectation, board oversight, and accountability for operational resilience. Recent UK cyber resilience reforms and evolving regulatory frameworks signal a clear direction of travel: cybersecurity is now a board-level responsibility, not a back-office technical concern. Directors and executive leaders are expected to demonstrate effective governance, risk ownership, and preparedness for cyber disruption - particularly as AI reshapes the threat landscape.
AI is not a future cybersecurity problem. It is a current force multiplier for attackers, exposing the limits of legacy enterprise security architectures faster than many organisations are willing to admit. The uncomfortable truth for boards in 2026 is that no enterprise is 100% secure. Intrusions are inevitable. Credentials will be compromised. Controls will be tested. The difference between a resilient enterprise and a vulnerable one is not the absence of incidents, but how risk is managed when they occur.
In mature organisations, this means assuming breach and designing for containment:
- Access controls that limit blast radius
- Least privilege and Conditional Access restricting attackers to the smallest possible scope if an identity is compromised
- Data-centric security using automated classification and encryption, ensuring that even when access is misused, sensitive data cannot be freely exfiltrated

As a Senior Enterprise Cybersecurity Architect, I see this moment as a unique opportunity. AI adoption does not have to repeat the mistakes of earlier technology waves, where innovation moved fast and security followed years later. We now have a rare chance to embed security from day one - designing identity controls, data boundaries, automated monitoring, and governance before AI systems become business-critical. When security is built in upfront, enterprises don't just reduce risk - they gain the confidence to move faster and unlock AI's value safely. Security is no longer a "department". In the age of AI, it is a continuous business function - essential to preserving trust and maintaining operational continuity as attackers move at machine speed.

References:
- Inside an AI-enabled device code phishing campaign | Microsoft Security Blog
- AI as tradecraft: How threat actors operationalize AI | Microsoft Security Blog
- Detecting and analyzing prompt abuse in AI tools | Microsoft Security Blog
- Post-Quantum Cryptography | CSRC
- Microsoft Digital Defense Report 2025 | Microsoft
- https://www.ncsc.gov.uk/news/government-adopt-passkey-technology-digital-services