Securing AI

Crawl, Walk, Run: A Practitioner's Guide to AI Maturity in the SOC
Every security operations center is being told to adopt AI. Vendors promise autonomous threat detection, instant incident response, and the end of alert fatigue. The reality is messier. Most SOC teams are still figuring out where AI fits into their existing workflows, and jumping straight to autonomous agents without building foundational trust is a recipe for expensive failure.

The Crawl, Walk, Run framework offers a more honest path. It's not a new concept: cloud migration teams, DevOps organizations, and Zero Trust programs have used it for years. But it maps remarkably well to how security teams should adopt AI. Each phase builds organizational trust, governance maturity, and technical capability that the next phase depends on. Skip a phase and the risk compounds. This guide is written for SOC leaders and practitioners who want a practical, phased approach to AI adoption, not a vendor pitch.

Secure AI Workloads in Azure: Join Our Next Azure Decoded Session on April 8th
AI introduces new risks—like prompt injection, data leakage, and model misuse—which means security teams need visibility and guardrails that extend beyond traditional cloud controls. In our next Azure Decoded session, we'll focus on securing AI workloads in Azure with Microsoft Defender for Cloud. Register now for the Azure Decoded session on April 8th at 12 PM PST.

Bringing AI security architecture to life with Azure Decoded

In the Lockdown AI workloads with Microsoft Defender for Cloud session, we move from concepts to implementation and show how these protections appear in the platform. We'll walk through where Microsoft Defender for Cloud fits into an end-to-end AI security strategy—and how discovery, posture management, and runtime protection work together to secure AI workloads built on Azure. You'll also see how to connect the dots across the workflow—so signals from AI resources, identity, and data controls roll up into actionable recommendations and alerts. You'll learn how to:

- Enable and scope the AI workloads protections in Defender for Cloud
- Use the Data & AI security dashboard to understand coverage and priority risks
- Review posture findings (CSPM) and translate them into remediation steps
- Investigate runtime detections (CWP) and see how they map into Microsoft Defender XDR

Our goal isn't theory for theory's sake. It's to help you see how AI security shows up in real architecture and real workflows—so you can apply it confidently in your own environment.

Who is this session for?

We built this session for practitioners who are actively working with AI in Azure, including:

- Developers building AI applications and agents
- Security engineers responsible for protecting AI workloads
- Cloud architects designing enterprise-ready AI solutions

If you're balancing innovation with security and governance, this session is designed to help you translate AI security concepts into concrete steps in Azure. Before you join: familiarity with core Azure concepts (subscriptions, resource groups, identity, networking) is helpful. You don't need to be a machine learning expert—the focus is on securing the cloud resources and workflows that power AI solutions.

From AI security concepts to platform protections

If you'd like to get the most out of the session, start with the Microsoft Learn module Protect AI workloads with Microsoft Defender for Cloud. It introduces the building blocks of AI workloads in Azure and the security considerations that come with them. In the module, you'll learn how to:

- Identify the layers that make up AI workloads in Azure
- Understand AI-specific risks, including prompt injection, data leakage, and model misuse
- Use Microsoft Foundry guardrails and observability to monitor and constrain model behavior
- See how Defender for Cloud, Microsoft Purview, and Microsoft Entra ID work together for defense in depth and governance

Think of this as your foundation: it connects AI workload architecture to the controls you'll configure in Azure, so you can protect inputs and outputs, maintain visibility, and apply governance without slowing delivery.

Catch up on the previous Azure Decoded session

If you missed the previous Azure Decoded session—or want a refresher—you can watch it on demand on YouTube: ▶️ Watch the previous Azure Decoded session on YouTube. It's a helpful refresher and sets the stage for the April 8 discussion.

Turn learning into hands-on skills

If you want to go beyond watching the session, you can put these skills to work in your own environment.
The Microsoft Applied Skills credential, Secure AI Solutions in the Cloud, is a great next step after the Azure Decoded session. You will:

- Scope and enable protections for AI-related resources and workloads in Azure
- Validate coverage and prioritize risks using the Data & AI security dashboard
- Find and remediate posture gaps (CSPM) that increase exposure for AI workloads
- Investigate runtime detections (CWP) and understand what they mean in the context of AI workload behavior
- Triage AI-related alerts and incidents in Microsoft Defender XDR and decide on next steps

Get started

1️⃣ Register for Azure Decoded: Lock Down AI Workloads with Microsoft Defender for Cloud
2️⃣ Watch the previous Azure Decoded session before April 8th (optional refresher)
3️⃣ Earn the Microsoft Applied Skills: Secure AI Solutions in the Cloud credential to showcase your skills

The goal is to leave with something reusable: a practical sequence you can apply to new projects to confirm coverage, reduce posture gaps, and respond quickly when Defender signals suspicious activity tied to AI workloads.
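As a starting point for the "scope and enable" step above, Defender for Cloud plans can also be turned on programmatically at subscription scope. The sketch below is a hedged illustration using the ARM pricings API; the plan name "AI" and the api-version are assumptions to verify against current Azure documentation.

```python
# Hedged sketch: enable a Defender for Cloud plan covering AI workloads at
# subscription scope. The plan name "AI" and the api-version are assumptions;
# confirm both against the current Microsoft.Security/pricings reference.
import requests
from azure.identity import DefaultAzureCredential  # pip install azure-identity requests

SUBSCRIPTION_ID = "<your-subscription-id>"
API_VERSION = "2024-01-01"  # assumed; check the latest supported version

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/providers/Microsoft.Security/pricings/AI?api-version={API_VERSION}"
)
# "Standard" enables the paid protections; "Free" turns the plan off again.
resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"pricingTier": "Standard"}},
)
resp.raise_for_status()
print("Plan tier:", resp.json()["properties"]["pricingTier"])
```

Once the plan is enabled, coverage and findings surface in the Data & AI security dashboard described above.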
Building Secure, Enterprise Ready AI Agents with Purview SDK and Agent Framework

At Microsoft Ignite, we announced the public preview of Purview integration with the Agent Framework SDK—making it easier to build AI agents that are secure, compliant, and enterprise-ready from day one. AI agents are quickly moving from demos to production. They reason over enterprise data, collaborate with other agents, and take real actions. As that happens, one thing becomes non-negotiable: governance has to be built in. That's where Purview SDK comes in.

Agentic AI Changes the Security Model

Traditional apps expose risks at the UI or API layer. AI agents are different. Agents can:

- Process sensitive enterprise data in prompts and responses
- Collaborate with other agents across workflows
- Act autonomously on behalf of users

Without built-in controls, even a well-designed agent can create compliance gaps. Purview SDK brings Microsoft's enterprise data security and compliance directly into the agent runtime, so governance travels with the agent—not after it.

What You Get with Purview SDK + Agent Framework

This integration delivers a few key things developers and enterprises care about most:

- Inline Data Protection: Evaluate prompts and responses against Data Loss Prevention (DLP) policies in real time. Content can be allowed or blocked automatically.
- Built-In Governance: Send AI interactions to Purview for audit, eDiscovery, communication compliance, and lifecycle management—without custom plumbing.
- Enterprise-Ready by Design: Ship agents that meet enterprise security expectations from the start, not as a follow-up project.

All of this is done natively through Agent Framework middleware, so governance feels like part of the platform—not an add-on.

How Enforcement Works (Quickly)

When an agent runs:

1. Prompts and responses flow through the Agent Framework pipeline
2. Purview SDK evaluates content against configured policies
3. A decision is returned: allow, redact, or block
4. Governance signals are logged for audit and compliance

This same model works for user-to-agent interactions, agent-to-agent communication, and multi-agent workflows.

Try It: Add Purview SDK in Minutes

Wiring this up takes only a few lines of Python with Agent Framework middleware (a hedged sketch is included at the end of this post). From that point on:

- Prompts and responses are evaluated against Purview policies set up within the enterprise tenant
- Sensitive data can be automatically blocked
- Interactions are logged for governance and audit

Designed for Real Agent Systems

Most production AI apps aren't single-agent systems. Purview SDK supports:

- Agent-level enforcement for fine-grained control
- Workflow-level enforcement across orchestration steps
- Agent-to-agent governance to protect data as agents collaborate

This makes it a natural fit for enterprise-scale, multi-agent architectures.

Get Started Today

You can start experimenting right away:

- Try the Purview SDK with Agent Framework: follow the Microsoft Learn docs to configure Purview SDK with Agent Framework.
- Explore the GitHub samples: see examples of policy-enforced agents in Python and .NET.

Secure AI, Without Slowing It Down

AI agents are quickly becoming production systems—not experiments. By integrating Purview SDK directly into the Agent Framework, Microsoft is making governance a default capability, not a deployment blocker. Build intelligent agents. Protect sensitive data. Scale with confidence.
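For reference, here is the kind of minimal Python wiring the post alludes to. This is an illustrative sketch only: the Purview middleware class name, import path, and constructor arguments are assumptions standing in for the real SDK surface, so follow the Microsoft Learn docs and GitHub samples above for the supported integration.

```python
# Hedged sketch of Purview-governed Agent Framework middleware. The
# PurviewPolicyMiddleware name and its arguments are illustrative assumptions,
# not the published Purview SDK API.
import asyncio

from agent_framework import ChatAgent                    # Agent Framework SDK
from agent_framework.azure import AzureOpenAIChatClient  # model client (reads env config)
from purview_sdk import PurviewPolicyMiddleware          # hypothetical import path

agent = ChatAgent(
    chat_client=AzureOpenAIChatClient(),
    instructions="You are an HR onboarding assistant. Answer policy questions only.",
    middleware=[
        # Evaluates every prompt/response against tenant DLP policies and logs
        # each interaction to Purview for audit and eDiscovery.
        PurviewPolicyMiddleware(app_name="hr-onboarding-agent"),
    ],
)

async def main() -> None:
    # If the reply would expose protected data, the middleware returns a
    # block or redact decision instead of the raw model output.
    print(await agent.run("Summarize the new-hire benefits policy."))

asyncio.run(main())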
Strengthening your Security Posture with Microsoft Security Store Innovations at RSAC 2026

Security teams are facing more threats, more complexity, and more pressure to act quickly - without increasing risk or operational overhead. What matters is being able to find the right capability, deploy it safely, and use it where security work already happens. Microsoft Security Store was built with that goal in mind. It provides a single, trusted place to discover, purchase, and deploy Microsoft and partner-built security agents and solutions that extend Microsoft Security - helping you improve protection across SOC, identity, and data protection workflows. Today, the Security Store includes 75+ security agents and 115+ solutions from Microsoft and trusted partners - each designed to integrate directly into Microsoft Security experiences and meet enterprise security requirements. At RSAC 2026, we're announcing capabilities that make it easier to turn security intent into action - by improving how you discover agents, how quickly you can put them to use, and how effectively you can apply them across workflows to achieve your security outcomes.

Meet the Next Generation of Security Agents

Security agents are becoming part of day-to-day operations for many teams - helping automate investigations, enrich signals, and reduce manual effort across common security tasks. Since Security Store became generally available, Microsoft and our partners have continued to expand the set of agents that integrate directly with Microsoft Defender, Sentinel, Entra, Purview, Intune and Security Copilot. Some of the notable partner-built agents available through Security Store include:

XBOW Continuous Penetration Testing Agent: XBOW's penetration testing agents perform pen-tests, analyze findings, and correlate those findings with a customer's Microsoft Defender detections. XBOW integrates offensive security directly into Microsoft Security workflows by streaming validated, exploitable AppSec findings into Microsoft Sentinel and enabling investigation through XBOW's Copilot agents in Microsoft Defender. With XBOW's pen-testing agents, offensive security can run continuously to identify which vulnerabilities are actually exploitable, and how to improve posture and detections.

Tanium Incident Scoping Agent: The Tanium Incident Scoping Agent (in preview) brings real-time endpoint intelligence directly into Microsoft Defender and Microsoft Security Copilot workflows. The agent automatically scopes incidents, identifies impacted devices, and surfaces actionable context in minutes - helping teams move faster from detection to containment. By combining Tanium's real-time intelligence with Microsoft Security investigations, you can reduce manual effort, accelerate response, and maintain enterprise-grade governance and control.

Zscaler: In Microsoft Sentinel, the Zscaler ZIA–ZPA Correlation Agent correlates ZIA and ZPA activity for a given user to speed malsite/malware investigations. It highlights suspicious patterns and recommends ZIA/ZPA policy changes to reduce repeat exposure.

These agents build on a growing ecosystem of Microsoft and partner capabilities designed to work together, allowing you to extend Microsoft Security with specialized expertise where it has the most impact.

Discover and Deploy Agents and Solutions in the Flow of Security Work

Security teams work best when they don't have to switch tools to make decisions.
That's why Security Store is embedded directly into Microsoft Security experiences - so you can discover and evaluate trusted agents and solutions in context, while working in the tools you already use. When Security Store became generally available, we embedded it into Microsoft Defender, allowing SOC teams to discover and deploy trusted Microsoft and partner-built agents and solutions in the middle of active investigations. Analysts can now automate response, enrich investigations, and resolve threats all within the Defender portal. At RSAC, we're expanding this approach across identity and data security.

Strengthening Identity Security with Security Store in Microsoft Entra

Identity has become a primary attack surface - from fraud and automated abuse to privileged access misuse and posture gaps. Security Store is now embedded in Microsoft Entra, allowing identity and security teams to discover and deploy partner solutions and agents directly within identity workflows. For external and verified identity scenarios, Security Store includes partner solutions that integrate with Entra External ID and Entra Verified ID to help protect against fraud, DDoS attacks, and intelligent bot abuse. These solutions, built by partners such as IDEMIA, AU10TIX, TrueCredential, HUMAN Security, Akamai and Arkose Labs, help strengthen trust while preserving seamless user experiences. For enterprise identity security, more than 15 agents available through the Entra Security Store provide visibility into privileged activity and identity risk, posture health and trends, and actionable recommendations to improve identity security and overall security score. These agents are built by partners such as glueckkanja, adaQuest, Ontinue, BlueVoyant, Invoke, and Performanta. This allows you to extend Entra with specialized identity security capabilities, without leaving the identity control plane.

Extending Data Protection with Security Store in Microsoft Purview

Protecting sensitive data requires consistent controls across where data lives and how it moves. Security Store is now embedded in Microsoft Purview, enabling teams responsible for data protection and compliance to discover partner solutions directly within Purview DLP workflows. Through this experience, you can extend Microsoft Purview DLP with partner data security solutions that help protect sensitive data across cloud applications, enterprise browsers, and networks. These include solutions from Microsoft Entra Global Secure Access and partners such as Netskope, Island, iBoss, and Palo Alto Networks. This experience will be available to customers later this month, as reflected on the M365 roadmap. By discovering solutions in context, teams can strengthen data protection without disrupting established compliance workflows. Across Defender, Entra, and Purview, purchases continue to be completed through the Security Store website, ensuring a consistent, secure, and governed transaction experience - while discovery and evaluation happen exactly where teams already work.

Outcome-Driven Discovery with Security Store Advisor

As the number of agents and solutions in the Store grows, finding the right fit for your security scenario quickly becomes more important. That's why we're introducing the AI-guided Security Store Advisor, now generally available. You can describe your goal in natural language - such as "investigate suspicious network activity" - and receive recommendations aligned to that outcome.
Advisor also includes side-by-side comparison views for agents and solutions, helping you review capabilities, integrated services, and deployment requirements more quickly and reduce evaluation time. Security Store Advisor is designed with Responsible AI principles in mind, including transparency and explainability. You can learn more about how Responsible AI is applied in this experience in the Security Store Advisor Responsible AI FAQ. Overall, this outcome-driven approach reduces time to value, improves solution fit, and helps your team move faster from intent to action.

Learning from the Security Community with Ratings and Reviews

Security decisions are strongest when informed by real-world use. This is why we are introducing Security Store ratings and reviews from security professionals who have deployed and used agents and solutions in production environments. These reviews focus on practical considerations such as integration quality, operational impact, and ease of use, helping you learn from peers facing similar security challenges. By sharing feedback, the security community helps raise the bar for quality and enables faster, more informed decisions, so teams can adopt agents and solutions with greater confidence and reduce time to value.

Making Agents Easier to Use Post-Deployment

Once you've deployed your agents, several new capabilities make it easier to work with them in your daily workflows. These updates help you operationalize agents faster and apply automation where it delivers real value.

Interactive chat with agents in Microsoft Defender lets SOC analysts put questions to agents with specialized expertise - such as which devices are impacted or which vulnerabilities to prioritize - directly in the Defender portal. By bringing a conversational experience with agents into the place where analysts do most of their investigation work, analysts can seamlessly collaborate with agents to improve security.

Logic App triggers for agents enable security teams to include security agents in their automated, repeatable workflows. With this update, organizations can apply agentic automation to a wider variety of security tasks while integrating with their existing tools and workflows to perform tasks like incident triage and access reviews.

Product combinations in Security Store make it easier to deploy complete security solutions from a single streamlined flow - whether that includes connectors, SaaS tools, or multiple agents that need to work together. Increasingly, partners are building agents that are adept at using your SaaS security tools and security data to provide intelligent recommendations - this feature helps you deploy them faster with ease.

A Growing Ecosystem Focused on Security Outcomes

As the Security Store ecosystem continues to expand, you gain access to a broader set of specialized agents and solutions that work together to help defend your environment - extending Microsoft Security with partner innovation in a governed and integrated way. At the same time, Security Store provides partners a clear path to deliver differentiated capabilities directly into Microsoft Security workflows, aligned to how customers evaluate, adopt, and use security solutions.

Get Started

Visit https://securitystore.microsoft.com/ to discover security agents and solutions that meet your needs and extend your Microsoft Security investments.
If you're a partner, visit https://securitystore.microsoft.com/partners to learn how to list your solution or agent and reach customers where security decisions are made.

Where to find us at RSAC 2026

Security Reborn in the Era of AI workshop: Get hands-on guidance on building and deploying Security Copilot agents and publishing them to the Security Store. March 23 | 8:00 AM | The Palace Hotel. Register: Security Reborn in the Era of AI | Microsoft Corporate

Microsoft Security Store: An Inside Look: Join us for a live theater session exploring what's coming next for Security Store. March 26 | 1:00 PM | Microsoft Security Booth #5744 | North Expo Hall

Visit us at the Booth: Experience Security Store firsthand - test the experience and connect with experts. Microsoft Booth #1843

Microsoft Purview securing data and enabling apps and agents across your AI stack
As agentic AI moves from experimentation to enterprise execution, it fundamentally reshapes the data risk landscape—because AI apps and autonomous agents can access, reason over, and act on sensitive information at unprecedented speed and scale. This blog explains how Microsoft Purview extends security, compliance, and risk management across the AI stack (from data and prompts to copilots, custom agents, and even third-party AI services) with capabilities like DSPM, sensitivity labels, DLP, insider risk, and audit/eDiscovery. It also highlights recent innovations such as inline DLP for Copilot Studio agents, upcoming DLM insights and policy recommendations for Copilot/AI app interactions, and expanded protections for Copilot web search and network/browser enforcement through partners.

Governing AI Agent Behavior: Aligning User, Developer, Role, and Organizational Intent
Authors: Fady Copty, Principal Researcher; Neta Haiby, Partner Product Manager; Idan Hen, Principal Researcher

AI agents increasingly perform tasks that involve reasoning, acting, and interacting with other systems. Building a trusted agent requires ensuring it operates within the correct boundaries and performs tasks consistent with its intended purpose. In practice, this requires aligning several layers of intent:

- User intent: The goal or task the user is trying to accomplish.
- Developer intent: The purpose for which the agent was designed and built.
- Role-based intent: The specific function the agent performs within an organization.
- Organizational intent: Enterprise policies, standards, and operational constraints.

For example, one department may adopt an agent developed by another team, customize it for a specific business role, require that it adhere to internal policies, and expect it to provide reliable results to end users. Aligning these intent layers helps ensure agents meet user needs while operating within organizational, security, and compliance boundaries.

Importance of intent alignment

A successful and trusted AI agent must satisfy what the user intended to accomplish, while operating within the bounds of what the developer, role, and organization intended it to do. Proper intent alignment empowers AI agents to:

- Deliver quality results that accurately address user requests and solve real problems, increasing trust and productivity.
- Maintain their intended goal and operate within the boundaries they were developed and deployed for, reflecting the developer's original design and the job to be done by the deploying organization.
- Uphold security and compliance by respecting organizational policies, protecting data, and preventing misuse or unauthorized actions.

User Intent: The Key to Quality Outcomes

Every AI agent interaction begins with the user's objective, the task the user is trying to complete. Correctly interpreting that objective is essential to producing useful results. If the agent misinterprets the request, the response may be irrelevant, incomplete, or incorrect. Modern agents often go beyond simple question answering. They interpret requests, select tools or services, and perform actions to complete a task. Evaluating alignment with user intent therefore requires examining whether the agent correctly interprets the request, chooses the appropriate tools, and produces a coherent response. For example, when a user submits the query "Weather now," an agent must infer that the user wants the current local weather. It must retrieve the relevant location and weather data through available APIs and present the result in a clear response.

Developer intent: Defining the agent's intended scope

If user intent is about what the user wants the agent to do, developer intent is about what the agent was developed for. Developer intent defines both the quality bar for how well the agent fulfills its intended job and the security boundaries that protect the agent from misuse or drift. In short, developer intent makes agents both reliable in what they do and resilient against threats that could push them beyond their purpose. In essence, developer intent reflects the original design and purpose of the system, anchoring the agent's behavior so it consistently does what it was built to do and nothing more. The developer could be external to the organization, and the developer's intent could be generic to allow serving multiple organizations.
For example, if a developer designs an AI agent to process emails for sorting and prioritization, the agent must stay within that scope. It should classify emails into categories like "urgent," "informational," or "follow-up," and perhaps flag potential phishing attempts. However, it must not autonomously send replies, delete messages, or access external systems without explicit authorization, even if the user asks it to. This alignment ensures the agent performs its intended job reliably while preventing unintended actions that could compromise security or user trust.

Role-based intent: Defining the agent's operational role

Role-based intent is the specific business objective, purpose, scope, and authority the AI agent has within an organization as a digital worker. Role-based intent defines what the agent's job within a specific organization is. Every agent deployed in a business environment occupies a digital role, whether as a customer support assistant, a marketing analyst, a compliance reviewer, or a workflow orchestrator. These roles can be explicit (a named agent such as a "Marketing Analyst Agent") or implicit (a copilot assigned to assist a human marketing analyst). Its role-based intent dictates the boundaries of that position: what it is empowered to do, what decisions it can make, what data it can access, and when it must defer to a human or another system. For example, if an AI agent is developed as a "Compliance Reviewer" and its role is to review compliance for HIPAA regulations, its role-based intent defines its digital job description: scanning emails and documents for HIPAA-related regulatory keywords, flagging potential violations, and generating compliance reports. It is empowered to review and report HIPAA-related violations, but not all types of records or all types of regulations. This differs from developer intent, which focuses on the technical boundaries and capabilities coded into the agent, such as ensuring it only processes text data, uses approved APIs, and cannot execute actions outside its programmed scope. While developer intent enforces how the agent operates (its technical limits), role-based intent governs what job it performs within the organization and the authority it holds in business workflows.

Organizational intent: Enforcing enterprise policies and safeguards

Beyond user and developer intent, a successful AI agent must also reflect the organization's intent – the goals, values, and requirements of the enterprise or team deploying the agent. Organizational intent often takes the form of policies, compliance standards, and security practices that the agent is expected to uphold. Aligning with organizational and developer intent is what makes an AI agent trustworthy in production, as it ensures the AI's actions stay within approved boundaries and protect the business and its customers. This is the realm of security and compliance. For example, an AI agent acting as an "HR Onboarding Assistant" has a role-based intent of guiding new employees through the onboarding process, answering policy-related questions, and scheduling mandatory training sessions. It can access general HR documents and training calendars, but it must comply with GDPR by avoiding unnecessary collection of personal data and ensuring any sensitive information (like Social Security numbers) is handled through secure, approved channels. This keeps the agent within its defined role while meeting regulatory obligations.
Intent precedence and conflict resolution

Because multiple layers of intent guide an AI agent's behavior, conflicts can occur. Organizations therefore need a clear precedence model that determines which intent takes priority when instructions or expectations do not align. In enterprise environments, intent should be resolved in the following order of precedence:

1. Organizational intent: Security policies, regulatory requirements, and enterprise governance define the outer boundaries for agent behavior.
2. Role-based intent: The business function assigned to the agent determines what tasks it is authorized to perform within the organization.
3. Developer intent: The technical capabilities and constraints designed into the system define how the agent operates.
4. User intent: User requests are fulfilled only when they remain consistent with the constraints defined above.

This hierarchy ensures that AI agents can deliver useful outcomes for users while remaining aligned with system design, business responsibilities, and organizational safeguards.

Examples of intent conflicts and expected agent behavior:

- User request conflicts with organizational or role intent: The agent should refuse the action or escalate to a human reviewer.
- User request is permitted but unclear: The agent should request clarification before proceeding.
- User request is permitted and clearly defined: The agent can proceed and explain the actions taken.

Elements of intent

Each type of intent is made up of different elements:

User intent

User intent represents the task or outcome the user is trying to achieve. It is typically inferred from the user's request and surrounding context. Common elements include:

- Goal – the outcome the user wants to achieve.
- Context – why the request is being made and how the result will be used.
- Constraints – time, format, or operational limits.
- Preferences – language, tone, or level of detail.
- Success criteria – what defines a completed task.
- Risk level – the potential impact of incorrect results.

When requests involve high-impact actions or unclear objectives, agents should request clarification before proceeding.

Developer intent

Developer intent defines the agent's designed capabilities, purpose, and operational safeguards. It establishes what the system is intended to do and the technical limits that prevent misuse. Key elements include:

- Purpose definition – the specific task or problem the agent is designed to address.
- Capability boundaries – the actions and tools the agent is allowed to use.
- Guardrails – restrictions that prevent unsafe behavior, policy violations, or unauthorized actions.
- Operational constraints – technical limits such as approved APIs, supported data types, or restricted operations.

When developer intent is clearly defined and enforced, agents operate consistently within their intended scope and resist attempts to perform actions outside their design. Example developer specification:

- Purpose: An AI travel assistant that helps users plan trips.
- Expected inputs: Natural language travel queries, including destination, dates, budget, and preferences.
- Expected outputs: Travel recommendations, itineraries, destination information, and activity suggestions.
- Allowed actions: Recommend destinations; generate itineraries; provide travel tips based on user preferences.
- Guardrails: Only assist with travel planning. Do not expose internal data or customer PII.
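A specification like the one above becomes enforceable once it is captured as data the runtime can check before every tool call. The sketch below is a hedged, minimal illustration (the class and function names are ours, not from any SDK) of deny-by-default enforcement of developer intent.

```python
# Minimal sketch: the travel-assistant developer spec captured as data and
# enforced deny-by-default before any tool call. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class DeveloperIntent:
    purpose: str
    allowed_actions: set[str]
    guardrails: list[str] = field(default_factory=list)

TRAVEL_ASSISTANT = DeveloperIntent(
    purpose="Help users plan trips",
    allowed_actions={"recommend_destinations", "generate_itinerary", "provide_travel_tips"},
    guardrails=["travel planning only", "no internal data or customer PII"],
)

def authorize(intent: DeveloperIntent, requested_action: str) -> bool:
    """Deny by default: only actions named in the developer spec may execute."""
    return requested_action in intent.allowed_actions

assert authorize(TRAVEL_ASSISTANT, "generate_itinerary")
assert not authorize(TRAVEL_ASSISTANT, "book_flight")  # outside developer intent
```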
Role-based intent

Just like a human employee, an AI agent must understand and stay within its job description. This ensures clarity, safety, and accountability in how agents operate alongside people and other systems. Key principles of role-based intent include:

- Scope of responsibility – the specific tasks the agent is authorized to perform.
- Autonomy boundaries – when the agent can act independently versus when human oversight is required.
- Context awareness – understanding how requests relate to the agent's assigned business function.
- Coordination with other systems or agents – ensuring responsibilities do not overlap or conflict.

When role-based intent is clearly defined and enforced, AI agents operate with the precision and reliability of well-trained team members. They know their scope, respect their boundaries, and contribute effectively to organizational goals. In this way, role-based intent serves as the practical mechanism that connects developer design and organizational business purpose, turning AI from a general assistant into a trusted, specialized digital worker. For example:

- Scope of responsibility: Travel planning assistance for customers planning to travel to France.
- Boundary of autonomy: Cannot make bookings or payments on behalf of customers; cannot access or modify customer accounts.
- Contextual awareness: Food preferences (e.g., vegetarian, allergies) are sensitive information.
- Coordination with other agents: Must refer customers to human agents for multi-country trips or complex itineraries.

Organizational intent

Key considerations include:

Policy compliance and governance: Organizations often define rules that govern what users and AI systems are allowed to do. These may originate from regulations such as GDPR or HIPAA, industry standards, or internal policies and ethics guidelines. For example, a financial services organization may require an agent to include disclaimers when discussing financial topics, while a healthcare organization may restrict the generation of medical advice beyond an agent's approved scope. Enforcing organizational intent requires governance mechanisms that monitor and control agent behavior to ensure compliance.

Content safety and risk management: Organizations must also prevent AI systems from producing harmful, inappropriate, or sensitive outputs. This includes content such as hate speech, biased or misleading responses, or the disclosure of confidential data. Aligning agents with organizational intent requires safeguards that detect and prevent these types of outputs.

When agents operate within organizational intent, enterprises gain greater assurance that AI systems respect legal requirements, protect sensitive data, and follow established operational policies. Clear governance and enforcement mechanisms also make it easier for organizations to deploy AI systems across sensitive business functions while maintaining security and compliance.
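With all four layers defined, the precedence model described earlier reduces to a small decision routine: evaluate the higher-precedence layers first, and let any veto stand. The sketch below is illustrative only; each check is a placeholder for a real policy engine or guardrail service.

```python
# Hedged sketch of intent precedence: organizational > role-based > developer >
# user. Each layer check is a stand-in for a real policy engine or guardrail.
from enum import Enum
from typing import Callable

class Decision(Enum):
    PROCEED = "proceed"              # permitted and clearly defined
    CLARIFY = "clarify"              # permitted but ambiguous: ask the user
    REFUSE_OR_ESCALATE = "escalate"  # conflicts with a higher-precedence intent

def resolve(request: dict,
            org_allows: Callable[[dict], bool],
            role_allows: Callable[[dict], bool],
            dev_allows: Callable[[dict], bool]) -> Decision:
    # Higher-precedence layers are checked first; any veto is final.
    for layer_allows in (org_allows, role_allows, dev_allows):
        if not layer_allows(request):
            return Decision.REFUSE_OR_ESCALATE
    if request.get("ambiguous", False):
        return Decision.CLARIFY
    return Decision.PROCEED

# The travel agent's role bars multi-country trips, so the request escalates
# to a human even though the user asked for it directly.
request = {"action": "plan_multi_country_trip", "ambiguous": False}
print(resolve(request,
              org_allows=lambda r: True,
              role_allows=lambda r: r["action"] != "plan_multi_country_trip",
              dev_allows=lambda r: True))  # -> Decision.REFUSE_OR_ESCALATE
```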
Best Practices for Maintaining and Protecting Intent Alignment

Aligning user, developer, role-based, and organizational intent is an ongoing discipline that ensures AI agents continue to operate safely, securely, effectively, and in harmony with evolving needs. As AI systems become more autonomous and adaptive, maintaining intent alignment requires continuous oversight, enforcement, robust governance, and strong feedback mechanisms. Here are key best practices for maintaining and protecting these layers of intent:

Ensure intent in design and governance: Capture each type of intent (user, developer, role-based, and organizational) as explicit requirements in the design process to start secure. Define them through documentation, policies, and testable parameters. Treat these intents as part of the agent's "constitution," reviewed regularly as the system evolves.

Establish clear agent identity and intent mapping: Every AI agent should have a unique agent identity, just like a human employee or device. Inventory all agents, assign them identities, and maintain a mapping to their intent documentation.

Enforce least-privileged access based on intent: This ensures agents only perform actions within their intended scope and prevents privilege misuse or unauthorized escalation. Regularly review and update access rights as roles or business needs evolve.

Enforce intent dimensions: Enforcement means preventing the agent from taking actions or accessing data outside approved boundaries, even if a prompt tries to push it there. Use the intent precedence to resolve conflicts between intent dimensions.

Evaluate agents continuously in development and production: Agents are powerful productivity assistants. They can plan, make decisions, and execute actions. Agents typically first reason through user intents in conversations, select the correct tools to call and satisfy the user requests, and complete various tasks according to their instructions. Before deploying agents, it's critical to evaluate their design, behavior, and performance against available intent documentation. For example, test the agent against sample inputs that could push it outside each intent dimension.

Implement guardrails and policy enforcement: Embed dynamic guardrails at every layer. Developer guardrails prevent drift in capability or behavior, role-based guardrails limit actions to authorized domains, and organizational policies enforce compliance and safety. Use platforms like Azure AI Content Safety or policy orchestration frameworks to enforce boundaries automatically.

Continuously observe, monitor, and audit agent behavior: Intent alignment must be validated in production. Regular audits, telemetry, and behavior logs help ensure the agent's outputs, actions, and interactions remain consistent with intended roles and policies. Implement feedback loops that flag anomalies such as actions outside of scope, unauthorized data access, or off-policy responses.

Maintain a human-in-the-loop for escalation: Even with autonomous reasoning, agents should know when to pause and seek human oversight. Define escalation triggers (e.g., high-risk requests, ambiguous user intents, or policy conflicts) that route decisions to human reviewers, protecting both users and the organization from unintended consequences.

Update intents as systems and contexts evolve: Intent dimensions can change over time. Treat intent definitions as living assets that must adapt over time. Establish a structured process to review and update intent boundaries whenever the agent's capabilities, integrations, or environments change.

Foster a culture of security and compliance: Educate developers, operators, and business stakeholders about the importance of intent alignment and the risks of intent drift. Promote shared responsibility for agent security, and encourage proactive reporting and remediation of issues.

Maintaining and protecting intent ensures that AI agents perform tasks with quality, securely and responsibly aligned with user needs, developer design, role purpose, and organizational values.
As enterprises scale their AI workforce, disciplined intent management becomes the foundation for safety, trust, and sustainable success.

Security Dashboard for AI - Now Generally Available
AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions, across their entire AI estate, spanning AI platforms, apps, and agents. 53% of security professionals say their current AI risk management needs improvement, presenting an opportunity to better identify, assess, and manage risk effectively.1 At the same time, 86% of leaders prefer integrated platforms over fragmented tools, citing better visibility, fewer alerts, and improved efficiency.2

To address these needs, we are excited to announce that the Security Dashboard for AI, previously announced at Microsoft Ignite, is now generally available. This unified dashboard aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview - enabling users to see left-to-right across purpose-built security tools from within a single pane of glass. The dashboard equips CISOs and AI risk leaders with a governance tool to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. Security teams can continue using the tools they trust while empowering security leaders to govern and collaborate effectively.

Gain Unified AI Risk Visibility

Consolidating risk signals from across purpose-built tools can simplify AI asset visibility and oversight, increase security teams' efficiency, and reduce the opportunity for human error. The Security Dashboard for AI provides leaders with unified AI risk visibility by aggregating security, identity, and data risk across Defender, Entra, and Purview into a single interactive dashboard experience. The Overview tab of the dashboard provides users with an AI risk scorecard, giving immediate visibility into where there may be risks for security teams to address. It also assesses an organization's implementation of Microsoft security for AI capabilities and provides recommendations for improving AI security posture. The dashboard also features an AI inventory with comprehensive views to support AI asset discovery, risk assessments, and remediation actions for broad coverage of AI agents, models, MCP servers, and applications. The dashboard provides coverage for all Microsoft AI solutions supported by Entra, Defender, and Purview—including Microsoft 365 Copilot, Microsoft Copilot Studio agents, and Microsoft Foundry applications and agents—as well as third-party AI models, applications, and agents, such as Google Gemini, OpenAI ChatGPT, and MCP servers. This supports comprehensive visibility and control, regardless of where applications and agents are built.

Prioritize Critical Risk with Security Copilot's AI-Powered Insights

Risk leaders must do more than just recognize existing risks—they also need to determine which ones pose the greatest threat to their business. The dashboard provides a consolidated view of AI-related security risks and leverages Security Copilot's AI-powered insights to help find the most critical risks within an environment. For example, Security Copilot's natural language interaction improves agent discovery and categorization, helping leaders identify unmanaged and shadow AI agents to enhance security posture.
Furthermore, Security Copilot allows leaders to investigate AI risks and agent activities through prompt-based exploration, putting them in the driver's seat for additional risk investigation.

Drive Risk Mitigation

By streamlining risk mitigation recommendations and automating task delegation, organizations can significantly improve the efficiency of their AI risk management processes. This approach can reduce hidden AI risk and accelerate compliance efforts, helping to ensure that risk mitigation is timely and accurate. To address this, the Security Dashboard for AI evaluates how organizations put Microsoft's AI security features into practice and offers tailored suggestions to strengthen AI security posture. It leverages Microsoft's productivity tools for immediate action within the practitioner portal, making it easy for administrators to delegate recommendation tasks to designated users. With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, the Security Dashboard for AI is included with eligible Microsoft security products customers already use. If an organization is already using Microsoft security products to secure AI, it is already a Security Dashboard for AI customer.

Getting Started

Existing Microsoft Security customers can start using the Security Dashboard for AI today. It is included when a customer has the Microsoft Security products—Defender, Entra, and Purview—with no additional licensing required. To begin, visit http://ai.security.microsoft.com or access the dashboard from the Defender, Entra, or Purview portals. Learn more about the Security Dashboard for AI at Microsoft Security MS Learn.

1 AuditBoard & Ascend2 Research. The Connected Risk Report: Uniting Teams and Insights to Drive Organizational Resilience. AuditBoard, October 2024.
2 Microsoft. 2026 Data Security Index: Unifying Data Protection and AI Innovation. Microsoft Security, 2026.

Securing the Browser Era - From Cloud to AI: A blog series on protecting the modern workspace
In today's digital-first workplace, the browser has quietly become the new operating system for enterprise productivity. From accessing SaaS platforms and cloud-native applications to enabling real-time collaboration and now AI-assisted workflows, the browser is no longer just a window to the web—it is the primary interface for getting work done. As AI capabilities become embedded directly into browsers, from copilots to autonomous agents, this evolution brings unprecedented convenience and equally unprecedented risk. The browser is no longer passive; it is active, intelligent, and increasingly privileged. This shift demands a fundamental rethinking of how we secure the browser in the age of AI.

In Part 1 of this series, we explored how the browser evolved into a mission-critical workspace with the rise of cloud and SaaS, and how attackers quickly pivoted to exploit this new surface. Part 2 introduced a defense-in-depth playbook, emphasizing the need for enterprise-grade secure browsers and Zero Trust principles to counter browser-specific threats. In this final part, we examine the next frontier: AI-powered browsers. These tools promise dramatic productivity gains but also introduce novel and complex security threats. This post explores these risks and how organizations can defend against them using Microsoft's integrated security solutions.

Part 3 – Securing AI-Driven Browsers: Balancing Innovation with Risk

Modern browsers are increasingly augmented with AI assistants and generative AI features that can summarize web content, automate tasks, and even act on the user's behalf, transforming the browser from a static tool into an active collaborator. Entirely new AI browsers are emerging, promising to reshape productivity by letting AI agents click, write, and gather information autonomously. These capabilities yield clear productivity benefits; employees can get instant answers from web data, offload tedious actions to an AI assistant, and draw on generative AI to spark ideas. This innovation can unlock efficiency and insights – but it also expands the attack surface in unprecedented ways. Enterprises must now grapple with shadow AI in addition to SaaS-induced shadow IT in the workplace, where employees use consumer-grade AI tools without oversight, risking sensitive data exposure. AI-powered browsers blur the line between code and content, and between user intent and machine autonomy. AI integration in browsing brings a mix of cybersecurity and AI safety challenges that attackers can exploit. The browser's threat surface now spans everything from classic web exploits to the nuances of AI model behavior.

Unique Threat Vectors Introduced by AI-Browsers

Prompt Injection Attacks - Prompt injection represents the most severe AI-native vulnerability, ranked as LLM01 by OWASP with attack success rates of 50-84%. AI agents are susceptible to a new class of attack where malicious instructions are hidden within web content to hijack the AI's behavior. Unlike traditional exploits, prompt injections do not target code vulnerabilities; they exploit the AI's tendency to follow instructions. Not all prompt injections fire immediately; some are designed to unfold over time. Sophisticated attackers might split a malicious payload across multiple AI responses or hide it deep in a long streaming answer. This dynamic traffic can evade one-time content filters. An attack might lurk undetected or, for example, only reveal itself on the third exchange of a conversation.
- Direct Prompt Injection: Attackers craft malicious prompts that override system instructions, causing the AI to leak data, modify behavior, or execute unauthorized actions.
- Indirect Prompt Injection: Malicious instructions hidden in documents, emails, or web pages are processed by AI agents during legitimate tasks.
- Visual Prompt Injection: Attackers embed instructions in images using steganography or pixel manipulation that are invisible to humans but interpreted by AI vision systems.

The consequences can be severe: the browser's AI might reveal confidential data to an attacker, make unauthorized transactions, or sabotage workflows.

Autonomous Agent Misuse and Identity Risks - AI-native browsers with agentic capabilities can autonomously navigate websites, complete transactions, manage emails, and execute tasks without user intervention. This means AI browser assistants often request broad access to user data and actions. Agents move beyond the browser tab by invoking backend APIs, SaaS tools, or workflows outside the visible browser. This creates unprecedented risk through privilege escalation, credential abuse, and unauthorized workflow execution, and an attacker might now drive the AI to directly exfiltrate data or reconfigure accounts using the agent's high privileges.

Memory Poisoning - Attackers can inject malicious instructions directly into an LLM's persistent memory via Cross-Site Request Forgery (CSRF) attacks. The exploit works across sessions, browsers, and devices. In BYOD or mixed-use environments, memory persistence re-triggers risky behaviors even after reboot or browser change, expanding the blast radius beyond a single endpoint.

OAuth & API Exploitation Through AI Agents - AI browsers automatically accept OAuth permissions to complete tasks, which can lead to exfiltration of sensitive files (including drives shared by colleagues and customers), email impersonation for lateral movement, and browser-native ransomware.

AI Session Hijacking & Token Theft - Traditional session hijacking is amplified when AI systems store authentication tokens, conversation history, and sensitive context. Attackers stealing these tokens gain access to all AI-processed enterprise data, conversation history containing intellectual property, and persistent authenticated sessions across cloud services.

Shadow AI - Much like shadow IT with cloud apps, shadow AI refers to AI tools adopted by users without the IT department's approval. Employees use unsanctioned AI apps or browser extensions in the workplace, such as installing a popular AI writing assistant extension or linking their work M365 account to a third-party AI app. A malicious extension could request excessive permissions or inject harmful code. We already know browser extensions can bypass many security controls; adding AI just increases the threats. Malicious extensions morph into password managers, crypto wallets, and banking apps to steal sensitive information. Similarly, supply-chain risks exist if an AI app or its machine learning model is compromised – an attacker might manipulate the AI's outputs. BYO-agent patterns spread quietly; organizations often have no inventory of agents accessing sensitive data, leading to governance failures. Time-to-impact is compressed to minutes; exfiltration happens at machine speed with no lateral movement. Traditional network-based DLP architectures are largely blind to agentic behaviors.
Agent-to-Agent Exploitation - As AI agent marketplaces and internal agent fabrics become reality, browsers may host swarms of agents that communicate with each other. If one agent is compromised (for example, by a prompt injection or malicious design), it can embed hidden commands in the data it shares with another agent. This could lead to automated propagation of attacks: Agent A passes a poisoned summary to Agent B, which then unknowingly executes harmful actions. This supply-chain style attack among AI agents creates a new breed of threat, where malware is not file-based or even code-based; it is a malicious instruction that hops from one AI to the next.

Existing Threats Amplified by AI-Browsers

Sensitive Data Leakage and Data Exfiltration - AI is now the top data exfiltration channel, surpassing shadow SaaS and unmanaged file sharing. Users may input proprietary data into an AI chatbot or allow an AI assistant to access corporate emails and files for context. Consumer AI services often retain or even train on those prompts and data, creating a pipeline of confidential information flowing out of the organization without any oversight. Traditional DLP tools scan attachments and block uploads but miss the fastest-growing threat entirely—copy/paste into GenAI.

Insider Threat - Autonomous agents with authorized access to sensitive data and systems can misuse that access to harm organizations, whether intentionally or not.

- Accidental/Reckless Insiders: Careless insiders unintentionally expose confidential information through natural language prompts. AI assistants summarize internal content or pull insights from restricted sources.
- Malicious Insiders Empowered: AI guides insiders step-by-step on privilege escalation, system manipulation, monitoring evasion, or intelligence extraction. Non-technical employees can now exfiltrate data without touching a file by asking AI to summarize or transform sensitive information.
- Autonomous Agent Misuse: Agents chain tasks together, accessing systems outside intended scopes. Misconfigured systems allow agents to trigger workflows that expose sensitive data or weaken security controls.

Compliance & Data Residency Violations - AI browsers and AI-augmented browsers with sidebars and extensions create gaps in meeting the controls required by many regulations.

- AI-Sidebar Data Transmission: Sensitive user data, active web content, browsing history, and open tabs are sent to cloud-based AI backends by default. This creates regulatory exposure: GDPR violations through uncontrolled data transfers outside the EU, HIPAA violations when healthcare data flows to AI platforms, and data residency requirements breached when AI processes data in unapproved jurisdictions.
- Lack of Transparency: Users are unaware that anything they view could be sent to an AI service backend. AI browser extensions harvest personal data with minimal safeguards, potentially violating FERPA and HIPAA by collecting health and student data, and third-party data transmission and storage create exposure through data breaches at AI vendors.
- Accountability Gap: When AI agents perform unauthorized actions, forensic trails point to legitimate user sessions. Organizations cannot distinguish between user actions and agent-executed tasks.

AI-Enhanced Social Engineering - Attackers are also leveraging AI: generating personalized, context-aware phishing at scale, creating convincing deepfake vishing and videos, and producing grammatically perfect, culturally appropriate phishing emails.
Traditional red flags such as poor grammar and generic greetings no longer help, making it harder for users to distinguish legitimate from fraudulent content and increasing the success rate of social engineering. While these tactics exploit no vulnerability in the browser's code, they exploit human trust in what users see and hear via the browser, supercharging old attacks with AI. AI agents exhibit poorer security awareness than average employees, making them vulnerable to social engineering via trusted platforms.

These risks are not theoretical; they are already being exploited. But while the threat landscape has evolved, so too have the defenses. The same Zero Trust principles that transformed network and identity security can now be extended to the browser. Microsoft's integrated security stack spanning Edge, Entra, Defender, Purview, and more offers a layered, adaptive defense tailored for the AI-powered browser era. Let us explore how these controls work together to secure the modern enterprise browser.

Building a Defense-in-Depth Strategy for AI-Browsers

As generative AI becomes embedded in everyday browser workflows, organizations must extend Zero Trust principles — verify explicitly, grant least privilege, and assume breach — to this new attack surface. Microsoft provides an integrated security stack that maps directly to these principles across four key layers: the browser itself, identity and access, data protection, and threat detection. Together, they form a comprehensive defense against the unique risks AI introduces to enterprise browsing.

Secure the Browser — Microsoft Edge for Business

Microsoft Edge for Business is the foundation of any AI-era browser security strategy. Its automatic work/personal profile separation ensures corporate credentials and data remain isolated from consumer AI tools like ChatGPT. Admins can enforce policy-based profile switching to route visits to unsanctioned AI sites through the personal profile, effectively neutralizing shadow AI risk at the browser level. Edge's Enhanced Security Mode disables JIT JavaScript compilation and activates hardware-enforced stack protections, dramatically reducing exposure to memory corruption exploits triggered by untrusted or AI-generated content. Combined with sandboxing, site isolation, typosquatting protection, and enforced HTTPS, this reduces the browser attack surface. Built-in Defender SmartScreen provides continuously updated threat intelligence to block phishing pages, malware downloads, and scam sites in real time. Microsoft Defender for Endpoint (MDE) inventories extensions and flags high-risk add-ons. Through deep integration with Microsoft Intune, admins can remotely enforce browser policies, configure allow-lists that block unapproved AI-themed plugins, and enable M365 Copilot chat in the sidebar, so employees benefit from AI productivity while all queries remain protected.

Strengthen Identity and Access — Microsoft Entra ID

With AI services deeply integrated into browser workflows, identity controls provide the first line of defense. Enterprises must enforce strict trust boundaries and validation for agents and inter-agent communications as well as extend security beyond the browser UI. Entra Agent ID can be leveraged to secure AI agents that may power browser-based AI experiences. Microsoft Entra Conditional Access can enforce explicit verification before granting access to any app — requiring compliant devices, MFA, trusted networks, and Edge for Business.
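As a hedged illustration, such a Conditional Access policy can be created through the Microsoft Graph conditionalAccess API. The endpoint and policy schema below are real Graph constructs, but the application ID placeholder and token acquisition are assumptions for the sketch.

```python
# Hedged sketch: create a Conditional Access policy via Microsoft Graph that
# requires MFA plus a compliant device for a designated AI application.
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
TOKEN = "<token with Policy.ReadWrite.ConditionalAccess>"  # e.g., acquired via MSAL

policy = {
    "displayName": "Require compliant device + MFA for AI apps",
    # Start in report-only mode to observe impact before enforcing.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        # Placeholder: the client ID(s) of the AI app(s) to gate.
        "applications": {"includeApplications": ["<ai-app-client-id>"]},
    },
    "grantControls": {
        "operator": "AND",
        "builtInControls": ["mfa", "compliantDevice"],
    },
}

resp = requests.post(GRAPH_URL, json=policy,
                     headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```

Running the policy in report-only mode first lets you verify that managed devices and AI app sign-ins behave as expected before switching the state to enabled.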
Conditional Access can also distinguish between session types: unmanaged devices are directed through a cloud-brokered session using Defender for Cloud Apps, while managed devices receive enhanced access. Token binding in Entra ID prevents stolen session tokens from being replayed elsewhere, guarding against OAuth token theft by malicious extensions or malware. App consent can be configured to require admin approval before any third-party AI app receives high-risk permissions, preventing users from inadvertently granting ChatGPT or similar tools access to SharePoint files or emails. Defender for Cloud Apps (MDA) can watch for abnormal usage of SaaS APIs by AI accounts, and App Governance provides a consolidated view of all authorized apps, their permissions, and their usage patterns, automatically alerting on or suspending apps that exhibit anomalous behavior, such as mass data downloads via the Graph API. Continuous Access Evaluation (CAE) ensures that even after a token is issued, critical events like a password change, account compromise, or a Defender risk signal can invalidate sessions in near real time, minimizing the window of abuse for a hijacked AI session.

At the network edge, Microsoft Entra Internet Access adds a cloud-based secure web gateway featuring Prompt Shield, which operates inline for the entire session, continuously scanning partial responses and multi-turn conversations. It is not limited to scanning just the first user prompt or the initial response: by maintaining conversational context, it can catch a staged attack even if malicious instructions or sensitive data only appear mid-session. This kind of streaming-aware inspection is essential for AI browser traffic, where static security models fall short.

Protect Data Everywhere — Microsoft Purview

AI introduces novel data egress paths that traditional controls were not designed to handle. Microsoft Purview's Endpoint DLP extends directly into Edge for Business, enabling policies that block sensitive data (PII, source code, financial records) from being pasted into AI chatbots or uploaded to unsanctioned services. Policies can be configured granularly; for example, approved corporate AI apps can receive certain data while all others are blocked. For unmanaged or BYOD devices where endpoint agents cannot be installed, Defender for Cloud Apps provides session-based DLP through Conditional Access App Control, enforcing monitor-only or block-downloads modes and applying watermarking or copy/paste restrictions. This layered approach, endpoint DLP for managed devices and cloud DLP for unmanaged sessions, ensures data protection policies apply regardless of how or where employees access corporate data.

Microsoft Purview sensitivity labels provide persistent classification and encryption that travel with the data, and Microsoft solutions like Copilot, Foundry, and AI search inherit and propagate these labels to AI-generated outputs. Beyond blocking, Purview Insider Risk Management can detect behavioral patterns such as large copy operations from internal sources, after-hours data movement, and repeated uploads to unknown web services, feeding these signals into automated risk scoring and investigation workflows. Integrated with Microsoft Sentinel, these DLP alerts enable security teams to detect and respond to potential AI-driven leakage.
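Conceptually, an endpoint DLP rule for a paste event boils down to classifying the content and checking the destination. The Python sketch below is a simplified stand-in for that decision logic, not the Purview API: the two regex patterns and the approved-domain set are illustrative assumptions, whereas real policies draw on Purview's large library of built-in sensitive information types.

```python
import re

# Illustrative sensitive-info patterns; Purview ships a large library of
# built-in sensitive information types, these two exist only for the sketch.
CREDIT_CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")
US_SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

# Hypothetical allow-list: corporate-approved AI endpoints that may
# receive classified content under policy.
APPROVED_AI_DOMAINS = {"copilot.contoso.com"}

def evaluate_paste(content: str, destination_domain: str) -> str:
    """Mimic the decision an endpoint DLP rule makes for a paste event."""
    contains_sensitive = bool(CREDIT_CARD.search(content) or US_SSN.search(content))
    if not contains_sensitive:
        return "allow"
    if destination_domain in APPROVED_AI_DOMAINS:
        return "allow-with-audit"   # log the event as an insider-risk signal
    return "block"                  # show a policy tip and block the paste

print(evaluate_paste("SSN 123-45-6789", "chat.example-ai.com"))  # -> block
```

Note the middle outcome: granular policies are rarely all-or-nothing, and the "allow with audit" path is what feeds the behavioral signals Insider Risk Management scores.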
Detect and Respond — Defender and Sentinel

Microsoft Defender for Endpoint (MDE) continuously monitors device telemetry for suspicious browser behavior. If a prompt injection causes an AI agent to spawn a command shell or run PowerShell, MDE's Attack Surface Reduction (ASR) rules detect and terminate the child process immediately, and Network Protection adds a backstop beyond the browser's own filters. Defender for Cloud Apps adds a cloud-centric threat perspective, detecting anomalous OAuth app behavior, such as a rarely used AI connector suddenly downloading mass data, and correlating it with identity and endpoint signals to surface truly risky activity.

Microsoft Defender for AI (part of the Defender for Cloud suite) plays a key role in mitigating uniquely AI-native threats: prompt injections, agent-to-agent interactions, and multi-turn conversation exploits. It adds an extra layer of security at the AI model and application level by detecting malicious or risky AI activity in real time. Defender for AI detects jailbreaks, prompt injection attempts, and techniques like ASCII smuggling, blocks the AI from executing them, and generates a security alert for security teams. This directly counters the risk of an AI-powered browser being tricked into violating its constraints. Defender for AI also monitors the content flowing in and out of the AI model for signs of sensitive data exposure, and leverages Microsoft Threat Intelligence to spot known malicious indicators inside AI communications.

Microsoft Sentinel ties it all together as the SIEM/SOAR backbone, ingesting signals from Entra ID, Edge, Defender, Purview, and MDA to detect advanced multi-stage attack patterns. For example, Sentinel can correlate repeated SmartScreen blocks, a suspicious browser-spawned script, and a mass SharePoint download into a single high-fidelity incident, enabling a SOC analyst to trigger an automated response playbook that isolates the device, kills risky processes, and disables the compromised account.
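To show what one slice of such a correlation can look like as a hunting query, here is a hedged Python sketch that runs a KQL query against a Log Analytics workspace using the azure-monitor-query SDK. The table and column names assume the Defender for Endpoint connector is feeding the workspace, and the process names and 10-minute window are illustrative; adjust them to the schema in your own environment before use.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<sentinel workspace id>"  # placeholder

# Illustrative hunting query: browsers that triggered a SmartScreen
# warning and then spawned a script interpreter within 10 minutes.
QUERY = r"""
DeviceEvents
| where ActionType == "SmartScreenUrlWarning"
| project DeviceId, WarnTime = Timestamp, RemoteUrl
| join kind=inner (
    DeviceProcessEvents
    | where FileName in~ ("powershell.exe", "wscript.exe")
    | where InitiatingProcessFileName in~ ("msedge.exe", "chrome.exe")
    | project DeviceId, SpawnTime = Timestamp, FileName
  ) on DeviceId
| where SpawnTime between (WarnTime .. WarnTime + 10m)
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))
for table in result.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```

In a real deployment this logic would live in a Sentinel analytics rule so the match raises an incident and can trigger a response playbook automatically, rather than being polled from a script.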
In the AI era, the browser is no longer just a window to the web; it is a security control plane. Microsoft's security portfolio has evolved for this purpose: from Edge's built-in hardening and Entra's identity controls, to network-level AI threat interception and adaptive DLP, to cross-domain detection in Sentinel, each layer plays a role in addressing these emerging threats. The evolution from cloud to SaaS to AI has brought the humble browser to the center of both our productivity and our adversaries' playbooks. The path forward is clear: treat browser security as a first-class citizen in your Zero Trust strategy. By combining strong policies, defense-in-depth controls, and forward-looking investments in enterprise and AI browser security, organizations can turn the browser from a liability into a trusted productivity engine.

New Microsoft Purview innovations for Fabric to safely accelerate your AI transformation

As organizations adopt AI, security and governance remain core primitives for safe AI transformation and acceleration. After all, data leaders know the maxim: your AI is only as good as your data. Organizations remain skeptical about AI transformation due to concerns about sensitive data oversharing and poor data quality. In fact, 86% of organizations lack visibility into AI data flows, operating in darkness about what information employees share with AI systems [1]. Compounding this challenge, about 67% of executives are uncomfortable using data for AI due to quality concerns [2]. Organizations must solve both problems, data oversharing and poor data quality, before they can use AI safely. Microsoft Purview offers a modern, unified approach to help organizations secure and govern data across their entire data estate, with best-in-class integrations with M365, Microsoft Fabric, and Azure, streamlining oversight and reducing complexity across the estate.

At FabCon Atlanta, we're announcing new Microsoft Purview innovations for Fabric to help seamlessly secure and confidently activate your data for AI transformation. These updates span data security and data governance, enabling Fabric users to both:

Discover risks and prevent data oversharing in Fabric
Improve governance processes and data quality across their data estate

1. Discover risks and prevent data oversharing in Fabric

As data volume increases with AI usage, Microsoft Purview secures your data with capabilities such as Information Protection, Data Loss Prevention (DLP), Insider Risk Management (IRM), and Data Security Posture Management (DSPM). These capabilities work together to secure data throughout its lifecycle, now specifically for your Fabric data estate. Here are a few new Purview innovations for your Fabric estate:

Microsoft Purview DLP policies to prevent data leakage for Fabric Warehouse and KQL/SQL DBs

Now generally available, Microsoft Purview DLP policies allow Fabric admins to prevent data oversharing in Fabric through policy tips that trigger when sensitive data is detected in assets uploaded to Warehouses. Additionally, in preview, Purview DLP enables Fabric admins to restrict access to assets containing sensitive data in KQL/SQL DBs and Fabric Warehouses, limiting access to just asset owners and allowed collaborators. These innovations expand the depth and breadth of existing DLP policies to ensure sensitive data in Fabric is protected.

Figure 1. DLP restrict access preventing data oversharing of customer information stored in a KQL database.

Microsoft Purview Insider Risk Management (IRM) indicators for Lakehouse, IRM data theft quick policy for Fabric, and IRM pay-as-you-go usage report for Fabric

Microsoft Purview Insider Risk Management is now generally available for Microsoft Fabric, extending its risk-detection capabilities to Fabric lakehouses (in addition to Power BI, which is supported today) with ready-to-use risk indicators for risky user activities, such as sharing data from a lakehouse with people outside the organization. Additionally, the IRM data theft policy is now generally available, letting security admins create a policy that detects Fabric data exfiltration, such as exporting Power BI reports.
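Under the hood, indicators like these compare a user's current activity against their own baseline. The following Python sketch is a conceptual stand-in for that idea, not how IRM is implemented: the user names, export counts, and three-sigma threshold are all invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical daily export counts per user from a Fabric lakehouse audit
# feed; in production these signals come from IRM's built-in indicators.
baseline = {"ada@contoso.com": [2, 1, 3, 2, 2], "cam@contoso.com": [0, 1, 0, 1, 1]}
today = {"ada@contoso.com": 3, "cam@contoso.com": 14}

def flag_anomalies(threshold_sigmas: float = 3.0) -> list[str]:
    """Flag users whose export count today sits far above their baseline."""
    flagged = []
    for user, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        if today[user] > mu + threshold_sigmas * max(sigma, 1.0):
            flagged.append(user)
    return flagged

print(flag_anomalies())  # -> ['cam@contoso.com']
```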
Also, organizations now have visibility into how much they are billed with the IRM pay-as-you-go usage report for Fabric, an easy-to-use dashboard for tracking consumption and making costs predictable.

Figure 2. IRM identifying risky user behavior when handling data in a Fabric lakehouse.

Figure 3. Security admins can create a data theft policy to detect Fabric data exfiltration.

Figure 4. Security admins can check pay-as-you-go usage (processing units) across workloads and activities, such as the downgrading of sensitivity labels on a lakehouse, through the usage report.

Microsoft Purview for all Fabric Copilots and Agents

Microsoft Purview currently provides capabilities in preview for all Copilots and Agents in Fabric. Organizations can:

Discover data risks, such as sensitive data in user prompts and responses, and receive recommended actions to reduce those risks.
Detect and remediate oversharing risks with Data Risk Assessments in DSPM, which identify potentially overshared, unprotected, or sensitive Fabric assets, giving teams clear visibility into where data exposure exists and enabling targeted actions (like applying labels or policies) to reduce risk and ensure Fabric data is AI-ready and governed by design.
Identify risky AI usage with Microsoft Purview Insider Risk Management, for example investigating an inadvertent user who has neglected security best practices and shared sensitive data with AI.
Govern AI usage with Microsoft Purview Audit, Microsoft Purview eDiscovery, retention policies, and non-compliant usage detection.

Figure 5. Purview DSPM provides admins with the ability to discover data risks, such as a user's attempt to obtain historical data within a data agent in the Data Science workload in Fabric, and then provides actions to resolve the risk.

Now that we've covered how Purview helps secure Fabric data and AI, the next focus is ensuring Fabric users can use that data responsibly.

2. Improve governance processes and data quality across their data estate

Once an organization's data is secured for AI, the next challenge is ensuring consumers can easily find and trust the data needed for AI. This is where the Purview Unified Catalog comes in, serving as the foundation for enterprise data governance. Estate-wide data discovery provides a holistic view of the data landscape, helping prevent valuable data from being underutilized. Built-in data quality tools enable teams to measure, monitor, and remediate issues such as incomplete records, inconsistencies, and redundancies, ensuring decisions and AI outcomes are based on trusted, reliable data.
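To ground what those quality dimensions mean, here is a minimal pandas sketch computing completeness, uniqueness, and validity over a toy table. The column names, rules, and data are illustrative assumptions; Purview's data quality scans apply richer, rule-driven scoring than this.

```python
import pandas as pd

# Toy customer table standing in for a Fabric lakehouse asset.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4, 5],
    "email": ["a@x.com", None, "b@x.com", "c@x.com", None],
    "country": ["US", "US", "US", "DE", "FR"],
})

# Illustrative versions of common data quality dimensions.
completeness = 1 - df["email"].isna().mean()          # non-null share
uniqueness = df["customer_id"].nunique() / len(df)    # duplicate check
validity = df["country"].isin({"US", "DE", "FR"}).mean()

print(f"completeness={completeness:.0%} uniqueness={uniqueness:.0%} validity={validity:.0%}")
```

Scores like these, tracked over time, are what let stewards decide whether an asset is trustworthy enough to feed AI workloads.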
Purview provides additional governance capabilities for all data consumers and governance teams, supplementing the Fabric OneLake catalog for those who use it. Here are a few new innovations within the Purview Unified Catalog:

Publication workflows for data products and glossary terms

Now generally available, data owners can leverage Workflows in the Purview Unified Catalog to manage how data products and glossary terms are published. Customizable workflows help governance teams build a well-curated catalog faster by ensuring that data products and glossary terms are published and governed responsibly. Data consumers can request access to data products with the assurance that governance teams hold the data to a defined standard.

Figure 6. Customizing a Workflow for publishing a glossary term in your catalog.

Data quality for ungoverned assets in the Unified Catalog, including Fabric data

In the Unified Catalog, data quality for ungoverned data assets lets organizations run data quality scans on assets, including Fabric assets, without first linking them to data products. This approach lets data quality stewards run scans at greater speed and scale, helping their organizations democratize high-quality data for AI use cases.

Figure 7. Running data quality on data assets without them being associated with a data product.

Looking Forward

As organizations accelerate their AI ambitions, data security and governance become essential. Microsoft Purview and Microsoft Fabric deliver an integrated, unified foundation that enables organizations to innovate with confidence, ensuring data is protected, governed, and trusted for responsible AI activation. We're committed to helping you stay ahead of evolving challenges and opportunities as you unlock more value from your data. Explore these new capabilities and join us on the journey toward a more secure, governed, and AI-ready data future.

[1] 2025 AI Security Gap: 83% of Organizations Flying Blind
[2] The Importance Of Data Quality: Metrics That Drive Business Success