Build Sensitivity Label‑Aware, Secure RAG with Azure AI Search and Purview
Introduction: Why This Matters Now

Most developers building solutions with Azure AI Search haven't had to think about Microsoft Purview sensitivity labels before. Sensitivity labels are applied at the source (SharePoint, OneLake, OneDrive), and they classify and protect documents through encryption, access rules, and usage rights. As a result, developers often don't see these labels directly, and many are unaware that labeled or encrypted documents behave differently when used in AI and search workloads.

This matters because RAG and Copilot-style applications rely on complete, context-rich data to return accurate answers. If labeled content isn't accessible to the indexing pipeline, or if Azure AI Search isn't configured to interpret label metadata, your retrieval layer may unintentionally miss protected documents, leading to incomplete grounding, reduced answer quality, or inconsistent user experiences. For context, Copilot-style apps are context-aware AI applications that combine a large language model (LLM) with enterprise data to help users ask questions, generate content, and complete tasks inside an existing workflow.

Historically, search experiences haven't fully honored Purview label protections. Azure AI Search can enforce document-level permissions in sources such as SharePoint in Microsoft 365, ADLS Gen2, and Azure Blob Storage (when configured), but ACLs only answer who can see a document; sensitivity labels define how the content must be handled once accessed. Enterprise security and compliance teams also expect label-based access enforcement when it is configured. If Purview integration is not enabled, documents with certain label protections, especially encrypted ones, may simply not be indexable, which shrinks the corpus available to Azure AI Search.

This blog explains how Azure AI Search now integrates with Purview sensitivity labels, why this configuration is increasingly important for secure and complete enterprise RAG, and how to enable it in your environment.

What Are Sensitivity Labels & Why They Impact AI Search

Microsoft Purview sensitivity labels classify and protect organizational data by applying encryption, access controls, and visual markings across documents, emails, and collaboration spaces. When labels are applied, Microsoft Purview governs, among other functionality:

- Who can read a document
- Whether it is encrypted
- What usage rights apply
- How the data must be treated

[Figure: Purview sensitivity labels and Azure AI Search]

Developers often assume these label-based enforcements "just work." But unless Azure AI Search is configured to extract and evaluate label metadata, AI systems cannot retrieve protected content or enforce the behavior expected of data carrying those labels, leading to incomplete and sometimes insecure RAG answers.
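To ground the indexing side, here is a minimal sketch of an index schema that reserves filterable fields for label metadata, using the azure-search-documents Python SDK. The label field names (`sensitivity_label_id`, `sensitivity_label_name`) are assumptions for illustration only; the preview integration defines its own schema, so consult the documentation linked at the end of this article for the exact shape.

```python
# Minimal sketch of an index that stores sensitivity label metadata
# alongside document content. The label fields below are illustrative
# assumptions -- see the preview documentation for the actual schema
# produced by the Purview label integration.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchField,
    SearchFieldDataType,
    SearchIndex,
    SearchableField,
    SimpleField,
)

endpoint = "https://<your-service>.search.windows.net"  # placeholder
index_client = SearchIndexClient(endpoint, AzureKeyCredential("<admin-key>"))

fields = [
    SimpleField(name="id", type=SearchFieldDataType.String, key=True),
    SearchableField(name="content", type=SearchFieldDataType.String),
    # Assumed fields for label metadata captured at indexing time:
    SimpleField(name="sensitivity_label_id", type=SearchFieldDataType.String,
                filterable=True),
    SimpleField(name="sensitivity_label_name", type=SearchFieldDataType.String,
                filterable=True, facetable=True),
]

index_client.create_or_update_index(SearchIndex(name="labeled-docs", fields=fields))
```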
Azure AI Search now supports the following actions as part of sensitivity label support in preview:

- Sensitivity label ingestion at indexing time
- Label-based document-level access control at query time

What the Integration Enables (And What Happens If You Don't Turn It On)

When Purview labels are integrated with Azure AI Search:

- Labeled documents are successfully indexed
- Label metadata is stored alongside the document
- Query-time filters enforce Purview EXTRACT rights (a trimming sketch appears at the end of this article)
- RAG apps, copilots, and agents return only what a user can access
- No risk of silently missing labeled context in retrieval
- Unified Purview governance across Microsoft 365 documents and Azure AI Search

If you don't enable it:

- Documents with configured label protections won't index, leaving incomplete data available to Azure AI Search and reducing answer quality
- Search results won't enforce label-based protections, impacting the user experience
- End users won't have visibility into the labels applied to their documents as required by compliance policies, also impacting the user experience

Sources Supported

These are the data sources where Purview labels are supported in Azure AI Search today:

- Azure Blob Storage
- ADLS Gen2
- SharePoint (Preview)
- OneLake

End-to-end flow

[Figure: end-to-end flow of label-aware indexing and query-time enforcement]

Next steps

Follow the documentation and resources below to enable Purview sensitivity labels on your Azure AI Search indexes:

- Documentation: Indexing sensitivity labels in Azure AI Search
- Documentation: Query-Time Microsoft Purview Sensitivity Label Enforcement in Azure AI Search
- Demo app repo: https://aka.ms/Ignite25/aisearch-purview-sensitivity-labels-repo
- Demo video: Sensitivity labels in Azure AI Search demo
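As referenced above, here is a minimal security-trimming sketch for the query side, again using the azure-search-documents Python SDK. Treat it as illustrative only: the field name `sensitivity_label_id` and the idea of resolving the caller's permitted label IDs up front are assumptions, and with the preview integration configured, the service can enforce Purview EXTRACT rights on your behalf rather than relying on a hand-built filter.

```python
# Illustrative security-trimming query: restrict results to sensitivity
# labels the caller may access. The field name "sensitivity_label_id"
# and the up-front resolution of permitted labels are assumptions for
# this sketch; the preview integration can enforce Purview EXTRACT
# rights service-side instead.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    "https://<your-service>.search.windows.net",  # placeholder endpoint
    index_name="labeled-docs",
    credential=AzureKeyCredential("<query-key>"),
)

# Label IDs this user is entitled to, resolved elsewhere (e.g., from Purview).
permitted_labels = ["general-label-id", "confidential-label-id"]
label_filter = f"search.in(sensitivity_label_id, '{','.join(permitted_labels)}')"

results = search_client.search(search_text="quarterly revenue", filter=label_filter)
for doc in results:
    print(doc["id"], doc.get("sensitivity_label_name"))
```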
Behind the Build with RSA: Identity Resilience in the Age of AI

Behind the Build is an ongoing series spotlighting standout Microsoft partner collaborations. Each edition dives into the technical and strategic decisions that shape real-world integrations, highlighting engineering excellence, innovation, and the shared customer value created through partnership.

RSA and Microsoft share a long, multiyear partnership shaped not by a single product or integration, but by shared customers grappling with some of today's most complex security challenges, from cloud migration and identity sprawl to AI-driven threats. In this Behind the Build blog, we feature Dave Taku, RSA's Vice President of Product Management and User Experience, to dive deeper into how that collaboration works at a technical level, how RSA and Microsoft engineers partner to solve real customer problems, and how recent work spanning Microsoft Entra, Microsoft Sentinel, and AI-driven security capabilities is shaping what comes next.

Meet Dave Taku

Dave Taku has spent nearly 25 years in cybersecurity, working across domains such as telecommunications and network security. Most of that time has been focused squarely on identity, in areas like authentication, access management, and governance and lifecycle in particular. He has been with RSA for two decades. When asked what makes a great VP of product, Dave describes his role as one centered on enablement. "My job is really to provide clarity and empower the team, to help them be successful." That team-oriented mindset carries through RSA's broader approach to engineering and partnerships.

A Customer-Driven Partnership with Microsoft

RSA's collaboration with Microsoft has largely been shaped by shared customers, many of them large, complex enterprises navigating the shift from on-premises environments to cloud-first architectures. "These efforts are almost always customer initiated," Dave notes. "Customers want us working together to make their journey successful." That alignment has led to a wide range of joint initiatives over the years, spanning identity control planes, hybrid and multicloud scenarios, and more recently, deeper analytics and AI-driven security workflows.

Identity as the Foundation

Identity sits at the center of RSA's partnership with Microsoft, particularly through integrations with Microsoft Entra. While organizations increasingly adopt Entra for cloud identity, many still operate complex hybrid estates and highly regulated environments. RSA helps in those mixed-use cases by extending identity controls beyond a single platform, providing behavioral analytics and risk-based authentication that complement Entra's native features. "At RSA, we're laser focused on answering two questions for our customers," Dave explains. "Who is this user (can we be absolutely sure)? And is their access appropriate from a zero-trust perspective?"

A standout example of the collaboration is RSA's early adoption of External Authentication Methods (EAM), where RSA served as a day-one launch partner. EAM built on prior generations of integration between RSA and Microsoft identity technologies and has been critical for customers migrating sensitive workloads to the cloud without disrupting existing security postures. At the end of the day, it is customers who drive this kind of innovation. Dave points to large, global financial institutions as clear bellwethers.
As these organizations shift toward cloud-first models and embrace Azure and SaaS, they face the challenge of modernizing identity without disrupting environments long secured by RSA or introducing new risks during migration. EAM has been critical in enabling that transition, allowing established RSA authentication and policy controls to carry forward into Microsoft Entra so customers can adopt cloud services while preserving the security models and operational consistency they depend on.

From Identity Signals to Agentic AI with Sentinel

More recently, RSA and Microsoft have collaborated on deeper integrations with Microsoft Sentinel, including work with the Sentinel data lake and Security Copilot. These efforts marked the first co-engineered agentic solution from RSA and Microsoft. RSA sees AI influencing identity security across several fronts: improving insights and automation, defending against AI-powered attacks, and securing non-human identities as autonomous agents become more common in enterprise environments.

RSA's approach starts with administrative telemetry from RSA ID Plus. Those events are ingested through a Sentinel connector and stored in the Microsoft Sentinel data lake, which enables cost-effective long-term retention of identity telemetry and makes it available for advanced analytics. Security Copilot agents then assess this data to surface anomalous or risky administrative behavior. "Admin accounts are increasingly a target," says Dave. "If you don't know when an admin is behaving unusually, you're already too late."

This integration enables security teams to analyze identity-related activity alongside broader organizational telemetry, helping analysts detect compromised credentials earlier and respond faster. "Human operators can't keep up anymore," Dave says. "As identities become more dynamic and more automated, we need AI-driven assistance to maintain zero trust at scale."

Looking Ahead

As RSA and Microsoft look ahead, their collaboration is increasingly shaped by how identity security must evolve in an AI-driven world. Dave outlines three core areas where both teams see significant opportunity for continued innovation. AI will play a growing role in helping organizations make sense of increasingly fluid identity environments, enabling better insight, better decision making, and, over time, more autonomous responses as manual oversight becomes less viable. At the same time, the rise of AI-powered attacks is placing new strain on traditional identity trust models, pushing the industry toward more adaptive, analytics-driven signals. Finally, as enterprises adopt AI agents that act independently or on behalf of users, identity security is expanding beyond humans altogether, making the protection of non-human identities an essential frontier for the future of cybersecurity.

Programs like the Microsoft Intelligent Security Association (MISA) help enable this kind of deep technical collaboration, providing a framework for RSA and Microsoft to align on emerging scenarios, validate integrations, and bring new capabilities to market faster. "It's been a long journey together," Dave reflects. "And we're just getting started."

AI Security in Azure with Microsoft Defender for Cloud: Learn the How, Join the Session
As organizations accelerate AI adoption, securing AI workloads has become a top priority. Unlike traditional cloud applications, AI systems introduce new risks, such as prompt injection, data leakage, and model misuse, that require a more integrated approach to security and governance. To help developers and security teams understand and address these challenges, we are hosting Azure Decoded: Kickstart AI Security with Microsoft Defender for Cloud, a live session on March 18th at 12 PM PST focused on securing AI workloads built with Microsoft Foundry and Azure AI services.

From AI Security Concepts to Platform Protections

A strong foundation for this session starts with the Microsoft Learn module "Understand how Microsoft Defender for Cloud supports AI security and governance in Azure." This training introduces how AI workloads are structured in Azure and why they require a different security model than traditional applications. In the module, learners explore:

- The layers that make up AI workloads in Azure
- Security risks unique to AI, including prompt injection, data leakage, and model misuse
- How Microsoft Foundry provides guardrails and observability for AI models
- How Microsoft Defender for Cloud works with Microsoft Purview and Microsoft Entra ID to deliver a unified, defense-in-depth security and governance strategy for AI

Together, these services help organizations protect model inputs and outputs, maintain visibility, and enforce governance across AI workloads in Azure.

Bringing AI Security Architecture to Life with Azure Decoded

The Azure Decoded: Kickstart AI Security with Microsoft Defender for Cloud session on March 18th builds on these concepts by connecting them to real-world architecture and platform decisions. Attendees learn how Microsoft Defender for Cloud fits into a broader AI security strategy and how Microsoft Foundry helps apply guardrails, visibility, and governance across AI workloads.

This session is designed for:

- Developers building AI applications and agents on Azure
- Security engineers responsible for protecting AI workloads
- Cloud architects designing enterprise-ready AI solutions

By combining conceptual understanding with platform-level security discussions, the session helps teams design AI solutions that are not only innovative but also secure, governed, and trustworthy. Be sure to register so you do not miss out.

Start Your AI Security Journey

AI security is evolving quickly, and it requires both architectural understanding and practical platform knowledge. Start by exploring how Microsoft Defender for Cloud supports AI security and governance in Azure, then join the Azure Decoded session to see how these principles come together in real-world AI workloads.
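If you want to take stock of your own environment before the session, one quick check is which Defender for Cloud plans are enabled on a subscription. Below is a hedged sketch using the azure-mgmt-security Python SDK; note that the pricings operation signatures vary across SDK and API versions, and the exact plan name covering AI workloads (shown here simply as whatever appears in the list) should be verified against current documentation.

```python
# Sketch: list Defender for Cloud plans and their tiers on a subscription.
# "Standard" means a plan is enabled; "Free" means it is not. The exact
# pricings.list() signature varies by azure-mgmt-security version, so
# verify this against your installed SDK before relying on it.
from azure.identity import DefaultAzureCredential
from azure.mgmt.security import SecurityCenter

subscription_id = "<subscription-id>"  # placeholder
client = SecurityCenter(DefaultAzureCredential(), subscription_id)

for plan in client.pricings.list().value:
    status = "enabled" if plan.pricing_tier == "Standard" else "free tier"
    print(f"{plan.name}: {status}")
```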