governance
14 TopicsSensitivity Auto-labelling via Document Property
Why is this needed? Sensitivity labels are generally relevant within an organisation only. If a file is labelled within one environment and then moved to another environment, sensitivity label content markings may be visible, but by default, the applied sensitivity label will not be understood. This can lead to scenarios where information that has been generated externally is not adequately protected. My favourite analogy for these scenarios is to consider the parallels between receiving sensitive information and unpacking groceries. When unpacking groceries, you might sit your grocery bag on a counter or on the floor next to the pantry. You’ll likely then unpack each item, take a look at it and then decide where to place it. Without looking at an item to determine its correct location, you might place it in the wrong location. Porridge might be safe from the kids on the bottom shelf. If you place items that need to be protected, such as chocolate, on the bottom shelf, it’s not likely to last very long. So, I affectionately refer to information that hasn’t been evaluated as ‘porridge’, as until it has been checked, it will end up on the bottom shelf of the pantry where it is quite accessible. Label-based security controls, such as Data Loss Prevention (DLP) policies using conditions of ‘content contains sensitivity label’ will not apply to these items. To ensure the security of any contained sensitive information, we should look for potential clues to its sensitivity and then utilize these clues to ensure that the contained information is adequately protected - We take a closer look at the ‘porridge’, determine whether it’s an item that needs protection and if so, move it to a higher shelf in the pantry so that it’s out of reach for the kids. Effective use of Purview revolves around the use of ‘know your data’ strategies. We should be using as many methods as possible to try to determine the sensitivity of items. This can include the use of Sensitive Information Types (SITs) containing keyword or pattern-based classifiers, trainable classifiers, Exact Data Match, Document fingerprinting, etc. Matching items via SITs present in the items content can be problematic due to false positives. Keywords like ‘Sensitive’ or ‘Protected’ may be mentioned out of context, such as when referring to a classification or an environment. When classifications have been stamped via a property, it allows us to match via context rather than content. We don’t need to guess at an item’s sensitivity if another system has already established what the item’s classification is. These methods are much less prone to false positives. Why isn’t everyone doing this? Document properties are often not considered in Purview deployments. SharePoint metadata management seems to be a dying artform and most compliance or security resources completing Purview configurations don’t have this skill set. There’s also a lack of understanding of the relevance of checking for item properties. Microsoft haven’t helped as the documentation in this space is somewhat lacking and needs to be unpicked via some aligning DLP guidance (Create a DLP policy to protect documents with FCI or other properties). Many of these configurations will also be tied to regional requirements. Document properties being used by systems where I’m from, in Australia, will likely be very different to those used in other parts of the world. In the following sections, we’ll take a look at applicable use cases and walk through how to enable these configurations. Scenarios for use Labelling via document property isn’t for everyone. If your organisation is new to classification or you don’t have external partners that you collaborate with at higher sensitivity levels, then this likely isn’t for you. For those that collaborate heavily and have a shared classification framework, as is often seen across government, this is a must! This approach will also be highly relevant to multi-tenant organisations or conglomerates where information is regularly shared between environments. The following scenarios are examples of where this configuration will be relevant: 1. Migrating from 3 rd party classification tools If an item has been previously stamped by a 3 rd party classification tool, then evaluating its applied document properties will provide a clear picture of its security classification. These properties can then be used in service-based auto-labelling policies to effectively transition items from 3 rd party tools to Microsoft Purview sensitivity labels. As labels are applied to items, they will be brought into scope of label-based controls. 2. Detecting data spill Data spill is a term that is used to define situations where information that is of a higher than permitted security classification land in an environment. Consider a Microsoft 365 tenant that is approved for the storage of Official information but Top Secret files are uploaded to it. Document properties that align with higher than permitted classifications provide us with an almost guaranteed method of identifying spilled items. Pairing this document property with an auto-labelling policy allows for the application of encryption to lock unauthorized users out of the items. Tools like Content Explorer and eDiscovery can then be used to easily perform cleanup activities. If using document properties and auto-labelling for this purpose, keep in mind that you’ll need to create sensitivity labels for higher than permitted classifications in order to catch spilled items. These labels won’t impact usability as you won’t publish them to users. You will, however, need to publish them to a single user or break glass account so that they’re not ignored by auto-labelling. 3. Blocking access by AI tools If your organization was concerned about items with certain properties applied being accessed by generative AI tools, such as Copilot, you could use Auto-labelling to apply a sensitivity label that restricts EXTRACT permissions. You can find some information on this at Microsoft 365 Copilot data protection architecture | Microsoft Learn. This should be relevant for spilled data, but might also be useful in situations where there are certain records that have been marked via properties and which should not be Copilot accessible. 4. External Microsoft Purview Configurations Sensitivity labels are relevant internally only. A label, in its raw form, is essentially a piece of metadata with an ID (or GUID) that we stamp on pieces of information. These GUIDs are understood by your tenant only. If an item marked with a GUID shows up in another Microsoft 365 tenant, the GUID won’t correspond with any of that tenant’s labels or label-based controls. The art in Microsoft Purview lies in interpreting the sensitivity of items based on content markings and other identifiers, so that data security can be maintained. Document properties applied by Purview, such as ClassificationContentMarkingHeaderText are not relevant to a specific tenant, which makes them portable. We can use these properties to help maintain classifications as items move between environments. 5. Utilizing metadata applied by Records Management solutions Some EDRMS, Records or Content Management solutions will apply properties to items. If an item has been previously managed and then stamped with properties, potentially including a security classification, via one of these systems, we could use this information to inform sensitivity label application. 6. 3 rd party classification tools used externally Even if your organisation hasn’t been using 3rd party classification tools, you should consider that partner organisations, such as other Government departments, might be. Evaluating the properties applied by external organisations to items that you receive will allow you to extend protections to these items. If classification tools like Janus or Titus are used in your geography/industry, then you may want to consider checking for their properties. Regarding the use of auto-classification tools Some organisations, particularly those in Government, will have organisational policies that prevent the use of automatic classification capabilities. These policies are intended to ensure that each item is assessed by an actual person for risk of disclosure rather than via an automated service that could be prone to error. However, when auto-labelling is used to interpret and honour existing classifications, we are lowering rather than raising the risk profile. If the item’s existing classification (applied via property) is ignored, the item will be treated as porridge and is likely to be at risk. If auto-labelling is able to identify a high-risk item and apply the relevant label, it will then be within scope of Purview’s data security controls, including label-based DLP, groups and sites data out of place alerting, and potentially even item encryption. The outcome is that, through the use of auto-labelling, we are able to significantly reduce risk of inappropriate or unintended disclosure. Configuration Process Setting up document property-based auto-labelling is fairly straightforward. We need to setup a managed property and then utilize it an auto-labelling policy. Below, I've split this process into 6 steps: Step 1 – Prepare your files In order to make use of document properties, an item with the properties applied will first need to be indexed by SharePoint. SharePoint will record the properties as ‘crawled properties’, which we’ll then need to convert into ‘managed properties’ to make them useful. If you already have items with the relevant properties stored in SharePoint, then they are likely already indexed. If not, you’ll need to upload or create an item or items with the properties applied. For testing, you’ll want to create a file with each property/value combination so that you can confirm that your auto-labelling policies are all working correctly. This could require quite a few files depending on the number of properties you’re looking for. To kick off your crawled property generation though, you could create or upload a single file with the correct properties applied. For example: In the above, I’ve created properties for ClassificationContentMarkingHeaderText and ClassificationContentMarkingFooterText, which you’ll often see applied by Purview when an item has a sensitivity label content marking applied to it. I’ve also included properties to help identify items classified via JanusSeal, Titus and Objective. Step 2 – Index the files After creating or uploading your file, we then need SharePoint to index it. This should happen fairly quickly depending on the size of your environment. I'd expect to wait sometime between 10 minutes and 24 hrs. If you're not in a hurry, then I'd recommend just checking back the next day. You'll know when this has been completed when you head into SharePoint Admin > Search > Managed Search Schema > Crawled Properties and can find your newly indexed properties: Step 3 – Configure managed properties Next, the properties need to be configured as managed properties. To do this, go to SharePoint Admin > More features > Search > Managed Search Schema > Managed Properties. Create a new managed property and give it a name. Note that there are some character restrictions in naming, but you should be able to get it close to your document property name. Set the property’s type to text, select queryable and retrievable. Under ‘mappings to crawled properties’, choose add mapping, search for and select the property indexed from the file property. Note that the crawled property will have the same name as your document property, so there’s no need to browse through all of them: Repeat this so that you have a managed property for each document property that you want to look for. Step 4 – Configure Auto-labelling policies Next up, create some auto-labelling policies. You’ll need one for each label that you want to apply, not one per property as you can check multiple properties within the one auto-labelling policy. - From within Purview, head to Information Protection > Policies > Auto-labelling policies. - Create a new policy using the custom policy template. - Give your policy an appropriate name (e.g. Label PROTECTED via property). - Select the label that you want to apply (e.g. PROTECTED). - Select SharePoint based services (SharePoint and OneDrive). - Name your auto-labelling rules appropriately (e.g. SPO – Contains PROTECTED property) - Enter your conditions as a long string with property and value separated via a colon and multiple entries separated with a comma. For example: ClassificationContentMarkingHeaderText:PROTECTED,ClassificationContentMarkingFooterText:PROTECTED,Objective-Classification:PROTECTED,PMDisplay:PROTECTED,TitusSEC:PROTECTED Note that the properties that you are referencing are the Managed Property rather than the document property. This will be relevant if your managed property ended up having a different name due to character restrictions. After pasting in your string into the UI, the resultant rule should look something like this: When done, you can either leave your policy in simulation mode or save it and then turn it on from the auto-labelling policies screen. Just be aware of any potential impacts, such as accidently locking users out by automatically deploying a label with encryption configuration. You can reduce any potential impact by targeting your auto-labelling policy at a site or set of sites initially and then expanding its scope after testing. Step 5 - Test Testing your configuration will be as easy as uploading or creating a set of files with the relevant document properties in place. Once uploaded, you’ll need to give SharePoint some time to index the items and then the auto-labelling policy some time to apply sensitivity labels to them. To confirm label application, you can head to the document library where your test files are located and enable the sensitivity column. Files that have been auto-labelled will have their label listed: You could also check for auto-labelling activity in Purview via Activity explorer: Step 6 – Expand into DLP If you’ve spent the time setting up managed properties, then you really should consider capitalizing on them in your DLP configurations. DLP policy conditions can be configured in the same manner that we configured Auto-labelling in Step 3 above. The document property also gives us an anchor for DLP conditions that is independent of an item’s sensitivity label. You may wish to consider the following: DLP policies blocking external sharing of items with certain properties applied. This might be handy for situations where auto-labelling hasn’t yet labelled an item. DLP policies blocking the external sharing of items where the applied sensitivity label doesn’t match the applied document property. This could provide an indication of risky label downgrade. You could extend such policies into Insider Risk Management (IRM) by creating IRM policies that are aligned with the above DLP policies. This will allow for document properties to be considered in user risk calculation, which can inform controls like Adaptive Protection. Here's an example of a policy from the DLP rule summary screen that shows conditions of item contains a label or one of our configured document properties: Thanks for reading and I hope this article has been of use. If you have any questions or feedback, please feel free to reach out.2.1KViews8likes8CommentsMicrosoft Purview: The Ultimate AI Data Security Solution
Introduction AI is transforming the way enterprises operate, however with great innovation comes great responsibility. I’ve spent the last few years helping organizations secure their data with tools like Azure Information Protection, Data Loss Prevention, and now Microsoft Purview. As generative AI tools like Microsoft Copilot become embedded in everyday workflows, the need for clear governance and robust data protection is more urgent than ever. Through this blog post, let's explore how Microsoft Purview can help organizations stay ahead of securing AI interactions without slowing down innovation. What’s the Issue? AI agents are increasingly used to process sensitive data, often through natural language prompts. Without proper oversight, this can lead to data oversharing, compliance violations, and security risks. Why It’s Urgent? According to the recent trends of 2025, over half of corporate users bring their own AI tools to work, often consumer-grade apps like ChatGPT or DeepSeek. These tools bypass enterprise protections, making it difficult to monitor and control data exposure. Use Cases Enterprise AI Governance: Apply consistent policies across Microsoft and third-party AI tools. Compliance Auditing: Generate audit logs for AI interactions to meet regulatory requirements. Risk Mitigation: Block risky uploads and enforce adaptive protection based on user behavior. How Microsoft Purview Solves It Data Security Posture Management (DSPM) for AI Purview’s DSPM for AI provides a centralized dashboard to monitor AI activity, assess data risks, and enforce compliance policies across Copilots, agents, and third-party AI apps. It correlates data classification, user behavior, and policy coverage to surface real-time risks, such as oversharing via AI agents, and generates actionable recommendations to remediate gaps. DSPM integrates with tools like Microsoft Security Copilot for AI-assisted investigations and supports automated scanning, trend analytics, and posture reporting. It also extends protection to third-party AI tools like ChatGPT through endpoint DLP and browser extensions, ensuring consistent governance across both managed and unmanaged environments 2. Unified Protection Across AI Agents Whether you're using Microsoft 365 Copilot, Security Copilot, or Azure AI services, Purview applies consistent security and compliance controls. Agents inherit protection from their parent apps, including sensitivity labels, data loss prevention (DLP), and Insider Risk Management. Real-Time Risk Detection Purview enables real-time monitoring of prompts and responses, helping security teams detect oversharing and policy violations instantly. From Microsoft Learn – Insider Risk 4. One-Click Policy Activation Administrators can leverage Microsoft Purview’s Data Security Posture Management (DSPM) for AI to rapidly deploy comprehensive security and compliance controls via one-click policy activation. This streamlined mechanism enables organizations to enforce prebuilt policy templates across AI ecosystems, ensuring prompt implementation of data loss prevention (DLP), sensitivity labeling, and Insider Risk Management on both Microsoft and third-party AI services. Through DSPM’s unified policy orchestration layer, security teams gain granular telemetry into prompt and response flows, real-time policy enforcement, and detailed incident reporting. Automated analytics continuously assess risk posture, enabling adaptive policy adjustments and scalable governance as new AI tools and user workflows are introduced into the enterprise environment. Please note: After implementing policy changes, it can take up to 24 hours for changes to become visible and take full effect across your environment. From Microsoft Learn – Purview Data Security Posture Management (DSPM) portal 5. Support for Third-Party AI Apps Purview extends robust data security and compliance to browser-based AI tools such as ChatGPT and Google Gemini by employing endpoint Data Loss Prevention (DLP) and browser extensions that monitor and control data flows in real time. Through Microsoft Purview’s Data Security Posture Management (DSPM) for AI, organizations can implement granular controls for sensitive data accessed during both Microsoft-native and third-party AI interactions. DSPM offers continuous discovery and classification of data assets, linking AI prompts and responses to their original data sources to automatically enforce data protection policies, including sensitivity labeling, adaptive access controls, and comprehensive content inspection, contextually for each AI transaction. For unsanctioned AI services reached via browsers, the Purview browser extension inspects both input and output, enabling endpoint DLP to block, alert, or redact sensitive material instantly, thus preventing unauthorized uploads, downloads, or copy/paste activities. Security teams benefit from rich telemetry on AI usage patterns, which integrate with user risk profiles and anomaly detection to identify and flag suspicious attempts to extract confidential information. Close integration with Microsoft Security Copilot and automated analytics further enhances visibility across all AI data flows, supporting incident response, audit, and compliance reporting needs. Purview’s adaptive policy orchestration ensures that evolving AI services and workflows are continuously assessed for risk, and that controls are dynamically aligned with business, regulatory, and security requirements, enabling scalable, policy-driven governance for the expanding enterprise AI ecosystem. Pros and Cons The following table outlines the key advantages and potential limitations of implementing AI and agent data security controls within Microsoft Purview. Pros Cons License Needed Centralized AI governance Requires proper licensing and setup Microsoft 365 E5 or equivalent Purview add-on license Real-time risk detection May need browser extensions for full coverage Microsoft 365 E5 or Purview add-on Supports both Microsoft and third-party AI apps Some features limited to enterprise versions Microsoft 365 E5, E5 Compliance, or equivalent Purview add-on Conclusion Microsoft Purview offers a comprehensive solution for securing AI agents and their data interactions. By leveraging DSPM for AI, organizations can confidently adopt AI technologies while maintaining control over sensitive information. Explore Microsoft Purview’s DSPM for AI here. Start by assessing your current AI usage and activate one-click policies to secure your environment today! FAQ 1. What is the purpose of Microsoft Purview’s AI and agent data security controls? The purpose is to ensure that sensitive data accessed or processed by AI systems and agents is governed, protected, and monitored using Microsoft Purview’s compliance and security capabilities. Microsoft Purview data security and compliance protection 2. How does Microsoft Purview help secure AI-generated content? Microsoft Purview applies data loss prevention (DLP), sensitivity labels, and information protection policies to AI-generated content, ensuring it adheres to organizational compliance standards. Microsoft Purview Information Protection 3. Can Microsoft Purview track and audit AI interactions with sensitive data? Yes. Microsoft Purview provides audit logs and activity explorer capabilities that allow organizations to monitor how AI systems and agents interact with sensitive data. Search the audit log 4. What role do sensitivity labels play in AI data governance? Sensitivity labels classify and protect data based on its sensitivity level. When applied, they enforce encryption, access restrictions, and usage rights, even when data is processed by AI. Learn about sensitivity labels 5. How does Microsoft Purview integrate with Copilot and other AI tools? Microsoft Purview extends its data protection and compliance capabilities to Microsoft 365 Copilot and other AI tools by ensuring that data accessed by these tools is governed under existing policies. Microsoft 365 admin center Microsoft 365 Copilot usage 6. Are there specific controls for third-party AI agents? Yes. Microsoft Purview supports conditional access, DLP, and access reviews to manage and monitor third-party AI agents that interact with organizational data. What is Conditional Access in Microsoft Entra ID? 7. How can organizations ensure AI usage complies with regulatory requirements? By using Microsoft Purview’s compliance manager, organizations can assess and manage regulatory compliance risks associated with AI usage. Microsoft Purview Compliance Manager About the Author: Hi! Jacques “Jack” here, I’m a Microsoft Technical Trainer at Microsoft. I wanted to share a topic that is often top of mind, AI governance. I’ve been working with Microsoft Purview since its launch in 2022, building on prior experience with Azure Information Protection and Data Loss Prevention. I also have great expertise with Generative AI technologies since their public release in November 2022, including Microsoft Copilot and other enterprise-grade AI solutions.Secure and govern AI apps and agents with Microsoft Purview
The Microsoft Purview family is here to help you secure and govern data across third party IaaS and Saas, multi-platform data environment, while helping you meet compliance requirements you may be subject to. Purview brings simplicity with a comprehensive set of solutions built on a platform of shared capabilities, that helps keep your most important asset, data, safe. With the introduction of AI technology, Purview also expanded its data coverage to include discovering, protecting, and governing the interactions of AI apps and agents, such as Microsoft Copilots like Microsoft 365 Copilot and Security Copilot, Enterprise built AI apps like Chat GPT enterprise, and other consumer AI apps like DeepSeek, accessed through the browser. To help you view, investigate interactions with all those AI apps, and to create and manage policies to secure and govern them in one centralized place, we have launched Purview Data Security Posture Management (DSPM) for AI. You can learn more about DSPM for AI here with short video walkthroughs: Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn Purview capabilities for AI apps and agents To understand our current set of capabilities within Purview to discover, protect, and govern various AI apps and agents, please refer to our Learn doc here: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Here is a quick reference guide for the capabilities available today: Note that currently, DLP for Copilot and adhering to sensitivity label are currently designed to protect content in Microsoft 365. Thus, Security Copilot and Coplot in Fabric, along with Copilot studio custom agents that do not use Microsoft 365 as a content source, do not have these features available. Please see list of AI sites supported by Microsoft Purview DSPM for AI here Conclusion Microsoft Purview can help you discover, protect, and govern the prompts and responses from AI applications in Microsoft Copilot experiences, Enterprise AI apps, and other AI apps through its data security and data compliance solutions, while allowing you to view, investigate, and manage interactions in one centralized place in DSPM for AI. Follow up reading Check out the deployment guides for DSPM for AI How to deploy DSPM for AI - https://aka.ms/DSPMforAI/deploy How to use DSPM for AI data risk assessment to address oversharing - https://aka.ms/dspmforai/oversharing Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing Explore the Purview SDK Microsoft Purview SDK Public Preview | Microsoft Community Hub (blog) Microsoft Purview documentation - purview-sdk | Microsoft Learn Build secure and compliant AI applications with Microsoft Purview (video) References for DSPM for AI Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn Block Users From Sharing Sensitive Information to Unmanaged AI Apps Via Edge on Managed Devices (preview) | Microsoft Learn as part of Scenario 7 of Create and deploy a data loss prevention policy | Microsoft Learn Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn Downloadable whitepaper: Data Security for AI Adoption | Microsoft Explore the roadmap for DSPM for AI Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365PMPurEmpowering Secure AI Innovation: Data Security and Compliance for AI Agents
As organizations embrace the transformative power of generative AI, agentic AI is quickly becoming a core part of enterprise innovation. Whether organizations are just beginning their AI journey or scaling advanced solutions, one thing is clear: agents are poised to transform every function and workflow across organizations. IDC predicts that over 1 billion new business process agents will be created in the next four years 1 . This surge in AI adoption is empowering employees across roles – from low-code makers to pro-code developers – to build and use AI in new ways. Business leaders are eager to support this momentum, but they also recognize the need to innovate responsibly with AI. Microsoft Purview’s evolution When Microsoft 365 Copilot launched in November 2022, it sparked a wave of excitement and an immediate question: how do we secure and govern the data powering these AI experiences? Microsoft Purview quickly evolved to meet this need, extending its data security and compliance capabilities to the Microsoft 365 Copilot ecosystem. It delivered discoverability, protection, and governance value that helped customers discover data risks such as data oversharing, protect sensitive data to prevent data loss and insider risks, and govern AI usage to meet regulations and policies. Now, as customers move beyond pre-built agents like Copilot to develop their own AI agents and applications, Microsoft Purview has evolved to extend the same data protections built for Microsoft 365 Copilot to AI agents. Today, those protections span the entire development spectrum—from no-code and low-code tools like Copilot Studio to pro-code environments such as Azure AI Foundry. Microsoft Purview helps address challenges across the development spectrum Makers – typically business users or citizen developers who build solutions using low-code or no-code tools – shouldn’t need to become security experts to build AI responsibly. Yet, without proper safeguards, these agents can inadvertently expose sensitive data or violate compliance policies. That is why with Microsoft Purview, security and IT teams can feel confident about the agents being built in their organizations. When makers build agents through the Agent Builder or directly in Copilot Studio, security admins can set up Microsoft Purview’s data security and compliance controls that work behind the scenes to support makers in building secure and compliant agents. These controls automatically enforce policies, monitor data access, and ensure compliance without requiring the maker to become a security expert without requiring makers to take additional actions. In fact, a recent Microsoft study found that 71% of developer decision-makers acknowledge that these constraints result in security trade-offs and development delays 2 . Pro-code developers are under increasing pressure to deliver fast, flexible, and seamlessly integrated solutions, yet data security often becomes a deployment blocker or an afterthought. Building enterprise-grade data security and compliance capabilities from scratch is not only time-consuming but also requires deep domain expertise. This is where Microsoft Purview steps in. As an industry leader in data security and compliance, Purview does the heavy lifting, so developers don’t have to. Now in preview, Purview SDK can be used by developers to embed robust, enterprise-ready data protections directly into their AI applications, instead of building complex security frameworks on their own. The Purview SDK is a comprehensive set of REST APIs, documentation, and code samples, allowing developers to easily incorporate Microsoft Purview’s capabilities into their workflows—regardless of their integrated development environment (IDE). This empowers them to move fast without compromising on security or compliance and at the same time, Microsoft Purview helps security teams remain in control. : By embedding Purview APIs into the IDE, developers help enable their AI apps to be secured and governed at runtime Startups, ISVs, and partners can leverage the Purview SDK to seamlessly integrate Purview’s industry-leading features into their AI agents and applications. This enables their offerings to become Purview-aware, empowering customers to more easily secure and govern data within their AI environments. For example, Christian Veillette, Chief Technology Officer at Arthur Health, a Quisitive customer, states “The synergistic integration of MazikCare, the Quisitive Intelligence Platform, and the data compliance power of Purview SDK, including its DSPM for AI, forms a foundational pillar for trustworthy and safe AI-driven healthcare transformations. This powerful combination ensures continuous oversight and instant enforcement of compliance policies, giving IT leadership full assurance in the output of every AI model and upholding the highest safety standards. By centralizing policy enforcement, security concerns are significantly eased, empowering leadership to confidently steer their organizations through the AI transformation journey.” Microsoft partner, Infotechtion, has also leveraged the new Purview SDK to embed Purview value into their GenAI initiatives. Vivek Bhatt, Infotechtion’s Chief Technology Officer says, “Embedding Purview SDK into Infotechtion's AI governance solution improved trust and security by aligning Gen-AI interactions with Microsoft Purview's enterprise policies.” Microsoft Purview also natively integrates with Azure AI Foundry, enabling seamless, built-in security and compliance for AI workloads without requiring additional development effort. With this integration, signals from Azure AI Foundry are automatically surfaced in Microsoft Purview’s Data Security Posture Management (DSPM) for AI, Insider Risk Management, and compliance solutions. This means security teams can monitor AI usage, detect data risks, and enforce compliance policies across AI agents and applications—whether they’re built in-house or with Azure AI Foundry models. This reinforces Microsoft’s commitment to delivering secure-by-default AI innovation—empowering organizations to scale responsibly with confidence. : Data security admins can now find data security and compliance insights across Microsoft Copilots, agents built with Agent Builder and Copilot Studio, and custom AI apps and agents in Microsoft Purview DSPM for AI. Explore more partner case studies from Ernst & Young and Infosys to see how they’re leveraging Purview SDK. Learn more about Purview SDK and Microsoft Purview for Azure AI Foundry. Unified visibility and control Whether supporting pro-code developers or low-code makers, Microsoft Purview enables organizations to secure and govern AI across organizations. With Purview, security teams can discover data security risks, protect sensitive data against data leakage and insider risks, and govern AI interactions. Discover data security risks With Data Security Posture Management (DSPM) for AI, data security teams can discover detailed data risk insights in AI interactions across Microsoft Copilots, agents built in Agent Builder and Copilot Studio, and custom AI apps and agents. Data security admins can now find data security and compliance insights across Microsoft Copilots, agents built with Agent Builder and Copilot Studio, and custom AI apps and agents all in Microsoft Purview DSPM for AI. Protect sensitive data against data leaks and insider risks In DSPM for AI, data security admins can also get recommended insights to improve their organization’s security posture like minimizing risks of data oversharing. For example, an admin might get a recommendation to set up a data loss prevention (DLP) policy that prevents agents in Microsoft 365 Copilot from using certain labeled documents as grounding data to generate summaries or responses. By setting up this policy, organizations can prevent confidential legal documents—with specific language that could lead to improper guidance—from being summarized. It also ensures that “Internal only” documents aren’t used to create content that might be shared outside the organization. Extend data loss prevention (DLP) policies to agents in Microsoft 365 to protect sensitive data. Agents often pull data from sources like SharePoint and Dataverse, and Microsoft Purview helps protect that data every step of the way. It honors sensitivity labels, enforces access permissions, and applies label inheritance so that AI-generated content carries the same protections as its source. With auto-labeling in Dataverse, sensitive data is classified as soon as it’s ingested—reducing manual effort and maintaining consistent protection. When responses draw from multiple sources with different labels, the most restrictive label is applied to uphold compliance and minimize risk. : Sensitivity labels will be automatically applied to data in Dataverse. : AI-generated responses will inherit and honor the source data’s sensitivity labels. In addition to data and permission controls that help address data oversharing or leakage, security teams also need ways to detect users' risky activities in AI apps and agents that could potentially lead to data security incidents. With risky AI usage indicators, policy template, and analytics report in Microsoft Purview Insider Risk Management, security teams with appropriate permissions can detect risky activities. For example, there could be a departing employee receiving an unusual number of AI responses across Copilots and agents containing sensitive data, deviating from their past activity patterns. Security teams can then effectively detect and respond to these potential incidents to minimize the negative impact. For example, they can configure Adaptive Protection to automatically block a high-risk user from accessing sensitive data. An Insider Risk Management alert from a Risky AI usage policy shows a user with anomalous activities. Govern AI Interactions to detect non-compliant usage Microsoft Purview provides a comprehensive set of tools to govern AI usage and detect non-compliant user activities. AI interactions across Microsoft Copilots, AI apps and agents, are recorded in Audit logs. eDiscovery enables legal and compliance teams with appropriate permissions to collect and review AI-generated content for internal investigations or litigation. Data Lifecycle Management enables teams to set policies to retain or dispose of AI interactions, while Communication Compliance helps detect risky or inappropriate use of AI, such as harmful content or other violations against code-of-conduct policies. Together, these capabilities give organizations the visibility and control they need to innovate responsibly with AI. AI interactions across Microsoft Copilots, AI apps and agents are recorded in Audit logs. AI interactions across Microsoft Copilots, AI apps and agents can be collected and reviewed in eDiscovery. Microsoft Purview Communication Compliance can detect non-compliant content in AI prompts across Microsoft Copilots, AI apps and agents. Securing the Future of AI Innovation — Explore Additional Resources As organizations accelerate their adoption of agentic AI, the need for built-in security and compliance has never been more critical. Microsoft Purview empowers both makers and developers to innovate with confidence—ensuring that every AI interaction is secure, compliant, and aligned with enterprise standards. By embedding protection across the entire development lifecycle, Purview helps organizations unlock the full potential of AI while maintaining the trust, transparency, and control that responsible innovation demands. To dive deeper into how Microsoft Purview supports secure AI development, explore our additional resources, documentation, and integration guides: Learn more about Security for AI solutions on our webpage Learn more about Microsoft Purview SDK Learn more about Purview pricing Get started with Azure AI Foundry Get started with Microsoft Purview 1 IDC, 1 Billion New Logical Applications: More Background, Gary Chen, Jim Mercer, April 2024 https://blogs.idc.com/2025/04/04/the-agentic-evolution-of-enterprise-applications/ 2 Microsoft, AI App Security Quantitative Study, April 2025Microsoft Purview Powering Data Security and Compliance for Security Copilot
Microsoft Purview provides Security and Compliance teams with extensive visibility into admin actions within Security Copilot. It offers tools for enriched users and data insights to identify, review, and manage Security Copilot interaction data in DSPM for AI. Data security and compliance administrators can also utilize Purview’s capabilities for data lifecycle management and information protection, advanced retention, eDiscovery, and more. These features support detailed investigations into logs to demonstrate compliance within the Copilot tenant. Prerequisites Please refer to the prerequisites for Security Copilot and DSPM for AI in the Microsoft Learn Docs. Key Capabilities and Features Heightened Context and Clarity As organizations adopt AI, implementing data controls and a Zero Trust approach is essential to mitigate risks like data oversharing, leakage, and non-compliant usage. Microsoft Purview, combined with Data Security Posture Management (DSPM) for AI, empowers security and compliance teams to manage these risks across Security Copilot interactions. With this integration, organizations can: Discover data risks by identifying sensitive information in user prompts and responses. Microsoft Purview surfaces these insights in the DSPM for AI dashboard and recommends actions to reduce exposure. Identify risky AI usage using Microsoft Purview Insider Risk Management to investigate behaviors such as inadvertent sharing of sensitive data or to detect suspicious activity within Security Copilot usage. These capabilities provide heightened visibility into how AI is used across the organization, helping teams proactively address potential risks before they escalate. Compliance and Governance Building on this visibility, organizations can take action using Microsoft Purview’s integrated compliance and governance solutions. Here are some examples of how teams are leveraging these capabilities to govern Security Copilot interactions: Audit provides a detailed log of user and admin activity within Security Copilot, enabling organizations to track access, monitor usage patterns, and support forensic investigations. eDiscovery enables legal and investigative teams to identify, collect, and review Security Copilot interactions as part of case workflows, supporting defensible investigations. Communication Compliance helps detect potential policy violations or risky behavior in administrator interactions, enabling proactive monitoring and remediation. Data Lifecycle Management allows teams to automate the retention, deletion, and classification of Security Copilot data—reducing storage costs and minimizing risk from outdated or unnecessary information. Together, these tools provide a comprehensive governance framework that supports secure, compliant, and responsible AI adoption across the enterprise. Getting Started Enable Purview Audit for Security Copilot Sign into your Copilot tenant at https://securitycopilot.microsoft.com/, and with the Security Administrator permissions, navigate to the Security Copilot owner settings and ensure Audit logging is enabled. Microsoft Purview To start using DSPM for AI and the Microsoft Purview capabilities, please complete the following steps to get set up and then feel free to experiment yourself. Navigate to Purview (Purview.Microsoft.com) and ensure you have adequate permissions to access the different Purview solutions as described here. DSPM for AI Select the DSPM for AI “Solution” option on the left-most navigation. Go to the policies or recommendations tab turn on the following: a. “DSPM for AI – Capture interactions for Copilot Experiences”: Captures prompts and responses for data security posture and regulatory compliance from Security Copilot and other Copilot experiences. b. “Detect Risky AI Usage”: Helps to calculate user risk by detecting risky prompts and responses in Copilot experiences. c. “Detect unethical behavior in AI apps”: Detects sensitive info and inappropriate use of AI in prompts and responses in Copilot experiences. To begin reviewing Security Copilot usage within your organization and identifying interactions that contain sensitive information, select Reports from the left navigation panel. a. The "Sensitive interactions per AI app" report shows the most common sensitive information types used in Security Copilot interactions and their frequency. For instance, this tenant has a significant amount of IT and IP Address information within these interactions. Therefore, it is important to ensure that all sensitive information used in Security Copilot interactions is utilized for legitimate workplace purposes and does not involve any malicious or non-compliant use of Security Copilot. b. “Top unethical AI interactions” will show an overview of any potentially unsafe or inappropriate interactions with AI apps. In this case, Security Copilot only has seven potentially unsafe interactions that included unauthorized disclosure and regulatory collusion. c. “Insider risk severity per AI app” shows the number of high risk, medium risk, low risk and no risk users that are interacting with Security Copilot. In this tenant, there are about 1.9K Security Copilot users, but very few of them have an insider risk concern. d. To check the interaction details of this potentially risky activity, head over to Activity Explorer for more information. 5. In Activity Explorer, you should filter the App to Security Copilot. You will also have the option to filter based on the user risk level and sensitive information type. To identify the highest risk behaviors, filter for users with a medium to high risk level or those associated with the most sensitive information types. a. Once you have filtered, you can start looking through the activity details for more information like the user details, the sensitive information types, the prompt and response data, and more. b. Based on the details shown, you may decide to investigate the activity and the user further. To do so, we have data security investigation and governance tools. Data Security Investigations and Governance If you find Security Copilot actions in DSPM for AI Activity Explorer to be potentially inappropriate or malicious, you can look for further information in Insider Risk Management (IRM), through an eDiscovery case, Communication Compliance (CC), or Data Lifecycle Management (DLM). Insider Risk Management By enabling the quick policy in DSPM for AI to monitor risky Copilot usage, alerts will start appearing in IRM. Customize this policy based on your organization's risk tolerance by adjusting triggering events, thresholds, and indicators for detected activity. Examine the alerts associated with the "DSPM for AI – Detect risky AI usage" policy, potentially sorting them by severity from high to low. For these alerts, you will find a User Activity scatter plot that provides insights into the activities preceding and following the user's engagement with a risky prompt in Security Copilot. This assists the Data Security administrator in understanding the necessary triage actions for this user/alert. After thoroughly investigating these details and determining whether the activity was malicious or an inadvertent insider risk, appropriate actions can be taken, including issuing a user warning, resolving the case, sharing the case with an email recipient, or escalating the case to eDiscovery for further investigation. eDiscovery To identify, review and manage your Security Copilot logs to support your investigations, use the eDiscovery tool. Here are the steps to take in eDiscovery: a. Create an eDiscovery Case b. Create a new search c. In Search, go to condition builder and select Add conditions -> KeyQL d. Enter the query as: - KQL Equal (ItemClass=IPM.SkypeTeams.Message.Copilot.Security.SecurityCopilot) e. Run the query f. Once completed, add the search to a review set (Button at the top) g. In the review set, view details of the Security Copilot conversation Communication Compliance In Communication Compliance, like IRM, you can investigate details around the Security Copilot interactions. Specifically, in CC, you can determine if these interactions contained non-compliant usage of Security Copilot or inappropriate text. After identifying the sentiment of the Security Copilot communication, you can take action by resolving the alert, sending a warning notice to the user, escalating the alert to a reviewer, or escalating the alert for investigation, which will create a new eDiscovery case. Data Lifecycle Management For regulatory compliance or investigation purposes, navigate to Data Lifecycle Management to create a new retention policy for Security Copilot activities. a. Provide a friendly name for the retention policy and select Next b. Skip Policy Scope section for this validation c. Select “Static” type of retention policy and select Next d. Choose “Microsoft Copilot Experiences” to apply retention policy to Security Copilot interactions Billing Model Microsoft Purview audit logging of Security Copilot activity remains included at no additional cost as part of Microsoft 365 E5 licensing. However, Microsoft Purview now offers a combination of entitlement-based (per-user-per-month) and Pay-As-You-Go (PAYG) pricing models. The PAYG model applies to a broader set of Purview capabilities—including Insider Risk Management, Communication Compliance, eDiscovery, and other data security and governance solutions—based on usage volume or complexity. This flexible pricing structure ensures that organizations only pay for what they use as data flows through AI models, networks, and applications. For further details, please refer to this Microsoft Security Community Blog: New Purview pricing options for protecting AI apps and agents | Microsoft Community Hub Looking Ahead By following these steps, organizations can leverage the full potential of Microsoft Purview to enhance the security and compliance of their Security Copilot interactions. This integration not only provides peace of mind but also empowers organizations to manage their data more effectively. Please reach out to us if you have any questions or additional requirements. Additional Resources Use Microsoft Purview to manage data security & compliance for Microsoft Security Copilot | Microsoft Learn How to deploy Microsoft Purview DSPM for AI to secure your AI apps Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn Considerations for deploying Microsoft Purview Data Security Posture Management (DSPM) for AI | Microsoft Learn Learn about Microsoft Purview billing models | Microsoft LearnRethinking Data Security and Governance in the Era of AI
The era of AI is reshaping industries, enabling unprecedented innovations, and presenting new opportunities for organizations worldwide. But as organizations accelerate AI adoption, many are focused on a growing concern: their current data security and governance practices are not effectively built for the fast-paced AI innovation and ever-evolving regulatory landscape. At Microsoft, we recognize the critical need for an integrated approach to address these risks. In our latest findings, Top 3 Challenges in Securing and Governing Data for the Era of AI, we uncovered critical gaps in how organizations manage data risk. The findings exemplify the current challenges: 91% of leaders are not prepared to manage risks posed by AI 1 and 85% feel unprepared to comply with AI regulations 2 . These gaps not only increase non-compliance but also put innovation at risk. Microsoft Purview has the tools to tackle these challenges head on, helping organizations move to an approach that protects data, meets compliance regulations, and enables trusted AI transformation. We invite you to take this opportunity to evaluate your current practices, platforms, and responsibilities, and to understand how to best secure and govern your organization for growing data risks in the era of AI. Platform fragmentation continues to weaken security outcomes Organizations often rely on fragmented tools across security, compliance, and data teams, leading to a lack of unified visibility and insufficient data hygiene. Our findings reveal the effects of fragmented platforms, leading to duplicated data, inconsistent classification, redundant alerts, and siloed investigations, which ultimately is causing data exposure incidents related to AI to be on the rise 3 . Microsoft Purview offers centralized visibility across your organization’s data estate. This allows teams to break down silos, streamline workflows, and mitigate data leakage and oversharing. With Microsoft Purview, capabilities like data health management and data security posture management are designed to enhance collaboration and deliver enriched insights across your organization to help further protect your data and mitigate risks faster. Microsoft Purview offers the following: Unified insights across your data estate, breaking down silos between security, compliance, and data teams. Microsoft Purview Data Security Posture Management (DSPM) for AI helps organizations gain unified visibility into GenAI usage across users, data, and apps to address the heightened risk of sensitive data exposure from AI. Built-in capabilities like classification, labeling, data loss prevention, and insider risk insights in one platform. In addition, newly launched solutions like Microsoft Purview Data Security Investigations accelerate investigations with AI-powered deep content analysis, which helps data security teams quickly identify and mitigate sensitive data and security risks within impacted data. Organizations like Kern County historically relied on many fragmented systems but adopted Microsoft Purview to unify their organization’s approach to data protection in preparation for increasing risks associated with deploying GenAI. “We have reduced risk exposure, [Microsoft] Purview helped us go from reaction to readiness. We are catching issues proactively instead of retroactively scrambling to contain them.” – Aaron Nance, Deputy Chief Information Security Officer, Kern County Evolving regulations require continuous compliance AI-driven innovation is creating a surge in regulations, resulting in over 200 daily updates across more than 900 regulatory agencies 4 , as highlighted in our research. Compliance has become increasingly difficult, with organizations struggling to avoid fines and comply with varying requirements across regions. To navigate these challenges effectively, security leaders’ responsibilities are expanding to include oversight across governance and compliance, including oversight of traditional data catalog and governance solutions led by the central data office. Leaders also cite the need for regulation and audit readiness. Microsoft Purview enables compliance and governance by: Streamlining compliance with Microsoft Purview Compliance Manager templates, step-by-step guidance, and insights for region and industry-specific regulations, including GDPR, HIPAA, and AI-specific regulation like the EU AI Act. Supporting legal matters such as forensic and internal investigations with audit trail records in Microsoft Purview eDiscovery and Audit. Activating and governing data for trustworthy analytics and AI with Microsoft Purview Unified Catalog, which enables visibility across your data estate and data confidence via data quality, data lineage, and curation capabilities for federated governance. Microsoft Purview’s suite of capabilities provides visibility and accountability, enabling security leaders to meet stringent compliance demands while advancing AI initiatives with confidence. Organizations need a unified approach to secure and govern data Organizations are calling for an integrated platform to address data security, governance, and compliance collectively. Our research shows that 95% of leaders agree that unifying teams and tools is a top priority 5 and 90% plan to adopt a unified solution to mitigate data related risks and maximize impact 6 . Integration isn't just about convenience, it’s about enabling innovation with trusted data protection. Microsoft Purview enables a shared responsibility model, allowing individual business units to own their data while giving central teams oversight and policy control. As organizations adopt a unified platform approach, our findings reveal the upside potential not only being reduced risk but also cost savings. With AI-powered copilots such as Security Copilot in Microsoft Purview, data protection tasks are simplified with natural-language guidance, especially for under resourced teams. Accelerating AI transformation with Microsoft Purview Microsoft Purview helps security, compliance, and governance teams navigate the complexities of AI innovation while implementing effective data protection and governance strategies. Microsoft partner EY highlights the results they are seeing: “We are seeing 25%–30% time savings when we build secure features using [Microsoft] Purview SDK. What was once fragmented is now centralized. With [Microsoft] Purview, everything comes together on one platform, giving a unified foundation to innovate and move forward with confidence.” – Prashant Garg, Partner of Data and AI, EY We invite you to explore how you can propel your organization toward a more secure future by reading the full research paper at https://aka.ms/SecureAndGovernPaper. Visit our website to learn more about Microsoft Purview. 1 Forbes, Only 9% Of Surveyed Companies Are Ready To Manage Risks Posed By AI, 2023 2 SAP LeanIX, AI Survey Results, 2024 3 Microsoft, Data Security Index Report, 2024 4 Forbes, Cost of Compliance, Thomson Reuters, 2021 5 Microsoft, Audience Research, 2024 6 Microsoft, Customer Requirements Research, 2024Enterprise-grade controls for AI apps and agents built with Azure AI Foundry and Copilot Studio
AI innovation is moving faster than ever, and more AI projects are moving beyond experimentation into deployment, to drive tangible business impact. As organizations accelerate innovation with custom AI applications and agents, new risks emerge across the software development lifecycle and AI stack related to data oversharing and leaks, new vulnerabilities and threats, and non-compliance with stringent regulatory requirements Through 2025, poisoning of software supply chains and infrastructure technology stacks will constitute more than 70% of malicious attacks against AI used in the enterprise 1 , highlighting potential threats that originate early in development. Today, the average cost of a data breach is $4.88 million, but when security issues are caught early in the development process, that number drops dramatically to just $80 per incident 2 . The message is very clear; security can’t be an afterthought anymore. It must be a team sport across the organization, embedded from the start and throughout the development lifecycle. That's why developers and security teams should align on processes and tools that bring security into every stage of the AI development lifecycle and give security practitioners visibility into and the ability to mitigate risks. To address these growing challenges and help customers secure and govern their AI workloads across development and security teams, we are: Enabling Azure AI Foundry and Microsoft Copilot Studio to provide best-in-class foundational capabilities to secure and govern AI workloads Deeply integrating and embedding industry-leading capabilities from Microsoft Purview, Microsoft Defender, and Microsoft Entra into Azure AI Foundry and Microsoft Copilot Studio This week, 3,000 developers are gathering in Seattle for the annual Microsoft Build conference, with many more tuning in online, to learn practical skills for accelerating their AI apps and agents' innovation. To support their AI innovation journey, today we are excited to announce several new capabilities to help developers and organizations secure and govern AI apps and agents. New Azure AI Foundry foundational capabilities to secure and govern AI workloads Azure AI Foundry enhancements for AI security and safety With 70,000 customers, 100 trillion tokens processed this quarter, and 2 billion enterprise search queries each day, Azure AI Foundry has grown beyond just an application layer—it's now a comprehensive platform for building agents that can plan, take action, and continuously learn to drive real business outcomes. To help organizations build and deploy AI with confidence, we’re introducing new security and safety capabilities and insights for developers in Azure AI Foundry Introducing Spotlighting to detect and block prompt injection attacks in real time As AI systems increasingly rely on external data sources, a new class of threats has emerged. Indirect prompt injection attacks embed hidden instructions in documents, emails, and web content, tricking models into taking unauthorized actions without any direct user input. These attacks are difficult to detect and hard to prevent using traditional filters alone. To address this, Azure AI Content Safety is introducing Spotlighting, now available in preview. Spotlighting strengthens the Prompt Shields guardrail by improving its ability to detect and handle potential indirect prompt injections, where hidden adversarial instructions are embedded in external content. This new capability helps prevent the model from inadvertently acting on malicious prompts that are not directly visible to the user. Enable Spotlighting in Azure AI Content Safety to detect potential indirect prompt injection attacks New capabilities for task adherence evaluation and task adherence mitigation to ensure agents remain within scope As developers build more capable agents, organizations face growing pressure to help confirm those agents act within defined instructions and policy boundaries. Even small deviations can lead to tool misuse, broken workflows, or risks like unintended exposure of sensitive data. To solve this, Azure AI Foundry now includes task adherence for agents, now in preview and powered by two components: a real-time evaluation and a new control within Azure AI Content Safety. At the core is a real-time task adherence evaluation API, part of Azure AI Content Safety. This API assesses whether an agent’s behavior is aligned with its assigned task by analyzing the user’s query, system instructions, planned tool calls, and the agent’s response. The evaluation framework is built on Microsoft’s Agent Evaluators, which measure intent resolution, tool selection accuracy, completeness of response, and overall alignment to the original request. Developers can run this scoring logic locally using the Task Adherence Evaluator in the Azure AI Evaluation SDK, with a five-point scale that ranges from fully nonadherent to fully adherent. This gives teams a flexible and transparent way to inspect task-level behavior before it causes downstream issues. Task adherence is enforced through a new control in Azure AI Content Safety. If an agent goes off-task, the control can block tool use, pause execution, or trigger human review. In Azure AI Agent Service, it is available as an opt-in feature and runs automatically. Combined with real-time evaluation, this control helps to ensure that agents stay on task, follow instructions, and operate according to enterprise policies. Learn more about Prompt Shields in Azure AI Content Safety. Azure AI Foundry continuous evaluation and monitoring of agentic systems Maintaining high performance and compliance for AI agents after deployment is a growing challenge. Without ongoing oversight, issues like performance degradation, safety risks, or unintentional misuse of resources can slip through unnoticed. To address this, Azure AI Foundry introduces continuous evaluation and monitoring of agentic systems, now in preview, provides a single pane of glass dashboard to track key metrics such as performance, quality, safety, and resource usage in real time. Continuous evaluation runs quality and safety evaluations at a sampled rate of production usage with results made available in the Azure AI Foundry Monitoring dashboard and published to Application Insights. Developers can set alerts to detect drift or regressions and use Azure Monitor to gain full-stack visibility into their AI systems. For example, an organization using an AI agent to assist with customer-facing tasks can monitor groundedness and detect a decline in quality when the agent begins referencing irrelevant information, helping teams to act before it potentially negatively affects trust of users. Azure AI Foundry evaluation integrations with Microsoft Purview Compliance Manager, Credo AI, and Saidot for streamlined compliance AI regulations and standards introduce new requirements for transparency, documentation, and risk management for high-risk AI systems. As developers build AI applications and agents, they may need guidance and tools to help them evaluate risks based on these requirements and seamlessly share control and evaluation insights with compliance and risk teams. Today, we are announcing previews for Azure AI Foundry evaluation tool’s integration with a compliance management solution, Microsoft Purview Compliance Manager, and AI governance solutions, Credo AI and Saidot. These integrations help define risk parameters, run suggested compliance evaluations, and collect evidence for control testing and auditing. For example, for a developer who’s building an AI agent in Europe may be required by their compliance team to complete a Data Protection Impact Assets (DPIA) and Algorithmic Impact Assessment (AIA) to meet internal risk management and technical documentation requirements aligned with emerging AI governance standards and best practices. Based on Purview Compliance Manager’s step-by-step guidance on controls implementation and testing, the compliance teams can evaluate risks such as potential bias, cybersecurity vulnerabilities, or lack of transparency in model behavior. Once the evaluation is conducted in Azure AI Foundry, the developer can obtain a report with documented risk, mitigation, and residual risk for compliance teams to upload to Compliance Manager to support audits and provide evidence to regulators or external stakeholders. Assess controls for Azure AI Foundry against emerging AI governance standards Learn more about Purview Compliance Manager. Learn more about the integration with Credo AI and Saidot in this blogpost. Leading Microsoft Entra, Defender and Purview value extended to Azure AI Foundry and Microsoft Copilot Studio Introducing Microsoft Entra Agent ID to help address agent sprawl and manage agent identity Organizations are rapidly building their own AI agents, leading to agent sprawl and a lack of centralized visibility and management. Security teams often struggle to keep up, unable to see which agents exist and whether they introduce security or compliance risks. Without proper oversight, agent sprawl increases the attack surface and makes it harder to manage these non-human identities. To address this challenge, we’re announcing the public preview of Microsoft Entra Agent ID, a new capability in the Microsoft Entra admin center that gives security admins visibility and control over AI agents built with Copilot Studio and Azure AI Foundry. With Microsoft Entra Agent ID, an agent created through Copilot Studio or Azure AI Foundry is automatically assigned an identity with no additional work required from the developers building them. This is the first step in a broader initiative to manage and protect non-human identities as organizations continue to build AI agents. : Security and identity admins can gain visibility into AI agents built in Copilot Studio and Azure AI Foundry in the Microsoft Entra Admin Center This new capability lays the foundation for more advanced capabilities coming soon to Microsoft Entra. We also know that no one can do it alone. Security has always been a team sport, and that’s especially true as we enter this new era of protecting AI agents and their identities. We’re energized by the momentum across the industry; two weeks ago, we announced support for the Agent-to-Agent (A2A) protocol and began collaborating with partners to shape the future of AI identity workflows. Today, we’re also excited to announce new partnerships with ServiceNow and Workday. As part of this, we’ll integrate Microsoft Entra Agent ID with the ServiceNow AI Platform and the Workday Agent System of Record. This will allow for automated provisioning of identities for future digital employees. Learn more about Microsoft Entra Agent ID. Microsoft Defender security alerts and recommendations now available in Azure AI Foundry As more AI applications are deployed to production, organizations need to predict and prevent potential AI threats with natively integrated security controls backed by industry-leading Gen AI and threat intelligence for AI deployments. Developers need critical signals from security teams to effectively mitigate security risks related to their AI deployments. When these critical signals live in separate systems outside the developer experience, this can create delays in mitigation, leaving opportunities for AI apps and agents to become liabilities and exposing organizations to various threats and compliance violations. Now in preview, Microsoft Defender for Cloud integrates AI security posture management recommendations and runtime threat protection alerts directly into the Azure AI Foundry portal. These capabilities, previously announced as part of the broader Microsoft Defender for Cloud solution, are extended natively into Azure AI Foundry enabling developers to access alerts and recommendations without leaving their workflows. This provides real-time visibility into security risks, misconfigurations, and active threats targeting their AI applications on specific Azure AI projects, without needing to switch tools or wait on security teams to provide details. Security insights from Microsoft Defender for Cloud help developers identify and respond to threats like jailbreak attacks, sensitive data leakage, and misuse of system resources. These insights include: AI security posture recommendations that identify misconfigurations and vulnerabilities in AI services and provide best practices to reduce risk Threat protection alerts for AI services that notify developers of active threats and provide guidance for mitigation, across more than 15 detection types For example, a developer building an AI-powered agent can receive security recommendations suggesting the use of Azure Private Link for Azure AI Services resources. This reduces the risk of data leakage by handling the connectivity between consumers and services over the Azure backbone network. Each recommendation includes actionable remediation steps, helping teams identify and mitigate risks in both pre- and post-deployment phases. This helps to reduce risks without slowing down innovation. : Developers can view security alerts on the Risks + alerts page in Azure AI Foundry : Developers can view recommendations on the Guardrails + controls page in Azure AI Foundry This integration is currently in preview and will be generally available in June 2025 in Azure AI Foundry. Learn more about protecting AI services with Microsoft Defender for Cloud. Microsoft Purview capabilities extended to secure and govern data in custom-built AI apps and agents Data oversharing and leakage are among the top concerns for AI adoption, and central to many regulatory requirements. For organizations to confidently deploy AI applications and agents, both low code and pro code developers need a seamless way to embed security and compliance controls into their AI creations. Without simple, developer-friendly solutions, security gaps can quickly become blockers, delaying deployment and increasing risks as applications move from development to production. Today, Purview is extending its enterprise-grade data security and compliance capabilities, making it easier for both low code and pro code developers to integrate data security and compliance into their AI applications and agents, regardless of which tools or platforms they use. For example, with this update, Microsoft Purview DSPM for AI becomes the one place data security teams can see all the data risk insights across Microsoft Copilots, agents built in Agent Builder and Copilot Studio, and custom AI apps and agents built in Azure AI Foundry and other platforms. Admins can easily drill into security and compliance insights for specific AI apps or agents, making it easier to investigate and take action on potential risks. : Data security admins can now find data security and compliance insights across Microsoft Copilots, agents built with Agent Builder and Copilot Studio, and custom AI apps and agents in Microsoft Purview DSPM for AI In the following sections, we will provide more details about the updates to Purview capabilities in various AI workloads. 1. Microsoft Purview data security and compliance controls can be extended to any custom-built AI application and agent via the new Purview SDK or the native Purview integration with Azure AI Foundry. The new capabilities make it easy and effortless for security teams to bring the same enterprise-grade data security compliance controls available today for Microsoft 365 Copilot to custom AI applications and agents, so organizations can: Discover data security risks, such as sensitive data in user prompts, and data compliance risks, such as harmful content, and get recommended actions to mitigate risks proactively in Microsoft Purview Data Security Posture Management (DSPM) for AI. Protect sensitive data against data leakage and insider risks with Microsoft Purview data security policies. Govern AI interactions with Audit, Data Lifecycle Management, eDiscovery, and Communication Compliance. Microsoft Purview SDK Microsoft Purview now offers Purview SDK, a set of REST APIs, documentation, and code samples, currently in preview, enabling developers to integrate Purview's data security and compliance capabilities into AI applications or agents within any integrated development environment (IDE). : By embedding Purview APIs into the IDE, developers help enable their AI apps to be secured and governed at runtime For example, a developer building an AI agent using an AWS model can use the Purview SDK to enable their AI app to automatically identify and block sensitive data entered by users before it’s exposed to the model, while also providing security teams with valuable signals that support compliance. With Purview SDK, startups, ISVs, and partners can now embed Purview industry-leading capabilities directly into their AI software solutions, making these solutions Purview aware and easier for their customers to secure and govern data in their AI solutions. For example, Infosys Vice President and Delivery Head of Cyber Security Practice, Ashish Adhvaryu indicates, “Infosys Cyber Next platform integrates Microsoft Purview to provide enhanced AI security capabilities. Our solution, the Cyber Next AI assistant (Cyber Advisor) for the SOC analyst, leverages Purview SDK to drive proactive threat mitigation with real-time monitoring and auditing capabilities. This integration provides holistic AI-assisted protection, enhancing cybersecurity posture." Microsoft partner EY (previously known as Ernst and Young) has also leveraged the new Purview SDK to embed Purview value into their GenAI initiatives. “We’re not just building AI tools, we are creating Agentic solutions where trust, security, and transparency are present from the start, supported by the policy controls provided through the Purview SDK. We’re seeing 25 to 30 percent time savings when we build secure features using the Purview SDK,” noted Sumanta Kar, Partner, Innovation and Emerging Tech at EY. Learn more about the Purview SDK. Microsoft Purview integrates natively with Azure AI Foundry Organizations are developing an average of 14 custom AI applications. The rapid pace of AI innovation may leave security teams unaware of potential data security and compliance risks within their environments. With the update announced today, Azure AI Foundry signals are now directly integrated with Purview Data Security Posture Management for AI, Insider Risk Management, and data compliance controls, minimizing the need for additional development work. For example, for AI applications and agents built with Azure AI Foundry models, data security teams can gain visibility into AI usage and data risks in Purview DSPM for AI, with no additional work from developers. Data security teams can also detect, investigate, and respond to both malicious and inadvertent user activities, such as a departing employee leveraging an AI agent to retrieve an anomalous amount of sensitive data, with Microsoft Purview Insider Risk Management (IRM) policies. Lastly, user prompts and AI responses in Azure AI apps and agents can now be ingested into Purview compliance tools as mentioned above. Learn more about Microsoft Purview for Azure AI Foundry. 2. Purview data protections extended to Copilot Studio agents grounded in Microsoft Dataverse data Coming to preview in June, Purview Information Protection extends auto-labeling and label inheritance coverage to Dataverse to help prevent oversharing and data leaks. Information Protection makes it easier for organizations to automatically classify and protect sensitive data at scale. A common challenge is that sensitive data often lands in Dataverse from various sources without consistent labeling or protection. The rapid adoption of agents built using Copilot Studio and grounding data from Dataverse increases the risk of data oversharing and leakage if data is not properly protected. With auto-labeling, data stored in Dataverse tables can be automatically labeled based on policies set in Microsoft Purview, regardless of its source. This reduces the need for manual labeling effort and protects sensitive information from the moment it enters Dataverse. With label inheritance, AI agent responses grounded in Dataverse data will automatically carry and honor the source data’s sensitivity label. If a response pulls from multiple tables with different labels, the most restrictive label is applied to ensure consistent protection. For example, a financial advisor building an agent in Copilot Studio might connect multiple Dataverse tables, some labeled as “General” and others as “Highly Confidential.” If a response pulls from both, it will inherit the most restrictive label, in this case, "Highly Confidential,” to prevent unauthorized access and ensure appropriate protections are applied across both maker and users of the agent. Together, auto-labeling and label inheritance in Dataverse support a more secure, automated foundation for AI. : Sensitivity labels will be automatically applied to data in Dataverse : AI-generated responses will inherit and honor the source data’s sensitivity labels Learn more about protecting Dataverse data with Microsoft Purview. 3. Purview DSPM for AI can now provide visibility into unauthenticated interactions with Copilot Studio agents As organizations increasingly use Microsoft Copilot Studio to deploy AI agents for frontline customer interactions, gaining visibility into unauthenticated user interactions and proactively mitigating risks becomes increasingly critical. Building on existing Purview and Copilot Studio integrations, we’ve extended DSPM for AI and Audit in Copilot Studio to provide visibility into unauthenticated interactions, now in preview. This gives organizations a more comprehensive view of AI-related data security risks across authenticated and unauthenticated users. For example, a healthcare provider hosting an external, customer-facing agent assistant must be able to detect and respond to attempts by unauthenticated users to access sensitive patient data. With these new capabilities in DSPM for AI, data security teams can now identify these interactions, assess potential exposure of sensitive data, and act accordingly. Additionally, integration with Purview Audit provides teams with seamless access to information needed for audit requirements. : Gain visibility into all AI interactions, including those from unauthenticated users Learn more about Purview for Copilot Studio. 4. Purview Data Loss Prevention extended to more Microsoft 365 agent scenarios To help organizations prevent data oversharing through AI, at Ignite 2024, we announced that data security admins could prevent Microsoft 365 Copilot from using certain labeled documents as grounding data to generate summaries or responses. Now in preview, this control also extends to agents published in Microsoft 365 Copilot that are grounded by Microsoft 365 data, including pre-built Microsoft 365 agents, agents built with the Agent Builder, and agents built with Copilot Studio. This helps ensure that files containing sensitive content are used appropriately by AI agents. For example, confidential legal documents with highly specific language that could lead to improper guidance if summarized by an AI agent, or "Internal only” documents that shouldn’t be used to generate content that can be shared outside of the organization. : Extend data loss prevention (DLP) policies to Microsoft 365 Copilot agents to protect sensitive data Learn more about Data Loss Prevention for Microsoft 365 Copilot and agents. The data protection capabilities we are extending to agents in Agent Builder and Copilot Studio demonstrate our continued investment in strengthening the Security and Governance pillar of the Copilot Control System (CSS). CCS provides integrated controls to help IT and security teams secure, manage, and monitor Copilot and agents across Microsoft 365, spanning governance, management, and reporting. Learn more here. Explore additional resources As developers and security teams continue to secure AI throughout its lifecycle, it’s important to stay ahead of emerging risks and ensure protection. Microsoft Security provides a range of tools and resources to help you proactively secure AI models, apps, and agents from code to runtime. Explore the following resources to deepen your understanding and strengthen your approach to AI security: Learn more about Security for AI solutions on our webpage Learn more about Microsoft Purview SDK Get started with Azure AI Foundry Get started with Microsoft Entra Get started with Microsoft Purview Get started with Microsoft Defender for Cloud Get started with Microsoft 365 Copilot Get started with Copilot Studio Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial 1 Predicts 2025: Navigating Imminent AI Turbulence for Cybersecurity, Jeremy D'Hoinne, Akif Khan, Manuel Acosta, Avivah Litan, Deepak Seth, Bart Willemsen, 10 February 2025 2 IBM. "Cost of a Data Breach 2024: Financial Industry." IBM Think, 13 Aug. 2024, https://www.ibm.com/think/insights/cost-of-a-data-breach-2024-financial-industry; Cser, Tamas. "The Cost of Finding Bugs Later in the SDLC." Functionize, 5 Jan. 2023, https://www.functionize.com/blog/the-cost-of-finding-bugs-later-in-the-sdlcHow to use DSPM for AI Data Risk Assessment to Address Internal Oversharing
Background Oversharing and data leak risks may occur with or without GenAI use. However, leaders are concerned that GenAI tools might grant faster access to content with incorrect permissions, making these files easier to locate. Oversharing occurs when an employee has access to information beyond what is necessary to do their jobs. It often happens accidentally, for example if a user saves sensitive files to a SharePoint site without realizing everyone has access to that location. It could also happen when people share files too broadly (e.g. everyone in the organization sharing a link). Or it can happen when files lack protection regardless of location. Microsoft Purview Data Security for Posture Management (DSPM) for AI’s Data Risk Assessment helps to address oversharing by allowing security teams to scan files containing sensitive data and identifying data repositories such as SharePoint sites with overly permissive user access. It provides visibility into overshared content, risk assessment, remediation actions, and detailed reports. Introduction Purview Data Security Posture Management for AI (DSPM for AI)’s Data Risk Assessment is for you if you: Are an existing Microsoft 365 Copilot customer, or someone wanting to deploy Microsoft 365 Copilot: or Want to address oversharing but have not yet deployed Microsoft 365 Copilot. Prerequisites Please refer to the prerequisites for DSPM for AI in the Microsoft Learn Docs. Log in to the Purview portal To begin, start by logging into Microsoft 365 Purview portal with your admin credentials: In the Microsoft Purview portal, go to the Home page. Find DSPM for AI under solutions. Head to Purview DSPM for AI -> Data Assessment. The Data Assessments tool identifies potential oversharing risks in your organization. It also provides fixes to limit access to sensitive data. As shown on the Data Risk Assessment landing page, there are two types of assessments: A Default Assessment. This assessment runs automatically every week. Custom Assessments. This assessment is user-triggered. This blog will focus on the Default assessment and will not cover Custom assessments. The Default assessment will run automatically weekly. Additionally, the Default assessment runs weekly and targets the top 100 SharePoint sites based on usage. Default assessment Next, click the View details button for the Default data risk assessment report on the Overview page. In the Oversharing Assessment for the week page, locate the visual reports bar. The visual reports bar provides a general overview of, Assessment details, which includes: Description. Top 100 accessed SharePoint sites by usage. Last updated, next updated, and frequency. Frequency of updates for the default assessment. Total items - a visual graph of the number of items scanned and/or not scanned for sensitive information types (SITs). Sensitivity labels on data – a visual graph that includes, The number of labeled SITs detected and not detected. The number of Not labeled SITs detected and not detected. The number of data not scanned. Items shared with – a visual graph that includes the number of links, Shared with anyone. Shared organization wide. Shared with specific people. Shared outside your organization. The following data points may indicate that oversharing has occurred in the tenant: Large amount of data not scanned. Large amount of data containing SITs but not labeled. Large amount of data shared externally. Site-specific data Next, locate the list of sites (Data source ID) and their info on the table below the visual reports bar, which includes information on: Source type Total items Total items accessed Times users accessed items Unique users accessing items Total Sensitive items Total scanned items Total unscanned items Items shared with Scroll through the list and identify potential sites that may contain oversharing based on the knowledge of whether the site is private or public, and the possible conditions below: A private site that is being shared externally based on sharing links info. A private site that has a high level of documents being shared outside of the org based on a high level of total items accessed and/or unique users accessed and/or times users accessed. A public site that has a high level of sensitive items based on total sensitive items count. A site that has a high level of total unscanned items. By clicking on the Export button, you can export the Data source IDs to an Excel, CSV, JSON, and TSV file. The rollout of the export capability has started and will be complete by end of the week (week of April 28, 2025). Secure and Govern Each Site Click into each site of interest, or sites may have potential oversharing, to review the site info in the flyout panel. Overview – provides an overview of the details for the site. Data source details – provides details of where the data comes from (i.e. SharePoint) and its corresponding URL Data coverage – displays the total items scanned in the site that are either: Labeled and SITs detected, or No SITs detected Not labeled and SITs detected, or No SITs detected *Data points that may indicate that oversharing has occurred in the tenant: 1. Lots of unscanned documents. 2. Lots of documents that contain SITs but not labeled. Identify – scans your data for sensitive information. Use Microsoft Purview On-demand classification data scan to scan for sensitive information for all content in this site. Microsoft Purview On-demand classification data scan is a feature to help discover and classify sensitive content in historical data across Microsoft 365. Protect – provides remediation actions that you can take to address internal oversharing: Limit Microsoft 365 Copilot access to this site - Restrict access or block processing of certain content in SharePoint - you can choose two methods of how Copilot accesses data in SharePoint: Restrict access by label – Block processing of content with a specific sensitivity label using Purview Data Loss Prevention (DLP) policy for Copilot Restrict all items – Restrict access to site(s) using SharePoint Advanced Management (SAM) restricted content discovery (RCD) Other labeling policies - Create sensitivity label taxonomy and publish labels to SharePoint via: Default sensitivity label for SharePoint document library Default labels – setup default labels to label all new items by default using sensitivity labels. Sensitive information auto-labeling policy - Use auto-labeling policies based on sensitive content or keywords. You can click View items to view the files with SITs. SharePoint site sensitivity label to apply a sensitivity container label to the site. Review unused files - Protect sensitive data from oversharing by deleting unused files with Purview Data Lifecycle Management (DLM) retention policies. Monitor – Ongoing access monitoring Run a site access review This section displays the number of sites: Shared with anyone. Shared organization wide. Shared with specific people. Shared externally. You can then run a SharePoint site access review using SAM Run an access review through Microsoft Entra to make sure access granted is up to date. Conclusion In this blog, we explored the concept of oversharing and its implications in collaborative environments. We discussed how Microsoft Purview DSPM for AI Data Risk Assessments can help identify and mitigate risks associated with sensitive data. Additionally, this blog provided a detailed guide on using the Data Risk Assessments tool, focusing on the Default assessment, which runs automatically every week. We covered how to interpret the visual reports and identify potential oversharing risks based on various data points. Additionally, we outlined steps to secure and govern each site, including remediation actions and access monitoring. For detailed guidance on all Purview + SAM features to address oversharing, please reference the oversharing blueprint - https://aka.ms/Copilot/Oversharing. Be sure to also check out the blog on How to deploy DSPM for AI to secure and govern all types of AI, including Microsoft Copilot experiences, Enterprise AI apps, and other AI apps! Resources Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing Public webinar on oversharing - Secure AI: Practical Steps for Addressing Oversharing Concerns Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn Downloadable whitepaper - Data Security for AI Adoption | Microsoft Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365Optimizing Cybersecurity Costs with FinOps
This blog highlights the integration of two essential technologies: Cybersecurity best practices and effective budget management across tools and services. Let’s understand FinOps FinOps is a cultural practice for cloud cost management. It enables teams to take ownership of cloud usage. It helps organizations maximize value by fostering collaboration among technology, finance, and business teams on data-driven spending decisions. FinOps Framework The FinOps Framework works across the following areas: Principles Collaborate as a team. Take responsibility for cloud resources. Ensure timely access to reports. Phases Inform: Visibility and allocation Optimize: Utilization Operate: Continuous improvement and operations Maturity: Crawl, Walk, Run Key Components of Cybersecurity Budgets Preventive Measures Preventive measures serve as the initial line of defense in cybersecurity. These measures encompass firewalls, antivirus software, and encryption tools. The primary objective of these measures is to avert cybersecurity incidents from occurring. They constitute a critical component of any comprehensive cybersecurity strategy and often account for a substantial portion of the budget. Detection & Monitoring Tools like Azure Firewalls and Azure monitoring are essential for identifying potential security threats and alerting teams early to minimize impact. Incident Response Incident response comprises the measures taken to mitigate the impact of a security breach after its occurrence. This process includes isolating compromised systems, eliminating malicious software, and restoring affected systems to their normal functionality Training & Awareness Training and awareness are crucial for cybersecurity. Educating employees about threats, teach them how to avoid risks, and inform them of company security policies. Investing in training can prevent security incidents. FinOps approach to managing the cost of Security Security Cost-Optimization Security is crucial as threats and cyber-attacks evolve. Azure FinOps helps identify and remove cloud spending inefficiencies, allowing resources to be reallocated to advanced threat detection, robust controls like MFA and ZTNA, and continuous monitoring tools. Azure FinOps provides visibility into cloud costs, identifying underutilized or redundant resources and over-provisioned budgets that can be redirected to cybersecurity. Continuous real-time monitoring helps spot trends, anomalies, and inefficiencies, aligning resources with strategic goals. Regular audits may reveal overlapping subscriptions or unused security features, while ongoing monitoring prevents these issues from recurring. The efficiency gained can fund advanced threat detection, new protection measures, or security training. FinOps ensures every dollar spent on cloud services adds value, transforming waste into a secure, efficient cloud environment. Risk Mitigation FinOps boosts visibility and transparency, helping teams find weaknesses and risks in licenses, identities, devices, and access points. This is crucial for improving IAM, configuring access controls correctly, and using MFA to protect systems and data, also involves continuous monitoring to spot security gaps early and align measures with organizational goals. It helps manage financial risk by estimating breach costs and allocating resources efficiently. Regular risk assessments and budget adjustments ensure effective security investments that balance defense and business objectives. Improved Compliance and Governance Complying with standards like GDPR, HIPAA, or PCI-DSS is essential for strong cyber defenses. A FinOps approach helps by automating compliance reporting, allowing organizations to use cost-effective tools such as Azure FinOps toolkit to meet regulations. Conclusion Azure FinOps is a useful tool for managing cybersecurity costs. It enhances cost visibility and accountability, enables budget optimization and assists with compliance audits and reporting, also helps businesses invest their resources effectively and efficiently.Level Up Your App Governance With Microsoft Defender for Cloud Apps Workshop Series
Over the past two years, there has been a significant increase in nation-state attacks leveraging OAuth apps. These attacks often serve as entry points for privilege escalation, lateral movement, and damage. To effectively mitigate these risks, security teams need visibility and control over SaaS apps including GenAI apps to ensure that only trusted and compliant apps are in use. Join one of these workshops to learn: Real-world examples of OAuth attacks New pre-built templates and custom rules to simplify app governance How to quickly identify and mitigate risks from high-risk or suspicious apps Best practices for operationalizing app governance to improve your security posture These workshops are designed to accommodate global participation, with flexible date and time options. Who Should Attend: This training is ideal for anyone interested in securing OAuth apps and improving their organization’s overall SaaS security. Date Time Registration Link April 22 8:30-9:30am UTC (1:30-2:30am PST) Registration Closed April 23 6-7pm UTC (11am-12pm PST) Registration Closed May 1 3:30-4:30pm UTC (8:30-9:30am PST) Register May 8 (UPDATED) 1-2pm UTC (6-7am PST) Register May 14 (UPDATED) 10am-11am UTC (3-4am PST) Register More about app governance App governance in Defender for Cloud Apps is a set of security and policy management capabilities designed for OAuth-enabled apps registered on Microsoft Entra ID, Google, and Salesforce. App governance delivers visibility, remediation, and governance into how these apps and their users access, use, and share sensitive data in Microsoft 365 and other cloud platforms through actionable insights and automated policy alerts and actions. App governance also enables you to see which user-installed OAuth applications have access to data on Microsoft 365, Google Workspace, and Salesforce. It tells you what permissions the apps have, and which users have granted access to their accounts. Getting started with App governance View the App Governance> Overview tab in the Microsoft Defender Portal. Your sign-in account must have one of the administrator roles to view any app governance data. For more information, see Turn on app governance for Microsoft Defender for Cloud Apps. Questions? Please post below.