Microsoft Purview
Secure your AI apps with user-context-aware controls | Microsoft Purview SDK
With built-in protections, prevent data leaks, block unsafe prompts, and avoid oversharing without rewriting your app. As a developer, focus on innovation while meeting evolving security and compliance requirements. And as a security admin, gain full visibility into AI data interactions, user activity, and policy enforcement across environments. Shilpa Ranganathan, Microsoft Purview Principal GPM, shares how new SDKs and Azure AI Foundry integrations bring enterprise-grade security to custom AI apps.

Stop data leaks. Detect and block sensitive content in real time with Microsoft Purview. Get started.

Adapt AI security based on user roles. Block or allow access without changing your code. See it here.

Prevent oversharing with built-in data protections. Only authorized users can see sensitive results. Start using Microsoft Purview.

QUICK LINKS:
00:00 — Microsoft Purview controls for developers
00:16 — AI app protected by Purview
02:23 — User context aware
03:08 — Prevent data oversharing
04:15 — Behind the app
05:17 — API interactions
06:50 — Data security admin AI app protection
07:26 — Monitor and Govern AI Interactions
08:30 — Wrap up

Link References:
Check out https://aka.ms/MicrosoftPurviewSDK
Microsoft Purview API Explorer at https://github.com/microsoft/purview-api-samples/
For the Microsoft Purview Chat App go to https://github.com/johnea-chva/purview-chat

Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
Keep getting this insider knowledge, join us on social:
Follow us on Twitter: https://twitter.com/MSFTMechanics
Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics

Video Transcript:

-You can now infuse the data security controls that you’re used to with Microsoft 365 Copilot into your own custom-built AI apps and agentic solutions, even those running in non-Microsoft clouds. In fact, today I’ll show you how we are helping developers and data security teams work together to prevent some of the biggest challenges around data leaks, oversharing, and compliance during AI interactions. This way you can start secure, with code-integrated controls that free you up as a developer to focus on building secure apps and agents, knowing that users and their activities with work data will be kept secure.
-All of which is made possible with Microsoft Purview controls built into Azure AI Foundry, along with the new developer SDK that can be used to protect data during AI interactions. Protections can vary based on specific user context, even when apps are running in non-Microsoft clouds, which ultimately helps your data in apps and agents stay secure as policies evolve. As a security admin, it gives you the visibility to evolve protections against leaks and risky insiders, maintain control of your data, prevent data oversharing to unintended recipients, and govern AI data in line with your industry and regional compliance requirements by default. This approach makes it simple for you as a developer to translate the requirements of your data security teams as you build your apps using the Microsoft Purview SDK.

-In fact, let me show you an example of an AI app that’s protected by Microsoft Purview. This is an AI-powered company chat app. It’s a sample that you can find on GitHub, and it’s using Azure AI Foundry services on the backend for the large language model, and Cosmos DB to retrieve relevant information based on a user’s prompt. I’m signed in as a user on the external vendor team.

-Now, I’m going to write a prompt that adds sensitive information with a credit card, and immediately, I see a response that this request and prompt violates our company’s sensitive information policy, which was set in Microsoft Purview, so our valuable information is protected. But the real power here is that the controls are user context aware too. It’s not just blocking all credit cards, because there are easier ways to do that in code or with system prompts. Let me show you the same app without code changes for another user. I’m logged in as a member of the Customer Support Engineering team, and I’m allowed to interact with credit card numbers as part of my job, so I’m going to write the same prompt. Now I’ll submit it, and you’ll see the app generates an appropriate response. And nothing changed in the app. The only change was my user context.

-And that was an example of a prompt being analyzed prior to sending it to the application so that it could generate a response. Let me show you another example that proactively prevents data oversharing based on the information retrieval process used by the app. I’m still logged in with the user’s account on the Customer Support Engineering team, and I’ll prompt our app to send me information for recent transactions with Relecloud with payment information to look at a duplicate charge. This takes a moment, looks up the transaction information in our Cosmos DB backend, and presents the results to me.

-In this case, access permissions and protections have been applied to the backend data source using Microsoft Purview. And because our user account has permissions to that information, they received the response. This time, I’m signed in again as a user on the external vendor team. Again, I’ll write the same prompt, and because I shouldn’t and do not have access to retrieve that information, the app tells me that it can’t respond. Again, it is the same app without any code changes, and my user context prevented me from seeing information that I shouldn’t be able to see. As a developer, these controls are simple to integrate into your app code, and you don’t need to worry about the policies themselves or which users should be in scope for them.

-Let me show you. This is the code behind our app.
First, you can see that it’s registered with Microsoft Entra to help connect the app with both organizational policies and the identity of the user interacting with the app, providing the user context needed to apply the right protection scope. This is all possible by using the access tokens once the user has logged in. The app then establishes the API connection with Microsoft Purview to call the protection scopes API, as well as the process content API, so that it can check whether the submitted prompt or the response is allowed based on existing data security, access, and compliance policies. Based on what’s returned, the app either continues or informs the user of the policy violation.

-Now that you’ve seen what’s behind the app, let me show you the actual API interactions between our app and Microsoft Purview. For that, I’ll use sample code that we’ve also published to GitHub to view the raw API responses in real time. This is the Purview API Explorer app. It’s connected to Microsoft Graph, as you can see with the Request URI. I can use it to view protections and even watch how content gets processed in real time, which I’ll do here. Once the user logs in, you’ll see that with the first API, for protection scopes, the application sends the user and application tokens, as well as the activities that the app supports, like upload text and download text, as noted here, for our prompts.

-Once the request is sent to the API, Purview responds back to the application to tell it what to do, in this case for uploading and downloading text. The application waits for Purview’s response prior to displaying anything back to the user. Now I’ll go to Start a Conversation. And on the left in the Request Body, you can see my raw prompt again with sensitive information contained in the text, along with other metadata properties. I’ll send the request. On the right, I can see the details of the content response from the API. In this case, it found a policy match and responded with the action RestrictedAccess and the restriction action to block. That’s what you’d need to know as a developer to protect your AI apps.
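To make that two-call flow concrete, here is a minimal sketch of what an app might do between receiving a prompt and answering the user. It assumes you already hold a delegated Microsoft Graph access token for the signed-in user; the endpoint paths follow the protection scopes and process content APIs named in the demo, but the payload and response field names are illustrative assumptions, so check the Purview SDK documentation for the exact Graph schema.

```python
import requests

GRAPH = "https://graph.microsoft.com/beta"

def is_prompt_allowed(token: str, prompt: str) -> bool:
    """Sketch of the two-call flow shown in the demo: compute protection
    scopes for the signed-in user, then submit the prompt text for policy
    evaluation. Paths and field names are illustrative assumptions."""
    headers = {"Authorization": f"Bearer {token}"}

    # 1) Ask which activities (e.g., uploadText) are in scope of policy
    #    for this user and app. A real app would cache this response and
    #    only evaluate the activities Purview says are in scope.
    scopes = requests.post(
        f"{GRAPH}/me/dataSecurityAndGovernance/protectionScopes/compute",
        headers=headers,
        json={"activities": ["uploadText", "downloadText"]},
    )
    scopes.raise_for_status()

    # 2) Submit the actual prompt text for content evaluation.
    resp = requests.post(
        f"{GRAPH}/me/dataSecurityAndGovernance/processContent",
        headers=headers,
        json={
            "contentToProcess": {
                "contentEntries": [{"content": {"data": prompt}}],
                "activityMetadata": {"activity": "uploadText"},
            }
        },
    )
    resp.raise_for_status()

    # Continue only if no returned policy action restricts access; the
    # demo showed action "RestrictedAccess" with a block restriction.
    actions = resp.json().get("policyActions", [])
    return not any(a.get("action") == "restrictAccess" for a in actions)
```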
-Then as a data security admin, for everything to work as demonstrated, there are a few things you’ll need configured in Microsoft Purview. First, to protect against data loss of sensitive or high-value information, like I showed using credit cards, you will need data loss prevention policies in place. Second, to help prevent oversharing with managed database sources, like I showed from Cosmos DB, which also works with SQL databases, you’ll configure Information Protection policies. This ensures that your database instances are labeled, with corresponding access protections applied. Then, for visibility into activities with your connected apps, all prompt and response traffic is recorded and auditable. And for apps and agents running on Azure AI Foundry, it’s just one optional setting to light up native Microsoft Purview integration.

-In fact, here’s the level of visibility that you get as a data security admin. In DSPM for AI, you can see interactions and associated risks from your AI line-of-business apps running on Azure and other clouds once they are enlightened with Microsoft Purview integration. Here you can see user trends, applicable protections, compliance, and agent count.

And across the broader Microsoft Purview solutions, all activity and interactions from your apps are also captured and protected, including Audit Search, so that you can discover all app interactions; Communication Compliance, for visibility into inappropriate interactions; and Insider Risk Management, where app activities factor into establishing user risk. Integrating your apps with the Microsoft Purview SDK frees you up as a developer to focus on building secure apps and agents. At the same time, as the data security admin, it gives you continuous visibility to ensure that AI data interactions remain secure and compliant.

-To learn more, check out aka.ms/MicrosoftPurviewSDK. We’ve also put links to both sample apps in the description below to help you get started. Keep checking back to Microsoft Mechanics for the latest updates, and thank you for watching.
Introducing Microsoft Purview Alert Triage Agents for Data Loss Prevention & Insider Risk Management

Surface the highest-risk alerts across your environment, no matter their default severity, and take action. Customize how your agents reason, teach them what matters to your organization, and continuously refine to reduce time-to-resolution. Talhah Mir, Microsoft Purview Principal GPM, shows how to triage, investigate, and contain potential data risks before they escalate.

Focus on the most high-risk alerts in your queue. Save time by letting Alert Triage Agents for DLP and IRM surface what matters. Watch how it works.

Stay in control. Tailor triage priorities with your own rules to focus on what really matters. See how to customize your alert triage agent using Security Copilot.

View alert triage agent efficiency stats. Know what your agent is doing and how well it’s performing. Check out Microsoft Purview.

QUICK LINKS:
00:00 — Agents in Microsoft Purview
00:58 — Alert Triage Agent for DLP
01:54 — Customize Agents
03:32 — View prioritized alerts
05:17 — Calibrate Agent Behavior with Feedback
06:38 — Track Agent Performance and Usage
07:34 — Wrap up

Link References:
Check out https://aka.ms/PurviewTriageAgents

Video Transcript:

-Staying ahead of potential data security threats and knowing which alerts deserve your attention isn’t just challenging. It’s overwhelming. Every day, your organization generates an increasing and enormous volume of data interactions, and it’s hard to know which potential risks are slipping through the cracks. In fact, on average, for every 66 new alerts logged in a day, nearly a third are not investigated because of the time and effort involved. And this is exactly where automation and AI in Microsoft Purview can make a material difference. With an agent-managed alert queue that, just like an experienced tier 1 analyst, sifts through the noise to identify Data Loss Prevention and Insider Risk Management alerts that pose the greatest risks to your organization, you can focus your time and efforts on the most critical risks to your data.

-Today, I’ll show you how the agents in Microsoft Purview work, the reasoning they use to prioritize alerts, and how to get them running in your environment. I’ll start with the Alert Triage Agent for DLP. I’m in the Alerts page for Data Loss Prevention. You’ll see that just for this small date range, I have a long list of 385 active alerts. Now, I could use what’s in the Severity column to sort and prioritize what to work on first, clicking each, analyzing the details and which policies were triggered, and then repeating that process until I’ve worked my way through the list over the course of my day. And even then, I wouldn’t necessarily have the full picture.
To save time, I ended up deprioritizing low and medium severity alerts, which could still present risks that need to be investigated. But it doesn’t have to be this way.

-Instead, if I select my Alert Triage Agent view, I can see it’s done the work to triage the most important alerts, regardless of severity, that require my attention. There’s a curated list of 17 alerts for me to focus in on. And if you’re wondering whether you can trust this triaged list of alerts to be the ones that really need the most attention, you remain in control, because you’re able to teach Copilot what you want to prioritize when you set up your agent. Let me show you. I’m in the Agents view and I’ll select the DLP agent. If this is your first time using the agent, you’ll need to review what it does and how it’s triggered. In fact, it lists what it needs permissions for as it reasons over each alert. This includes your DLP configuration, reading incoming activity details and corresponding content, and then storing your feedback to refine how it will triage DLP alerts.

-Next, you can move on to deployment settings. Here, you can choose how the agent is run or triggered and select the alert timeframe. The default is the last 30 days. From there, I’ll deploy the agent. You’ll see that it tells me the next step is to customize it before it begins triaging alerts. This takes a little while to provision, and once it’s ready, there’s just one more step. Back in Alerts, I need to customize the agent. Here, I can enter my own instructions as text to help the agent prioritize alerts based on what’s important to my organization. For example, I can focus it on specific labels or projects, which can be modified over time.

-Next, I can select the individual policies that I want to focus the agent on. I’m going to select all of these in this case, then hit Save. Once I hit Review, it generates custom classifiers and rules specific to what I’ve asked the agent to look for. Then I just need to start the agent, and that’s the magic behind the agent-prioritized queue that I showed you earlier. So now, once the agent is ready, instead of trying to find that needle in our haystack of 385 alerts, I can just hit the toggle button to view the prioritized alerts from the Alert Triage Agent. Notice I’m not losing any of the alert details from before. It’s just presented as a triaged and prioritized queue, starting with the top handful of alerts that need my immediate attention, with less urgent and not categorized alerts available to view in other tabs.

-I’ll focus on what needs attention and click into the top one to see what the agent found. The Agent summary tells me that there are 25 files, eight with verified policy matches. Data includes credit card and bank account numbers shared using SharePoint. Below that, you’ll see the sensitivity risk for each shared file, the exfiltration risk related primarily to the files containing financial data, and the policy risk. And I can see in this case that the DLP policy was triggered, and the user was allowed to share without restrictions. In the Details tab, you’ll notice that the alert severity is set to low based on the policy configuration, but the triage agent, much like a human analyst, can render a verdict taking the entire context into account. Clicking into view details, I can find more information, including the related assets, where I can see each of the corresponding names, trainable classifiers if defined, and sensitive information types. I’ll scroll back up and show you one more tab here.
-In Overview, I can see the user associated with the alert. It turns out this is an important policy match to prioritize: labels on 18 highly sensitive files were downgraded, and the files were shared without proper restrictions. The user was warned and chose to proceed. I can now work on containing the risk and improving related policy protections to prevent future incidents like this one. Let’s continue to work through our prioritized alert queue, and you can see I’m now left with six. I’ll click into the first one. It’s a policy match for business-critical files containing financial and legal information. There is credit card information and a legal agreement in the shared content. That said, this happens to be a close partner of our company that typically handles this type of information, so this alert is less important. And to prevent this and future similar alerts from being flagged as needing my attention, I can calibrate the agent’s response based on what matters to me, kind of like you would teach a junior member of your team. So, in this alert categorization, I’ll click Change to add more context about why I disagree with this categorization, so that other recipients from that domain are deprioritized.

-In the details pane, I’ll change it to less urgent and add another property to deprioritize these types of alerts. In this case, I’ll add the external recipient email address. And after I hit submit, this will be added to the agent’s instruction set to further refine its rationale for prioritization. In fact, here in our list of what needs attention, you’ll see that the alert is no longer on the list. That’s how easy it is to get the agent to work on your behalf. And once you’re using the agent, you can view its progress at any time. In the Agent Overview, I can see my deployed agents and trending activities. If I click into my Data Loss Prevention Agent, I can see details about its recent activities. In the Performance tab, I can also see the agent effectiveness trend over time, and below that, a detailed breakdown of units consumed per day. This way, you can reduce your time to resolution even while your team is spread thin.

-Now, I focused on the DLP agent today. Similarly, our alert triage agent in Insider Risk Management works on your behalf to create a prioritized alert queue of data incidents by risky users in your organization that require your attention, including evaluating user risk based on historical context, as well as analyzing the user’s activity over weeks or months, whether the activities are intentional or not. In many ways, Purview’s new Alert Triage Agents for DLP and IRM, powered by Security Copilot, reduce the time, effort, and expert resources needed to truly understand the context of your alerts. They work alongside you and the whole team to accelerate and simplify your investigations. To learn more, check out aka.ms/PurviewTriageAgents, subscribe to Microsoft Mechanics if you haven’t yet, and thank you for watching.
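A practical footnote for admins who also want the raw DLP alert queue outside the portal: Purview DLP alerts surface through the Microsoft Graph security API as well. A minimal sketch, assuming a token with SecurityAlert.Read.All; the serviceSource filter value below is our assumption, so verify it against the current Graph documentation:

```python
import requests

def fetch_dlp_alerts(token: str) -> list[dict]:
    """Pull recent Purview DLP alerts from the Graph security
    alerts_v2 endpoint. The 'microsoftDataLossPrevention' value is
    an assumption to verify against the current Graph docs."""
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/security/alerts_v2",
        headers={"Authorization": f"Bearer {token}"},
        params={
            "$filter": "serviceSource eq 'microsoftDataLossPrevention'",
            "$orderby": "createdDateTime desc",
            "$top": "50",
        },
    )
    resp.raise_for_status()
    return resp.json().get("value", [])
```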
Data security for agents and 3rd party AI in Microsoft Purview

With built-in visibility into how AI apps and agents interact with sensitive data — whether inside Microsoft 365 or across unmanaged consumer tools — you can detect risks early, take decisive action, and enforce the right protections without slowing innovation. See usage trends, investigate prompts and responses, and respond to potential data oversharing or policy violations in real time. From compliance-ready audit logs to adaptive data protection, you’ll have the insights and tools to keep data secure as AI becomes a part of everyday work. Shilpa Ranganathan, Microsoft Purview Principal Group PM, shares how to balance AI innovation with enterprise-grade data governance and security.

Move from detection to prevention. Built-in, pre-configured policies you can activate in seconds. Check out DSPM for AI.

Monitor risky usage and take action. Block risky users from uploading sensitive data into AI apps. See how to use DSPM for AI.

Set instant guardrails. Use DSPM for AI to identify AI agents that may be at risk of data oversharing and take action. Get started.

QUICK LINKS:
00:00 — AI app security, governance, & compliance
01:30 — Take Action with DSPM for AI
02:08 — Activity logging
02:32 — Control beyond Microsoft services
03:09 — Use DSPM for AI to monitor data risk
05:06 — ChatGPT Enterprise
05:36 — Set AI Agent guardrails using DSPM for AI
06:44 — Data oversharing
08:30 — Audit logs
09:19 — Wrap up

Link References:
Check out https://aka.ms/SecureGovernAI

Video Transcript:

-Do you have a good handle on the data security risks introduced by the growing number of GenAI apps inside your organization? Today, 78% of users are bringing their own AI tools, often consumer grade, to use as they work, bypassing the data security protections you’ve set. And now, combined with the increased use of agents, it can be hard to know what data is being used in AI interactions and to keep valuable data from leaking outside of your organization.

-In the next few minutes, I’ll show you how enterprise-grade data security, governance, and compliance can go hand in hand with GenAI adoption inside your organization with Data Security Posture Management for AI in Microsoft Purview. This single solution not only gives you automatic visibility into Microsoft Copilot and custom apps and agents in use inside your organization, but extends visibility into AI interactions happening across different non-Microsoft AI services that may be in use.
Risk analytics then help you see at a glance what’s happening with your data, with a breakdown of the top unethical AI interactions, sensitive data interactions per AI app, and how employees are interacting with apps based on their risk profile, whether high, medium, or low. And specifically for agents, we also provide dedicated reports to expose the data risks posed by agents in Microsoft 365 Copilot and maker-created agents from Copilot Studio. And visibility is just one half of what we give you. You can also take action.

-Here, DSPM for AI provides proactive recommendations to help you take immediate action to enhance your data security and compliance posture right from the service, using built-in and pre-configured Microsoft Purview policies. And with all AI interactions audited, not only do you get the visibility I just showed, but the data is automatically captured for data lifecycle management, eDiscovery, and Communication Compliance investigations. In fact, clicking on this one recommendation for compliance controls can help you set up policies in all of these areas.

-Now, if you’re wondering how activity signals from AI apps and agents flow into DSPM for AI in the first place, the good news is, for the AI apps and agents you build with either Microsoft Copilot services or with Azure AI, even if you haven’t configured a single policy in Microsoft Purview, activity logging is enabled by default, and built-in reports are generated for you out of the gate. As I showed, visibility and control extend beyond Microsoft services as soon as you take proactive action. Directly from DSPM for AI, the fortify data security recommendation, for example, when activated, uses Microsoft Purview’s built-in classifiers under the covers to detect sensitive data and to log interactions from local app traffic over the network, as well as at the device level to protect file system interactions on Microsoft Purview onboarded PCs and Macs, and even web-based apps running in Microsoft Edge, helping prevent risky users from leaking sensitive data.

-Next, with insights now flowing in, let me walk you through how you can use DSPM for AI every day to monitor your data risks and take action. I’ll start again from reports in the overview to look at GenAI apps that are popular in our organization. Especially concerning are the apps in use by my riskiest users, who are interacting with popular consumer apps like DeepSeek and Google Gemini. ChatGPT consumer is at the top of the list, and it’s not a managed app for our organization. It’s brought in by users who are either using it for free or with a personal license, but what’s really concerning is that it has the highest number of risky users interacting with it, which could increase our risk of data loss. Now, my first inclination might be to block usage of the app outright. That said, if I scroll back up, instead I can see a proactive recommendation to prevent sensitive data exfiltration in ChatGPT with adaptive protection.

-Clicking in, I can see the types of sensitive data shared by users in their prompts. Creating this policy will log the actions of minor-risk users and block high-risk users from typing in or uploading sensitive information into ChatGPT. I can also choose to customize this policy further, but I’ll keep what’s there and confirm. And with the policies activated, now let me show you the result. Here we have a user with an elevated risk level.
They’re entering sensitive information into the prompt, and when they submit it, they are blocked. On the other hand, when a user with a lower risk level enters sensitive information and submits their prompt, they’re informed that their actions are being audited.

-Next, as an admin, let me show you how this activity was audited. From DSPM for AI in the Activity Explorer, I can see all interactions and any matching sensitive information types. Here’s the activity we just saw, and I can click into it to see more details, including exactly what was shared in the user’s prompt. Now for ChatGPT Enterprise, there’s even more visibility due to the deep API integration with Microsoft Purview. By selecting this recommendation, you can register your ChatGPT Enterprise workspace to discover and govern AI interactions. In fact, this recommendation walks you through the setup process. Then, with the interactions logged in Activity Explorer, not only are you able to see what prompts were submitted, but you also get complete visibility into the generated responses.

-Next, with the rapid development of AI agents, let me show you how you can use DSPM for AI to discover and set guardrails around information used with your user-created agents. Clicking on agents takes you to a filtered view. Immediately, I can see indicators of a potential oversharing issue. This is where data access permissions may be too broad and where not enough of my data is labeled with corresponding protections. I can also see the total agent interactions over time and the top five agents open to internet users, with interactions by unauthenticated or anonymous users. This is where people outside of my organization are interacting with agents grounded on my organization’s data, which can put that data at serious risk.

-I can also quickly see a breakdown of sensitive interactions per agent, along with the top sensitivity labels referenced, to get an idea of the type of data in use and how well protected it is. To find out more, from the Activity Explorer, I can see in this AI interaction that the agent was invoked in Copilot Chat, and I can view the agent’s details and see the prompt and response just like before. Now what I really want to do is take a closer look at the potential data oversharing issue that was flagged. For that, I’ll return to my dashboard and click into the default assessment. These run every seven days, scanning files containing sensitive data and identifying where those files are located, such as SharePoint sites with overly permissive user access.

-And I can dig into the details. I’ll click into the top one for “Obsidian Merger” and I can see label coverage for the data within it. And in the protect tab, there are eight sensitivity labels and five that are referenced by Copilot and agents. Since I want agents to honor data classifications and their related protections, I can configure recommended policies. The most stringent option is to restrict all items, removing the entire site from view of Copilot and agents. Or for more granular controls, I have a few more options. I can create default sensitivity labels for newly created items, or if I move back to the top-level options, I have the option to “Restrict Access by Label.” The Obsidian Merger information is highly privileged, and even if you’re on the core team working on it, we don’t want agents to reason over the information, so I’ll pick this label option.
-From there, I need to extend the list of sensitivity labels, and I’ll select Obsidian Merger, then confirm to create the policy. This will now block the agent from reasoning over content that includes the Obsidian Merger label. In fact, let’s look at the policy in action. Here you can see the user is asking the Copilot agent to summarize the Project Obsidian M&A doc, and even though they are the owner and author of the file, the agent cannot reason over it. It responds, “Unfortunately, I can’t provide detailed information because the content is protected.”

-As I mentioned, for both your agents and GenAI apps across Microsoft and non-Microsoft services, all activity is recorded in Audit logs to help conduct investigations whenever needed. In fact, DSPM for AI logged activity flows directly into Microsoft Purview’s best-in-class solutions: insider risk management, letting your security teams detect risky AI prompts as part of their investigations into risky users; communication compliance, to aid investigations into non-compliant use in AI interactions, such as a user trying to get sensitive information like an acquisition plan; and eDiscovery, where interactions across your Copilots, agents, and AI apps can be collected and reviewed to help conduct investigations and respond to litigation.

-So that was an overview of how GenAI adoption can go hand in hand with your organization’s enterprise-grade data security, governance, and compliance requirements, keeping your data protected. To learn more, check out aka.ms/SecureGovernAI. Keep watching Microsoft Mechanics for the latest updates, and thanks for watching.
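One footnote on the adaptive protection demo above: conceptually, enforcement scales with the user's insider-risk level. Here is a simplified sketch of that decision, with hypothetical risk tiers and action names; Purview evaluates this server-side against your policy, so this is an illustration only:

```python
def enforcement_action(user_risk_level: str, contains_sensitive_data: bool) -> str:
    """Illustrative adaptive protection logic, mirroring the demo:
    lower-risk users are audited, while high-risk users are blocked
    from submitting sensitive data. Tier and action names here are
    assumptions, not the product's internal schema."""
    if not contains_sensitive_data:
        return "allow"
    if user_risk_level == "elevated":
        return "block"   # high-risk users are stopped outright
    return "audit"       # others proceed, with the action logged
```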
Microsoft Purview protections for Copilot

Use Microsoft Purview and Microsoft 365 Copilot together to build a secure, enterprise-ready foundation for generative AI. Apply existing data protection and compliance controls, gain visibility into AI usage, and reduce risk from oversharing or insider threats. Classify, restrict, and monitor sensitive data used in Copilot interactions. Investigate risky behavior, enforce dynamic policies, and block inappropriate use — all from within your Microsoft 365 environment. Erica Toelle, Microsoft Purview Senior Product Manager, shares how to implement these controls and proactively manage data risks in Copilot deployments.

Control what content can be referenced in generated responses. Check out Microsoft 365 Copilot security and privacy basics.

Uncover risky or sensitive interactions. Use DSPM for AI to get a unified view of Copilot usage and security posture across your org.

Block access to sensitive resources. See how to configure Conditional Access using Microsoft Entra. Watch our video here.

QUICK LINKS:
00:00 — Microsoft Purview controls for Microsoft 365 Copilot
00:32 — Copilot security and privacy basics
01:47 — Built-in activity logging
02:24 — Discover and Prevent Data Loss with DSPM for AI
04:18 — Protect sensitive data in AI interactions
05:08 — Insider Risk Management
05:12 — Monitor and act on inappropriate AI use
07:14 — Wrap up

Link References:
Check out https://aka.ms/M365CopilotwithPurview
Watch our show on oversharing at https://aka.ms/OversharingMechanics

Video Transcript:

-Not all generative AI is created equal. In fact, if data security or privacy-related concerns are holding your organization back, today I’ll show you how the combination of Microsoft 365 Copilot and the data security controls in Microsoft Purview provides an enterprise-ready platform for GenAI in your organization. This way, GenAI is seamlessly integrated into your workflow across familiar apps and experiences, all backed by unmatched data security and visibility to minimize data risk and prevent data loss. First, let’s level set on a few Copilot security and privacy basics. Whether you’re using the free Copilot Chat that’s included with Microsoft 365 or have a Microsoft 365 Copilot license, both honor your existing access permissions to work information in SharePoint and OneDrive, your Teams meetings, and your email, meaning generated AI responses can only be based on information that you have access to.

-Importantly, after you submit a prompt, Copilot will retrieve relevant indexed data to generate a response. The data stays within your Microsoft 365 service trust boundary and doesn’t move out of it.
Even when the data is presented to the large language models to generate a response, information is kept separate from the model and is not used to train it. This is in contrast to consumer apps, especially the free ones, which are often designed to collect training data. As users upload files into them or paste content into their prompts, including sensitive data, the data is duplicated and stored in a location outside of your Microsoft 365 service trust boundary, removing any file access controls or classifications you’ve applied in the process and placing your data at greater risk.

-And beyond being stored there for indexing or reasoning, it can be used to retrain the underlying model. Next, adding to the foundational protections of Microsoft 365 Copilot, Microsoft Purview has activity logging built in and helps you to: discover and protect sensitive data, with visibility into current and potential risks, such as the use of unprotected sensitive data in Copilot interactions; classify and secure data, where information protection helps you automatically classify and apply sensitivity labels to data, ensuring it remains protected even when it’s used with Copilot; detect and mitigate insider risks, where you can be alerted to employee activities with Copilot that pose a risk to your data; and much more.

-Over the next few minutes, I’ll focus on Purview capabilities to get ahead of and prevent data loss and insider risks. We’ll start in Data Security Posture Management for AI, or DSPM for AI for short. DSPM for AI is the one place to get a rich, prioritized bird’s-eye view of how Copilot is being used inside your organization and to discover corresponding risks, along with recommendations to improve your data security posture that you can implement right from the solution. Importantly, this is where you’ll find detailed dashboards for Microsoft 365 Copilot usage, including agents.

-Then in Activity Explorer, we make it easy to see recent activities with AI interactions that include sensitive information types, like credit cards, ID numbers, or bank accounts. And you can drill into each activity to see details, as well as the prompt and the generated response text. One tip here: if you are seeing a lot of sensitive information exposed, it points to an information oversharing issue, where people have access to more information than necessary to do their job. If you find yourself in this situation, I recommend you also check out our recent show on the topic at aka.ms/OversharingMechanics, where I dive into the specific things you should do to assess your Microsoft 365 environment for potential oversharing risks to ensure the right people can access the right information when using Copilot.

-Ultimately, DSPM for AI gives you the visibility you need to establish a data security baseline for Copilot usage in your organization, and helps you put in place preventative measures right away. In fact, without leaving DSPM for AI, on the recommendations page you’ll find the policies we advise everyone to use to improve data security, such as this one for detecting potentially risky interactions using insider risk management, and other recommendations, like this one to detect potentially unethical behavior using communication compliance policies, and more. From there, you can dive into Microsoft Purview’s best-in-class solutions for more granular insights, and to configure specific policies and protections.

-I’ll start with information protection.
You can manage data security controls with Microsoft 365 Copilot in scope, using the information protection policies and sensitivity labels that you have in use today. In fact, by default, any Copilot response using content with sensitivity labels will automatically inherit the highest-priority label from the referenced content. And using data loss prevention policies, you can prevent Copilot from processing any content that has a specific sensitivity label applied. This way, even if users have access to those files, Copilot will effectively ignore this content as it retrieves relevant information from Microsoft Graph to generate responses. Insider risk management helps you catch data risk based on the trending activities of people on your network, using established user risk indicators and thresholds, and then uses policies to prevent accidental or intentional data misuse as users interact with Copilot. You can easily create policies based on quick policy templates, like this one looking for high-risk data leak patterns from insiders.

-By default, this quick policy will scope all users and groups, with a defined triggering event of data exfiltration, along with activity indicators including external sharing, bulk downloads, label downgrades, and label removal, in addition to other activities that indicate a high risk of data theft. And it doesn’t stop there. As individuals perform more risky activities, those can add up to elevate that user’s risk level. Here, instead of manually adjusting data security policies, using Adaptive Protection controls you can also limit Copilot use depending on a user’s dynamic risk level, for example, when a user exceeds your defined risk condition thresholds to reach an elevated risk level, as you can see here.

-Using Conditional Access policies in Microsoft Entra, in this case based on authentication context, as well as the condition for insider risk that you set in Microsoft Purview, you can choose to block access when a user attempts to reach sites with a specific sensitivity label. That way, even if a user is granted access to a SharePoint site resource by an owner, their access will be blocked by the Conditional Access policy you set. Again, this is important because Copilot honors the user’s existing permissions to work information. This way, Copilot will not return information that they do not have access to.

-Next, Communication Compliance is a related insider risk solution that can act on potentially inappropriate Copilot interactions. In fact, there are specific policy options for Microsoft 365 Copilot interactions in communication compliance, where you can flag jailbreak or prompt injection attempts using Prompt Shields classifiers. Communication compliance can be set to alert reviewers of that activity so they can easily discover policy matches and take corresponding actions. For example, if a person tries to use Copilot in an inappropriate way, like trying to get it to work around its instructions to generate content that Copilot shouldn’t, it will report on that activity, and you’ll also be able to see the response informing the user that their activity was blocked.

-Once you have the controls you want in place, it’s a good idea to keep going back to DSPM for AI so you can see where Copilot usage is matching your data security policies. Sensitive interactions per AI app shows you interactions based on sensitive information types.
Top unethical AI interactions surfaces insights based on the communication compliance controls you’ve defined. Top sensitivity labels referenced in Microsoft 365 Copilot reports on the labels you’ve created and applied to referenced content. And you can see Copilot interactions mapped to insider risk severity levels. Then digging into these reports shows you a filtered view of activities in Activity Explorer, with time-based trends and details for each. Additionally, because all Copilot interactions are logged, like other Microsoft 365 activities in email, Microsoft Teams, SharePoint, and OneDrive, you can now use the new data security investigation solution. This uses AI to quickly reason over thousands of items, including Copilot Chat interactions, to help you investigate the potential cause of known data leaks and similar incidents.

-So that’s how Microsoft 365 Copilot, along with Microsoft Purview, provides comprehensive controls to help protect your data, minimize risk, and quickly identify Copilot interactions that could lead to compromise so you can take corrective actions. No other AI solution has this level of protection and control. To learn more, check out aka.ms/M365CopilotwithPurview. Keep watching Microsoft Mechanics for the latest updates and thanks for watching.
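One detail from the information protection discussion worth pinning down: label inheritance picks the highest-priority sensitivity label among the items a response referenced. A toy sketch, with hypothetical label names and priority values (in a tenant, these come from your published label taxonomy):

```python
# Hypothetical label priorities: higher number = more restrictive.
LABEL_PRIORITY = {"General": 0, "Confidential": 1, "Highly Confidential": 2}

def response_label(referenced_labels: list[str]) -> str:
    """A Copilot response inherits the highest-priority sensitivity
    label among the content it referenced (illustrative logic only)."""
    return max(referenced_labels, key=LABEL_PRIORITY.__getitem__)

# Example: response_label(["General", "Highly Confidential"])
# -> "Highly Confidential"
```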
Microsoft Purview: New data security controls for the browser & network

Protect your organization’s data with Microsoft Purview. Gain complete visibility into potential data leaks, from AI applications to unmanaged cloud services, and take immediate action to prevent unwanted data sharing. Microsoft Purview unifies data security controls across Microsoft 365 apps, the Edge browser, Windows and macOS endpoints, and even network communications over HTTPS — all in one place. Take control of your data security with automated risk insights, real-time policy enforcement, and seamless management across apps and devices. Strengthen compliance, block unauthorized transfers, and streamline policy creation to stay ahead of evolving threats. Roberto Yglesias, Microsoft Purview Principal GPM, goes beyond Data Loss Prevention.

Keep sensitive data secure no matter where it lives or travels. Microsoft Purview DLP unifies controls across Microsoft 365, browsers, endpoints, and networks. See how it works.

Know your data risks. Data Security Posture Management (DSPM) in Microsoft Purview delivers a 360° view of sensitive data at risk, helping you proactively prevent data leaks and strengthen security. Get started.

One-click policy management. Unify data protection across endpoints, browsers, and networks. See how to set up and scale data security with Microsoft Purview. Watch our video here.

QUICK LINKS:
00:00 — Data Loss Prevention in Microsoft Purview
01:33 — Assess DLP Policies with DSPM
03:10 — DLP across apps and endpoints
04:13 — Unmanaged cloud apps in Edge browser
04:39 — Block file transfers across endpoints
05:27 — Network capabilities
06:41 — Updates for policy creation
08:58 — New options
09:36 — Wrap up

Link References:
Get started at https://aka.ms/PurviewDLPUpdates

Video Transcript:

-As more and more people use lesser-known and untrusted shadow AI applications and file sharing services at work, the controls to proactively protect your sensitive data need to evolve too. And this is where Data Loss Prevention, or DLP, in Microsoft Purview unifies the controls to protect your data in one place. And if you haven’t looked at this solution in a while, the scope of protection has expanded to ensure that your sensitive data stays protected no matter where it goes or how it’s consumed, with controls that extend beyond what you’ve seen across Microsoft 365, now adding browser-level protections that apply to unmanaged and non-Microsoft cloud apps when sensitive information is shared.

-For your managed endpoints, file system operations are also protected today on Windows and macOS. And now we are expanding detection to the network layer.
This means that as sensitive information is shared into apps and transmitted over web protocols, you as an admin have visibility into the activities putting your information at risk, so you can take appropriate action. Also, the Microsoft Purview data classification and policy management engines share the same classification service, meaning you can define the sensitive information you care about once, and we will proactively detect it even before you create any policies, which helps streamline creating policies to protect that information.

-That said, as you look to evolve your protections, where do you even start? Well, to make it easier to prioritize your efforts, Data Security Posture Management, or DSPM, provides a 360-degree view of data potentially at risk and in need of protection, such as potential data exfiltration activities that could lead to data loss, along with unprotected sensitive assets across data sources. Here at the top of the screen, you can see recommendations. I’ll act on this one to detect sensitive data leaks to unmanaged apps using something new called a Collection Policy. More on how you can configure this policy a bit later.

-With the policy activated, new insights will take up to a day to appear on our dashboard, so we’ll fast forward in time a little, and now you can see a new content category at the top of the chart for sensitive content shared with unmanaged cloud apps. Then back at the top, you can see the tile on the right has another recommendation, to prevent users from performing cumulative exfiltration activities. And when I click it, I can enable multiple policies for both Insider Risk Management and Data Loss Prevention, all in one click. So DSPM makes it easier to continually assess and expand the protection of your DLP policies. And there’s even a dedicated view of AI app-related risks with DSPM for AI, which provides visibility into how people in your organization are using AI apps and potentially putting your data at risk.

-Next, let me show you DLP in action across different apps and endpoints, along with the new browser and network capabilities. I’ll demonstrate the user experience for managed devices and Microsoft 365 apps when the right controls are in place. Here I have a letter of intent detailing an upcoming business acquisition. Notice it isn’t labeled. I’ll open up Outlook, and I’ll search for and attach the file we just saw. Due to the sensitivity of the information detected in the document, it’s fired up a policy tip warning me that I’m out of compliance with my company policy. Undeterred, I’ll type a quick message and hit send. And my attempt to override the warning is blocked.

-Next, I’ll try something else. I’ll go back to Word and copy the text into the body of my email, and you’ll see the same policy tip. And, again, I’m blocked when I still try to send that email. These protections also extend to Teams chat, Word, Excel, PowerPoint, and more. Next, let me show you how protections even extend to unmanaged cloud apps running in the Edge browser. For example, if you want to use a generative AI website, like you’re seeing here with DeepSeek, even if I manually type in content that matches my Data Loss Prevention policy, you’ll see that when I hit submit, our Microsoft Purview policy blocks the transmission of this content. This is different from endpoint DLP, which can protect file system operations like copy and paste. These Edge browser policies complement existing endpoint DLP protections in Windows and macOS.
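Behind each of these blocks is a sensitive information type match. Purview's built-in classifiers are a managed service, but the core idea is simple to illustrate: a candidate pattern plus a validation function. A simplified, hypothetical sketch for credit card numbers (real classifiers also weigh keywords, proximity, and confidence levels):

```python
import re

# Candidate pattern: 13-16 digits, optionally separated by spaces or dashes.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(candidate: str) -> bool:
    """Validate a digit string with the Luhn checksum."""
    digits = [int(c) for c in candidate if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_credit_cards(text: str) -> list[str]:
    """Return substrings that look like valid card numbers."""
    return [m.group() for m in CANDIDATE.finditer(text) if luhn_valid(m.group())]

# Example: find_credit_cards("pay with 4111 1111 1111 1111 today")
# -> ["4111 1111 1111 1111"]
```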
-For example, here I have the same file with sensitive information that we saw before. My company uses Microsoft Teams, but a few of our suppliers use Slack, so I’ll try to upload my sensitive doc into Slack, and we see a notification that my action is blocked. And since these protections are on the file and run in the file system itself, this would work for any app. That said, let’s try another operation by copying the sensitive document to my removable USB drive. And here I’m also blocked. So we’ve seen how DLP protections extend to Microsoft 365 apps, managed browsers, and file systems.

-Additionally, new protections can extend to network communication protocols when sharing information with local apps running against web services over HTTPS. In fact, here I have a local install of the ChatGPT app running. As you can see, this is not in a browser. In this case, if I unintentionally add sensitive information to my prompt, when the app passes the information over the network to call the ChatGPT APIs, Purview will be able to detect it. Let’s take a look. If I move over to DSPM for AI in Microsoft Purview, as an admin, I have visibility into the latest activity related to AI interactions. If I select an activity where sensitive data was shared, it displays the user and app details, and I can even click into the interaction details to see exactly what was shared in the prompt, as well as what specifically was detected as sensitive information in it. This will help me decide the actions we need to take. Additionally, the ability to block sharing over network protocols is coming later this year.

-Now, let’s switch gears to the latest updates for policy creation. I showed earlier setting up the new collection policy in one click from DSPM. Let me show you how we would configure the policy in detail. In Microsoft Purview, you can set this up in Data Loss Prevention under Classifiers on the new Collection Policies page. These policies enable you to tailor the discovery of data and activities from the browser, network, and devices. You can see that I already have a few created here, and I’ll go ahead and create a new one right from here.

-Next, for what data to detect, I can choose the right classifiers. I have the option to scope these down to include specific classifiers, or include all except for the ones that I want to exclude. I’ll just keep them all. For activities to detect, I can choose the activities I want. In this case, I’ll select text and files shared with a cloud or AI app. Now, I’ll hit add. And next I can choose where to collect the data from. This includes connected data sources, like devices, Copilot experiences, or Enterprise AI apps. The unmanaged cloud apps tab uses the Microsoft Defender for Cloud Apps catalog to help me target the applications I want in scope.

-In this case, I’ll go ahead and select the first six on this page. For each of these applications, I can scope which users this policy applies to, as a group or separately. I’ll scope them all together for simplicity. Here I have the option to include or exclude users or groups from the policy. In this case, I’ll keep all selected and save it. Next, I have the option of choosing whether I want detected AI prompts and responses to be captured and preserved in Purview. This enables the experience we saw earlier of viewing the full interaction.

-Finally, in mode, you can turn the policy on. Or if you leave it off, this will save it so that you can enable it later.
Once I have everything configured, I just need to review and create my policy, and that’s it. In addition, as you create DLP policies, you’ll notice new corresponding options. Let me show you the main one. For each policy, you’ll now be asked what type of data you want to protect. First is data stored in connected sources. This includes Microsoft 365 and endpoint policies, which you’re likely already using now. The new option is data in browser and network activity. This protects data in real time as it’s being used in the browser or transmitted over the network. From there, configuring everything else in the policy should feel familiar from other policies you’ve already defined.

-To learn more and get started with extending your DLP protections, check out aka.ms/PurviewDLPUpdates. Keep checking back to Microsoft Mechanics for all the latest updates, and thanks for watching.
Oversharing Control at Enterprise Scale | Updates for Microsoft 365 Copilot in Microsoft Purview

Minimize the risks that come with oversharing and potential data loss. Use Microsoft Purview and its new Data Security Posture Management (DSPM) for AI insights, along with new Data Loss Prevention policies for Microsoft 365 Copilot, and SharePoint Advanced Management, which is now included with Microsoft 365 Copilot. Automate site access reviews at scale and add controls to restrict access to sites if they contain highly sensitive information. Erica Toelle, Microsoft Purview Senior PM, shows how to control data visibility, automate site access reviews, and fine-tune permissions with the Pilot, Deploy, and Optimize phases.

Protect your data from unwanted exposure. Find and secure high-risk SharePoint sites with Microsoft Purview’s oversharing report. Start here.

Secure Microsoft 365 Copilot adoption at scale. Check out the Pilot-Deploy-Optimize approach to align AI use with your organization’s data governance. Watch here.

Boost security, compliance, and governance. Scoped DLP policies enable Microsoft 365 Copilot to respect data labels. Take a look. Watch our video here.

QUICK LINKS:
00:00 — Minimize risk of oversharing
01:24 — Oversharing scenarios
04:03 — How oversharing can occur
05:38 — Restrict discovery & limit access
06:36 — Scope sites
07:15 — Pilot phase
08:16 — Deploy phase
09:17 — Site access reviews
10:00 — Optimize phase
10:54 — Wrap up

Link References:
Check out https://aka.ms/DeployM365Copilot
Watch our show on the basics of oversharing at https://aka.ms/SMBoversharing

Video Transcript:

-Are you looking to deploy Microsoft 365 Copilot at scale, but concerned that your information is overshared? Ultimately, you want to ensure that your users and teams can only get to the data required to do their jobs and nothing more. For example, while using Microsoft 365 Copilot and interacting with work data, you don’t want information surfaced that users should not have permissions to view. So, where do you even start to solve for this? You might have hundreds or thousands of SharePoint sites to assess in order to right-size information access. Additionally, knowing where your sensitive or high-value information resides, and making sure the policies you set protect information continuously and avoid returning to an oversharing state, can come with challenges.

-The good news is there are a number of updated tools and resources available to help you get a handle on all this.
In the next few minutes, I’ll unpack the approach you can take to help you minimize the risks that come with oversharing and potential data loss using Microsoft Purview and its new Data Security Posture Management for AI insights, along with new Data Loss Prevention policies for Microsoft 365 Copilot and more. And SharePoint Advanced Management, which is now included with Microsoft 365 Copilot. This helps you automate site access reviews at scale and adds controls to restrict access to sites if they contain highly sensitive information. First, let’s look at how information oversharing can inadvertently occur when using Microsoft 365 Copilot, just as it would with everyday search.

-I’ll explain how it works. When you submit a prompt, before presenting it to a large language model, Copilot interprets the prompt and, using a process called Retrieval Augmented Generation, finds and retrieves grounding information that you are allowed to access in places like SharePoint, OneDrive, Microsoft Teams, your email and calendar, and optionally the internet, as well as other connected data sources. The retrieved information is appended to your prompt as additional context. Then that larger prompt is presented to the large language model. With that added grounding information, the response is generated and then formatted for the app that you’re using. For this to work well, that information retrieval step relies on accurate search. And what’s important here is, as you use Copilot, it can only retrieve information that you explicitly have access to and nothing else. This is how search works in Microsoft 365 and SharePoint. The controls you put in place to achieve just enough access will reduce data security risk, whether you intend to use Microsoft 365 Copilot or not.

-So, let me show you a few examples you may have experienced where content is overshared. I’ll start in Business Chat. I’m logged in as Adele Vance from the sales team. Her customers are pressuring her for information about new products that haven’t been internally or externally announced. She submits a prompt for 2025 product plans, and the response returns a few clearly sensitive documents that she shouldn’t have access to, and the links in the response and in the citations take Adele right to those files.

-Now, I’m going to switch perspectives to someone on the product planning team building the confidential plan stored in a private SharePoint site. I’m working on the 2025 product plan on a small team. This is the same doc that Adele just found in Business Chat, and if you look at the top of the document right now, there is one other person whom I expect in the document. Then suddenly a few more people appear to have the document open, and I don’t know who these people are and they shouldn’t be here. So, this file is definitely overshared.

-Now, I’m going to switch back to Adele’s perspective. Beyond the product planning doc, the response also describes a new project with the code name Thunderbolt. So, I’ll choose the Copilot-recommended prompt to provide more details about Project Thunderbolt, and we can see a couple of recent documents with information that I, as Adele, should not have access to as a member of the sales team. In fact, if I open the file, I can get right to the detailed specifications and pricing information.
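To make the retrieval step described above concrete, here is a minimal, illustrative Python sketch of permission-trimmed Retrieval Augmented Generation. Everything in it is hypothetical (the document store, the toy relevance match, and the user identities are stand-ins), and it is not Copilot’s actual implementation; the point is only the ordering: retrieval is security-trimmed to the signed-in user before any context reaches the model.

```python
# A minimal, hypothetical sketch of permission-trimmed RAG.
# Not Copilot's real pipeline; it only illustrates that retrieval
# is filtered by the user's existing permissions before grounding.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    content: str
    allowed_users: set          # permissions on the source item

def retrieve(prompt, user, index):
    """Return only relevant documents the user can already access."""
    return [
        doc for doc in index
        if user in doc.allowed_users                  # security trimming
        and prompt.lower() in doc.content.lower()     # toy relevance match
    ]

def ground(prompt, user, index):
    """Append permission-trimmed context before calling the LLM."""
    context = "\n".join(d.content for d in retrieve(prompt, user, index))
    return f"{prompt}\n\nContext:\n{context}"         # then sent to the model

index = [Document("2025 plan", "2025 product plans: confidential details",
                  {"planner@contoso.com"})]

# Adele gets no grounding context, because she lacks access to the item.
# If the file were overshared to everyone, trimming would let it through.
print(ground("2025 product plans", "adele@contoso.com", index))
```

Note that in the overshared examples above, the problem is not this ordering; it is that the permission sets themselves are too broad, so security trimming lets too much through.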
-Now, let’s dig into the potential reasons why this is happening, and then I’ll cover how you discover and correct these conditions at enterprise scale.

-First, privacy settings for SharePoint sites can be set to public or private. These settings are most commonly configured as sites are created. Often sites are set to public, which means anyone in your organization can find content contained within those sites, and by extension, so can Microsoft 365 Copilot.

-Next is setting the default sharing option to everyone in an organization. One common misperception here is that just by creating the link, you’re enabling access to that file, folder, or site automatically. That’s not how these links work, though. Once a sharing link is redeemed, or clicked on by the recipient, that person will have access to and be able to search for the shared content. There are, however, sharing approaches that auto-redeem sharing links, such as pasting the link into an email and sending that to lots of people. In that case, those recipients have access to the content and will be able to search for it immediately.

-Related to this is granting permissions to the everyone except external users group as you define membership for your SharePoint sites. This group gives everyone in your organization access and the ability to search for that information too. And you’ll also want to look into permissions granted to other large and inclusive groups, which are often maintained using dynamic group membership. And if you’re using Data Loss Prevention, information protection, or other classification controls from Microsoft Purview, labeled content can also trigger sharing restrictions.

-So, let’s move on to addressing these common issues and the controls you will use in Microsoft 365, Microsoft Purview, and SharePoint Advanced Management. At a high level, there are two primary ways to implement protections. The first approach is to restrict content discovery so that information doesn’t appear in search. Restricting discovery still allows users to access content they’ve previously accessed, as well as the content shared with them. The downsides are that content people should not have access to is still accessible, and importantly, Copilot cannot work with restricted content even if it’s core to a person’s job. So, we recommend restricting content discovery as a short-term solution.

-The second approach is to limit information access by tightening permissions on sites, folders, and individual files. This option has stronger protections against data loss, and users can still request access if they need it to do their jobs, meaning only people who need access have access. We recommend limiting access as an ongoing best practice. Then to scope the sites that you want to allow and protect, we provide a few options to help you know where to start. First, you can use the SharePoint active sites list, where you can sort by activity to discover which SharePoint sites should be universally accessible for all employees in your organization. Then, as part of the new Data Security Posture Management for AI reporting in Microsoft Purview, the oversharing report lets you easily find the higher-risk sites containing the most sensitive information that you want to protect. The sites you define to allow access and limit access will be used in later steps.
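If you prefer to script that first scoping pass rather than sort the active sites list in the admin center, one option is the Microsoft Graph usage reports. The sketch below is hedged: it assumes you have already acquired an access token with the Reports.Read.All permission (for instance via MSAL, not shown), and the CSV column names reflect the report schema at the time of writing and may differ in your tenant.

```python
# Hedged sketch: rank SharePoint sites by recent activity using the
# Microsoft Graph SharePoint site usage report (returned as CSV).
import csv
import io
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token>"  # placeholder; acquire with Reports.Read.All

resp = requests.get(
    f"{GRAPH}/reports/getSharePointSiteUsageDetail(period='D30')",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

# The report downloads as CSV, typically with a UTF-8 BOM.
rows = list(csv.DictReader(io.StringIO(resp.content.decode("utf-8-sig"))))

# Sort by 30-day page views to surface candidates for the allow list.
rows.sort(key=lambda r: int(r.get("Page View Count") or 0), reverse=True)
for row in rows[:20]:
    print(row.get("Site URL"), row.get("Page View Count"))
```

The busiest, broadly used sites from a listing like this are candidates for the universally accessible allow list, while the oversharing report in Microsoft Purview remains the place to find the sensitive, higher-risk sites.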
Now, let’s move on to the steps for preparing your data for Microsoft 365 Copilot. We’ve mapped best practices and tools for Copilot adoption across Pilot, Deploy, and Optimize phases.

-First, in the Pilot phase, we recommend organization-wide controls to easily restrict discovery when using Copilot. This means taking your list of universally accessible sites previously mentioned, then using a capability called Restricted SharePoint Search, where you can create an allow list of up to 100 sites, then allow just those sites to be used with search in Copilot. Then, in parallel in Microsoft Purview, we’ll configure ways to get visibility into Copilot usage patterns, where you can enable audit mode using Data Loss Prevention policies to detect sharing of labeled or unlabeled sensitive content. And likewise, you’ll enable analysis of Copilot interactions as part of communication compliance. Again, these approaches do not impact information access, only discoverability via Copilot and search.

-Now, let’s move on to the broader Deploy phase, where you will enable Copilot for more users. Here you’ll use the list of sites from Microsoft Purview’s oversharing report to identify the sites with the most sensitive information. Controls in Microsoft Purview provide proactive information protection with sensitivity labels for your files, emails, meetings, groups, and sites. For each item, you can use more targeted controls to right-size site access by assigning permissions to specific users and groups. And when applied, these controls on the backend will move public sites to private and control access to defined site members based on the permissions you set. Next, you can enable the new Data Loss Prevention for Microsoft 365 Copilot policies to exclude specific labels from Copilot prompts and responses. And you can change your DLP policies from the audit mode that you set during the Pilot phase to start blocking unnecessary sharing of labeled content, where you’ll now turn on the policies in order to enforce them.

-Then, two options from SharePoint Advanced Management are to use restricted access control to limit access to individual sites, so that only members in defined security groups will have access, and to limit site access by operationalizing site owner access reviews. Then, as an additional fine-tuning option, you can target restricted content discovery on individual sites, like you see here with our leadership site, to prevent Copilot from using its content as you continue to work through access management controls. And as part of the Deploy phase, you’ll disable Restricted SharePoint Search once you have the right controls in place. Together, these options will impact both access permissions as well as discovery via Copilot and search.

-Next, the final Optimize phase is about setting your organization up for the long term. This includes permissioning, information classification, and data lifecycle management. Here you’ll continually monitor your data security risks using oversharing reports, then implement auto-labeling and classification strategies using Microsoft Purview, and ensure that as new sites are created, site owners and automated provisioning respect access management principles. These processes help ensure that your organization doesn’t drift back into an oversharing state, to keep your data protected and ongoing permissions in check. Now, if we switch back to our initial user examples in Business Chat with our controls in place, if we try the same prompts as before, you’ll see that Adele can no longer access sensitive information, even if she knows exactly what to look for in her prompts. The data is now protected and access has been right-sized for everyone in the organization.
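To make one Deploy-phase remediation concrete: for a group-connected team site, flipping the Microsoft 365 group from Public to Private is what moves the site’s content out of org-wide reach. Below is a hedged sketch using Microsoft Graph; the token and group ID are placeholders, it assumes the Group.ReadWrite.All permission, and tenant-wide controls such as Restricted SharePoint Search, restricted access control, and restricted content discovery are configured through SharePoint admin tooling rather than this API.

```python
# Hedged sketch: set a Microsoft 365 group's visibility to Private via
# Microsoft Graph, which in turn makes its connected team site private.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token>"        # placeholder; needs Group.ReadWrite.All
group_id = "<m365-group-guid>"  # placeholder: the overshared site's group

resp = requests.patch(
    f"{GRAPH}/groups/{group_id}",
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
    json={"visibility": "Private"},  # was "Public" for an open site
)
resp.raise_for_status()              # Graph returns 204 No Content on success
print(f"Group {group_id} is now Private.")
```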
-So, those are the steps and tools to prepare your information for Microsoft 365 Copilot at enterprise scale, and help ensure that your data is protected and that everyone has just enough access to do their jobs. To learn more, check out aka.ms/DeployM365Copilot. Also, watch our recent show on the basics of oversharing at aka.ms/SMBoversharing for more tips to right-size permissions for SharePoint site owners. Keep watching Microsoft Mechanics for the latest updates and thanks for watching.