Microsoft Purview
Introducing Microsoft Purview Alert Triage Agents for Data Loss Prevention & Insider Risk Management
Surface the highest-risk alerts across your environment, no matter their default severity, and take action. Customize how your agents reason, teach them what matters to your organization, and continuously refine to reduce time-to-resolution. Talhah Mir, Microsoft Purview Principal GPM, shows how to triage, investigate, and contain potential data risks before they escalate. Focus on the most high-risk alerts in your queue. Save time by letting Alert Triage Agents for DLP and IRM surface what matters. Watch how it works. Stay in control. Tailor triage priorities with your own rules to focus on what really matters. See how to customize your alert triage agent using Security Copilot. View alert triage agent efficiency stats. Know what your agent is doing and how well it’s performing. Check out Microsoft Purview. QUICK LINKS: 00:00 — Agents in Microsoft Purview 00:58 — Alert Triage Agent for DLP 01:54 — Customize Agents 03:32 — View prioritized alerts 05:17 — Calibrate Agent Behavior with Feedback 06:38 — Track Agent Performance and Usage 07:34 — Wrap up Link References Check out https://aka.ms/PurviewTriageAgents Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. 
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast Keep getting this insider knowledge, join us on social: Follow us on Twitter: https://twitter.com/MSFTMechanics Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics Video Transcript: -Staying ahead of potential data security threats and knowing which alerts deserve your attention isn’t just challenging. It’s overwhelming. Every day, your organization generates an enormous and growing volume of data interactions, and it’s hard to know which potential risks are slipping through the cracks. In fact, on average, for every 66 new alerts logged in a day, nearly a third are not investigated because of the time and effort involved. And this is exactly where automation and AI in Microsoft Purview can make a material difference. With an agent-managed alert queue that, just like an experienced tier 1 analyst, sifts through the noise to identify Data Loss Prevention and Insider Risk Management alerts that pose the greatest risks to your organization, letting you focus your time and efforts on the most critical risks to your data. -Today, I’ll show you how the agents in Microsoft Purview work, the reasoning they use to prioritize alerts, and how to get them running in your environment. I’ll start with Alert Triage Agent for DLP. I’m in the Alerts page for Data Loss Prevention. You’ll see that just for this small date range, I have a long list of 385 active alerts. 
Now, I could use what’s in the Severity column to sort and prioritize what to work on first, clicking each, analyzing the details, which policies were triggered, and then repeating that process until I’ve worked my way through the list over the course of my day. And even then, I wouldn’t necessarily have the full picture. To save time, I’d end up deprioritizing low and medium severity alerts, which could still present risks that need to be investigated, but it doesn’t have to be this way. -Instead, if I select my Alert Triage Agent view, I can see it’s done the work to triage the most important alerts, regardless of severity, that require my attention. There’s a curated list of 17 alerts for me to focus in on. And if you’re wondering whether you can trust this triage list of alerts to be the ones that really need the most attention, you remain in control because you’re able to teach Copilot what you want to prioritize when you set up your agent. Let me show you. I’m in the Agents view and I’ll select the DLP agent. If this is your first time using the agent, you’ll need to review what it does and how it’s triggered. In fact, it lists what it needs permissions for as it reasons over each alert. This includes your DLP configuration, reading incoming activity details and corresponding content, and then storing your feedback to refine how it will triage DLP alerts. -Next, you can move on to deployment settings. Here, you can choose how the agent runs or is triggered and select the alert timeframe. The default is last 30 days. From there, I’ll deploy the agent. You’ll see that it tells me the next step is to customize it before it begins triaging alerts. This takes a little while to provision, and once it’s ready, there’s just one more step. Back in Alerts, I need to customize the agent. Here, I can enter my own instructions as text to help the agent prioritize alerts based on what’s important to my organization. 
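Conceptually, the triage described above works like a context-aware scoring pass over the raw queue, where organizational priorities and what the user actually did can outweigh the policy-assigned severity. The sketch below is a toy illustration of that idea in Python; the alert fields, weights, and scoring logic are all assumptions for demonstration, not Purview's actual reasoning, which is driven by Security Copilot.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    id: str
    severity: str                      # severity assigned by the DLP policy
    sensitive_types: list = field(default_factory=list)
    action_allowed: bool = False       # did the user proceed despite the policy?

def triage_score(alert: Alert, org_priorities: set) -> int:
    """Toy priority score: context can outweigh policy-assigned severity."""
    score = {"low": 1, "medium": 2, "high": 3}[alert.severity]
    # Boost alerts touching data the organization says it cares about
    score += sum(2 for t in alert.sensitive_types if t in org_priorities)
    # A share that actually went through matters more than a blocked one
    if alert.action_allowed:
        score += 3
    return score

org_priorities = {"Credit Card Number", "Bank Account Number"}
alerts = [
    Alert("a1", "high", ["Phone Number"]),
    Alert("a2", "low", ["Credit Card Number", "Bank Account Number"], action_allowed=True),
]
ranked = sorted(alerts, key=lambda a: triage_score(a, org_priorities), reverse=True)
print([a.id for a in ranked])  # ['a2', 'a1']
```

The point of the sketch: a "low" severity alert involving high-priority financial data that the user actually shared outranks a "high" severity alert with neither, which mirrors the agent's severity-independent queue.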
For example, I can focus it on specific labels or projects, which can be modified over time. -Next, I can select the individual policies that I want to focus the agent on. I’m going to select all of these in this case, then hit Save. Once I hit Review, it generates custom classifiers and rules specific to what I’ve asked the agent to look for. Then I just need to start the agent, and that’s the magic behind the agent-prioritized queue that I showed you earlier. So now, once the agent is ready, instead of trying to find that needle in our haystack of 385 alerts, I can just hit the toggle button to view the prioritized alerts from the Alert Triage Agent. Notice I’m not losing any of the alert details from before. It’s just presented as a triaged and prioritized queue, starting with the top handful of alerts that need my immediate attention, with less urgent and not categorized alerts available to view in other tabs. -I’ll focus on what needs attention and click into the top one to see what the agent found. The Agent summary tells me that there are 25 files, eight of which have verified policy matches. The data includes credit card and bank account numbers shared using SharePoint. Below that, you’ll see the sensitivity risk for each shared file, the exfiltration risk related primarily to the files containing financial data, and the policy risk. And I can see that in this case, the DLP policy was triggered, but the user was allowed to share without restrictions. In the Details tab, you’ll notice that the alert severity is set to low based on the policy configuration, but the triage agent, much like a human analyst, can render a verdict taking the entire context into account. Clicking into view details, I can find more information, including the related assets, where I can see each of the corresponding names, trainable classifiers if defined, and sensitive information types. I’ll scroll back up and show you one more tab here. -In Overview, I can see the user associated with the alert. 
Turns out this is an important policy match to prioritize: labels on 18 highly sensitive files were downgraded, and they were shared without proper restrictions. The user was warned and chose to proceed. I can now work on containing the risk and improving related policy protections to prevent future incidents like this one. Let’s continue to work through our prioritized alert queue, and you can see I’m now left with six. I’ll click into the first one. It’s a policy match for business-critical files containing financial and legal information. This is credit card information and a legal agreement in the shared content. That said, this happens to be a close partner of our company that typically handles this type of information, so this alert isn’t a priority. And to prevent this and future similar alerts from being flagged as needing my attention, I can calibrate the agent’s response based on what matters to me, kind of like you would teach a junior member of your team. So, in this alert categorization, I’ll click Change to add more context about why I disagree, so that other recipients from that domain are deprioritized. -In the details pane, I’ll change it to less urgent and add another property to deprioritize these types of alerts. In this case, I’ll add the external recipient email address. And after I hit submit, this will be added to the agent’s instruction set to further refine its rationale for prioritization. In fact, here in our list of what needs attention, you’ll see that the alert is no longer on the list. That’s how easy it is to get the agent to work on your behalf. And once you’ve been using the agent, at any time you can view its progress. In the Agent Overview, I can see my deployed agents and trending activities. If I click into my Data Loss Prevention Agent, I can see details about its recent activities. 
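The calibration step above behaves like appending a deprioritization rule to the agent's instruction set, which is then consulted on future alerts. Here is a toy illustration with hypothetical field names (`recipient`, `verdict`); the actual agent incorporates feedback into its Security Copilot reasoning rather than exact-match rules like these.

```python
def apply_feedback(instructions, alert, verdict, deprioritize_field=None):
    """Record analyst feedback as a rule consulted on future alerts."""
    rule = {"verdict": verdict}
    if deprioritize_field:
        # Capture the property the analyst flagged, e.g. the external recipient
        rule[deprioritize_field] = alert[deprioritize_field]
    instructions.append(rule)

def needs_attention(alert, instructions):
    """An alert drops out of the attention queue if a 'less urgent' rule matches it."""
    for rule in instructions:
        matches = all(alert.get(k) == v for k, v in rule.items() if k != "verdict")
        if rule["verdict"] == "less urgent" and matches:
            return False
    return True

instructions = []
flagged = {"id": "a7", "recipient": "legal@trusted-partner.com"}
apply_feedback(instructions, flagged, "less urgent", deprioritize_field="recipient")

# A future alert to the same external recipient no longer needs attention
print(needs_attention({"id": "a9", "recipient": "legal@trusted-partner.com"}, instructions))  # False
print(needs_attention({"id": "a10", "recipient": "ceo@unknown.com"}, instructions))           # True
```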
In the Performance tab, I can also see the agent effectiveness trend over time, and below that, a detailed breakdown of units consumed per day. This way, you can reduce your time to resolution even while your team is spread thin. -Now, I focused on the DLP agent today, and similarly, our alert triage agent in Insider Risk Management works on your behalf to create a prioritized alert queue of data incidents by risky users in your organization that require your attention, including evaluating user risk based on historical context and analyzing the user’s activity over weeks or months, whether their actions are intentional or not. In many ways, Purview’s new Alert Triage Agents for DLP and IRM, powered by Security Copilot, reduce the time, effort, and expert resources needed to truly understand the context of your alerts. They work alongside you and the whole team to accelerate and simplify your investigations. To learn more, check out aka.ms/PurviewTriageAgents, subscribe to Microsoft Mechanics if you haven’t yet, and thank you for watching.

Microsoft Purview: New data security controls for the browser & network
Protect your organization’s data with Microsoft Purview. Gain complete visibility into potential data leaks, from AI applications to unmanaged cloud services, and take immediate action to prevent unwanted data sharing. Microsoft Purview unifies data security controls across Microsoft 365 apps, the Edge browser, Windows and macOS endpoints, and even network communications over HTTPS — all in one place. Take control of your data security with automated risk insights, real-time policy enforcement, and seamless management across apps and devices. Strengthen compliance, block unauthorized transfers, and streamline policy creation to stay ahead of evolving threats. Roberto Yglesias, Microsoft Purview Principal GPM, goes beyond Data Loss Prevention. Keep sensitive data secure no matter where it lives or travels. Microsoft Purview DLP unifies controls across Microsoft 365, browsers, endpoints, and networks. See how it works. Know your data risks. Data Security Posture Management (DSPM) in Microsoft Purview delivers a 360° view of sensitive data at risk, helping you proactively prevent data leaks and strengthen security. Get started. One-click policy management. Unify data protection across endpoints, browsers, and networks. See how to set up and scale data security with Microsoft Purview. Watch our video here. QUICK LINKS: 00:00 — Data Loss Prevention in Microsoft Purview 01:33 — Assess DLP Policies with DSPM 03:10 — DLP across apps and endpoints 04:13 — Unmanaged cloud apps in Edge browser 04:39 — Block file transfers across endpoints 05:27 — Network capabilities 06:41 — Updates for policy creation 08:58 — New options 09:36 — Wrap up Link References Get started at https://aka.ms/PurviewDLPUpdates Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. 
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast Keep getting this insider knowledge, join us on social: Follow us on Twitter: https://twitter.com/MSFTMechanics Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics Video Transcript: -As more and more people use lesser known and untrusted shadow AI applications and file sharing services at work, the controls to proactively protect your sensitive data need to evolve too. And this is where Data Loss Prevention, or DLP, in Microsoft Purview unifies the controls to protect your data in one place. And if you haven’t looked at this solution in a while, the scope of protection has expanded to ensure that your sensitive data stays protected no matter where it goes or how it’s consumed with controls that extend beyond what you’ve seen across Microsoft 365. Now adding browser-level protections that apply to unmanaged and non-Microsoft cloud apps when sensitive information is shared. -For your managed endpoints, today file system operations are also protected on Windows and macOS. And now we are expanding detection to the network layer. Meaning that as sensitive information is shared into apps and gets transmitted over web protocols, as an admin, you have visibility over those activities putting your information at risk, so you can take appropriate action. Also, Microsoft Purview data classification and policy management engines share the same classification service. 
Meaning that you can define the sensitive information you care about once, and we will proactively detect it even before you create any policies, which helps you streamline creating policies to protect that information. -That said, as you look to evolve your protections, where do you even start? Well, to make it easier to prioritize your efforts, Data Security Posture Management, or DSPM, provides a 360 degree view of data potentially at risk and in need of protection, such as potential data exfiltration activities that could lead to data loss, along with unprotected sensitive assets across data sources. Here at the top of the screen, you can see recommendations. I’ll act on this one to detect sensitive data leaks to unmanaged apps using something new called a Collection Policy. More on how you can configure this policy a bit later. -With the policy activated, new insights will take up to a day to reflect on our dashboard, so we’ll fast forward in time a little, and now you can see a new content category at the top of the chart for sensitive content shared with unmanaged cloud apps. Then back to the top, you can see the tile on the right has another recommendation to prevent users from performing cumulative exfiltration activities. And when I click it, I can enable multiple policies for both Insider Risk Management and Data Loss Prevention, all in one click. So DSPM makes it easier to continually assess and expand the protection of your DLP policies. And there’s even a dedicated view of AI app-related risks with DSPM for AI, which provides visibility into how people in your organization are using AI apps and potentially putting your data at risk. -Next, let me show you DLP in action across different apps and endpoints, along with the new browser and network capabilities. I’ll demonstrate the user experience for managed devices and Microsoft 365 apps when the right controls are in place. Here I have a letter of intent detailing an upcoming business acquisition. 
Notice it isn’t labeled. I’ll open up Outlook, and I’ll search for and attach the file we just saw. Due to the sensitivity of the information detected in the document, it’s fired up a policy tip warning me that I’m out of compliance with my company policy. Undeterred, I’ll type a quick message and hit send. And my attempt to override the warning is blocked. -Next, I’ll try something else. I’ll go back to Word and copy the text into the body of my email, and you’ll see the same policy tip. And, again, I’m blocked when I still try to send that email. These protections also extend to Teams chat, Word, Excel, PowerPoint and more. Next, let me show you how protections even extend to unmanaged cloud apps running in the Edge browser. For example, if you want to use a generative AI website like you’re seeing here with DeepSeek, even if I manually type in content that matches my Data Loss Prevention policy, you’ll see that when I hit submit, our Microsoft Purview policy blocks the transmission of this content. This is different from endpoint DLP, which can protect file system operations like copy and paste. These Edge browser policies complement existing endpoint DLP protections in Windows and macOS. -For example, here I have the same file with sensitive information that we saw before. My company uses Microsoft Teams, but a few of our suppliers use Slack, so I’ll try to upload my sensitive doc into Slack, and we see a notification that my action is blocked. And since these protections are on the file and run in the file system itself, this would work for any app. That said, let’s try another operation by copying the sensitive document to my removable USB drive. And here I’m also blocked. So we’ve seen how DLP protections extend to Microsoft 365 apps, managed browsers, and file systems. -Additionally, new protections can extend to network communication protocols when sharing information with local apps running against web services over HTTPS. 
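Because the endpoint enforcement described above hooks into file system operations rather than any one application, the block decision can be sketched as a check on the sensitive content detected in the file, regardless of destination. The sketch below pairs a toy classifier (a card-number regex plus a Luhn checksum, a common validation for candidate card numbers) with that destination-agnostic decision; Purview's real classifiers and policy engine are far richer, so treat every name and threshold here as illustrative.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, commonly used to validate candidate card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return total % 10 == 0

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def classify(text: str) -> set:
    """Toy classifier: returns the sensitive info types detected in the text."""
    found = set()
    for m in CARD_RE.finditer(text):
        if luhn_valid(re.sub(r"[ -]", "", m.group())):
            found.add("Credit Card Number")
    return found

def evaluate_egress(file_text: str, destination: str):
    """Block sensitive content leaving via any channel: app upload, USB copy, etc."""
    detected = classify(file_text)
    if detected:
        return ("block", f"{', '.join(sorted(detected))} can't be copied to {destination}")
    return ("allow", None)

doc = "Payment on file: 4111 1111 1111 1111"
print(evaluate_egress(doc, "Slack upload")[0])            # block
print(evaluate_egress(doc, "removable USB drive")[0])     # block
print(evaluate_egress("meeting notes", "USB drive")[0])   # allow
```

Note that the destination string changes only the message, not the verdict, which is the property the transcript highlights: "this would work for any app."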
In fact, here I have a local install of the ChatGPT app running. As you see, this is not in a browser. In this case, if I unintentionally add sensitive information to my prompt, when it passes the information over the network to call the ChatGPT APIs, Purview will be able to detect it. Let’s take a look. If I move over to DSPM for AI in Microsoft Purview, as an admin, I have visibility into the latest activity related to AI interactions. If I select an activity which found sensitive data shared, it displays the user and app details, and I can even click into the interaction details to see exactly what was shared in the prompt as well as what specifically was detected as sensitive information in it. This will help me decide the actions we need to take. Additionally, the ability to block sharing over network protocols is coming later this year. -Now, let’s switch gears to the latest updates for policy creation. I showed earlier setting up the new collection policy in one click from DSPM. Let me show you how we would configure the policy in detail. In Microsoft Purview, you can set this up in Data Loss Prevention under Classifiers on the new Collection Policies page. These policies enable you to tailor the discovery of data and activities from the browser, network, and devices. You can see that I already have a few created here, and I’ll go ahead and create a new one right from here. -Next, for what data to detect, I can choose the right classifiers. I have the option to scope these down to include specific classifiers, or include all except for the ones that I want to exclude. I’ll just keep them all. For activities to detect, I can choose the activities I want. In this case, I’ll select text and files shared with a cloud or AI app. Now, I’ll hit add. And next I can choose where to collect the data from. This includes connected data sources, like devices, Copilot experiences, or Enterprise AI apps. 
The unmanaged cloud apps tab uses the Microsoft Defender for Cloud Apps catalog to help me target the applications I want in scope. -In this case, I’ll go ahead and select all the first six on this page. For each of these applications, I can scope which users this policy applies to as a group or separately. I’ll scope them all together for simplicity. Here I have the option to include or exclude users or groups from the policy. In this case, I’ll keep all selected and save it. Next, I have the option of choosing whether I want AI prompts and responses that are detected to be captured and preserved in Purview. This enables the experience we saw earlier of viewing the full interaction. -Finally, in mode, you can turn the policy on. Or if you leave it off, this will save it so that you can enable it later. Once I have everything configured, I just need to review and create my policy, and that’s it. In addition, as you create DLP policies, you’ll notice new corresponding options. Let me show you the main one. For each policy, you’ll now be asked what type of data you want to protect. First is data stored in connected sources. This includes Microsoft 365 and endpoint policies, which you’re likely already using now. The new option is data in browser and network activity. This protects data in real time as it’s being used in the browser or transmitted over the network. From there, configuring everything else in the policy should feel familiar from other policies you’ve already defined. -To learn more and get started with how you can extend your DLP protections, check out aka.ms/PurviewDLPUpdates. Keep checking back to Microsoft Mechanics for all the latest updates and thanks for watching.

Protect data used in prompts with common AI apps | Microsoft Purview
Protect data while getting the benefits of generative AI with Microsoft Defender for Cloud Apps and Microsoft Purview. Safeguard against shadow IT risks with Microsoft Defender for Cloud Apps, unveiling hidden generative AI applications. Leverage Microsoft Purview to evaluate data exposure, automating policy enforcement for enhanced security. Ensure compliance with built-in data protections in Copilot for Microsoft 365, aligned with organizational policies set in Microsoft Purview, while maintaining trust and mitigating risks seamlessly across existing and future cloud applications. Erin Miyake, Microsoft Purview’s Principal Product Manager, shares how to take a unified approach to protecting your data. Block sensitive data from being used with generative AI. See how to use data loss prevention policies for content sensitivity in Microsoft Purview. Locate and analyze generative AI apps in use. Auto-block risky apps as they’re classified using updated risk assessments, eliminating the need to manually control allowed and blocked apps. See how it works. Create data loss prevention policies. Secure data for generative AI. Steps to get started in Microsoft Purview’s AI Hub. Watch our video here: QUICK LINKS: 00:00 — Secure your data for generative AI 01:16 — App level experiences 01:46 — Block based on data sensitivity 02:45 — Admin experience 03:57 — Microsoft Purview AI Hub 05:08 — Set up policies 05:53 — Tailor policies to your needs 06:35 — Set up AI Hub in Microsoft Purview 07:09 — Wrap Up Link References: For information on Microsoft Defender for Cloud Apps go to https://aka.ms/MDA Check out Microsoft Purview capabilities for AI go to https://aka.ms/PurviewAI/docs Watch our episode on Copilot for Microsoft 365 data protections at https://aka.ms/CopilotAdminMechanics Watch our episode about Data Loss Prevention policy options at https://aka.ms/DLPMechanics Unfamiliar with Microsoft Mechanics? 
As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast Keep getting this insider knowledge, join us on social: Follow us on Twitter: https://twitter.com/MSFTMechanics Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics Video Transcript: -Generative AI with large language models like GPT is fast becoming a central part of everyday app experiences. With hundreds of popular apps now available and growing. But do you know which generative AI apps are being adopted via shadow IT inside your organization? And if your sensitive data is at risk? -Today I am going to show you a unified approach to protecting your data while still getting the benefits of generative AI. With Microsoft Defender for Cloud Apps to help you quickly see what risky generative AI apps are in use and Microsoft Purview to assess your sensitive data exposure so that you can automate policy enforced protections based on data sensitivity and the AI app in use. -Now, this isn’t to say that there aren’t safe ways to take advantage of generative AI with work data right now. Copilot for Microsoft 365, for example, has the unique advantage of data protections built in that respect your organization’s data security and compliance needs. This is based on the policies you set in Microsoft Purview for your data in Microsoft 365. 
-That said, the challenge is knowing which of the generative AI apps people are using inside your organization to trust. What you want is to have policies where you can “set it and forget it” so that existing and future cloud apps are visible to IT. And if the risk thresholds you set are met, they’re blocked and audited. Let’s start with the user experience. Here, I’m on a managed device. I’m not signed in with a work account or connected to a VPN, and I’m trying to access an AI app that is unsanctioned by my IT and security teams. -You’ll see that the Google Gemini app in this case, and this could be any app you choose, is blocked with a red SmartScreen page and a message for why it was blocked. This app-level block is based on Microsoft Defender for Endpoint with Cloud App policies. More on that in a second. Beyond app-level policies, let’s try something else. You can also act based on the sensitivity of the data being used with generative AI. For example, the copy and paste of sensitive work data from a managed device into a generative AI app. Let me show you. -I have a Word document open on the left, which contains sensitive information, and on the right I have OpenAI’s ChatGPT web experience running, and I’m signed in using my personal account. This file is sensitive because it includes keywords we’ve flagged in data loss prevention policies for a confidential project named Obsidian. Let’s say I want to summarize the content from the confidential Word doc. -I’ll start by selecting all the text I want and copying it into my clipboard, but when I try to paste it into the prompt, you’ll see that I’m blocked and the reason why. This block was based on an existing data loss prevention policy for content sensitivity defined in Microsoft Purview, which we’ll explore in a moment. 
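Conceptually, the paste block above is a policy check run against the clipboard content before it reaches the AI app's prompt box. The sketch below uses a made-up policy table keyed on the "Obsidian" project keyword from the demo; real Purview DLP policies match on classifiers and sensitive information types, not bare substring checks, so everything here is an illustrative assumption.

```python
DLP_POLICIES = [
    # Hypothetical policy: flag anything mentioning the confidential project
    {"name": "Project Obsidian", "keywords": {"obsidian"}, "action": "block"},
]

def check_paste(clipboard_text: str, destination: str):
    """Toy check run before a paste lands in an unmanaged AI app's prompt box."""
    text = clipboard_text.lower()
    for policy in DLP_POLICIES:
        if policy["action"] == "block" and any(k in text for k in policy["keywords"]):
            return ("block", f"'{policy['name']}' content can't be shared with {destination}")
    return ("allow", None)

print(check_paste("Project Obsidian Q3 acquisition summary", "chat.openai.com"))
print(check_paste("Team lunch options for Friday", "chat.openai.com"))
```

The first call is blocked with the policy name surfaced to the user, which matches the demo experience of seeing the reason why the paste failed.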
Importantly, these examples did not require that my device use a VPN with firewall controls to filter sites or IP addresses, and I didn’t have to sign into those generative AI apps with my work email account for the protections to work. -So let’s switch gears to the admin perspective to see what you can do to find generative AI apps in use. To get started, you’ll run cloud discovery in Microsoft Defender for Cloud Apps. It’s a process that can parse network traffic logs from most major providers to discover and analyze apps in use. Once you’ve uploaded your networking logs, analysis can take up to 24 hours. That process then parses the traffic from your network logs and brings it together with Microsoft’s intelligent and continuously updated knowledge base of cloud apps. -The reports from your cloud discovery show you app categories, risk levels for visited apps, discovered apps with the most traffic, top entities, which can be users or IPs, along with where various app headquarters are located in the world. A lot of this information is easily filtered, and there are links into categories, apps, and sub reports. In fact, I’ll click into generative AI here to filter on those discovered apps and find out which apps people are using. -From here, you can manually sanction or unsanction apps from the list, and you can create policies to automatically unsanction and block risky apps as they’re added to this category based on continuously updated risk assessments, so that you don’t need to keep returning to the policy to manually add apps. Next, to protect high-value sensitive information, that’s where Microsoft Purview comes in. -And now with the new AI Hub, it can even show you where sensitive information is used with AI apps. AI Hub gives you a holistic view of data security risks in Microsoft Copilot and other generative AI assistants in use. 
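The "set it and forget it" auto-unsanction behavior described above can be sketched as a rule re-applied whenever the catalog's risk assessments update. This sketch assumes the Defender for Cloud Apps convention that risk scores run 1 to 10 with lower meaning riskier; the app records, field names, and threshold are illustrative, not real catalog data.

```python
def auto_unsanction(discovered_apps, category="Generative AI", max_risk_score=3):
    """Toy policy: unsanction risky apps in a category as assessments update.

    Assumes the catalog convention that LOWER scores mean riskier apps.
    """
    blocked = []
    for app in discovered_apps:
        if app["category"] == category and app["risk_score"] <= max_risk_score:
            app["sanctioned"] = False   # endpoint policy would now block it
            blocked.append(app["name"])
    return blocked

catalog = [
    {"name": "TrustedAI", "category": "Generative AI", "risk_score": 8, "sanctioned": True},
    {"name": "ShadyChat", "category": "Generative AI", "risk_score": 2, "sanctioned": True},
    {"name": "CRM Tool", "category": "CRM", "risk_score": 2, "sanctioned": True},
]
print(auto_unsanction(catalog))  # ['ShadyChat']
```

Because the rule runs against the category rather than a fixed app list, newly discovered risky apps are caught without anyone returning to edit the policy, which is the point the transcript makes.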
It provides insights about the number of prompts sent to Microsoft Copilot experiences over time and the number of visits to other AI assistants. Below that is where you can see the total number of prompts with sensitive data across AI assistants used in your organization, and you can also see the sensitive information types being shared. -Additionally, there are charts that break down the number of users accessing AI apps by insider risk severity level, including Microsoft Copilot as well as other AI assistants in use. Insider risk severity levels for users reflect potentially risky activities and are calculated by Insider Risk Management in Microsoft Purview. Next, in the Activity Explorer, you’ll find a detailed view of the interactions with AI assistants, along with information about the sensitive information types, content labels, and file names. You can drill into each activity for more information, with details about the sensitive information that was added to the prompt. -All of this detail is super useful because it can help you fine-tune your policies further. In fact, let’s take a look at how simple it is to set up policies. From the policies tab, you can easily create policies to get started. I’ll choose the “Fortify your data security for generative AI” policy template. It’s designed to protect against unwanted content sharing with AI assistants. -You’ll see that this sets up built-in risk levels for Adaptive Protection. It also creates data loss prevention policies to prevent pasting or uploading sensitive information by users with an elevated risk level. This is initially configured in test mode, but as I’ll show, you can edit this later. And if you don’t have labels already set up, default labels for content classification will be set up for you so that you can preserve document access rights in Copilot for Microsoft 365. -After you review the details, it’s just one click to create these policies. 
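Adaptive Protection, mentioned above, ties DLP enforcement to the user's insider risk level rather than applying one action to everyone. The mapping below is a toy sketch of that idea; the level names follow Purview's minor/moderate/elevated convention, but the specific actions per level are illustrative assumptions you would actually configure in the product.

```python
ADAPTIVE_ACTIONS = {
    "minor": "audit",      # log the activity only
    "moderate": "warn",    # show a policy tip the user can override
    "elevated": "block",   # prevent pasting or uploading sensitive content
}

def enforcement_for(user_risk_level: str) -> str:
    """Pick the DLP enforcement action from the user's insider risk level."""
    return ADAPTIVE_ACTIONS.get(user_risk_level, "audit")

for level in ("minor", "moderate", "elevated"):
    print(level, "->", enforcement_for(level))
```

The fallback to "audit" for unrecognized levels reflects the template's safe default of starting in test mode before enforcement is turned on.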
And as I mentioned, these policies are also editable once they’ve been configured, so you can tailor them to your needs. I’m in the DLP policy view, and here’s the policy we just created in AI Hub. I’ll select it and edit the policy. To save time, I’ve gone directly to the advanced rules option, and I’ll edit the first one. -Now, I’ll add the sensitive info type we saw before. I’ll search for Obsidian, select it, and add it. Now, if I save my changes, I can move to policy mode. Currently I’m in test mode, and when I’m comfortable with my configurations, I can select turn the policy on immediately, and within an hour the policy will be enforced. And for more information about data loss prevention policy options, check out our recent episode at aka.ms/DLPMechanics. -So that’s what AI Hub in Microsoft Purview can do. And if you’re wondering how to set it up for the first time, the good news is when you open AI Hub, once you have audit enabled, and if you have Copilot for Microsoft 365, you’ll already start to see analytics insights populated. Otherwise, once you turn on Microsoft Purview audit, it takes up to 24 hours to initiate. -Then you’ll want to install the Microsoft Purview browser extension to detect risky user activity and get insights into user interactions with other AI assistants, and onboard devices to Microsoft Purview to take advantage of endpoint DLP capabilities to protect sensitive data from being shared. So as I demonstrated today, the combination of Microsoft Defender for Cloud Apps and Microsoft Purview gives you the visibility you need to detect risky AI apps in use with your sensitive data and enforce automated policy protections. -To learn more about implementing Microsoft Defender for Cloud Apps, go to aka.ms/MDA. To learn more about implementing Microsoft Purview capabilities for AI, go to aka.ms/PurviewAI/docs. And for a deeper dive on Copilot for Microsoft 365 protections, check out our recent episode at aka.ms/CopilotAdminMechanics. 
Of course, keep watching Microsoft Mechanics for the latest tech updates, and thanks for watching.