Labeling Files is Worth It | Speed & Protection Benefits in Microsoft Purview
Classify your data, apply clear labels, and enforce protections that automatically adapt to human and AI interactions so you can reduce risk without slowing down workflows. Proactively monitor, assess, and respond to risk in real time. Use labeling and layered policies to stop accidental sharing, manage AI access, and maintain consistent protection across your organization. Matt McSpirit, Microsoft Mechanics expert, joins Jeremy Chapman to share how to turn scattered data into actionable security that moves as fast as your team and AI. Scan your environment beyond standard detection. Identify gaps where AI or big files might expose sensitive data. Get started with Microsoft Purview Information Protection. Reduce the risk of accidental sharing. Label sensitive data, including proprietary and hard-to-detect content, to enforce access controls instantly. See how DLP and IRM work. Act before exposures become incidents. Identify data risks early, prioritize what matters most, and take action to reduce exposure with Microsoft Purview DSPM. QUICK LINKS: 00:00 — Microsoft Purview data protection 01:04 — Data Loss Prevention 03:36 — Layered approach in addition to DLP 04:13 — Unified classification 04:27 — How sensitive data is determined 06:23 — Create trainable classifiers 07:06 — Distinction between classification and labeling 08:06 — Configure policy protections 09:12 — DLP in action 10:10 — IRM in action 10:51 — See how protections show up 13:37 — Move from reactive to proactive protection 15:00 — Wrap up Link References For deeper guidance, go to https://aka.ms/PurviewInformationProtection Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. 
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast Keep getting this insider knowledge, join us on social: Follow us on Twitter: https://twitter.com/MSFTMechanics Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics Video Transcript: - If you don’t understand your data, what it is, where it lives, and how sensitive it is, you can’t protect it. And it’s easy to assume that you’re covered, maybe you’ve already got data loss prevention, or DLP, running with near real-time detection, which is helpful, yes, but it’s not enough. Protecting data today means going beyond what traditional text scanning can catch and making sure that those harder-to-parse file types are covered too. And it also requires a layered approach with instant risk insights, starting with consistent and automatic classification, so everyone’s clear on what’s actually sensitive. Labels that make sensitive content easier to interpret and trigger automatic policies, and Adaptive Protection that responds to the risk level of each user, whether human or non-human, and how they engage with the data. In fact, this matters even more with AI that can now bring hidden or long-forgotten information to the surface in just seconds. Now to walk us through all of this, I’m joined by a Microsoft Mechanics expert, Matt McSpirit. - Thanks, it’s great to be back. - Okay, so before we get into solutions, why don’t we unpack this a bit more. So for a lot of people, even as they adopt AI, there’s this notion that maybe DLP is good enough.
It’s finding things like credit cards, it’s also looking at things like financial information, identity numbers, addresses, et cetera, even if you aren’t paying attention, by the way, to where that information is stored. So is it even worth the extra effort of doing something else? - Well, these are all fair points, and DLP is one powerful piece of the puzzle. And part of its appeal is that it works without the need to label or add any metadata to your content. It’s also rule-based and can look for sensitive information types as they’re being written, read, or sent, and then use what it finds to apply corresponding protections to prevent sharing or contain its sharing radius. - Okay, so what you just said sounds like all upsides. So the policies are relatively easy to configure, they work by default with all your Microsoft 365 and Office apps and your managed devices, as long as people are signed in with them, regardless, really, of where that file goes as well. So what’s the downside? - Well, depending on the scenario, there are a few areas. First, there’s speed of detection and response. Now in this case, I’ll show you an example of DLP in action. I’ll paste in a few thousand words from my clipboard into this Word document. And now DLP will compare it with hundreds of sensitive information types like bank numbers or IDs, dozens of trainable classifiers like contracts or patent applications, and do cross look-ups against exact data match, and more, which, based on physics, orchestration, and query speeds, takes time. And it’s only when the policy tip appears, whether I choose to apply the recommendation or not, that the content is protected. As you can see, I can’t now share this file externally because DLP has found sensitive information. So there’s a window of time, based on a number of factors, for DLP to find sensitive information and apply protection. Next, breadth of coverage is another area.
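As an aside, the detection step Matt walks through above, pattern matching plus validation against a sensitive information type, can be sketched at a toy level. This is a rough, hypothetical illustration in Python, not Purview's actual (and far more sophisticated) detection engine; the function names are invented for the example:

```python
import re

def luhn_valid(digits: str) -> bool:
    # Luhn checksum, the validity test behind many card-number detectors
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # every second digit, counted from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_like_numbers(text: str) -> list[str]:
    # Step 1: cheap pattern match (13-16 digits, optional space/dash separators)
    # Step 2: checksum validation to cut false positives
    hits = []
    for m in re.finditer(r"\b(?:\d[ -]?){13,16}\b", text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

A real sensitive information type also weighs supporting evidence nearby (keywords like "card" or "expiry") to raise confidence, which is part of why scanning a large file for hundreds of these types takes time.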
You might have file types that can’t be scanned for text easily, like these files synced on my OneDrive location. These are proprietary file types from line-of-business apps as well as 3D CAD files. So in this case, you’d need a different way to identify the sensitivity of these files and protect the container of the files themselves, like you can see with this rights-protected document using the ARC Add File extension. - And that makes a lot of sense. You know, even though compute and detection are getting faster, if you’ve got a hundred-page document, or maybe a massive spreadsheet, with passport numbers or similar things buried in it, it’s going to take significant time to find that sensitive info. - Right, and if we add AI to the picture, which needs to orchestrate access to data across multiple data sources to respond in milliseconds, this isn’t the optimal approach when speed of response counts. And that’s where a layered approach comes in. In addition to your policy engines like DLP, it’s important to augment what you’re doing with unified data classification. It gives you a broader, persistent understanding of sensitive data across your environment so that it’s easy to assess your data risk and then add sensitivity labeling to your data security strategy. This way, DLP can immediately act on an existing signal rather than having to evaluate everything from scratch each time. - Okay, so why don’t we go deeper then on unified classification as part of this layered approach. - So this actually gets to the heart of the problem. Over time, as data keeps growing and shifting, different teams and tools have ended up defining sensitive data in their own ways, and it’s hard to know where all that data lives. No one really intends for the inconsistency, it just happens, and you’re left with a patchwork view of your data instead of one clear picture.
And that’s why the first step is giving everything that works with your data, whether that’s your users, AI, or your apps and policy engines, a single consistent way to recognize what’s important. So here in Data Explorer, Microsoft Purview has already identified sensitive data across my environment automatically. This reflects a unified data classification approach that discovers your sensitive data wherever it lives. I didn’t build any rules for this. This discovery happens automatically. And if I drill in, I can see exactly where these files are, even preview the files to see the content in question and easily understand why they were identified as sensitive. - And there’s really a lot to it that’s powering this classification. So what is Purview then looking at to determine if there’s sensitive information there? - Right, there’s a lot happening under the covers. Purview uses two main built-in classification methods. First, sensitive information types that detect specific regulated data such as credentials, IDs, or financial numbers, with more than 300 built-in detection patterns. And second, more than a hundred pre-trained classifiers that understand broader categories of content like budgets, HR files, or source code. These classifiers are built using Microsoft’s domain expertise and training data sets to recognize common business content categories. Additionally, how fresh your data is also matters to Purview. Purview evaluates new and modified content, automatically analyzing the data with the latest classifications and policies so that your most recent data is well understood and has the latest protections. And if you want to evaluate data that hasn’t been accessed recently, you can run on-demand classification to scan data at rest, helping you uncover sensitive data that might otherwise be overlooked. - And building on what you said, Matt, you know, you can also teach Purview to recognize content that’s unique to your organization.
For example, you can create your own trainable classifiers by providing real sample content. You just have to point it to a SharePoint site with 50 to 500 files of matching content. Or you can use exact data match for structured data comparisons against exact text strings. Think of things like code names, or maybe specific customer, partner, or competitor names, and more. And Purview, it also supports fingerprinting for things like standard forms or templates so that they’re recognized even if the wording changes. Of course, classifications can trigger protections once they’re paired with active policies. - Right, and interestingly, labels can also trigger protection policies. - And we should really unpack this a bit more, because I think a lot of people watching probably make the mistake of conflating classification and labeling as being one and the same thing. - It’s a common mistake, but there is an important distinction. In fact, there’s an easy way to think about this. Think of data classification as recognizing what your data is. It’s about understanding the sensitive information that’s present in your data. And data labeling is the simple-to-understand wording along with your intent for how the data should be handled. For example, a confidential/do not forward label needs no complex explanation on how you should handle the data if you’re the user. And on the backend, Purview quietly protects the data based on how you’ve defined protections associated with that label, like access restrictions or watermarking. And the bonus is that this guidance and protection travels with the data. And you can set labels up in Microsoft Purview Information Protection. This lets you create sensitivity labels like these to define how different types of data should be classified. Once you’ve done that, you can configure policy protections that are triggered by those labels, such as encryption, limiting the sharing radius or visual markings, and more.
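To make the exact data match idea above a bit more concrete: EDM compares content against salted hashes of your sensitive values, so the plaintext list never has to sit in the scanning service. Here is a loose, hypothetical sketch of that hash-lookup pattern in Python; the class, salt, and normalization are invented for the example and are not the Purview implementation:

```python
import hashlib

TENANT_SALT = "example-salt"  # placeholder; real EDM uses per-tenant salted hashing

def _hash_value(value: str) -> str:
    # Normalize, then hash, so only hashes are stored and compared
    normalized = value.strip().lower()
    return hashlib.sha256((TENANT_SALT + normalized).encode("utf-8")).hexdigest()

class ExactDataMatcher:
    """Toy EDM: flags tokens that exactly match a protected value."""

    def __init__(self, sensitive_values: list[str]):
        self._hashes = {_hash_value(v) for v in sensitive_values}

    def scan(self, tokens: list[str]) -> list[str]:
        # Return tokens whose hash matches a known sensitive value
        return [t for t in tokens if _hash_value(t) in self._hashes]
```

Because matching is exact (after normalization), this catches things like account IDs or code names without the fuzziness, or the false positives, of a pattern-based detector.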
And when used in tandem with DLP, you can even prevent Copilot from processing labeled content. Next, with your labels created, you can publish them so they appear in apps like Word, Excel, PowerPoint, and Outlook, and are honored across services like Fabric, Dataverse, and of course, as I mentioned, Copilot. All of what I’ve shown you is included with most versions of Microsoft 365. And with Microsoft 365 E5, you can even set up auto labeling, so Purview can apply labels automatically when it detects sensitive content. - So labels are respected across all those destinations. - That’s right, and once a label is applied, it’s recognized across supported workloads, and Purview solutions like DLP, Insider Risk Management, and more, know how to handle that data properly. So instead of stitching together separate tools, each with its own definition of sensitive data, you define sensitivity only once. And that same signal drives consistent protection wherever the data travels to. In fact, let me show you how this works in practice. So here in DLP, I’m going to create a policy based on what Purview has already automatically discovered across SharePoint and OneDrive. From the Insights card, you can see the top sensitive information types like medical, IP and trade secrets, financial data, and medical identifiers. So I’ll get started, then choose to create all of the recommended policies. Now, if I go back to my DLP policies view and look at the ones I’ve just created, you’ll see that there are four new policies. If I click in to edit one, you’ll notice that Purview has already preselected the right conditions with trainable classifiers and actions predefined for the policy. And from there, I can even add to this policy. In this case, I’ll add my confidential labels to the policy. These are the same ones I’ve shown before. 
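As an aside, the instant enforcement that labels enable, acting on a persistent signal instead of re-scanning content, can be sketched abstractly. This is an invented illustration, not Purview code; the label names and scan logic are placeholders:

```python
import time
from typing import Optional

BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

def slow_content_scan(text: str) -> bool:
    # Stand-in for evaluating classifiers and sensitive info types, which takes time
    time.sleep(0.01)
    return "passport" in text.lower()

def can_share_externally(label: Optional[str], text: str) -> bool:
    # Label-first: an existing label is an instant, persistent signal
    if label in BLOCKED_LABELS:
        return False
    # Only unlabeled content falls back to the slower content scan
    return not slow_content_scan(text)
```

This is the speed advantage of layering labels on top of DLP: the policy decision for labeled files never has to wait on content analysis.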
So in short, classification identifies the sensitive content, and the conditions being met will then trigger the corresponding policies to enforce protections. This reduces configuration effort and ensures consistency across your environment. And in Insider Risk Management, labels work as risk signals too. So here in the policy template, I’m adding a condition that focuses on activity involving items labeled confidential. And that way, if users, including non-human agents, exfiltrate or misuse high-value labeled data, printing it, copying it to external storage, or sharing externally, IRM will automatically elevate their risk score based on the activities against the labeled data. So labels also help enforce protections based on the risk profile of whoever is acting, whether that’s a human user or a non-human AI agent, and their activities with the data. This is what we call Adaptive Protection. - Okay, so now we’ve got all of our policies in place. Why don’t we see how those protections show up in the flow of work, including AI interactions? So first I’m going to upload the same file that Matt showed before, but this time, it has a confidential label applied. So when I try to share it externally, you can see that I’m blocked instantly because that label is detected right away. DLP blocks the action based on the label, and this, again, is before that file could be scanned for sensitive information. Now I’m going to switch desktops. On the left here is a window with a synced folder in File Explorer. And you can see that there are proprietary file types and CAD files like we saw before, and each is labeled but cannot be analyzed for sensitive information types or classifiers. So with the labels applied to these encrypted .pfile files, as they are, if I do try to drag and drop a file into my removable USB drive location in the window on the right, you’ll see I get a data loss prevention notification.
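The risk-score mechanics just described can also be sketched at a toy level. The activity weights and access thresholds below are invented for illustration; real Insider Risk Management scoring considers far more signals:

```python
from dataclasses import dataclass

# Hypothetical weights for risky activities against labeled data
RISK_WEIGHTS = {"print_labeled": 10, "copy_to_usb": 30, "share_externally": 40}

# Hypothetical access thresholds per site sensitivity
THRESHOLDS = {"general": 100, "confidential": 50, "highly_confidential": 25}

@dataclass
class Principal:
    """A human user or non-human agent whose risk profile adapts over time."""
    name: str
    risk_score: int = 0

    def record(self, activity: str) -> None:
        # Each risky activity against labeled data bumps the score
        self.risk_score += RISK_WEIGHTS.get(activity, 0)

    def can_access(self, site_sensitivity: str) -> bool:
        # Adaptive Protection idea: the higher the risk, the less sensitive
        # the content a principal may reach
        return self.risk_score < THRESHOLDS[site_sensitivity]
```

The key property is that the decision adapts without an administrator in the loop: a single exfiltration attempt changes what the same principal can open next.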
Now because in this case, I’m under the file count threshold that we set before in policy, I can allow or override this, but I would’ve been blocked outright if I had transferred multiple files. Now again, the labels on these uncommon file types are what triggered the data loss prevention policy. And Insider Risk Management is also watching for risky handling of labeled content. For example, I can currently access this highly confidential acquisition site and see all the documents contained within it, for the moment. That said, though, because I just attempted to copy confidential information to my external USB drive, that’s going to catch up with me and automatically change my risk profile. So now after some time has passed, if I try to access that same site, I’m blocked outright and denied access. The protection automatically adapted to my heightened risk profile and blocked the site, without the administrator even needing to take any action. And by the way, the same assessment against risk profile would happen if it was an AI agent and it tried to do the same thing. And beyond agents, why don’t we look at label protection, and how that works in general with AI. So here I’m in Copilot and I have a document uploaded to SharePoint. So I’ll prompt Copilot to summarize the file named Relecloud Acquisition, and you’ll see that Copilot will first check the user’s permissions and the presence of a label before it does anything. Now, because this document is labeled as highly confidential and we have a DLP policy in place to block Copilot from processing sensitive files, it tells me that it can’t summarize that content because of its sensitivity label. - So from creation to risky behavior and even Copilot interactions, the same sensitivity label ensures consistent protection. But the work is never really done. New data keeps coming and risk changes over time.
That’s where Purview’s Data Security Posture Management, or DSPM, comes in: because you’ve already classified your data, it can continually assess your data risk. It’s deeply integrated across Microsoft and beyond, giving you one centralized place to discover unprotected sensitive data across your entire digital estate, including select non-Microsoft services. Built-in intelligence continually assesses data risk to help you prioritize and mitigate high-risk exposures, taking advantage of recommendations where you can strengthen your policy directly from DSPM itself. AI observability features also give you granular insight into what agents are doing and any risk they may introduce. And custom reports make it easy to embed posture management into daily operations by highlighting where to improve. - And this is all built to help you then move from reactive investigation to more proactive and measurable risk reduction. - Exactly, and actually, this is just scratching the surface of what Purview can do. You can also use AI itself to manage human and AI data risk using deep-reasoning Purview agents. For example, they can triage alerts and automatically message users in Teams with the sensitive data found and the actions they need to take. - Okay, so as you saw, there are lots of ways that this layered approach goes beyond traditional DLP protection. So where can everyone who’s watching right now learn more? - Well, first, check out aka.ms/PurviewInformationProtection. Again, if you use Microsoft 365 in your organization, you’ll have Microsoft Purview today, and you can get the more advanced Purview capabilities with Microsoft 365 E5. So it’s worth exploring further. So start using unified classification and labels today. - Thanks, Matt, and thank you for joining us. Be sure to subscribe to Microsoft Mechanics if you haven’t already, and we’ll see you next time.

Data Security Investigations in Microsoft Purview
Search across massive volumes of files using natural language, pinpoint the highest risk content, and connect it to user activity to see the full scope of an incident. Investigate and act in one workflow. Analyze content deeply across files, emails, and AI interactions, uncover hidden or unclassified sensitive data, and contain exposure fast. Proactively identify risks, respond to incidents with clarity, and mitigate impact before it spreads. Christophe Fiessinger, Microsoft Purview Principal Squad Leader, joins Jeremy Chapman to walk through real-world investigation workflows — from scoping and analysis to mitigation and automation — so you can move faster and make more informed security decisions. Pinpoint high-risk files. Locate files hidden among hundreds of confidential documents using contextual search. See how Microsoft Purview Data Security Investigations works. Search thousands of files in seconds. Use natural language queries to uncover relevant sensitive data. Get started with Microsoft Purview Data Security Investigations. Contain data leaks immediately. Purge exposed files while retaining investigation evidence. Take action with Microsoft Purview Data Security Investigations. QUICK LINKS: 00:00 — Keep data safe with DSI 01:26 — Connect dots between data risk & impact 02:47 — Built-in AI 03:47 — Work across the full lifecycle of an incident 04:56 — Create an investigation 06:36 — Deep search and analysis 09:03 — How DSI helps data leaks 10:40 — Contain risk with built-in mitigation 11:32 — Automate using agents 13:23 — Estimator tool 14:57 — Wrap up Link References As a Microsoft Purview admin, just go to https://purview.microsoft.com/dsi Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. 
Video Transcript: - If you’ve ever had to respond to a major data breach, insider-driven data theft, or even a suspicious leak involving high-value information, you know the hardest part isn’t just detecting the activity, it’s understanding what data was actually taken, how valuable it is, and what risks that creates to your organization. Today we’re going to show you how the now generally available Microsoft Purview Data Security Investigations, or DSI, dramatically accelerates that process, using AI to read, analyze, and connect the dots fast at massive scale. I’m joined by Christophe Fiessinger from the Microsoft Purview team to demonstrate more. Welcome. - Thanks, Jeremy. Happy to be here. - Thanks so much for joining us today. So most IT teams that I speak to, they’re often using things like SIEMs or incident management tools that connect activity across compromised accounts, devices, and files when they’re responding to things like security events. But these tools, they rarely reveal what’s affected in terms of the files and what’s contained in them. They might show labels, they might show file names or basic metadata like the location or the owner. - Exactly. Beyond labels and metadata, it’s all about context.
Metadata gives you the file name, classification might tell you it’s a financial document, and the label might say it’s confidential, but traditional tools can’t really tell you what’s in the content and how much risk it exposes. They just tag the content, they don’t explain it. - So how does DSI then change things? - So DSI, on the other hand, doesn’t just say it’s a confidential financial document. In fact, you might have hundreds of those. Instead, it actually reads and understands each file and the data risks it poses. So of the hundred or so finance documents classified confidential, it can find the one file that carries an existential threat to your company, like the one that contains your entire customer list with the unique credentials that each customer uses to log in to your online service. In DSI, that level of insight comes from hybrid vector search and generative AI working together. Hybrid vector search can pick up on semantically similar items, synonyms, or the subtle ways people hide sensitive information while also matching precise text strings like code names or account numbers. In short, it finds the right files by combining context with keyword precision, then generative AI takes over and actually analyzes those files. It performs deep content analysis to uncover sensitive data, security risks, and relationships hidden inside the impacted documents. - So it’s removing a ton of manual effort by connecting the dots around the data risk and also its impact. - That’s right. DSI helps you rapidly understand and mitigate the downstream impact. You can start large-scale data investigations and use natural language search to find and narrow in on impacted data. From there, you can leverage our powerful built-in AI to deeply analyze content, files, email, Teams messages, and even review and analyze prompts and responses from AI apps and agents built in Microsoft Foundry and Copilot Studio, as well as non-Microsoft agents and apps, at scale.
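The hybrid search Christophe describes, semantic similarity fused with exact keyword matching, can be sketched at a toy scale. Everything here is illustrative: real DSI uses learned embeddings and a production ranking pipeline, not hand-made two-dimensional vectors or a fixed blend weight:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def hybrid_search(query_text: str, query_vec: list[float],
                  docs: list[tuple], alpha: float = 0.5) -> list[str]:
    # docs: (doc_id, text, embedding); embeddings would come from a model
    scored = []
    for doc_id, text, vec in docs:
        keyword = 1.0 if query_text.lower() in text.lower() else 0.0  # precision signal
        semantic = cosine(query_vec, vec)                             # context signal
        scored.append((alpha * keyword + (1 - alpha) * semantic, doc_id))
    # Rank by the blended score, best first
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]
```

The blend is the point: the keyword term keeps exact strings like code names from being missed, while the vector term surfaces paraphrases and the subtle ways people hide sensitive information.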
DSI is able to establish the context around information and even detect obscure sensitive information that might not have been flagged. It can reason over dozens of major world languages with production-grade quality. And it can directly mitigate identified risk. For example, if specific high-value content has been distributed to multiple users, you can purge every instance of those files. With DSI, you can also work on data investigations more efficiently across the full lifecycle of an incident with the rest of your team. As part of Microsoft Purview, you can trigger investigations directly from Data Security Posture Management to dig deeper into data that’s at risk and see how valuable it is. The same goes for Insider Risk Management, where you might want to understand larger sets of data being used by risky users or agents. Equally, DSI also provides a useful bridge to your security operations team, who can start DSI investigations directly from Microsoft Defender XDR. And because DSI is now integrated with the Microsoft Sentinel graph, data security analysts can connect at-risk information to the activities around it, who accessed it, where it was shared, whether behaviors like compromised sessions or impossible travel were involved, and visually correlate risky content, users, and their activities. It automatically combines unified audit logs, Entra audit logs, and threat intelligence, which would otherwise need to be manually correlated. - That’s a really powerful solution. Can you show us an example of an investigation? - Let me show you Data Security Investigations and where to quickly find all your current and future investigations. From the main Data Security Investigations overview, you’ll find everything you need to get started: identifying content, deeply analyzing what’s contained in that content, and mitigating risk, as well as access to all of your previous investigations so you can quickly pick up where you left off and create new investigations from here.
You can start an investigation in a few ways. Sometimes proactively, using DSI to assess potential data security risks, or other times reactively, like when you already know data has leaked and you need to investigate the breach. In this case, I’m going to start this investigation from Data Security Posture Management to get ahead of data risk in our environment. One of the most common types of data leaks is exfiltration of confidential information. Like if an employee moves on to a competitor with trade secrets or a seller wants to bring their client list to their new job. Here I can see a recommended objective to prevent exfiltration to risky destinations. Once I click to view objectives, I can see the amount of data exfiltrated, top sources, as well as file types, and I can see an action to create a new investigation using DSI. Here I just need to give it a name, then provide some context about what I’m trying to do in this investigation like, “I’m looking into confidential data that may have been exfiltrated from my organization. I’m specifically looking for confidential and proprietary information about Project Obsidian, the new release we’re working on.” Now I’ll confirm and create the investigation. From here, I can put in the rest of the parameters for deeper search and analysis. In the investigation, I can see a summary about the investigation and from here I can refine the search scope and make changes to the date range and people if I want, which will keep things more efficient. And if I need to, I can always add more data sources to the scope. I’ll keep the data source as is and hit add to scope. This brings the content from the data source into our investigation. Now I can further analyze the data and I can use a natural language query. And as mentioned, DSI can analyze dozens of languages as part of the process.
There are a few intelligent search suggestions, but I’m going to do my own search for “information disclosed to customers about project obsidian.” And in just a few seconds I’ll get information assessing exactly what I’m looking for based on my search criteria. It finds over a thousand items with a lot of different languages represented, as you can see. On the left, the AI also suggests content categories based on the executed vector search so that it’s easy to organize and make sense of the amount of risk per category. So I’ll filter all those files down using the Obsidian category, and there they are. From here I can select which ones I want to deeply analyze. I’ll choose all of them in this case and hit examine. And here, to choose the focus area for the investigation, I can look for credentials, analyze risk, and get mitigation recommendations. I’m going to choose risk in my case so that I can act quickly to contain the risk and hit examine one more time to kick off the process. As it works, I can view its details. This is where AI runs deep content analysis against all the content in these files by looking at the file content itself. This goes beyond common sensitive information types and trainable classifier matches. And depending on the number and size of the files that you have in scope for this, it could take a few moments to run. And you’ll see that it found relevant results, each with an assessment of whether it’s privileged content, plus overall security risk scores and a risk explanation. I can drill into any of these to preview the content inline, like this Microsoft 365 Copilot chat message. Moving back, I can also see other risk scores and explanations for credentials in the right-hand columns. - So DSI in this case uncovered a lot of what we call dark data. These are files that were never classified, which is great then for getting ahead of risk, but leaks do eventually happen.
And when they do, we need a way to see exactly what got out and how to contain it. - That happens pretty often, unfortunately. Let me show you a case where credentials were leaked externally as part of a security breach and how DSI helped. And to show you the integration for SecOps teams with Microsoft Defender XDR, I’ll start from an active incident for data exfiltration in this case. In the incident view, you get the high-level signals, the attack timeline, which users and devices were hit, and the file names involved. But we still don’t know what was actually inside those files and what earlier activities might have set up the attack or created additional risk across other files. So from the action menu, I’ll create a DSI investigation right from this open incident to find out more about the content in those files. Here I just need to give it a name, then paste in a description and some additional context like I did before for the AI. Then I’ll create the investigation and then it links me directly to an investigation in Microsoft Purview. Like before, I can see a summary and refine the search scope if I want. This time I’m going to fast forward a few steps for scoping the data source and examining the content and just go right to the examination results. Here you can see the subject or title of each item, extracted credentials, including usernames, passwords, and more, credential types including API tokens and MFA, a surrounding snippet or the text around the credential details for context, and the thought process with a summary of the AI reasoning. Next, I also want to show the built-in mitigation. We can actually purge the sensitive files that were forwarded around by email to contain the damage without touching the original copy, so we keep the evidence. From the results, I’ll select the items I want, then I’ll choose add to mitigation, which will in turn create a list of files and messages containing those credentials.
From the list I’ll select purge queue, then view the messages and run the purge, where I can choose from a recoverable soft purge or permanent deletion with a hard purge. I’ll keep the default and confirm the purge. Then all the information matching that query will be deleted in minutes. And since these files are part of the investigation, they stay retained for review but are hidden from end users. And safeguards like in-place holds for eDiscovery still work normally, so protected files aren’t removed. - Okay, so far we’ve defined all the investigations up front. Is there maybe a way to automate the process using agents? - Absolutely. We’re adding new capabilities to help tackle a major hidden risk, credentials buried in everyday files. While Microsoft Purview DLP protects credentials in real time as files are created or shared, the Data Security Posture Agent powered by Security Copilot helps security teams identify and prioritize credential-related risks across scoped data locations. Here you can see that I’ve already enabled the agent and there are a few tasks in progress. These can be started manually or run on a schedule. I’ll start a new assignment for this agent and create a credential scanning task. We’ll be adding more task types to this over time. I can give it a name or keep what’s there. Then add some additional context, in this case, to look for credentials and passwords. Then I can view its progress as it completes scanning data locations and access patterns, analyzing risky documents, and generating the report. The agent works autonomously, scanning thousands of locations and potentially millions of files. I’m going to move over to a scan I ran earlier to save some time. Once the agent completes its scan, you’ll see a prioritized list of exposed credentials such as passwords, API keys, encryption keys, tokens, and more, each with a risk score and the agent’s reasoning.
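The agent’s scan uses AI content analysis and reasoning, but the general shape of credential detection with risk scoring can be sketched in a few lines. Everything below is a simplified illustration with made-up patterns and risk weights, not the agent’s actual logic:

```python
import re

# Hypothetical detection patterns -- the real agent reasons over content
# with AI rather than relying on regular expressions alone.
CREDENTIAL_PATTERNS = {
    "api_token": re.compile(r"\b(?:sk|pat|ghp)_[A-Za-z0-9]{20,}\b"),
    "password_assignment": re.compile(r"(?i)password\s*[:=]\s*\S+"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

# Illustrative risk weights per credential type (assumed values).
RISK_WEIGHTS = {"api_token": 8, "password_assignment": 5, "private_key": 10}

def scan_text(text: str) -> list[dict]:
    """Return findings with type, a surrounding snippet, and a toy risk score."""
    findings = []
    for cred_type, pattern in CREDENTIAL_PATTERNS.items():
        for match in pattern.finditer(text):
            start = max(0, match.start() - 20)
            findings.append({
                "type": cred_type,
                # Keep surrounding text for context, like the snippet column.
                "snippet": text[start:match.end() + 20],
                "risk": RISK_WEIGHTS[cred_type],
            })
    # Highest-risk findings first, mirroring the prioritized list in the UI.
    return sorted(findings, key=lambda f: f["risk"], reverse=True)
```

In practice you would feed this the extracted text of each file in scope; the agent additionally records its reasoning for each finding, which a plain pattern match cannot do.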
From there, I can group the results into categories, then filter for the highest risk credentials. For each credential found, I can explore the details of the credential itself plus its surrounding context. - It’s a huge advantage really to run these types of credential scans at scale to catch those risks. But why don’t we switch gears to the human-led investigations. DSI uses pay-as-you-go billing, so people watching this are probably wondering: how do I keep these investigations in check without breaking the bank? - So costs, as you say, are usage based and billed through Azure. They’re going to vary depending on the size and complexity of your investigation. So we’ve introduced a new estimator tool to help. Before I go there, as a baseline to see the compute units I’ve been using until now, I’ll start in the pay-as-you-go dashboard in DSI, and then filter by our last investigation. This one only used about 250 megabytes and 109 DSI compute units, which is quite conservative. So let’s go back to the DSI overview tab and scroll down to our new estimate cost tool. This lets you input key values like investigation size in gigabytes and the number of vector searches, and it will estimate cost based on what you enter. It shows you the cost breakdown by type for size and AI usage. And the last related control I want to show you is in Azure Cost Management, where like any other Azure service, you can see forecasted and accumulated costs. I’ll filter this by my DSI shared view. In this chart, you’ll see the investigation compute and gigabytes by day as well as a forecast. So, voila, you’ve got what you need to investigate deeply with AI and keep costs in check while staying ahead of incidents. And we’re only getting started. More integration, smarter AI, new mitigation actions, and more agentic workflows are on the way. - Thanks so much for joining us today, Christophe.
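As a rough picture of how an estimator like that works, you can model it as rate-per-unit arithmetic. The rates below are made-up placeholders, not Microsoft’s actual DSI pricing; check Azure pricing for real meters:

```python
# Hypothetical per-unit rates -- placeholders only, not actual DSI pricing.
RATE_PER_GB = 1.50              # assumed cost per gigabyte analyzed
RATE_PER_VECTOR_SEARCH = 0.10   # assumed cost per vector search

def estimate_cost(size_gb: float, vector_searches: int) -> dict:
    """Break an estimate down by type, like the DSI estimate cost tool does."""
    size_cost = size_gb * RATE_PER_GB
    ai_cost = vector_searches * RATE_PER_VECTOR_SEARCH
    return {
        "size_cost": round(size_cost, 2),
        "ai_usage_cost": round(ai_cost, 2),
        "total": round(size_cost + ai_cost, 2),
    }
```

For example, `estimate_cost(0.25, 3)` would model a small investigation like the 250-megabyte one above; the real tool applies the actual meters and also reflects compute units consumed during examination.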
And if you want to learn more about DSI and try it out for yourself, as a Microsoft Purview admin, just go to purview.microsoft.com/dsi. And keep watching Microsoft Mechanics for the latest updates. We’ll see you again soon.

Automate Data Security Triage & Posture | Agents in Microsoft Purview
Cut through alert noise and focus on the risks that matter with Agents in Microsoft Purview. Use the Data Security Triage Agent to prioritize incidents, investigate user activity with full context, and uncover hidden patterns that signal real threats. Identify and act on high-risk behavior, like data exfiltration or persistent access, before it leads to data loss. Detect sensitive data across your environment using natural language with the Data Security Posture Agent. Analyze content to find what’s exposed, apply protections or restrict access, and surface hidden credentials, so you can take action and continuously reduce risk. Michelle Slotwinski, Microsoft Purview Senior Product Manager, shares how to stay ahead of data risk by turning investigation into proactive protection. Find it. Prioritize it. Fix it. Investigate risks with the Data Security Posture + Triage Agents in Microsoft Purview. Start here. From reactive to ready. Uncover sensitive data, focus on what matters most, and reduce risk with the Data Security Posture and Triage Agents in Microsoft Purview. Take a look. Reduce risks before they’re exposed. Identify hidden passwords, API keys, and credentials buried in files with the Data Security Posture Agent credential scanning capability. Check it out. QUICK LINKS: 00:00 — Reduce data risks 00:59 — Data Security Triage Agent 01:46 — Investigate risks 03:29 — Detect patterns 05:17 — Uncover nested insights 07:44 — Credential scanning 09:03 — Wrap up Link References https://aka.ms/AgentsinPurview
Video Transcript: -Data has always moved fast. What’s new is how many places it can show up and how fast tools like AI can surface it. In the next few minutes, I’ll show you how to rapidly identify and reduce your data risks as information flows across more apps, agents, and workflows than ever, using the power and speed of AI itself. This is all made possible with the latest Data Security Agents in Microsoft Purview, which work alongside you to reduce the burden of managing the surge in risks from human and AI activity, enabling rapid identification of what truly needs your attention while letting you proactively perform deep content analysis to uncover sensitive data at risk, including credentials and secrets that may be deeply hidden within your data. -And we are constantly evolving these agents to meet your everyday needs, removing manual work, and taking care of the busy work for you, while surfacing context-related insights based on their ability to deeply understand the data in your environment. In Microsoft Purview, you can explore agents from the left navigation. Like most analysts, I’ll start the day by reviewing alerts, and so I’ll begin with the Data Security Triage Agent. This agent can triage alerts for both Data Loss Prevention and Insider Risk Management.
-I’m interested in the ones for Insider Risk, so I’ll open it. Here are all my triaged alerts. And I can see the agent has triaged and prioritized my alert queue down from 200 alerts to 40 that need my attention. There’s more happening under the hood than it seems. Powered by new advanced AI reasoning, the Data Security Triage Agent can process tens of thousands of activity logs at scale to add context and boost investigation accuracy. In fact, you can now see this in the richer insights that are packed into every alert. To show you, I’ll click into this alert for a data leak associated with a departing employee and view details. First, the summary tells me why this alert is highly risky. It’s flagging a highly privileged departing user, a senior engineer in fact, because it’s observed their pattern of accessing, archiving, and exfiltrating both business and personal files using multiple methods. It’s highlighting key activities: Bulk archive to export data to removable media, observed external sharing to a SharePoint Online site, and Access to Sensitive Files. -Notably, their last working day is recorded as March 31st and the alert was generated on March 27th, so we still have a few days to act before they leave our organization. Let’s dig deeper into Bulk archive creation. The summary tells me that high-value engineering assets were included. The device and IP address are indicated along with the time this activity occurred: March 23rd. And although the agent hasn’t detected any sensitive information, it has discovered file sensitivity labels. Files have both been archived and copied to removable media. And under details, we can see file counts, names, and types. If we filter on this activity, there’s even more detail. We can see the mix of personal and business files that the engineer has taken. In fact, let’s dig into one of them.
I’ll click into the top Engineering designs file where we can see even more detail about the activity, including who performed it with their UPN, jsmith, location details, device details, and more. So using the Data Security Triage Agent for Insider Risk saves time compared to manual investigations. It also helps prevent important details from falling through the cracks by catching less obvious patterns too. -In this second pattern, Observed External User Added to SharePoint Online Site, the agent was able to pick up on the fact that the tech-savvy engineer was able to establish persistence to SharePoint resources by adding their personal Gmail account as an external member of the SharePoint site. This way, they would still have access to team resources even if their work account was deprovisioned. By detecting this behavioral pattern, the agent can infer user intent, something that traditional signals alone would have missed, especially considering that content on the SharePoint site did not contain classic sensitive information or match existing classifiers that would normally trigger protection policies. So the agent helps catch those edge cases. It lays out its findings for your validation and escalates the alert to contain the risk. In fact, here’s how advanced AI reasoning works. -Under the hood, instead of one monolithic agent, it’s designed to intelligently plan investigation tasks and orchestrate multiple specialized sub-agents. Each sub-agent is an expert in a distinct capability or skill domain, retrieving information like inferred user intent, decomposing complex tasks, and understanding compliance as well as associated data risks, and more. Results are then presented as Triaged Alerts so that you can quickly see what is important in your environment. Now I mentioned that as an analyst, you’re in control of validating agent outputs and taking action. Let me show you what that experience looks like.
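A highly simplified sketch of that plan-and-orchestrate pattern looks like the following. The sub-agent names, signals, scoring weights, and threshold are all illustrative assumptions, not Purview internals:

```python
# Each sub-agent specializes in one skill domain. The scoring logic below is
# an illustrative stand-in for the agent's AI reasoning.
def intent_subagent(alert: dict) -> int:
    # Infer intent: a departing user moving data looks deliberate.
    return 3 if alert.get("departing_user") else 0

def sensitivity_subagent(alert: dict) -> int:
    # Weigh data risk by how many labeled files are involved.
    return 2 * len(alert.get("labeled_files", []))

def exfiltration_subagent(alert: dict) -> int:
    # Multiple exfiltration channels (USB, external sharing, ...) raise risk.
    return len(alert.get("channels", []))

SUB_AGENTS = [intent_subagent, sensitivity_subagent, exfiltration_subagent]

def triage(alerts: list[dict], threshold: int = 4) -> list[dict]:
    """Orchestrate sub-agents, then surface only alerts worth an analyst's time."""
    for alert in alerts:
        alert["score"] = sum(agent(alert) for agent in SUB_AGENTS)
    return [a for a in alerts if a["score"] >= threshold]
```

The real orchestration plans which sub-agents to invoke per alert and produces an explanation alongside the verdict; the point here is only the shape of the fan-out-and-aggregate design that trims a 200-alert queue down to the handful that matter.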
You can quickly and easily filter the activities within a risk pattern, and then preview the content inline within the investigation so you don’t need to traverse your intranet to view files, like this SharePoint document, to see why it was flagged. And ultimately, you’ll confirm if the agent findings are true positives. -Next, our Data Security Posture Agent helps us to go further by uncovering nested insights for specific users, groups, or sites. And it lets you stay ahead of data risks by finding sensitive data across your estate through natural language discovery. It uses large language models for contextual analysis. And beyond simple keywords or classifiers, it identifies real risk based on the purpose and context of content, which is often deeply hidden within files. And it also recommends actions. If you recall, our Triage agent found a key insight. Our engineer user, jsmith, was observed downloading key files, like Engineering designs, to his local device. Notably, the file wasn’t labeled. So next, I want to do a deep analysis of the content under his account using the Data Security Posture Agent. The first thing I need to do is scope the discovery to our user, Joshua Smith, and to their specific mailbox, which comprises their email, Teams chats, and Copilot interactions, and we’ll select Site to investigate their OneDrive. -Next, I’ll prompt the Posture Agent: “Find me all the files for this user that contain engineering architecture designs, programming code, or technical documentation.” And this operation can take a few moments to a few hours depending on the amount of data that the agentic process needs to analyze. The agent performs deep content analysis, reasoning over the file content and going beyond keywords and pre-defined data types. It understands context and whether, in this case, valuable architectural designs, code, or technical specs are present and exposed.
Once it’s complete, the Data Security Posture Agent summarizes the number of files that match the prompt I entered. It’s found 16 files, 4 of which are not labeled, so let’s dig in further and view insights. Notice it hasn’t found any email or Teams messages or Copilot interactions. And you can see at the top that the Engineering designs file is one of the files without a label. As I scroll, I can see another three unlabeled files below. -Because the agent was able to deeply analyze the content within these files, it saved me from the manual effort of doing this myself. I can now take action by individually selecting these files and applying a label. I’ll choose this one for Highly confidential. This label will trigger a related policy to restrict downloading the files or external sharing to user accounts outside of our organization, like the user jsmith’s personal Gmail account that we uncovered before. Next, let’s dig further into the content. Let’s see if any of these files contain additional secrets, like passwords or credentials, that could further put us at risk in the wrong hands. For that, we’ll use the new credential scanning capability of the Data Security Posture Agent, which can autonomously surface credentials buried in data across your organization. -The first thing I need to do is create a Credential Scanning Task. I’ll give it a name based on our scan and scope its data source to the Project Abacus SharePoint Site, which, if you remember, our user Joshua Smith had persistent access to via his personal Gmail account. And I can also provide more context, because we want to see if he has hidden credentials in any of the content on this site that might give him access to other services and infrastructure. -With the task created, the agent will now scan that site using the same AI analysis that powers our Data Security Investigations solution.
When the agent completes its scan, we can review its results and see a prioritized list of exposed credentials, such as private keys, Entra credentials, and API tokens, each with a risk score and the agent’s reasoning. Once it’s finished, it’s easy to review the agent’s findings and drill into source content to see the discovered credentials inline. And of course, from there, you can take action to disable access to files containing credentials. -So, that’s how Data Security Agents in Microsoft Purview work alongside you to remove manual work, while surfacing hard-to-find context-related insights. And the good news is that if your organization has Microsoft 365 E5 or E7, you’ll have access to these agents included as part of your license. If not, they are also available on a consumption basis. To learn more and get started, check out aka.ms/AgentsinPurview. Keep watching Microsoft Mechanics for the latest tech updates, and thanks for watching.

New Agents in Microsoft Purview
Use the Data Security Triage Agent to cut through alert overload, eliminate false positives, and immediately understand which Insider Risk or DLP incidents need your attention. Stay in control with automated user outreach and clear, contextual reasoning behind every alert. Use the Data Security Posture Agent to uncover risks that hide behind context with natural-language queries. When issues are found, apply labels and trigger security policies right from the insight, helping you proactively prevent data loss. Powered by Security Copilot, these agents give you a faster, smarter, more efficient way to manage data security. Cut through alert overload with AI-driven triage. Elevate only alerts that matter to save time and sharpen focus. Get started with the Data Security Triage Agent in Microsoft Purview. Pinpoint where sensitive data needs immediate protection. Ask natural-language questions to reveal data risks across Outlook, Teams, Copilot, SharePoint, OneDrive, and AI interactions. Check it out. QUICK LINKS: 00:00 — Agents in Microsoft Purview 00:44 — Data Security Triage Agent 01:48 — Data Security Posture Agent
Video Transcript: Whether you’re an admin focused on strengthening your organization’s data security posture, or an analyst concerned with mitigating immediate data risks, the new AI-powered Data Security Agents in Microsoft Purview simplify the process. They work alongside you to ease the burden of identifying and addressing the increased risks from the growing volumes of human and automated agentic activity that use your organization’s data. Guided by your feedback, they don’t just react, they help you proactively improve your security posture while enabling more rapid identification and mitigation as data risks unfold. As you start your day, the Data Security Triage Agent is your AI-powered assistant for managing insider risk management and data loss prevention alerts. It sifts through your alert queue, using advanced reasoning to establish context, assessing sensitive information flagged by policies, and eliminating false positives, taking care of the busy work for you. It surfaces the highest-priority alerts that truly need your attention, and provides clear reasoning behind its decisions, including details about the data owner, or last user involved in the incident.
Then it goes a step further, autonomously contacting associated users in Microsoft Teams with details on the sensitive information found and recommended actions. It tracks progress intelligently, nudging users as often as you define, helping you to remediate imminent risks faster. And as an analyst, you maintain full control with visibility into agent impact and the actions taken over time. Next, the Data Security Posture Agent lets you explore, in natural language, how well your high-value data is protected across sources like Outlook Mailboxes, including Teams Chats, as well as SharePoint and OneDrive. When you submit a query, AI-powered intent analysis goes beyond keywords and predefined data types to uncover risk factors rooted in context, revealing where data is truly at risk and needs protection. Built-in policy control then lets you apply human logic to label files and trigger corresponding security policies to proactively prevent data loss. These agents in Microsoft Purview are powered by the Security Copilot platform, and are ready for you to try today.

New Data Security Posture Management | Microsoft Purview
Identify sensitive files, understand emerging data risks, and focus remediation efforts where they matter most without slowing down productivity. You can also remediate oversharing, enforce data loss prevention policies, and monitor AI agent activity with full visibility into their interactions with sensitive data. Talhah Mir, Microsoft Purview Partner GM, shares how to take control of your data security posture, act on top priorities, and build a sustainable discipline for protecting your organization’s information at scale. One place to manage all of your data security posture. Target the most critical data risks instantly. Check out the new DSPM solution in Microsoft Purview. Stop oversharing. Safeguard sensitive data fast in Microsoft 365 Copilot with DSPM’s one-click policies. Take a look at Microsoft Purview DSPM. Gain control over AI-driven automation. Prevent agents from introducing hidden data risks. See how it works with DSPM. QUICK LINKS: 00:00 — Unified solution with DSPM 01:48 — Day-to-day DSPM use 03:36 — Prevent oversharing 05:52 — AI observability 07:42 — Longer-term view of DSPM 08:25 — How to get DSPM working in your org 09:28 — Wrap up Link References Try it out at https://aka.ms/DSPM
Video Transcript: -The more secure your data, the more confidently you can adopt and scale AI and agents across your organization. But it’s easier said than done, especially if you’re using multiple tools just to discover what data is in use, and your risk across different services today. And AI agents just exacerbate the challenge because they can interact with your data and produce outcomes exponentially faster than everyday users, making it harder to respond at equivalent speed. And to not get in the way of productivity, both human and AI, you can’t just lock everything down. You need to be able to dynamically apply data protection based on risky activity. This is where the newly expanded Microsoft Purview Data Security Posture Management, or DSPM for short, changes everything. Deeply integrated across the Microsoft ecosystem and beyond, it provides a single, unified solution for discovering sensitive data across your digital estate, including from non-Microsoft services. -Built-in intelligence continuously evaluates your data risk, isolating the areas that pose the greatest risk and that deserve the most attention right now. Integrated and adaptive protection, based on both human and non-human risky activity, lets you remediate policy gaps directly within DSPM, in just a few clicks.
Agents in Purview can then autonomously work alongside you and help you to explore how well your data is protected across specific scenarios. -Powerful new AI observability capabilities then give you granular visibility into agent activity with a first-time view into how much risk agents may be introducing into your organization. And custom reports help you to embed posture management into your daily operations by pinpointing areas to strengthen. Even if you haven’t configured a single policy in Microsoft Purview, as I’ll show you in the quick onboarding steps, you’ll be able to use DSPM out of the box. -But first, I’ll start with a tour of how you can use DSPM as part of your day to day. By design, the experience is organized to speed up your understanding of data risks at play and what to do about them. You can start by interacting directly using suggested Copilot prompts, or work your way down the dashboard where at a glance you can see key posture metrics for data discovery based on the percentage of classified or labeled files, data protection, which is a measure of the percentage of activity covered by existing policy, and data investigation with the percentage of alerts that have been triaged. Emerging data risks are succinctly presented to you at a glance, and you can quickly see available agents to explore your data risk further. Next, top objectives guide you on what data risk scenarios need priority attention across your environment. We’ll go deeper on this one in a second. -Then, in the data snapshot, data exposure can also be categorized by services and across different platforms in use inside your environment. Additionally, we help you to quickly understand your organization’s data exposure based on its recency. Stale data flags data that was last accessed or updated over a year ago and needs closer attention. Fresh data, on the other hand, which is higher in volume, indicates data that has been updated or accessed in the past year.
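The three dashboard percentages described above can be pictured with a small sketch. The toy data model below is an assumption for illustration; the real metrics come from Purview’s audit, classification, and alert pipelines:

```python
def posture_metrics(files: list[dict], activities: list[dict], alerts: list[dict]) -> dict:
    """Compute the three DSPM dashboard percentages from toy inventories."""
    def pct(num: int, den: int) -> float:
        return round(100 * num / den, 1) if den else 0.0
    return {
        # Data discovery: share of files that are classified or labeled.
        "discovery": pct(sum(f["labeled"] or f["classified"] for f in files), len(files)),
        # Data protection: share of activity covered by an existing policy.
        "protection": pct(sum(a["policy_covered"] for a in activities), len(activities)),
        # Data investigation: share of alerts that have been triaged.
        "investigation": pct(sum(a["triaged"] for a in alerts), len(alerts)),
    }
```

Low discovery with high activity volume is the classic signal that auto-labeling coverage needs attention, which is exactly what the top objectives surface.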
Finally, the chart at the bottom reflects the 30-day trends in your organization’s data security posture specific to overshared and exfiltrated items. So you can start your day with a custom and comprehensive assessment of trending data risk. -Let’s go back to the priority objective highlighted to prevent oversharing of sensitive data, which has even more gravity given the rise of AI. Clicking into see all objectives brings me to the complete list of recommended objectives by risk area in order of priority. At the bottom, I have a few with a healthy green status and a few above those that clearly need attention. They each reflect an outcome-based approach that I can follow through to remediation. I’ll view the top objective on oversharing to see why it has been prioritized. And I can see data oversharing trends at a glance over time. More than 30,000 files are currently at risk of oversharing, and there are metrics for how many sensitive files are unlabeled and externally shared. Importantly, risk patterns break down why this objective is something to focus on. -This chart shows overshared sensitive data tied to top Microsoft 365 data sources, and we can see the site name in SharePoint plus the total number of potentially overshared items categorized by how they were shared. DSPM is recommending a data loss prevention policy to protect sensitive data referenced in Microsoft 365 Copilot. This will restrict Copilot access to only labeled documents and emails. It will operate in simulation mode so that I can initially test and tune this policy and enforce it when I’m ready. I’ll hit apply to get everything going. Once that’s run, after some time, when you return to the dashboard, you’ll be able to see the outcome of the objective. Our oversharing objective is no longer a priority; we’re in a healthier green state. Files at risk of oversharing have now halved. And prevent data exposure in Microsoft 365 Copilot interactions has now shifted to be our top priority.
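The simulate-then-enforce flow mentioned above is worth pausing on: in simulation mode a policy logs what it would have blocked without actually blocking it, so you can tune it safely before enforcement. A toy model of that behavior (an illustrative object, not the Purview policy API) looks like this:

```python
from dataclasses import dataclass, field

@dataclass
class DlpPolicy:
    """Toy DLP policy: restrict Copilot access to labeled items only."""
    name: str
    simulation_mode: bool = True
    simulated_blocks: list = field(default_factory=list)

    def evaluate(self, item: dict) -> bool:
        """Return True if access to the item is allowed."""
        violates = not item.get("label")  # unlabeled content would be blocked
        if violates and self.simulation_mode:
            # In simulation, record what *would* be blocked, then allow it,
            # so the policy can be tested and tuned before enforcement.
            self.simulated_blocks.append(item["name"])
            return True
        return not violates
```

Reviewing `simulated_blocks` before flipping `simulation_mode` off is the toy equivalent of tuning the policy in DSPM and enforcing it when you’re ready.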
-This time I’ll click in to directly view the remediation plan, and I can see a timeline of when I can expect to see impact once I take action. There are a number of default policies in place along with a few recommended policies. In fact, this one is a brand new Data Loss Prevention control that works during Copilot interactions to restrict sensitive information types from being processed during AI reasoning or used as part of web search, and so we can select and apply it. Now, I’ve shown you the new outcome-focused experience for resolving top objectives. -Next, let’s switch gears to look at AI observability. Agents can introduce unique risks that differ from human users. They could have more privilege to perform tasks and access and consume sensitive files across multiple systems at a faster rate than humanly possible. Just as we do for humans, we can now apply risk levels to your agents based on their data activity. Here you can see a full inventory of agents working across your organization, how many are high risk, and the total with sensitive interactions, followed by a breakdown of individual agents and their risk level along with their status. These reflect the policies that you have in place to govern agents. This first agent is risky, but it’s still active, so let’s take a closer look. It’s a new Microsoft Agent 365 agent, which uniquely gives me deeper visibility into its activity. -The good news is it’s now been quarantined, so it’s not discoverable by users. We can see the knowledge and tools it can access, policy coverage, the agent owner, and its agent identity. Below we can see the agent risk level, risky activity matches, and their categories. Finally, there are also recommended actions to take. Of course, your agents will reference data across your digital estate. Here in asset explorer, you can see a unified view of unlabeled or classified data by workload.
Beyond Microsoft 365 and Azure, data is also coming in from Salesforce, Databricks, Snowflake, and others. This is made possible by direct integration with Microsoft Sentinel data lake. -And this level of visibility will continue to expand as we grow our ecosystem of partner solutions with deep insights on specific data sources. That said, beyond in-depth and dynamic insights into your data risk, DSPM also helps you to take a longer-term view of Data Security Posture Management as a sustainable discipline inside your organization. Nine new reports help you to build your organizational muscle for DSPM in key areas, from data protection hygiene with data sensitivity labels and activity, to specific policy coverage and risky activity by both users and AI. I’ll click into this one for auto-labeling policy coverage, and I can quickly see key metrics with a detailed bird’s eye view of what sensitive information types are being discovered and automatically labeled, and where we’re missing opportunities to enforce auto-labeling. -Now, if you’re wondering what it takes to get DSPM working in your organization, if you’re using Microsoft 365 E5 now, you have access to DSPM already. Setup is simple. From the Microsoft Purview portal, once you’ve navigated to the DSPM Solution, you just need to click get started. There are two service prerequisites for unified auditing and insights, as well as collection policies for AI, that you’ll need to have enabled for everything I’ve shown you today to work. -Then, all you need to do is hit start setup, and that’s it, you’re ready to go. Depending on the size of your tenant, the service will take a day or so to start bringing in the data to generate insights. Integrating DSPM with partner solutions is also straightforward. From the setup tasks, you’ll select extend your insights with data discovery. Then, you’ll connect your Sentinel Workspace if that hasn’t already been done.
Configure Sentinel data lake as the place to ingest logging data, and connect to available partner solutions like Snowflake and Salesforce using Sentinel connectors. In fact, soon you’ll be able to configure protections to those platforms directly from DSPM. -Whether you’re managing data risk from employees, AI agents, or third-party platforms, the newly expanded DSPM gives you a single solution for discovery and remediation. To try it out, visit aka.ms/DSPM. And if you’re already using classic DSPM solutions, you can easily switch to the new experience and get back to the classic ones under solutions. Subscribe to Microsoft Mechanics for the latest AI and security updates, and thank you for watching!
Secure your AI apps with user-context-aware controls | Microsoft Purview SDK
With built-in protections, prevent data leaks, block unsafe prompts, and avoid oversharing without rewriting your app. As a developer, focus on innovation while meeting evolving security and compliance requirements. And as a security admin, gain full visibility into AI data interactions, user activity, and policy enforcement across environments. Shilpa Ranganathan, Microsoft Purview Principal GPM, shares how new SDKs and Azure AI Foundry integrations bring enterprise-grade security to custom AI apps. Stop data leaks. Detect and block sensitive content in real time with Microsoft Purview. Get started. Adapt AI security based on user roles. Block or allow access without changing your code. See it here. Prevent oversharing with built-in data protections. Only authorized users can see sensitive results. Start using Microsoft Purview. QUICK LINKS: 00:00 — Microsoft Purview controls for developers 00:16 — AI app protected by Purview 02:23 — User context aware 03:08 — Prevent data oversharing 04:15 — Behind the app 05:17 — API interactions 06:50 — Data security admin AI app protection 07:26 — Monitor and Govern AI Interactions 08:30 — Wrap up Link References Check out https://aka.ms/MicrosoftPurviewSDK Microsoft Purview API Explorer at https://github.com/microsoft/purview-api-samples/ For the Microsoft Purview Chat App go to https://github.com/johnea-chva/purview-chat
Video Transcript: -You can now infuse the data security controls that you’re used to with Microsoft 365 Copilot into your own custom-built AI apps and agentic solutions, even those running in non-Microsoft clouds. In fact, today I’ll show you how we are helping developers and data security teams work together to prevent some of the biggest challenges around data leaks, oversharing, and compliance during AI interactions so that you can start secure with code integrated controls that free you up and make it seamless for you as a developer to focus on building secure apps and agents while knowing that potential users and their activities with work data will be kept secure.
-All of which is made possible with Microsoft Purview controls built into Azure AI Foundry, along with the new developer SDK that can be used to protect data during AI interactions, where protections can vary based on specific user context, even when apps are running in non-Microsoft clouds. This ultimately helps the data in your apps and agents stay secure as policies evolve, while giving you as a security admin the visibility to evolve protections against leaks and risky insiders, maintain control of your data, prevent data oversharing to unintended recipients, and govern AI data in compliance with your industry and regional requirements by default. This approach makes it simple for you as a developer to translate the requirements of your data security teams as you build your apps using the Microsoft Purview SDK. -In fact, let me show you an example of an AI app that’s protected by Microsoft Purview. This is an AI-powered company chat app. It’s a sample that you can find on GitHub, and it’s using Azure AI Foundry services on the backend for the large language model, and Cosmos DB to retrieve relevant information based on a user’s prompt. I’m signed in as a user on the external vendor team. -Now, I’m going to write a prompt that adds sensitive information with a credit card, and immediately, I see a response that this request and prompt violates our company’s sensitive information policy, which was set in Microsoft Purview, so our valuable information is protected. But the real power here is that the controls are user context aware too. It’s not just blocking all credit cards because there are easier ways to do that in code or with system prompts. Let me show you the same app without code changes for another user. I’m logged in as a member of the Customer Support Engineering team and I’m allowed to interact with credit card numbers as part of my job, so I’m going to write the same prompt.
Now I’ll submit it, and you’ll see the app generates an appropriate response. And nothing changed in the app. The only change was my user context. -And that was an example of a prompt being analyzed prior to sending it to the application so that it could generate a response. Let me show you another example that proactively prevents data oversharing based on the information retrieval process used by the app. I’m still logged in with the user’s account on the Customer Support Engineering team, and I’ll prompt our app to send me information for recent transactions with Relecloud with payment information to look at a duplicate charge. This takes a moment, looks up the transaction information in our Cosmos DB backend, and it’s presenting the results to me. -In this case, access permissions and protections have been applied using Microsoft Purview to the backend data source. And because our user account has permissions to that information, they received the response. This time, I’m signed in again as a user on the external vendor team. Again, I’ll write the same prompt, and because I shouldn’t and do not have access to retrieve that information, the app tells me that it can’t respond. Again, it is the same app without any code changes and my user context prevented me from seeing information that I shouldn’t be able to see. As a developer, these controls are simple to integrate into your app code and you don’t need to worry about the policies themselves or which user should be in scope for them. -Let me show you. This is the code behind our app. First, you can see that it’s registered with Microsoft Entra to help connect the app with both organizational policies and the identity of the user interacting with the app for user context so that it can apply the right protection scope. This is all possible by using the access tokens once the user has logged in. 
The app then establishes the API connection with Microsoft Purview to look at the protection scopes API, as well as the process content API, so that it can check whether the submitted prompt or the response is allowed or not based on existing data security access and compliance policies. Based on what’s returned, the app either continues or informs the user of the policy violation. -Now that you’ve seen what’s behind the app, let me show you the actual API interactions between our app and Microsoft Purview. And for that, I’ll use a sample code that we’ve also published to GitHub to view the raw API responses in real time. This is the Purview API Explorer app. This is connected to the Microsoft Graph as you can see with the Request URI. I can use it to view protections and even view how content gets processed in real time, which I’ll do here. Once the user logs in, you’ll see that with the first API for protection scopes, the application will send the user content and application token, as well as the activities that the app supports, like upload text and download text, as noted here, for our prompts. -Once the request is sent to the API, Purview responds back to the application to tell it what to do. In this case, for uploading and downloading text. The application will wait for Purview’s response prior to displaying it back to the user. Now I’ll go to Start a Conversation. And on the left in the Request Body, you can see my raw prompt again with sensitive information contained in the text along with other metadata properties. I’ll send the request. On the right, I can see the details of the content response from the API. So in this case, it found a policy match and responded with the action RestrictedAccess and the restriction action to block. That’s what you’d need to know as a developer to protect your AI apps. -Then as a data security admin, for everything to work as demonstrated, there are a few things you’ll need configured in Microsoft Purview. 
First, to protect against data loss of sensitive or high value information like I showed using credit cards, you will need data loss prevention policies in place. Second, to help prevent oversharing with managed database sources like I showed from Cosmos DB, which also works with SQL databases, you’ll configure Information Protection policies. This ensures that your database instances are labeled with corresponding access protections applied. Then for visibility into activities with your connected apps, all prompt and response traffic is recorded and auditable. And for apps and agents running on Azure AI Foundry, it’s just one optional setting to light up native Microsoft Purview integration. -In fact, here’s the level of visibility that you get as a data security admin. In DSPM for AI, you can see interactions and associated risks from your AI line-of-business apps running on Azure and other clouds once they are enlightened with Microsoft Purview integration. Here you can see user trends, applicable protections, compliance, and agent count. And across the broader Microsoft Purview solutions, all activity and interactions from your apps are also captured and protected, including Audit Search, so that you can discover all app interactions, Communication Compliance for visibility into inappropriate interactions, and Insider Risk Management as part of activities that establish risk. Integrating your apps with Microsoft Purview’s SDK provides the control to free you up and make it seamless for you as a developer to focus on building secure apps and agents. At the same time, as the data security admin, it gives you continuous visibility to ensure that AI data interactions remain secure and compliant. -To learn more, check out aka.ms/MicrosoftPurviewSDK. We’ve also put links to both sample apps in the description below to help you get started. 
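To make the developer flow described above concrete, here is a rough sketch of an app submitting a prompt to the process content API and acting on the verdict before showing anything to the user. This is a hedged illustration, not the authoritative SDK contract: the endpoint path and the payload and response field names (policyActions, restrictionAction, and so on) are assumptions modeled on what the API Explorer demo displayed, so verify them against the docs at aka.ms/MicrosoftPurviewSDK before relying on them.

```python
# Hypothetical sketch of the prompt-check flow shown in the demo.
# Endpoint path and field names are illustrative assumptions, not the
# authoritative Purview SDK contract.
import json
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/beta"  # assumption: Graph beta endpoint


def process_content(token: str, user_prompt: str) -> dict:
    """Send the user's prompt to the process content API using the
    signed-in user's access token, so the verdict is user-context aware."""
    body = {
        # "uploadText" mirrors the activity name shown in the API Explorer
        "activity": "uploadText",
        "contentEntries": [{"content": user_prompt}],
    }
    req = urllib.request.Request(
        f"{GRAPH_BASE}/me/dataSecurityAndGovernance/processContent",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def is_blocked(response: dict) -> bool:
    """Interpret the policy verdict: block when any returned action is
    restrictAccess with a 'block' restriction, as in the demo response."""
    for action in response.get("policyActions", []):
        if (action.get("action") == "restrictAccess"
                and action.get("restrictionAction") == "block"):
            return True
    return False
```

As in the demo, the app would wait for this verdict and either continue to the model or inform the user of the policy violation; a fuller implementation would also call the protection scopes API first so it only checks content for activities that are actually in scope for the signed-in user.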
Keep checking back to Microsoft Mechanics for the latest updates, and thank you for watching.
Introducing Microsoft Purview Alert Triage Agents for Data Loss Prevention & Insider Risk Management
Surface the highest-risk alerts across your environment, no matter their default severity, and take action. Customize how your agents reason, teach them what matters to your organization, and continuously refine to reduce time-to-resolution. Talhah Mir, Microsoft Purview Principal GPM, shows how to triage, investigate, and contain potential data risks before they escalate. Focus on the most high-risk alerts in your queue. Save time by letting Alert Triage Agents for DLP and IRM surface what matters. Watch how it works. Stay in control. Tailor triage priorities with your own rules to focus on what really matters. See how to customize your alert triage agent using Security Copilot. View alert triage agent efficiency stats. Know what your agent is doing and how well it’s performing. Check out Microsoft Purview. QUICK LINKS: 00:00 — Agents in Microsoft Purview 00:58 — Alert Triage Agent for DLP 01:54 — Customize Agents 03:32 — View prioritized alerts 05:17 — Calibrate Agent Behavior with Feedback 06:38 — Track Agent Performance and Usage 07:34 — Wrap up Link References Check out https://aka.ms/PurviewTriageAgents
Video Transcript: -Staying ahead of potential data security threats and knowing which alerts deserve your attention isn’t just challenging. It’s overwhelming. Every day, your organization generates an increasing and enormous volume of data interactions, and it’s hard to know which potential risks are slipping through the cracks. In fact, on average, for every 66 new alerts logged in a day, nearly a third are not investigated because of the time and effort involved. And this is exactly where automation and AI in Microsoft Purview can make a material difference. With an agent-managed alert queue that, just like an experienced tier 1 analyst, sifts through the noise to identify Data Loss Prevention and Insider Risk Management alerts that pose the greatest risks to your organization, letting you focus your time and efforts on the most critical risks to your data. -Today, I’ll show you how the agents in Microsoft Purview work, the reasoning they use to prioritize alerts, and how to get them running in your environment. I’ll start with Alert Triage Agent for DLP. I’m in the Alerts page for Data Loss Prevention. You’ll see that just for this small date range, I have a long list of 385 active alerts.
Now, I could use what’s in the Severity column to sort and prioritize what to work on first, clicking each, analyzing the details, which policies were triggered, and then repeating that process until I’ve worked my way through the list over the course of my day. And even then, I wouldn’t necessarily have the full picture. To save time, I ended up deprioritizing low and medium severity alerts, which still could present risks that need to be investigated, but it doesn’t have to be this way. -Instead, if I select my Alert Triage Agent view, I can see it’s done the work to triage the most important alerts, regardless of severity, that require my attention. There’s a curated list of 17 alerts for me to focus in on. And if you’re wondering if you can trust this triage list of alerts to be the ones that really need the most attention, you remain in control because you’re able to teach Copilot what you want to prioritize when you set up your agent. Let me show you. I’m in the Agents view and I’ll select the DLP agent. And if this is your first time using the agent, you’ll need to review what it does and how it’s triggered. In fact, it lists what it needs permissions for as it reasons over each alert. This includes your DLP configuration, reading incoming activity details and corresponding content, and then storing your feedback to refine how it will triage DLP alerts. -Next, you can move on to deployment settings. Here, you can choose how the agent runs or triggered and select the alert timeframe. The default is last 30 days. From there, I’ll deploy the agent. You’ll see that it tells me the next step is to customize it before it begins triaging alerts. This takes a little while to provision, and once it’s ready, there’s just one more step. Back in Alerts, I need to customize the agent. Here, I can enter my own instructions as text to help the agent prioritize alerts based on what’s important to my organization. 
For example, I can focus it on specific labels or projects, which can be modified over time. -Next, I can select the individual policies that I want to focus the agent on. I’m going to select all of these in this case, then hit Save. Once I hit Review, it generates custom classifiers and rules specific to what I’ve asked the agent to look for. Then I just need to start the agent, and that’s the magic behind the agent-prioritized queue that I showed you earlier. So now, once the agent is ready, instead of trying to find that needle in our haystack of 385 alerts, I can just hit the toggle button to view the prioritized alerts from the Alert Triage Agent. Notice I’m not losing any of the alert details from before. It’s just presented as a triaged and prioritized queue, starting with the top handful of alerts that need my immediate attention with less urgent and not categorized alerts available to view in other tabs. -I’ll focus on what needs attention and click into the top one to see what the agent found. The Agent summary tells me that there are 25 files and eight with verified policy matches. Data includes credit cards and bank account numbers shared using SharePoint. Below that, you’ll see the sensitivity risk for each shared file, the exfiltration risk related primarily to the files containing financial data, and the policy risk. And I could see in this case, the DLP policy was triggered, and the user was allowed to share without restrictions. In the Details tab, you’ll notice that the alert severity is set to low based on the policy configuration, but the triage agent, much like a human analyst, can render a verdict taking the entire context into account. Clicking into view details, I can find more information, including the related assets, where I can see each of the corresponding names, trainable classifiers if defined, and sensitive information types. I’ll scroll back up and show you one more tab here. -In Overview, I can see the user associated with the alert.
Turns out this is an important policy match to prioritize. Labels on 18 highly sensitive files were downgraded, and they were shared without proper restriction. The user was warned and chose to proceed. I can now work on containing the risk and improving related policy protections to prevent future incidents like this one. Let’s continue to work through our prioritized alert queue, and you can see I’m now left with six. I’ll click into the first one. It’s a policy match for business-critical files containing financial and legal information. This is credit card information and a legal agreement in the shared content. That said, this happens to be a close partner of our company that typically handles this type of information, so it’s not important. And to prevent this and future similar alerts from being flagged as needing my attention, I can calibrate the agent’s response based on what matters to me. Kind of like you would teach a junior member of your team. So, in this alert categorization, I’ll click Change to add more context about why I disagree with this categorization so that other recipients from that domain are deprioritized. -In the details pane, I’ll change it to less urgent and add another property to deprioritize these types of alerts. In this case, I’ll add the external recipient email address. And after I hit submit, this will be added to the agent’s instruction set to further refine its rationale for prioritization. In fact, here in our list of what needs attention, you’ll see that the alert is no longer on the list. That’s how easy it is to get the agent to work on your behalf. And once you’ve been using the agent, at any time you can view its progress. In the Agent Overview, I can see my deployed agents and trending activities. If I click into my Data Loss Prevention Agent, I can see details about its recent activities.
In the Performance tab, I can also see the agent effectiveness trend over time, and below that, a detailed breakdown of units consumed per day. This way, you can reduce your time to resolution even while your team is spread thin. -Now, I focused on the DLP agent today, and similarly, our alert triage agent in Insider Risk Management works on your behalf to create a prioritized alert queue of data incidents by risky users in your organization that require your attention, including evaluating the user risk based on historical context, as well as analyzing the user’s activity over weeks or months to help evaluate their risk, whether they’re intentional or not. In many ways, Purview’s new Alert Triage Agents for DLP and IRM, powered by Security Copilot, reduce the time, effort, and expert resources needed to truly understand the context of your alerts. They work alongside you and the whole team to accelerate and simplify your investigations. To learn more, check out aka.ms/PurviewTriageAgents, subscribe to Microsoft Mechanics if you haven’t yet, and thank you for watching.
Data security for agents and 3rd party AI in Microsoft Purview
With built-in visibility into how AI apps and agents interact with sensitive data — whether inside Microsoft 365 or across unmanaged consumer tools — you can detect risks early, take decisive action, and enforce the right protections without slowing innovation. See usage trends, investigate prompts and responses, and respond to potential data oversharing or policy violations in real time. From compliance-ready audit logs to adaptive data protection, you’ll have the insights and tools to keep data secure as AI becomes a part of everyday work. Shilpa Ranganathan, Microsoft Purview Principal Group PM, shares how to balance AI innovation with enterprise-grade data governance and security. Move from detection to prevention. Built-in, pre-configured policies you can activate in seconds. Check out DSPM for AI. Monitor risky usage and take action. Block risky users from uploading sensitive data into AI apps. See how to use DSPM for AI. Set instant guardrails. Use DSPM for AI to identify AI agents that may be at risk of data oversharing and take action. Get started. QUICK LINKS: 00:00 — AI app security, governance, & compliance 01:30 — Take Action with DSPM for AI 02:08 — Activity logging 02:32 — Control beyond Microsoft services 03:09 — Use DSPM for AI to monitor data risk 05:06 — ChatGPT Enterprise 05:36 — Set AI Agent guardrails using DSPM for AI 06:44 — Data oversharing 08:30 — Audit logs 09:19 — Wrap up Link References Check out https://aka.ms/SecureGovernAI
Video Transcript: -Do you have a good handle on the data security risks introduced by the growing number of GenAI apps inside your organization? Today, 78% of users are bringing their own AI tools, often consumer grade, to use as they work and bypassing the data security protections you’ve set. And now, combined with the increased use of agents, it can be hard to know what data is being used in AI interactions to keep valuable data from leaking outside of your organization. -In the next few minutes, I’ll show you how enterprise grade data security, governance, and compliance can go hand in hand with GenAI adoption inside your organization with Data Security Posture Management for AI in Microsoft Purview. This single solution not only gives you automatic visibility into Microsoft Copilot and custom apps and agents in use inside your organization, but extends visibility into AI interactions happening across different non-Microsoft AI services that may be in use. Risk analytics then help you see at a glance what’s happening with your data with a breakdown of the top unethical AI interactions, sensitive data interactions per AI app, along with how employees are interacting with apps based on their risk profile, either high, medium, or low.
And specifically for agents, we also provide dedicated reports to expose the data risks posed by agents in Microsoft 365 Copilot and maker-created agents from Copilot Studio. And visibility is just one half of what we give you. You can also take action. -Here, DSPM for AI provides you with proactive recommendations to help you take immediate action to enhance your data security and compliance posture right from the service using built-in and pre-configured Microsoft Purview policies. And with all AI interactions audited, not only do you get the visibility I just showed, but the data is automatically captured for data lifecycle management, eDiscovery, and Communication Compliance investigations. In fact, clicking on this one recommendation for compliance controls can help you set up policies in all these areas. -Now, if you’re wondering how activity signals from AI apps and agents flow into DSPM for AI in the first place, the good news is, for the AI apps and agents you build with either Microsoft Copilot services or with Azure AI, even if you haven’t configured a single policy in Microsoft Purview, activity logging is enabled by default, and built-in reports are generated for you out of the gate. As I showed, visibility and control extend beyond Microsoft services as soon as you take proactive action. Directly from DSPM for AI, the fortify data security recommendation, for example, when activated, under the covers leverages Microsoft Purview’s built-in classifiers to detect sensitive data and to log interactions from local app traffic over the network, as well as at the device level to protect file system interactions on Microsoft Purview-onboarded PCs and Macs, and even web-based apps running in Microsoft Edge, to help prevent risky users from leaking sensitive data. -Next, with insights now flowing in, let me walk you through how you can use DSPM for AI every day to monitor your data risks and take action.
I’ll start again from reports in the overview to look at GenAI apps that are popular in our organization. What’s especially concerning are the ones in use by my riskiest users, who are interacting with popular consumer apps like DeepSeek and Google Gemini. ChatGPT consumer is at the top of the list, and it’s not a managed app for our organization. It’s brought in by users who are either using it for free or with a personal license, but what’s really concerning is that it has the highest number of risky users interacting with it, which could increase our risk of data loss. Now, my first inclination might be to block usage of the app outright. That said, if I scroll back up, instead I can see a proactive recommendation to prevent sensitive data exfiltration in ChatGPT with adaptive protection. -Clicking in, I can see the types of sensitive data shared by users in their prompts. Creating this policy will log the actions of minor-risk users and block high-risk users from typing in or uploading sensitive information into ChatGPT. I can also choose to customize this policy further, but I’ll keep what’s there and confirm. And with the policies activated, now let me show you the result. Here we have a user with an elevated risk level. They’re entering sensitive information into the prompt, and when they submit it, they are blocked. On the other hand, when a user with a lower risk level enters sensitive information and submits their prompt, they’re informed that their actions are being audited. -Next, as an admin, let me show you how this activity was audited. From DSPM for AI in the Activity Explorer, I can see all interactions and any matching sensitive information types. Here’s the activity we just saw, and I can click into it to see more details, including exactly what was shared in the user’s prompt. Now for ChatGPT Enterprise, there’s even more visibility due to the deep API integration with Microsoft Purview.
By selecting this recommendation, you can register your ChatGPT Enterprise workspace to discover and govern AI interactions. In fact, this recommendation walks you through the setup process. Then with the interactions logged in Activity Explorer, not only are you able to see what prompts were submitted, but you can also get complete visibility into the generated responses. -Next, with the rapid development of AI agents, let me show you how you can use DSPM for AI to discover and set guardrails around information used with your user-created agents. Clicking on agents takes you to a filtered view. Immediately, I can see indicators of a potential oversharing issue. This is where data access permissions may be too broad and where not enough of my data is labeled with corresponding protections. I can also see the total agent interactions over time, the top five agents open to internet users, with interactions by unauthenticated or anonymous users. This is where people outside of my organization are interacting with agents grounded on my organization’s data, which can be bad. -I can also quickly see a breakdown of sensitive interactions per agent along with the top sensitivity labels referenced to get an idea of the type of data in use and how well protected it is. To find out more, from the Activity Explorer, I can see in this AI interaction, the agent was invoked in Copilot Chat, and I can view the agent’s details and see the prompt and response just like before. Now what I really want to do is to take a closer look at the potential data oversharing issue that was flagged. For that, I’ll return to my dashboard and click into the default assessment. These run every seven days, scanning files containing sensitive data and identifying where those files are located, such as SharePoint sites with overly permissive user access. -And I can dig into the details. I’ll click into the top one for “Obsidian Merger” and I can see label coverage for the data within it. 
And in the protect tab, there are eight sensitivity labels and five that are referenced by Copilot and agents. Since I want agents to honor data classifications and their related protections, I can configure recommended policies. The most stringent option is to restrict all items, removing the entire site from view of Copilot and agents. Or for more granular controls, I also have a few more options. I can create default sensitivity labels for newly created items, or if I move back to the top-level options, I have the option to “Restrict Access by Label.” The Obsidian Merger information is highly privileged, and even if you’re on the core team working on it, we don’t want agents to reason over the information, so I’ll pick this label option. -From there, I need to extend the list of sensitivity labels and I’ll select Obsidian Merger, then confirm to create the policy. And this will now block the agent from reasoning over the content that includes the Obsidian Merger label. In fact, let’s look at the policy in action. Here you can see the user is asking the Copilot agent to summarize the Project Obsidian M&A doc and even though they are the owner and author of the file, the agent cannot reason over it. It responds, “Unfortunately, I can’t provide detailed information because the content is protected.” -As I mentioned, for both your agents and GenAI apps across Microsoft and non-Microsoft services, all activity is recorded in Audit logs to help conduct investigations whenever needed. 
In fact, DSPM for AI logged activity flows directly into Microsoft Purview’s best-in-class solutions: insider risk management, letting your security teams detect risky AI prompts as part of their investigations into risky users; communication compliance, to aid investigations into non-compliant use in AI interactions, such as a user trying to get sensitive information like an acquisition plan; and eDiscovery, where interactions across your Copilots, agents, and AI apps can be collected and reviewed to help conduct investigations and respond to litigation. -So that was an overview of how GenAI adoption can go hand in hand with your enterprise-grade data security, governance, and compliance requirements for your organizations, keeping your data protected. To learn more, check out aka.ms/SecureGovernAI. Keep watching Microsoft Mechanics for the latest updates, and thanks for watching.

Microsoft Purview protections for Copilot
Use Microsoft Purview and Microsoft 365 Copilot together to build a secure, enterprise-ready foundation for generative AI. Apply existing data protection and compliance controls, gain visibility into AI usage, and reduce risk from oversharing or insider threats. Classify, restrict, and monitor sensitive data used in Copilot interactions. Investigate risky behavior, enforce dynamic policies, and block inappropriate use — all from within your Microsoft 365 environment. Erica Toelle, Microsoft Purview Senior Product Manager, shares how to implement these controls and proactively manage data risks in Copilot deployments. Control what content can be referenced in generated responses. Check out Microsoft 365 Copilot security and privacy basics. Uncover risky or sensitive interactions. Use DSPM for AI to get a unified view of Copilot usage and security posture across your org. Block access to sensitive resources. See how to configure Conditional Access using Microsoft Entra. Watch our video here. QUICK LINKS: 00:00 — Microsoft Purview controls for Microsoft 365 Copilot 00:32 — Copilot security and privacy basics 01:47 — Built-in activity logging 02:24 — Discover and Prevent Data Loss with DSPM for AI 04:18 — Protect sensitive data in AI interactions 05:08 — Insider Risk Management 05:12 — Monitor and act on inappropriate AI use 07:14 — Wrap up Link References Check out https://aka.ms/M365CopilotwithPurview Watch our show on oversharing at https://aka.ms/OversharingMechanics Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. 
Video Transcript: -Not all generative AI is created equal. In fact, if data security or privacy-related concerns are holding your organization back, today I’ll show you how the combination of Microsoft 365 Copilot and the data security controls in Microsoft Purview provides an enterprise-ready platform for GenAI in your organization. This way, GenAI is seamlessly integrated into your workflow across familiar apps and experiences, all backed by unmatched data security and visibility to minimize data risk and prevent data loss. First, let’s level set on a few Copilot security and privacy basics. Whether you’re using the free Copilot Chat that’s included with Microsoft 365 or have a Microsoft 365 Copilot license, they both honor your existing access permissions to work information in SharePoint and OneDrive, your Teams meetings and your email, meaning generated AI responses can only be based on information that you have access to. -Importantly, after you submit a prompt, Copilot will retrieve relevant index data to generate a response. The data only stays within your Microsoft 365 service trust boundary and doesn’t move out of it. Even when the data is presented to the large language models to generate a response, information is kept separate from the model, and is not used to train it.
This is in contrast to consumer apps, especially the free ones, which are often designed to collect training data. As users upload files into them or paste content into their prompts, including sensitive data, the data is now duplicated and stored in a location outside of your Microsoft 365 service trust boundary, removing any file access controls or classifications you’ve applied in the process, placing your data at greater risk. -And beyond being stored there for indexing or reasoning, it can be used to retrain the underlying model. Next, adding to the foundational protections of Microsoft 365 Copilot, Microsoft Purview has activity logging built in and helps you to discover and protect sensitive data where you get visibility into current and potential risks, such as the use of unprotected sensitive data in Copilot interactions, classify and secure data where information protection helps you to automatically classify, and apply sensitivity labels to data, ensuring it remains protected even when it’s used with Copilot, and detect and mitigate insider risks where you can be alerted to employee activities with Copilot that pose a risk to your data, and much more. -Over the next few minutes, I’ll focus on Purview capabilities to get ahead of and prevent data loss and insider risks. We’ll start in Data Security Posture Management or DSPM for AI for short. DSPM for AI is the one place to get a rich and prioritized bird’s eye view on how Copilot is being used inside your organization and discover corresponding risks, along with recommendations to improve your data security posture that you can implement right from the solution. Importantly, this is where you’ll find detailed dashboards for Microsoft 365 Copilot usage, including agents. -Then in Activity Explorer, we make it easy to see recent activities with AI interactions that include sensitive information types, like credit cards, ID numbers or bank accounts. 
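Sensitive information types such as credit card numbers are typically detected with pattern matching plus a checksum. The sketch below illustrates the idea using the standard Luhn algorithm; the regex and function names are our own simplification, not Purview's actual classifier.

```python
# Illustrative sketch of a credit-card sensitive-information-type detector,
# not Purview's real classification engine.
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: double every second digit from the right, sum, mod 10."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:      # positions that get doubled
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_credit_cards(text: str) -> list:
    """Return candidate card numbers: 13-16 digit runs that pass Luhn."""
    candidates = re.findall(r"\b(?:\d[ -]?){13,16}\b", text)
    hits = []
    for c in candidates:
        digits = re.sub(r"[ -]", "", c)
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

Real classifiers layer on confidence levels, keyword proximity, and many more patterns, but the pattern-plus-checksum core is why a random 16-digit number usually does not trigger a match.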
And you can drill into each activity to see details, as well as the prompt and response text generated. One tip here, if you are seeing a lot of sensitive information exposed, it points to an information oversharing issue where people have access to more information than necessary to do their job. If you find yourself in this situation, I recommend you also check out our recent show on the topic at aka.ms/OversharingMechanics where I dive into the specific things you should do to assess your Microsoft 365 environment for potential oversharing risks to ensure the right people can access the right information when using Copilot. -Ultimately, DSPM for AI gives you the visibility you need to establish a data security baseline for Copilot usage in your organization, and helps you put in place preventative measures right away. In fact, without leaving DSPM for AI on the recommendations page, you’ll find the policies we advise everyone to use to improve data security, such as this one for detecting potentially risky interactions using insider risk management and other recommendations, like this one to detect potentially unethical behavior using communication compliance policies and more. From there, you can dive in to Microsoft Purview’s best-in-class solutions for more granular insights, and to configure specific policies and protections. -I’ll start with information protection. You can manage data security controls with Microsoft 365 Copilot in scope with the information protection policies, and the sensitivity labels that you have in use today. In fact, by default, any Copilot response using content with sensitivity labels will automatically inherit the highest priority label for the referenced content. And using data loss prevention policies, you can prevent Copilot from processing any content that has a specific sensitivity label applied. 
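The label-inheritance behavior described above, where a Copilot response inherits the highest-priority label among its referenced content, can be sketched in a few lines. The label names and priority values here are hypothetical; in Purview, the priority order is whatever your admins have configured.

```python
# Illustrative sketch of sensitivity-label inheritance for generated responses.
# Label names and priorities are invented for this example.
LABEL_PRIORITY = {"General": 0, "Confidential": 1, "Highly Confidential": 2}

def inherited_label(referenced_labels: list):
    """A generated response inherits the highest-priority label among its sources."""
    if not referenced_labels:
        return None
    return max(referenced_labels, key=lambda l: LABEL_PRIORITY.get(l, -1))
```

So a response drawing on one "General" and one "Highly Confidential" document would carry "Highly Confidential", along with that label's protections.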
This way, even if users have access to those files, Copilot will effectively ignore this content as it retrieves relevant information from Microsoft Graph used to generate responses. Insider risk management helps you to catch data risk based on trending activities of people on your network using established user risk indicators and thresholds, and then uses policies to prevent accidental or intentional data misuse as they interact with Copilot where you can easily create policies based on quick policy templates, like this one looking for high-risk data leak patterns from insiders. -By default, this quick policy will scope all users in groups with a defined triggering event of data exfiltration, along with activity indicators, including external sharing, bulk downloads, label downgrades, and label removal in addition to other activities that indicate a high risk of data theft. And it doesn’t stop there. As individuals perform more risky activities, those can add up to elevate that user’s risk level. Here, instead of manually adjusting data security policies, using Adaptive Protection controls, you can also limit Copilot use depending on a user’s dynamic risk level, for example, when a user exceeds your defined risk condition thresholds to reach an elevated risk level, as you can see here. -Using Conditional Access policies in Microsoft Entra, in this case based on authentication context, as well as the condition for insider risk that you set in Microsoft Purview, you can choose to block their permission when attempting to access sites with a specific sensitivity label. That way, even if a user is granted access to a SharePoint site resource by an owner, their access will be blocked by the Conditional Access policy you set. Again, this is important because Copilot honors the user’s existing permissions to work with information. This way, Copilot will not return information that they do not have access to. 
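The Adaptive Protection flow above, where risky activities accumulate into a dynamic risk level that then gates Copilot use, can be modeled as a simple weighted score against thresholds. The indicator weights and threshold values below are invented for illustration; real policies are configured in Purview, not in code like this.

```python
# Hypothetical model of Adaptive Protection risk scoring; weights and
# thresholds are illustrative only.
RISK_WEIGHTS = {
    "external_sharing": 10,
    "bulk_download": 20,
    "label_downgrade": 25,
    "label_removal": 30,
}
THRESHOLDS = [(60, "elevated"), (30, "moderate"), (0, "minor")]

def risk_level(activities: list) -> str:
    score = sum(RISK_WEIGHTS.get(a, 0) for a in activities)
    for threshold, level in THRESHOLDS:
        if score >= threshold:
            return level
    return "minor"

def copilot_allowed(activities: list) -> bool:
    """Adaptive Protection dynamically limits Copilot use for elevated-risk users."""
    return risk_level(activities) != "elevated"
```

The point of the model: no admin manually flips a switch. As a user's activities push the score past a threshold, the stricter controls apply automatically, and they relax again as the risk level decays.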
-Next, Communication Compliance is a related insider risk solution that can act on potentially inappropriate Copilot interactions. In fact, there are specific policy options for Microsoft 365 Copilot interactions in communication compliance where you can flag jailbreak or prompt injection attempts using Prompt Shields classifiers. Communication compliance can be set to alert reviewers of that activity so they can easily discover policy matches and take corresponding actions. For example, if a person tries to use Copilot in an inappropriate way, like trying to get it to work around its instructions to generate content that Copilot shouldn’t, it will report on that activity, and you’ll also be able to see the response informing the user that their activity was blocked. -Once you have the controls you want in place, it’s a good idea to keep going back to DSPM for AI so you can see where Copilot usage is matching your data security policies. Sensitive interactions per AI app shows you interactions based on sensitive information types. Top unethical AI interactions surfaces insights based on the communication compliance controls you’ve defined. Top sensitivity labels referenced in Microsoft 365 Copilot reports on the labels you’ve created, and applied to reference content. And you can see Copilot interactions mapped to insider risk severity levels. Then digging into these reports shows you a filtered view of activities in Activity Explorer with time-based trends and details for each. Additionally, because all Copilot interactions are logged, like other Microsoft 365 activities in email, Microsoft Teams, SharePoint and OneDrive, you can now use the new data security investigation solution. This uses AI to quickly reason over thousands of items, including Copilot Chat interactions to help you investigate the potential cause of risks for known data leaks in similar incidents. 
-So that’s how Microsoft 365 Copilot, along with Microsoft Purview, provides comprehensive controls to help protect your data, minimize risk, and quickly identify Copilot interactions that could lead to compromise so you can take corrective actions. No other AI solution has this level of protection and control. To learn more, check out aka.ms/M365CopilotwithPurview. Keep watching Microsoft Mechanics for the latest updates and thanks for watching.

Microsoft Purview: New data security controls for the browser & network
Protect your organization’s data with Microsoft Purview. Gain complete visibility into potential data leaks, from AI applications to unmanaged cloud services, and take immediate action to prevent unwanted data sharing. Microsoft Purview unifies data security controls across Microsoft 365 apps, the Edge browser, Windows and macOS endpoints, and even network communications over HTTPS — all in one place. Take control of your data security with automated risk insights, real-time policy enforcement, and seamless management across apps and devices. Strengthen compliance, block unauthorized transfers, and streamline policy creation to stay ahead of evolving threats. Roberto Yglesias, Microsoft Purview Principal GPM, goes beyond Data Loss Prevention. Keep sensitive data secure no matter where it lives or travels. Microsoft Purview DLP unifies controls across Microsoft 365, browsers, endpoints, and networks. See how it works. Know your data risks. Data Security Posture Management (DSPM) in Microsoft Purview delivers a 360° view of sensitive data at risk, helping you proactively prevent data leaks and strengthen security. Get started. One-click policy management. Unify data protection across endpoints, browsers, and networks. See how to set up and scale data security with Microsoft Purview. Watch our video here. QUICK LINKS: 00:00 — Data Loss Prevention in Microsoft Purview 01:33 — Assess DLP Policies with DSPM 03:10 — DLP across apps and endpoints 04:13 — Unmanaged cloud apps in Edge browser 04:39 — Block file transfers across endpoints 05:27 — Network capabilities 06:41 — Updates for policy creation 08:58 — New options 09:36 — Wrap up Link References Get started at https://aka.ms/PurviewDLPUpdates Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Video Transcript: -As more and more people use lesser known and untrusted shadow AI applications and file sharing services at work, the controls to proactively protect your sensitive data need to evolve too. And this is where Data Loss Prevention, or DLP, in Microsoft Purview unifies the controls to protect your data in one place. And if you haven’t looked at this solution in a while, the scope of protection has expanded to ensure that your sensitive data stays protected no matter where it goes or how it’s consumed with controls that extend beyond what you’ve seen across Microsoft 365. Now adding browser-level protections that apply to unmanaged and non-Microsoft cloud apps when sensitive information is shared. -For your managed endpoints, today file system operations are also protected on Windows and macOS. And now we are expanding detection to the network layer. Meaning that as sensitive information is shared into apps and gets transmitted over web protocols, as an admin, you have visibility over those activities putting your information at risk, so you can take appropriate action. Also, Microsoft Purview data classification and policy management engines share the same classification service.
Meaning that you can define the sensitive information you care about once, and we will proactively detect it even before you create any policies, which helps you streamline creating policies to protect that information. -That said, as you look to evolve your protections, where do you even start? Well, to make it easier to prioritize your efforts, Data Security Posture Management, or DSPM, provides a 360 degree view of data potentially at risk and in need of protection, such as potential data exfiltration activities that could lead to data loss, along with unprotected sensitive assets across data sources. Here at the top of the screen, you can see recommendations. I’ll act on this one to detect sensitive data leaks to unmanaged apps using something new called a Collection Policy. More on how you can configure this policy a bit later. -With the policy activated, new insights will take up to a day to reflect on our dashboard, so we’ll fast forward in time a little, and now you can see a new content category at the top of the chart for sensitive content shared with unmanaged cloud apps. Then back to the top, you can see the tile on the right has another recommendation to prevent users from performing cumulative exfiltration activities. And when I click it, I can enable multiple policies for both Insider Risk Management and Data Loss Prevention, all in one click. So DSPM makes it easier to continually assess and expand the protection of your DLP policies. And there’s even a dedicated view of AI app-related risks with DSPM for AI, which provides visibility into how people in your organization are using AI apps and potentially putting your data at risk. -Next, let me show you DLP in action across different apps and endpoints, along with the new browser and network capabilities. I’ll demonstrate the user experience for managed devices and Microsoft 365 apps when the right controls are in place. Here I have a letter of intent detailing an upcoming business acquisition. 
Notice it isn’t labeled. I’ll open up Outlook, and I’ll search for and attach the file we just saw. Due to the sensitivity of the information detected in the document, it’s fired up a policy tip warning me that I’m out of compliance with my company policy. Undeterred, I’ll type a quick message and hit send. And my attempt to override the warning is blocked. -Next, I’ll try something else. I’ll go back to Word and copy the text into the body of my email, and you’ll see the same policy tip. And, again, I’m blocked when I still try to send that email. These protections also extend to Teams chat, Word, Excel, PowerPoint and more. Next, let me show you how protections even extend to unmanaged cloud apps running in the Edge browser. For example, if you want to use a generative AI website like you’re seeing here with DeepSeek, even if I manually type in content that matches my Data Loss Prevention policy, you’ll see that when I hit submit, our Microsoft Purview policy blocks the transmission of this content. This is different from endpoint DLP, which can protect file system operations like copy and paste. These Edge browser policies complement existing endpoint DLP protections in Windows and macOS. -For example, here I have the same file with sensitive information that we saw before. My company uses Microsoft Teams, but a few of our suppliers use Slack, so I’ll try to upload my sensitive doc into Slack, and we see a notification that my action is blocked. And since these protections are on the file and run in the file system itself, this would work for any app. That said, let’s try another operation by copying the sensitive document to my removable USB drive. And here I’m also blocked. So we’ve seen how DLP protections extend to Microsoft 365 apps, managed browsers, and file systems. -Additionally, new protections can extend to network communication protocols when sharing information with local apps running against web services over HTTPS. 
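The endpoint blocks demonstrated here (the Slack upload and the USB copy) come down to one decision: is the file sensitive, and what does policy say about this destination category? A simplified sketch of that decision follows; the operation names and policy table are illustrative, not actual Purview endpoint DLP configuration.

```python
# Simplified model of an endpoint DLP enforcement decision; categories and
# actions are invented for illustration.
POLICY = {
    "upload_to_unmanaged_cloud_app": "block",   # e.g. the Slack upload demo
    "copy_to_removable_media": "block",         # e.g. the USB drive demo
    "copy_to_network_share": "audit",
}

def evaluate_file_operation(is_sensitive: bool, operation: str) -> str:
    """Return the enforcement action for a file operation on a managed endpoint."""
    if not is_sensitive:
        return "allow"
    return POLICY.get(operation, "allow")
```

Because the check runs at the file-system layer rather than inside any one app, the same decision applies regardless of which application initiates the operation, which is why the Slack upload is blocked even though Slack itself is not a Microsoft app.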
In fact, here I have a local install of the ChatGPT app running. As you see, this is not in a browser. In this case, if I unintentionally add sensitive information to my prompt, when it passes the information over the network to call the ChatGPT APIs, Purview will be able to detect it. Let’s take a look. If I move over to DSPM for AI in Microsoft Purview, as an admin, I have visibility into the latest activity related to AI interactions. If I select an activity which found sensitive data shared, it displays the user and app details, and I can even click into the interaction details to see exactly what was shared in the prompt as well as what specifically was detected as sensitive information on it. This will help me decide the actions we need to take. Additionally, the ability to block sharing over network protocols is coming later this year. -Now, let’s switch gears to the latest updates for policy creation. I showed earlier setting up the new collection policy in one click from DSPM. Let me show you how we would configure the policy in detail. In Microsoft Purview, you can set this up in Data Loss Prevention under Classifiers on the new Collection Policies page. These policies enable you to tailor the discovery of data and activities from the browser, network, and devices. You can see that I already have a few created here, and I’ll go ahead and create a new one right from here. -Next, for what data to detect, I can choose the right classifiers. I have the option to scope these down to include specific classifiers, or include all except for the ones that I want to exclude. I’ll just keep them all. For activities to detect, I can choose the activities I want. In this case, I’ll select text and files shared with a cloud or AI app. Now, I’ll hit add. And next I can choose where to collect the data from. This includes connected data sources, like devices, Copilot experiences, or Enterprise AI apps. 
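The collection-policy setup being configured in this walkthrough can be summarized as a single configuration object: which classifiers, which activities, which sources, which apps and users, whether to capture interactions, and whether the policy starts enabled. The sketch below models that; every field name is hypothetical, not the actual Purview schema.

```python
# Illustrative model of a DSPM collection policy; field names are invented
# and do not reflect the real Purview configuration schema.
def build_collection_policy(name: str, apps: list, users: str = "all") -> dict:
    return {
        "name": name,
        "classifiers": {"mode": "all", "exclude": []},   # detect all classifiers
        "activities": ["text_shared_with_cloud_or_ai_app",
                       "files_shared_with_cloud_or_ai_app"],
        "sources": ["devices", "copilot_experiences", "enterprise_ai_apps"],
        "unmanaged_cloud_apps": apps,    # from the Defender for Cloud Apps catalog
        "scoped_users": users,
        "capture_interactions": True,    # preserve AI prompts/responses in Purview
        "enabled": False,                # saved off; turn on after review
    }
```

A usage example, mirroring the one-click recommendation from DSPM:

```python
policy = build_collection_policy("Detect leaks to unmanaged apps",
                                 ["DeepSeek", "Slack"])
```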
The unmanaged cloud apps tab uses the Microsoft Defender for Cloud Apps catalog to help me target the applications I want in scope. -In this case, I’ll go ahead and select the first six on this page. For each of these applications, I can scope which users this policy applies to as a group or separately. I’ll scope them all together for simplicity. Here I have the option to include or exclude users or groups from the policy. In this case, I’ll keep all selected and save it. Next, I have the option of choosing whether I want AI prompts and responses that are detected to be captured and preserved in Purview. This enables the experience we saw earlier of viewing the full interaction. -Finally, in mode, you can turn the policy on. Or if you leave it off, this will save it so that you can enable it later. Once I have everything configured, I just need to review and create my policy, and that’s it. In addition, as you create DLP policies, you’ll notice new corresponding options. Let me show you the main one. For each policy, you’ll now be asked what type of data you want to protect. First is data stored in connected sources. This includes Microsoft 365 and endpoint policies, which you’re likely already using now. The new option is data in browser and network activity. This protects data in real-time as it’s being used in the browser or transmitted over the network. From there, configuring everything else in the policy should feel familiar with other policies you’ve already defined. -To learn more and get started with how you can extend your DLP protections, check out aka.ms/PurviewDLPUpdates. Keep checking back to Microsoft Mechanics for all the latest updates and thanks for watching.