microsoft purview
617 TopicsPart 3: DSPM for AI: Governing Data Risk in an Agent‑Driven Enterprise
Why Agent Security Alone Is Not Enough? Foundry‑level controls are designed to prevent unsafe behavior and bound autonomy at runtime. But even the strongest preventive controls cannot answer key governance questions on their own: Where is sensitive data being used in AI prompts and responses? Which agents are interacting with high‑risk data—and how often? Are agents oversharing, drifting from expected behavior, or creating compliance exposure over time? How do we demonstrate control, auditability, and accountability for AI systems to regulators and leadership? These are not theoretical concerns. With agents acting continuously and autonomously, risk no longer shows up as a single event—it shows up as patterns, trends, and posture. DSPM for AI exists to make those patterns visible. At its core, DSPM for AI provides a centralized, risk‑centric view of how data is used, exposed, and governed across AI applications and agents. It shifts the conversation from individual incidents to organizational posture. DSPM for AI answers a simple but critical question: “Given how our AI systems are actually being used, what is our current data risk—and where should we intervene?” Unlike traditional DSPM, DSPM for AI expands visibility into: Prompts and responses Agent interactions with enterprise data Oversharing patterns Agent‑driven risk signals Trends across first‑party and third‑party AI usage What DSPM for AI Brings into Focus? 1. AI Interaction Visibility DSPM for AI treats AI prompts, responses, and agent activity as first‑class security telemetry. This allows security teams to see: Sensitive data being submitted to AI systems High‑risk interactions involving regulated information Repeated exposure patterns rather than one‑off events In short, AI conversations become auditable security signals, not blind spots. 2. Oversharing and Exposure Risk One of the most common AI risks is unintentional oversharing—especially when agents retrieve or combine data across systems. DSPM for AI makes it possible to: Identify where sensitive data exists but is poorly labeled Detect when unlabeled or over‑shared data is being accessed via AI Prioritize remediation based on actual usage, not static classification This ties directly back to the Sensitive Data Leakage patterns discussed earlier—but at an organizational scale. 3. Agent‑Level Risk Context DSPM for AI extends posture management beyond users to agents themselves. Security teams can: Inventory agents operating in the environment View agent activity trends Identify agents exhibiting higher‑risk behavior patterns This enables a powerful shift: agents can be assessed, reviewed, and governed just like digital workers. 4. Bridging Security, Compliance, and Audit DSPM for AI connects operational security with governance outcomes. Through integration with audit logs, retention, and compliance workflows, organizations gain: Evidence for investigations and regulatory inquiries Consistent compliance posture across human and agent activity A defensible, repeatable governance model for AI systems This is where AI risk becomes explainable, reportable, and manageable—not just prevented. How DSPM for AI Complements Azure AI Foundry? If Azure AI Foundry provides the control plane that enforces safe agent behavior, DSPM for AI provides the visibility plane that measures how that behavior translates into risk over time. Think of it this way: Foundry controls prevent and constrain DSPM for AI observes, measures, and prioritizes Together, they enable continuous governance Without DSPM, security teams are left guessing whether controls are effective at scale. With DSPM, risk becomes quantifiable and actionable. Why This Matters for Security Leaders? For security leaders, agentic AI introduces a familiar challenge in an unfamiliar form: Risk is non‑deterministic Behavior changes over time Impact can span multiple systems instantly DSPM for AI gives leaders the ability to: Monitor AI risk like any other enterprise workload Prioritize remediation where it matters most Move from reactive investigations to proactive governance This is not about slowing innovation—it’s about making AI adoption defensible. Closing: From Secure Agents to Governed AI Securing agents is necessary—but it is not sufficient on its own. As AI systems increasingly act on behalf of the organization, governance must shift from individual controls to continuous posture management. DSPM for AI provides the missing link between prevention and accountability, turning fragmented AI activity into a coherent risk narrative. Together, Azure AI Foundry and DSPM for AI enable organizations to not only build and deploy agents safely, but to operate AI systems with clarity, confidence, and control at scale. In the agentic era, security prevents incidents—but governance determines trust.Feature Request: Extend Security Copilot inclusion (M365 E5) to M365 A5 Education tenants
Background At Ignite 2025, Microsoft announced that Security Copilot is included for all Microsoft 365 E5 customers, with a phased rollout starting November 18, 2025. This is a significant step forward for security operations. The gap Microsoft 365 A5 for Education is the academic equivalent of E5 — it includes the same core security stack: Microsoft Defender, Entra, Intune, and Purview. However, the Security Copilot inclusion explicitly covers only commercial E5 customers. There is no public roadmap or timeline for extending this benefit to A5 education tenants. Why this matters Education institutions face the same cybersecurity threats as commercial organizations — often with fewer dedicated security resources. The A5 license was positioned as the premium security offering for education. Excluding it from Security Copilot inclusion creates an inequity between commercial and education customers holding functionally equivalent license tiers. Request We would like Microsoft to: Confirm whether Security Copilot inclusion will be extended to M365 A5 Education tenants If yes, provide an indicative timeline If no, clarify the rationale and what alternative paths exist for education customers Are other EDU admins in the same situation? Would appreciate any upvotes or comments to help raise visibility with the product team.7Views0likes0CommentsAnnouncing GA: Advanced Resource Sets in Microsoft Purview Unified Catalog
The Microsoft Purview product team is constantly listening to customer feedback about the data governance challenges that slow teams down. One of the most persistent pain points — understanding the true shape of large-scale data lakes where thousands of files represent a single logical dataset — has driven a highly requested capability. We are pleased to announce that Advanced Resource Sets are now generally available for all Microsoft Purview Unified Catalog customers. The Problem It Solves Anyone managing a modern data lake knows the clutter: a single partitioned dataset like a daily transaction log might manifest as hundreds or thousands of individual files in Azure Data Lake Storage or Amazon S3. Without intelligent grouping, each of those files appears as a separate asset in the catalog. The result is a flood of noise — a catalog that technically contains your data estate but makes it nearly impossible to reason about it at a logical level. Data stewards end up buried in meaningless entries. Analysts searching for "the transactions table" find thousands of file-level hits instead of one clean, actionable asset. Governance efforts stall because nobody can agree on what the estate looks like. Advanced Resource Sets directly address this by grouping those physically separate but logically related files into a single, representative catalog asset — giving your teams a clean, meaningful view of the data landscape. What Advanced Resource Sets Actually Do The standard resource set capability in Purview already groups files using naming pattern heuristics. Advanced Resource Sets go significantly further, and this is where it gets interesting. Custom pattern configuration allows data curators to define precisely how partitioned datasets should be grouped — whether that is by date partition, region, environment, or any other dimension embedded in your file naming conventions. You are no longer relying solely on out-of-the-box heuristics. Partition schema surfacing means Purview now extracts and displays the partition dimensions themselves as metadata on the resource set asset. Instead of knowing only that "a resource set called transactions exists," your teams can see "that resource set is partitioned by year, month, and region." That is the difference between a data inventory and a genuinely useful data catalog. Accurate asset counts ensure that your catalog's asset metrics reflect logical datasets rather than raw file counts — giving leadership and governance teams a truthful picture of the data estate's scale. Getting Started — Simpler Than You Might Expect Enabling Advanced Resource Sets requires no additional connectors or infrastructure changes. The feature is activated and configured directly within the Microsoft Purview Governance Portal. At a high level: Sign in with an account that has Data Curator role in the default domain. Open Account settings in Microsoft Purview. Use the toggle to enable or disable Advanced resource sets. Define custom pattern rules by going to Data Map -> Source Management -> Pattern Rules Trigger a rescan (or allow scheduled scans to run). Purview will re-evaluate existing assets and collapse file-level entries into properly grouped resource sets with partition schema metadata attached. What You Can Do With It Once configured, Advanced Resource Sets surface in the Unified Catalog alongside all other scanned assets — but now at the right level of abstraction for your data consumers and governance teams. Data discoverability improves immediately. Analysts searching the catalog find logical datasets, not file fragments. They can evaluate partition coverage, understand data freshness based on partition metadata, and make confident decisions about whether an asset meets their needs before requesting access. Governance accuracy follows naturally. Data owners can apply classifications, sensitivity labels, and glossary terms to a single representative asset rather than chasing down hundreds of file-level entries. Ready to enable Advanced Resource Sets in your environment? Head to the Microsoft Purview Portal, navigate to account settings. Full documentation is available at Microsoft Learn: Manage resource sets.Microsoft Purview Data Quality Thresholds: More Control, More Trust
What Are Data Quality Thresholds? A data quality threshold defines the minimum acceptable score for a rule to pass. Instead of applying a single fixed standard across all data, organizations can now set expectations that align with business context and criticality. For example: An email column may require 99% completeness A product description column may only require 85% completeness Financial or regulatory data may require 100% accuracy With customizable thresholds, quality expectations become more meaningful and business-aligned. Why Does This Matter? Previously, using a single hardcoded threshold could lead to misleading quality scores. Critical data might appear “healthy” even when it didn’t meet business standards. With Data Quality Thresholds, you can: Define rule-level expectations Align quality scores with business risk Increase trust in DQ reporting Improve governance decision-making Data Asset-Level Quality Threshold Users can define data quality thresholds at the data asset level to measure how suitable a dataset is for specific business use cases. This allows organizations to quantify the overall health and fitness of a data asset before it is used in analytics, reporting, or data products. If the measured data quality score falls below the predefined threshold, the system can trigger notifications to the data asset owner or steward, prompting them to take corrective actions. It is important to note that not all data assets are equally critical. Therefore, thresholds should be context-driven and use-case specific. Example Scenario A marketing dataset used for campaign analysis may tolerate a lower quality threshold (e.g., 80%), since minor inconsistencies may not significantly impact insights. However, a financial reporting dataset used for regulatory filings may require a very high threshold (e.g., 98–100%), as even small errors can lead to compliance risks. Data Quality Rule-Level Threshold Thresholds can also be defined at the individual rule level, particularly for rules applied to specific columns. This provides more granular control and ensures that critical data elements are held to higher standards. Not all attributes have the same importance, so thresholds should reflect business criticality. Example Scenarios Email vs. Gender (Customer Contact Data) A completeness rule for a customer’s email address should have a higher threshold (e.g., 95–100%), since missing or invalid email addresses directly impact communication and engagement. In contrast, a gender attribute may have a lower threshold (e.g., 70–80%), as it is often less critical for most use cases. Billing Address vs. CRM Address A billing address is highly critical because it directly impacts: Invoice generation Tax calculations Timely delivery of invoices Therefore, the threshold for billing address quality should be very high (e.g., 98–100%). On the other hand, a CRM address used for general customer profiling may have a lower threshold, as occasional inaccuracies may not significantly affect business operations. The Impact By enabling flexible, context-aware scoring, Data Quality Thresholds help organizations move beyond generic quality checks and toward business-driven data quality management. Summary Data Quality Thresholds define the minimum acceptable score for data quality rules, allowing organizations to move beyond a one-size-fits-all approach and align quality expectations with business context and criticality. Instead of using fixed thresholds, organizations can set custom thresholds based on how important the data is. For example, financial data may require near-perfect accuracy, while less critical fields can tolerate lower thresholds. Thresholds can be applied at two levels: Data Asset Level: Measures the overall fitness of a dataset for a specific use case. Critical datasets (e.g., financial reporting) require higher thresholds than less critical ones (e.g., marketing analytics). Rule Level: Applies to individual columns or rules, ensuring that critical attributes (e.g., email, billing address) have stricter quality requirements than less important ones. This approach improves: Alignment with business risk and priorities Trust in data quality reporting Governance decision-making Focus on high-impact data issues Overall, data quality thresholds enable more meaningful, context-aware, and business-driven data quality management, helping organizations prioritize what matters most and build confidence in their data.Accelerate connectors development using AI agent in Microsoft Sentinel
Today, we’re excited to announce the public preview of a Sentinel connector builder agent, via VS code extension, that helps developers build Microsoft Sentinel codeless connectors faster with low-code and AI-assisted prompts. This new capability brings guided workflows directly into the tooling developers already use, helping accelerate time to value as the Sentinel ecosystem continues to grow. Learn more at Create custom connectors using Sentinel connector AI agent Why this matters As the Microsoft Sentinel ecosystem continues to expand, developers are increasingly tasked with delivering high‑quality, production‑ready connectors at a faster pace, often while working across different cloud platforms and development environments. Building these integrations involves coordinating schemas, configuration artifacts, Azure deployment concepts, and validation steps that provide flexibility and control, but can span multiple tools and workflows. As connector development scales across more partners and scenarios, there is a clear opportunity to better integrate these capabilities into the developer environments teams already rely on. The new Sentinel connector builder agent, using GitHub Copilot in the Sentinel VS code extension, brings more of the connector development lifecycle -- authoring, validation, testing, and deployment into a single, cohesive workflow. By consolidating these common steps, it helps developers move more easily from design to validation and deployment without disrupting established processes. A guided, AI‑assisted workflow inside VS Code The Sentinel connector builder agent for Visual Studio Code is designed to help developers move from API documentation to a working codeless connector more efficiently. The experience begins with an ISVs API documentation. Using GitHub Copilot chat inside VS Code, developers can describe the connector they want to build and point the extension to their API docs, either by URL or inline content. From there, the AI‑guided workflow reads and extracts the relevant details needed to begin building the connector. Open the VS Code chat and set the chat to Agent mode. Prompt the agent using sentinel. When prompted, select /create-connector and select any supported API. For example in Contoso API, enter the prompt as: @sentinel /create-connector Create a connector for Contoso. Here are the API docs: https://contoso-security-api.azurewebsites.net/v0101/api-doc Next, the agent generates the required artifacts such as polling configurations, data collection rules (DCRs), table schemas, and connector definitions, using guided prompts with built‑in validation. This step‑by‑step experience helps ensure configurations remain consistent and aligned as they’re created. Note: During agent evaluation, select Allow responses once to approve changes, or select the option Bypass Approvals in the chat. It might take up to several minutes for the evaluations to finish. As the connector takes shape, developers can validate and test configurations directly within VS Code, including testing API interactions before deployment. Validation of the API data source and polling configuration are surfaced in context, supporting faster iteration without leaving the development environment. When ready, connectors can be deployed directly from VS Code to accessible Microsoft Sentinel workspaces, streamlining the path from development to deployment without requiring manual navigation of the Azure portal. Key capabilities The VS Code connector builder experience includes: AI‑guided connector creation to generate codeless connectors from API documentation using natural language prompts. Support for common authentication methods, including Basic authentication, OAuth 2.0, and API keys. Automated validation to check schemas, cross‑file consistency, and configuration correctness as you build. Built‑in testing to validate polling configurations and API interactions before deployment. One‑click deployment that allows publishing connectors directly to accessible Microsoft Sentinel workspaces from within VS Code. Together, these capabilities support a more efficient path from API documentation to a working Microsoft Sentinel connector. Testimonials As partners begin using the Sentinel connector builder agent, feedback from the community will help shape future enhancements and refinements. Here is what some of our early adopters have to say about the experience: “The connector builder agent accelerated our initial exploration of the codeless connector framework and helped guide our connector design decisions.” -- Rodrigo Rodrigues, Technology Alliance Director “The connector builder agent helped us quickly explore and validate connector options on the codeless connector framework while developing our Sentinel integration.” --Chris Nicosia, Head of Cloud and Tech Partnerships Start building This public preview represents an important step toward simplifying how ISVs build and maintain integrations with Microsoft Sentinel. If you’re ready to get started, the Sentinel connector builder agent is available in public preview for all participants. In the unlikely event that an ISV encounters any issues in building or updating a CCF connector, App Assure is here to help. Reach out to us here.Labeling Files is Worth It | Speed & Protection Benefits in Microsoft Purview
Classify your data, apply clear labels, and enforce protections that automatically adapt to human and AI interactions so you can reduce risk without slowing down workflows. Proactively monitor, assess, and respond to risk in real time. Use labeling and layered policies to stop accidental sharing, manage AI access, and maintain consistent protection across your organization. Matt McSpirit, Microsoft Mechanics expert, joins Jeremy Chapman to share how to turn scattered data into actionable security that moves as fast as your team and AI. Scan your environment beyond standard detection. Identify gaps where AI or big files might expose sensitive data. Get started with Microsoft Purview Information Protection. Reduce the risk of accidental sharing. Label sensitive data, including proprietary and hard-to-detect content, to enforce access controls instantly. See how DLP and IRM work. Act before exposures become incidents. Identify data risks early, prioritize what matters most, and take action to reduce exposure with Microsoft Purview DSPM. QUICK LINKS: 00:00 — Microsoft Purview data protection 01:04 — Data Loss Prevention 03:36 — Layered approach in addition to DLP 04:13 — Unified classification 04:27 — How sensitive data is determined 06:23 — Create trainable classifiers 07:06 — Distinction between classification and labeling 08:06 — Configure policy protections 09:12 — DLP in action 10:10 — IRM in action 10:51 — See how protections show up 13:37 — Move from reactive to proactive protection 15:00 — Wrap up Link References For deeper guidance, go to https://aka.ms/PurviewInformationProtection Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast Keep getting this insider knowledge, join us on social: Follow us on Twitter: https://twitter.com/MSFTMechanics Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics Video Transcript: - If you don’t understand your data, what it is, where it lives, and how sensitive it is, you can’t protect it. And it’s easy to assume that you’re covered, maybe you’ve already got data loss prevention, or DLP, running with near realtime detection, which is helpful, yes, but it’s not enough. Protecting data today means going beyond what traditional tech scanning can catch and making sure that those harder to parse file types are covered too. And it also requires a layered approach with instant risk insights, starting with consistent and automatic classification, so everyone’s clear on what’s actually sensitive. Labels that make sensitive content easier to interpret and trigger automatic policies, and Adaptive Protection that responds to the risk level of each user, whether human or non-human, and how they engage with the data. In fact, this matters even more with AI that can now bring hidden or long forgotten information to the surface in just seconds. Now to walk us through all of this, I’m joined by a Microsoft Mechanics expert, Matt McSpirit. - Thanks, it’s great to be back. - Okay, so before we get into solutions, why don’t we unpack this a bit more. So for a lot of people, even as they adopt AI, there’s this notion that maybe DLP is good enough. It’s finding things like credit cards, it’s also looking at things like financial information, identity numbers, addresses, et cetera, even if you aren’t paying attention, by the way, to where that information is stored. So is it even worth the extra effort in doing something else? - Well, these are all fair points, and DLP is one powerful piece of the puzzle. And part of its appeal is that it works without the need to label or add any metadata to your content. It’s also rule-based and can look for sensitive information types as they’re being written, read, or sent, and then use what it finds to apply corresponding protections to prevent sharing or contain its sharing radius. - Okay, so what you just said sounds like all upsides. So the policies are relatively easy to configure, they work by default with all your Microsoft 365 and Office apps and your managed devices, as long as people are signed in with them, regardless, really, of where that file goes as well. So what’s the downside? - Well, depending on the scenario, there are a few areas. First, there’s speed of detection and response. Now in this case, I’ll show you an example of DLP in action. I’ll paste in a few thousand words from my clipboard into this Word document. And now DLP will compare it with hundreds of sensitive information types like bank numbers or IDs, dozens of trainable classifiers like contracts or patent applications, and do cross look-ups against exact data match, and more, which based on physics, orchestration, and query speeds, takes time. And it’s only when the policy tip appears whether I choose to apply the recommendation or not, that the content is protected. As you can see, I can’t now share this file externally because DLP has found sensitive information. So there’s a window of time based on a number of factors for DLP to find sensitive information and apply protection. Next, breadth of coverage is another area. You might have file types that can’t be scanned for text easily, like these files synced on my OneDrive location. These are proprietary file types from line of business apps as well as 3D CAD files. So in this case, you’d need a different way to identify the sensitivity of these files and protect the container of the files themselves, like you can see with this rights-protected document using the ARC Add File extension. - And that makes a lot of sense. You know, even though compute and detection are getting faster, if you’ve got like a hundred-page document and it’s got, or maybe a massive spreadsheet, it’s got passport numbers or similar things buried in it, it’s going to take significant time, then, to find that sensitive info. - Right, and if we add AI to the picture, which needs to orchestrate access to data across multiple data sources to respond in milliseconds, this isn’t the optimal approach when speed of response counts. And that’s where a layered approach comes in. In addition to your policy engines like DLP, it’s important to augment what you’re doing with unified data classification. It gives you a broader, persistent understanding of sensitive data across your environment so that it’s easy to assess your data risk and then add sensitivity labeling to your data security strategy. This way, DLP can immediately act on an existing signal rather than having to evaluate everything from scratch each time. - Okay, so why don’t we go deeper then on unified classification as part of this layered approach. - So this actually gets to the heart of the problem. Over time, as data keeps growing and shifting, different teams and tools have ended up defining sensitive data in their own ways, and it’s hard to know where all that data lives. No one really intends for the inconsistency, it just happens and you’re left with a patchwork view of your data instead of one clear picture. And that’s why the first step is giving everything that works with your data, whether that’s your users, AI, or your apps and policy engines, a single consistent way to recognize what’s important. So here in Data Explorer, Microsoft Purview has already identified sensitive data across my environment automatically. This reflects a unified data classification approach that discovers your sensitive data wherever it lives. I didn’t build any rules for this. This discovery happens automatically. And if I drill in, I can see exactly where these files are, even preview the content to see the content in question and easily understand why they were identified as sensitive. - And there’s really a lot to it that’s powering this classification. So what is Purview then looking at to determine if there’s sensitive information there? - Right, there’s a lot happening under the covers. Purview uses two main built-in classification methods. First, sensitive information types that detect specific regulated data such as credentials, IDs, or financial numbers with more than 300 built-in detection patterns for regulated data. And second, more than a hundred pre-trained classifiers that understand broader categories of content like budgets, HR files, or source code. These classifiers are built using Microsoft’s domain expertise and training data sets to recognize common business content categories. Additionally, how fresh your data is also matters to Purview. Purview evaluates new and modified content, automatically analyzing the data with the latest classifications and policies so that your most recent data is well understood and has the latest protections. And if you want to evaluate data that hasn’t been accessed recently, you can run on-demand classification to scan data at rest, helping you uncover sensitive data that might otherwise be overlooked. - And building on what you said, Matt, you know, you can also teach Purview to recognize content that’s unique to your organization. For example, you can create your own trainable classifiers by providing real sample content. You just have to point it to a SharePoint site with 50 to 500 files of matching content. Or you can use exact data match for structured data comparisons against exact text strings. Think of things like code names, or maybe a specific customer, partner, or competitor names, and more. And Purview, it also supports fingerprinting for things like standard forms or templates so that they’re recognized even if the wording changes. Of course, classifications can trigger protections once they’re paired with active policies. - Right, and interestingly, labels can also trigger protection policies. - And we should really unpack this a bit more, because I think a lot of people watching probably make the mistake of conflating classification and labeling as being one and the same thing. - It’s a common mistake, but there is an important distinction. In fact, there’s an easy way to think about this. Think of data classification as recognizing what your data is. It’s about understanding the sensitive information that’s present in your data. And data labeling is the simple to understand wording along with your intent for how the data should be handled. For example, a confidential/do not forward label needs no complex explanation on how you should handle the data if you’re the user. And on the backend, Purview quietly protects the data based on how you’ve define protections associated with that label, like access restrictions or watermarking. And the bonus is that this guidance and protection travels with the data. And you can set labels up in Microsoft Purview Information Protection. This lets you create sensitivity labels like these to define how different types of data should be classified. Once you’ve done that, you can configure policy protections that are triggered by those labels, such as encryption, limiting the sharing radius or visual markings, and more. And when used in tandem with DLP, you can even prevent Copilot from processing labeled content. Next, with your labels created, you can publish them so they appear in apps like Word, Excel, PowerPoint, and Outlook, and are honored across services like Fabric, Dataverse, and of course, as I mentioned, Copilot. All of what I’ve shown you is included with most versions of Microsoft 365. And with Microsoft 365 E5, you can even set up auto labeling, so Purview can apply labels automatically when it detects sensitive content. - So labels are respected across all those destinations. - That’s right, and once a label is applied, it’s recognized across supported workloads, and Purview solutions like DLP, Insider Risk Management, and more, know how to handle that data properly. So instead of stitching together separate tools, each with its own definition of sensitive data, you define sensitivity only once. And that same signal drives consistent protection wherever the data travels to. In fact, let me show you how this works in practice. So here in DLP, I’m going to create a policy based on what Purview has already automatically discovered across SharePoint and OneDrive. From the Insights card, you can see the top sensitive information types like medical, IP and trade secrets, financial data, and medical identifiers. So I’ll get started, then choose to create all of the recommended policies. Now, if I go back to my DLP policies view and look at the ones I’ve just created, you’ll see that there are four new policies. If I click in to edit one, you’ll notice that Purview has already preselected the right conditions with trainable classifiers and actions predefined for the policy. And from there, I can even add to this policy. In this case, I’ll add my confidential labels to the policy. These are the same ones I’ve shown before. So in short, classification identifies the sensitive content, the conditions being met will then trigger the corresponding policies to enforce protections. This reduces configuration effort and ensures consistency across your environment. And in Insider Risk Management, labels work as risk signals too. So here in the policy template, I’m adding a condition that focuses on activity involving items labeled confidential. And that way, if users including non-human agents, exfiltrate or misuse high-value labeled data, printing it, copying it to external storage, or sharing externally, IRM will automatically elevate their risk score based on the activities against the labeled data. So labels also help enforce adaptive protections based on the risk profile of who, whether that’s a human user or a non-human AI agent, and their activities with the data. What we call Adaptive Protection. - Okay, so now we’ve got all of our policies in place. Why don’t we see how those protections show up in the flow of work, including AI interactions? So first I’m going to upload the same file that Matt showed before, but this time, it has a confidential label applied. So when I try to share it externally, you can see that I’m blocked instantly because that label is detected right away. DLP blocks the action based on the label, and this, again, is before that file could be scanned for sensitive information. Now I’m going to switch desktops. On the left here is a window with a synced folder in File Explorer. And you can see that there are proprietary file types and CAD files like we saw before, and each are labeled but cannot be analyzed for sensitive information types or classifiers. So with the labels applied to these encrypted P files, as they are, if I do try to drag and drop a file into my removable USB driver location in the window on the right, you’ll see I get a data loss prevention notification. Now because in this case, I’m under the file count threshold that we set before in policy, I can allow or override this, but I would’ve been blocked outright if I had transferred multiple files. Now again, the labels in these uncommon file types are what triggered the data loss prevention policy. And inside of risk management, it is also watching for risky handling of labeled content. For example, I can currently access this highly confidential acquisition site and see all the documents contained within it, for the moment. That said, though, because I just attempted to copy confidential information to my external USB drive, that’s going to catch up with me and automatically change my risk profile. So now after some time has passed, if I try to access that same site, I’m blocked outright and denied access. The protection automatically adapted to my heightened risk profile and blocked the site, without the administrator even needing to take any action. And by the way, the same assessment against risk profile would happen if it was an AI agent and it tried to do the same thing. And beyond agents, why don’t we look at label protection, and how that works in general with AI. So here I’m in Copilot and I have a document uploaded to SharePoint. So I’ll prompt Copilot to summarize the file named Relecloud Acquisition, and you’ll see that Copilot will first check the user’s permissions and the presence of a label before it does anything. Now, because this document is labeled as highly confidential and we have a DLP policy in place to block Copilot from processing sensitive files, it tells me that it can’t summarize that content because of its sensitivity label. - So from creation to risky behavior and even Copilot interactions, the same sensitivity label ensures consistent protection. But the work is never really done. New data keeps coming and risk changes over time. That’s where, because you’ve already classified your data, Purview’s Data Security Posture Management, or DSPM, addresses this by continually assessing your data risk. It’s deeply integrated across Microsoft and beyond, giving you one centralized place to discover unprotected sensitive data across your entire digital estate, including select non-Microsoft services. Built-in intelligence continually assesses data risk to help you prioritize and mitigate high-risk exposures, taking advantage of recommendations where you can strengthen your policy directly from DSPM itself. AI observability features also give you granular insight into what agents are doing and any risk they may introduce. And custom reports make it easy to embed posture management into daily operations by highlighting where to improve. - And this is all built to help you then move from reactive investigation to more proactive and measurable risk reduction. - Exactly, and actually, this is just scratching the surface of what Purview can do. You can also use AI itself to manage human and AI data risk using deep-reasoning Purview agents. For example, they can triage alerts and automatically message users in Teams with the sensitive data found and the actions they need to take. - Okay, so as you saw, there are lots of ways that this layered approach goes beyond traditional DLP protection. So where can everyone who’s watching right now learn more? - Well, first, check out aka.ms/PurviewInformationProtection. Again, if you use Microsoft 365 in your organization, you’ll have Microsoft Purview today, and you can get the more advanced Purview capabilities with Microsoft 365 E5. So it’s worth exploring further. So start using unified classification and labels today. - Thanks, Matt, and thank you for joining us. Be sure to subscriber Microsoft Mechanics if you haven’t already, and we’ll see you next time.163Views0likes0CommentsI built a free, open-source M365 security assessment tool - looking for feedback
I work as an IT consultant, and a good chunk of my time is spent assessing Microsoft 365 environments for small and mid-sized businesses. Every engagement started the same way: connect to five different PowerShell modules, run dozens of commands across Entra ID, Exchange Online, Defender, SharePoint, and Teams, manually compare each setting against CIS benchmarks, then spend hours assembling everything into a report the client could actually read. The tools that automate this either cost thousands per year, require standing up Azure infrastructure just to run, or only cover one service area. I wanted something simpler: one command that connects, assesses, and produces a client-ready deliverable. So I built it. What M365 Assess does https://github.com/Daren9m/M365-Assess is a PowerShell-based security assessment tool that runs against a Microsoft 365 tenant and produces a comprehensive set of reports. Here is what you get from a single run: 57 automated security checks aligned to the CIS Microsoft 365 Foundations Benchmark v6.0.1, covering Entra ID, Exchange Online, Defender for Office 365, SharePoint Online, and Teams 12 compliance frameworks mapped simultaneously -- every finding is cross-referenced against NIST 800-53, NIST CSF 2.0, ISO 27001:2022, SOC 2, HIPAA, PCI DSS v4.0.1, CMMC 2.0, CISA SCuBA, and DISA STIG (plus CIS profiles for E3 L1/L2 and E5 L1/L2) 20+ CSV exports covering users, mailboxes, MFA status, admin roles, conditional access policies, mail flow rules, device compliance, and more A self-contained HTML report with an executive summary, severity badges, sortable tables, and a compliance overview dashboard -- no external dependencies, fully base64-encoded, just open it in any browser or email it directly The entire assessment is read-only. It never modifies tenant settings. Only Get-* cmdlets are used. A few things I'm proud of Real-time progress in the console. As the assessment runs, you see each check complete with live status indicators and timing. No staring at a blank terminal wondering if it hung. The HTML report is a single file. Logos, backgrounds, fonts -- everything is embedded. You can email the report as an attachment and it renders perfectly. It supports dark mode (auto-detects system preference), and all tables are sortable by clicking column headers. Compliance framework mapping. This was the feature that took the most work. The compliance overview shows coverage percentages across all 12 frameworks, with drill-down to individual controls. Each finding links back to its CIS control ID and maps to every applicable framework control. Pass/Fail detail tables. Each security check shows the CIS control reference, what was checked, what the expected value is, what the actual value is, and a clear Pass/Fail/Warning status. Findings include remediation descriptions to help prioritize fixes. Quick start If you want to try it out, it takes about 5 minutes to get running: # Install prerequisites (if you don't have them already) Install-Module Microsoft.Graph, ExchangeOnlineManagement -Scope CurrentUser Clone and run git clone https://github.com/Daren9m/M365-Assess.git cd M365-Assess .\Invoke-M365Assessment.ps1 The interactive wizard walks you through selecting assessment sections, entering your tenant ID, and choosing an authentication method (interactive browser login, certificate-based, or pre-existing connections). Results land in a timestamped folder with all CSVs and the HTML report. Requires PowerShell 7.x and runs on Windows (macOS and Linux are experimental -- I would love help testing those platforms). Cloud support M365 Assess works with: Commercial (global) tenants GCC, GCC High, and DoD environments If you work in government cloud, the tool handles the different endpoint URIs automatically. What is next This is actively maintained and I have a roadmap of improvements: More automated checks -- 140 CIS v6.0.1 controls are tracked in the registry, with 57 automated today. Expanding coverage is the top priority. Remediation commands -- PowerShell snippets and portal steps for each finding, so you can fix issues directly from the report. XLSX compliance matrix -- A spreadsheet export for audit teams who need to work in Excel. Standalone report regeneration -- Re-run the report from existing CSV data without re-assessing the tenant. I would love your feedback I have been building this for my own consulting work, but I think it could be useful to the broader community. If you try it, I would genuinely appreciate hearing: What checks should I prioritize next? Which security controls matter most in your environment? What compliance frameworks are most requested by your clients or auditors? How does the report land with non-technical stakeholders? Is the executive summary useful, or does it need work? macOS/Linux users -- does it run? What breaks? I have tested it on macOS, but not extensively. Bug reports, feature requests, and contributions are all welcome on GitHub. Repository: https://github.com/Daren9m/M365-Assess License: MIT (free for commercial and personal use) Runtime: PowerShell 7.x Thanks for reading. Happy to answer any questions in the comments.498Views1like1CommentSecurity as the core primitive - Securing AI agents and apps
This week at Microsoft Ignite, we shared our vision for Microsoft security -- In the agentic era, security must be ambient and autonomous, like the AI it protects. It must be woven into and around everything we build—from silicon to OS, to agents, apps, data, platforms, and clouds—and throughout everything we do. In this blog, we are going to dive deeper into many of the new innovations we are introducing this week to secure AI agents and apps. As I spend time with our customers and partners, there are four consistent themes that have emerged as core security challenges to secure AI workloads. These are: preventing agent sprawl and access to resources, protecting against data oversharing and data leaks, defending against new AI threats and vulnerabilities, and adhering to evolving regulations. Addressing these challenges holistically requires a coordinated effort across IT, developers, and security leaders, not just within security teams and to enable this, we are introducing several new innovations: Microsoft Agent 365 for IT, Foundry Control Plane in Microsoft Foundry for developers, and the Security Dashboard for AI for security leaders. In addition, we are releasing several new purpose-built capabilities to protect and govern AI apps and agents across Microsoft Defender, Microsoft Entra, and Microsoft Purview. Observability at every layer of the stack To facilitate the organization-wide effort that it takes to secure and govern AI agents and apps – IT, developers, and security leaders need observability (security, management, and monitoring) at every level. IT teams need to enable the development and deployment of any agent in their environment. To ensure the responsible and secure deployment of agents into an organization, IT needs a unified agent registry, the ability to assign an identity to every agent, manage the agent’s access to data and resources, and manage the agent’s entire lifecycle. In addition, IT needs to be able to assign access to common productivity and collaboration tools, such as email and file storage, and be able to observe their entire agent estate for risks such as over-permissioned agents. Development teams need to build and test agents, apply security and compliance controls by default, and ensure AI models are evaluated for safety guardrails and security vulnerabilities. Post deployment, development teams must observe agents to ensure they are staying on task, accessing applications and data sources appropriately, and operating within their cost and performance expectations. Security & compliance teams must ensure overall security of their AI estate, including their AI infrastructure, platforms, data, apps, and agents. They need comprehensive visibility into all their security risks- including agent sprawl and resource access, data oversharing and leaks, AI threats and vulnerabilities, and complying with global regulations. They want to address these risks by extending their existing security investments that they are already invested in and familiar with, rather than using siloed or bolt-on tools. These teams can be most effective in delivering trustworthy AI to their organizations if security is natively integrated into the tools and platforms that they use every day, and if those tools and platforms share consistent security primitives such as agent identities from Entra; data security and compliance controls from Purview; and security posture, detections, and protections from Defender. With the new capabilities being released today, we are delivering observability at every layer of the AI stack, meeting IT, developers, and security teams where they are in the tools they already use to innovate with confidence. For IT Teams - Introducing Microsoft Agent 365, the control plane for agents, now in preview The best infrastructure for managing your agents is the one you already use to manage your users. With Agent 365, organizations can extend familiar tools and policies to confidently deploy and secure agents, without reinventing the wheel. By using the same trusted Microsoft 365 infrastructure, productivity apps, and protections, organizations can now apply consistent and familiar governance and security controls that are purpose-built to protect against agent-specific threats and risks. gement and governance of agents across organizations Microsoft Agent 365 delivers a unified agent Registry, Access Control, Visualization, Interoperability, and Security capabilities for your organization. These capabilities work together to help organizations manage agents and drive business value. The Registry powered by the Entra provides a complete and unified inventory of all the agents deployed and used in your organization including both Microsoft and third-party agents. Access Control allows you to limit the access privileges of your agents to only the resources that they need and protect their access to resources in real time. Visualization gives organizations the ability to see what matters most and gain insights through a unified dashboard, advanced analytics, and role-based reporting. Interop allows agents to access organizational data through Work IQ for added context, and to integrate with Microsoft 365 apps such as Outlook, Word, and Excel so they can create and collaborate alongside users. Security enables the proactive detection of vulnerabilities and misconfigurations, protects against common attacks such as prompt injections, prevents agents from processing or leaking sensitive data, and gives organizations the ability to audit agent interactions, assess compliance readiness and policy violations, and recommend controls for evolving regulatory requirements. Microsoft Agent 365 also includes the Agent 365 SDK, part of Microsoft Agent Framework, which empowers developers and ISVs to build agents on their own AI stack. The SDK enables agents to automatically inherit Microsoft's security and governance protections, such as identity controls, data security policies, and compliance capabilities, without the need for custom integration. For more details on Agent 365, read the blog here. For Developers - Introducing Microsoft Foundry Control Plane to observe, secure and manage agents, now in preview Developers are moving fast to bring agents into production, but operating them at scale introduces new challenges and responsibilities. Agents can access tools, take actions, and make decisions in real time, which means development teams must ensure that every agent behaves safely, securely, and consistently. Today, developers need to work across multiple disparate tools to get a holistic picture of the cybersecurity and safety risks that their agents may have. Once they understand the risk, they then need a unified and simplified way to monitor and manage their entire agent fleet and apply controls and guardrails as needed. Microsoft Foundry provides a unified platform for developers to build, evaluate and deploy AI apps and agents in a responsible way. Today we are excited to announce that Foundry Control Plane is available in preview. This enables developers to observe, secure, and manage their agent fleets with built-in security, and centralized governance controls. With this unified approach, developers can now identify risks and correlate disparate signals across their models, agents, and tools; enforce consistent policies and quality gates; and continuously monitor task adherence and runtime risks. Foundry Control Plane is deeply integrated with Microsoft’s security portfolio to provide a ‘secure by design’ foundation for developers. With Microsoft Entra, developers can ensure an agent identity (Agent ID) and access controls are built into every agent, mitigating the risk of unmanaged agents and over permissioned resources. With Microsoft Defender built in, developers gain contextualized alerts and posture recommendations for agents directly within the Foundry Control Plane. This integration proactively prevents configuration and access risks, while also defending agents from runtime threats in real time. Microsoft Purview’s native integration into Foundry Control Plane makes it easy to enable data security and compliance for every Foundry-built application or agent. This allows Purview to discover data security and compliance risks and apply policies to prevent user prompts and AI responses from safety and policy violations. In addition, agent interactions can be logged and searched for compliance and legal audits. This integration of the shared security capabilities, including identity and access, data security and compliance, and threat protection and posture ensures that security is not an afterthought; it’s embedded at every stage of the agent lifecycle, enabling you to start secure and stay secure. For more details, read the blog. For Security Teams - Introducing Security Dashboard for AI - unified risk visibility for CISOs and AI risk leaders, coming soon AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks, such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions across their entire AI estate, spanning AI platforms, apps, and agents. 90% of security professionals, including CISOs, report that their responsibilities have expanded to include data governance and AI oversight within the past year. 1 At the same time, 86% of risk managers say disconnected data and systems lead to duplicated efforts and gaps in risk coverage. 2 To address these needs, we are excited to introduce the Security Dashboard for AI. This serves as a unified dashboard that aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview. This unified dashboard allows CISOs and AI risk leaders to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. For example, you can see your full AI inventory and get visibility into a quarantined agent, flagged for high data risk due to oversharing sensitive information in Purview. The dashboard then correlates that signal with identity insights from Entra and threat protection alerts from Defender to provide a complete picture of exposure. From there, you can delegate tasks to the appropriate teams to enforce policies and remediate issues quickly. With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, there’s nothing new to buy. If you’re already using Microsoft security products to secure AI, you’re already a Security Dashboard for AI customer. Figure 5: Security Dashboard for AI provides CISOs and AI risk leaders with a unified view of their AI risk by bringing together their AI inventory, AI risk, and security recommendations to strengthen overall posture Together, these innovations deliver observability and security across IT, development, and security teams, powered by Microsoft’s shared security capabilities. With Microsoft Agent 365, IT teams can manage and secure agents alongside users. Foundry Control Plane gives developers unified governance and lifecycle controls for agent fleets. Security Dashboard for AI provides CISOs and AI risk leaders with a consolidated view of AI risks across platforms, apps, and agents. Added innovation to secure and govern your AI workloads In addition to the IT, developer, and security leader-focused innovations outlined above, we continue to accelerate our pace of innovation in Microsoft Entra, Microsoft Purview, and Microsoft Defender to address the most pressing needs for securing and governing your AI workloads. These needs are: Manage agent sprawl and resource access e.g. managing agent identity, access to resources, and permissions lifecycle at scale Prevent data oversharing and leaks e.g. protecting sensitive information shared in prompts, responses, and agent interactions Defend against shadow AI, new threats, and vulnerabilities e.g. managing unsanctioned applications, preventing prompt injection attacks, and detecting AI supply chain vulnerabilities Enable AI governance for regulatory compliance e.g. ensuring AI development, operations, and usage comply with evolving global regulations and frameworks Manage agent sprawl and resource access 76% of business leaders expect employees to manage agents within the next 2–3 years. 3 Widespread adoption of agents is driving the need for visibility and control, which includes the need for a unified registry, agent identities, lifecycle governance, and secure access to resources. Today, Microsoft Entra provides robust identity protection and secure access for applications and users. However, organizations lack a unified way to manage, govern, and protect agents in the same way they manage their users. Organizations need a purpose-built identity and access framework for agents. Introducing Microsoft Entra Agent ID, now in preview Microsoft Entra Agent ID offers enterprise-grade capabilities that enable organizations to prevent agent sprawl and protect agent identities and their access to resources. These new purpose-built capabilities enable organizations to: Register and manage agents: Get a complete inventory of the agent fleet and ensure all new agents are created with an identity built-in and are automatically protected by organization policies to accelerate adoption. Govern agent identities and lifecycle: Keep the agent fleet under control with lifecycle management and IT-defined guardrails for both agents and people who create and manage them. Protect agent access to resources: Reduce risk of breaches, block risky agents, and prevent agent access to malicious resources with conditional access and traffic inspection. Agents built in Microsoft Copilot Studio, Microsoft Foundry, and Security Copilot get an Entra Agent ID built-in at creation. Developers can also adopt Entra Agent ID for agents they build through Microsoft Agent Framework, Microsoft Agent 365 SDK, or Microsoft Entra Agent ID SDK. Read the Microsoft Entra blog to learn more. Prevent data oversharing and leaks Data security is more complex than ever. Information Security Media Group (ISMG) reports that 80% of leaders cite leakage of sensitive data as their top concern. 4 In addition to data security and compliance risks of generative AI (GenAI) apps, agents introduces new data risks such as unsupervised data access, highlighting the need to protect all types of corporate data, whether it is accessed by employees or agents. To mitigate these risks, we are introducing new Microsoft Purview data security and compliance capabilities for Microsoft 365 Copilot and for agents and AI apps built with Copilot Studio and Microsoft Foundry, providing unified protection, visibility, and control for users, AI Apps, and Agents. New Microsoft Purview controls safeguard Microsoft 365 Copilot with real-time protection and bulk remediation of oversharing risks Microsoft Purview and Microsoft 365 Copilot deliver a fully integrated solution for protecting sensitive data in AI workflows. Based on ongoing customer feedback, we’re introducing new capabilities to deliver real-time protection for sensitive data in M365 Copilot and accelerated remediation of oversharing risks: Data risk assessments: Previously, admins could monitor oversharing risks such as SharePoint sites with unprotected sensitive data. Now, they can perform item-level investigations and bulk remediation for overshared files in SharePoint and OneDrive to quickly reduce oversharing exposure. Data Loss Prevention (DLP) for M365 Copilot: DLP previously excluded files with sensitivity labels from Copilot processing. Now in preview, DLP also prevents prompts that include sensitive data from being processed in M365 Copilot, Copilot Chat, and Copilot agents, and prevents Copilot from using sensitive data in prompts for web grounding. Priority cleanup for M365 Copilot assets: Many organizations have org-wide policies to retain or delete data. Priority cleanup, now generally available, lets admins delete assets that are frequently processed by Copilot, such as meeting transcripts and recordings, on an independent schedule from the org-wide policies while maintaining regulatory compliance. On-demand classification for meeting transcripts: Purview can now detect sensitive information in meeting transcripts on-demand. This enables data security admins to apply DLP policies and enforce Priority cleanup based on the sensitive information detected. & bulk remediation Read the full Data Security blog to learn more. Introducing new Microsoft Purview data security capabilities for agents and apps built with Copilot Studio and Microsoft Foundry, now in preview Microsoft Purview now extends the same data security and compliance for users and Copilots to agents and apps. These new capabilities are: Enhanced Data Security Posture Management: A centralized DSPM dashboard that provides observability, risk assessment, and guided remediation across users, AI apps, and agents. Insider Risk Management (IRM) for Agents: Uniquely designed for agents, using dedicated behavioral analytics, Purview dynamically assigns risk levels to agents based on their risky handing of sensitive data and enables admins to apply conditional policies based on that risk level. Sensitive data protection with Azure AI Search: Azure AI Search enables fast, AI-driven retrieval across large document collections, essential for building AI Apps. When apps or agents use Azure AI Search to index or retrieve data, Purview sensitivity labels are preserved in the search index, ensuring that any sensitive information remains protected under the organization’s data security & compliance policies. For more information on preventing data oversharing and data leaks - Learn how Purview protects and governs agents in the Data Security and Compliance for Agents blog. Defend against shadow AI, new threats, and vulnerabilities AI workloads are subject to new AI-specific threats like prompt injections attacks, model poisoning, and data exfiltration of AI generated content. Although security admins and SOC analysts have similar tasks when securing agents, the attack methods and surfaces differ significantly. To help customers defend against these novel attacks, we are introducing new capabilities in Microsoft Defender that deliver end-to-end protection, from security posture management to runtime defense. Introducing Security Posture Management for agents, now in preview As organizations adopt AI agents to automate critical workflows, they become high-value targets and potential points of compromise, creating a critical need to ensure agents are hardened, compliant, and resilient by preventing misconfigurations and safeguarding against adversarial manipulation. Security Posture Management for agents in Microsoft Defender now provides an agent inventory for security teams across Microsoft Foundry and Copilot Studio agents. Here, analysts can assess the overall security posture of an agent, easily implement security recommendations, and identify vulnerabilities such as misconfigurations and excessive permissions, all aligned to the MITRE ATT&CK framework. Additionally, the new agent attack path analysis visualizes how an agent’s weak security posture can create broader organizational risk, so you can quickly limit exposure and prevent lateral movement. Introducing Threat Protection for agents, now in preview Attack techniques and attack surfaces for agents are fundamentally different from other assets in your environment. That’s why Defender is delivering purpose-built protections and detections to help defend against them. Defender is introducing runtime protection for Copilot Studio agents that automatically block prompt injection attacks in real time. In addition, we are announcing agent-specific threat detections for Copilot Studio and Microsoft Foundry agents coming soon. Defender automatically correlates these alerts with Microsoft’s industry-leading threat intelligence and cross-domain security signals to deliver richer, contextualized alerts and security incident views for the SOC analyst. Defender’s risk and threat signals are natively integrated into the new Microsoft Foundry Control Plane, giving development teams full observability and the ability to act directly from within their familiar environment. Finally, security analysts will be able to hunt across all agent telemetry in the Advanced Hunting experience in Defender, and the new Agent 365 SDK extends Defender’s visibility and hunting capabilities to third-party agents, starting with Genspark and Kasisto, giving security teams even more coverage across their AI landscape. To learn more about how you can harden the security posture of your agents and defend against threats, read the Microsoft Defender blog. Enable AI governance for regulatory compliance Global AI regulations like the EU AI Act and NIST AI RMF are evolving rapidly; yet, according to ISMG, 55% of leaders report lacking clarity on current and future AI regulatory requirements. 5 As enterprises adopt AI, they must ensure that their AI innovation aligns with global regulations and standards to avoid costly compliance gaps. Introducing new Microsoft Purview Compliance Manager capabilities to stay ahead of evolving AI regulations, now in preview Today, Purview Compliance Manager provides over 300 pre-built assessments for common industry, regional, and global standards and regulations. However, the pace of change for new AI regulations requires controls to be continuously re-evaluated and updated so that organizations can adapt to ongoing changes in regulations and stay compliant. To address this need, Compliance Manager now includes AI-powered regulatory templates. AI-powered regulatory templates enable real-time ingestion and analysis of global regulatory documents, allowing compliance teams to quickly adapt to changes as they happen. As regulations evolve, the updated regulatory documents can be uploaded to Compliance Manager, and the new requirements are automatically mapped to applicable recommended actions to implement controls across Microsoft Defender, Microsoft Entra, Microsoft Purview, Microsoft 365, and Microsoft Foundry. Automated actions by Compliance Manager further streamline governance, reduce manual workload, and strengthen regulatory accountability. Introducing expanded Microsoft Purview compliance capabilities for agents and AI apps now in preview Microsoft Purview now extends its compliance capabilities across agent-generated interactions, ensuring responsible use and regulatory alignment as AI becomes deeply embedded across business processes. New capabilities include expanded coverage for: Audit: Surface agent interactions, lifecycle events, and data usage with Purview Audit. Unified audit logs across user and agent activities, paired with traceability for every agent using an Entra Agent ID, support investigation, anomaly detection, and regulatory reporting. Communication Compliance: Detect prompts sent to agents and agent-generated responses containing inappropriate, unethical, or risky language, including attempts to manipulate agents into bypassing policies, generating risky content, or producing noncompliant outputs. When issues arise, data security admins get full context, including the prompt, the agent’s output, and relevant metadata, so they can investigate and take corrective action Data Lifecycle Management: Apply retention and deletion policies to agent-generated content and communication flows to automate lifecycle controls and reduce regulatory risk. Read about Microsoft Purview data security for agents to learn more. Finally, we are extending our data security, threat protection, and identity access capabilities to third-party apps and agents via the network. Advancing Microsoft Entra Internet Access Secure Web + AI Gateway - extend runtime protections to the network, now in preview Microsoft Entra Internet Access, part of the Microsoft Entra Suite, has new capabilities to secure access to and usage of GenAI at the network level, marking a transition from Secure Web Gateway to Secure Web and AI Gateway. Enterprises can accelerate GenAI adoption while maintaining compliance and reducing risk, empowering employees to experiment with new AI tools safely. The new capabilities include: Prompt injection protection which blocks malicious prompts in real time by extending Azure AI Prompt Shields to the network layer. Network file filtering which extends Microsoft Purview to inspect files in transit and prevents regulated or confidential data from being uploaded to unsanctioned AI services. Shadow AI Detection that provides visibility into unsanctioned AI applications through Cloud Application Analytics and Defender for Cloud Apps risk scoring, empowering security teams to monitor usage trends, apply Conditional Access, or block high-risk apps instantly. Unsanctioned MCP server blocking prevents access to MCP servers from unauthorized agents. With these controls, you can accelerate GenAI adoption while maintaining compliance and reducing risk, so employees can experiment with new AI tools safely. Read the Microsoft Entra blog to learn more. As AI transforms the enterprise, security must evolve to meet new challenges—spanning agent sprawl, data protection, emerging threats, and regulatory compliance. Our approach is to empower IT, developers, and security leaders with purpose-built innovations like Agent 365, Foundry Control Plane, and the Security Dashboard for AI. These solutions bring observability, governance, and protection to every layer of the AI stack, leveraging familiar tools and integrated controls across Microsoft Defender, Microsoft Entra, and Microsoft Purview. The future of security is ambient, autonomous, and deeply woven into the fabric of how we build, deploy, and govern AI systems. Explore additional resources Learn more about Security for AI solutions on our webpage Learn more about Microsoft Agent 365 Learn more about Microsoft Entra Agent ID Get started with Microsoft 365 Copilot Get started with Microsoft Copilot Studio Get started with Microsoft Foundry Get started with Microsoft Defender for Cloud Get started with Microsoft Entra Get started with Microsoft Purview Get started with Microsoft Purview Compliance Manager Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial 1 Bedrock Security, 2025 Data Security Confidence Index, published Mar 17, 2025. 2 AuditBoard & Ascend2, Connected Risk Report 2024; as cited by MIT Sloan Management Review, Spring 2025. 3 KPMG AI Quarterly Pulse Survey | Q3 2025. September 2025. n= 130 U.S.-based C-suite and business leaders representing organizations with annual revenue of $1 billion or more 4 First Annual Generative AI study: Business Rewards vs. Security Risks, , Q3 2023, ISMG, N=400 5 First Annual Generative AI study: Business Rewards vs. Security Risks, Q3 2023, ISMG, N=400Guidance: Sensitivity Labels during Mergers & Acquisitions (separate tenants, non-M365, etc.)
We’re building an internal playbook for how to handle Microsoft Purview sensitivity labels during mergers and acquisitions, and I’d really appreciate any lessons learned or best practices. Specifically, I’m interested in how others have handled: Acquired organizations on a separate Microsoft 365/O365 tenant for an extended period (pre- and post-close): How did you handle “Internal Only” content when the two tenants couldn’t fully trust each other yet? Any tips to reduce friction for collaboration between tenants during the transition? Existing label structures, such as: We use labels like “All Internal Only” and labels with user-defined permissions — has anyone found good patterns for mapping or reconciling these with another company’s labels? What if the acquired company is already using sensitivity labels with a different taxonomy? How did you rationalize or migrate them? Acquisitions where the target does not use Microsoft 365 (for example, Google Workspace, on-prem, or other platforms): Any strategies for protecting imported content with labels during or after migration? Gotchas around legacy permissions versus label-based protections? General pitfalls or watch-outs between deal close and full migration: Anything you wish you had known before your first M&A with Purview labels in play? Policies or configurations you’d recommend setting (or avoiding) during the interim period? Any examples, war stories, or template approaches you’re willing to share would be incredibly helpful as we shape our playbook. Thanks in advance for any insights!100Views0likes1CommentData Security Investigations in Microsoft Purview
Search across massive volumes of files using natural language, pinpoint the highest risk content, and connect it to user activity to see the full scope of an incident. Investigate and act in one workflow. Analyze content deeply across files, emails, and AI interactions, uncover hidden or unclassified sensitive data, and contain exposure fast. Proactively identify risks, respond to incidents with clarity, and mitigate impact before it spreads. Christophe Fiessinger, Microsoft Purview Principal Squad Leader, joins Jeremy Chapman to walk through real-world investigation workflows — from scoping and analysis to mitigation and automation — so you can move faster and make more informed security decisions. Pinpoint high-risk files. Locate files hidden among hundreds of confidential documents using contextual search. See how Microsoft Purview Data Security Investigations works. Search thousands of files in seconds. Use natural language queries to uncover relevant sensitive data. Get started with Microsoft Purview Data Security Investigations. Contain data leaks immediately. Purge exposed files while retaining investigation evidence. Take action with Microsoft Purview Data Security Investigations. QUICK LINKS: 00:00 — Keep data safe with DSI 01:26 — Connect dots between data risk & impact 02:47 — Built-in AI 03:47 — Work across the full lifecycle of an incident 04:56 — Create an investigation 06:36 — Deep search and analysis 09:03 — How DSI helps data leaks 10:40 — Contain risk with built-in mitigation 11:32 — Automate using agents 13:23 — Estimator tool 14:57 — Wrap up Link References As a Microsoft Purview admin, just go to https://purview.microsoft.com/dsi Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast Keep getting this insider knowledge, join us on social: Follow us on Twitter: https://twitter.com/MSFTMechanics Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics Video Transcript: - If you’ve ever had to respond to a major data breach, insider-driven data theft, or even a suspicious leak involving high-value information, you know the hardest part isn’t just detecting the activity, it’s understanding what data was actually taken, how valuable it is, and what risks that creates to your organization. Today we’re going to show you how the now generally available Microsoft Purview Data Security Investigations, or DSI, dramatically accelerates that process using AI to read and analyze and connect the dots fast at massive scale. I’m joined by Christophe Fiessinger from the Microsoft Purview team to demonstrate more. Welcome. - Thanks, Jeremy. Happy to be here. - Thanks so much for joining us today. So most IT teams that I speak to, they’re often using things like SIEMS or incident management tools that connect activity across compromised accounts, devices, and files when they’re responding to things like security events. But these tools, they rarely reveal what’s affected in terms of the files and what’s contained in them. They might show labels, they might show file names or basic metadata like the location or the owner. - Exactly. Beyond labels on metadata, it’s all about context. Metadata gives you the file name, classification might tell you it’s a financial document, and the label might say it’s confidential, but traditional tools can’t really tell you what’s in the content and how much risk it exposes. They just tag the content, they don’t explain it. - So how does DSI then change things? - So DSI on the other end doesn’t just say it’s a confidential financial document. In fact, you might have hundreds of those. Instead, it actually reads and understand each file and the data risks they pose. So of the hundred or so finance documents classified confidential, it can find the one file that carried an existential threat to your company, like the one that contains your entire customer list with the unique credentials that each customer uses to log in your online service. In DSI, that level of insight comes from hybrid vector search and generative AI working together. Hybrid vector search can pick up on semantically similar items, synonyms, or the subtle ways people hide sensitive information while also matching precise text strings like code names or account numbers. In short, it finds the right files by combining context with keyword precision, then generative AI takes over and actually analyzes those files. It performs deep content analysis to uncover sensitive data, security risk, and relationship hidden inside the impacted document. - So it’s removing a ton of manual effort by connecting the dots around the data risk and also its impact. - That’s right. DSI helps you rapidly understand and mitigate the downstream impact. You can start large-scale data investigation and use natural language search to find and narrow in on impact data. From there, you can leverage our powerful built-in AI to deeply analyze content, files, email, team messages, and even review and analyze prompts and responses from AI apps and agents, built-in Microsoft Foundry, Copilot Studio, as well as non-Microsoft agents and apps at scale. DSI is able to establish the context around information and even detect obscure sensitive information that might not have been flagged. It can reason over dozens of major world languages with production-grade quality. And it can directly mitigate identified risk. For example, a specific high value content has been distributed to multiple users. You can purge every instance of those files. With DSI, you can also work on data investigations more efficiently across the full lifecycle of an incident with the rest of your team. As part of Microsoft Purview, you can trigger investigation directly from Data Security Posture Management to dig deeper into data that’s at risk and see how valuable it is. And in Insider Risk Management where you might want to understand larger sets of data being used by risky users or agents. Equally, DSI also provides a useful bridge to your security operations team who can start DSI investigations directly from Microsoft Defender XDR. And because DSI is now integrated with the Microsoft Sentinel graph, data security analysts can connect at-risk information to the activities around it, who accessed it, where it was shared, whether behaviors like compromised sessions or impossible travel were involved, and visually correlate risky content, users, and their activities. It automatically combines unified audit logs, Entra audit logs, and threat intelligence which would otherwise need to be manually correlated. - That’s a really powerful solution. Can you show us an example of an investigation? - Let me show you Data Security Investigations and where to quickly find all your current and future investigations. From the main Data Security Investigations overview, you’ll find everything you need to get started. identifying content, analyzing deeply what’s contained in that content, and mitigating risk, as well as access to all of your previous investigation so you can quickly pick up where you’ve left off and create new investigation from here. You can start an investigation in a few ways. Sometimes proactively using DSI to assess potential data secure risk or other times reactively like when you already know data is leaked and you need to investigate the breach. In this case, I’m going to start this investigation from Data Security Posture Management to get ahead of data risk in our environment. One of the most common types of data leaks is exfiltration of confidential information. Like if an employee moves on to a competitor with trade secrets or a seller wants to bring their client list their new job. Here I can see a recommended objective to prevent exfiltration of risky destinations. Once I click to view objectives, I can see the amount of data exfiltrated, top sources, as well as file types, and I can see an action to create a new investigation using DSI. Here I just need to give it a name, then provide some context about what I’m trying to do in this investigation like, “I’m looking into confidential data that may have been exfiltrated from my organization. I’m specifically looking for confidential and proprietary information about Project Obsidian, the new release we’re working on.” Now I’ll confirm and create the investigation. From here, I can put in the rest of the parameters for deeper search and analysis. In the investigation, I can see a summary about the investigation and from here I can refine the search scope and make change to the date range and people if I want, which will keep things more efficient. And if I need to, I can always add more data sources to the scope. I’ll keep the data source as is and hit add to scope. This grabs the content from the data source and into our investigation. Now I can further analyze the data and I can use a natural language query. And as mentioned DSI will analyze thousands of languages as part of the process. There are a few intelligent search suggestions, but I’m going to do my own search for “information disclosed to customers about project obsidian.” And in just a few seconds I’ll get information assessing exactly what I’m looking for based on my search criteria. It finds over a thousand items with a lot of different languages represented as you can see. On the left, the AI also suggests content categories based on the executed vector search so that it’s easy to organize and make sense of the amount of risk per category. So I’ll filter all those files down to using the obsidian category, and there they are. From here I can select which ones I want to deeply analyze. I’ll choose all of them in this case and hit examine. And here to choose the focus area for the investigation, I can look for credentials, analyze risk, and get mitigation recommendations. I’m going to choose risk in my case so that I can act quickly to contain the risk and hit examine one more time to kick up the process. As it works, I can view its details. This is where AI runs deep content analysis against all the content in these files by looking at the file content itself. This goes beyond common sensitive information types and trainable classifier matches. And depending on the number and size of the files that you have in scope for this, it could take a few moments to run. And you’ll see that it found relevant results each with an assessment, if it’s privileged content, and overall security risk scores and a risk explanation. I can drill into any of these to preview the content in line like this Microsoft 365 Copilot chat message. Moving back, I can also see other risk scores and explanations for credentials on the right-hand columns. - So DSI in this case uncovered a lot of what we call dark data. These are files that were never classified, which is great then for getting ahead of risk, but leaks do eventually happen. And when they do, we need a way to see exactly what got out and how we contain it. - That has happened pretty often, unfortunately. Let me show you a case where credentials were leaked externally as part of a security breach and I had DSI helped. And to show you the integration for SecOps teams with Microsoft Defender XDR, I’ll start from an active incident for data exfiltration in this case. In the incident view, you get the high-level signals, the attack timeline, which users on device were hit, and the file names involved. But we still don’t know what was actually inside those files and what earlier activities might have set up the attack or created additional risk across other files. So from the action menu, I’ll create a DSI investigation right from this open incident to find out more about the content in those files. Here I just need to give it a name, then also paste it in a description and some additional context like I did before for the AI. Then I’ll create the investigation and then it links me directly to an investigation in Microsoft Purview. Like before, I can see a summary and refine the search scope if I want. This time I’m going to fast forward a few steps for scoping the data source and examining the content and just go right to the examination results. Here you can see the subject or title of each item, extracted credentials, including usernames, passwords, and more, credential types including API tokens and MFA, a surrounding snippet or the text around the credential details for context, and the thought process with a summary of the AI reasoning. Next, I also want to show the built-in mitigation. We can actually purge the sensitive files that were forwarded around by email to contain the damage without touching the original copy so we’ll keep the evidence. From the results, I’ll select the items I want, then I’ll choose add to mitigation which will in turn create a list of files and messages containing those credentials. From the list I’ll select purge queue, then view the messages and run the purge where I can choose from a recoverable soft purge or permanent deletion with a hard purge. I’ll keep the default and confirm the purge. Then all the information matching that query will be deleted in minutes. And since these files are part of the investigation, they stay retained for review but are hidden from end users. And safeguards like in-place holds for eDiscovery still work normally so protected files aren’t removed. - Okay, so far we’ve defined all the investigations up front. Is there maybe a way to automate the process using agents? - Absolutely. We’re adding new capabilities to help tackle a major hidden risk, credentials buried in everyday files. While Microsoft Purview DLP protects credentials in real time as files are created or shared, the Data Security Posture Agent powered by Security Copilot helps security teams identify and prioritize credential-related risks across scope data allocations. Here you can see that I’ve already enabled the agent and there’s a few tasks in progress. These can be started manually or run on a schedule. I’ll start a new assignment for this agent and create a credential scanning task. We’ll be adding our task types to this over time. I can give it a name or keep what’s there. Then add some additional context, in this case, to look for credentials and passwords. Then I can view its progress as it completes scanning data locations, access patterns, analyzing risky documents, and generating the report. The agent works autonomously scanning thousands of locations and potentially millions of files. I’m going to move over to a scan I ran earlier to save some time. Once the agent completes its scan, you’ll see a prioritized list of exposed credentials such as passwords, API keys, encryption keys, tokens, and more, each with a risk score and the agent’s reasoning. From there, I can group the results into categories, then filter for the highest risk credentials. For each credential found, I can explore the details of the credential itself plus its surrounding context. - It’s a huge advantage really to run these types of credential scans at scale to catch those risks. But why don’t we switch gears though for the human-led investigations. DSI is using pay-as-you-go billing, which, you know, if people are watching this, they’re probably wondering, how do I keep these investigations in check without breaking the bank? - So cost, as you say, are usage based and billed through Azure. They’re going to vary depending on the size and complexity of your investigation. So we’ve introduced a new estimator tool to help. Before I go there, as a baseline to see the compute unit I’ve been showing until now, I’ll start in the pay-as-you-go dashboard in DSI, and then filter by our last investigation. This one only used about 250 megabyte and 109 DSI compute unit, which is quite conservative. So let’s go back to the DSI overview tab and scroll down to our new estimate cost tool. This lets you input key values like investigation size and gigabytes and the number of vector searches, and it will estimate cost based on what you enter. It shows you the cost breakdown by types for size and AI usage. And the last related control I want to show you is in Azure Cost Management, where like any other Azure services, you can see forecast and accumulated costs. I’ll filter this by my DSI shared view. In this chart, you’ll see the investigation compute and gigabytes by day as well as a forecast. So, voila, you’ve got what what you need to investigate deeply with AI and keep costs in check while staying ahead of incidents. And we’re only getting started. More integration, smarter AI, new mitigation actions, and more agentic workflows are on the way. - Thanks so much for joining us today, Christophe. And if you want to learn more about DSI and try it out for yourself. As a Microsoft Purview admin, just go to purview.microsoft.com/dsi. And keep watching Microsoft Mechanics for the latest updates. We’ll see you again soon.215Views0likes0Comments