Data Loss Prevention

Copilot DLP Policy Licensing
Hi everyone, we are currently preparing our tenant for a broader Microsoft 365 Copilot rollout, and in preparation we were in the process of hardening our SharePoint files to ensure that sensitive information stays protected. Our original idea was to launch sensitivity labels together with a Purview data loss prevention policy that excludes Copilot from accessing and using files that have confidential sensitivity labels. Some weeks ago, when I did an initial setup, everything worked just fine and I was able to create the aforementioned custom DLP policy. However, when I checked the previously created DLP policy a few days back, the action to block Copilot was gone and the button to add a new action in the custom policy is greyed out. I assume that between the initial setup and me checking the policy, Microsoft must have moved the feature out of our licensing plan (Microsoft 365 E3 & Copilot). Now my question is what the best licensing options would be on top of our existing E3 licences. For cost reasons, a switch to Microsoft 365 E5 is not an option, as we have the E3 licences through benefits. Thanks!

(Solved)

Teams Private Channels Reengineered: Compliance & Data Security Actions Needed by Sept 20, 2025
You may have missed this critical update, as it was published only on the Microsoft Teams blog and flagged as a Teams change in the Message Center under MC1134737. However, it represents a complete reengineering of how private channel data is stored and managed, with direct implications for Microsoft Purview compliance policies, including eDiscovery, Legal Hold, Data Loss Prevention (DLP), and Retention.

🔗 Read the official blog post here: New enhancements in Private Channels in Microsoft Teams unlock their full potential | Microsoft Community Hub

What's Changing? A Shift from User to Group Mailboxes

Historically, private channel data was stored in individual user mailboxes, requiring compliance and security policies to be scoped at the user level. Starting September 20, 2025, Microsoft is reengineering this model:

- Private channels will now use dedicated group mailboxes tied to the team's Microsoft 365 group.
- Compliance and security policies must be applied to the team's Microsoft 365 group, not just individual users.
- Existing user-level policies will not govern new private channel data post-migration.

This change aligns private channels with how shared channels are managed, streamlining policy enforcement but requiring manual updates to ensure coverage.

Why This Matters for Data Security and Compliance Admins

If your organization uses Microsoft Purview for eDiscovery, Legal Hold, Data Loss Prevention (DLP), or Retention policies, you must review and update your Purview eDiscovery and legal holds, DLP, and retention policies. Without action, new private channel data may fall outside existing policy coverage, especially if your current policies are not already scoped to the team's group. This could lead to significant data security, governance, and legal risks.

Action Required by September 20, 2025

Before migration begins:

- Review all Purview policies related to private channels.
- Apply policies to the team's Microsoft 365 group to ensure continuity.
- Update eDiscovery searches to include both user and group mailboxes.
- Modify DLP scopes to include the team's group.
- Align retention policies with the team's group settings.

Migration Timeline

Migration begins September 20, 2025, and continues through December 2025; timing may vary by tenant. A PowerShell command will be released to help track migration status per tenant. I recommend keeping track of any additional announcements in the Message Center.

Retired: The Data Loss Prevention Ninja Training is here!
August 2025: New Ninja training can be found at https://aka.ms/DLPNinja

RETIRED July 2025: Under Construction for new hosting location

The Microsoft Purview Data Loss Prevention Ninja Training is here! We are very excited and pleased to announce this rendition of the Ninja Training Series. Our team has been working diligently to get this content published, and the overall purpose of the Microsoft Purview Data Loss Prevention Ninja training is to help you master this realm. We aim to give you up-to-date links to the community blogs, training videos, interactive guides, learning paths, and any other relevant documentation. To make it easier for you to start and advance your knowledge gradually without throwing you in deep waters, we split the content in each offering into three levels: beginner, intermediate, and advanced. Please find the Microsoft Purview Information Protection Ninja Training here.

In addition, after each section there will be a knowledge check based on the training material you've just finished. Since there's a lot of content, the goal of these knowledge checks is to help you determine whether you picked up a few of the major key takeaways. There'll be a fun certificate issued at the end of the training. Disclaimer: this is NOT an official Microsoft certification and only acts as a way of recognizing your participation in this training content.

Lastly, this training will be updated one to two times a year to ensure you all have the latest and greatest material! If there's any topic you'd like us to include, or you have any thoughts on this training, please let us know what you think below in the comments!
Legend/Acronyms

(D) Microsoft Documentation
(V) Video
(B) Blog
(P) PDF
(S) Site
(IG) Interactive Guide
(LP) Learning Path
(SBD) Scenario Based Demo (Video)
(DAG) Deployment Acceleration Guide

MIP: Microsoft Information Protection (old terminology for Microsoft Purview Information Protection)
AIP: Azure Information Protection
ULC: Unified Labeling Client
SIT: Sensitive Information Type
RBAC: Role-based access control
eDLP: Endpoint DLP
OME: Office 365 Message Encryption
EDM: Exact Data Match
DLP: Data Loss Prevention
SPO: SharePoint Online
OCR: Optical character recognition
MCAS: Microsoft Cloud App Security (old terminology for Microsoft Defender for Cloud Apps)
TC: Trainable Classifiers
ODSP: OneDrive and SharePoint
EXO: Exchange Online

Microsoft Purview Data Loss Prevention (DLP)

Microsoft's DLP solution provides a broad range of capabilities to address the modern workplace and the unique challenges represented by these very different scenarios. One of the key investment areas is providing a unified and comprehensive solution across the many different kinds of environments and services where sensitive data is stored, used or shared. This includes platforms native to Microsoft as well as non-Microsoft services and apps.

Beginner Training

Public forums to contact the overall information protection team: Yammer, Tech Community

Introducing Microsoft Purview (V)
In this video, hear from Microsoft executives on this new product family and our vision for the future of data governance.

Introduction to Microsoft Purview Data Loss Prevention (V)
In this video, you'll find an overview of Microsoft Purview Data Loss Prevention.

Quick overview on new Exchange DLP Predicates (V)
This video provides a quick walkthrough of creating an Exchange DLP policy with a soft focus on the new predicates and actions.

Microsoft Purview Information Protection Framework (D)
Check out the above documentation to see how Microsoft Purview Information Protection uses three pillars to deploy an information protection solution.
Protect Data with Zero Trust (LP)
Zero Trust isn't a tool or product; it's an essential security strategy with data at its core. Here, you'll learn how to identify and protect your data using a Zero Trust approach.

Learn about data loss prevention (D)
Learn about DLP basics and Microsoft Unified DLP and why it's uniquely positioned to protect your data in the cloud.

How to secure your data with Microsoft Security (V)
The above video is a quick summary of how to protect your data.

Microsoft Purview Information Protection and Data Loss Prevention Roadmap (S)
Please check out the above site for the latest items on our public roadmap.

Microsoft Purview Information Protection support for PDF and GitHub (V) and Ignite Conversation (V)
The above videos walk through announcements regarding support for PDF and GitHub.

Microsoft Defender for Cloud Apps integration (D)
Please visit the above documentation to learn more about how Microsoft Purview Information Protection integrates with Microsoft Defender for Cloud Apps.

Trainable Classifiers (D)
Check out the documentation to create custom trainable classifiers.

Retrain a classifier in content explorer (D)
The above documentation shows you how to improve the performance of custom trainable classifiers by providing them more feedback.

Explain data loss prevention reporting capabilities (LP)
The above learning path walks you through reporting in the Microsoft Purview Compliance Portal.

Review and analyze data loss prevention reports (LP)
The above learning path walks you through analyzing reports in the Microsoft Purview Compliance Portal.

Beginner Knowledge Check

Intermediate Training

Microsoft Compliance Extension for Chrome (B) aka Microsoft Purview Extension (D)
Please check out the above blog and Microsoft Doc to understand what we're doing to expand our DLP capabilities to Chrome.

Microsoft Purview extension for Firefox (D)
The above documentation details procedures to roll out the Microsoft Purview extension for Firefox.
Data Loss Prevention and Endpoint DLP (V)
This video details how Microsoft approaches information protection across files, emails, Teams, endpoints and more.

How DLP works between the Compliance portal and Exchange admin center (D)
You can create a data loss prevention (DLP) policy in two different admin centers; the above document walks through the differences and similarities.

Data Loss Prevention across endpoints, apps, & services | Microsoft Purview (V)
This video walks you through how to protect sensitive data everywhere you create, view, and access information with one Data Loss Prevention policy in Microsoft Purview.

Data Loss Prevention Policy Tips Reference Guide (D) and Quick Overview (V)
Please check out the above documentation and short video on where we support policy tips.

Create a DLP Policy for Microsoft 365 Online Services (IG)
Please use the above interactive guide to see how to create DLP policies.

Apply Microsoft Purview Endpoint DLP to Devices (IG)
Please use the above interactive guide to see how to create Endpoint DLP policies.

Sites for testing documentation (S)
The above site details locations where you can get sample data.

Scope of DLP Protection for Microsoft Teams (D)
The above documentation walks through how DLP protection is applied differently to Teams entities.

Manage DLP alerts in the Microsoft Purview compliance portal (LP)
The above learning path walks you through managing DLP alerts.

Endpoint activities you can monitor and best practices (LP)
The above learning path walks you through Endpoint DLP activities and best practices.

Troubleshoot and Manage Microsoft Purview Data Loss Prevention for your Endpoint Devices (B)
The above blog is a quick guide to troubleshooting Endpoint DLP.

Microsoft Purview DLP Interactive Guides (IG)
Please visit the above home page to see the latest interactive guides walking you through DLP.
Learn how to investigate Microsoft Purview Data Loss Prevention alerts in Microsoft 365 Defender (B)
This blog is a step-by-step guided walkthrough of the Microsoft 365 Defender analyst experience for Microsoft Purview Data Loss Prevention (DLP) incident management.

Intermediate Knowledge Check

Advanced Training

Microsoft Defender for Cloud Apps and Data Loss Prevention (D)
Please check out the documentation above detailing how the integration with Microsoft Defender for Cloud Apps further enhances your data loss prevention plan.

Power BI: Learn about centralized data loss prevention policies (V)
This video highlights DLP capabilities with Power BI.

Take a unified and comprehensive approach to prevent data exfiltration with Microsoft (V)
This video shows how we can help you prevent unauthorized sharing, use, and transfer of sensitive information across your applications, services, endpoints, and on-premises file shares, all from a single place.

Onboard macOS devices into Microsoft 365 (D), capability announcement (B), and additional screengrabs (B)
Please use the documentation above to deploy macOS devices into Endpoint DLP, and check out the blogs for a few screengrabs of the user experience.

Troubleshooting Guides (D)
Please check out the below documentation to find guides on common issues:
- Resolve issues that affect DLP policy tips
- Changes to a data loss prevention policy don't take effect in Outlook 2013 in Microsoft 365
- DLP policy tips in Security and Compliance Center don't work in OWA/Outlook
- How to troubleshoot data loss prevention policy tips in Exchange Online Protection in Microsoft 365

Securing data in an AI-first world with Microsoft Purview (B)
The above blog details some new updates on AI with Microsoft Purview.

Common questions on Microsoft Purview Data Loss Prevention for endpoints (B)
This guide covers the top-of-mind FAQs on Microsoft Purview Data Loss Prevention for endpoints (referred to as Endpoint DLP in the blog).
Guidance for investigating Microsoft Purview Data Loss Prevention incidents (B)
This blog provides guidance for choosing the investigation experience best suited to your organization when using Microsoft Purview Data Loss Prevention.

Data Loss Prevention: From on-premises to cloud (P)
This whitepaper focuses on why you should move to cloud-native data loss prevention.

The Microsoft Purview DLP Migration Assistant for Symantec (IG)
Follow the above interactive guide for guidance on migrating from Symantec to Microsoft Purview DLP.

Migrating from Windows Information Protection to Microsoft Purview (B)
The above blog gives guidance on how to migrate from WIP to the Microsoft Purview stack.

Insider Risk in Conditional Access | Microsoft Entra + Microsoft Purview Adaptive Protection (V)
The above video goes through how to protect your organization from insider threats with Microsoft Entra's Conditional Access and Adaptive Protection in Microsoft Purview. Please check out this link for a blog with more details. (B)

Protect sensitive data throughout its Copilot journey (B)
The above blog details how the native integration enables organizations to leverage the power of GenAI when working with sensitive data, as Copilot can understand and honor controls such as encryption and provide comprehensive visibility into usage.

Protect at the speed and scale of AI with Copilot for Security in Microsoft Purview (B)
The above blog details the embedded experiences of Copilot for Security in Microsoft Purview (Communication Compliance, Data Loss Prevention, Insider Risk Management, and eDiscovery).

Strengthen protection to mitigate data overexposure in GenAI tools with data classification/labeling (B)
The blog above goes into detail on OCR, its cost, and how it fits into the AI realm with Microsoft Purview Information Protection and Data Loss Prevention.
Microsoft Purview Exact Data Match (EDM) support for multi-token corroborative evidence (B)
The above blog goes into the new feature that improves the accuracy and effectiveness of EDM detection.

Advanced Knowledge Check

Once you've finished the training and the knowledge checks, please go to our attestation portal to generate your certificate; you'll see it in your inbox within 3-5 business days (Coming Soon). We hope you enjoy this training!

Sensitivity Auto-labelling via Document Property
Why is this needed?

Sensitivity labels are generally relevant within an organisation only. If a file is labelled within one environment and then moved to another environment, sensitivity label content markings may be visible, but by default, the applied sensitivity label will not be understood. This can lead to scenarios where information that has been generated externally is not adequately protected.

My favourite analogy for these scenarios is to consider the parallels between receiving sensitive information and unpacking groceries. When unpacking groceries, you might sit your grocery bag on a counter or on the floor next to the pantry. You'll likely then unpack each item, take a look at it and then decide where to place it. Without looking at an item to determine its correct location, you might place it in the wrong location. Porridge might be safe from the kids on the bottom shelf, but if you place items that need to be protected, such as chocolate, on the bottom shelf, they're not likely to last very long. So, I affectionately refer to information that hasn't been evaluated as 'porridge', as until it has been checked, it will end up on the bottom shelf of the pantry where it is quite accessible.

Label-based security controls, such as Data Loss Prevention (DLP) policies using conditions of 'content contains sensitivity label', will not apply to these items. To ensure the security of any contained sensitive information, we should look for potential clues to its sensitivity and then utilize these clues to ensure that the contained information is adequately protected. We take a closer look at the 'porridge', determine whether it's an item that needs protection and, if so, move it to a higher shelf in the pantry so that it's out of reach of the kids.

Effective use of Purview revolves around 'know your data' strategies. We should be using as many methods as possible to try to determine the sensitivity of items.
This can include the use of Sensitive Information Types (SITs) containing keyword or pattern-based classifiers, trainable classifiers, Exact Data Match, document fingerprinting, etc. Matching items via SITs present in an item's content can be problematic due to false positives. Keywords like 'Sensitive' or 'Protected' may be mentioned out of context, such as when referring to a classification or an environment. When classifications have been stamped via a property, it allows us to match via context rather than content. We don't need to guess at an item's sensitivity if another system has already established what the item's classification is. These methods are much less prone to false positives.

Why isn't everyone doing this?

Document properties are often not considered in Purview deployments. SharePoint metadata management seems to be a dying artform, and most compliance or security resources completing Purview configurations don't have this skill set. There's also a lack of understanding of the relevance of checking for item properties. Microsoft hasn't helped, as the documentation in this space is somewhat lacking and needs to be unpicked via some aligning DLP guidance (Create a DLP policy to protect documents with FCI or other properties). Many of these configurations will also be tied to regional requirements. Document properties being used by systems where I'm from, in Australia, will likely be very different to those used in other parts of the world. In the following sections, we'll take a look at applicable use cases and walk through how to enable these configurations.

Scenarios for use

Labelling via document property isn't for everyone. If your organisation is new to classification or you don't have external partners that you collaborate with at higher sensitivity levels, then this likely isn't for you. For those that collaborate heavily and have a shared classification framework, as is often seen across government, this is a must!
This approach will also be highly relevant to multi-tenant organisations or conglomerates where information is regularly shared between environments. The following scenarios are examples of where this configuration will be relevant:

1. Migrating from 3rd party classification tools

If an item has been previously stamped by a 3rd party classification tool, then evaluating its applied document properties will provide a clear picture of its security classification. These properties can then be used in service-based auto-labelling policies to effectively transition items from 3rd party tools to Microsoft Purview sensitivity labels. As labels are applied to items, they will be brought into scope of label-based controls.

2. Detecting data spill

Data spill is a term used to describe situations where information of a higher than permitted security classification lands in an environment. Consider a Microsoft 365 tenant that is approved for the storage of Official information, but Top Secret files are uploaded to it. Document properties that align with higher than permitted classifications provide us with an almost guaranteed method of identifying spilled items. Pairing this document property with an auto-labelling policy allows for the application of encryption to lock unauthorized users out of the items. Tools like Content Explorer and eDiscovery can then be used to easily perform cleanup activities.

If using document properties and auto-labelling for this purpose, keep in mind that you'll need to create sensitivity labels for higher than permitted classifications in order to catch spilled items. These labels won't impact usability as you won't publish them to users. You will, however, need to publish them to a single user or break glass account so that they're not ignored by auto-labelling.

3. Blocking access by AI tools

If your organization is concerned about items with certain properties applied being accessed by generative AI tools, such as Copilot, you could use auto-labelling to apply a sensitivity label that restricts EXTRACT permissions. You can find some information on this at Microsoft 365 Copilot data protection architecture | Microsoft Learn. This should be relevant for spilled data, but might also be useful in situations where there are certain records that have been marked via properties and which should not be Copilot accessible.

4. External Microsoft Purview configurations

Sensitivity labels are relevant internally only. A label, in its raw form, is essentially a piece of metadata with an ID (or GUID) that we stamp on pieces of information. These GUIDs are understood by your tenant only. If an item marked with a GUID shows up in another Microsoft 365 tenant, the GUID won't correspond with any of that tenant's labels or label-based controls. The art in Microsoft Purview lies in interpreting the sensitivity of items based on content markings and other identifiers, so that data security can be maintained. Document properties applied by Purview, such as ClassificationContentMarkingHeaderText, are not tied to a specific tenant, which makes them portable. We can use these properties to help maintain classifications as items move between environments.

5. Utilizing metadata applied by Records Management solutions

Some EDRMS, Records or Content Management solutions will apply properties to items. If an item has been previously managed and then stamped with properties, potentially including a security classification, via one of these systems, we could use this information to inform sensitivity label application.

6. 3rd party classification tools used externally

Even if your organisation hasn't been using 3rd party classification tools, you should consider that partner organisations, such as other Government departments, might be.
Evaluating the properties applied by external organisations to items that you receive will allow you to extend protections to these items. If classification tools like Janus or Titus are used in your geography/industry, then you may want to consider checking for their properties.

Regarding the use of auto-classification tools

Some organisations, particularly those in Government, will have organisational policies that prevent the use of automatic classification capabilities. These policies are intended to ensure that each item is assessed by an actual person for risk of disclosure rather than via an automated service that could be prone to error. However, when auto-labelling is used to interpret and honour existing classifications, we are lowering rather than raising the risk profile. If the item's existing classification (applied via property) is ignored, the item will be treated as porridge and is likely to be at risk. If auto-labelling is able to identify a high-risk item and apply the relevant label, it will then be within scope of Purview's data security controls, including label-based DLP, groups and sites data out of place alerting, and potentially even item encryption. The outcome is that, through the use of auto-labelling, we are able to significantly reduce the risk of inappropriate or unintended disclosure.

Configuration Process

Setting up document property-based auto-labelling is fairly straightforward. We need to set up a managed property and then utilize it in an auto-labelling policy. Below, I've split this process into six steps:

Step 1 – Prepare your files

In order to make use of document properties, an item with the properties applied will first need to be indexed by SharePoint. SharePoint will record the properties as 'crawled properties', which we'll then need to convert into 'managed properties' to make them useful. If you already have items with the relevant properties stored in SharePoint, then they are likely already indexed.
If not, you'll need to upload or create an item or items with the properties applied. For testing, you'll want to create a file for each property/value combination so that you can confirm that your auto-labelling policies are all working correctly. This could require quite a few files depending on the number of properties you're looking for. To kick off your crawled property generation though, you could create or upload a single file with the correct properties applied. For example:

In the above, I've created properties for ClassificationContentMarkingHeaderText and ClassificationContentMarkingFooterText, which you'll often see applied by Purview when an item has a sensitivity label content marking applied to it. I've also included properties to help identify items classified via JanusSeal, Titus and Objective.

Step 2 – Index the files

After creating or uploading your file, we then need SharePoint to index it. This should happen fairly quickly depending on the size of your environment; I'd expect to wait somewhere between 10 minutes and 24 hours. If you're not in a hurry, then I'd recommend just checking back the next day. You'll know this has been completed when you head into SharePoint Admin > Search > Managed Search Schema > Crawled Properties and can find your newly indexed properties.

Step 3 – Configure managed properties

Next, the properties need to be configured as managed properties. To do this, go to SharePoint Admin > More features > Search > Managed Search Schema > Managed Properties. Create a new managed property and give it a name. Note that there are some character restrictions in naming, but you should be able to get it close to your document property name. Set the property's type to text, and select queryable and retrievable. Under 'mappings to crawled properties', choose add mapping, then search for and select the property indexed from the file property.
Note that the crawled property will have the same name as your document property, so there's no need to browse through all of them. Repeat this so that you have a managed property for each document property that you want to look for.

Step 4 – Configure auto-labelling policies

Next up, create some auto-labelling policies. You'll need one for each label that you want to apply, not one per property, as you can check multiple properties within the one auto-labelling policy.

- From within Purview, head to Information Protection > Policies > Auto-labelling policies.
- Create a new policy using the custom policy template.
- Give your policy an appropriate name (e.g. Label PROTECTED via property).
- Select the label that you want to apply (e.g. PROTECTED).
- Select SharePoint-based services (SharePoint and OneDrive).
- Name your auto-labelling rules appropriately (e.g. SPO – Contains PROTECTED property).
- Enter your conditions as a long string with property and value separated by a colon and multiple entries separated by commas. For example:

ClassificationContentMarkingHeaderText:PROTECTED,ClassificationContentMarkingFooterText:PROTECTED,Objective-Classification:PROTECTED,PMDisplay:PROTECTED,TitusSEC:PROTECTED

Note that the properties you are referencing are the managed properties rather than the document properties. This will be relevant if your managed property ended up with a different name due to character restrictions. After pasting your string into the UI, the resultant rule should look something like this:

When done, you can either leave your policy in simulation mode or save it and then turn it on from the auto-labelling policies screen. Just be aware of any potential impacts, such as accidentally locking users out by automatically deploying a label with an encryption configuration. You can reduce any potential impact by targeting your auto-labelling policy at a site or set of sites initially and then expanding its scope after testing.
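Because that condition string is one long comma-separated run of ManagedProperty:Value pairs, it's easy to mistype when you have many properties. As a purely illustrative aid (not a Microsoft tool), a few lines of Python can assemble and sanity-check the string from a mapping; the property names below are the examples used in this article.

```python
# Assemble the comma-separated "ManagedProperty:Value" condition string used
# in the auto-labelling rule UI. Illustrative helper only, not a Microsoft tool.
def build_condition_string(property_values: dict) -> str:
    for prop, value in property_values.items():
        # Colons and commas act as delimiters in this format, so they must not
        # appear inside property names or values.
        if ":" in prop + value or "," in prop + value:
            raise ValueError(f"delimiter character in {prop!r}:{value!r}")
    return ",".join(f"{prop}:{value}" for prop, value in property_values.items())

# Managed properties from this article that indicate a PROTECTED classification.
protected_markers = {
    "ClassificationContentMarkingHeaderText": "PROTECTED",
    "ClassificationContentMarkingFooterText": "PROTECTED",
    "Objective-Classification": "PROTECTED",
    "PMDisplay": "PROTECTED",
    "TitusSEC": "PROTECTED",
}

print(build_condition_string(protected_markers))
```

Since you need one auto-labelling policy per label, you can rerun the helper with a different mapping for each classification level and paste each result into the corresponding policy.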
Step 5 – Test

Testing your configuration is as easy as uploading or creating a set of files with the relevant document properties in place. Once uploaded, you'll need to give SharePoint some time to index the items and then the auto-labelling policy some time to apply sensitivity labels to them. To confirm label application, you can head to the document library where your test files are located and enable the sensitivity column. Files that have been auto-labelled will have their label listed. You could also check for auto-labelling activity in Purview via Activity explorer.

Step 6 – Expand into DLP

If you've spent the time setting up managed properties, then you really should consider capitalizing on them in your DLP configurations. DLP policy conditions can be configured in the same manner as the auto-labelling conditions in Step 4 above. The document property also gives us an anchor for DLP conditions that is independent of an item's sensitivity label. You may wish to consider the following:

- DLP policies blocking external sharing of items with certain properties applied. This might be handy for situations where auto-labelling hasn't yet labelled an item.
- DLP policies blocking the external sharing of items where the applied sensitivity label doesn't match the applied document property. This could provide an indication of a risky label downgrade.

You could extend such policies into Insider Risk Management (IRM) by creating IRM policies that are aligned with the above DLP policies. This will allow document properties to be considered in user risk calculation, which can inform controls like Adaptive Protection. Here's an example of a policy from the DLP rule summary screen that shows conditions of 'item contains a label or one of our configured document properties':
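As a side note, if you ever want to double-check which custom document properties a test file actually carries before SharePoint indexes it, the OOXML container format is easy to inspect: a .docx stores custom document properties in its docProps/custom.xml part. The sketch below uses only the Python standard library and builds a minimal stand-in file to demonstrate the parsing; a real .docx would contain the full package, and only string-typed (vt:lpwstr) properties are read here.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# OOXML namespaces for the custom document properties part (docProps/custom.xml).
CUSTOM_NS = "http://schemas.openxmlformats.org/officeDocument/2006/custom-properties"
VT_NS = "http://schemas.openxmlformats.org/officeDocument/2006/docPropsVTypes"

def read_custom_properties(docx_bytes: bytes) -> dict:
    """Return {name: value} for string-typed custom properties in a .docx."""
    props = {}
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as zf:
        if "docProps/custom.xml" not in zf.namelist():
            return props  # the file carries no custom properties
        root = ET.fromstring(zf.read("docProps/custom.xml"))
        for prop in root.findall(f"{{{CUSTOM_NS}}}property"):
            value = prop.find(f"{{{VT_NS}}}lpwstr")  # string-valued properties
            if prop.get("name") and value is not None:
                props[prop.get("name")] = value.text
    return props

# Minimal stand-in for a .docx: a zip containing only the custom-properties part.
custom_xml = (
    f'<Properties xmlns="{CUSTOM_NS}" xmlns:vt="{VT_NS}">'
    f'<property fmtid="{{D5CDD505-2E9C-101B-9397-08002B2CF9AE}}" pid="2" '
    f'name="ClassificationContentMarkingHeaderText">'
    f"<vt:lpwstr>PROTECTED</vt:lpwstr></property></Properties>"
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("docProps/custom.xml", custom_xml)

print(read_custom_properties(buf.getvalue()))
# {'ClassificationContentMarkingHeaderText': 'PROTECTED'}
```

Pointing `read_custom_properties` at real files downloaded from SharePoint is a quick way to verify that your test set covers every property/value combination before you wait on indexing and auto-labelling.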
If you have any questions or feedback, please feel free to reach out.

From Traditional Security to AI-Driven Cyber Resilience: Microsoft's Approach to Securing AI
By Chirag Mehta, Vice President and Principal Analyst, Constellation Research

AI is changing the way organizations work. It helps teams write code, detect fraud, automate workflows, and make complex decisions faster than ever before. But as AI adoption increases, so do the risks, many of which traditional security tools were not designed to address. Cybersecurity leaders are starting to see that AI security is not just another layer of defense. It is becoming essential to building trust, ensuring resilience, and maintaining business continuity.

Earlier this year, after many conversations with CISOs and CIOs, I saw a clear need to bring more attention to this topic. That led to my report on AI Security, which explores how AI-specific vulnerabilities differ from traditional cybersecurity risks and why securing AI systems calls for a more intentional approach.

Why AI Changes the Security Landscape

AI systems do not behave like traditional software. They learn from data instead of following pre-defined logic. This makes them powerful, but also vulnerable. For example, an AI model can:

- Misinterpret input in ways that humans cannot easily detect
- Be tricked into producing harmful or unintended responses through crafted prompts
- Leak sensitive training data in its outputs
- Take actions that go against business policies or legal requirements

These are not coding flaws. They are risks that originate from how AI systems process information and act on it. These risks become more serious with agentic AI. These systems act on behalf of humans, interact with other software, and sometimes with other AI agents. They can make decisions, initiate actions, and change configurations. If one is compromised, the consequences can spread quickly.

A key challenge is that many organizations still rely on traditional defenses to secure AI systems. While those tools remain necessary, they are no longer enough.
AI introduces new risks across every layer of the stack, including data, networks, endpoints, applications, and cloud infrastructure. As I explained in my report, the security focus must shift from defending the perimeter to governing the behavior of AI systems, the data they use, and the decisions they make.

The Shift Toward AI-Aware Cyber Resilience

Cyber resilience is the ability to withstand, adapt to, and recover from attacks. Meeting that standard today requires understanding how AI is developed, deployed, and used by employees, customers, and partners. To get there, organizations must answer questions such as:

Where is our sensitive data going, and is it being used safely to train models?
What non-human identities, such as AI agents, are accessing systems and data?
Can we detect when an AI system is being misused or manipulated?
Are we in compliance with new AI regulations and data usage rules?

Let’s look at how Microsoft has evolved its mature security portfolio to help protect AI workloads and support this shift toward resilience.

Microsoft’s Approach to Secure AI

Microsoft has taken a holistic and integrated approach to AI security. Rather than creating entirely new tools, it is extending existing products already used by millions to support AI workloads. These features span identity, data, endpoint, and cloud protection.

1. Microsoft Defender: Treating AI Workloads as Endpoints

AI models and applications are emerging as a new class of infrastructure that needs visibility and protection. Defender for Cloud secures AI workloads across Azure and other cloud platforms such as AWS and GCP by monitoring model deployments and detecting vulnerabilities.
Defender for Cloud Apps extends protection to AI-enabled apps running at the edge. Defender for APIs supports AI systems that use APIs, which are often exposed to risks such as prompt injection or model manipulation.

Additionally, Microsoft has launched tools to support AI red-teaming, content safety, and continuous evaluation capabilities to ensure agents operate safely and as intended. This allows teams to identify and remediate risks such as jailbreaks or prompt injection before models are deployed.

2. Microsoft Entra: Managing Non-Human Identities

As organizations roll out more AI agents and copilots, non-human identities are becoming more common. These digital identities need strong oversight.

Microsoft Entra helps create and manage identities for AI agents
Conditional Access ensures AI agents only access the resources they need, based on real-time signals and context
Privileged Identity Management manages, controls, and monitors AI agents’ access to important resources within an organization

3. Microsoft Purview: Securing Data Used in AI

Purview plays an important role in securing both the data that powers AI apps and agents, and the data they generate through interactions.

Data discovery and classification helps label sensitive information and track its use
Data Loss Prevention policies help prevent leaks or misuse of data in tools such as Copilot or agents built in Azure AI Foundry
Insider Risk Management alerts security teams when employees feed sensitive data into AI systems without approval

Purview also helps organizations meet transparency and compliance requirements, extending the same policies they already use today to AI workloads, without requiring separate configurations, as regulations like the EU AI Act take effect.

Securing AI Is Now a Strategic Priority

AI is evolving quickly, and the risks are evolving with it.
Traditional tools still matter, but they were not built for systems that learn, adapt, and act independently. They also weren’t designed for the pace and development approaches AI requires, where securing from the first line of code is critical to staying protected at scale.

Microsoft is adapting its security portfolio to meet this shift. By strengthening identity, data, and endpoint protections, it is helping customers build a more resilient foundation. Whether you are launching your first AI-powered tool or managing dozens of agents across your organization, the priority is clear: secure your AI systems before they become a point of weakness. You can read more in my AI Security report and learn how Microsoft is supporting these efforts across its security portfolio.

Hacking Made Easy, Patching Made Optional: A Modern Cyber Tragedy
In today’s cyber threat landscape, the tools and techniques required to compromise enterprise environments are no longer confined to highly skilled adversaries or state-sponsored actors. While artificial intelligence is increasingly being used to enhance the sophistication of attacks, the majority of breaches still rely on simple, publicly accessible tools and well-established social engineering tactics. Another major issue is the persistent failure of enterprises to patch common vulnerabilities in a timely manner, despite the availability of fixes and public warnings. This negligence continues to be a key enabler of large-scale breaches, as demonstrated in several recent incidents.

The Rise of AI-Enhanced Attacks

Attackers are now leveraging AI to increase the credibility and effectiveness of their campaigns. One notable example is the use of deepfake technology, synthetic media generated using AI, to impersonate individuals in video or voice calls. North Korean threat actors, for instance, have been observed using deepfake videos and AI-generated personas to conduct fraudulent job interviews with HR departments at Western technology companies. These scams are designed to gain insider access to corporate systems or to exfiltrate sensitive intellectual property under the guise of legitimate employment.

Social Engineering: Still the Most Effective Entry Point

Yet many recent breaches have begun with classic social engineering techniques. In the cases of Coinbase and Marks & Spencer, attackers impersonated employees through phishing or fraudulent communications. Once they had gathered sufficient personal information, they contacted support desks or mobile carriers, convincingly posing as the victims to request password resets or SIM swaps. This impersonation enabled attackers to bypass authentication controls and gain initial access to sensitive systems, which they then leveraged to escalate privileges and move laterally within the network.
Threat groups such as Scattered Spider have demonstrated mastery of these techniques, often combining phishing with SIM swap attacks and MFA bypass to infiltrate telecom and cloud infrastructure. Similarly, Solt Thypoon (formerly DEV-0343), linked to North Korean operations, has used AI-generated personas and deepfake content to conduct fraudulent job interviews, gaining insider access under the guise of legitimate employment. These examples underscore the evolving sophistication of social engineering and the need for robust identity verification protocols.

Built for Defense, Used for Breach

Despite the emergence of AI-driven threats, many of the most successful attacks continue to rely on simple, freely available tools that require minimal technical expertise. These tools are widely used by security professionals for legitimate purposes such as penetration testing, red teaming, and vulnerability assessments. However, they are also routinely abused by attackers to compromise systems. Case studies abound for tools like Nmap, Metasploit, Mimikatz, BloodHound, Cobalt Strike, and others. The dual-use nature of these tools underscores the importance of not only detecting their presence but also understanding the context in which they are being used.

From CVE to Compromise

While social engineering remains a common entry point, many breaches are ultimately enabled by known vulnerabilities that remain unpatched for extended periods. For example, the MOVEit Transfer vulnerability (CVE-2023-34362) was exploited by the Cl0p ransomware group to compromise hundreds of organizations, despite a patch being available. Similarly, the OpenMetadata vulnerabilities (CVE-2024-28255, CVE-2024-28847) allowed attackers to gain access to Kubernetes workloads and leverage them for cryptomining activity days after a fix had been issued.
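The core of the patching problem is mechanical: comparing installed versions against the first version that contains a fix. A minimal sketch of that check follows; the product names, version numbers, and inventory structure are hypothetical and for illustration only (real programs pull this data from a vulnerability management service).

```python
# Hypothetical advisory data: product -> first version containing the fix.
KNOWN_FIXES = {
    "moveit-transfer": (2023, 0, 3),
    "openmetadata": (1, 3, 1),
}

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def unpatched(inventory: dict[str, str]) -> list[str]:
    """Return products whose installed version predates the fixed version."""
    return [
        product
        for product, installed in inventory.items()
        if product in KNOWN_FIXES and parse_version(installed) < KNOWN_FIXES[product]
    ]

# Example inventory: one host still running a pre-fix build.
print(unpatched({"moveit-transfer": "2023.0.1", "openmetadata": "1.3.1"}))
# ['moveit-transfer']
```

Tuple comparison gives correct ordering per component, which is why `"2023.0.1"` is flagged while `"1.3.1"` (exactly the fixed version) is not. Products like Microsoft Defender Vulnerability Management perform this matching at scale and add exploitability-based prioritization on top.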
Advanced persistent threat groups such as APT29 (also known as Cozy Bear) have historically exploited unpatched systems to maintain long-term access and conduct stealthy operations. Their use of credential harvesting tools like Mimikatz and lateral movement frameworks such as Cobalt Strike highlights the critical importance of timely patch management, not just for ransomware defense, but also for countering nation-state actors.

Recommendations

To reduce the risk of enterprise breaches stemming from tool misuse, social engineering, and unpatched vulnerabilities, organizations should adopt the following practices:

1. Patch Promptly and Systematically

Ensure that software updates and security patches are applied in a timely and consistent manner. This involves automating patch management processes to reduce human error and delay, while prioritizing vulnerabilities based on their exploitability and exposure. Microsoft Intune can be used to enforce update policies across devices, while Windows Autopatch simplifies the deployment of updates for Windows and Microsoft 365 applications. To identify and rank vulnerabilities, Microsoft Defender Vulnerability Management offers risk-based insights that help focus remediation efforts where they matter most.

2. Implement Multi-Factor Authentication (MFA)

To mitigate credential-based attacks, MFA should be enforced across all user accounts. Conditional access policies should be configured to adapt authentication requirements based on contextual risk factors such as user behavior, device health, and location. Microsoft Entra Conditional Access allows for dynamic policy enforcement, while Microsoft Entra ID Protection identifies and responds to risky sign-ins. Organizations should also adopt phishing-resistant MFA methods, including FIDO2 security keys and certificate-based authentication, to further reduce exposure.

3. Identity Protection

Access Reviews and Least Privilege Enforcement

Conducting regular access reviews ensures that users retain only the permissions necessary for their roles. Applying least privilege principles and adopting Microsoft Zero Trust Architecture limits the potential for lateral movement in the event of a compromise. Microsoft Entra Access Reviews automates these processes, while Privileged Identity Management (PIM) provides just-in-time access and approval workflows for elevated roles.

Just-in-Time Access and Risk-Based Controls

Standing privileges should be minimized to reduce the attack surface. Risk-based conditional access policies can block high-risk sign-ins and enforce additional verification steps. Microsoft Entra ID Protection identifies risky behaviors and applies automated controls, while Conditional Access ensures access decisions are based on real-time risk assessments to block or challenge high-risk authentication attempts.

Password Hygiene and Secure Authentication

Promoting strong password practices and transitioning to passwordless authentication enhances security and user experience. Microsoft Authenticator supports multi-factor and passwordless sign-ins, while Windows Hello for Business enables biometric authentication using secure hardware-backed credentials.

4. Deploy SIEM and XDR for Detection and Response

A robust detection and response capability is vital for identifying and mitigating threats across endpoints, identities, and cloud environments. Microsoft Sentinel serves as a cloud-native SIEM that aggregates and analyses security data, while Microsoft Defender XDR integrates signals from multiple sources to provide a unified view of threats and automate response actions.

5. Map and Harden Attack Paths

Organizations should regularly assess their environments for attack paths such as privilege escalation and lateral movement.
Tools like Microsoft Defender for Identity help uncover Lateral Movement Paths, while Microsoft Identity Threat Detection and Response (ITDR) integrates identity signals with threat intelligence to automate response. These capabilities are accessible via the Microsoft Defender portal, which includes an attack path analysis feature for prioritizing multicloud risks.

6. Stay Current with Threat Actor TTPs

Monitor the evolving tactics, techniques, and procedures (TTPs) employed by sophisticated threat actors. Understanding these behaviours enables organizations to anticipate attacks and strengthen defenses proactively. Microsoft Defender Threat Intelligence provides detailed profiles of threat actors and maps their activities to the MITRE ATT&CK framework. Complementing this, Microsoft Sentinel allows security teams to hunt for these TTPs across enterprise telemetry and correlate signals to detect emerging threats.

7. Build Organizational Awareness

Organizations should train staff to identify phishing, impersonation, and deepfake threats. Simulated attacks help improve response readiness and reduce human error. Use Attack Simulation Training in Microsoft Defender for Office 365 to run realistic phishing scenarios and assess user vulnerability. Additionally, educate users about consent phishing, where attackers trick individuals into granting access to malicious apps.

Conclusion

The democratization of offensive security tooling, combined with the persistent failure to patch known vulnerabilities, has significantly lowered the barrier to entry for cyber attackers. Organizations must recognize that the tools used against them are often the same ones available to their own security teams. The key to resilience lies not in avoiding these tools, but in mastering them: using them to simulate attacks, identify weaknesses, and build a proactive defense. Cybersecurity is no longer a matter of if, but when.
The question is: will you detect the attacker before they achieve their objective? Will you be able to stop them before they reach your most sensitive data?

Additional read:

Gartner Predicts 30% of Enterprises Will Consider Identity Verification and Authentication Solutions Unreliable in Isolation Due to AI-Generated Deepfakes by 2026
Cyber security breaches survey 2025 - GOV.UK
Jasper Sleet: North Korean remote IT workers’ evolving tactics to infiltrate organizations | Microsoft Security Blog
MOVEit Transfer vulnerability
Solt Thypoon
Scattered Spider
SIM swaps
Attackers exploiting new critical OpenMetadata vulnerabilities on Kubernetes clusters | Microsoft Security Blog
Microsoft Defender Vulnerability Management - Microsoft Defender Vulnerability Management | Microsoft Learn
Zero Trust Architecture | NIST
tactics, techniques, and procedures (TTP) - Glossary | CSRC
https://learn.microsoft.com/en-us/security/zero-trust/deploy/overview

Deep Dive: DLP Incidents, Alerts & Events - Part 2
Alerts Overview

Like incidents, alerts provide comprehensive information such as severity, status, and category to help users understand and navigate efficiently. In addition to these standard details, the alert view also displays the correlation reason, which is particularly beneficial for security analysts and administrators. The correlation reason explains why an alert is linked to a particular incident or other alerts, helping users trace how different pieces of suspicious activity are connected. By understanding the correlation reason, users can better assess the scope and impact of security issues, streamline investigations, and take more effective remediation actions, ultimately improving the overall security posture of the organisation.

Alert Details

The details page is divided into two sections. The section on the left presents information regarding Impacted Users and Devices (for Endpoints), including details such as the alert story, policy description, matched sensitive information, and its count. It also displays related events with detection time and location (such as Exchange or Endpoint). The right pane lets you view alert status, details, manage the alert, or move it to another incident. It shows evidence (entity details like IP, user, device), policy info, incident details, correlation reasons, plus alert comments, history, and a timeline.

Event Details

Now that we understand incidents and alerts, let's review events. Events offer detailed information about activity. Below is a brief description of each section for reference.

Details

Event Details
Event ID: A unique identifier assigned to each event. It can be used to associate the event with corresponding DLP rule match activity within the Activity Explorer.
Location: Specifies where the activity occurred, such as Exchange, Devices, etc.
Time of Activity: Indicates the exact time at which the event took place.
Impacted Entities:
Hostname: The name of the device where the DLP event occurred.
IP Address: The IP address of the device or client involved in the event.
Application: The app or service used during the event (e.g., Edge, Outlook, Teams).
File Name: The name of the file involved in the DLP violation.
File Path: The full directory path where the file was located or accessed.
File Extension: The file type suffix (e.g., .docx, .pdf) indicating format.
Sha1: A SHA-1 hash value used to uniquely identify the file.
Sha256: A SHA-256 hash value offering stronger file identification.
File Type: The type of the file (e.g., document, spreadsheet, image).
File Size: The size of the file in bytes or megabytes.
RMS Encrypted: Indicates whether the file is protected using Rights Management Services.
MDATP Device ID: A unique identifier for the device from Microsoft Defender for Endpoint.
Client Country: The geographic location of the client device based on IP.
Client IP Location: More granular geolocation data derived from the client IP address.
Target Domain: The destination domain involved in the data transfer (e.g., external email or cloud service).
Evidence File: A copy or reference to the file that triggered the DLP event, used for investigation.

Hunt & Monitor:
Hunt All Sensitive Content
Activity by Device: Displays all sensitive data interactions per device, useful for device-level threat hunting.
Activity by User: Shows user-specific actions involving sensitive content, helping identify insider risks.
DLP Violations for Last 30 Days: Summarizes all DLP policy violations over the past month for trend analysis and compliance tracking.

User & Role Info: Displays the user detail and the role assigned to the user.

Policy Details:
DLP Policy Matched: The specific DLP policy that was triggered by the event, based on organizational data protection rules.
Rule Matched: The exact rule within the DLP policy that was violated (e.g., sharing sensitive data externally).
Sensitive Info Types Detected: Lists predefined or custom sensitive information types (e.g., credit card numbers, health records) found in the content. You can click on the SIT to view more information such as count by confidence levels, matched content, and surrounding text.
Trainable Classifiers Detected: Lists AI-based classifiers that identify sensitive content based on context and user behavior (e.g., source code, resumes).
Violating Action: The user action that caused the violation (e.g., copy to USB, email to external domain, print, upload to cloud).
Enforcement Mode: Indicates whether the policy is in audit, block/warn, override/warn, or bypass mode, defining how the system responds to violations.
Override Justification Text: If a user overrides a policy block, this field captures their justification or reason.
Sensitive Information Type: A more detailed view of the specific sensitive data detected (e.g., “India Aadhaar Number”, “EU Passport Number”).

Source

This tab enables users to preview content intended for exfiltration, including files or emails. The Actions tab allows authorised users to download the specified content. Note: Previewing or downloading content requires that the administrator or analyst possesses the appropriate role permissions.

Sensitive Info Types

Lists predefined or custom sensitive information types found in the content with matched content and surrounding text info.

Trainable Classifiers

Lists Trainable Classifiers and their details.

Metadata

Provides the detailed metadata in simple txt format.

Hope this helps!

Deep Dive: DLP Incidents, Alerts & Events - Part 1
Correlation Analytics

Prior to delving into the specifics, it is essential to understand that Microsoft Defender employs correlation analytics to aggregate related alerts and automated investigations from various products into a single incident. This comprehensive perspective enables security analysts to gain a clearer understanding of broader attack scenarios, facilitating more effective responses to complex threats across the organisation.

When Microsoft Defender identifies suspicious activity, it generates alerts. These alerts are then assessed to determine whether they should:

Create a new incident, if the alert is distinct within a defined timeframe.
Be appended to an existing incident, if the alert shares attributes with those already grouped.

The correlation engine evaluates several criteria using proprietary algorithms, including:

Entities (such as users, devices)
Artifacts (including files, processes, email senders)
Timeframes
Event sequences (for example, a phishing email followed by a malicious click)

Microsoft Defender's capabilities extend beyond initial incident creation. It continually monitors for relationships among incidents and may automatically merge them in cases where:

Alerts share attacker IP addresses, exhibit similar attack patterns, or affect related endpoints
The same user or device triggers multiple alerts within the established correlation window

As a result, it is possible for alerts originating from different products or sources to be merged into a single incident or alert. For further information, please refer to the following article: Alert correlation and incident merging in the Microsoft Defender portal - Microsoft Defender XDR | Microsoft Learn

Naming

Microsoft Defender XDR automatically names incidents, alerts, and events using attributes like activity, channel, users, detection sources, and category.
This dynamic and context-aware naming convention plays a crucial role in security operations, as it enables analysts to quickly understand the nature and scope of an issue without needing to delve into each detail immediately. For instance, the incident name may indicate whether the event involves multiple endpoints, several users, or has been detected by more than one security product. Such automated, attribute-based naming supports rapid triage and better situational awareness, especially when dealing with a high volume of alerts across complex environments.

Examples:

Incident – Exfiltration on one endpoint reported by multiple sources
Alert – DLP policy (EXO PAN Policy) matched for email with subject (SS)
Event – Sensitive info in '1-MB-Test.docx' copied to cloud

Incident Overview

With a foundational understanding of correlation analytics and aggregation, let's now turn our attention to an Exfiltration Incident. The primary interface pane presents essential information designed to facilitate efficient navigation. The incident can be expanded to display its associated alerts. Although incidents are correlated, various filters, such as Entities and Policy, may be utilized to further refine the data within each incident. Additional filtering options are accessible, as shown in the reference image on the right. The 'Copy List link' function allows you to generate and distribute a direct link to the incident list, ensuring that any active filters are preserved within the URL.

The following columns are displayed in the Primary Pane:

Incident ID – A unique identifier assigned to each incident for tracking and reference.
Tags – Custom labels added to incidents for categorization, filtering, or workflow management.
Severity – Indicates the potential impact of the incident (e.g., Low, Medium, High).
Investigation State – Shows the current progress of the investigation (e.g., In Progress, Completed).
Categories – Classifies the incident type (e.g., Exfiltration, Initial Access).
Impacted Assets – Lists affected entities such as devices, users, and applications.
Active Alerts – Displays the active vs. total number of alerts contributing to the incident.
Detection Sources & Product Names – Identifies which Defender product or sensor generated the alerts (e.g., Defender XDR, Microsoft Data Loss Prevention).
Time (Last Update, First & Last Activity) – Indicates the most recent update time and the initial and final activity times related to the incident.
Policy Name – Refers to the policy that triggered the alert or incident.
Policy Rule Name – The specific rule within the policy that was violated or matched.
Data Stream – Indicates the type of data involved (e.g., Exchange, Endpoint).
Data Sensitivity – Indicates the maximum sensitivity level identified among all associated files or devices. This enables analysts to efficiently determine if sensitive or regulated information (such as PII, financial records, or credentials) may be exposed to risk.
Status – Reflects the current state of the incident (e.g., Active, Resolved, In Progress).
Assigned to – Shows the analyst or team responsible for investigating and resolving the incident.
Classification – Analyst-defined label indicating whether the incident is a True Positive, a False Positive, or Informational, expected activity.
Determination – Specifies the nature of the threat (e.g., Malicious, Suspicious, Clean).
Device Groups – Groups of devices impacted or involved in the incident, often used for scoping and filtering.
Workspaces – Logical containers or environments where incidents are managed, especially in multi-tenant or MSSP setups.

Incident Details

The details page lets you view and play back the full attack story, from start to finish. It features an incident graph that maps how the attack unfolded, connecting alerts, entities (such as users and devices), assets, and a timeline of events.
The graph offers a holistic overview of the incident, showing its origin, spread, and affected entities. You can interactively explore details by clicking on nodes for more information and taking remediation steps like isolating devices or deleting files. Direct investigation into specific alerts and contextual actions are available without leaving the graph view. You can also view details about alerts, assets, investigations, evidence and responses, summaries, and similar incidents, and generate incident reports or export them as PDFs. The platform enables you to manage and merge incidents, utilise Copilot, and investigate data security threats using AI. Additionally, it displays device risk and exposure levels alongside app details and associated risks, and provides the count of active alerts categorised by severity.
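The entity-and-timeframe correlation described earlier in this post can be sketched in a few lines. This is a deliberately simplified illustration of the idea, not the actual Defender engine, which uses proprietary algorithms; the alert fields and 24-hour window below are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical correlation window: alerts sharing an entity within this
# span of the incident's latest activity are grouped together.
WINDOW = timedelta(hours=24)

def correlate(alerts):
    """Group alerts that share an entity and fall within the time window."""
    incidents = []  # each: {"entities": set, "last_seen": datetime, "alerts": list}
    for alert in sorted(alerts, key=lambda a: a["time"]):
        for incident in incidents:
            if (alert["entities"] & incident["entities"]
                    and alert["time"] - incident["last_seen"] <= WINDOW):
                # Shared entity within the window: append to this incident.
                incident["entities"] |= alert["entities"]
                incident["last_seen"] = alert["time"]
                incident["alerts"].append(alert["id"])
                break
        else:
            # Distinct alert: create a new incident.
            incidents.append({
                "entities": set(alert["entities"]),
                "last_seen": alert["time"],
                "alerts": [alert["id"]],
            })
    return incidents

alerts = [
    {"id": "A1", "entities": {"user:alice", "device:pc-01"}, "time": datetime(2025, 1, 1, 9)},
    {"id": "A2", "entities": {"device:pc-01"}, "time": datetime(2025, 1, 1, 11)},
    {"id": "A3", "entities": {"user:bob"}, "time": datetime(2025, 1, 1, 12)},
]
print([i["alerts"] for i in correlate(alerts)])  # [['A1', 'A2'], ['A3']]
```

A1 and A2 merge because they share `device:pc-01` within the window, while A3 touches no common entity and so opens its own incident — the same reasoning behind the "create a new incident" versus "append to an existing incident" decision described above.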