Security and AI Essentials
Protect your organization with AI-powered, end-to-end security.
Defend Against Threats
Get ahead of threat actors with integrated solutions.
Secure All Your Clouds
Protection from code to runtime.
Secure All Access
Secure access for any identity, anywhere, to any resource.
Protect Your Data
Comprehensive data security across your entire estate.
Recent Blogs
*Thank you to my teammates Ian Parramore and David Hoerster for reviewing and contributing to this blog.*
With the launch of the Sentinel Platform, a new suite of features for the Microsoft Sentine...
Dec 29, 2025
This blog post is about understanding and using ms-settings URIs in Windows to control and customize the Settings experience in enterprise environments.
Dec 29, 2025
In November 2023 at Microsoft Ignite, we announced the integration of Microsoft Sentinel with Microsoft Defender XDR into the unified Microsoft Defender portal. Fast forward, in July 2024 we announce...
Dec 23, 2025
Introduction:
Managing Windows Server benefits licensing across hybrid environments can be challenging. Azure Arc combined with Azure Policy simplifies this by automatically enforcing licensing com...
Dec 22, 2025
Recent Discussions
I have absolutely no idea what Microsoft Defender 365 wants me to do here
The process starts with an email: There's more below on the email - an offer for credit monitoring, an option to add another device, an option to download the mobile app - but I don't want to do any of them, so I click on the "Open Defender" button, which results in this: OK, so my laptop is the bad boy here; there's that Status of "Action recommended", with no "recommendations", and the only live link here is "Add device", something I don't need to do. The only potential "problem" I can even guess at here is that Microsoft is telling me that the laptop needs updating. Since I seldom use the laptop, only when traveling, I'd guess the next time I fire it up the update will occur, but of course I really don't know that's the recommended action it's warning me about, do I? You'd expect that if something is warning you "ACTION NEEDED!!!" they'd be a little more explicit, wouldn't you?
Request for Advice on Managing Shared Glossary Terms in Microsoft Purview
Hi everyone, I'm looking for guidance from others who are working with Microsoft Purview (Unified Catalog), especially around glossary and governance domain design. Scenario: I have multiple governance domains, all within the same Purview tenant. We have some core business concepts from the conceptual data models, for example a term like “PARTY”, that are needed in every governance domain. However, based on Microsoft’s documentation: Glossary terms can only be created inside a specific governance domain, not at a tenant‑wide or global level. The Enterprise Glossary is only a consolidated view, not a place to create global terms or maintain a single shared version; it simply displays all terms from all domains in one list. If the same term is needed across domains, Purview requires separate term objects in each domain. Consistency must therefore be managed manually (by re‑creating the term in each domain) or by importing/exporting via CSV or automation (API/PyApacheAtlas). This leads to questions about maintainability, especially when we want one consistent definition across all domains. What I'm hoping to understand from others: How are you handling shared enterprise concepts (enterprise and conceptual data models) that need to appear in multiple governance domains? Are you duplicating terms in each domain and synchronising them manually or via automation? Have you adopted a “central domain” for hosting enterprise‑standard terms and then linking or referencing them in other domains? Is there any better pattern you’ve found to avoid fragmentation and to ensure consistent definitions across domains? Any advice, lessons learned, or examples of how you’ve structured glossary governance in Purview would be really helpful.
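For the automation route mentioned above (API/PyApacheAtlas), a minimal Python sketch of the "canonical definition, automated fan-out" pattern is shown below. This is a hypothetical illustration, not a confirmed Purview workflow: the glossary GUIDs and helper names are assumptions, the endpoint path follows the classic Atlas glossary API, and the newer Unified Catalog governance domains may expose a different API surface, so verify against current Purview documentation before relying on it.

```python
import json
import urllib.request

# Canonical definitions maintained in one place (e.g., source control),
# then pushed into every governance domain's glossary.
CANONICAL_TERMS = {
    "PARTY": "An individual or organization that the enterprise has a relationship with.",
}

def build_term_payload(name: str, glossary_guid: str) -> dict:
    """Build an Atlas-style glossary term body anchored to one target glossary."""
    return {
        "name": name,
        "longDescription": CANONICAL_TERMS[name],
        "anchor": {"glossaryGuid": glossary_guid},
    }

def push_term(purview_endpoint: str, token: str, payload: dict) -> dict:
    """POST the term to the (assumed) Atlas glossary term endpoint. Network call, not run here."""
    req = urllib.request.Request(
        f"{purview_endpoint}/catalog/api/atlas/v2/glossary/term",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# One payload per governance domain keeps the definition identical everywhere.
domain_glossaries = ["guid-sales", "guid-finance", "guid-risk"]  # hypothetical GUIDs
payloads = [build_term_payload("PARTY", g) for g in domain_glossaries]
```

Running a script like this on a schedule, or from CI whenever the canonical file changes, keeps the per-domain copies in sync without anyone editing terms by hand in each domain.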
This is a primary OKR: establish a unified method to consistently link individual entities (e.g., PARTY) to their associated PII‑classified column‑level data assets in Microsoft Purview, ensuring sensitive data is accurately identified, governed, and monitored across all domains, i.e., CDEs to glossary terms. Thanks in advance!
From “No” to “Now”: A 7-Layer Strategy for Enterprise AI Safety
The “block” posture on Generative AI has failed. In a global enterprise, banning these tools doesn't stop usage; it simply pushes intellectual property into unmanaged channels and creates a massive visibility gap in corporate telemetry. The priority has now shifted from stopping AI to hardening the environment so that innovation can run at velocity without compromising data sovereignty. Traditional security perimeters are ineffective against the “slow bleed” of AI leakage, where data moves through prompts, clipboards, and autonomous agents rather than bulk file transfers. To secure this environment, a 7-layer defense-in-depth model is required, treating the conversation itself as the new perimeter.
1. Identity: The Only Verifiable Perimeter
Identity is the primary control plane. Access to AI services must be treated with the same rigor as administrative access to core infrastructure. The strategy centers on enforcing device-bound Conditional Access, where access is strictly contingent on device health. To solve the "Account Leak" problem, the deployment of Tenant Restrictions v2 (TRv2) is essential to prevent users from signing into personal tenants using corporate-managed devices. For enhanced coverage, Universal Tenant Restrictions (UTR) via Global Secure Access (GSA) allows for consistent enforcement at the cloud edge. While TRv2 authentication-plane protection is GA, data-plane protection is GA for the Microsoft 365 admin center and remains in preview for other workloads such as SharePoint and Teams.
2. Eliminating the Visibility Gap (Shadow AI)
You can’t secure what you can't see. Microsoft Defender for Cloud Apps (MDCA) serves to discover and govern the enterprise AI footprint, while Purview DSPM for AI (formerly AI Hub) monitors Copilot and third-party interactions. By categorizing tools using MDCA risk scores and compliance attributes, organizations can apply automated sanctioning decisions and enforce session controls for high-risk endpoints.
3. Data Hygiene: Hardening the “Work IQ”
AI acts as a mirror of internal permissions. In a "flat" environment, AI acts like a search engine for your over-shared data. Hardening the foundation requires automated sensitivity labeling in Purview Information Protection. Identifying PII and proprietary code before assigning AI licenses ensures that labels travel with the data, preventing labeled content from being exfiltrated via prompts or unauthorized sharing.
4. Session Governance: Solving the “Clipboard Leak”
The most common leak in 2025 is not a file upload; it’s a simple copy-paste action or a USB transfer. Deploying Conditional Access App Control (CAAC) via MDCA session policies allows sanctioned apps to function while specifically blocking cut/copy/paste. This is complemented by Endpoint DLP, which extends governance to the physical device level, preventing sensitive data from being moved to unmanaged USB storage or printers during an AI-assisted workflow. Purview Information Protection with IRM rounds this out by enforcing encryption and usage rights on the files themselves. When a user tries to print a "Do Not Print" document, Purview triggers an alert that flows into Microsoft Sentinel. This gives the SOC visibility into actual policy violations instead of having to hunt through generic activity logs.
5. The “Agentic” Era: Agent 365 & Sharing Controls
Now that we're moving from "Chat" to "Agents", Agent 365 and Entra Agent ID provide the necessary identity and control plane for autonomous entities. A quick tip: in large-scale tenants, default settings often present a governance risk. A critical first step is navigating to the Microsoft 365 admin center (Copilot > Agents) to disable the default “Anyone in organization” sharing option. Restricting agent creation and sharing to a validated security group is essential to prevent unvetted agent sprawl and ensure that only compliant agents are discoverable.
6. The Human Layer: “Safe Harbors” over Bans
Security fails when it creates more friction than the risk it seeks to mitigate. Instead of an outright ban, investment in AI skilling (teaching users context minimization: redacting specifics before interacting with a model) is the better path. Providing a sanctioned, enterprise-grade "Safe Harbor" like M365 Copilot offers a superior tool that naturally cuts down the use of Shadow AI.
7. Continuous Ops: Monitoring & Regulatory Audit
Security is not a “set and forget” project, particularly with the EU AI Act on the horizon. Correlating AI interaction and DLP alert data in Microsoft Sentinel using Purview Audit (specifically the CopilotInteraction logs) allows for real-time responses. Automated SOAR playbooks can then trigger protective actions, such as revoking an Agent ID, if an entity attempts to access sensitive HR or financial data.
Final Thoughts
Securing AI at scale is an architectural shift. By layering Identity, Session Governance, and Agentic Identity, AI moves from being a fragmented risk to a governed tool that actually works for the modern workplace.
Updating SDK for Java used by Defender for Server/CSPM in AWS
Hi, I have a customer who is using Defender for Cloud/CSPM in AWS. Last week, the AWS Health Dashboard lit up with a recommendation around the use of AWS SDK for Java 1.x in their organization. This version will reach end of support on December 31, 2025. The recommendation is to migrate to AWS SDK for Java 2.x. The issue is present in all of the AWS workload accounts. They found that a large number of these alerts are caused by the Defender CSPM service, running remotely and using AWS SDK for Java 1.x. The customer attached a couple of sample events that were gathered from the CloudTrail logs. Please note that in both cases:
assumed-role: DefenderForCloud-Ciem
sourceIP: 20.237.136.191 (MS Azure range)
userAgent: aws-sdk-java/1.12.742 Linux/6.6.112.1-2.azl3 OpenJDK_64-Bit_Server_VM/21.0.9+10-LTS java/21.0.9 kotlin/1.6.20 vendor/Microsoft cfg/retry-mode/legacy cfg/auth-source#unknown
Can someone provide guidance about this? How can we find out whether DfC is going to leverage AWS SDK for Java 2.x after Dec 31, 2025? Thanks, Terru
Lineage in Purview - Fabric
Dears, I was checking the lineage which is collected from Microsoft Fabric by Purview and I saw these limitations on the Microsoft page. I have some sources like:
- Dataverse (I am doing a Dataverse direct shortcut to bring the information from Dataverse into Fabric - Raw Layer)
- SharePoint (I am doing a Fabric shortcut into SharePoint - Fabric Raw Layer) to bring the files into Fabric
- SQL Server MI (I am doing Fabric mirroring - Fabric Raw Layer)
I did not understand very well the lineage exceptions mentioned above. Let me ask these questions please:
1) Suppose I have a table which came from SQL Server MI. This table was first ingested into the Fabric Raw Layer, then into the Bronze Layer, then into the Fabric Silver Layer, and after that into the Fabric Gold Layer. Each layer has its own Fabric workspace. When I bring the metadata via Purview into the Purview Unified Catalog, do I get automatic lineage between the Silver layer and the Gold layer for that table (suppose the table is named table 1)? Or is this not possible because:
1.1) They are different workspaces
1.2) The table originated outside Fabric, in the SQL Server MI?
2) Suppose I have a table in the Silver layer (in the silver layer workspace); then, in the gold layer, I create a shortcut to that table. And then I use a notebook to transform that table into another table in the same gold layer. Will I have lineage between the Silver table and the new transformed table in the gold layer?
3) To create the lineage relationships between tables which are not created automatically, I was thinking of using the Purview REST APIs. Which API shall I use specifically for this?
Thanks, Pedro
Ingesting Windows Security Events into Custom Datalake Tables Without Using Microsoft‑Prefixed Table
Hi everyone, I’m looking to see whether there is a supported method to ingest Windows Security Events into custom Microsoft Sentinel Data Lake–tiered tables (for example, SecurityEvents_CL) without writing to or modifying the Microsoft‑prefixed analytical tables. Essentially, I want to route these events directly into custom tables only, bypassing the default Microsoft‑managed tables entirely. Has anyone implemented this, or is there a recommended approach? Thanks in advance for any guidance. Best Regards, Prabhu Kiran
Latest Threat Intelligence (December 2025)
Microsoft Defender for IoT has released the December 2025 Threat Intelligence package. The package is available for download from the Microsoft Defender for IoT portal (click Updates, then Download file).
Threat Intelligence updates reflect the combined impact of proprietary research and threat intelligence carried out by Microsoft security teams. Each package contains the latest CVEs (Common Vulnerabilities and Exposures), IOCs (Indicators of Compromise), and other indicators applicable to IoT/ICS/OT networks (published during the past month), researched and implemented by Microsoft Threat Intelligence Research - CPS. The CVE scores are aligned with the National Vulnerability Database (NVD). Starting with the August 2023 threat intelligence updates, CVSSv3 scores are shown where relevant; otherwise the CVSSv2 scores are shown.
Guidance
Customers are recommended to update their systems with the latest TI package in order to detect potential exposure risks and vulnerabilities in their networks and on their devices. Threat Intelligence packages are updated every month with the most up-to-date security information available, ensuring that Microsoft Defender for IoT can identify malicious actors and behaviors on devices.
Update your system with the latest TI package
The package is available for download from the Microsoft Defender for IoT portal (click Updates, then Download file). For more information, please review Update threat intelligence data | Microsoft Docs.
MD5 Hash: 5c642a16bf56cb6d98ef8b12fdc89939
For cloud-connected sensors, Microsoft Defender for IoT can automatically update new threat intelligence packages following their release; see the documentation for more information.
Unifying AWS and Azure Security Operations with Microsoft Sentinel
The Multi-Cloud Reality
Most modern enterprises operate in multi-cloud environments, using Azure for core workloads and AWS for development, storage, or DevOps automation. While this approach increases agility, it also expands the attack surface. Each platform generates its own telemetry:
- Azure: Activity Logs, Defender for Cloud, Entra ID sign-ins, Sentinel analytics
- AWS: CloudTrail, GuardDuty, Config, and CloudWatch
Without a unified view, security teams struggle to detect cross-cloud threats promptly. That’s where Microsoft Sentinel comes in, bridging Azure and AWS into a single, intelligent Security Operations Center (SOC).
Architecture Overview
Connect AWS Logs to Sentinel
1. AWS CloudTrail via S3 Connector: Enable the AWS CloudTrail connector in Sentinel. Provide your S3 bucket and IAM role ARN with read access. Sentinel will automatically normalize logs into the AWSCloudTrail table.
2. AWS GuardDuty Connector: Use the AWS GuardDuty API integration for threat detection telemetry. Detected threats, such as privilege escalation or reconnaissance, appear in Sentinel as the AWSGuardDuty table.
Normalize and Enrich Data
Once logs are flowing, enrich them to align with Azure activity data.
Example KQL for mapping CloudTrail to Sentinel entities:
AWSCloudTrail
| extend AccountId = tostring(parse_json(Resources)[0].accountId)
| extend User = tostring(parse_json(UserIdentity).userName)
| extend IPAddress = tostring(SourceIpAddress)
| project TimeGenerated, EventName, User, AccountId, IPAddress, AWSRegion
Then correlate AWS and Azure activities:
let AWS = AWSCloudTrail
| summarize AWSActivity = count() by User, bin(TimeGenerated, 1h);
let Azure = AzureActivity
| summarize AzureActivity = count() by Caller, bin(TimeGenerated, 1h);
AWS
| join kind=inner (Azure) on $left.User == $right.Caller
| where AWSActivity > 0 and AzureActivity > 0
| project TimeGenerated, User, AWSActivity, AzureActivity
Automate Cross-Cloud Response
Once incidents are correlated, Microsoft Sentinel Playbooks (Logic Apps) can automate your response. Example playbook (“CrossCloud-Containment.json”):
1. Disable the user in Entra ID
2. Send a command to the AWS API via Lambda to deactivate the IAM key
3. Notify the SOC in Teams
4. Create a ServiceNow ticket
POST https://api.aws.amazon.com/iam/disable-access-key
PATCH https://graph.microsoft.com/v1.0/users/{user-id}
{ "accountEnabled": false }
Build a Multi-Cloud SOC Dashboard
Use Sentinel Workbooks to visualize unified operations.
Query 1 – CloudTrail Events by Region
AWSCloudTrail
| summarize Count = count() by AWSRegion
| render barchart
Query 2 – Unified Security Alerts
union SecurityAlert, AWSGuardDuty
| summarize TotalAlerts = count() by ProviderName, Severity
| render piechart
Scenario
Incident: A compromised developer account accesses EC2 instances on AWS and then logs into Azure via the same IP.
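One hedged way to sketch detection of that scenario in KQL is to pivot on the shared source IP. This assumes the SigninLogs table from the Entra ID connector and the standard AWSCloudTrail schema; verify column names such as SourceIpAddress and IPAddress against your own workspace before using it:

```kusto
// Source IPs seen making AWS API calls and Entra ID sign-ins in the same window
let awsIps = AWSCloudTrail
    | where TimeGenerated > ago(1h)
    | summarize AwsCalls = count() by IPAddress = tostring(SourceIpAddress);
SigninLogs
| where TimeGenerated > ago(1h)
| summarize Signins = count() by IPAddress, UserPrincipalName
| join kind=inner (awsIps) on IPAddress
| project IPAddress, UserPrincipalName, Signins, AwsCalls
```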
Detection Flow:
1. CloudTrail logs → Sentinel detects unusual API calls
2. Entra ID sign-ins → Sentinel correlates IP and user
3. Sentinel incident triggers playbook → disables user in Entra ID, suspends AWS IAM key, notifies SOC
Strengthen Governance with Defender for Cloud
Enable Microsoft Defender for Cloud to:
- Monitor both Azure and AWS accounts from a single portal
- Apply CIS benchmarks for AWS resources
- Surface findings in Sentinel’s SecurityRecommendations table
Entity playbook in XDR
Hello All! In my Logic Apps Sentinel automations I often use the entity trigger to run some workflows. Some time ago there was information that Sentinel would be moved into Microsoft XDR, and some of the Sentinel elements are already there. In XDR I can run a playbook from the incident level, but I can't do it from the entity level - for example, in XDR when I click on an IP or open the IP address page, I can't find a "Run playbook" button or anything like that. Do you know if the "Run playbook on entity" feature will be moved to XDR as well? Best, Piotr K.
Scam Defender pop Up… help please
Can someone please help? My dad has had what look like scam pop-ups come up on his MacBook. I have reported it to Microsoft but don’t know how long it will be until they get back to me. I want to know if anyone can confirm they’re scams and how I can get them off his screen; it won’t let us click on the X. Thank you
MIP SDK cannot read file labels if a message was encrypted by Outlook Classic.
Our C++ application uses MIP SDK version 1.14.108. The application does Office file decryption and label reading. A problem with label reading is observed. Steps to reproduce:
1. Create a docx file with a label which does not impose encryption.
2. Open Outlook Classic, compose an email, attach the document from step 1, click Encrypt, and send.
3. During message sending, our application intercepts the docx file encrypted by Outlook in the temporary folder C:\Users\UserName\AppData\Local\Temp.
4. The application decrypts the intercepted file using mipns::FileHandler::RemoveProtection. Visual inspection demonstrates that decryption runs successfully.
5. Then a separate FileHandler for the decrypted file is created, and mipns::FileHandler::GetLabel() returns an empty label.
It means that the label was lost during decryption. Upon visual inspection of the decrypted file via Word we can see that the label is missing. Also, we do not see MSIP_Label* entries in the metadata (File -> Info -> Properties -> Advanced Properties -> Custom). Here is a fragment of the redacted MIP SDK log during file handler creation:
=================
file_engine_impl.cpp:327 "Creating file handler for: [D:\GitRepos\ ...reducted]" mipns::FileEngineImpl::CreateFileHandlerImpl
gsf_utils.cpp:50 "Initialized GSF" `anonymous-namespace'::InitGsfHelper
data_spaces.cpp:415 "No LabelInfo stream was found. No v1 custom properties" mipns::DataSpaces::GetLabelInfoStream
data_spaces.cpp:428 "No LabelInfo stream was found. No v1 custom properties" mipns::DataSpaces::GetXmlPropertiesV1
file_format_base.cpp:155 "Getting protection from input..." mipns::FileFormatBase::GetProtection
license_parser.cpp:233 "XPath returned no results" `anonymous-namespace'::GetXmlNodesFromPath
license_parser.cpp:233 "XPath returned no results" `anonymous-namespace'::GetXmlNodesFromPath
license_parser.cpp:299 "GetAppDataNode - Failed to get ID in PL app data section, parsing failed" `anonymous-namespace'::GetAppDataNode
api_log_cache.cpp:58 "{{============== API CACHED LOGS BEGIN ============}}" mipns::ApiLogCache::LogAllMessages
file_engine_impl.cpp:305 "Starting API call: file_create_file_handler_async scenarioId=89fd6484-7db7-4f68-8cf7-132f87825a26" mipns::FileEngineImpl::CreateFileHandlerAsync 37948
default_task_dispatcher_delegate.cpp:83 "Executing task 'ApiObserver-0' on a new detached thread" mipns::DefaultTaskDispatcherDelegate::ExecuteTaskOnIndependentThread 37948
file_engine_impl.cpp:305 "Ended API call: file_create_file_handler_async" mipns::FileEngineImpl::CreateFileHandlerAsync 37948
file_engine_impl.cpp:305 "Starting API task: file_create_file_handler_async scenarioId=89fd6484-7db7-4f68-8cf7-132f87825a26" mipns::FileEngineImpl::CreateFileHandlerAsync
file_engine_impl.cpp:327 "Creating file handler for: [D:\GitRepos\...reducted....docx]" mipns::FileEngineImpl::CreateFileHandlerImpl
file_format_factory_impl.cpp:88 "Create File Format. Extension: [.docx]" mipns::FileFormatFactoryImpl::Create
file_format_base.cpp:363 "V1 metadata is not supported for file extension .docx. Setting metadata version to 0" mipns::FileFormatBase::CalculateMetadataVersion
compound_file.cpp:183 "Open compound file for read" mipns::CompoundFile::OpenRead
gsf_utils.cpp:50 "Initialized GSF" `anonymous-namespace'::InitGsfHelper
compound_file_storage_impl.cpp:351 "Get Metadata" mipns::CompoundFileStorageImpl::GetMetadata
compound_file_storage_impl.cpp:356 "No Metadata, not creating GSF object" mipns::CompoundFileStorageImpl::GetMetadata
metadata.cpp:119 "Create Metadata" mipns::Metadata::Metadata
metadata.cpp:136 "Got [0] properties from DocumentSummaryInformation" mipns::Metadata::GetProperties
compound_file_storage_impl.cpp:351 "Get Metadata" mipns::CompoundFileStorageImpl::GetMetadata
compound_file_storage_impl.cpp:356 "No Metadata, not creating GSF object" mipns::CompoundFileStorageImpl::GetMetadata
metadata.cpp:119 "Create Metadata" mipns::Metadata::Metadata
metadata.cpp:136 "Got [0] properties from DocumentSummaryInformation" mipns::Metadata::GetProperties
=================
Data Quality Error (Internal Service Error)
I am facing an issue while running the DQ scan: when I tried both manual and scheduled scans, each time I faced an Internal Service Error (DataQualityInternalError: "Internal service error occurred. Please retry or contact Microsoft support"). Data profiling is running successfully, but DQ is not working for any of the assets. After the lineage patch which MS had fixed, they introduced a Custom SQL option to create a rule, and only after that did I start facing this issue. Is anyone else also facing the same? I tried with different data sources (ADLS and Synapse); it's the same for both. If anyone has an idea, do share it here, it will be helpful.
Pre-migration queries related to data discovery and file analysis
Hi Team, A scenario involves migrating approximately 25 TB of data from on‑premises file shares to SharePoint. Before the migration, a discovery phase is required to understand the composition of the data. The goal is to identify file types (Microsoft Office documents, PDFs, images, etc.) without applying any labels at this stage. The discovery requirements include:
- Identification of file types
- Detection of duplicate or redundant files
- Identification of embedded UNC paths, macros, and document links
- Detection of applications running directly from file shares
Guidance is needed on which Microsoft Purview components—such as the on‑premises scanner or the Data Map—can support these discovery requirements, and whether Purview is capable of meeting all the above needs. Clarification is also needed on whether Purview can detect duplicate or redundant files, and if so, which module or capability enables this. Additionally, since Purview allows downloading only up to 10,000 logs at a time, what would be the best approach to obtain discovery logs for a dataset of this size (25 TB)? Thank you!
Security alerts Graph API and MFA
Hello, Does anyone know if the partner security alerts Graph API (https://learn.microsoft.com/en-us/graph/api/resources/partner-security-partnersecurityalert-api-overview?view=graph-rest-beta) works with the app-only method (app + cert or app + secret)? Currently I successfully log in to Graph using both methods:
Welcome to Microsoft Graph! Connected via apponly access using "ID number"
Then I request the alerts and I get an error:
WARNING: Error body: {"error":{"code":"UnknownError","message":"{\"Error\":{\"Code\":\"Unauthorized_MissingMFA\",\"Message\":\"MFA is required for this request. The provided authentication token does not have MFA
I found that the old API worked with app+user credentials only, but this restriction is not mentioned for the new one.
Looking for a way to set up mail moderation using Entra dynamic group
Our organization is working on shifting from a hybrid AD-Entra environment to Entra only. We currently use mail-moderated dynamic distribution lists using Extension Attributes to set the rules for mass internal company emails. In conjunction with migrating to Entra only, we are also planning to use an API integration to manage our Entra account creation and updates. This integration does not have the ability to populate the Extension Attribute fields. Because of these changes we will no longer be able to use the existing dynamic distribution lists we have, and we have not had luck finding a solution for it yet. Has anyone else gone through this or have any experience solving this same problem?
Tenant Forwarding - Trusted ARC Sealer
As part of a tenant-to-tenant migration we often need to forward mail from one tenant to another. This can cause some issues with email authentication verdicts on the destination tenant. Is it possible, or best practice, to configure the other tenant as a trusted ARC sealer to help with forwarded email deliverability?
Entra Risky Users Custom Role
My customer implemented unified RBAC (Defender Portal) and removed the Entra Security Operator role. They lost the ability to manage Risky Users in Entra. Two options were explored by the customer: the Protected Identity Administrator role (licensing unclear), or creating a custom role with microsoft.directory/identityProtection/riskyUsers/update, a permission they couldn't find when building a custom role. Do you know if there are other options to manage Risky Users without using the Security Operator role?
How to make DLP\Auto labeling more efficient
Good Morning All, As part of some new compliance policies, I have introduced labels to an organisation to manage some data. Part of the requirements is auto-labeling. I have identified a set of documents that I want to apply this to, selected around 10 to 15 of these documents, and put them into a test SharePoint site. I created a bespoke sensitive info type and tested it by uploading a file: it does recognize that the document falls under this remit. However, when I apply this to an auto-labeling policy it does not pick anything up; I have attempted to change the confidence level and am still getting little to no success. I ran a DLP policy and it picked up 2 of the documents, even though they are all pretty much the same document. I originally used a regex to target the primary element that is specific to all these files (a project code), however I got no results, so I reverted to a keyword list that had specific words from all these folders, and still I get very inaccurate results. Are there any tips for making DLP/auto labeling more efficient?
DLP USB Block
Currently we have DLP policies set up to block the use of USB devices and copying data to them. When checking Activity Explorer I am still seeing users able to copy data to USB devices, and for the action item it says "Audit" even though in the DLP policies we explicitly set it to block. Has anyone run into this issue or seen similar behavior?
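For troubleshooting audit-vs-block mismatches like this, advanced hunting can show what the device actually enforced. A hedged KQL sketch against the DeviceEvents table is below; the RemovableStoragePolicyTriggered action type is part of the documented device-control events, and the AdditionalFields payload carries the verdict and policy details, so inspect it directly rather than relying on assumed sub-field names:

```kusto
// Recent removable-storage policy events; AdditionalFields shows audit vs. block verdicts
DeviceEvents
| where Timestamp > ago(7d)
| where ActionType == "RemovableStoragePolicyTriggered"
| project Timestamp, DeviceName, InitiatingProcessAccountName, AdditionalFields
| order by Timestamp desc
```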