Security and AI Essentials
Protect your organization with AI-powered, end-to-end security.
Defend Against Threats
Get ahead of threat actors with integrated solutions.
Secure All Your Clouds
Protection from code to runtime.
Secure All Access
Secure access for any identity, anywhere, to any resource.
Protect Your Data
Comprehensive data security across your entire estate.
Recent Blogs
As organizations accelerate AI adoption, securing AI workloads has become a top priority. Unlike traditional cloud applications, AI systems introduce new risks—such as prompt injection, data leakage,...
Feb 24, 2026 · 27 Views · 0 Likes · 0 Comments
Attacks move faster than security teams can react. They spread across identities, endpoints, and SaaS apps in minutes, overwhelming analysts with signals and leaving little time to act. By the time a...
Feb 24, 2026 · 34 Views · 0 Likes · 0 Comments
Security teams today operate under constant pressure. They are expected to respond faster, automate more, and do so without sacrificing precision. Traditional security orchestration, automation and r...
Feb 23, 2026 · 1.9K Views · 7 Likes · 3 Comments
Microsoft is pleased to announce the February 2026 Revision (v2602) of the security baseline package for Windows Server 2025! You can download the baseline package from the Microsoft Security Complia...
Feb 23, 2026 · 446 Views · 0 Likes · 0 Comments
Recent Discussions
Clarification on UEBA Behaviors Layer Support for Zscaler and Fortinet Logs
I would like to confirm whether the new UEBA Behaviors Layer in Microsoft Sentinel currently supports generating behavior insights for Zscaler and Fortinet log sources. Based on the documentation, the preview version of the Behaviors Layer only supports specific vendors under CommonSecurityLog (CyberArk Vault and Palo Alto Threats), AWS CloudTrail services, and GCP Audit Logs. Since Zscaler and Fortinet are not listed among the supported vendors, I want to verify: does the UEBA Behaviors Layer generate behavior records for Zscaler and Fortinet logs, or are these vendors currently unsupported for behavior generation? Logs from Zscaler and Fortinet are also ingested into the CommonSecurityLog table.
7 Views · 0 Likes · 0 Comments

Understand New Sentinel Pricing Model with Sentinel Data Lake Tier
Introduction to Sentinel and Its New Pricing Model
Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (Log Analytics workspace), which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a new dual-tier pricing model consisting of the Analytics tier and the Data Lake tier. The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands: the Analytics tier for high-value detections and investigations, and the Data Lake tier for large or archival types. This allows organizations to significantly lower cost while still retaining all their security data for analytics, compliance, and hunting. The flow diagram below depicts the new Sentinel pricing model.
Now let's understand this new pricing model through the following scenarios:
Scenario 1A (Pay-As-You-Go)
Scenario 1B (Usage Commitment)
Scenario 2 (Data Lake Tier Only)
Scenario 1A (Pay-As-You-Go)
Requirement
Suppose you need to ingest 10 GB of data per day, and you must retain that data for 2 years. However, you will only frequently use, query, and analyze the data for the first 6 months.
Solution
To optimize cost, you can ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. After that period, the remaining 18 months of retention can be shifted to the Data Lake tier, which provides low-cost storage for compliance and auditing needs.
You will, however, be charged separately for Data Lake tier querying and analytics, which is depicted as Compute (D) in the pricing flow diagram.
Pricing Flow / Notes
The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan.
All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost.
For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity.
For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.
Azure Pricing Calculator Equivalent
Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period: although the Analytics tier retention is set to 6 months, the first 3 months of retention fall under the free retention limit, so retention charges apply only for the remaining 3 months of the analytics retention window. The Azure pricing calculator will adjust accordingly.
Scenario 1B (Usage Commitment)
Now, suppose you are ingesting 100 GB per day. If you follow the same pay-as-you-go pricing model described above, your estimated cost would be approximately $15,204 per month. However, you can reduce this cost by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. Note that the discount applies only to Analytics tier ingestion; it does not apply to Analytics tier retention costs or to any Data Lake tier related charges. Please refer to the pricing flow and the equivalent pricing calculator results shown below.
Monthly cost savings: $15,204 – $11,184 = $4,020 per month
Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the pay-as-you-go rate? No. The entire 150 GB/day will still be billed at the discounted rate associated with the 100 GB/day commitment tier.
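The Scenario 1B arithmetic can be sanity-checked with a short script. The dollar figures are the sample estimates quoted in this post, not official Azure rates:

```python
# Sanity-check of the Scenario 1B numbers quoted above (sample prices
# from this post, not authoritative Azure rates).
paygo_monthly = 15_204    # 100 GB/day, pay-as-you-go monthly estimate
commit_monthly = 11_184   # same volume on the 100 GB/day commitment tier

savings = paygo_monthly - commit_monthly
print(f"Monthly savings with the commitment tier: ${savings:,}")  # $4,020

# Overage behavior: 150 GB/day is still billed entirely at the commitment
# tier's discounted rate, not 100 GB committed + 50 GB pay-as-you-go.
```
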
Azure Pricing Calculator Equivalent (100 GB/day)
Azure Pricing Calculator Equivalent (150 GB/day)
Scenario 2 (Data Lake Tier Only)
Requirement
Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years per your organization's compliance or forensic policies.
Solution
Since these logs are not actively analyzed, you should avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.
Pricing Flow
Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period. If, at any point in the future, you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage. Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.
Realized Savings
Scenario 1 (10 GB/day in the Analytics tier): $1,520.40 per month
Scenario 2 (10 GB/day directly into the Data Lake tier): $202.20 per month without compute; $257.20 per month with a sample compute price
Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month
Azure calculator equivalent without compute
Azure calculator equivalent with sample compute
Conclusion
The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used.
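The realized-savings figures for Scenario 2 can be reproduced the same way (again using the post's sample prices, not official rates):

```python
# Reproduce the "Realized Savings" comparison for Scenario 2
# (sample prices from this post, not authoritative Azure rates).
analytics_cost = 1520.40     # Scenario 1: 10 GB/day in the Analytics tier
lake_no_compute = 202.20     # Scenario 2: Data Lake tier only, no queries
lake_with_compute = 257.20   # Scenario 2 plus a sample compute charge

print(f"Savings, no compute:   ${analytics_cost - lake_no_compute:,.2f}")
print(f"Savings, with compute: ${analytics_cost - lake_with_compute:,.2f}")
```
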
High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs, such as audit, compliance, or long-term retention data, can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data, and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios, whether active investigation, archival storage, compliance retention, or large-scale telemetry ingestion, to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.
Solved

Using managed identities to assign users and groups to app-roles in Enterprise apps
Hi everyone, I'm looking for a way to use managed identities to assign users and groups to app-roles in Enterprise apps via Azure DevOps pipelines (using Workload Identity Federation). Currently it seems I can't add a managed identity as an owner on the enterprise app, for example. Thanks in advance!
78 Views · 0 Likes · 1 Comment

MFA catch-22 during onboarding due to registration policy
Hi,
We are experiencing a catch-22 scenario during user onboarding related to MFA. New users are required to install the Microsoft Authenticator app via our Company Portal. However, they are prompted to complete MFA registration before they can access or download anything from the Company Portal. Since they do not yet have the Authenticator app installed, they are effectively blocked from completing the MFA setup. From our investigation, it appears that the Multi-Factor Authentication registration policy is enforcing MFA registration for new users. In our scenario, this creates a circular dependency. We have attempted to exclude our office network from MFA using Conditional Access, but this does not resolve the issue because the MFA registration policy is triggered before Conditional Access policies are evaluated.
Our questions:
Is there a recommended way to handle MFA onboarding in this type of scenario?
Can Conditional Access policies be used instead of the MFA registration policy for initial MFA enrollment?

Save the date - January 26, 2026 - AMA: Best practices for applying Zero Trust using Intune
Join us on January 26 at 10:00 AM PT to Ask Microsoft Anything (AMA) and get the answers you need to implement the right policies, security settings, device configurations, and more. Never trust, always verify. Tune in for tips and insights to help you secure your endpoints using Microsoft Intune as part of your larger Zero Trust strategy. Find out how you can use Intune to protect both access and data on organization-owned devices and personal devices used for work. Go to aka.ms/AMA/IntuneZeroTrust and select "attend" to add this event to your calendar. Have questions? Submit them early by signing in to Tech Community and posting them on the event page!

Data Product Owner and Contacts should be separate fields
Currently, the 'contacts' field under a data product has a one-to-one relationship with the 'data product owner' field. It is not possible to add 'contacts' separately. I believe this does not make sense for most organizations. For example, our data products have one owner and multiple contacts (e.g. data stewards, data experts). That's how our governance works. We are not going to add people to the 'data product owner' field who are not data owners just to show them in contacts. Also, why would you have two fields that basically do the same thing? Clicking on 'data product owner' already gives me the information for 'contacts'. Please let us add contacts here that are not the data product owner.
443 Views · 4 Likes · 9 Comments

Onboard devices in Purview is grayed out
I’m getting started with Microsoft Purview and running into issues onboarding devices. In the Purview portal, no devices appear, and the “Onboard devices” option is grayed out. I have EMS E5 licenses assigned to all users, and I’m signed in as a Global Admin with Purview Administrator and Security Administrator roles. All devices are managed by Intune and run Windows 11 Enterprise with the latest updates. They are Microsoft Entra joined (AAD joined), show up correctly in Defender, and their Defender onboarding status is active and onboarded. What piece am I missing that would prevent these devices from showing in Purview and keep the onboarding option disabled? Any guidance would be appreciated.
94 Views · 0 Likes · 4 Comments

M365 Compliance connector error in Power Automate flow with retention label action
Hi everyone, I've been troubleshooting this issue for some time now but haven't found a solution yet. I'm configuring retention labels in Purview that trigger a 'Run a Power Automate Flow' action at the end of the retention period. This functionality is outlined in this Microsoft Learn article: https://learn.microsoft.com/en-us/purview/retention-label-flow The issue arises on the Power Automate side. To set up the flow that integrates with the retention label, the Compliance 365 Connector must be used. This connector requires a Power Automate Premium license, which I have on my account. The flows are also set up in the default Power Automate environment, as required. Despite following all the necessary steps, the flows won't launch. I keep encountering the same error each time. I've even tried creating multiple retention labels and corresponding flows, each using the compliance connector, but the result is always the same. I've attached documentation with screenshots for reference, showing the first error in the flow checker and the flow's inputs and outputs. When opening the flow I see 'Forbidden Error' and 'The response is not in JSON format'. For test purposes I created multiple retention labels, each configured to start the flow 1-5 days after the label is applied, and each time I get the same errors. I know it can sometimes take some time for a label's action to run, but it has been a few months now. Please help.
405 Views · 0 Likes · 2 Comments

How to Include Custom Details from an Alert in Email Generated by a Playbook
I have created an analytics rule that queries Sentinel for security events pertaining to group membership additions and triggers an alert for each event found. The rule does not create an incident. Within the rule logic, I have created three "custom details" for specific fields within the event (TargetAccount, MemberName, SubjectAccount). I have also created a corresponding playbook for the purpose of sending an email to me when an alert is triggered. The associated automation rule has been configured and is triggered by the analytics rule. All of this is working as expected: when a member is added to a security group, I receive an email. The one remaining piece is to populate the email message with the custom details that I've identified in the rule. However, I'm not sure how to do this. Essentially, I would like the values of the three custom details shown in the first screenshot below to show up in the body of the email, shown in the second screenshot, next to their corresponding names. So, for example, say Joe Smith is added to the group "Admin" by Tom Jones. These are the fields and values in the event that I want to pull out:
TargetAccount = Admin
MemberName = Joe Smith
SubjectAccount = Tom Jones
The custom details would then be populated as such:
Security_Group = Admin
Member_Added = Joe Smith
Added_By = Tom Jones
and the body of the email would contain:
Group: Admin
Member Added: Joe Smith
Added By: Tom Jones
1.7K Views · 0 Likes · 6 Comments

Classification on DataBricks
Hello everyone, I would like to request an updated confirmation regarding the correct functioning of custom classification for Databricks Unity Catalog data sources. Here is my current setup:
The data source is active.
Source scanning is working correctly.
I created the custom classification in “Annotation management / Classifications”.
I created and successfully tested the regular expression under “Annotation management / Classification Rules”.
I generated the custom scan rule set in “Source management / Scan Rule Sets”, associated with Databricks and selecting the custom rule.
However, when running the scan on Databricks:
I do not find any option to select my scan rule set (for another source, like Teradata, this option is visible).
No classification findings are generated based on my custom rule.
Other tests do produce findings (system-generated).
Does anyone have insights on what I should verify? Or is this custom classification functionality not supported for Databricks?

Automatic sensitivity label on existing labeled documents and emails
Suppose I enable automatic sensitivity labeling today for the label "Confidential" based on the sensitive information type "Credit Card", and 1000 documents are labeled "Confidential". What happens if I remove the sensitive information type "Credit Card" from the label "Confidential" and add it to the automatic labeling for "Highly Confidential"? What happens to the 1000 documents that already have the label "Confidential"? Will they be changed to "Highly Confidential" or not?
23 Views · 0 Likes · 0 Comments

IdentityLogonEvents - IsNtlmV1
Hi,
I cannot find documentation on how the IdentityLogonEvents table's AdditionalFields.IsNtlmV1 field is populated. In a demo environment, I intentionally "enforced" NTLMv1 and made an NTLMv1 connection to a domain controller. In the DC's Security log, event ID 4624 shows the correct info:
Detailed Authentication Information:
Logon Process: NtLmSsp
Authentication Package: NTLM
Transited Services: -
Package Name (NTLM only): NTLM V1
Key Length: 128
On the MDI side, however, it looks like this (using the following KQL to display the relevant info):
IdentityLogonEvents
| where ReportId == @"f70dbd37-af8e-4e4e-a77d-b4250f9e0d0b"
| extend todynamic(AdditionalFields)
| project TimeGenerated, ActionType, Application, LogonType, Protocol, IsNtlmV1 = AdditionalFields.IsNtlmV1
TimeGenerated: Nov 28, 2025 10:43:05 PM
ActionType: LogonSuccess
Application: Active Directory
LogonType: Credentials validation
Protocol: Ntlm
IsNtlmV1: false
Can someone please explain under which circumstances the IsNtlmV1 property will become "true"? Thank you in advance.

Email to external (trusted user) not require verify user Identity (with Google or One-time passcode)
Dear Experts and Community,
I am getting started with MS Purview Data Loss Prevention. I have one point to clarify and would appreciate your advice, comments, or shared good practice on the following:
First, we can send email containing sensitive information to external users, and it is encrypted or blocked (result: working as expected). If the email is encrypted, the external recipient must verify their identity by signing in with a Google account or with a one-time passcode.
Second, we plan to send email to external users (only trusted users / domains). Is it possible to not require these users to reverify their identity again and again? If yes, how can we do it? If not, why?
Much appreciated for any updates and support. Thanks,

Text formatting issue with URL Hyperlinking in phishing campaign indicators
I am running some phishing campaigns, and while editing a payload I added a URL hyperlinking indicator. I typed in the text for the indicator and included some empty lines. However, in the preview and in the actual email the extra lines are removed. This makes it look all crammed together and not very readable. Any idea how I can include empty lines to break it up?

URL Hyperlinking phishing training
I'm using the Defender phishing simulations to perform testing. When creating a positive reinforcement email that goes to the person, you have the option to use default text or put in your own text. When I put in my own text I include blank lines, but when it renders the lines are not displayed, so it looks like a bunch of text crammed together. Any idea how to get these lines to display?

Purview Data Map scanning Microsoft Fabric and no classifications applied or scan rule sets
Microsoft Purview cannot currently apply built-in or custom classifications (including sensitive information types) to metadata discovered from Microsoft Fabric workspace scans. While Purview can register Fabric workspaces and extract structural metadata (workspaces, Lakehouses, Warehouses, tables, columns, and limited lineage), classification rules are not executed against Fabric assets in the same way they are for supported sources such as Azure SQL, ADLS Gen2, or on-prem databases. This results in classification gaps across a core enterprise analytics platform.
Why This Is a Significant Service Omission
1. Breaks the core value proposition of Purview
2. Undermines regulatory and risk management controls
3. Creates an inconsistent governance experience
4. Blocks downstream Purview capabilities
5. Forces anti-patterns and workarounds
The lack of automated classification support for Microsoft Fabric workspace data represents a material service omission in Microsoft Purview, significantly limiting its effectiveness as a unified data governance platform and introducing avoidable compliance, operational, and assurance risks, particularly in regulated environments. Are there plans to improve this, and if so, what are the timescales?

Issue Using Built-in Trainable Classifiers in Auto-Labelling Policies - Purview
Over the last few days, I have run into an issue while configuring auto-labelling policies in Purview, specifically when using built-in classifiers (e.g. Budget, Agreements). These classifiers are part of the ready-to-use set. They have been working well for us until recently, but now saving an auto-labelling rule that includes any of the trainable classifiers produces a client-side error: 'Could not find rule pack associated with sensitive information type'. This is unexpected because the same classifiers (e.g. Budget) worked perfectly just a few weeks ago, and no changes have been made to roles or permissions on our side. I am still not sure why this issue is appearing now. I would appreciate help finding the root cause. Please feel free to post in the comments if you have faced the same issue using trainable classifiers in auto-labelling policies. Thanks in advance. Regards, BanuMurali

Future - Purview vs Fabric for governance
I was wondering if anyone else is struggling to determine where exactly the future of Purview lies regarding data governance. I see Microsoft pushing a ton of data governance functions into Fabric, along with promoting the catalog and governance capabilities in Fabric, but this doesn't cover the enterprise. We are working to build a more robust enterprise data governance solution, but I'm struggling to find reasons to build out Purview given all of the direction in expanding Fabric, about which I hear more news than I do around Purview. Purview doesn't even ingest metadata on tables and columns from Fabric Lakehouse and Warehouse. Hoping to gather others' experience and viewpoints.
77 Views · 0 Likes · 1 Comment

Allow Uniqueness of Glossary Terms across Governance Domains
When glossary terms are created and published, there is no check for the same term name in another governance domain. Some organizations do want to enforce term uniqueness across all domains. Would it be feasible to provide an optional switch in Unified Catalog settings to turn this on?
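Until such a switch exists, a uniqueness check could be approximated client-side. The sketch below is illustrative only: it assumes you can export (domain, term) pairs from the Unified Catalog, and `find_duplicate_terms` is a hypothetical helper, not a Purview API.

```python
# Hypothetical client-side check: flag glossary term names that appear in
# more than one governance domain. The (domain, term) pairs would come from
# a Unified Catalog export; the data shape here is illustrative only.
from collections import defaultdict

def find_duplicate_terms(terms):
    """terms: iterable of (domain, term_name) tuples.
    Returns {lowercased term name: sorted list of domains} for any term
    name that occurs in more than one governance domain."""
    by_name = defaultdict(set)
    for domain, name in terms:
        by_name[name.casefold()].add(domain)   # case-insensitive match
    return {name: sorted(domains)
            for name, domains in by_name.items() if len(domains) > 1}

sample = [("Finance", "Customer"), ("Sales", "Customer"), ("HR", "Employee")]
print(find_duplicate_terms(sample))   # {'customer': ['Finance', 'Sales']}
```
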
Events
Strong access strategy isn’t about initial setup: it’s about keeping operations fast, safe, and scalable as environments constantly change. Learn how Microsoft Security Copilot agent can be used with...
Tuesday, Mar 03, 2026, 09:00 AM PST · Online
1 Like · 39 Attendees · 1 Comment