microsoft defender for cloud
Understand New Sentinel Pricing Model with Sentinel Data Lake Tier
Introduction to Sentinel and Its New Pricing Model

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (Log Analytics workspace), which is powerful but expensive for high-volume logs. To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a new dual-tier pricing model consisting of the Analytics tier and the Data Lake tier. The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands—analytics for high-value detections and investigations, and data lake for large or archival types—allowing organizations to significantly lower cost while still retaining all their security data for analytics, compliance, and hunting.

The following flow diagram depicts the new Sentinel pricing model.

Now let's walk through this new pricing model with the scenarios below:
- Scenario 1A (Pay-As-You-Go)
- Scenario 1B (Usage Commitment)
- Scenario 2 (Data Lake Tier Only)

Scenario 1A (Pay-As-You-Go)

Requirement
Suppose you need to ingest 10 GB of data per day, and you must retain that data for 2 years. However, you will only frequently use, query, and analyze the data for the first 6 months.

Solution
To optimize cost, you can ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. After that period, the remaining 18 months of retention can be shifted to the Data Lake tier, which provides low-cost storage for compliance and auditing needs. However, you will be charged separately for Data Lake tier querying and analytics, which is depicted as Compute (D) in the pricing flow diagram.

Pricing Flow / Notes
- The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan.
- All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost.
- For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity.
- For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.

Azure Pricing Calculator Equivalent
Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period: although the Analytics tier retention is set to 6 months, the first 3 months of retention fall under the free retention limit, so retention charges apply only for the remaining 3 months of the analytics retention window. The Azure pricing calculator will adjust accordingly.

Scenario 1B (Usage Commitment)

Now, suppose you are ingesting 100 GB per day. If you follow the same pay-as-you-go pricing model described above, your estimated cost would be approximately $15,204 per month. However, you can reduce this cost by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. Note that the discount applies only to Analytics tier ingestion—it does not apply to Analytics tier retention costs or to any Data Lake tier–related charges. Please refer to the pricing flow and the equivalent pricing calculator results shown below.
Monthly cost savings: $15,204 – $11,184 = $4,020 per month

Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the pay-as-you-go rate? No. The entire 150 GB/day will still be billed at the discounted rate associated with the 100 GB/day commitment tier bucket.

Azure Pricing Calculator Equivalent (100 GB/Day)
Azure Pricing Calculator Equivalent (150 GB/Day)

Scenario 2 (Data Lake Tier Only)

Requirement
Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years as per your organization's compliance or forensic policies.

Solution
Since these logs are not actively analyzed, you should avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.

Pricing Flow
Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period. If, at any point in the future, you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage. Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.

Realized Savings
- Scenario 1 (10 GB/day in Analytics tier): $1,520.40 per month
- Scenario 2 (10 GB/day directly into Data Lake tier): $202.20 per month without compute; $257.20 per month with a sample compute price

Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month

Azure calculator equivalent without compute
Azure calculator equivalent with sample compute

Conclusion

The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used. High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs—such as audit, compliance, or long-term retention data—can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data, and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios—active investigation, archival storage, compliance retention, or large-scale telemetry ingestion—to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.
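As a rough cross-check, the effective per-GB rates implied by the monthly totals quoted above can be back-derived (assuming an average month of about 30.42 days; these are implied effective rates computed from the article's own figures, not official list prices):

\[
\text{Analytics tier, PAYG: } \frac{\$1{,}520.40}{10 \times 30.42 \text{ GB}} \approx \$5.00 \text{ per GB}
\]
\[
\text{Analytics tier, 100 GB/day commitment: } \frac{\$11{,}184}{100 \times 30.42 \text{ GB}} \approx \$3.68 \text{ per GB}
\]
\[
\text{Data Lake tier (ingestion + retention): } \frac{\$202.20}{10 \times 30.42 \text{ GB}} \approx \$0.66 \text{ per GB}
\]

The roughly 7.5x gap between the Analytics and Data Lake effective rates is what drives the Scenario 2 savings.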
Tenant-based Microsoft Defender for Cloud Connector

As the title states, the connector is connected but no alerts show in Sentinel. Alerts are in Defender for Cloud, but they do not show in Sentinel. The data connector is connected, and the Log Analytics workspace (LAW) exports are configured to send to the LAW resource group. What's missing?
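One first diagnostic step, sketched below, is to confirm whether any Defender for Cloud alerts have reached the workspace at all; they land in the SecurityAlert table with ProductName "Azure Security Center":

```kusto
// Check whether any Defender for Cloud alerts arrived in the last 7 days.
SecurityAlert
| where TimeGenerated > ago(7d)
| where ProductName == "Azure Security Center"   // product name carried by Defender for Cloud alerts
| summarize AlertCount = count() by AlertName, AlertSeverity
| order by AlertCount desc
```

If this returns no rows, the alerts are likely not reaching the workspace (re-check the subscriptions selected in the connector); if rows do come back, the gap is more likely in the analytics or incident-creation rules.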
Microsoft 365 defender alerts not capturing fields (entities) in azure sentinel

We got an alert from Microsoft 365 Defender in Azure Sentinel ("A potentially malicious URL click was detected"). To investigate this alert, we have to check in the Microsoft 365 Defender portal. We noticed that the entities (user, host, IP) are not being captured. How can we resolve this issue? Note: this is not a custom rule.
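One way to see what the connector actually delivered, sketched below, is to expand the raw Entities JSON on the alert record in the SecurityAlert table (the AlertName filter is an assumption based on the alert title mentioned above; adjust it to your tenant):

```kusto
// Inspect the raw entity payload on the alert to see which entities (if any) arrived.
SecurityAlert
| where TimeGenerated > ago(7d)
| where AlertName has "potentially malicious URL click"
| extend EntityList = todynamic(Entities)            // Entities is a JSON string column
| mv-expand Entity = EntityList
| project TimeGenerated, AlertName, EntityType = tostring(Entity.Type), Entity
```

If the Entities column is empty here, the entities were never forwarded by the connector; if they are present but not showing on the incident, the problem is on the incident/entity-mapping side.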
How to exclude IPs & accounts from Analytic Rule, with Watchlist?

We are trying to filter out some false positives from an analytics rule called "Service accounts performing RemotePS". Using automation rules still gives a lot of false mail notifications we don't want, so we would like to try using a watchlist with the service account and IP combinations we want to exclude. Does anyone know where and what syntax we would need to exclude the items on the specific watchlist? Query:

```kusto
let InteractiveTypes = pack_array( // Declare Interactive logon type names
    'Interactive',
    'CachedInteractive',
    'Unlock',
    'RemoteInteractive',
    'CachedRemoteInteractive',
    'CachedUnlock'
);
let WhitelistedCmdlets = pack_array( // List of whitelisted commands that don't provide a lot of value
    'prompt',
    'Out-Default',
    'out-lineoutput',
    'format-default',
    'Set-StrictMode',
    'TabExpansion2'
);
let WhitelistedAccounts = pack_array('FakeWhitelistedAccount'); // List of accounts that are known to perform this activity in the environment and can be ignored
DeviceLogonEvents                                  // Get all logon events...
| where AccountName !in~ (WhitelistedAccounts)     // ...where it is not a whitelisted account...
| where ActionType == "LogonSuccess"               // ...and the logon was successful...
| where AccountName !contains "$"                  // ...and not a machine logon.
| where AccountName !has "winrm va_"               // WinRM will have pseudo account names that match this if there is an explicit permission for an admin to run the cmdlet, so assume it is good.
| extend IsInteractive=(LogonType in (InteractiveTypes)) // Determine if the logon is interactive (True=1,False=0)...
| summarize HasInteractiveLogon=max(IsInteractive) // ...then bucket and get the maximum interactive value (0 or 1)...
    by AccountName                                 // ...by the AccountNames
| where HasInteractiveLogon == 0                   // ...and filter out all accounts that had an interactive logon.
// At this point, we have a list of accounts that we believe to be service accounts.
// Now we need to find RemotePS sessions that were spawned by those accounts.
// Note that we look at all PowerShell cmdlets executed to form a 29-day baseline to evaluate the data for today.
| join kind=rightsemi (                            // Start by dropping the account name and only tracking the...
    DeviceEvents                                   // ...
    | where ActionType == 'PowerShellCommand'      // ...PowerShell commands seen...
    | where InitiatingProcessFileName =~ 'wsmprovhost.exe' // ...whose parent was wsmprovhost.exe (RemotePS Server)...
    | extend AccountName = InitiatingProcessAccountName    // ...and add an AccountName field so the join is easier
) on AccountName
// At this point, we have all of the commands that were run by service accounts.
| extend Command = tostring(extractjson('$.Command', tostring(AdditionalFields))) // Extract the actual PowerShell command that was executed
| where Command !in (WhitelistedCmdlets)           // Remove any values that match the whitelisted cmdlets
| summarize (Timestamp, ReportId)=arg_max(TimeGenerated, ReportId), // Then group all of the cmdlets and calculate the min/max times of execution...
    make_set(Command, 100000), count(), min(TimeGenerated)          // ...as well as creating a list of cmdlets run and the count...
    by AccountName, AccountDomain, DeviceName, DeviceId             // ...and have the commonality be the account, DeviceName and DeviceId.
// At this point, we have machine-account pairs along with the list of commands run as well as the first/last time the commands were run.
| order by AccountName asc                         // Order the final list by AccountName just to make it easier to go through
| extend HostName = iff(DeviceName has '.', substring(DeviceName, 0, indexof(DeviceName, '.')), DeviceName)
| extend DnsDomain = iff(DeviceName has '.', substring(DeviceName, indexof(DeviceName, '.') + 1), "")
```
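A minimal sketch of one way to do this, assuming a watchlist with alias 'RemotePS-Exclusions' whose items carry columns named ServiceAccount and IPAddress (both names are hypothetical; adjust them to your actual watchlist schema): pull the watchlist with _GetWatchlist() and drop the matching account/IP pairs with a leftanti join near the top of the rule query, right after the logon filters.

```kusto
// Hypothetical watchlist alias and column names; adjust to your schema.
let Exclusions = _GetWatchlist('RemotePS-Exclusions')
    | project ExcludedAccount = tostring(ServiceAccount),
              ExcludedIP      = tostring(IPAddress);
DeviceLogonEvents
| where ActionType == "LogonSuccess"
// leftanti keeps only rows with NO matching account/IP pair in the watchlist
| join kind=leftanti (Exclusions) on
      $left.AccountName == $right.ExcludedAccount,
      $left.RemoteIP    == $right.ExcludedIP
| take 10   // sanity check; in the real rule, continue with the rest of the query instead
```

In the rule above, this filter would replace (or sit alongside) the static WhitelistedAccounts check, so the excluded account/IP pairs never enter the service-account baseline in the first place.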
How to Set Up Sentinel Data Connectors for Kubernetes and GitHub

A guide to configuring and using Sentinel Connectors to collect logs and data from your Kubernetes clusters and GitHub CI/CD pipelines. Part 2 of a 3-part series about security monitoring of your Kubernetes clusters and CI/CD pipelines by @singhabhi and @Umesh_Nagdev. Link to Part 1. Link to Part 3.

Introduction

In part 1 of this series, we discussed the types of log sources you should consider for monitoring the security of your Kubernetes environment. This blog will demonstrate how to connect some of the critical log sources using Sentinel Data Connectors. Sentinel Data Connectors are a set of tools that enable you to collect and analyze logs and data from various sources, such as cloud services, applications, devices, and networks. Sentinel Data Connectors can help you monitor the health, performance, and security of your Kubernetes clusters and GitHub CI/CD pipelines, as well as detect and respond to threats and incidents. In this document, we will show you how to set up Sentinel Data Connectors for three types of sources: Kubernetes clusters, GitHub CI/CD pipelines, and Defender for Containers alerts together with Defender for Cloud recommendations. We will also explain how to use the connectors to view and query the collected data in Sentinel.

Security monitoring use cases

Let's first highlight some security risks you would want to monitor with Sentinel:

1. Pod Security Monitoring
Log source: Defender for Containers
Risks monitored: Detect unauthorized or suspicious pods running in the cluster. Monitor for privilege escalation attempts within pods. Track and alert on changes to pod security policies.

2. Network Security Monitoring
Log source: Defender for Containers
Risks monitored: Identify and alert on unexpected network traffic patterns. Monitor for unauthorized ingress and egress traffic. Detect and investigate potential denial-of-service (DoS) attacks.

3. Container Image Security
Log source: Defender for Cloud - Defender Cloud Security Posture Management (DCSPM)
Risks monitored: Scan container images for vulnerabilities before deployment. Monitor for unauthorized or unsigned images. Track changes to container image repositories.

4. Kubelet Activity Monitoring
Log source: Defender for Containers
Risks monitored: Monitor kubelet logs for signs of compromise or unauthorized access. Detect abnormal activities related to node management.

5. API Server Security
Log source: Defender for Containers
Risks monitored: Monitor Kubernetes API server logs for suspicious activities. Track and alert on failed authentication attempts. Detect unusual API server request patterns.

6. RBAC (Role-Based Access Control) Monitoring
Log source: AKS Diagnostics Logs, Azure AD logs, Azure Monitor Container Insights
Risks monitored: Monitor changes to RBAC policies and roles. Detect and alert on unauthorized access attempts. Track role binding changes and escalations.

7. Secrets and ConfigMap Access Monitoring
Log source: Defender for Containers
Risks monitored: Monitor for unauthorized access to Kubernetes secrets and ConfigMaps. Detect changes to sensitive configuration data. Track usage patterns of sensitive information.

8. Audit Logging
Log source: AKS Diagnostic Logs
Risks monitored: Enable and monitor Kubernetes audit logs for cluster-wide activities. Correlate audit logs to identify security events and policy violations. Regularly review audit logs for anomalies and potential threats (see the sample query after this list).
9. Compliance Monitoring
Log source: Defender for Cloud - Defender Cloud Security Posture Management (DCSPM)
Risks monitored: Ensure compliance with security standards and policies. Monitor for deviations from security best practices. Generate reports on compliance status and potential risks.

10. Container Runtime Security
Log source: Defender for Containers
Risks monitored: Monitor runtime activities of containers for abnormal behavior. Detect and alert on suspicious system calls within containers. Integrate with container runtime security tools for enhanced monitoring.

11. Incident Response and Forensics
Log source: Defender for Containers
Risks monitored: Develop and test incident response plans for Kubernetes security incidents. Monitor for indicators of compromise (IoCs) and initiate investigations in Sentinel. Collect and analyze forensic data in the event of a security incident in Sentinel.

12. Cluster Health Monitoring
Log source: AKS Diagnostic Logs
Risks monitored: Regularly monitor the overall health of the Kubernetes cluster. Detect and alert on abnormal resource consumption or performance issues. Ensure the availability of critical components and services.
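As a sketch of use cases 5 and 8 above, and assuming the AKS cluster sends kube-audit logs via diagnostic settings in Azure Diagnostics mode (so they land in the AzureDiagnostics table; resource-specific mode uses the AKSAudit table instead), failed or forbidden API server requests could be surfaced like this:

```kusto
// Surface failed/forbidden Kubernetes API server requests from AKS kube-audit logs.
// Assumes diagnostic settings in Azure Diagnostics mode (AzureDiagnostics table).
AzureDiagnostics
| where Category == "kube-audit"
| extend Audit = parse_json(log_s)                 // raw audit event JSON lives in log_s
| extend StatusCode = toint(Audit.responseStatus.code)
| where StatusCode in (401, 403)                   // 401 = unauthenticated, 403 = forbidden
| project TimeGenerated,
          Requester = tostring(Audit.user.username),
          Verb      = tostring(Audit.verb),
          URI       = tostring(Audit.requestURI),
          StatusCode
| order by TimeGenerated desc
```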
Prerequisites

Before you can set up Sentinel Data Connectors, you need to have the following:

- A Sentinel workspace. This is where you store and analyze the data collected by the connectors. Enable Sentinel on the Log Analytics workspace where you are exporting all of the log sources mentioned below. Instructions on how to set up Sentinel.
- A Kubernetes cluster. This is the source of the data for the Kubernetes cluster using diagnostic logs. You can use any Kubernetes cluster that supports the Kubernetes API, such as Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), or Amazon Elastic Kubernetes Service (EKS). We will showcase this with AKS. Instructions on how to deploy AKS.
- A GitHub account. This is the source of the code and manifests used for creating container images, which are then deployed in your Kubernetes clusters. You will need to configure DCSPM DevOps security for secure scanning of artifacts, or, if you are using a third-party scanning tool, you will need to send the scan results to Sentinel.
- A container registry. The images stored in the registry need to be scanned for vulnerabilities. You will need access to the scan logs; this can be done via Defender for Cloud DCSPM.
- A Defender for Containers subscription. This is a service that provides security and compliance monitoring for your Kubernetes clusters. You need to enable Defender for Containers on the subscription where your Kubernetes cluster is located and configure it to send alerts to the Sentinel workspace. Instructions on how to enable Defender for Containers on a subscription.
- A Defender for Cloud DCSPM subscription. This is a service that provides security and compliance recommendations for your cloud resources, such as AKS, ACR, and your Azure tenant. You need to enable Defender for Cloud DCSPM on the subscription with the AKS cluster and configure it to send recommendations to the Sentinel workspace. Instructions on how to enable DCSPM on a subscription.

How to Set Up the Kubernetes Cluster Connector

The Kubernetes Cluster connector allows you to collect logs and metrics from your Kubernetes cluster, such as cluster events, pod logs, node metrics, and container metrics. To ingest AKS logs into Sentinel, deploy the Azure Kubernetes Solution for Sentinel; then follow the steps below to enable the AKS data connector.

Configure the AKS data connector to ingest logs into Sentinel:
1. In Microsoft Sentinel, go to the "Data connectors" page.
2. Find and configure the "Azure Kubernetes Service (AKS)" connector.
3. Launch the Azure Policy wizard under configuration to enable logging.

Verify the integration: after configuration, verify that logs from your AKS cluster are flowing into Sentinel.

Create Sentinel workbooks and queries (to be elaborated in part 3): leverage Microsoft Sentinel workbooks and Kusto Query Language (KQL) queries to create visualizations and reports based on AKS logs. Customize the workbooks and queries based on your specific security and monitoring requirements.

Set up alerts and incidents (to be elaborated in part 3): configure alerts within Microsoft Sentinel based on specific events or patterns detected in AKS logs. Set up incidents and response workflows to investigate and respond to security events.

Monitor and fine-tune: regularly monitor the integration, alerts, and logs to ensure that the AKS logs are being properly processed in Microsoft Sentinel. Fine-tune your configurations based on feedback, new security requirements, or changes to your AKS environment.

How to Set Up the GitHub Connector

To ingest logs into Sentinel, deploy the Microsoft Sentinel - Continuous Threat Monitoring for GitHub solution. Enable the two connectors that are installed as part of this solution:
- GitHub Enterprise Audit Log connector: this connector collects GitHub audit logs, which track changes to repositories, users added/removed, pull request activities, etc.
- GitHub (using Webhooks) connector: this connector ingests scan data using a built-in data connector for GitHub events. It can pull events related to code scanning alerts, repository vulnerability alerts (via Dependabot), and secret scanning alerts.

How to Set Up the Defender for Containers Alerts and Defender for Cloud Connector

Sentinel has a built-in data connector to ingest Defender for Cloud alerts and recommendations. You can find the details at https://learn.microsoft.com/en-us/azure/sentinel/connect-defender-for-cloud#connect-to-microsoft-defender-for-cloud

Setting up the AKS data connector and additional logging for Sentinel

Set up the diagnostic settings for Azure Kubernetes Service to send the events to a Sentinel-enabled Log Analytics workspace: https://learn.microsoft.com/en-us/azure/aks/monitor-aks#aks-control-planeresource-logs. In our scenario we are using the following logs.

In addition, you will also need to enable Container Insights to get pod-level data so you can run search queries for risks related to pod specifics, like pods running in the default namespace. You can refer to this resource to enable Container Insights: https://learn.microsoft.com/en-us/azure/azure-monitor/containers/kubernetes-monitoring-enable?tabs=cli#enable-container-insights. The logs will go to ContainerLogV2 in the Log Analytics workspace: https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-logs-schema#enable-the-containerlogv2-schema. The following picture shows the ContainerLogV2 schema as an example.

You will need to go to the Sentinel Content Hub and enable the following. This will give you a workbook, several hunting queries, and a data connector to ingest AKS data. Your AKS cluster will populate the data in the following tables, which we will use to write custom search queries in the section below.
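For instance, a minimal sketch of the "pods running in the default namespace" check mentioned above, assuming Container Insights is enabled and populating the KubePodInventory table:

```kusto
// List pods currently running in the 'default' namespace (a common hygiene risk).
// Assumes Container Insights is enabled and populating KubePodInventory.
KubePodInventory
| where TimeGenerated > ago(1d)
| where Namespace == "default"
| summarize arg_max(TimeGenerated, PodStatus, ControllerKind) by ClusterName, Name
| project TimeGenerated, ClusterName, PodName = Name, PodStatus, ControllerKind
```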
DataConnector throws error and can't be deleted

The data connector (from the "Microsoft Defender for Cloud" solution) "Tenant-based Microsoft Defender for Cloud (Preview)" can no longer be found and just produces errors. The problem is that the connector can't be deleted, because it is not listed among the data connectors! There is an error message every time I open Sentinel, and the connector can't be deleted because it is not displayed. Is there another way to delete this connector?

---
The following codeless connectors are not valid, and will not be displayed: Connector display name: Tenant-based Microsoft Defender for Cloud (Preview ), ConnectorID: ..., ConnectorKind: StaticUI, Error: Error: staticConnectorModel for microsoftdefenderforcloudtenantbased was not found in static connectors list. Try to update the solution, if there is an update available.
---
Microsoft Defender XDR / Defender for Endpoint data connectors inconsistent failures

Hello,

We are deploying our SOC (Sentinel) environments via Bicep. Now the Defender XDR (MicrosoftThreatProtection) and Defender for Endpoint (MicrosoftDefenderAdvancedThreatProtection) data connectors are failing to deploy inconsistently. It seems to be a known issue, based on the following posts:
- https://github.com/Azure/SimuLand/issues/23
- https://techcommunity.microsoft.com/t5/microsoft-sentinel/quot-missing-consent-invalid-license-quot-defender-for-endpoint/m-p/3027212
- https://github.com/Azure/Azure-Sentinel/issues/5007

Next to this issue, I see almost no development on the data connectors API. Is there any news on how to enable data connectors in an automated way going forward, since everything seems to be moving to the Content Hub? It is hard to find any docs about how to deploy this, for example via Bicep.

I also have a question regarding the 'Tenant-based Microsoft Defender for Cloud (Preview)' data connector. We deploy this now via the GenericUI data connector kind, but this has no option to enable it via automation. Same as the question in the previous paragraph: how would this be made possible?