Understand New Sentinel Pricing Model with Sentinel Data Lake Tier
Introduction to Sentinel and Its New Pricing Model

Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platform that collects, analyzes, and correlates security data from across your environment to detect threats and automate response. Traditionally, Sentinel stored all ingested data in the Analytics tier (Log Analytics workspace), which is powerful but expensive for high-volume logs.

To reduce cost and enable customers to retain all security data without compromise, Microsoft introduced a new dual-tier pricing model consisting of the Analytics tier and the Data Lake tier. The Analytics tier continues to support fast, real-time querying and analytics for core security scenarios, while the new Data Lake tier provides very low-cost storage for long-term retention and high-volume datasets. Customers can now choose where each data type lands: the Analytics tier for high-value detections and investigations, and the Data Lake tier for large or archival types. This allows organizations to significantly lower cost while still retaining all their security data for analytics, compliance, and hunting.

The flow diagram below depicts the new Sentinel pricing model.

Now let's walk through this new pricing model with the following scenarios:

Scenario 1A (Pay-As-You-Go)
Scenario 1B (Usage Commitment)
Scenario 2 (Data Lake Tier Only)

Scenario 1A (Pay-As-You-Go)

Requirement
Suppose you need to ingest 10 GB of data per day, and you must retain that data for 2 years. However, you will only frequently use, query, and analyze the data for the first 6 months.

Solution
To optimize cost, you can ingest the data into the Analytics tier and retain it there for the first 6 months, where active querying and investigation happen. After that period, the remaining 18 months of retention can be shifted to the Data Lake tier, which provides low-cost storage for compliance and auditing needs. Note that you will be charged separately for Data Lake tier querying and analytics, which is depicted as Compute (D) in the pricing flow diagram.

Pricing Flow / Notes
The first 10 GB/day ingested into the Analytics tier is free for 31 days under the Analytics logs plan.
All data ingested into the Analytics tier is automatically mirrored to the Data Lake tier at no additional ingestion or retention cost.
For the first 6 months, you pay only for Analytics tier ingestion and retention, excluding any free capacity.
For the next 18 months, you pay only for Data Lake tier retention, which is significantly cheaper.

Azure Pricing Calculator Equivalent
Assuming no data is queried or analyzed during the 18-month Data Lake tier retention period: although the Analytics tier retention is set to 6 months, the first 3 months of retention fall under the free retention limit, so retention charges apply only for the remaining 3 months of the Analytics retention window. The Azure pricing calculator will adjust accordingly.

Scenario 1B (Usage Commitment)

Now, suppose you are ingesting 100 GB per day. If you follow the same pay-as-you-go pricing model described above, your estimated cost would be approximately $15,204 per month. However, you can reduce this cost by choosing a Commitment Tier, where Analytics tier ingestion is billed at a discounted rate. Note that the discount applies only to Analytics tier ingestion; it does not apply to Analytics tier retention costs or to any Data Lake tier charges. Please refer to the pricing flow and the equivalent pricing calculator results shown below.
Monthly cost savings: $15,204 – $11,184 = $4,020 per month

Now the question is: what happens if your usage reaches 150 GB per day? Will the additional 50 GB be billed at the pay-as-you-go rate? No. The entire 150 GB/day will still be billed at the discounted rate associated with the 100 GB/day commitment tier bucket.

Azure Pricing Calculator Equivalent (100 GB/day)
Azure Pricing Calculator Equivalent (150 GB/day)

Scenario 2 (Data Lake Tier Only)

Requirement
Suppose you need to store certain audit or compliance logs amounting to 10 GB per day. These logs are not used for querying, analytics, or investigations on a regular basis, but must be retained for 2 years as per your organization's compliance or forensic policies.

Solution
Since these logs are not actively analyzed, you should avoid ingesting them into the Analytics tier, which is more expensive and optimized for active querying. Instead, send them directly to the Data Lake tier, where they can be retained cost-effectively for future audit, compliance, or forensic needs.

Pricing Flow
Because the data is ingested directly into the Data Lake tier, you pay both ingestion and retention costs there for the entire 2-year period. If, at any point in the future, you need to perform advanced analytics, querying, or search, you will incur additional compute charges based on actual usage. Even with occasional compute charges, the cost remains significantly lower than storing the same data in the Analytics tier.

Realized Savings

Scenario 1 (10 GB/day in the Analytics tier): $1,520.40 per month
Scenario 2 (10 GB/day directly into the Data Lake tier): $202.20 per month without compute, $257.20 per month with a sample compute price

Savings with no compute activity: $1,520.40 – $202.20 = $1,318.20 per month
Savings with some compute activity (sample value): $1,520.40 – $257.20 = $1,263.20 per month

Azure calculator equivalent without compute
Azure calculator equivalent with sample compute

Conclusion

The combination of the Analytics tier and the Data Lake tier in Microsoft Sentinel enables organizations to optimize cost based on how their security data is used. High-value logs that require frequent querying, real-time analytics, and investigation can be stored in the Analytics tier, which provides powerful search performance and built-in detection capabilities. At the same time, large-volume or infrequently accessed logs, such as audit, compliance, or long-term retention data, can be directed to the Data Lake tier, which offers dramatically lower storage and ingestion costs. Because all Analytics tier data is automatically mirrored to the Data Lake tier at no extra cost, customers can use the Analytics tier only for the period they actively query data, and rely on the Data Lake tier for the remaining retention. This tiered model allows different scenarios, whether active investigation, archival storage, compliance retention, or large-scale telemetry ingestion, to be handled at the most cost-effective layer, ultimately delivering substantial savings without sacrificing visibility, retention, or future analytical capabilities.
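To ground these scenarios in your own numbers, it helps to measure how much each table actually ingests before deciding what stays in the Analytics tier and what goes to the Data Lake tier. Below is a minimal KQL sketch against the standard Usage table (which reports billable volume in MB); the 30-day averaging window is an assumption you can adjust:

// Approximate average daily billable ingestion per table over the last 30 days
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize DailyGB = sum(Quantity) / 1024.0 / 30.0 by DataType
| order by DailyGB desc

Tables with high daily volume and infrequent query activity are the natural candidates for the Data Lake tier, and the resulting GB/day figures can be plugged straight into the Azure pricing calculator.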
need to create monitoring queries to track the health status of data connectors

I'm working with Microsoft Sentinel and need to create monitoring queries to track the health status of data connectors. Specifically, I want to:

Identify unhealthy or disconnected data connectors
Determine when a data connector last lost connection
Get historical connection status information

What I'm looking for:

A KQL query that can be run in the Sentinel workspace to check connector status, OR a PowerShell script/command that can retrieve this information
Ideally, something that can be automated for regular monitoring

What I've looked at so far:

The SentinelHealth table, but I'm unsure about the exact schema, connectors, etc.
Checking if there are specific tables that track connector status changes
Using Azure Resource Graph or management APIs

I've tried multiple approaches (KQL, PowerShell, Resource Graph), however I somehow cannot get the information I'm looking to obtain. Please assist with this. For example, I see this Microsoft docs page: https://learn.microsoft.com/en-us/azure/sentinel/monitor-data-connector-health#supported-data-connectors, however I would like my query to surface data such as:

Last ingestion of tables
How much data has been ingested by specific tables and connectors
What connectors are currently connected
The health of my connectors

Please help
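For reference, a minimal KQL sketch along the lines described above, assuming the documented SentinelHealth schema and the standard union pattern; column names and status values may differ slightly in a given workspace, so treat this as a starting point rather than a finished query:

// Latest health status per data connector (requires health monitoring to be enabled for the workspace)
SentinelHealth
| where TimeGenerated > ago(7d)
| where SentinelResourceType == "Data connector"
| summarize arg_max(TimeGenerated, Status, Description) by SentinelResourceName
| where Status != "Success"

// Last ingestion time per table (run separately; can be slow on large workspaces)
union withsource=TableName *
| summarize LastIngestion = max(TimeGenerated) by TableName
| order by LastIngestion asc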
Issue when ingesting Defender XDR table in Sentinel

Hello,

We are migrating our on-premises SIEM solution to Microsoft Sentinel since we have E5 licences for all our users. The integration between Defender XDR and Sentinel convinced us to make the move. We have a limited budget for Sentinel, and we found out that the Auxiliary/Data Lake feature is sufficient for verbose log sources such as network logs. We would like to retain Defender XDR data for more than 30 days (the default retention period).

We implemented the solution described in this blog post: https://jeffreyappel.nl/how-to-store-defender-xdr-data-for-years-in-sentinel-data-lake-without-expensive-ingestion-cost/

However, we are facing an issue with 2 tables: DeviceImageLoadEvents and DeviceFileCertificateInfo. The rows forwarded by Defender to Sentinel arrive empty, like this one:

We created a support ticket, but so far we haven't received any solution. If anyone has experienced this issue, we would appreciate your feedback.

Lucas
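One way to quantify the issue while waiting on the support ticket is to compare total row counts against rows where a key column is actually populated. A small hedged sketch using a standard advanced hunting column (swap in whichever column you expect to be filled):

// Are rows arriving at all, and do they carry data?
DeviceImageLoadEvents
| where TimeGenerated > ago(1d)
| summarize TotalRows = count(), RowsWithDeviceName = countif(isnotempty(DeviceName))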
Ingesting Akamai Audit Logs into Microsoft Sentinel using Azure Function Apps

Introduction

Akamai provides extensive audit logs that can be valuable for security monitoring and compliance. To integrate Akamai Audit logs with Microsoft Sentinel, we can use Azure Function Apps to retrieve logs via the Akamai EdgeGrid API and send them to a Log Analytics Workspace. In this guide, we will walk through deploying an Azure Function App that fetches Akamai Audit Logs and ingests them into Microsoft Sentinel.

Prerequisites

Before starting, ensure you have:

An active Azure subscription with Microsoft Sentinel enabled.
Akamai API credentials (EdgeGrid authentication: client_token, client_secret, and access_token).
A Log Analytics Workspace (LAW) where logs will be ingested.
An Azure Function App deployed via VS Code.
Python installed locally (VS Code is used for local development and deployment).

High-Level Architecture

The Azure Function App calls the Akamai API to fetch audit logs.
A Logic App calls the Function App, parses the logs, and sends them to Microsoft Sentinel via the Log Analytics API.
Scheduled execution ensures logs are fetched periodically.

Step 1: Create an Azure Function App

To deploy an Azure Function App via VS Code:

Install the Azure Functions extension for VS Code.

Install Azure Core Tools:

npm install -g azure-functions-core-tools@4 --unsafe-perm true

Create a Python-based Function App:

func init AkamaiLogsFunction --python
cd AkamaiLogsFunction
func new --name FetchAkamaiLogs --template "HTTP trigger" --authlevel "anonymous"

Step 2: Install Required Python Packages

In your Function App directory, install the required dependencies:

pip install requests akamai.edgegrid
pip freeze > requirements.txt

Step 3: Configure Environment Variables

Instead of hardcoding API credentials, store them in the Azure Function App settings:

Go to Azure Portal > Function App.
Navigate to Configuration > Application settings.
Add the following environment variables:

AKAMAI_CLIENT_TOKEN
AKAMAI_CLIENT_SECRET
AKAMAI_ACCESS_TOKEN
WORKSPACE_ID (Log Analytics Workspace ID)
SHARED_KEY (Log Analytics Shared Key)

Step 4: Implement the Azure Function Code

Create AkamaiLogFetcher.py with the following code:

import azure.functions as func
import logging
import requests
from akamai.edgegrid import EdgeGridAuth
from urllib.parse import urljoin
import os

app = func.FunctionApp()

# Azure Function HTTP Trigger
@app.function_name(name="AkamaiLogFetcher")
@app.route(route="fetchlogs", auth_level=func.AuthLevel.ANONYMOUS)
def fetch_logs(req: func.HttpRequest) -> func.HttpResponse:
    logging.info("Processing Akamai log fetch request...")

    # Akamai API credentials (move these to Azure App Settings for security)
    baseurl = 'https://YOURBASEHOSTURL.luna.akamaiapis.net/'
    client_token = os.getenv("AKAMAI_CLIENT_TOKEN", "xxxxxxxxxxxxxx")
    client_secret = os.getenv("AKAMAI_CLIENT_SECRET", "xxxxxxxxxxxxx")
    access_token = os.getenv("AKAMAI_ACCESS_TOKEN", "xxxxxxxxxxxxxx")

    # Initialize session with EdgeGrid authentication
    session = requests.Session()
    session.auth = EdgeGridAuth(
        client_token=client_token,
        client_secret=client_secret,
        access_token=access_token
    )

    try:
        # Call the Akamai events API
        response = session.get(urljoin(baseurl, '/events/v3/events'))
        response.raise_for_status()  # Raise an error for HTTP errors

        # Return the response as JSON
        return func.HttpResponse(
            response.text,
            mimetype="application/json",
            status_code=response.status_code
        )
    except requests.exceptions.RequestException as e:
        logging.error(f"Error fetching logs: {e}")
        return func.HttpResponse(f"Failed to fetch logs: {str(e)}", status_code=500)

Step 5: Deploy the Function to Azure

Run the following command to deploy the function:

func azure functionapp publish <YourFunctionAppName>

Step 6: Setting Up the Logic App Workflow

Create a new Logic App in Azure:
Navigate to the Azure Portal -> Logic Apps -> Create.
Choose the Consumption plan and select your preferred region.
Click Review + Create, then Create.

Add a Recurrence trigger:
Select Recurrence as the trigger.
Configure it to run every 10 minutes.

Configure the HTTP action to fetch logs from the Function App API:
Use the HTTP action in Logic Apps.
Set the method to GET.
Enter the Function App URL.
Add the required headers (content type).

Parse the JSON response:
Use the "Parse JSON" action to structure the response.
Define the schema using a sample response from the Akamai Audit Logs.

Send logs to Microsoft Sentinel:
Use the "Azure Log Analytics - Send Data" action.
Map the Akamai Audit log fields to the Log Analytics schema.
Select the appropriate custom table in Log Analytics or use CommonSecurityLog.

JSON request body for the Send Data action:

The completed Logic App will look like this:

Step 7: Testing and Validation

Run a test execution of the Logic App.
Check the Logic Apps run history to ensure successful Function App calls and data ingestion.
Verify logs in Sentinel:
Navigate to Microsoft Sentinel -> Logs.
Run a KQL query:

RadwareEvents_CL
| where TimeGenerated > ago(10m)

Summary

This guide demonstrated how to use Azure Function Apps and Logic Apps to fetch Akamai Audit Logs via API and send them to Microsoft Sentinel. The serverless approach ensures efficient log collection without requiring dedicated infrastructure.
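As an additional check, you can confirm that each 10-minute Logic App recurrence actually delivers events by bucketing the custom table per run window. A small sketch reusing the table name from Step 7 (substitute your own custom table name if it differs):

// Events per 10-minute window over the last day; empty windows suggest a failed or skipped run
RadwareEvents_CL
| where TimeGenerated > ago(1d)
| summarize Events = count() by bin(TimeGenerated, 10m)
| order by TimeGenerated desc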
Log Ingestion Delay in all Data connectors

Hi, I have integrated multiple log sources in Sentinel, and all of the log sources are only ingesting logs between 7:00 PM and 2:00 AM. I want the log ingestion to happen in real time. I have integrated Azure WAF, Syslog, Fortinet, and Windows servers. For evidence, I am attaching screenshots. I am totally clueless; if anyone can help, I will be very thankful!
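For context on issues like this, ingestion latency can be measured directly with the built-in ingestion_time() function, which shows the gap between when an event was generated and when it became queryable. A minimal sketch, shown here against CommonSecurityLog as an example table (substitute the table each affected connector writes to):

// Hourly average and maximum ingestion latency over the last day
CommonSecurityLog
| where TimeGenerated > ago(1d)
| extend IngestionLatency = ingestion_time() - TimeGenerated
| summarize AvgLatency = avg(IngestionLatency), MaxLatency = max(IngestionLatency) by bin(TimeGenerated, 1h)
| order by TimeGenerated desc

If the latency is small but events only appear during certain hours, the delay is likely at the source or forwarder rather than in the Sentinel ingestion pipeline.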
Codeless Connect Framework (CCF) Template Help

As the title suggests, I'm trying to finalize the template for a Sentinel Data Connector that utilizes the CCF. Unfortunately, I'm getting hung up on some parameter-related issues with the polling config.

The API endpoint I need to call utilizes a date range to determine the events to return and then pages within that result set. The issue is around the requirements for that date range and how CCF is processing my config. The API expects an HTTP GET verb, and the query string should contain two instances of a parameter called EventDates, among other params. For example, a valid query string may look something like:

../path/to/api/myEndpoint?EventDates=2025-08-25T15%3A46%3A36.091Z&EventDates=2025-08-25T16%3A46%3A36.091Z&PageSize=200&PageNumber=1

I've tried a few approaches in the polling config to accomplish this, but none have worked. The current config is as follows and has a bunch of extra stuff and names that aren't recognized by my API endpoint but are there simply to demonstrate different things:

"queryParameters": {
    "EventDates.Array": [
        "{_QueryWindowStartTime}",
        "{_QueryWindowEndTime}"
    ],
    "EventDates.Start": "{_QueryWindowStartTime}",
    "EventDates.End": "{_QueryWindowEndTime}",
    "EventDates.Same": "{_QueryWindowStartTime}",
    "EventDates.Same": "{_QueryWindowEndTime}",
    "Pagination.PageSize": 200
}

This yields the following URL / query string:

../path/to/api/myEndpoint?EventDates.Array=%7B_QueryWindowStartTime%7D&EventDates.Array=%7B_QueryWindowEndTime%7D&EventDates.Start=2025-08-25T15%3A46%3A36.091Z&EventDates.End=2025-08-25T16%3A46%3A36.091Z&EventDates.Same=2025-08-25T16%3A46%3A36.091Z&Pagination.PageSize=200

There are a few things to note here:

The query param that is configured as an array (EventDates.Array) does indeed show up twice in the query string and with distinct values. The issue is, of course, that CCF doesn't seem to do the variable substitution for values nested in an array the way it does for standard string attributes / values.
The query params that have distinct names (EventDates.Start and .End) both show up AND both have the actual timestamps substituted properly. Unfortunately, this doesn't match the API expectations since the names differ.
The query params that are repeated with the same name (EventDates.Same) only show up once, and it seems to use the value which comes last in the config (so the last one overwrites the rest). Again, this doesn't meet the requirements of the API since we need both.

I also tried a few other things:

Just sticking the query params and placeholders directly in the request.apiEndpoint polling config attribute. No surprise, it doesn't do the variable substitution there.
Utilizing queryParametersTemplate instead of queryParameters. https://learn.microsoft.com/en-us/azure/sentinel/data-connector-connection-rules-reference indicates this is a string parameter that expects a JSON string. I tried this with various approaches to the structure of the JSON. In ALL instances, the values here seemed to be completely ignored. All other examples from the Azure-Sentinel repository utilize the POST verb. Perhaps that attribute isn't even interpreted on a GET request???
And because some AI agents suggested it and ... sure, why not??? ... I tried queryParametersTemplate as an actual query string template, so "EventDates={_QueryWindowStartTime}&EventDates={_QueryWindowEndTime}". Just as with previous attempts to use this attribute, it was completely ignored.

I'm willing to try anything at this point, so if you have suggestions, I'll give it a shot!
Thanks for any input you may have!
Automating Microsoft Sentinel: Playbook Fundamentals

Welcome to the third entry of our blog series on automating Microsoft Sentinel. In this series, we're showing you how to automate various aspects of Microsoft Sentinel, from simple automation of Sentinel Alerts and Incidents to more complicated response scenarios with multiple moving parts.

So far, we've covered Part 1: Introduction to Automating Microsoft Sentinel, where we talked about why you would want to automate as well as an overview of the different types of automation you can do in Sentinel, and Part 2: Automation Rules, where we talked about automating the mundane away. In this post, we're going to start talking about Playbooks, which can be used for automating just about anything.

Here is a preview of what you can expect in the upcoming posts [we'll be updating this post with links to new posts as they happen]:

Part 1: Introduction to Automating Microsoft Sentinel
Part 2: Automation Rules – Automate the mundane away
Part 3: Playbooks Part I – Fundamentals [You are here]
Part 4: Playbooks Part II – Diving Deeper
Part 5: Azure Functions / Custom Code
Part 6: Capstone Project (Art of the Possible) – Putting it all together

Part 3: Playbooks - Fundamentals

Pre-Built Playbooks in Content Hub

Before we dive any deeper into Playbooks, I want to first point out that there are many pre-built playbooks available in the Content Hub. As of this writing, there are 484 playbooks available from 195 providers, covering all manner of use cases like threat intelligence ingestion, incident response, operations integrations, and more, in both first-party Microsoft and third-party security tools. Before we dive into the internals of Playbooks and start creating our own, you really should do yourself a favor and take a look at the Content Hub to see if there isn't already a Playbook doing what you want. You can also review the list of solutions at the Microsoft Sentinel GitHub page at Azure-Sentinel/Solutions at master · Azure/Azure-Sentinel

Basic Structure of a Playbook

Microsoft Sentinel Playbooks are built on Azure Logic Apps, which is a low-to-no-code workflow automation platform. We'll be diving into the details of how to create a Logic App from start to finish in the next installment of this series, but for now just know that there are two key "custom" features that Sentinel exposes for use in Playbooks: Triggers and Entities.

Triggers

The events or actions that can start a Playbook running are Triggers. These can be Incident, Alert, or Entity based.

Incident Triggers

Incident triggers fire when an incident is either created or updated in Sentinel. Incident triggers can be tied to Automation Rules (which were covered in Part 2 of this series) and can also be manually triggered by an analyst. Playbooks launched with Incident triggers receive the full incident object, including any entities it contains as well as the alerts it is comprised of.

Alert Triggers

Alert triggers are similar to Incident triggers, except they fire when an Alert is generated because an Analytic Rule returned a result. This is especially useful when you have an Alert that is not configured to create an Incident. Alert triggers can also be tied to Automation Rules.

Entity Triggers

Entity triggers are different from Incident and Alert triggers as they cannot be tied to Automation Rules. Instead, they are triggered manually by an analyst. For example, let's say that there is a user account that is part of an Incident, and during the investigation the analyst decides they want to disable that user account in Entra.
They could use an Entity Trigger to launch the Playbook, passing the Account Entity to the playbook for the account to be disabled.

Entities

We can't really talk about Entity Triggers without talking about Entities themselves. So, what is an Entity in Sentinel? Entities are data elements that identify components in an alert or incident. There are many different types of entities within Sentinel, but for Playbooks we only need to focus on five key ones:

IP
Host
Account
URL
FileHash

(For more information on Entities in general, please see: https://learn.microsoft.com/azure/sentinel/entities)

How do you use Entity Triggers?

When you are building an Analytic rule, you can identify the different Entities that it contains. These are then carried along as part of the Alert and exposed for further actions. This means that all you need to do is map the results of the Analytic Rule to the different Entity types using values returned from your query. For example, let's say we are creating an Analytic Rule to alert on a new CloudShell user being created in Azure with the following query:

let match_window = 3m;
AzureActivity
| where ResourceGroup has "cloud-shell"
| where (OperationNameValue =~ "Microsoft.Storage/storageAccounts/listKeys/action")
| where ActivityStatusValue =~ "Success"
| extend TimeKey = bin(TimeGenerated, match_window), AzureIP = CallerIpAddress
| join kind = inner
    (AzureActivity
    | where ResourceGroup has "cloud-shell"
    | where (OperationNameValue =~ "Microsoft.Storage/storageAccounts/write")
    | extend TimeKey = bin(TimeGenerated, match_window), UserIP = CallerIpAddress
    )
    on Caller, TimeKey
| summarize count() by TimeKey, Caller, ResourceGroup, SubscriptionId, TenantId, AzureIP, UserIP, HTTPRequest, Type, Properties, CategoryValue, OperationList = strcat(OperationNameValue, ' , ', OperationNameValue1)
| extend Name = tostring(split(Caller,'@',0)[0]), UPNSuffix = tostring(split(Caller,'@',1)[0])

When we use this query as the basis for an Alert, we can then use Entity Mapping under Alert Enhancement to take the relevant fields returned and map them to Entity objects. This example maps the values "Caller", "Name", and "UPNSuffix" returned by the query to the "FullName", "Name", and "UPNSuffix" fields of an Account Entity. It also maps the UserIP result to the "Address" field of an IP Entity. When the Alert fires, it will include a collection of Account and IP Entities with the necessary values in its Entities field. Now if we wanted to, we could use a Playbook based on Entity Triggers to act on the Account or IP entities.

What is a "strong" identifier versus a "weak" identifier and why is it important?

Entities have fields that identify individual instances. Strong identifiers uniquely identify an entity, while weak identifiers may not. Often, combining weak identifiers can create a strong identifier. For example, Account entities can be identified by a strong identifier like a Microsoft Entra ID (GUID) or User Principal Name (UPN). Alternatively, a combination of weak identifiers like Name and NTDomain can be used. Different data sources might identify the same user differently. When Microsoft Sentinel recognizes two entities as the same based on their identifiers, it merges them into one for consistent handling.

We'll be covering more details on using Entities and Triggers in the next article when we start building Playbooks from scratch.
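As a quick way to verify end-to-end that your entity mappings are landing on fired alerts, you can inspect the Entities column of the SecurityAlert table. A minimal KQL sketch, assuming the standard SecurityAlert schema where Entities is a JSON string; the AlertName filter is a hypothetical match for the example rule above:

// Expand the entities attached to recent alerts from the example rule
SecurityAlert
| where TimeGenerated > ago(7d)
| where AlertName has "CloudShell"
| extend EntityList = parse_json(Entities)
| mv-expand Entity = EntityList
| project TimeGenerated, AlertName, EntityType = tostring(Entity.Type), Entity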
Conclusion

In this article we talked about the fundamentals of Playbooks in Sentinel, the Content Hub which is the home of pre-built Playbooks, as well as the different types of Triggers that can be used to launch a Playbook. In the next article we'll be covering how to build a playbook from scratch and put these concepts to work.

Additional Resources

Supported triggers and actions in Microsoft Sentinel playbooks
Entities in Microsoft Sentinel
Planning your move to Microsoft Defender portal for all Microsoft Sentinel customers

Microsoft Sentinel is moving to the Microsoft Defender portal to deliver a unified, AI-powered security operations experience. Many customers have already made the move. Learn how to plan your transition and take advantage of new capabilities in the full blog post.