Microsoft Sentinel data lake FAQ
On September 30, 2025, Microsoft announced the general availability of the Microsoft Sentinel data lake, designed to centralize and retain massive volumes of security data in open formats like delta parquet. By decoupling storage from compute, the data lake supports flexible querying while offering unified data management and cost-effective retention. The Sentinel data lake is a game changer for security teams, serving as the foundational layer for agentic defense, deeper security insights, and graph-based enrichment. In this blog we offer answers to many of the questions we've heard from our customers and partners.

General questions

1. What is the Microsoft Sentinel data lake?

Microsoft has expanded its industry-leading SIEM solution, Microsoft Sentinel, to include a unified security data lake designed to help optimize costs, simplify data management, and accelerate the adoption of AI in security operations. This modern data lake serves as the foundation for the Microsoft Sentinel platform. It has a cloud-native architecture and is purpose-built for security, bringing together all security data for greater visibility, deeper security analysis, and contextual awareness. It provides affordable, long-term retention, allowing organizations to maintain robust security while effectively managing budgetary requirements.

2. What are the benefits of Sentinel data lake?

Microsoft Sentinel data lake is designed for flexible analytics, cost management, and deeper security insights:

- It centralizes security data in an open format like delta parquet for easy access. This unified view enhances threat detection, investigation, and response across hybrid and multi-cloud environments.
- It introduces a disaggregated storage and compute pricing model, allowing customers to store massive volumes of security data at a fraction of the cost compared to traditional SIEM solutions.
- It allows multiple analytics engines like Kusto, Spark, and ML to run on a single data copy, simplifying management, reducing costs, and supporting deeper security analysis.
- It integrates with GitHub Copilot and VS Code, empowering SOC teams to automate enrichment, anomaly detection, and forensic analysis.
- It supports AI agents via the MCP server, allowing tools like GitHub Copilot to query and automate security tasks. The MCP server layer brings intelligence to the data, offering semantic search, query tools, and custom analysis capabilities that make it easier to extract insights and automate workflows.
- Customers also benefit from streamlined onboarding, intuitive table management, and scalable multi-tenant support, making it ideal for MSSPs and large enterprises.

The Sentinel data lake is purpose-built for security workloads, ensuring that processes from ingestion to analytics meet cybersecurity requirements.

3. Is the Sentinel data lake generally available?

Yes. The Sentinel data lake is generally available (GA) starting September 30, 2025. To learn more, see the GA announcement blog.

4. What happens to Microsoft Sentinel SIEM?

Microsoft is expanding Sentinel into an AI-powered, end-to-end security platform that includes SIEM and new platform capabilities: the security data lake, graph-powered analytics, and the MCP server. SIEM remains a core component and will be actively developed and supported.

Getting started

1. What are the prerequisites for Sentinel data lake?

To get started, connect your Sentinel workspace to Microsoft Defender prior to onboarding to Sentinel data lake.
Once in the Defender experience, see the data lake onboarding documentation for next steps.

Note: Sentinel is moving to the Microsoft Defender portal, and the Sentinel Azure portal will be retired by July 2026.

2. I am a Sentinel-only customer, and not a Defender customer. Can I use the Sentinel data lake?

Yes. You must connect Sentinel to the Defender experience before onboarding to the Sentinel data lake. Microsoft Sentinel is generally available in the Microsoft Defender portal, with or without Microsoft Defender XDR or an E5 license. If you have created a Log Analytics workspace, enabled it for Sentinel, and have the right Microsoft Entra roles (e.g., Global Administrator + Subscription Owner, or Security Administrator + Sentinel Contributor), you can enable Sentinel in the Defender portal. For more details on how to connect Sentinel to Defender, see: Microsoft Sentinel in the Microsoft Defender portal.

3. In what regions is Sentinel data lake available?

For supported regions, see: Geographical availability and data residency in Microsoft Sentinel | Microsoft Learn.

4. Is there an expected release date for Microsoft Sentinel data lake in Government clouds?

While the exact date is not yet finalized, we anticipate support for these clouds soon.

5. How will URBAC and Entra RBAC work together to manage the data lake given there is no centralized model?

Entra RBAC provides broad access to the data lake (URBAC maps the right permissions to specific Entra role holders: GA/SA/SO/GR/SR). URBAC will become a centralized pane for configuring non-global delegated access to the data lake. Today, you will use this for the "default data lake" workspace. In the future, this will be enabled for non-default Sentinel workspaces as well, meaning all workspaces in the data lake can be managed here for data lake RBAC requirements. Azure RBAC on the Log Analytics (LA) workspace in the data lake is respected through URBAC as well today. If you already hold a built-in role like Log Analytics Reader, you will be able to run interactive queries over the tables in that workspace; if you hold Log Analytics Contributor, you can read and manage table data. For more details, see: Roles and permissions in the Microsoft Sentinel platform | Microsoft Learn.

Data ingestion and storage

1. How do I ingest data into the Sentinel data lake?

To ingest data into the Sentinel data lake, you can use existing Sentinel data connectors or custom connectors to bring data from Microsoft and third-party sources. Data can be ingested into the analytics tier and/or the data lake tier. Data ingested into the analytics tier is automatically mirrored to the lake, while lake-only ingestion is available for select tables. Data retention is configured in table management.

Note: Certain tables do not support data lake-only ingestion via either API or data connector UI. See Custom log tables for more information.

2. What is Microsoft's guidance on when to use the analytics tier vs. the data lake tier?

Sentinel data lake offers flexible, built-in data tiering (analytics and data lake tiers) to effectively meet diverse business use cases and achieve cost-optimization goals.

Analytics tier:
- Ideal for high-performance, real-time, end-to-end detections, enrichments, investigation, and interactive dashboards. Typically, high-fidelity data from EDRs, email gateways, identity, SaaS and cloud logs, and threat intelligence (TI) should be ingested into the analytics tier.
- Data in the analytics tier is best monitored proactively with scheduled alerts and scheduled analytics to enable security detections.
- Data in this tier is retained at no cost for up to 90 days by default, extendable to 2 years.
- A copy of the data in this tier is automatically available in the data lake tier at no extra cost, ensuring a unified copy of security data for both tiers.

Data lake tier:
- Designed for cost-effective, long-term storage. High-volume logs like NetFlow logs, TLS/SSL certificate logs, firewall logs, and proxy logs are best suited for the data lake tier.
- Customers can use these logs for historical analysis, compliance and auditing, incident response (IR), forensics over historical data, building tenant baselines, and TI matching, and then promote the resulting insights into the analytics tier.
- Customers can run full Kusto queries, Spark notebooks, and scheduled jobs over a single copy of their data in the data lake. Customers can also search, enrich, and restore data from the data lake tier to the analytics tier for full analytics.

For more details, see the documentation.

3. What does it mean that a copy of all new analytics tier data will be available in the data lake?

When Sentinel data lake is enabled, a copy of all new data ingested into the analytics tier is automatically duplicated into the data lake tier. This means customers don't need to manually configure or manage this process: every new log or telemetry added to the analytics tier becomes instantly available in the data lake. This allows security teams to run advanced analytics, historical investigations, and machine learning models on a single, unified copy of data in the lake, while still using the analytics tier for real-time SOC workflows. It's a seamless way to support both operational and long-term use cases without duplicating effort or cost.

4. Is there any cost for retention in the analytics tier?

You get 90 days of analytics retention free; simply set analytics retention to 90 days or less. For the total retention setting, only the mirrored portion that overlaps with the free analytics retention is free in the data lake. Retaining data in the lake beyond the analytics retention period incurs additional storage costs. See the documentation for more details: Manage data tiers and retention in Microsoft Sentinel | Microsoft Learn.

5. What is the guidance for Microsoft Sentinel Basic and Auxiliary Logs customers?

If you previously enabled the Basic or Auxiliary Logs plan in Sentinel:

- You can view Basic Logs in the Defender portal but manage them from the Log Analytics workspace. To manage them in the Defender portal, you must change the plan from Basic to Analytics.
- Existing Auxiliary Logs tables will be available in the data lake tier for use once the Sentinel data lake is enabled. Prior to the availability of Sentinel data lake, Auxiliary Logs provided a long-term retention solution for Sentinel SIEM. Once the data lake is enabled, Auxiliary Logs tables will be available in the Sentinel data lake for use with the data lake experiences. Billing for Auxiliary Logs will switch to Sentinel data lake meters.

Microsoft Sentinel customers are recommended to start planning their data management strategy with the data lake. While Basic and Auxiliary Logs are still available, they are not being enhanced further; please plan on onboarding your security data to the Sentinel data lake. Azure Monitor customers can continue to use Basic and Auxiliary Logs for observability scenarios.

6. What happens to customers that already have Archive logs enabled?
If a customer has already configured tables for Archive retention, those settings will be inherited by the Sentinel data lake and will not change. Data in the Archive logs will continue to be accessible through Sentinel search and restore experiences. Mirrored data (in the data lake) will be accessible via the lake explorer and notebook jobs.

Example: If a customer has 12 months of total retention enabled on a table, then 2 months after enabling ingestion into the Sentinel data lake, the customer will still have access to 12 months of archived data (through Sentinel search and restore experiences), but access to only 2 months of data in the data lake (since the data lake was enabled).

Key considerations for customers that currently have Archive logs enabled:

- The existing archive will remain, with new data ingested into the data lake going forward; previously stored archive data will not be backfilled into the lake.
- Archive logs will continue to be accessible via the Search and Restore tab under Sentinel.
- If analytics and data lake mode are enabled on a table, which is the default setting for analytics tables when Sentinel data lake is enabled, data will continue to be ingested into the Sentinel data lake and archive going forward. There will only be one retention billing meter going forward. Archive will continue to be accessible via Search and Restore.
- If Sentinel data lake-only mode is enabled on a table, new data will be ingested only into the data lake; any data that's not already in the Sentinel data lake won't be migrated or backfilled. Data that was previously ingested under the archive plan will be accessible via Search and Restore.

7. What is the guidance for customers using Azure Data Explorer (ADX) alongside Microsoft Sentinel?

Some customers might have set up an ADX cluster to augment their Sentinel deployment. Customers can choose to continue using that setup and gradually migrate to Sentinel data lake for new data to receive the benefits of a fully managed data lake. For all new implementations, it is recommended to use the Sentinel data lake.

8. What happens to the Defender XDR data after enabling Sentinel data lake?

By default, Defender XDR retains threat hunting data in the XDR default tier, which includes 30 days of analytics retention included in the XDR license. You can extend the table retention period for supported Defender XDR tables beyond 30 days. For more information, see Manage XDR data in Microsoft Sentinel.

Note: Today you can't ingest XDR tables directly to the data lake tier without ingesting into the analytics tier first.

9. Are there any special considerations for XDR tables?

Yes. XDR tables are unique in that they are available for querying in advanced hunting by default for 30 days. To retain data beyond this period, an explicit change to the retention setting is required, either by extending the analytics tier retention or the total retention period. A list of XDR advanced hunting tables supported by Sentinel is documented here: Connect Microsoft Defender XDR data to Microsoft Sentinel | Microsoft Learn.

KQL queries and jobs

1. Are KQL and notebooks supported over the Sentinel data lake?

Yes, via the data lake KQL query experience along with a fully managed notebook experience that enables Spark-based big data analytics over a single copy of all your security data. Customers can run queries across any time range of data in their Sentinel data lake. In the future, this will be extended to enable SQL queries over the lake as well.
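To illustrate the kind of query this enables, here is a minimal sketch of a lake-tier hunt that matches long-retention firewall telemetry against threat intelligence indicators. It assumes the standard CommonSecurityLog and ThreatIntelligenceIndicator schemas and an illustrative 180-day lookback; adjust the tables, columns, and time range to whatever you actually retain in the data lake tier.

```kql
// Sketch: TI matching over long-retention firewall data in the data lake tier.
// Assumes the standard CommonSecurityLog and ThreatIntelligenceIndicator schemas;
// the 180-day window is illustrative and depends on your configured retention.
let indicators =
    ThreatIntelligenceIndicator
    | where isnotempty(NetworkIP)
    | distinct NetworkIP;
CommonSecurityLog
| where TimeGenerated > ago(180d)
| where DestinationIP in (indicators)
| summarize Hits = count(), FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated)
    by SourceIP, DestinationIP
| order by Hits desc
```

Insights like these can then be promoted into the analytics tier (for example via a KQL job) for alerting and dashboards, as described in the tiering guidance above.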
2. Why are there two different places to run KQL queries in the Sentinel experience?

Consolidating the advanced hunting and KQL Explorer user interfaces is on the roadmap. Security analysts will benefit from a unified query experience across both the analytics and data lake tiers.

3. Where is the output from KQL jobs stored?

KQL job output is written into an existing or new analytics tier table.

4. Is it possible to run KQL queries on multiple data lake tables?

Yes, you can run KQL interactive queries and jobs using operators like join or union.

5. Can KQL queries (either interactive or via KQL jobs) join data across multiple workspaces?

Yes, security teams can run multi-workspace KQL queries for broader threat correlation.
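As a concrete illustration of that last point, the sketch below correlates sign-ins across two workspaces using the Log Analytics workspace() function. The workspace names are placeholders, and whether you reference workspaces this way or select them in the lake query experience may differ in your environment, so treat this as a pattern rather than exact syntax.

```kql
// Sketch: cross-workspace correlation with union and workspace().
// "soc-prod-workspace" and "soc-emea-workspace" are hypothetical names.
union
    workspace("soc-prod-workspace").SigninLogs,
    workspace("soc-emea-workspace").SigninLogs
| where TimeGenerated > ago(30d)
| summarize SignIns = count(), DistinctIPs = dcount(IPAddress)
    by UserPrincipalName
| order by DistinctIPs desc
```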
Pricing and billing

1. How does a customer pay for Sentinel data lake?

Sentinel data lake is a consumption-based service with a disaggregated storage and compute business model. Customers continue to pay for ingestion, and set up billing as part of their onboarding for storage and analytics over data in the data lake (e.g., queries, KQL or notebook jobs). See the Sentinel pricing page for more details.

2. What are the pricing components for Sentinel data lake?

Sentinel data lake offers a flexible pricing model designed to optimize security coverage and costs. For specific meter definitions, see the documentation.

3. What are the billing updates at GA?

- We are enabling data compression, billed with a simple and uniform data compression rate of 6:1 across all data sources, applicable only to data lake storage.
- Starting October 1, 2025, data storage billing begins on the first day data is stored.
- To support ingestion and standardization of diverse data sources, we are introducing a new Data Processing feature that applies a $0.10 per GB charge for all uncompressed data ingested into the data lake for tables configured for data lake-only retention. (It does not apply to tables configured for both analytics and data lake tier retention.)

4. How is retention billed for tables that use data lake-only ingestion and retention?

During the public preview, data lake-only tables included the first 30 days of retention at no cost. At GA, storage costs will be billed. In addition, when retention billing switches to using compressed data size (instead of ingested size), charges will apply for the entire retention period. Because billing will be based on compressed data size, customers can expect significant savings on storage costs.

5. Does the "Data processing" meter apply to analytics tier data duplicated in the data lake?

No.

6. What happens to billing for customers that activate Sentinel data lake on a table with Archive logs enabled?

Customers will automatically be billed using the data lake storage meter. Note: This means that customers will be charged using the 6:1 compression rate for data lake retention.

7. How do I control my Sentinel data lake costs?

Sentinel is billed based on consumption, and prices vary based on usage. An important tool in managing the majority of the cost is the use of analytics "Commitment Tiers". The data lake complements this strategy for higher-volume data like network and firewall data to reduce analytics tier costs. Use the Azure pricing calculator and the Sentinel pricing page to estimate costs and understand pricing.

8. How do I manage Sentinel data lake costs?

We are introducing a new cost management experience (public preview) to help customers with cost predictability, billing transparency, and operational efficiency. These in-product reports provide customers with insights into usage trends over time, enabling them to identify cost drivers and optimize data retention and processing strategies. Customers will also be able to set usage-based alerts on specific meters to monitor and control costs. For example, you can receive alerts when query or notebook usage passes set limits, helping avoid unexpected expenses and manage budgets. See the documentation to learn more.

9. If I'm an Auxiliary Logs customer, how will onboarding to the Sentinel data lake affect my billing?

Once a workspace is onboarded to Sentinel data lake, all Auxiliary Logs meters will be replaced by new data lake meters.

Thank you

Thank you to our customers and partners for your continued trust and collaboration. Your feedback drives our innovation, and we're excited to keep evolving Microsoft Sentinel to meet your security needs. If you have any questions, please don't hesitate to reach out; we're here to support you every step of the way.
Automate Security Workflows in Microsoft Sentinel with BlinkOps

Security teams are under increasing pressure to respond faster to threats while managing growing complexity across their environments. Microsoft Sentinel's elevated integration with BlinkOps helps address this challenge by enabling AI-powered, no-code automation that simplifies and accelerates security operations.

Introducing BlinkOps for Microsoft Sentinel

BlinkOps is a no-code security automation platform designed for security and platform operations teams. It allows users to build and scale workflows using natural language prompts and a library of over 30,000 pre-built actions. With BlinkOps, teams can automate incident response, compliance, and operational tasks without writing a single line of code.

Now, with an enhanced integration with Microsoft Sentinel, BlinkOps enables customers to generate automated playbooks triggered by Sentinel alerts and incidents. This integration helps streamline threat response, reduce mean time to respond (MTTR), and improve operational efficiency.

Why BlinkOps?

Microsoft Sentinel customers may leverage Microsoft Sentinel's SOAR capabilities through Logic Apps today. BlinkOps adds a new set of capabilities for Microsoft Sentinel-powered SOC teams, including:

- AI-generated workflows: Create automation using natural language prompts.
- Pre-built content: Access a rich library of templates tailored to Sentinel use cases.
- No-code experience: Empower security analysts to build and manage workflows without engineering support.
- Scalability: Deploy automation across multiple tenants and environments with ease.

Key Use Cases

The BlinkOps connector for Microsoft Sentinel supports several high-impact scenarios:

- Automated response to alerts and incidents: Trigger sophisticated BlinkOps process workflows based on Sentinel signals to ensure swift, consistent action. Incorporate humans in interactive workflows so that automation is complemented with human judgment and decisions.
- Template-driven playbooks: Leverage curated templates for common SOC tasks.

Examples

Consider this scenario: a SOC team wants an automation to help manage the response to phishing alerts in Microsoft Sentinel. The SOC team starts in BlinkOps by prompting the system to create a workflow. In this case a simple prompt is all it takes: "I would like an automation to respond to phishing incidents in Microsoft Sentinel. We use Microsoft Security tooling (Teams, Defender, Entra etc.)"

[Figure 1: BlinkOps builder prompt]

BlinkOps then builds out a workflow for handling the phishing alert in a few seconds.

[Figure 2: Building the workflow]

A straightforward six-step set of actions is generated:

[Figure 3: Phishing workflow]

Then, if the SOC team wants to refine or edit a specific workflow step, they can also use the BlinkOps builder AI to update individual steps; in this case, drafting the message to send to the broader security team.

[Figure 4: Editing an action with the builder]

Getting Started

To get started using BlinkOps and Microsoft Sentinel:

1. Visit https://www.blinkops.com/ to learn more about the platform.
2. Explore the BlinkOps connector in the Microsoft Sentinel Content Hub.
3. Use natural language to create your first workflow and start automating your SOC operations.
Multi-trigger Playbooks & Renamed Triggers

Hello Sentinel enthusiasts,

In some cases, deploying a playbook with multiple triggers is a much easier solution than having 9 playbooks which do the same thing. In the specific example I'm going for, I have developed a playbook which requires the user to change their password within a certain amount of time. We want the playbook to have three triggers - incident, entity and HTTP (for external orchestration platforms) - and we also want the option for it to run instantly, or within a configurable number of days/hours (1 day, 7 days). Done natively, this would mean 9 playbooks in total, which seems like a lot for a simple action.

I initially developed these playbooks with all three triggers in a base template which is deployed using Bicep - success. But what do you know, at first I could only see the playbook in the automation playbook list saying "Sentinel Action", and I couldn't trigger it from an actual incident or entity. Turns out this was because I had given the trigger a different display name than the default. This specific case seems a bit odd to me because the underlying data is the same; nonetheless, not a huge issue - change the display name back to the default and voila.

My next surprise was when I realised that the Sentinel UI will only pick up the first trigger to run a playbook. I.e. if I define an entity trigger first and then an incident trigger, I cannot see it appear as a trigger for an incident, and vice versa. So I set off on a mission and was able to create a Chromium extension which modifies the resource response to duplicate a playbook once for every trigger it has (only in the Azure portal PWA), and what do you know, everything works perfectly as if it was fully supported.

It would be great if these UI bugs could be fixed, as they seem pretty trivial and don't appear to require a major change - it is solely a frontend bug, especially given that I could create an extension which resolves the issue. Obviously, this is not an ideal scenario in production. Garnering some support to have this rectified would be great, and it would also be cool to hear people's opinions on this.

~Seb
Error when running playbook Block-AADUser-Alert

Hello,

I have a personal account and I am trying Microsoft Sentinel. My scenario is: when a user account (not admin) changes its authentication method, an alert is triggered and then I run the built-in playbook Block-AADUser-Alert to disable this account. I get the following error when running this playbook:

```json
{
  "error": {
    "code": "Request_ResourceNotFound",
    "message": "Resource '[\"leloc@hoahung353.onmicrosoft.com\"]' does not exist or one of its queried reference-property objects are not present.",
    "innerError": {
      "date": "2022-05-13T03:06:46",
      "request-id": "84bab933-eb79-4352-9bdf-e6d5444a1798",
      "client-request-id": "84bab933-eb79-4352-9bdf-e6d5444a1798"
    }
  }
}
```

I have tried to assign all required permissions (User.Read.All, User.ReadWrite.All, Directory.Read.All, Directory.ReadWrite.All), authorized the API connection, etc., but this has not solved the issue. Would anyone help advise how to solve this? Is it because of the personal account?

Best Regards,
An
Microsoft Sentinel - Alert suppression

Hello Tech Community,

Working with Microsoft Sentinel, we sometimes have to suppress alerts based on information such as UPN, IP, hostname, and other fields. Let's imagine we need to suppress 20 combinations of UPN, IP and hostname. Sometimes the suppression fields should be empty or wildcarded (meaning any value in the log should be suppressed).

What is the best way to suppress alerts?

- Automation rules - seem not flexible and work only with entities.
- Watchlist with "join" or "where" operator - a good option, but doesn't support * (wildcard); see the sketch below.
- Hardcoded in KQL - not flexible, especially when you have SDLC processes.

Please share your ideas and advice.
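For the watchlist option, one way to approximate wildcard support is to treat an empty cell or "*" as "match anything" and evaluate the combinations in KQL. A rough sketch follows, assuming a watchlist named SuppressList with Upn, Ip and Hostname columns and a detection query that outputs UserPrincipalName, IPAddress and HostName; all of these names are placeholders.

```kql
// Sketch: watchlist-driven suppression where an empty cell or "*" means "any value".
// Watchlist name and column names (SuppressList, Upn, Ip, Hostname) are hypothetical.
let suppress = _GetWatchlist('SuppressList')
    | project SuppressUpn = tostring(Upn),
              SuppressIp = tostring(Ip),
              SuppressHost = tostring(Hostname);
let detections = SigninLogs                       // placeholder for the real detection logic
    | where TimeGenerated > ago(1h)
    | project TimeGenerated, UserPrincipalName, IPAddress,
              HostName = tostring(DeviceDetail.displayName);
detections
| extend JoinKey = 1
| join kind=leftouter (suppress | extend JoinKey = 1) on JoinKey
| extend IsMatch = iif(
        (isempty(SuppressUpn)  or SuppressUpn  == "*" or SuppressUpn  =~ UserPrincipalName)
    and (isempty(SuppressIp)   or SuppressIp   == "*" or SuppressIp   == IPAddress)
    and (isempty(SuppressHost) or SuppressHost == "*" or SuppressHost =~ HostName), 1, 0)
| summarize Suppressed = max(IsMatch) by TimeGenerated, UserPrincipalName, IPAddress, HostName
| where Suppressed == 0
```

The cross join via a constant key is fine for a small suppression list; an automation rule that closes matching incidents after creation is the other common compromise when the detection logic itself has to stay generic.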
Automating Microsoft Sentinel: Part 2: Automate the mundane away

Welcome to the second entry of our blog series on automating Microsoft Sentinel. In this series, we're showing you how to automate various aspects of Microsoft Sentinel, from simple automation of Sentinel alerts and incidents to more complicated response scenarios with multiple moving parts.

So far, we've covered Part 1: Introduction to Automating Microsoft Sentinel, where we talked about why you would want to automate as well as an overview of the different types of automation you can do in Sentinel. Here is a preview of what you can expect in the upcoming posts [we'll be updating this post with links to new posts as they happen]:

- Part 1: Introduction to Automating Microsoft Sentinel
- Part 2: Automation Rules [You are here] – Automate the mundane away
- Part 3: Playbooks Part I – Fundamentals
- Part 4: Playbooks Part II – Diving Deeper
- Part 5: Azure Functions / Custom Code
- Part 6: Capstone Project (Art of the Possible) – Putting it all together

Part 2: Automation Rules – Automate the mundane away

Automation rules can be used to automate Sentinel itself. For example, let's say there is a group of machines that have been classified as business critical, and if there is an alert related to those machines, the incident needs to be assigned to a Tier 3 response team and the severity of the alert needs to be raised to at least "High". Using an automation rule, you can take one analytic rule, apply it to the entire enterprise, and then have an automation rule that only applies to those business-critical systems to make those changes. That way, only the items that need that immediate escalation receive it, quickly and efficiently.

Automation Rules In Depth

Now that we know what automation rules are, let's dive into them a bit deeper to better understand how to configure them and how they work.

Creating Automation Rules

There are three main places where we can create an automation rule:

1) Navigating to Automation under the left menu
2) In an existing incident via the "Actions" button
3) When writing an analytic rule, under the "Automated response" tab

The process for each is generally the same, except for the incident route, and we'll break that down more in a bit.

When we create an automation rule, we need to give the rule a name. It should be descriptive and indicative of what the rule is going to do and what conditions it applies to. For example, a rule that automatically resolves an incident based on a known false positive condition on a server named SRV02021 could be titled "Automatically Close Incident When Affected Machine is SRV02021", but really it's up to you to decide what you want to name your rules.

Trigger

The next thing we need to define for our automation rule is the trigger. Triggers are what cause the automation rule to begin running. They can fire when an incident is created or updated, or when an alert is created. Of the two options (incident based or alert based), it's preferred to use incident triggers as they're potentially the aggregation of multiple alerts, and the odds are that you're going to want to take the same automation steps for all of the alerts since they're all related. It's better to reserve alert-based triggers for scenarios where an analytic rule is firing an alert but is set to not create an incident.

Conditions

Conditions are, well, the conditions to which this rule applies. There are two conditions that are always present: the incident provider and the analytic rule name. You can choose multiple criteria and steps.
For example, you could have it apply to all incident providers and all rules (as shown in the picture above), or only a specific provider and all rules, or not apply to a particular provider, and so on. You can also add additional conditions that will either include or exclude the rule from running. When you create a new condition, you can build it out from multiple properties, ranging from information about the incident all the way to information about the entities that are tagged in the incident.

Remember our earlier automation rule title where we said this was a false positive about a server named SRV02021? This is where we make the rule match that title by setting the condition to only fire this automation if the entity has a host name of "SRV02021".

By combining AND and OR group clauses with the built-in conditional filters, you can make the rule as specific as you need it to be.

You might be thinking to yourself that while there is a lot of power in creating these conditions, it might be a bit onerous to create them for each condition. Recall earlier where I said the process for the three ways of creating automation rules was generally the same except for the incident Actions route? Well, that route will pre-fill variables for the selected incident. For example, in the image below, the rule automatically took the rule name, the rules it applies to, as well as the entities that were mapped in the incident. You can add, remove, or modify any of the variables that the process auto-maps.

NOTE: In the new Unified Security Operations Platform (Defender XDR + Sentinel) there is some new best practice guidance:

- If you've created an automation using "Title", use "Analytic rule name" instead. The Title value could change with Defender's correlation engine.
- The option for "Incident provider" has been removed and replaced by "Alert product names" to filter based on the alert provider.

Actions

Now that we've tuned our automation rule to only fire for the situations we want, we can set up the actions we want the rule to execute. Clicking the "Actions" drop-down list will show you the options you can choose.

When you select an option, the user interface will change to match your selection. For example, if I choose to change the status of the incident, the UX will update to show me a drop-down menu with options about which status I would like to set. If I choose other options (run playbook, change severity, assign owner, add tags, add task), the UX will change to reflect my option.

You can assign multiple actions within one automation rule by clicking the "Add action" button and selecting the next action you want the system to take. For example, you might want to assign an incident to a particular user or group, change its severity to "High", and then set the status to Active.

Notably, when you create an automation rule from an incident, Sentinel automatically sets a default action of Change Status. It sets the automation up to set the status to "Closed" with a classification of "Benign Positive – Suspicious but expected". This default action can be deleted, and you can then set up your own action.

In a future episode of this blog we're going to be talking about playbooks in detail, but for now just know that this is the place where you can assign a playbook to your automation rules.
There is one other option in the Actions menu that I wanted to specifically talk about in this blog post, though: incident tasks.

Incident Tasks

Like most cybersecurity teams, you probably have a runbook of the different tasks or steps that your analysts and responders should take for different situations. By using incident tasks, you can now embed those runbook steps directly in the incident. Incident tasks can be as lightweight or as detailed as you need them to be and can include rich formatting, links to external content, images, etc. When an incident with tasks is generated, the SOC team will see these tasks attached as part of the incident and can then take the defined actions and check off that they've been completed.

Rule Lifetime and Order

There is one last section of automation rules that we need to define before we can start automating the mundane away: when the rule should expire and in what order it should run compared to other rules. When you create a rule in the standalone automation UX, the default is for the rule to expire at an indefinite date and time in the future, i.e. it never expires. You can change the expiration to any date and time in the future. If you are creating the automation rule from an incident, Sentinel automatically assumes that the rule should have an expiration date and time and sets it to 24 hours in the future. Just as with the default action when created from an incident, you can change the expiration to any datetime in the future, or set it back to "Indefinite" by deleting the date.

Conclusion

In this blog post, we talked about automation rules in Sentinel and how you can use them to automate mundane tasks as well as leverage them to help your SOC analysts be more effective and consistent in their day-to-day work with capabilities like incident tasks. Stay tuned for more updates and tips on automating Microsoft Sentinel!
Logic app - Escaped Characters and Formatting Problems in KQL Run query and list results V2 action

I'm building a Logic App to detect sign-ins from suspicious IP addresses. The logic includes:

1. Retrieving IPs from incident entities in Microsoft Sentinel.
2. Enriching each IP using an external API.
3. Filtering malicious IPs based on their score and risk level.
4. Storing those IPs in an array variable (MaliciousIPs).
5. Creating a dynamic KQL query to check if any of the malicious IPs were used in sign-ins, using the in~ operator.

Problem: When I use a Select and Join action to build the list of IPs (e.g., "ip1", "ip2"), the Logic App automatically escapes the quotes. As a result, the KQL query is built like this:

IPAddress in~ ([{"body":"{\"\":\"\\\"X.X.X.X\\\"\"}"}])

Instead of the expected format:

IPAddress in~ ("X.X.X.X", "another.ip")

This causes a parsing error when the Run query and list results V2 action is executed against Log Analytics.

Here's the For Each loop that contains the issue: a dynamic Compose to build the KQL query with concat, since it contains the dynamic values above:

concat('SigninLogs | where TimeGenerated > ago(3d) | where UserPrincipalName == \"',variables('CurrentUPN'),'\" | where IPAddress in~ (',outputs('Join_MaliciousIPs_KQL'),') | project TimeGenerated, IPAddress, DeviceDetail, AppDisplayName, Status')

The CurrentUPN part works as expected, using the same format in the Initialize/Set variable actions above (Array/String (for IPs)). The rest of the loop:

Note: Even if there is a "failed to retrieve" error in the picture, don't bother with that; it's just about the dynamic value for the subscription, which I've entered manually, and it's working fine.

What I've tried:

- Using concat('\"', item()?['ip'], '\"') inside Select (causes extra escaping).
- Removing quotes and relying on Logic App formatting (resulted in object wrapping).
- Flattening the array using a secondary Select to extract only values.
- Using Compose to debug outputs.

Despite these attempts, the query string is always malformed due to extra escaping or nested JSON structure. I would like to know if someone has encountered this or has a solution to this annoying problem?

Best regards
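For reference, this is a sketch of the query the concat expression is ultimately meant to produce once the Join output is a plain comma-separated list of quoted IPs (the UPN and IP values below are placeholders only):

```kql
// Target shape of the assembled query; UPN and IP values are illustrative only.
SigninLogs
| where TimeGenerated > ago(3d)
| where UserPrincipalName == "user@contoso.com"
| where IPAddress in~ ("203.0.113.10", "198.51.100.25")
| project TimeGenerated, IPAddress, DeviceDetail, AppDisplayName, Status
```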
Sentinel incident playbook - get alert entities

Hi! My main task is to get all alerts (alerts, not incidents) from Sentinel (analytics rules and Defender XDR) into an external case management system. For various reasons we need to do this at the alert level. The alert trigger by design works perfectly, but it does not trigger on Defender alerts in Sentinel, only on analytics rules. When using the Sentinel incident trigger, I'm not able to extract the entities related to the alerts, only the incident-related entities. The final output is sent with an HTTP POST to our external system using a Logic App. Any ideas how to get all alerts with their entities in a Logic App?
Tips on how to process firewall URL/DNS alerts

Greetings. I've been tossing this around ever since I started using Sentinel, or rather Defender XDR, a few years ago: how to process events from our firewalls, specifically all the URL and DNS block alerts generated. I don't want to tune them out completely because they might be an indication of something bigger, but as the situation is now it's almost impossible to process them. All alerts are created by an NDR rule that processes CommonSecurityLog entries based on syslog data and creates incidents with entities. The way Defender XDR then processes these Sentinel incidents seems to be to create "mega incidents" where it dumps all incidents for a specific time. I can understand the logic behind this, where Defender XDR tries to piece together different incidents and alerts that have common elements, users, MITRE attributes, or any combination. But it becomes unmanageable. I would like some input from the community, or references to best practices.
Behavior Analytics, investigation Priority

Hello,

Regarding the InvestigationPriority field in the BehaviorAnalytics table, what value does Microsoft consider high/critical enough to look into the user's account? By analyzing the logs I would say 7 or higher. If someone could tell me, thank you in advance.
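As a minimal sketch of that approach (the 7-or-higher cut-off is the poster's own judgment, not official Microsoft guidance), a query over the standard BehaviorAnalytics schema might look like:

```kql
// Surface UEBA activities at or above the suggested priority threshold.
// The threshold of 7 is a judgment call, not a documented Microsoft recommendation.
BehaviorAnalytics
| where TimeGenerated > ago(7d)
| where InvestigationPriority >= 7
| project TimeGenerated, UserPrincipalName, ActivityType, SourceIPAddress, InvestigationPriority
| order by InvestigationPriority desc
```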