Automating Microsoft Sentinel: Part 2: Automate the mundane away
Welcome to the second entry of our blog series on automating Microsoft Sentinel. In this series, we’re showing you how to automate various aspects of Microsoft Sentinel, from simple automation of Sentinel alerts and incidents to more complicated response scenarios with multiple moving parts. So far, we’ve covered Part 1: Introduction to Automating Microsoft Sentinel, where we talked about why you would want to automate as well as an overview of the different types of automation you can do in Sentinel.

Here is a preview of what you can expect in the upcoming posts [we’ll be updating this post with links to new posts as they happen]:

Part 1: Introduction to Automating Microsoft Sentinel
Part 2: Automation Rules [You are here] – Automate the mundane away
Part 3: Playbooks Part I – Fundamentals
Part 4: Playbooks Part II – Diving Deeper
Part 5: Azure Functions / Custom Code
Part 6: Capstone Project (Art of the Possible) – Putting it all together

Part 2: Automation Rules – Automate the mundane away

Automation rules can be used to automate Sentinel itself. For example, let’s say there is a group of machines that have been classified as business critical, and if there is an alert related to those machines, the incident needs to be assigned to a Tier 3 response team and the severity of the alert needs to be raised to at least “High”. Using an automation rule, you can take one analytic rule, apply it to the entire enterprise, and then have an automation rule that applies only to those business-critical systems to make those changes. That way, only the items that need that immediate escalation receive it, quickly and efficiently.

Automation Rules In Depth

So, now that we know what automation rules are, let’s dive into them a bit deeper to better understand how to configure them and how they work.

Creating Automation Rules

There are three main places where we can create an automation rule:

1) Navigating to Automation under the left menu
2) In an existing incident, via the “Actions” button
3) When writing an analytic rule, under the “Automated response” tab

The process for each is generally the same, except for the incident route, and we’ll break that down more in a bit. When we create an automation rule, we need to give the rule a name. It should be descriptive and indicative of what the rule is going to do and what conditions it applies to. For example, a rule that automatically resolves an incident based on a known false positive condition on a server named SRV02021 could be titled “Automatically Close Incident When Affected Machine is SRV02021”, but really it’s up to you to decide what you want to name your rules.

Trigger

The next thing we need to define for our automation rule is the trigger. Triggers are what cause the automation rule to begin running. They can fire when an incident is created or updated, or when an alert is created. Of the two options (incident based or alert based), it’s preferred to use incident triggers: an incident is potentially the aggregation of multiple alerts, and the odds are that you’re going to want to take the same automation steps for all of those alerts since they’re all related. It’s better to reserve alert-based triggers for scenarios where an analytic rule fires an alert but is set not to create an incident.

Conditions

Conditions are, well, the conditions to which this rule applies. There are two conditions that are always present: the incident provider and the analytic rule name. You can choose multiple criteria and combine them as needed.
For example, you could have it apply to all incident providers and all rules (as shown in the picture above), or only a specific provider and all rules, or not apply to a particular provider, and so on. You can also add additional conditions that will either include or exclude the rule from running. When you create a new condition, you can build it out from multiple properties, ranging from information about the incident all the way to information about the entities that are tagged in the incident.

Remember our earlier automation rule title, where we said this was a false positive about a server named SRV02021? This is where we make the rule match that title, by setting the condition so the automation only fires if the entity has a host name of “SRV02021”. By combining AND and OR group clauses with the built-in conditional filters, you can make the rule as specific as you need it to be.

You might be thinking to yourself that while there is a lot of power in creating these conditions, it might be a bit onerous to create them for each rule. Recall earlier where I said the process for the three ways of creating automation rules was generally the same, except when using the incident Actions route? Well, that route will pre-fill values for the selected incident. For example, in the image below, the rule automatically took the rule name, the rules it applies to, as well as the entities that were mapped in the incident. You can add, remove, or modify any of the values that the process auto-maps.

NOTE: In the new unified security operations platform (Defender XDR + Sentinel) there is some new best-practice guidance: If you’ve created an automation using “Title”, use “Analytic rule name” instead, since the Title value could change with Defender’s correlation engine. Also, the option for “Incident provider” has been removed and replaced by “Alert product names” to filter based on the alert provider.

Actions

Now that we’ve tuned our automation rule to only fire for the situations we want, we can set up what actions we want the rule to execute. Clicking the “Actions” drop-down list will show you the options you can choose. When you select an option, the user interface will change to match your selection. For example, if I choose to change the status of the incident, the UX will update to show me a drop-down menu with options for which status I would like to set. If I choose other options (run playbook, change severity, assign owner, add tags, add task), the UX will change to reflect my option.

You can assign multiple actions within one automation rule by clicking the “Add action” button and selecting the next action you want the system to take. For example, you might want to assign an incident to a particular user or group, change its severity to “High”, and then set the status to Active. Notably, when you create an automation rule from an incident, Sentinel automatically sets a default action of Change status. It sets the automation up to set the status to “Closed” with a classification of “Benign Positive – Suspicious but expected”. This default action can be deleted, and you can then set up your own action.

In a future episode of this blog we’re going to talk about playbooks in detail, but for now just know that this is the place where you can assign a playbook to your automation rules.
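Everything above is configured in the portal UI, but teams that manage Sentinel as code can create the same kind of rule through the Microsoft.SecurityInsights REST API. The following is a minimal sketch only: the subscription, resource group, workspace, and rule ID are placeholders, and the payload property names and api-version are assumptions that should be verified against the current automation rules REST reference before use. It assumes an authenticated Az PowerShell session (Connect-AzAccount).

```powershell
# Sketch: create an automation rule that raises severity to High for new incidents
# whose host entity is SRV02021. Placeholder IDs throughout; verify the payload schema
# and api-version against the Microsoft.SecurityInsights REST documentation.
$sub    = "00000000-0000-0000-0000-000000000000"   # placeholder subscription ID
$rg     = "my-sentinel-rg"                          # placeholder resource group
$ws     = "my-sentinel-workspace"                   # placeholder workspace name
$ruleId = [guid]::NewGuid().ToString()              # automation rules are keyed by a GUID

$body = @{
    properties = @{
        displayName     = "Escalate incidents affecting SRV02021"
        order           = 1
        triggeringLogic = @{
            isEnabled    = $true
            triggersOn   = "Incidents"
            triggersWhen = "Created"
            conditions   = @(
                @{
                    conditionType       = "Property"
                    conditionProperties = @{
                        propertyName   = "HostHostName"        # assumed property name for the host entity
                        operator       = "Equals"
                        propertyValues = @("SRV02021")
                    }
                }
            )
        }
        actions = @(
            @{
                order               = 1
                actionType          = "ModifyProperties"
                actionConfiguration = @{ severity = "High" }
            }
        )
    }
} | ConvertTo-Json -Depth 10

$path = "/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.OperationalInsights/workspaces/$ws" +
        "/providers/Microsoft.SecurityInsights/automationRules/$ruleId" +
        "?api-version=2023-02-01"                   # assumed api-version; check the current docs

Invoke-AzRestMethod -Method PUT -Path $path -Payload $body
```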
There is one other option in the Actions menu that I wanted to specifically talk about in this blog post though: Incident tasks.

Incident Tasks

Like most cybersecurity teams, you probably have a runbook of the different tasks or steps that your analysts and responders should take for different situations. By using incident tasks, you can now embed those runbook steps directly in the incident. Incident tasks can be as lightweight or as detailed as you need them to be and can include rich formatting, links to external content, images, etc. When an incident with tasks is generated, the SOC team will see these tasks attached as part of the incident and can then take the defined actions and check off that they’ve been completed.

Rule Lifetime and Order

There is one last section of automation rules that we need to define before we can start automating the mundane away: when should the rule expire, and in what order should the rule run compared to other rules. When you create a rule in the standalone Automation UX, the default is for the rule to expire at an indefinite date and time in the future, i.e., it never expires. You can change the expiration date and time to any date and time in the future. If you are creating the automation rule from an incident, Sentinel assumes that this rule should have an expiration date and time and sets it automatically to 24 hours in the future. Just as with the default action when created from an incident, you can change the date and time of expiration to any date and time in the future, or set it to “Indefinite” by deleting the date.

Conclusion

In this blog post, we talked about automation rules in Sentinel and how you can use them to automate mundane tasks, as well as how to leverage them to help your SOC analysts be more effective and consistent in their day-to-day work with capabilities like incident tasks. Stay tuned for more updates and tips on automating Microsoft Sentinel!
Automate Extraction of Microsoft Sentinel Analytical Rules from GitHub Solutions

🔧 Enhancing Pre-Deployment Rule Insights

Extracting metadata like rule name, severity, MITRE tactics, and techniques for out-of-the-box analytical rules across multiple solutions can be time-consuming when done manually, especially before the rules are deployed.

🚀 Script Overview

The PowerShell script, hosted on GitHub, lets you:
Provide the exact Microsoft Sentinel solution name as input, from the Microsoft Sentinel GitHub repository: Azure-Sentinel/Solutions at master · Azure/Azure-Sentinel · GitHub
Automatically query the Microsoft Sentinel GitHub repo
Parse all associated analytical rule YAMLs under that solution
Export relevant metadata into a structured CSV

📥 GitHub Link

This is my GitHub repository where the custom PowerShell script is hosted. It allows you to extract built-in analytical rules from Microsoft Sentinel solutions based on the solution name:
🔗 GitHub - SentinelArtifactExtract (Optimized Script)

📝 Pre-Requisites:

1. Generate a GitHub personal access token. See GitHub's official page on generating a PAT: Managing your personal access tokens - GitHub Docs. Why a GitHub PAT token? It authenticates your requests and helps you avoid the GitHub API rate-limit error (403).
2. Download the script from GitHub to Azure Cloud Shell. Use Invoke-WebRequest or curl to download the raw script:

Invoke-WebRequest -Uri "https://raw.githubusercontent.com/vdabhi123/SentinelArtifactExtract/main/Extract%20Sentinel%20Analytical%20Rule%20with%20Solution%20Name%20prompt/OptimizedVersionPromptforSolutionNameOnly" -OutFile "ExtractRules.ps1"

3. Update the script with your GitHub PAT (generated in pre-requisite 1). To update the PAT token you can use vim; make sure you then run the updated script.

🧪 How to Use the Script

Open Azure Cloud Shell (PowerShell).
Upload and run the script. (This is optional if pre-requisite 3 is followed.)
Run the script and enter the exact solution name (e.g., McAfee ePolicy Orchestrator).
The script fetches rule metadata and exports it to a CSV in the same directory.
Download the CSV from Cloud Shell.

📤 Sample Output

The script generates a CSV with the following columns:
- Solution
- AnalyticalRuleName
- Description
- Severity
- MITRE_Tactics
- MITRE_Techniques

✅ Benefits

Streamlines discovery of built-in analytical rules for initial Microsoft Sentinel deployments.
Accelerates requirements gathering by exporting rules into a shareable CSV format.
Enables collaborative planning: output can be shared with clients or Microsoft to determine which rules to implement or recommend.
Eliminates the manual effort of browsing GitHub or the Microsoft Sentinel UI, or exporting and reviewing full JSON rule files individually.

💡 Pro Tips

Always verify the solution name from the official Microsoft Sentinel GitHub Solutions folder: Azure-Sentinel/Solutions at master · Azure/Azure-Sentinel · GitHub

📌 Final Thoughts

This script was created in response to a real-world project need and is focused on improving the discovery and extraction of Microsoft Sentinel analytical rules. A follow-up blog covering the export of additional Sentinel artifacts, such as playbooks, workbooks, and hunting queries, will be published soon.
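If you are curious about the core idea behind such a script, the sketch below shows one way to list a solution's analytic rule YAML files from the Azure-Sentinel repository and pull a couple of metadata fields with simple pattern matching. This is not the script from the repository above; the folder name ("Analytic Rules"), the PAT placeholder, and the regular expressions are assumptions, and a real YAML parser (for example the powershell-yaml module) would be more robust.

```powershell
# Rough sketch: enumerate analytic rule YAMLs for one Sentinel solution and extract basic metadata.
# Assumes the solution stores rules under an "Analytic Rules" folder; adjust if your solution differs.
$solution = "McAfee ePolicy Orchestrator"              # exact solution folder name
$pat      = "<your-github-pat>"                        # GitHub personal access token (placeholder)
$headers  = @{ Authorization = "token $pat" }

$folderPath = ("Solutions/$solution/Analytic Rules") -replace ' ', '%20'
$listUri    = "https://api.github.com/repos/Azure/Azure-Sentinel/contents/$folderPath"

$rules = foreach ($file in (Invoke-RestMethod -Uri $listUri -Headers $headers)) {
    if ($file.name -notmatch '\.ya?ml$') { continue }
    $yaml = Invoke-RestMethod -Uri $file.download_url -Headers $headers   # raw YAML as text

    # Naive line-based extraction of two fields; swap in a YAML parser for production use.
    [pscustomobject]@{
        Solution           = $solution
        AnalyticalRuleName = [regex]::Match($yaml, '(?m)^name:\s*(.+)$').Groups[1].Value.Trim()
        Severity           = [regex]::Match($yaml, '(?m)^severity:\s*(.+)$').Groups[1].Value.Trim()
    }
}

$rules | Export-Csv -Path ("./" + ($solution -replace '\s', '') + "_AnalyticRules.csv") -NoTypeInformation
```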
Transitioning from the HTTP Data Collector API to the Log Ingestion API…What does it mean for me?

This article is co-authored by Andrea Fisher, Brian Delaney, and Jon Shectman (Microsoft Customer Success Unit).

Many customers have recently received an email sharing the information that the HTTP Data Collector API will be retired on September 14, 2026. What exactly does that mean for you? Either you have deployed a built-in Microsoft Sentinel data connector that uses the HTTP Data Collector API, or you have configured a custom connector of your own that uses the API. In this blog, we’ll explain why you got (or will receive) this notification, what’s at stake, and what actions you need to take.

But first, what is the HTTP Data Collector API, anyway?

The HTTP Data Collector API is nothing more than a set of rules and protocols governing (you guessed it!) data collection – in this case, to Azure Monitor (a “back end” for Microsoft Sentinel). This API has been deprecated in favor of a newer, improved API: the Azure Monitor Logs Ingestion API. Here is a copy of the email:

What actions should I take?

As you can see, the Account Information section only lists the subscription name and ID that are calling the old API. It doesn’t state how your organization is calling it. Below are three possibilities:

1. You have a custom application that you built or licensed.
2. You have custom data connectors (likely built as either Azure Functions or codeless connectors).
3. You have a data connector from the in-product Content Hub, provided by Microsoft or one of our partner ISVs – that connector will be rewritten prior to the API deprecation date.

It’s also possible that you are using more than one of the above methods in your workspace, or in more than one workspace in your subscription. There are several steps you can take to start discovering your usage of this deprecated API.

In your Log Analytics workspace, navigate to Settings, then Tables, and examine the Type column. Any table built with data from the deprecated API will be of type Custom table (classic). Remember, some of these tables may not be in use anymore; there are many ways to identify tables that are in active use. One way is with a simple query, as in this example:

InformationProtectionLogs_CL
| where TimeGenerated > ago(90d)

You could also examine the Usage and estimated costs chart in Log Analytics, or if you want to check regularly over time, you could set up a log search alert rule.

Now let’s examine built-in data connectors that use the deprecated API. Generally, they specify their usage in the details:

To remediate: If you discover a custom application or data connector, you will need to follow these steps to transition to the Logs Ingestion API before the retirement date. We recommend that you do not wait, but start the process early to give your organization time to thoroughly test and migrate all applications and connectors. For built-in data connectors, you’ll need to watch the Content Hub for updates and guidance, as shown in these two screenshots:

Advantages of the Azure Monitor Logs Ingestion API

There are numerous advantages to using the new API:

It supports transformations, which enable you to modify the data before it’s ingested into the destination table, including filtering and data manipulation.
It allows you to send data to supported Azure tables or to custom tables that you create.
You can extend the schema of Azure tables with custom columns to accept additional data.
It lets you send data to multiple destinations.
Last but certainly not least (we are security practitioners, after all): it allows for granular role-based access control (RBAC) to limit the ability to ingest data by data collection rule and identity.

In Summary

The transition from the HTTP Data Collector API to the Azure Monitor Logs Ingestion API is crucial for maintaining data ingestion functionality and security. The new API offers several advantages, including secure OAuth-based authentication, the ability to filter and transform data during ingestion, and granular RBAC. Organizations should proactively transition to the new API before the retirement date of September 14, 2026.
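To make the migration concrete, here is a minimal sketch of what a Logs Ingestion API call looks like from PowerShell: acquire a token for the Entra ID application, then POST a JSON array of records to the data collection endpoint, addressing the DCR immutable ID and stream name. Every ID, URL, secret, and the stream name below is a placeholder, and the api-version should be confirmed against the current Logs Ingestion API documentation.

```powershell
# Sketch only: send one record to a DCR-based table via the Logs Ingestion API.
# Placeholders throughout (tenant, app registration, DCE URL, DCR immutable ID, stream name),
# and the record fields must match your DCR's stream schema.
$tenantId       = "<tenant-guid>"
$appId          = "<app-client-id>"
$appSecret      = "<app-client-secret>"
$dceEndpoint    = "https://my-dce-abcd.westeurope-1.ingest.monitor.azure.com"   # hypothetical DCE URL
$dcrImmutableId = "dcr-00000000000000000000000000000000"
$streamName     = "Custom-MyAppLogs_CL"                                          # hypothetical stream name

# 1. Client-credentials token for the Entra ID application, scoped to Azure Monitor
$token = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body @{
        client_id     = $appId
        client_secret = $appSecret
        scope         = "https://monitor.azure.com/.default"
        grant_type    = "client_credentials"
    }

# 2. POST a JSON array of records; the DCR's transformation and destination take over from here
$records = @(
    @{ TimeGenerated = (Get-Date).ToUniversalTime().ToString("o"); RawData = "hello from the Logs Ingestion API" }
)
$uri = "$dceEndpoint/dataCollectionRules/$dcrImmutableId/streams/$streamName" + "?api-version=2023-01-01"

Invoke-RestMethod -Method Post -Uri $uri `
    -Headers @{ Authorization = "Bearer $($token.access_token)" } `
    -ContentType "application/json" `
    -Body (ConvertTo-Json $records -AsArray)    # -AsArray requires PowerShell 7+
```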
Multi Workspace for Single tenant is now in Public Preview in Microsoft’s unified SecOps platform

We are excited to continue to expand the use cases addressed with our unified SecOps platform, which brings the capabilities of Microsoft Sentinel, Defender XDR, Security Copilot, Threat Intelligence, and more into a single experience with new and more robust functionality. Now, customers can onboard and manage multiple workspaces across Microsoft Sentinel and Defender in one place.

Key Benefits of the Multi Workspace Experience

The multi workspace experience offers several key benefits that enhance security operations:

Unified Entity View: Customers can view all relevant entity data from multiple workspaces in a single entity page, facilitating comprehensive investigations.
Workspace Filtering: Users can filter data by workspace when needed, ensuring flexibility in investigations.
Enhanced Context: Aggregates alerts, incidents, and timeline events from all workspaces, providing deeper insights into entity behavior.

Introducing the Primary Workspace Concept

A new concept in the unified SecOps platform is the Primary Workspace, which acts as a central hub where Microsoft Sentinel alerts are correlated with XDR data, resulting in incidents that include alerts from both Microsoft Sentinel’s primary workspace and XDR. All XDR alerts and incidents are synced back to this workspace, ensuring a cohesive and comprehensive view of security events. The XDR connector is automatically connected to the primary workspace upon onboarding and can be switched if necessary. One primary workspace must always be connected to use the unified platform effectively. Other onboarded workspaces are considered “secondary” workspaces, with incidents created based on their individual data. We respect and protect your data boundaries: each workspace’s data will be synced with its own alerts only. Learn more: https://aka.ms/primaryWorkspace

Multi Workspace Experience – Key Scenarios

Onboarding multiple workspaces to the unified SecOps platform:

Open the security portal: https://security.microsoft.com/
There are two options to connect workspaces; you can select either one:

Option A: Connecting the workspace through the main home page:
Click on “Connect a workspace” in the banner.
Select the workspaces you wish to onboard and click on “Next”.
Select the primary workspace.
Review the text and click on “Connect”.
After completing the connection, click on “Close”.

Option B: Connecting the workspaces through the Settings page:
Navigate to Settings and choose “Microsoft Sentinel”.
Click on “Connect workspace”.
Follow the same steps as Option A.

Switching Primary Workspaces

Navigate to Settings and choose “Microsoft Sentinel”.
On the workspace you wish to assign as primary, click on the “3 dots” and choose “Set as primary”.
Confirm and proceed.

Incidents and Alerts

The incident queue is a single place for a SOC analyst to manage and investigate incidents. The alert queue centralizes all your workspaces’ alerts in the same place and provides the ability to see the alert page. In the unified queues, you are now able to view all incidents and alerts from all workloads and all workspaces, and also filter by workspace. Each alert and incident is related to a single workspace to keep data boundaries. Bi-directional sync: any change in the unified SecOps portal is reflected in the Sentinel portal, and vice versa.

Unified Entities

The multi workspace aggregated view enhances entity pages in the unified portal by consolidating data from all relevant Sentinel workspaces into a single, unified experience.
This feature enables security teams to gain a complete view of entity-related data without switching between workspaces, improving investigation efficiency and data accessibility. The unified entity page gives you:

Unified Entity View: Customers can see all relevant entity data from multiple workspaces in a single entity page.
Workspace Filtering: Users can filter data by workspace when needed, ensuring flexibility in investigations.
Enhanced Context: Aggregates alerts, incidents, and timeline events from all workspaces, providing deeper insights into entity behavior.
Aggregated view: Provides a unified view of entity data across all workspaces, supports a predefined logic to display key entity values across components, and introduces workspace filtering in the Timeline, Incidents & Alerts, and Insights tabs.

Entity Page Enhancements:
Overview section: Displays entity metadata aggregated from all workspaces.
Timeline view: Supports events from all workspaces with workspace-based filtering.
Incidents & Alerts: Aggregates incidents and alerts from multiple workspaces.
Sentinel tab: Defaults to the primary workspace but allows workspace filtering.
Side pane: Provides a summary view, dynamically updating based on workspace data.

Advanced Hunting

In Advanced Hunting, you'll be able to explore all your security data in a single place. For hunting and investigation purposes, you'll be able to:

Query data from all Microsoft Sentinel workspaces.
Run queries across multiple workspaces using the workspace operator (a small cross-workspace query sketch appears at the end of this post).
Access all Logs content of the workspace, including queries and functions, for read/query.
Create custom detections on the primary workspace.
Create an analytic rule with the workspace operator on a secondary workspace.

Microsoft Sentinel Features + Using the Workspace Selector

After you connect your workspace to the unified portal, Microsoft Sentinel appears in the left-hand navigation pane. Many of the existing Microsoft Sentinel features are integrated into the unified portal and work in a similar way. Workspace selector: for users with permissions to multiple workspaces, a workspace selector is added to the toolbar on each Sentinel page. Users can easily switch between workspaces by clicking on “Select a workspace”.

SOC Optimization

The SOC optimization feature is also available in the unified portal and contains data and recommendations for multiple workspaces.

FAQ

Who can onboard multiple workspaces?
To onboard a primary workspace, the user must be: Global admin/Security admin AND Owner of the subscription, OR Global admin/Security admin AND User access admin AND Microsoft Sentinel contributor. To onboard secondary workspaces, the user must be Owner of the subscription, OR User access admin AND Microsoft Sentinel contributor.

Who can change the primary workspace?
A Global admin or Security admin can change the workspace type (primary/secondary).

Do I need to onboard all my workspaces?
You don’t need to onboard all your workspaces to use this feature, although we highly recommend you do, to ensure full coverage across your environment.

Will all users in my organization have access to all workspaces in the unified security operations portal?
No, we respect the permissions granted to each user. Users can only see data from the workspaces they have permissions to.

Will data from one workspace be synced to a second workspace?
No, we keep the data boundaries between workspaces and ensure that each workspace is only synced with its own data.

When will multi-tenancy be available?
Multi-tenancy in the unified SecOps platform for a single workspace is already generally available. Multi-tenancy for multiple workspaces is released to public preview along with this capability as well.

Can I still access my environment in Azure?
Yes, all experiences remain the same. We provide bi-directional sync to make sure all changes are up to date.

Conclusion

Microsoft’s unified SecOps platform support for multi workspace customers represents a significant leap forward in cybersecurity management. By centralizing operations and providing robust tools for detection, investigation, and automation, it empowers organizations to maintain a vigilant and responsive security posture. The platform’s flexibility and comprehensive view of security data make it an invaluable asset for modern security operations. With the public preview now available, organizations can experience firsthand the transformative impact of the unified security operations platform. Join us in pioneering a new era of cybersecurity excellence.

Learn More

Please visit our documentation to learn more about the supported scenarios and how to onboard multiple workspaces to the unified platform: https://aka.ms/OnboardMultiWS
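As a companion to the Advanced Hunting notes above, the sketch below shows the same cross-workspace idea exercised programmatically: a KQL query that unions a table from a secondary workspace by name using the workspace() operator, run against the primary workspace with the Az.OperationalInsights module. The workspace GUID and the secondary workspace name are placeholders, and this runs against Log Analytics rather than the Defender portal's Advanced Hunting page.

```powershell
# Sketch: cross-workspace KQL using the workspace() operator.
# Placeholders: the primary workspace's customer ID (GUID) and the secondary workspace name.
$primaryWorkspaceId = "<primary-workspace-customer-id-guid>"

$query = @"
union SecurityIncident, workspace("secondary-workspace-name").SecurityIncident
| where TimeGenerated > ago(1d)
| summarize Incidents = count() by TenantId
"@

# Requires an authenticated Az session (Connect-AzAccount) and the Az.OperationalInsights module
Invoke-AzOperationalInsightsQuery -WorkspaceId $primaryWorkspaceId -Query $query |
    Select-Object -ExpandProperty Results
```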
Go agentless with Microsoft Sentinel Solution for SAP

What a title during agentic AI times 😂 Dear community,

Bringing SAP workloads under the protection of your SIEM solution is a primary concern for many customers out there. The window for defenders is small: “Critical SAP vulnerabilities being weaponized in less than 72 hours of a patch release and new unprotected SAP applications provisioned in cloud (IaaS) environments being discovered and compromised in less than three hours.” (SAP SE + Onapsis, Apr 6 2024)

A solution that is as close to turn-key as possible leads to better adoption of SAP security. Agent-based solutions running in Docker containers, Kubernetes, or other self-hosted environments are not for everyone. The latest capability of Microsoft Sentinel solution for SAP re-uses the SAP Cloud Connector to profit from already existing setups, established integration processes, and well-understood SAP components.

Meet agentless ❌🤖

The new integration path leverages SAP Integration Suite to connect Microsoft Sentinel with your SAP systems. The Cloud Integration capability of SAP Integration Suite speaks all relevant protocols, has connectivity into all the places where your SAP systems might live, is strategic for most SAP customers, and is fully SAP RISE compatible by design. Are you deployed on SAP Business Technology Platform yet? Simply upload our Sentinel for SAP integration package (see the bottom box in the image below) to your SAP Cloud Integration instance, configure it for your environment, and off you go.

Best of all: the already existing SAP security content (detections, workbooks, and playbooks) in Microsoft Sentinel continues to function the same way as it does for the Docker-based collector agent variant. The integration marks your steppingstone to bring your SAP threat signals into the unified security operations platform – a combination of Defender XDR and Sentinel – that looks beyond SAP at your whole IT estate. Microsoft Sentinel solution for SAP applications is certified for SAP S/4HANA Cloud, Private Edition RISE with SAP, and SAP S/4HANA on-premises. So, you are all good to go 😎

Are you already dockerized or agentless? Then proceed to this post to learn more about what to do once the SAP logs arrive in Sentinel.

Final Words

During the preview we saw drastically reduced deployment times for SAP customers who were less familiar with Docker, Kubernetes, and Linux administration. Cherry on the cake: the network challenges don’t have to be tackled again. The colleagues running your SAP Cloud Connector went through that process a long time ago. SAP Basis rocks 🤘

Get started from here on Microsoft Learn. Find more details in our blog on the SAP Community. Cheers, Martin
Microsoft Sentinel & Cyberint Threat Intel Integration Guide

Explore our comprehensive guide, “Microsoft Sentinel & Cyberint Threat Intel Integration Guide,” to learn how to integrate Cyberint’s advanced threat intelligence with Microsoft Sentinel. This detailed resource walks you through the integration process, enabling you to leverage enriched threat data for improved detection and response. Elevate your security posture and ensure robust protection against emerging threats. Read the guide to streamline your threat management and enhance your security capabilities.
Integrating Fluent Bit with Microsoft Sentinel

This guide will walk you through the steps required to integrate Fluent Bit with Microsoft Sentinel. Note that in this article we assume you already have a Sentinel workspace, a data collection endpoint, a data collection rule, an Entra ID application, and a Fluent Bit installation. As mentioned above, the Log Ingestion API supports ingestion both into custom tables and into built-in tables, like CommonSecurityLog, Syslog, WindowsEvent, and more. In case you need to check which tables are supported, please see the following article: https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview#supported-tables

Prerequisites

Before beginning the integration process, ensure you have the following:

An active Azure subscription with Microsoft Sentinel enabled.
A Microsoft Entra ID application; take note of the Client ID, Tenant ID, and client secret. To create one, check this article: https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app?tabs=certificate
A Data Collection Endpoint (DCE). To create a data collection endpoint, please check this article: https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/data-collection-endpoint-overview?tabs=portal
A Data Collection Rule (DCR). Fields in the data collection rule need to match exactly what exists in the table columns and also the fields from the log source. To create a DCR, please check this article: https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/data-collection-rule-create-edit?tabs=cli
Depending on the source, it might require a custom table to be created, or an existing table from the Log Analytics workspace.
Fluent Bit installed on your server or container. In case you haven’t yet installed Fluent Bit, the following article has the instructions per type of operating system: https://docs.fluentbit.io/manual/installation/getting-started-with-fluent-bit

High-level architecture:

Step 1: Setting up the Fluent Bit configuration file

Before we step into the configuration: Fluent Bit has numerous output plugins, and one of them sends data through the Log Analytics Ingestion API, both to supported Sentinel tables and to custom tables. You can check more information about it in the Fluent Bit documentation: https://docs.fluentbit.io/manual/pipeline/outputs/azure_logs_ingestion

Moving forward, in order to configure Fluent Bit to send logs to the Sentinel Log Analytics workspace, please take note of the specific input plugin you are using (or intend to use) to receive logs and how you can use it to output those logs to the Sentinel workspace. For example, most Fluent Bit plugins allow you to set a “tag” key, which can be used within the output plugin so that only the intended logs are matched and sent. On the other hand, in a scenario where multiple input plugins are used and all are required to send logs to Sentinel, a wildcard match ("*") could be used as well. As another example, in a scenario where there are multiple input plugins of type “HTTP” and you want to send just a specific one to Sentinel, the “match” field must be set according to the position of the required input plugin, for example “match http.2” if the input plugin is the third in the list of HTTP inputs. If nothing is specified in the “match” field, it will assume “http.0” by default.
For better understanding, here’s an example of how a Fluent Bit config file could look. First, the configuration file is located under the path “/etc/fluent-bit/fluent-bit.conf”. The first part is the definition of all input plugins, then follow the filter plugins, which you can use, for example, to rename fields from the source to match what exists within the data collection rule schema and the Sentinel table columns, and finally there are the output plugins. Below is a screenshot of a sample config file.

INPUT plugins section: In this example we’re going to use the dummy input to send sample messages to Sentinel. However, in your scenario you could leverage other input plugins within the same config file. After everything is configured in the input section, make sure to complete the FILTER section if needed, and then move on to the output plugin section (screenshot below).

OUTPUT plugins section: In this section, we have output plugins to write to a local file based on two tags (“dummy.log” and “logger”), an output plugin that prints the output in JSON format, and the output plugin responsible for sending data to Microsoft Sentinel. As you can see, this one matches the tag “dummy.log”, where we’ve set up the message {"Message":"this is a sample message for testing fluent bit integration to Sentinel", "Activity":"fluent bit dummy input plugin", "DeviceVendor":"Ubuntu"}. Make sure you insert the correct parameters in the output plugin, in this scenario the azure_logs_ingestion plugin.

Step 2: Fire up Fluent Bit

When the file is ready to be tested, please execute the following:

sudo /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf

Fluent Bit will start initializing all the plugins it has in the config file. Then your access token should be retrieved if everything is set up correctly in the output plugin (app registration details, data collection endpoint URL, data collection rule ID, Sentinel table, and, importantly, make sure the name of the output plugin is actually “azure_logs_ingestion”). In a couple of minutes you should see this data in your Microsoft Sentinel table, either an existing table or a custom table created for the specific log source.

Summary

Integrating Fluent Bit with Microsoft Sentinel provides a powerful solution for log collection and analysis. By following this guide, we hope you can set up a seamless integration that enhances your organization’s ability to monitor and respond to security threats; just carefully ensure that all fields processed in Fluent Bit map exactly to the fields in the data collection rule and the Sentinel table within the Log Analytics workspace. Special thanks to Bindiya Priyadarshini, who collaborated with me on this blog post. Cheers!
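Once Fluent Bit reports a successful token and flush, a quick way to confirm end-to-end delivery is to query the destination table from PowerShell. This is a small sketch only: the workspace GUID and the table name are placeholders for your own environment, and the projected columns simply mirror the sample dummy record used above.

```powershell
# Sketch: confirm the Fluent Bit test records landed in the destination table.
# Placeholders: workspace customer ID (GUID) and the destination table name.
$workspaceId = "<sentinel-workspace-customer-id-guid>"
$table       = "FluentBitTest_CL"    # hypothetical custom table; use the table targeted by your DCR

$query = @"
$table
| where TimeGenerated > ago(15m)
| project TimeGenerated, Message, Activity, DeviceVendor
| order by TimeGenerated desc
"@

# Requires an authenticated Az session (Connect-AzAccount) and the Az.OperationalInsights module
Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query |
    Select-Object -ExpandProperty Results
```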
Introducing Threat Intelligence Ingestion Rules

Microsoft Sentinel just rolled out a powerful new public preview feature: ingestion rules. This feature lets you fine-tune your threat intelligence (TI) feeds before they are ingested into Microsoft Sentinel. You can now set custom conditions and actions on indicators of compromise (IoCs), threat actors, attack patterns, identities, and their relationships. Use cases include:

Filter out false positives: Suppress IoCs from feeds known to generate frequent false positives, ensuring only relevant intel reaches your analysts.
Extend IoC validity periods for feeds that need longer lifespans.
Tag TI objects to match your organization’s terminology and workflows.

Get Started Today with Ingestion Rules

To create a new ingestion rule, navigate to “Intel management” and click on “Ingestion rules”. With the new ingestion rules feature, you have the power to modify or remove indicators even before they are integrated into Sentinel. These rules allow you to act on indicators currently in the ingestion pipeline. Note: it can take up to 15 minutes for a rule to take effect.

Use Case #1: Delete IoCs with low confidence scores during ingestion

When ingesting IoCs from TAXII, the upload API, or file upload, indicators are imported continuously. With pre-ingestion rules, you can filter out indicators that do not meet a certain confidence threshold. Specifically, you can set a rule to drop all indicators in the pipeline with a confidence score of 0, ensuring that only reliable data makes it through.

Use Case #2: Extending IoCs

The following rule can be created to automatically extend the expiration date for all indicators in the pipeline where the confidence score is greater than 75. This ensures that these high-value indicators remain active and usable for a longer duration, enhancing the overall effectiveness of threat detection and response.

Use Case #3: Bulk Tagging

Bulk tagging is an efficient way to manage and categorize large volumes of indicators based on their confidence scores. With pre-ingestion rules, you can set up a rule to tag all indicators in the pipeline where the confidence score is greater than 75. This automated tagging process helps in organizing indicators, making it easier to search, filter, and analyze them based on their tags. It streamlines the workflow and improves the overall management of indicators within Sentinel.

Managing Ingestion Rules

In addition to the specific use cases mentioned, managing ingestion rules gives you control over the entire ingestion process.

1. Reorder Rules
You can reorder rules to prioritize certain actions over others, ensuring that the most critical rules are applied first. This flexibility allows for a tailored approach to data ingestion, optimizing the system’s performance and accuracy.

2. Create From
Creating new ingestion rules from existing ones can save you a significant amount of time and offers the flexibility to incorporate additional logic or remove unnecessary elements. Effectively duplicating these rules ensures you can quickly adapt to new requirements, streamline operations, and maintain a high level of efficiency in managing your data ingestion process.

3. Delete Ingestion Rules
Over time, certain rules may become obsolete or redundant as your organizational needs and security strategies evolve. It’s important to note that each workspace is limited to a maximum of 25 ingestion rules.
Having a clean and relevant set of rules ensures that your data ingestion process remains streamlined and efficient, minimizing unnecessary processing and potential conflicts. Deleting outdated or unnecessary rules allows for a more focused approach to threat detection and response. It reduces clutter, which can significantly enhance performance. By regularly reviewing and purging obsolete rules, you maintain a high level of operational efficiency and ensure that only the most critical and up-to-date rules are in place.

Conclusion

By leveraging these pre-ingestion rules effectively, you can enhance the quality and reliability of the IoCs ingested into Sentinel, leading to more accurate threat detection and an improved security posture for your organization.
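As a practical footnote to use case #1 above, it can help to gauge how many indicators a confidence-based rule would actually affect. The sketch below counts already-ingested indicators by confidence score; the workspace GUID is a placeholder, and note that this only reflects indicators that have already been ingested, not those still in the ingestion pipeline.

```powershell
# Sketch: size up how many existing TI indicators fall into each confidence bucket
# before writing an ingestion rule (for example, one that drops ConfidenceScore == 0).
$workspaceId = "<sentinel-workspace-customer-id-guid>"   # placeholder

$query = @"
ThreatIntelligenceIndicator
| where TimeGenerated > ago(30d)
| summarize Indicators = count() by ConfidenceScore
| order by ConfidenceScore asc
"@

# Requires an authenticated Az session (Connect-AzAccount) and the Az.OperationalInsights module
Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query |
    Select-Object -ExpandProperty Results
```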
Introducing SOC Optimization Recommendations Based on Similar Organizations

One of the key challenges that security teams in modern SOCs regularly face is determining which new data sources to onboard and which detections to activate. This ongoing process takes time and requires constant evaluation of the organization’s assets and the value that the data brings to the SOC.

“…determining which logs to ingest for better threat coverage is time-consuming and requires significant effort. I need to spend a long time identifying the appropriate logs...”
Elie El Karkafi, Senior Solutions Architect, ampiO Solutions

Today, we’re excited to announce the public preview of recommendations based on similar organizations, a first-of-its-kind capability for SOC optimization. Recommendations based on similar organizations use peer-based insights to guide and accelerate your decision-making process. We believe that applying insights learned from the actions of organizations with similar profiles can provide great value. Recommendations based on similar organizations use advanced machine learning to suggest which data to ingest, based on organizations with similar ingestion patterns. The recommendations also highlight the security value you can gain by adding the data, and they list out-of-the-box rules provided by Microsoft research that you can activate to enhance your coverage.

Use the new recommendations to swiftly pinpoint the next recommended data source for ingestion and determine the appropriate detections to apply. This can significantly reduce the time and costs typically associated with research or consulting external experts to gain the insights you need. Recommendations based on similar organizations are now available on the SOC optimization page, in both the Azure portal and the unified security operations platform.

Use cases

Let’s take a tour of the unified security operations platform, stepping into the shoes of a small tech company that benefited from recommendations based on similar organizations during the private preview phase. In the following image, the new recommendation identifies that the AADNonInteractiveUserSignInLogs table is used by organizations similar to theirs.

Selecting the View details button on the recommendation card allowed them to explore how other organizations use the recommended table. This includes insights into the percentage of organizations using the table for detection and investigation purposes. By selecting the See details hyperlink, the SOC engineer was able to explore how coverage could be improved with respect to the MITRE ATT&CK framework, using Microsoft’s out-of-the-box rules.

By selecting Go to Content hub, the SOC engineer was able to view all the essential data connectors needed to start ingesting the recommended tables. This page also includes a detailed list of out-of-the-box, recommended analytics rules, which can provide immediate value and enhanced protection for your environment.

Finally, by following the recommendation, which uses the security practices of similar organizations as a benchmark, the tech company quickly ingested the AADNonInteractiveUserSignInLogs table and activated several recommended analytics rules. Overall, this resulted in improved security coverage, corresponding to the company’s specific characteristics and needs.

Feedback from private preview:

“I think this is a great addition.
Like being able to identify tables not being used, it is useful to understand what tables other organizations are utilizing, which could reveal things that so far haven’t been considered or were missed...”
Chris Hoard, infinigate.cloud

“In my view, those free recommendations are always welcomed and we can justify cost saving and empowering SOC analysts (that we know are more and more difficult to find).”
Cyrus Irandoust, IBM

“These recommendations will help us to take a look at the left-out stuff.”
Emmanuel Karunya, KPMG

“Nice overview and insights! Love the interface too - nice and easy overview!”
Michael Morten Sonne, Microsoft MVP

Q&A:

Q1: Why don’t I see these recommendations?
A: Not all workspaces are eligible for recommendations based on similar organizations. Workspaces only receive these recommendations if the machine learning model identifies significant similarities between your organization and others, and discovers tables that they have but you don’t. If no such similarities are identified, no extra recommendations are provided. You’re more likely to see these recommendations if your SOC is still in its onboarding process than if it is a more mature SOC.

Q2: What makes an organization similar to mine?
A: Similarity is determined based on ingestion trends, as well as your organization’s industry and vertical, when available in our databases.

Q3: Is any of my PII being used to make recommendations to other customers?
A: No. The recommendations are generated using machine learning models that rely solely on Organizational Identifiable Information (OII) and system metadata. Customer log content is never accessed or analyzed, and no customer data, content, or End User Identifiable Information (EUII) is exposed during the analysis process. Microsoft prioritizes customer privacy and ensures that all processes comply with the highest standards of data protection.

Looking forward

Microsoft continues to use artificial intelligence and machine learning to help our customers defend against evolving threats and provide enhanced protection against cyberattacks. This ongoing innovation is a key part of SOC optimization’s commitment to helping you maximize the value you get from your SIEM and XDR.

Learn More:

SOC optimization documentation: SOC optimization overview; Recommendation's logic
Short overview and demo: SOC optimization Ninja show
In-depth webinar: Manage your data, costs and protections with SOC optimization
SOC optimization API: Introducing SOC Optimization API | Microsoft Community Hub