security operations
How to: Ingest Splunk alert data to Microsoft Sentinel SIEM
Thanks to Javier Soriano, Principal Product Manager - OneSOC Customer Experience Engineering, for the peer review.

Introduction

Although the recommended approach is not to run multiple SIEM solutions in parallel, many organizations still operate dual-SIEM setups, sometimes even adding more solutions to the mix. The combination seen most often is a legacy solution paired with a modern SIEM such as Microsoft Sentinel SIEM. A side-by-side architecture is recommended only for as long as it takes to complete the migration, train people and update processes - but organizations might stay with the side-by-side configuration longer when they are not ready to move away from legacy solutions. In such situations, organizations usually opt for sending alerts from their legacy SIEM to Sentinel SIEM:

- Cloud data is ingested and analyzed in Sentinel SIEM
- On-premises data is ingested and analyzed in the legacy SIEM, which generates alerts
- Alerts are forwarded from the legacy SIEM to Sentinel SIEM

With this approach, SOC teams have a single interface where they can cross-correlate and investigate alerts from across the organization while benefiting from the full value of unified security operations in Microsoft Defender.

Send Splunk alerts to Sentinel SIEM

Splunk side

When an alert is raised in Splunk, organizations can set up the following alert actions:

- Email notification action
- Webhook alert action
- Output results to a CSV lookup
- Log events
- Monitor triggered alerts
- Send alerts and dashboards to Splunk Mobile users

The interesting one here is the webhook alert action, which makes an HTTP POST request to a URL. The data is in JSON format, which makes it easily consumable by Sentinel SIEM. For this to work, Splunk needs the hook URL of the target (in this case, Sentinel SIEM).

{
  "result": {
    "sourcetype": "mongod",
    "count": "8"
  },
  "sid": "scheduler_admin_search_W2_at_14232356_132",
  "results_link": "http://web.example.local:8000/app/search/@go?sid=scheduler_admin_search_W2_at_14232356_132",
  "search_name": null,
  "owner": "admin",
  "app": "search"
}

Example: Splunk alert JSON payload

Microsoft side

From the Microsoft perspective, organizations can take advantage of the Logs Ingestion API, which allows any application that can make a REST API call to send data to Sentinel SIEM. To configure the Logs Ingestion API, a few supporting resources are needed:

- A Microsoft Entra application, which will authenticate against the API
- A custom table in the Log Analytics workspace, where the data will be sent and made accessible to Sentinel SIEM
- A Data Collection Rule (DCR), which will direct data to the target custom table
- An RBAC role assignment on the DCR resource for the Entra application from the first step
- A solution that calls the Logs Ingestion API so the data can be sent to Sentinel SIEM
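To make that last point concrete, here is a minimal sketch of a direct call to the Logs Ingestion API using the Entra application's client-credentials flow. This is purely illustrative and is not part of the solution described below; the DCE URL, DCR immutable ID, stream name and record fields are hypothetical placeholders, and the api-version shown is the documented one at the time of writing - verify it for your environment.

```powershell
# All values below are placeholders - replace with your own environment's values.
$tenantId       = "<tenant-id>"
$appId          = "<entra-app-client-id>"
$appSecret      = "<entra-app-client-secret>"
$dceUri         = "https://my-dce-a1b2.westeurope-1.ingest.monitor.azure.com"   # Data Collection Endpoint
$dcrImmutableId = "dcr-00000000000000000000000000000000"                        # DCR immutable ID
$streamName     = "Custom-SplunkAlertInfo_CL"                                   # stream declared in the DCR

# 1. Acquire a token for the Logs Ingestion API (client credentials flow).
$token = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body @{
        client_id     = $appId
        client_secret = $appSecret
        scope         = "https://monitor.azure.com/.default"
        grant_type    = "client_credentials"
    }

# 2. Send the parsed Splunk alert as a JSON array. Field names must match the DCR stream schema
#    (the names used here are examples, not a required schema).
$records = @(
    @{
        TimeGenerated = (Get-Date).ToUniversalTime().ToString("o")
        SearchName    = "scheduler_admin_search_W2"
        ResultsLink   = "http://web.example.local:8000/app/search/@go?sid=scheduler_admin_search_W2_at_14232356_132"
        Owner         = "admin"
        App           = "search"
    }
)

Invoke-RestMethod -Method Post `
    -Uri "$dceUri/dataCollectionRules/$dcrImmutableId/streams/$($streamName)?api-version=2023-01-01" `
    -Headers @{ Authorization = "Bearer $($token.access_token)" } `
    -ContentType "application/json" `
    -Body (ConvertTo-Json -InputObject $records -Depth 5)
```

The solution described next wraps this same flow in a Logic App, so you do not have to handle tokens or payload formatting yourself.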
To streamline this process and make it easy to deploy, a solution has been developed that automates the creation of all these supporting resources and leaves you with a webhook URL that can simply be placed in your Splunk solution: https://github.com/Azure/Azure-Sentinel/tree/master/Tools/SplunkAlertIngestion

Picture: Content of the solution

The script with its supporting ARM templates can be run directly from Azure Cloud Shell and configured with a few parameters:

./SplunkAlertIngestion.ps1 -ServicePrincipalName "" -tableName "" -workspaceResourceId "" -dataCollectionRuleName "" -location ""

- ServicePrincipalName - mandatory, define a name for the service principal
- tableName - mandatory, define a name for the custom table with the suffix _CL (example: SplunkAlertInfo_CL)
- workspaceResourceId - mandatory, the resource ID can be fetched from the Log Analytics workspace Properties blade (/subscriptions/xxx-xxx/resourceGroups/xxx/providers/Microsoft.OperationalInsights/workspaces/xxx)
- dataCollectionRuleName - mandatory, define the DCR name
- location - mandatory, define the Azure location for the resources (example: northeurope, eastus2)
- LogicAppName - optional, define a name for the Logic App; otherwise it will be named SplunkAlertAutomationLogicApp

Result

The script will create all the supporting resources that are needed and will provide the webhook URL as an output. Use this URL to configure the trigger action in Splunk:

Picture: Instructions for configuring the webhook alert action in Splunk

Once the webhook is configured on the Splunk side, any time an alert is raised it will trigger the webhook, which initiates the Logic App on the Azure side responsible for parsing the data and sending it through the Logs Ingestion API to the destination table in Sentinel SIEM.

Picture: Workflow of the Logic App

Conclusion

Ingesting alert data from other solutions in your organization into Sentinel SIEM allows security teams to take advantage of unified security operations in Microsoft Defender - easier cross-correlation between various data sources, comprehensive threat intelligence and the case management experience.
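As a final sanity check, once an alert has fired in Splunk you can confirm the records are arriving by querying the destination table in the Defender portal or Log Analytics - the table name below assumes the SplunkAlertInfo_CL example used earlier.

```kusto
// Show the most recent Splunk alerts ingested into the example custom table
SplunkAlertInfo_CL
| where TimeGenerated > ago(1h)
| sort by TimeGenerated desc
| take 20
```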
Case Management: Incidents, Cases, and When to Use Them

In March, Case Management reached GA status within the unified portal. This introduced new functionality and experiences such as:

- A new case queue
- Custom statuses
- A new case task experience
- Linking incidents to cases

This can be a little confusing for existing users who are familiar with incidents and the incident experience in either Microsoft Defender or Sentinel. Let's break this down in more detail.

What are Incidents?

Incidents are artifacts that act as containers for alerts, signaling that a noteworthy event took place involving one or more malicious activities. They serve as a single landing page for alerts, activities, entities, and more.

When to use Incidents?

Incidents are the default experience for analysts as they perform investigations and response. Incidents are where they will find all the details available for alerts and entities while performing the core tasks of a SOC analyst. Incidents should be used when investigating and responding to malicious activity within the environment. The current incident experience provides features such as:

- Alert timeline
- Entity mapping and tracking
- Entity investigation graph
- Copilot for Security
- Pre-performed investigations and responses

What are Cases?

Cases are artifacts that represent an actionable or trackable item, such as an incident investigation, validating a threat-hunting hypothesis, reviewing threat intelligence, managing endpoint vulnerabilities, and more. They can exist without alerts or incidents.

When to use Cases vs. Incidents?

Cases serve as items that can be created to track important activities within the SOC; they don't have to be used only for incident response. A case can be created for any notable activity the SOC performs, as mentioned above, and cases can be used as a collaboration tool within your SOC team. While cases may seem redundant to incidents, they are not. Here are a few distinguishing points:

- As incidents are containers for alerts, cases can be containers for incidents, allowing multiple incidents to be worked on at once if they are related by threat actor, impacted entities, and more.
- Cases offer a native task experience, similar to the experience within Microsoft Sentinel in Azure.
- Cases offer attachment support, giving analysts a more traditional case management experience that incidents do not have.
- Cases offer a stronger collaborative experience, with rich text comments and communications within each case.
- Cases allow for more customization, such as custom statuses. Incidents do not offer custom statuses.

Let's look at two example scenarios:

Cases with Incidents

I am a SOC analyst reviewing the incident queue. I find an incident that involves multiple threat types and scripts. I would like to work on this incident with my colleagues while tracking notable artifacts that we find in our investigation. For example: I visit the unified incident queue and see that I have a multi-stage incident involving multiple alerts for multiple assets. I perform my initial triage and confirm that this is a true positive that should be addressed. I then cut a case and attach this incident to it for collaboration. Within the case, I can add a code block to list any query that I have performed within Advanced Hunting, as well as paste results from my queries directly into the case for tracking.
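As an illustration of the kind of query that might be pasted into the case, here is a hedged Advanced Hunting sketch - the table and columns are standard Advanced Hunting ones, but the device name and command-line filters are hypothetical and would come from the incident's entities.

```kusto
// Illustrative only: review script activity on a host involved in the multi-stage incident
DeviceProcessEvents
| where Timestamp > ago(7d)
| where DeviceName == "workstation-042.contoso.com"   // hypothetical impacted device from the incident
| where FileName in~ ("powershell.exe", "wscript.exe", "cscript.exe")
| where ProcessCommandLine has_any ("-enc", "Invoke-WebRequest", "DownloadString")
| project Timestamp, DeviceName, InitiatingProcessAccountName, FileName, ProcessCommandLine
| order by Timestamp desc
```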
If using Copilot for Security, I can copy and paste the Copilot incident summary into the case so that my colleagues can get an incident summary without having to leave the case.

Cases without Incidents

I am a SOC analyst responsible for remediating device vulnerabilities. I check our current CVEs within Exposure Management and see that I have several devices that are currently vulnerable to CVE-2025-5419, a Microsoft Edge (Chromium-based) vulnerability. I save my list of devices to a CSV file so that I can attach it to my case. I also copy the description of the CVE into the case notes to make it more convenient for my colleagues to join the case without needing to leave it. I then pivot to Advanced Hunting to review activities by any of these vulnerable devices. I have a match and would like to connect that result to my case, so I use Export > Copy to clipboard so that I can paste it into the case. Back within the case, I upload the CSV of exposed devices as evidence, leave a message formatted to draw attention to the findings, and paste in the results of my query. Based on my findings, I generate new tasks for each device owner and paste in the instructions for remediating the CVE.

These are just some examples of the many uses for cases within the Defender portal. Hopefully this highlights the versatility of case management today and how it can operate both with and without an incident involved. Keep an eye out for more improvements as Case Management matures. If you are looking to learn more about case management, check out the resources below:

- Public documentation: Manage security operations cases natively in the Microsoft Defender portal - Unified security operations | Microsoft Learn
- Video-based learning: https://www.youtube.com/watch?v=G-vfMJSL11g
- Demo: Case Management in Microsoft Defender
Integrating Fluent Bit with Microsoft Sentinel

This guide will walk you through the steps required to integrate Fluent Bit with Microsoft Sentinel. Note that this article assumes you already have a Sentinel workspace, a Data Collection Endpoint, a Data Collection Rule, an Entra ID application and a Fluent Bit installation. As mentioned above, the Logs Ingestion API supports ingestion into both custom tables and built-in tables such as CommonSecurityLog, Syslog, WindowsEvent and more. To check which tables are supported, please see the following article: https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview#supported-tables

Prerequisites

Before beginning the integration process, ensure you have the following:

- An active Azure subscription with Microsoft Sentinel enabled
- A Microsoft Entra ID application, taking note of the Client ID, Tenant ID and client secret - to create one, check this article: https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app?tabs=certificate
- A Data Collection Endpoint (DCE) - to create a data collection endpoint, please check this article: https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/data-collection-endpoint-overview?tabs=portal
- A Data Collection Rule (DCR) - fields in the Data Collection Rule need to match exactly the table columns and the fields coming from the log source. To create a DCR, please check this article: https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/data-collection-rule-create-edit?tabs=cli
- Depending on the source, either a custom table to be created or an existing table from the Log Analytics workspace
- Fluent Bit installed on your server or container - if you haven't installed Fluent Bit yet, the following article has instructions per operating system: https://docs.fluentbit.io/manual/installation/getting-started-with-fluent-bit

High-level architecture

Step 1: Setting up the Fluent Bit configuration file

Before stepping into the configuration: Fluent Bit has numerous output plugins, and one of them sends data through the Log Analytics Logs Ingestion API, both to supported Sentinel tables and to custom tables. You can find more information in the Fluent Bit documentation: https://docs.fluentbit.io/manual/pipeline/outputs/azure_logs_ingestion

Moving forward, to configure Fluent Bit to send logs to the Sentinel Log Analytics workspace, take note of the specific input plugin you are using (or intend to use) to receive logs and how it can be referenced in the output plugin. For example, most Fluent Bit input plugins allow you to set a "tag" key, which can then be matched in the output plugin so that only the intended logs are sent. Alternatively, in a scenario where multiple input plugins are used and all of them are required to send logs to Sentinel, a wildcard match ("*") could be used. As another example, where there are multiple input plugins of type "HTTP" and you want to send only a specific one to Sentinel, the "match" field must be set according to the position of the required input plugin - for example "match http.2" if the input plugin is the 3rd in the list of HTTP inputs. If nothing is specified in the "match" field, it will assume "http.0" by default.
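Before walking through the file layout, here is a minimal sketch of what such a configuration could look like, pairing the dummy input used later in this article with the azure_logs_ingestion output. The key names follow the plugin's documentation, but verify them against your Fluent Bit version; the tenant/client values, DCE URL, DCR ID and table name are placeholders.

```
[SERVICE]
    Flush        5
    Log_Level    info

[INPUT]
    Name    dummy
    Tag     dummy.log
    Dummy   {"Message":"this is a sample message for testing fluent bit integration to Sentinel","Activity":"fluent bit dummy input plugin","DeviceVendor":"Ubuntu"}

[OUTPUT]
    Name            azure_logs_ingestion
    Match           dummy.log
    tenant_id       <tenant-id>
    client_id       <entra-app-client-id>
    client_secret   <entra-app-client-secret>
    dce_url         https://my-dce-a1b2.westeurope-1.ingest.monitor.azure.com
    dcr_id          dcr-00000000000000000000000000000000
    table_name      FluentBitSample_CL
    time_generated  true
```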
With that in mind, here is how the config file is laid out. The configuration file is located at "/etc/fluent-bit/fluent-bit.conf". The first part is the definition of all input plugins, followed by the filter plugins - which you can use, for example, to rename fields from the source so that they match what exists in the Data Collection Rule schema and the Sentinel table columns - and finally the output plugins. Below is a screenshot of a sample config file.

INPUT plugins section:

In this example we're going to use the dummy input to send sample messages to Sentinel. However, in your scenario you could leverage other input plugins within the same config file. After everything is configured in the input section, make sure to complete the FILTER section if needed, and then move on to the output plugin section (screenshot below).

OUTPUT plugins section:

In this section, we have output plugins that write to a local file based on two tags ("dummy.log" and "logger"), an output plugin that prints the output in JSON format, and the output plugin responsible for sending data to Microsoft Sentinel. As you can see, this last one matches the tag "dummy.log", for which we've set up the message {"Message":"this is a sample message for testing fluent bit integration to Sentinel", "Activity":"fluent bit dummy input plugin", "DeviceVendor":"Ubuntu"}. Make sure you insert the correct parameters in the output plugin - in this scenario, the "azure_logs_ingestion" plugin.

Step 2: Fire up Fluent Bit

When the file is ready to be tested, execute the following:

sudo /opt/fluent-bit/bin/fluent-bit -c /etc/fluent-bit/fluent-bit.conf

Fluent Bit will start by initializing all the plugins defined in the config file. Then your access token should be retrieved, provided everything is set up correctly in the output plugin (app registration details, Data Collection Endpoint URL, Data Collection Rule ID, Sentinel table, and - importantly - the output plugin name is actually "azure_logs_ingestion"). Within a couple of minutes you should see this data in your Microsoft Sentinel table, either an existing table or a custom table created for the specific log source.

Summary

Integrating Fluent Bit with Microsoft Sentinel provides a powerful solution for log collection and analysis. By following this guide, we hope you can set up a seamless integration that enhances your organization's ability to monitor and respond to security threats - just carefully ensure that all fields processed in Fluent Bit map exactly to the fields in the Data Collection Rule and the Sentinel table within the Log Analytics workspace. Special thanks to Bindiya Priyadarshini, who collaborated with me on this blog post. Cheers!
Introducing Threat Intelligence Ingestion Rules

Microsoft Sentinel just rolled out a powerful new public preview feature: Ingestion Rules. This feature lets you fine-tune your threat intelligence (TI) feeds before they are ingested into Microsoft Sentinel. You can now set custom conditions and actions on Indicators of Compromise (IoCs), threat actors, attack patterns, identities, and their relationships. Use cases include:

- Filter out false positives: suppress IoCs from feeds known to generate frequent false positives, ensuring only relevant intel reaches your analysts.
- Extend IoC validity periods for feeds that need longer lifespans.
- Tag TI objects to match your organization's terminology and workflows.

Get Started Today with Ingestion Rules

To create a new ingestion rule, navigate to "Intel Management" and click on "Ingestion rules". With the new Ingestion rules feature, you have the power to modify or remove indicators even before they are integrated into Sentinel; these rules act on indicators currently in the ingestion pipeline.

Note: It can take up to 15 minutes for a rule to take effect.

Use Case #1: Delete IoCs with a low confidence score during ingestion

When ingesting IoCs from TAXII, the Upload API or file upload, indicators are imported continuously. With pre-ingestion rules, you can filter out indicators that do not meet a certain confidence threshold. Specifically, you can set a rule to drop all indicators in the pipeline with a confidence score of 0, ensuring that only reliable data makes it through.

Use Case #2: Extending IoCs

A rule can be created to automatically extend the expiration date for all indicators in the pipeline where the confidence score is greater than 75. This ensures that these high-value indicators remain active and usable for a longer duration, enhancing the overall effectiveness of threat detection and response.

Use Case #3: Bulk Tagging

Bulk tagging is an efficient way to manage and categorize large volumes of indicators based on their confidence scores. With pre-ingestion rules, you can set up a rule to tag all indicators in the pipeline where the confidence score is greater than 75. This automated tagging process helps in organizing indicators, making it easier to search, filter, and analyze them based on their tags. It streamlines the workflow and improves the overall management of indicators within Sentinel.

Managing Ingestion Rules

In addition to the specific use cases mentioned, managing ingestion rules gives you control over the entire ingestion process.

1. Reorder rules - You can reorder rules to prioritize certain actions over others, ensuring that the most critical rules are applied first. This flexibility allows for a tailored approach to data ingestion, optimizing the system's performance and accuracy.

2. Create from - Creating new ingestion rules from existing ones can save a significant amount of time and offers the flexibility to incorporate additional logic or remove unnecessary elements. Effectively duplicating rules this way means you can quickly adapt to new requirements, streamline operations, and maintain a high level of efficiency in managing your data ingestion process.

3. Delete ingestion rules - Over time, certain rules may become obsolete or redundant as your organizational needs and security strategies evolve. It's important to note that each workspace is limited to a maximum of 25 ingestion rules.
Having a clean and relevant set of rules ensures that your data ingestion process remains streamlined and efficient, minimizing unnecessary processing and potential conflicts. Deleting outdated or unnecessary rules allows for a more focused approach to threat detection and response. It reduces clutter, which can significantly enhance performance. By regularly reviewing and purging obsolete rules, you maintain a high level of operational efficiency and ensure that only the most critical and up-to-date rules are in place.

Conclusion

By leveraging these pre-ingestion rules effectively, you can enhance the quality and reliability of the IoCs ingested into Sentinel, leading to more accurate threat detection and an improved security posture for your organization.
Improve SecOps collaboration with case management

Are you using a 3rd-party case management system for the SecOps work you do in Microsoft Sentinel or Defender XDR? Do you struggle to find a solution that encompasses the specific needs of your security team? We are excited to announce a new case management solution, now in public preview. This is our first step towards providing a native, security-focused case management system that spans all SecOps workloads in the Defender portal, removing customer reliance on 3rd-party SIEM/XDR and ticketing systems. This will be available for all Microsoft Sentinel customers that have onboarded to the unified SecOps platform.
Use Azure DevOps to manage Sentinel for MSSPs and Multi-tenant Environments

Automate Sentinel resource deployment in multi-tenant scenarios using Azure DevOps and Sentinel Repositories. Enable version control, collaboration, and streamlined updates for consistent and secure configurations.