Optimize Azure Log Costs: Split Tables and Use the Auxiliary Tier with DCR
This blog is a continuation of my previous blog, where I discussed saving ingestion costs by splitting logs into multiple tables and opting for the Basic tier. Now that the transformation feature for Auxiliary logs has entered the Public Preview stage, I'll take a deeper dive, showing how to implement transformations that split logs across tables and route some of them to the Auxiliary tier.

A quick refresher: Azure Monitor offers several log plans which customers can opt for depending on their use cases. These log plans include:

- Analytics Logs – This plan is designed for frequent, concurrent access and supports interactive usage by multiple users. It drives the features in Azure Monitor Insights and powers Microsoft Sentinel. It is designed to manage critical and frequently accessed logs, optimized for dashboards, alerts, and advanced business queries.
- Basic Logs – Improved to support even richer troubleshooting and incident response with fast queries while saving costs. Now available with a longer retention period and the addition of KQL operators for aggregation and lookup.
- Auxiliary Logs – Our new, inexpensive log plan that enables ingestion and management of verbose logs needed for auditing and compliance scenarios. These may be queried with KQL on an infrequent basis and used to generate summaries.

More details about the log plans and their use cases can be found here: Azure Monitor Logs - Azure Monitor | Microsoft Learn

**Note** This blog is focused on switching to Auxiliary logs only. I recommend going through our public documentation for a detailed feature-by-feature comparison of the log plans, which should help you choose the right plan.

At this stage, I assume you're aware of the different log tiers that Azure Monitor offers and you've decided to switch to Auxiliary logs for high-volume, low-fidelity logs. Let's look at the high-level approach we're going to follow:

1. Review the relevant tables and figure out which portion of the logs can be moved to the Auxiliary tier.
2. Create a DCR-based custom table with the same schema as the original table. For example, if you wish to split the Syslog table and ingest a portion of it into the Auxiliary tier, create a DCR-based custom table with the same schema as Syslog. At this point, switching a table's plan via the UI is not possible, so I'd recommend using a PowerShell script to create the DCR-based custom table.
3. Once the DCR-based custom table is created, implement a DCR transformation to split the table.
4. Configure the total retention period of the Auxiliary table (this is done while creating the table).

Let's get started.

Use case: In this demo, I'll split the Syslog table and route "Informational" logs to the Auxiliary table.

Creating a DCR-based custom table

Previously a complex task, creating custom tables is now easy, thanks to a PowerShell script by MarkoLauren. Simply input the name of an existing table, and the script creates a DCR-based custom table with the same schema. Let's see it in action (a sketch of the underlying API call follows the steps below):

1. Download the script locally.
2. Update the resourceID details in the script and save it.
3. Upload the updated script to Azure Cloud Shell.
4. Load the file and enter the table name from which you wish to copy the schema; in my case, it's the "Syslog" table.
5. Enter the new table name, table type, and total retention period, as shown below.

**Note** We highly recommend you review the PowerShell script thoroughly and test it properly before executing it in production. We don't take any responsibility for the script.
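Under the hood, creating such a table boils down to a single PUT call against the workspace Tables API. The following is my own minimal sketch of the request body, not the author's script; the API version (2023-01-01-preview) and the trimmed Syslog-like column list are assumptions, so check the current Tables API reference before relying on it. The call targets:

PUT https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.OperationalInsights/workspaces/{workspace}/tables/Aux_Syslog_CL?api-version=2023-01-01-preview

```json
{
  "properties": {
    "plan": "Auxiliary",
    "totalRetentionInDays": 365,
    "schema": {
      "name": "Aux_Syslog_CL",
      "columns": [
        { "name": "TimeGenerated", "type": "datetime" },
        { "name": "Computer", "type": "string" },
        { "name": "Facility", "type": "string" },
        { "name": "SeverityLevel", "type": "string" },
        { "name": "SyslogMessage", "type": "string" }
      ]
    }
  }
}
```

Two details worth noting: the plan can only be set when the table is created (it cannot be switched to Auxiliary afterwards), and custom table names must end in _CL.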
As you can see, the Aux_Syslog_CL table has been created. Let's validate it in the Log Analytics workspace > Tables section.

Now that the Auxiliary table has been created, the next step is to update the Data Collection Rule template to implement the transformation logic that splits the logs. Since we already created the custom table, we now create transformation logic that splits the Syslog table and routes the logs with SeverityLevel "info" to the Auxiliary table. Let's see how it works:

1. Browse to the Data Collection Rule blade.
2. Open the DCR for the Syslog table, then click Export template > Deploy > Edit template, as shown below.
3. In the dataFlows section, I've created two streams to split the logs (see the sketch after this list):
   - 1st stream: drops the Syslog messages where SeverityLevel is "info" and sends the rest of the logs to the Syslog table.
   - 2nd stream: captures all Syslog messages where SeverityLevel is "info" and sends those logs to the Aux_Syslog_CL table.
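For illustration, here is a sketch of what the two dataFlows entries might look like in the exported template. The transformKql expressions are my assumptions based on the stream descriptions above, and the destination name (myLogAnalyticsWorkspace) is a placeholder; your exported template will have its own names. The // comments are annotations for this post (ARM templates tolerate them, but strip them if your tooling requires strict JSON):

```json
"dataFlows": [
  {
    // 1st stream: everything except "info" severities keeps flowing to the Syslog table
    "streams": [ "Microsoft-Syslog" ],
    "destinations": [ "myLogAnalyticsWorkspace" ],
    "transformKql": "source | where SeverityLevel != 'info'",
    "outputStream": "Microsoft-Syslog"
  },
  {
    // 2nd stream: "info" severities are routed to the Auxiliary custom table
    "streams": [ "Microsoft-Syslog" ],
    "destinations": [ "myLogAnalyticsWorkspace" ],
    "transformKql": "source | where SeverityLevel == 'info'",
    "outputStream": "Custom-Aux_Syslog_CL"
  }
]
```

The Custom- prefix on the outputStream is the convention for routing to a DCR-based custom table, while the built-in Syslog table keeps the Microsoft- prefix.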
Save and deploy the updated template.

Let's see if it works as expected. Browse to Azure > Microsoft Sentinel > Logs, and query the Auxiliary table to confirm that data is being ingested into it. As we can see, the logs where SeverityLevel is "info" are being ingested into the Aux_Syslog_CL table, and the rest of the logs are flowing into the Syslog table. Some nice cost savings are coming your way. Hope this helps!

Importing AWS Security Hub Findings into Microsoft Sentinel

This blog explores how to ingest AWS Security Hub findings into Microsoft Sentinel using native solutions. Although a GitHub reference for deploying an Azure Function-based solution is included, my experience assisting a customer with its implementation provided valuable insights. Instead of step-by-step instructions, I'll provide a high-level overview and guidance to navigate potential challenges. Let's dive in!

Ingest AWS Security Hub Events to Azure Sentinel: https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AWS-SecurityHubFindings/README.md#ingest-aws-security-hub-events-to-azure-sentinel

What is AWS Security Hub?

AWS Security Hub is a cloud security posture management (CSPM) service that performs automated, continuous security best practice checks against your AWS resources to help you identify misconfigurations, and aggregates your security alerts (i.e., findings) in a standardized format so that you can more easily enrich, investigate, and remediate them.

As of November 2024, we already have an S3 connector that ingests logs from specific AWS services (VPC Flow Logs, Amazon GuardDuty, CloudTrail, and CloudWatch) by pulling them from an S3 bucket. We will rely on this connector to receive findings from AWS Security Hub into the AWSCloudWatch table in Microsoft Sentinel, as these findings are sent to the CloudWatch service. While the data ingested into the AWSCloudWatch table may not be parsed exactly as expected, a KQL transformation rule will help address this; more on that later.

The flow looks like this:

1. AWS services are configured to send their logs to S3 (Simple Storage Service) buckets.
2. The S3 bucket sends notification messages to the SQS (Simple Queue Service) message queue whenever it receives new logs.
3. The Microsoft Sentinel AWS S3 connector polls the SQS queue at regular, frequent intervals. If there is a message in the queue, it contains the path to the log files.
4. The connector reads the message with the path, then fetches the files from the S3 bucket.
5. To connect to the SQS queue and the S3 bucket, Microsoft Sentinel uses a federated web identity provider (Microsoft Entra ID) for authenticating with AWS through OpenID Connect (OIDC) and assuming an AWS IAM role. The role is configured with a permissions policy giving it access to those resources.

Reference: https://learn.microsoft.com/en-us/azure/sentinel/connect-aws?tabs=s3#architecture-overview

To forward findings from AWS Security Hub to CloudWatch, we will use EventBridge. First, we will create a CloudWatch log group to serve as the destination for these events before setting up the EventBridge rule.

Creating a CloudWatch log group

Follow the steps here for creating a CloudWatch log group: Working with log groups and log streams — Amazon CloudWatch Logs. To be able to add a CloudWatch log group as a target in the EventBridge rule, the log group name must start with /aws/events. For reference: Configuring an EventBridge rule for Security Hub findings — AWS Security Hub

Creating the EventBridge rule

Create a new EventBridge rule with an event pattern. In my example I am not filtering out any findings, but if you would like to filter, for example based on severity labels such as INFORMATIONAL or LOW, you can update the event pattern; a sketch follows below. Refer here: Configuring an EventBridge rule for Security Hub findings — AWS Security Hub
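For illustration, here is a sketch of an event pattern with a severity filter. The source and detail-type values are the documented ones for Security Hub findings; the Severity.Label path is an assumption based on the AWS Security Finding Format, so verify it against a sample event from your own account (the EventBridge console enforces strict JSON, so the assumptions are noted here rather than as comments):

```json
{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Imported"],
  "detail": {
    "findings": {
      "Severity": {
        "Label": ["INFORMATIONAL", "LOW"]
      }
    }
  }
}
```

This variant forwards only findings whose severity label is INFORMATIONAL or LOW. To match all findings, as I did, omit the detail block entirely; to exclude specific labels instead, EventBridge's anything-but operator can be used in the Label array.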
Exporting logs from the CloudWatch log group to an S3 bucket

Logs arriving in the CloudWatch log group are in GZIP format, which is accepted by Microsoft Sentinel, so they can now be sent over to Sentinel: Exporting log data to Amazon S3 — Amazon CloudWatch Logs. We can even automate the process, as described here: Automate! Export of Cloudwatch Logs to S3 Bucket Using Lambda with Eventbridge Trigger — DEV Community

Now we can follow the instructions for setting up the S3 connector in Sentinel: Connect Microsoft Sentinel to Amazon Web Services to ingest AWS service log data | Microsoft Learn. We have already configured an AWS service (CloudWatch) to export logs to an S3 bucket, so skip that step.

Logs arriving in Microsoft Sentinel will appear in the AWSCloudWatch table but won't be parsed as expected; the finding is stored in the Message column.

KQL transformation

We can use this KQL transformation query to parse the logs:

```kusto
AWSCloudWatch
| where isnotempty(Message)                                   // Ensure the message is not empty
| extend CleanMessage = replace_regex(Message, @"^\S+\s", "") // Remove the timestamp at the beginning
| extend ParsedMessage = parse_json(CleanMessage)             // Parse the cleaned message as JSON
| where isnotempty(ParsedMessage)                             // Keep only rows where parsing succeeded
| extend FindingsArray = ParsedMessage.detail.findings        // Extract the 'findings' array
| mv-expand FindingsArray                                     // Expand the findings array into rows
| extend Findings = parse_json(FindingsArray)                 // Parse each finding as JSON
| extend Region = tostring(ParsedMessage.region),             // Region from the top-level message
         Severity = tostring(Findings.Severity.Label),        // Severity label from the finding
         Account = tostring(ParsedMessage.account),           // Account from the top-level message
         Product = tostring(Findings.ProductName)             // Product name from the finding
| project Title = tostring(Findings.Title), Region, Severity, Account, Product
```

Transformation query for parsing AWS Security Hub findings.

Debugging tips

#1 Not able to add a CloudWatch log group as a target in the EventBridge rule. To add a CloudWatch log group as a target, you can either create a new log group or use an existing one. The log group name must start with /aws/events.

#2 Cannot search the CloudWatch log group. Ensure that AWS service is selected as the target type.

#3 Error enabling and configuring event notifications using the Amazon S3 console. When configuring your S3 bucket to send notification messages to your SQS queue as part of the S3 connector setup, you might encounter an issue with the queue's access policy. In my experience, I received an access policy error even though the queue name was visible. If you face a similar issue, I recommend temporarily loosening the access policy attached to the queue and testing the setup again; a sketch of a workable policy follows below. Once it works, you can tighten the policy to meet your security requirements.
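For reference, this is a minimal SQS access policy that lets an S3 bucket deliver event notifications to the queue, following the standard AWS pattern for S3-to-SQS notifications. The queue ARN, bucket ARN, and account ID are placeholders I made up for this sketch; substitute your own, and scope the policy down once the notification works:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3ToSendMessages",
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:my-sentinel-queue",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:::my-sentinel-bucket" },
        "StringEquals": { "aws:SourceAccount": "123456789012" }
      }
    }
  ]
}
```

The two conditions restrict SendMessage to notifications coming from your specific bucket and account, which is usually tight enough once the initial setup error is resolved.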