abhisheksharan
- Ingesting custom application logs (Text/JSON file) to Microsoft Sentinel

This blog continues my previous post, Demystifying Log Ingestion API, where I discussed ingesting custom log files into Microsoft Sentinel via the Log Ingestion API. In this post I will delve into ingesting custom application logs in Text/JSON format into Microsoft Sentinel. Note: for demo purposes I will use a log in JSON format.

First, let's start with why this is important. Many applications and services log information to JSON/Text files instead of standard logging services such as the Windows Event Log or Syslog. In several use cases these custom application logs must be monitored, which makes this integration a crucial part of SOC monitoring.

How do you implement this integration? Custom application logs in Text/JSON format can be collected with the Azure Monitor Agent (AMA) and stored in a Log Analytics workspace alongside data collected from other sources. There are two ways to do it:

- Create a DCR-based custom table and link it with a Data Collection Rule (DCR) and a Data Collection Endpoint (DCE).
- Leverage the Custom logs via AMA content hub solution.

I will discuss both approaches in this blog. Let's see it in action now.

Leveraging a DCR-based custom table to ingest custom application logs

Using this approach, we first create a DCR-based custom table and link it with a DCR and a DCE.

Prerequisites for this approach:

- A Log Analytics workspace where you have at least contributor rights.
- A data collection endpoint (DCE) in the same region as the Log Analytics workspace. See How to set up data collection endpoints based on your deployment for details.
- Either a new or existing DCR, as described in Collect data with Azure Monitor Agent.

Basic operations: The following diagram shows the basic operation of collecting log data from a JSON file. The agent watches for any log files on the local disk that match a specified name pattern. Each entry in the log is collected and sent to Azure Monitor. The incoming stream defined by the user is used to parse the log data into columns. A default transformation is used if the schema of the incoming stream matches the schema of the target table.

Detailed steps as follows:

1. Browse to Log Analytics workspace > Settings > Tables > New custom log (DCR-based).
2. Enter the table name; note that the suffix _CL is added automatically.
3. Use an existing DCR or create a new one, and link a DCE.
4. Upload a sample log file in JSON format to create the table schema. In my use case, I created a few columns such as TimeGenerated, FilePath, and Computer using the transformation query below:

source
| extend TimeGenerated = todatetime(Time), FilePath = tostring('C:\\Custom Application\\v.1.*.json'), Computer = tostring('DC-WinSrv22')

5. Review and create the table.
6. Go to the Data Collection Rule > Resources, add the application server, and link it with the DCE.

If all configurations are correct, the data should populate in the custom table within a few minutes, as shown below.

Note: Ensure that the application server is reporting to the correct Log Analytics workspace and that the DCR and DCE are linked to the server. The DCRs associated with a VM can be listed with the following PowerShell cmdlet (a fuller sketch follows below):

Get-AzDataCollectionRuleAssociation -TargetResourceId {ResourceID}
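For reference, here is a minimal PowerShell sketch of that check. It assumes an authenticated Az PowerShell session; the resource group name is a placeholder for your environment, and the properties surfaced on the association objects may vary slightly by Az.Monitor module version.

# List the data collection rules associated with a specific VM.
# Replace the resource group and VM name with values from your environment.
$vm = Get-AzVM -ResourceGroupName "rg-app-servers" -Name "DC-WinSrv22"

Get-AzDataCollectionRuleAssociation -TargetResourceId $vm.Id |
    Format-List Name, DataCollectionRuleId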
Please note that the 'Custom JSON log' data source configuration is currently unavailable in the Azure portal; use the Azure CLI or an ARM template for the configuration. However, the 'Custom Text Logs' data source can be configured from the portal (DCR > Data Sources).

Leveraging Custom logs via AMA Data Connector

We've recently released a content hub solution for ingesting custom logs via AMA. This approach is straightforward because the required columns, such as TimeGenerated and RawData, are created automatically.

Detailed steps as follows:

1. Browse to Microsoft Sentinel > Content Hub > Custom Logs AMA and install the solution.
2. Go to Manage > Open the connector page > Create Data Collection Rule.
3. Enter the rule name and the target VM, and specify whether you wish to create a new table; if so, provide a table name. You'll also need to provide the file pattern (wildcards are supported) along with transformation logic, if applicable. In my use case, I am not using any transformation.
4. Once the DCR is created, wait for some time and validate whether logs are streaming. If all the configurations are correct, you'll see the logs in the table as shown below.

Please note that since we used DCR-based custom tables, the table plan can be switched to Basic if needed. Additionally, DCR-based custom tables support transformations, so irrelevant data can be dropped or the incoming data can be split across multiple tables.

References:

- Collect logs from a JSON file with Azure Monitor Agent - Azure Monitor | Microsoft Learn
- Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel - AMA | Microsoft Learn
- Demystifying Log Ingestion API | Microsoft Community Hub
- Save ingestion costs by splitting logs into multiple tables and opting for the basic tier! | Microsoft Community Hub
- Workspace & DCR Transformation Simplified | Microsoft Community Hub
- Break the 30,000 Rows Limit with Advanced Hunting API!

In this blog post, I will explain how to use the Advanced Hunting APIs to bypass the 30,000-row limit in Defender XDR's advanced hunting feature. Before we delve into the topic, let's understand what Advanced Hunting in Defender XDR is and what problem we are trying to solve.

Advanced Hunting in Defender XDR (Extended Detection and Response) is a powerful feature in Microsoft Defender that allows security professionals to query and analyse large volumes of raw data to uncover potential threats across an organization's environment. It provides a flexible query interface where users can write custom queries using Kusto Query Language (KQL) to search through data collected from various sources, such as endpoints, emails, cloud apps, and more.

Key features of Advanced Hunting in Defender XDR include:

- Custom Queries: You can create complex queries to search for specific activities, patterns, or anomalies across different security data sources.
- Deep Data Analysis: It allows for deep analysis of raw data, going beyond the pre-defined alerts and detections to identify potential threats, vulnerabilities, or suspicious behaviours that might not be immediately visible.
- Cross-Platform Search: Advanced Hunting enables users to query across a wide range of data sources, including Microsoft Defender for Endpoint, Defender for Identity, Defender for Office 365, and Defender for Cloud Apps.
- Automated Response: It supports creating automated response actions based on the findings of advanced hunts, helping to quickly mitigate threats.
- Integration with Threat Intelligence: You can enrich your hunting queries with external threat intelligence to correlate indicators of compromise (IOCs) and identify malicious activities.
- Visualizations and Insights: Results from hunting queries can be visualized to help spot trends and patterns, making it easier to investigate and understand the data.

Advanced Hunting is a valuable tool for proactive threat detection, investigation, and response within Defender XDR, giving security teams more flexibility and control over the security posture of their organization.

Advanced Hunting quotas and service limits

To keep the service performant and responsive, advanced hunting sets various quotas and usage parameters (also known as "service limits"). By design, each Advanced Hunting query can fetch up to 30,000 rows. Refer to our public documentation for more information about the service limits in Advanced Hunting. In this blog, we will focus on leveraging the Advanced Hunting APIs to bypass the 30,000-row service limit.

When a query result exceeds 30,000 rows, it's usually recommended to either:

- Refine/optimize the query by introducing filters that separate it into distinct segments, and then merge the results into a comprehensive report; or
- Leverage the Advanced Hunting API, as it can fetch up to 100,000 rows: Advanced Hunting API - Microsoft Defender for Endpoint | Microsoft Learn

We're going to focus on the second approach here. Let's dive deeper into the process of fetching up to 100,000 records using the Advanced Hunting API:

1. Log in to Microsoft Defender XDR (https://security.microsoft.com/).
2. Browse to Endpoints > Partners and APIs > API Explorer.
3. Submit a POST request with a JSON body containing the Advanced Hunting query (a minimal PowerShell sketch of the same request appears below):

POST https://api.securitycenter.microsoft.com/api/advancedqueries/run
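Outside API Explorer, the same endpoint can be called programmatically. Here is a minimal PowerShell sketch; it assumes you have already obtained an OAuth 2.0 access token with the appropriate advanced hunting permission for the Defender for Endpoint API, and the query text and output path are placeholders.

# Minimal sketch: run an advanced hunting query via the API and save the results to CSV.
# $token is assumed to hold a valid OAuth 2.0 access token for the Defender for Endpoint API.
$token = "<access token>"

$body = @{
    Query = "DeviceInfo | take 100"   # placeholder query; use your own KQL here
} | ConvertTo-Json

$params = @{
    Method      = "Post"
    Uri         = "https://api.securitycenter.microsoft.com/api/advancedqueries/run"
    Headers     = @{ Authorization = "Bearer $token" }
    ContentType = "application/json"
    Body        = $body
}
$response = Invoke-RestMethod @params

# The rows come back in the Results property of the response; export them wherever suits you.
$response.Results | Export-Csv -Path "C:\Temp\ah_results.csv" -NoTypeInformation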
Let's take an example of an AH query that fetches details about devices and their open CVEs.

Sample Advanced Hunting query:

DeviceTvmSoftwareVulnerabilities
| join kind=inner (
    DeviceTvmSoftwareVulnerabilitiesKB
    | extend CveId = tostring(CveId) // Cast CveId to string in the second leg of the join
    | project CveId, VulnerabilitySeverityLevel, CvssScore, PublishedDate, VulnerabilityDescription
) on CveId
| project DeviceName, OSPlatform, OSVersion, CveId, VulnerabilitySeverityLevel, CvssScore, PublishedDate, VulnerabilityDescription, RecommendedSecurityUpdate

Note: The advanced hunting query in the JSON template should be written on a single line.

Let's see it in action now. My JSON template is as follows:

{
  "Query": "DeviceTvmSoftwareVulnerabilities| join kind=inner (DeviceTvmSoftwareVulnerabilitiesKB | extend CveId = tostring(CveId) | project CveId, VulnerabilitySeverityLevel, CvssScore, PublishedDate, VulnerabilityDescription) on CveId | project DeviceName, OSPlatform, OSVersion, CveId, VulnerabilitySeverityLevel, CvssScore, PublishedDate, VulnerabilityDescription, RecommendedSecurityUpdate"
}

1. Execute the query; it returns a response (as shown below).
2. Copy the response and save it as a JSON file locally.
3. Use PowerShell to convert the JSON to CSV format. For example, the following PowerShell one-liner converts the JSON file into a CSV report:

Get-Content "<Location of JSON file>" | ConvertFrom-Json | Select-Object -ExpandProperty Results | ConvertTo-Csv -NoTypeInformation | Out-File "<Location to save CSV file>" -Encoding ASCII

The CSV report should contain up to 100,000 records. I would also recommend reviewing the limitations of the Advanced Hunting API: Advanced Hunting API - Microsoft Defender for Endpoint | Microsoft Learn

References:

- Advanced Hunting APIs: Advanced Hunting API - Microsoft Defender for Endpoint | Microsoft Learn
- Advanced Hunting Overview: Overview - Advanced hunting - Microsoft Defender XDR | Microsoft Learn
- Automate MDE Extension Status Checks with PowerShell

In this blog, I will dive into an automated approach for efficiently retrieving the installation status of the MDE extensions (MDE.Windows or MDE.Linux) on Azure VMs safeguarded by the Defender for Servers (P1 or P2) plans. This method not only streamlines the monitoring process but also ensures that your critical endpoints are continuously protected with the latest Defender capabilities. By leveraging automation, you can quickly identify any discrepancies or gaps in extension deployment, allowing for swift remediation and fortifying your organization's security posture. Stay tuned as we explore how to seamlessly track the status of MDE extensions across your Azure VMs, ensuring robust and uninterrupted endpoint protection.

Before we move forward, I'll assume you're already familiar with Defender for Servers' powerful capability to automatically onboard protected servers to Microsoft Defender for Endpoint. This seamless integration ensures your endpoints are swiftly equipped with industry-leading threat protection, providing a crucial layer of defense without the need for manual intervention. With this foundation in place, we can now explore how to automate the process of monitoring and verifying the installation status of MDE extensions across your Azure VMs, ensuring your security remains proactive and uninterrupted.

To provide some quick context: when the Defender for Servers (P1 or P2) plan is enabled in Microsoft Defender for Cloud, the "Endpoint Protection" feature is also enabled by default. With "Endpoint Protection" enabled, Microsoft Defender for Cloud deploys the MDE.Windows or MDE.Linux extension, depending on the operating system. These extensions play a crucial role in onboarding your Azure VMs to Microsoft Defender for Endpoint, ensuring they are continuously monitored and protected from emerging threats.

However, there may be instances where the extensions fail to install on certain VMs for various reasons. In these cases, it's crucial to identify the root cause of the failure in order to effectively plan and implement the necessary remediation actions. You can leverage an Azure Resource Graph query or PowerShell to fetch this information. In this blog, we will focus on the PowerShell approach (an Azure Resource Graph sketch is included at the end of this post for reference).

Let's get started. I've developed an interactive PowerShell script that allows you to easily retrieve data for the MDE.Windows or MDE.Linux extensions. Below are the detailed steps to follow:

1. Download the script locally from the GitHub repo.
2. Update the output file path for your environment (line #84 in the script) and save the file.
3. Sign in to the Azure portal.
4. Launch Cloud Shell.
5. Upload the script you downloaded locally to Cloud Shell.
6. Load the uploaded script and read the "Disclaimer" section. If you agree, type 'yes' to continue; if you disagree, any other response (for example, 'no') terminates the script.
7. The output is stored in CSV format; download it to review the "Message" column in detail. In the "Manage files" section, click Download and provide the output file path to download the CSV report, as shown below.

Once the CSV file is downloaded, you can review the detailed information about the extension failure messages.

Including the PowerShell script for reference:

# Disclaimer
Write-Host "************************* DISCLAIMER *************************"
Write-Host "The author of this script provides it 'as is' without any guarantees or warranties of any kind."
Write-Host "By using this script, you acknowledge that you are solely responsible for any damage, data loss, or other issues that may arise from its execution."
Write-Host "It is your responsibility to thoroughly test the script in a controlled environment before deploying it in a production setting."
Write-Host "The author will not be held liable for any consequences resulting from the use of this script. Use at your own risk."
Write-Host "***************************************************************"
Write-Host ""

# Prompt the user for consent after displaying the disclaimer
$consent = Read-Host -Prompt "Do you consent to proceed with the script? (Type 'yes' to continue)"

# If the user does not consent, exit the script
if ($consent -ne "yes") {
    Write-Host "You did not consent. Exiting the script."
    exit
}

# If consent is given, continue with the rest of the script
Write-Host "Proceeding with the script..."

# Get all VMs in the subscription
$vms = Get-AzVM

# Initialize an array to collect the output
$outputData = @()

# Loop through each VM and check extensions
$vms | ForEach-Object {
    $vm = $_

    # Get the VM status with extensions
    $vmStatus = Get-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Status
    $extensions = ($vmStatus).Extensions | Where-Object { $_.Name -eq "MDE.Windows" -or $_.Name -eq "MDE.Linux" }

    # Get the VM OS type (Windows/Linux)
    $osType = $vm.StorageProfile.OsDisk.OsType

    if ($extensions.Count -eq 0) {
        # If no MDE extensions found, append a message indicating they are missing
        $outputData += [PSCustomObject]@{
            "Subscription Name" = (Get-AzContext).Subscription.Name
            "VM Name"           = $vm.Name
            "VM OS"             = $osType
            "Extension Name"    = "MDE Extensions Missing"
            "Display Status"    = "N/A"
            "Message"           = "MDE.Windows or MDE.Linux extensions are missing."
        }
    } else {
        # Process the extensions if found
        $extensions | ForEach-Object {
            # Get the message and parse it into a single line
            $message = $_.Statuses.Message

            # Remove line breaks or newlines and replace them with spaces
            $singleLineMessage = $message -replace "`r`n|`n|`r", " "

            # If the message is JSON, we can parse it (optional)
            try {
                $parsedMessage = $singleLineMessage | ConvertFrom-Json
                # Convert the JSON back to a single-line string
                $singleLineMessage = $parsedMessage | ConvertTo-Json -Compress
            } catch {
                # If it's not JSON, keep the message as is
            }

            # Create a custom object for the table output with the single-line message
            $outputData += [PSCustomObject]@{
                "Subscription Name" = (Get-AzContext).Subscription.Name
                "VM Name"           = $vm.Name
                "VM OS"             = $osType
                "Extension Name"    = $_.Name
                "Display Status"    = $_.Statuses.DisplayStatus
                "Message"           = $singleLineMessage
            }
        }
    }
}

# Output to the console in a formatted table
$outputData | Format-Table -Property "Subscription Name", "VM Name", "VM OS", "Extension Name", "Display Status", "Message"

# Specify the CSV file path
$csvFilePath = "/home/abhishek/MDEExtReport/mdeextreport_output.csv" # Update the path to where you want to store the CSV

# Check if the directory exists
$directory = [System.IO.Path]::GetDirectoryName($csvFilePath)
if (-not (Test-Path -Path $directory)) {
    # Create the directory if it doesn't exist
    Write-Host "Directory does not exist. Creating directory: $directory"
    New-Item -ItemType Directory -Force -Path $directory
}

# Check if the file exists and create it if missing
if (-not (Test-Path -Path $csvFilePath)) {
    Write-Host "File does not exist. Creating file: $csvFilePath"
}

# Save the output to a CSV file locally
$outputData | Export-Csv -Path $csvFilePath -NoTypeInformation
Write-Host "The report has been saved to: $csvFilePath"

Disclaimer: The author of this script provides it 'as is' without any guarantees or warranties of any kind. By using this script, you acknowledge that you are solely responsible for any damage, data loss, or other issues that may arise from its execution. It is your responsibility to thoroughly test the script in a controlled environment before deploying it in a production setting. The author will not be held liable for any consequences resulting from the use of this script. Use at your own risk.

I trust this script will significantly reduce the effort required to investigate the root cause of MDE extension installation failures, streamlining the troubleshooting process and enhancing operational efficiency.
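As noted above, Azure Resource Graph is an alternative way to pull the same extension status. The following is a minimal sketch, assuming the Az.ResourceGraph PowerShell module and an authenticated session; depending on the module version, the rows may be returned directly or nested under a Data property, and the exact columns differ from the CSV report produced by the script above.

# Minimal sketch: query MDE extension provisioning state across VMs with Azure Resource Graph.
# Requires the Az.ResourceGraph module (Install-Module Az.ResourceGraph) and an authenticated Az session.
$query = @"
resources
| where type =~ 'microsoft.compute/virtualmachines/extensions'
| where name in~ ('MDE.Windows', 'MDE.Linux')
| project vmId = tolower(tostring(split(id, '/extensions/')[0])), name, provisioningState = tostring(properties.provisioningState)
"@

$results = Search-AzGraph -Query $query -First 1000
$results | Format-Table vmId, name, provisioningState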
- Detecting and Alerting on MDE Sensor Health Transitions Using KQL and Logic Apps

Introduction

Maintaining the health of Microsoft Defender for Endpoint (MDE) sensors is essential for ensuring continuous security visibility across your virtual machine (VM) infrastructure. When a sensor transitions from an "Active" to an "Inactive" state, it indicates a loss of telemetry from that device, potentially creating blind spots in your security posture. To proactively address this risk, it's important to detect these transitions promptly and alert your security team for timely remediation.

This guide walks you through a practical approach to automate this process using a Kusto Query Language (KQL) query to identify sensor health state changes and an Azure Logic App to trigger email alerts. By the end, you'll have a fully functional, automated monitoring solution that enhances your security operations with minimal manual effort.

Why Monitoring MDE Sensor Health Transitions is Important

- Ensures Continuous Security Visibility: MDE sensors provide critical telemetry data from endpoints. If a sensor becomes inactive, that device stops reporting, creating a blind spot in your security monitoring.
- Prevents Delayed Threat Detection: Inactive sensors can delay the identification of malicious activity, giving attackers more time to operate undetected within your environment.
- Supports Effective Incident Response: Without telemetry, incident investigations become harder and slower, reducing your ability to respond quickly and accurately to threats.
- Identifies Root Causes Early: Monitoring transitions helps uncover underlying issues such as service disruptions, misconfigurations, or agent failures that may otherwise go unnoticed.
- Closes Security Gaps Proactively: Early detection of inactive sensors allows teams to take corrective action before adversaries exploit the lapse in coverage.
- Enables Automation and Scalability: Using KQL and Logic Apps automates the detection and alerting process, reducing manual effort and ensuring consistent monitoring across large environments.
- Improves Operational Efficiency: Automated alerts reduce the need for manual checks, freeing up security teams to focus on higher-priority tasks.
- Strengthens Overall Security Posture: Proactive monitoring and fast remediation contribute to a more resilient and secure infrastructure.

Prerequisites

- MDE enabled: Defender for Endpoint must be active and reporting on all relevant devices.
- DeviceInfo table streamed into the Microsoft Sentinel workspace (from the Defender XDR connector): required to run the KQL query and manage alerts.
- Log Analytics workspace: to run the KQL query.
- Azure subscription: needed to create and manage Logic Apps.
- Permissions: sufficient RBAC access to Logic Apps, Log Analytics, and email connectors.
- Email connector setup: Outlook, SendGrid, or similar must be configured in Logic Apps.
- Basic knowledge: familiarity with KQL and Logic App workflows is helpful.

High-level summary of the Logic Apps flow for monitoring MDE sensor health transitions:

1. Trigger: Recurrence - The Logic App starts on a schedule (e.g., hourly, daily, or weekly) using a recurrence trigger.
2. Action: Run KQL Query - Executes a Kusto query against the Log Analytics workspace to detect devices where the MDE sensor transitioned from Active to Inactive in the last 7 days.
3. Condition (optional): Check for Results - Optionally checks whether the query returned any results, to avoid sending empty alerts.
4. Action: Send Email Notification - If results are found, an email is sent to the security team with details of the affected devices, using dynamic content from the query output.

Logic Apps Flow

KQL Query to Detect Sensor Transitions

Use the following KQL query in Microsoft Defender XDR or Microsoft Sentinel to identify VMs where the sensor health state changed from Active to Inactive in the last 7 days:

DeviceInfo
| where Timestamp >= ago(7d)
| project DeviceName, DeviceId, Timestamp, SensorHealthState
| sort by DeviceId asc, Timestamp asc
| serialize
| extend PrevState = prev(SensorHealthState), PrevDeviceId = prev(DeviceId)
| where DeviceId == PrevDeviceId and PrevState == "Active" and SensorHealthState == "Inactive"
| summarize FirstInactiveTime = min(Timestamp) by DeviceName, DeviceId
| extend DaysInactive = datetime_diff('day', now(), FirstInactiveTime)
| order by FirstInactiveTime desc

This KQL query does the following:

- Detects devices whose sensors have stopped functioning (changed from Active to Inactive) in the past 7 days; the PrevDeviceId check keeps the comparison within the same device's records.
- Returns the first time this happened for each affected device.
- Shows how long each device has been inactive.

Sample Email for reference

How This Helps the Security Team

- Maintains Endpoint Visibility: Detects when devices stop reporting telemetry, helping prevent blind spots in threat detection.
- Enables Proactive Threat Management: Identifies sensor health issues before they become security incidents, allowing early intervention.
- Reduces Manual Monitoring Effort: Automates the detection and alerting process, freeing up analysts to focus on higher-priority tasks.
- Improves Incident Response Readiness: Ensures all endpoints are actively monitored, which is critical for timely and accurate incident investigations.
- Supports Compliance and Audit Readiness: Demonstrates continuous monitoring and control over endpoint health, which is often required for regulatory compliance.
- Prioritizes Remediation Efforts: Provides a clear list of affected devices, helping teams focus on the most recent or longest-inactive endpoints.
- Integrates with Existing Workflows: Can be extended to trigger ticketing systems, remediation scripts, or SIEM alerts, enhancing operational efficiency.

Conclusion

By combining KQL analytics with Azure Logic Apps, you can automate the detection and notification of sensor health issues in your VM fleet, ensuring continuous security coverage and rapid response to potential risks.
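For reference, a quick way to validate the query before wiring it into the Logic App is to run it against the workspace from PowerShell. The following is a minimal sketch; it assumes the Az.OperationalInsights module, an authenticated Az session, and a placeholder workspace ID. Depending on how the Defender XDR connector maps columns in your workspace, you may need to use TimeGenerated instead of Timestamp in the workspace copy of DeviceInfo.

# Minimal sketch: run the sensor-health transition query against the Log Analytics workspace.
# Requires the Az.OperationalInsights module; the workspace ID is a placeholder for your environment.
$workspaceId = "<log analytics workspace id>"

$query = @"
DeviceInfo
| where Timestamp >= ago(7d)
| project DeviceName, DeviceId, Timestamp, SensorHealthState
| sort by DeviceId asc, Timestamp asc
| serialize
| extend PrevState = prev(SensorHealthState), PrevDeviceId = prev(DeviceId)
| where DeviceId == PrevDeviceId and PrevState == "Active" and SensorHealthState == "Inactive"
| summarize FirstInactiveTime = min(Timestamp) by DeviceName, DeviceId
| extend DaysInactive = datetime_diff('day', now(), FirstInactiveTime)
| order by FirstInactiveTime desc
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results | Format-Table DeviceName, DeviceId, FirstInactiveTime, DaysInactive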