Are you interested in sending your Microsoft Defender for Cloud logs to Azure Data Explorer? The usual pattern would be to use the Azure Log Analytics continuous export functionality, but the Defender logs aren’t currently supported there, so let’s take a look at another option. Did you know that Microsoft Defender for Cloud has Continuous Export built into the product?
Continuously export Microsoft Defender for Cloud data
So let’s walk through the steps to use this feature to export the data to Event Hub and then ingest it into Azure Data Explorer.
Now we need to set up our ingestion from Event Hub to Azure Data Explorer. At a very high level the process is to create a raw landing table in ADX (we’ll call it defenderraw, with a dynamic column named data that holds the payload), create an Event Hub data connection for the database, and map the incoming JSON to that column.
If you haven’t used One Click Ingestion before, I would suggest looking at the following video.
Azure Data Explorer One Click Ingestion for Azure Event Hub - YouTube
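If you prefer to create the landing table and mapping with commands rather than the one-click wizard, a minimal sketch could look like the following. The table name (defenderraw), the single dynamic column (data), and the mapping name are assumptions here; match them to whatever you configure in your Event Hub data connection.

// Raw landing table with one dynamic column to hold the exported payload
// (names are assumptions - align them with your Event Hub data connection).
.create table defenderraw (data: dynamic)

// JSON ingestion mapping that drops the entire event into the data column.
.create table defenderraw ingestion json mapping 'defenderraw_mapping'
    '[{"column":"data","path":"$","datatype":"dynamic"}]'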
Now that you have the raw data landing in ADX, you probably want to fork it out into individual tables via update policies based on the event type. Below we’ll walk through the steps for one of those event types, subAssessments.
Here are the high-level steps we’ll take: build a KQL query that filters and flattens the events, create a target table from that query’s schema, wrap the query in a function, and attach an update policy that runs the function as data is ingested.
So let’s start with the KQL query. The first step is to filter down to only the event types that we want to see. We can use the data.Type property within the payload to do this:
defenderraw
| where data.Type == 'Microsoft.Security/assessments/subAssessments'
Now that we have the events filtered, we want to turn the properties in the data column (dynamic JSON) into individual columns. You could create individual columns using data.<field>, but there is an easier way: KQL has bag_unpack, which does all the hard work for us.
defenderraw
| where data.Type == 'Microsoft.Security/assessments/subAssessments'
| evaluate bag_unpack(data)
This results in the JSON being flattened one level without you having to reference each property individually.
Before creating a new table based on the query, let’s check the data types of the columns and determine which ones we want:
defenderraw
| where data.Type == 'Microsoft.Security/assessments/subAssessments'
| evaluate bag_unpack(data)
| getschema
The final step is to select only the columns that we want:
defenderraw
| where data.Type == 'Microsoft.Security/assessments/subAssessments'
| evaluate bag_unpack(data)
| project Type, Id, Name, Properties, SecurityEventDataEnrichment, SubAssessmentEventDataEnrichment, TenantId
Once you have the query that will be used in your update policy, there is a really simple way to create the table: take 0 rows from the query (just return the schema) and use that in a .set command.
.set subAssessments <|
defenderraw
| where data.Type == 'Microsoft.Security/assessments/subAssessments'
| evaluate bag_unpack(data)
| project Type, Id, Name, Properties, SecurityEventDataEnrichment, SubAssessmentEventDataEnrichment, TenantId
| take 0
Running this will create a table called subAssessments with the correct schema.
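If you want to double-check the result, you can inspect the new table’s schema; this is just a quick sanity check.

// Confirm the column names and types that the .set command produced.
.show table subAssessments cslschema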
Now we’ll take this query, minus the take 0, and create a function.
.create function updateSubAssessments() {
defenderraw
| where data.Type == 'Microsoft.Security/assessments/subAssessments'
| evaluate bag_unpack(data)
| project Type, Id, Name, Properties, SecurityEventDataEnrichment,
SubAssessmentEventDataEnrichment, TenantId
}
Now we have a function called “updateSubAssessments” that can be used in our update policy.
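Before wiring the function into the policy, it can be worth running it directly to confirm it returns what you expect (a simple spot check, assuming some subAssessments events have already landed in defenderraw):

// Spot check: run the update function over the existing raw data.
updateSubAssessments()
| take 10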
The last step is to create an update policy that monitors the raw table, grabs any subAssessments events, runs the function, and places the results in our subAssessments table.
.alter table subAssessments policy update @'[{"IsEnabled": true, "Source": "defenderraw", "Query": "updateSubAssessments()", "IsTransactional": true, "PropagateIngestionProperties": false}]'
Once the update policy is in place, any new “subAssessments” events that land in “defenderraw” will automatically be parsed and placed in the “subAssessments” table.
Follow these steps for any additional event types that you wish to parse from the “defenderraw” table.
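As an illustration, repeating the pattern for a second event type might look like the sketch below. The Type value ('Microsoft.Security/assessments'), the table and function names, and the projected columns are assumptions for the example; run getschema against your own data and adjust before using it.

// Hypothetical second fork - the Type value, names, and columns are placeholders.
.set assessments <|
defenderraw
| where data.Type == 'Microsoft.Security/assessments'
| evaluate bag_unpack(data)
| project Type, Id, Name, Properties, TenantId
| take 0

.create function updateAssessments() {
defenderraw
| where data.Type == 'Microsoft.Security/assessments'
| evaluate bag_unpack(data)
| project Type, Id, Name, Properties, TenantId
}

.alter table assessments policy update @'[{"IsEnabled": true, "Source": "defenderraw", "Query": "updateAssessments()", "IsTransactional": true, "PropagateIngestionProperties": false}]'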
You need to wait until at least one event of type “subAssessments” lands in the “defenderraw” table. Once events have landed in the raw table, you should be able to query the “subAssessments” table to verify that parsed events are landing correctly.
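A quick check can be as simple as pulling back a few rows from the target table:

// Verify that the update policy is populating the target table.
subAssessments
| take 10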
Once you have the data forked out to individual tables, you’ll want to adjust the retention setting on the raw table. You can go all the way down to 0 retention if you wish and the update policies will still work, but my suggestion would be to keep a few hours so that you can troubleshoot or look at the raw data coming in if you need to. The command below sets the raw table to 2 hours of retention:
.alter table defenderraw policy retention "{\"SoftDeletePeriod\": \"00.02:00:00\", \"Recoverability\": \"Enabled\"}"
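You can confirm the policy took effect with a quick check:

// Show the retention policy now applied to the raw table.
.show table defenderraw policy retention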
We’ve walked through the process of sending Microsoft Defender for Cloud logs to Azure Data Explorer using continuous export to Azure Event Hub. At a high level you need to configure: continuous export in Defender for Cloud to an Event Hub, an Event Hub data connection that ingests the raw events into an ADX table, and update policies that fork each event type into its own table.
For the purposes of this article we walked through the manual steps, but all of this can be automated using scripts or a Bicep template:
https://docs.microsoft.com/en-us/azure/templates/microsoft.kusto/clusters?tabs=bicep
Even the tables, functions, update policies, etc. can be created in the template using configuration KQL scripts:
https://docs.microsoft.com/en-us/azure/data-explorer/database-script
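For example, the table, function, and update policy shown earlier could be bundled into a single database script and deployed alongside the cluster. This is only a sketch: the column types in the .create-merge command are assumptions, so take the real types from the getschema output above.

// One script that provisions the target table, function, and update policy.
// The same commands can be used as the content of a database script resource in a template.
.execute database script <|
.create-merge table subAssessments (Type: string, Id: string, Name: string, Properties: dynamic, SecurityEventDataEnrichment: dynamic, SubAssessmentEventDataEnrichment: dynamic, TenantId: string) // column types are assumptions
.create-or-alter function updateSubAssessments() {
defenderraw
| where data.Type == 'Microsoft.Security/assessments/subAssessments'
| evaluate bag_unpack(data)
| project Type, Id, Name, Properties, SecurityEventDataEnrichment, SubAssessmentEventDataEnrichment, TenantId
}
.alter table subAssessments policy update @'[{"IsEnabled": true, "Source": "defenderraw", "Query": "updateSubAssessments()", "IsTransactional": true, "PropagateIngestionProperties": false}]'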