Log sources down


Hi All,

 

I'm new to Sentinel and I'm trying to monitor silent log sources. For example, if I have 10 ASA firewalls sending syslog to Sentinel, how can I detect that one of them has gone silent?

 

I found the posts below, but they don't really answer my question. The second link is close to what I need, but it seems (if I understand it correctly) that it lists log source types or tables that are not receiving logs, rather than a specific log source:

https://techcommunity.microsoft.com/t5/microsoft-sentinel/how-to-monitor-log-sources-in-azure-sentin...

https://techcommunity.microsoft.com/t5/microsoft-sentinel/list-of-reporting-sourcetypes/m-p/906926

 

 

I suppose I could write a query that checks all log sources and reports any source that has not sent logs for more than X hours, but I cannot find a query that lists all log sources. Is it possible to do this?

 

Thanks!

5 Replies
There is no query that can list all log sources. You could use a watchlist of which data connectors send data to which tables and use that to see if any of those tables have not received data in your given time span.
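
For example, a watchlist-based check could look something like the sketch below. The watchlist name (ExpectedTables) and its TableName column are just placeholders for whatever you build, and note that union * over 24 hours can be a heavy query in a large workspace.

// one row per table that actually received data in the last 24 hours
union withsource=TableName *
| where TimeGenerated > ago(24h)
| summarize by TableName
// keep only the watchlist entries with no matching table, i.e. the silent ones
| join kind=rightanti (
    _GetWatchlist('ExpectedTables')   // hypothetical watchlist with a TableName column
    | project TableName = tostring(TableName)
  ) on TableName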

@LuxPL 

 

I'm fairly new to Sentinel as well and have been working on the same task (as I'm new, this may not be the most elegant or correct way to achieve the goal). However, here's what I've found to get the number of minutes since the last log for each Cisco ASA (Cisco ASA logs should be in the CommonSecurityLog table)...

 

CommonSecurityLog
| where DeviceVendor == "Cisco"
| where DeviceProduct == "ASA"
| summarize Last_Log = datetime_diff("minute",now(), max(TimeGenerated)) by Computer
| where Last_Log > 15

 

 

This one will look for multiple DeviceVendors...

 

CommonSecurityLog
| where DeviceVendor matches regex "(Cisco|Check Point|Fortinet)"
| summarize Last_Log = datetime_diff("minute",now(), max(TimeGenerated)) by Computer, DeviceVendor
| where Last_Log > 15

 

 

Lastly, this one will union with the Syslog table and give you results for both tables...

 

CommonSecurityLog
| where DeviceVendor matches regex "(Cisco|Check Point|Fortinet)"
| union Syslog
| summarize Last_Log = datetime_diff("minute",now(), max(TimeGenerated)) by Computer, Type
| where Last_Log > 15


You can use these queries in Alerts, have those Alerts generate Incidents, and then use a Logic App to send an email or open a ServiceNow ticket (or whatever) for each Incident.

 

Play around with the above queries and I’m sure you’ll figure out what you need.

Good luck.

@Kingston12 

 

You can also use a query like this, which I prefer as it also gives you the pattern (some sources can be up/down or delayed, so you can check a little history) 

CommonSecurityLog
| where DeviceVendor == "Cisco"
| where DeviceProduct == "ASA"
| make-series count() default=0 on TimeGenerated from ago(1h) to now() step 15m by Computer
| where count_[-1] == 0   // look at the last data point [-1] and only show Computers where it was zero
| project-away TimeGenerated

This shows me Computers with no data in the past 15 minutes, along with the three previous 15-minute intervals (so I know from my Alert whether this is normal or not).
You can see from this screenshot (a different use case) that the count of logs differs in each 15-minute bin over the past two hours, which also shows servers sending unusually low (or high) volumes if you want to look for that.
[Screenshot: 2022-01-26 092023.png]
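
To see that pattern for the ASA case yourself, one option (just a sketch reusing the filter from the earlier queries) is to expand the series and render it as a timechart:

CommonSecurityLog
| where DeviceVendor == "Cisco" and DeviceProduct == "ASA"
| make-series LogCount = count() default=0 on TimeGenerated from ago(2h) to now() step 15m by Computer
// expand the series arrays back into rows so the portal can chart one line per Computer
| mv-expand TimeGenerated to typeof(datetime), LogCount to typeof(long)
| render timechart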
If you want to take it further once you understand make-series, you can also look for anomalies. This example is for Tables, but you can switch to the Cisco source yourself (note that a lookback of 30 or even 90+ days is better for this type of query).

union *
| make-series count() default=0 on TimeGenerated from startofday(ago(30d)) to now()-1h step 1d by Type
// only show when the last data point equals zero
| where count_[-1] == 0
| extend (anomalies, score, baseline) = series_decompose_anomalies(count_, 2.5, -1, 'linefit', 1, 'ctukey')
| extend Score = score[-1], expectedEventCount = baseline[-1], actualEventCount = count_[-1]
| project Type, Score = round(toreal(Score), 2), count_, anomalies //, baseline
// a high score means the Table is normally up/sending data, so a zero here is more anomalous

So the output of the above is ONLY Tables that sent zero data in the last bin/interval. The Score column gives me a clue as to how anomalous each one is, -13 being the highest; you can see from the count_ column that it's unusual for that Table to send zero records.

[Screenshot: 2022-01-26 092538.png]
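
In case it helps anyone adapting this, a per-Computer version of the same idea for the Cisco ASA source could look roughly like this (a sketch only, reusing the same thresholds as the Table example; as noted above, a 30-90 day lookback works better):

CommonSecurityLog
| where DeviceVendor == "Cisco" and DeviceProduct == "ASA"
| make-series count() default=0 on TimeGenerated from startofday(ago(30d)) to now()-1h step 1d by Computer
// only keep Computers whose last data point is zero
| where count_[-1] == 0
| extend (anomalies, score, baseline) = series_decompose_anomalies(count_, 2.5, -1, 'linefit', 1, 'ctukey')
| extend Score = score[-1]
| project Computer, Score = round(toreal(Score), 2), count_, anomalies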


Thank you all for the great information! I will explore this.

@LuxPL - Have you found a good solution using the above or anything else since last year?