Forum Discussion
Log sources down
I'm fairly new to Sentinel as well and have been working on the same task, so this may not be the most elegant or correct way to achieve the goal. However, here's what I've found to get the number of minutes since the last log for each Cisco ASA (Cisco ASA logs should be in the CommonSecurityLog table)...
CommonSecurityLog
| where DeviceVendor == "Cisco"
| where DeviceProduct == "ASA"
| summarize Last_Log = datetime_diff("minute",now(), max(TimeGenerated)) by Computer
| where Last_Log > 15
This one will look for multiple DeviceVendors...
CommonSecurityLog
| where DeviceVendor matches regex "(Cisco|Check Point|Fortinet)"
| summarize Last_Log = datetime_diff("minute",now(), max(TimeGenerated)) by Computer, DeviceVendor
| where Last_Log > 15
Lastly, this one will union with the Syslog table and give you results for both tables...
CommonSecurityLog
| where DeviceVendor matches regex "(Cisco|Check Point|Fortinet)"
| union Syslog
| summarize Last_Log = datetime_diff("minute",now(), max(TimeGenerated)) by Computer, Type
| where Last_Log > 15
You can use these queries in Alerts, have those Alerts generate Incidents, and then use a Logic App to send an email or open a ServiceNow ticket (or whatever) for each Incident.
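If it helps, here's a rough sketch (my own adaptation, not an official rule template) of how you might shape the first query for an analytics rule, projecting the host name and the gap in minutes so the resulting Incident carries that context; the 15-minute threshold is just the example value used above.
CommonSecurityLog
| where DeviceVendor == "Cisco" and DeviceProduct == "ASA"
| summarize LastLog = max(TimeGenerated) by Computer
| extend MinutesSinceLastLog = datetime_diff("minute", now(), LastLog)
| where MinutesSinceLastLog > 15 // example threshold, tune to how quickly you need to know
| project Computer, LastLog, MinutesSinceLastLog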
Play around with the above queries and I’m sure you’ll figure out what you need.
Good luck.
- Clive_Watson, Jan 26, 2022 (Bronze Contributor)
You can also use a query like this, which I prefer as it also gives you the pattern (some sources can be up/down or delayed, so you can check a little history)
CommonSecurityLog
| where DeviceVendor == "Cisco"
| where DeviceProduct == "ASA"
| make-series count() default=0 on TimeGenerated from ago(1h) to now() step 15m by Computer
| where count_[-1] == 0 // look at the last data point [-1] and only show Computers whose last data point was zero
| project-away TimeGenerated
This shows me Computers with no data in the past 15 mins, plus the three previous 15-minute intervals (so I know in my Alert whether this is normal or not).
You can see from this screenshot (a different use case) that the count of logs differs in each 15-minute bin during the past two hours, showing me servers that are sending low volumes (or you could flip it to look for unusually high volumes) if you wanted.
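As a rough illustration of that low-volume idea (my own sketch, not from the screenshot), you could keep the full series and flag hosts whose most recent bin fell below a threshold you pick; the 100-events-per-15-minutes cut-off here is purely a made-up example value.
CommonSecurityLog
| make-series EventCount = count() default=0 on TimeGenerated from ago(2h) to now() step 15m by Computer
| extend LastBin = toreal(EventCount[-1]) // most recent 15-minute bin
| where LastBin < 100 // example threshold, tune to your normal volume
| project Computer, EventCount, LastBin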
If you want to take it further once you understand make-series, you can also look for anomalies. This example is for Tables, but you can switch to the Cisco source yourself (note that a 30 or even 90+ day lookback is better for this type of query).
union *
| make-series count() default=0 on TimeGenerated from startofday(ago(30d)) to now()-1h step 1d by Type
| where count_[-1] == 0 // only show when the last data point equals zero
| extend (anomalies, score, baseline) = series_decompose_anomalies(count_, 2.5, -1, 'linefit', 1, 'ctukey')
| extend expectedEventCounts = baseline[-1], actualEventCount = count_[-1], Score = score[-1]
| project Type, Score = round(toreal(Score), 2), count_, anomalies //, baseline
// a high score means the Table is normally up and sending data, so a gap here is more anomalous
So the output of the above is ONLY tables that sent zero data in the last bin/interval. The Score column gives me a clue to how anomalous each one is, -13 being the highest; you can see from the count_ column that it's unusual for that table to send zero records.
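Purely as a sketch of what "switching to the Cisco source" might look like (my adaptation, not Clive's query), the same anomaly pattern against CommonSecurityLog could be something like this, grouping by Computer instead of Type.
CommonSecurityLog
| where DeviceVendor == "Cisco" and DeviceProduct == "ASA"
| make-series count() default=0 on TimeGenerated from startofday(ago(30d)) to now()-1h step 1d by Computer
| where count_[-1] == 0 // only hosts whose most recent daily bin was zero
| extend (anomalies, score, baseline) = series_decompose_anomalies(count_, 2.5, -1, 'linefit', 1, 'ctukey')
| extend Score = round(toreal(score[-1]), 2) // strongly negative = normally chatty host that has gone quiet
| project Computer, Score, count_, anomalies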