Determine events per second for a potential Sentinel deployment

I have been tasked with estimating the EPS (events per second) for 4 subscriptions. I need to get an idea of the cost of creating an Event Hub to send data to the SIEM, so any assistance/guidance would be appreciated. I was trying to use Monitor > Metrics, but there you have to drill down to a specific resource, and I was hoping for a general query per subscription.

Please advise,

 

Serge

@SergioT1228 

 

If you already have the data in a workspace, you can query it for EPS. You may need to add a filter, something like this (not all tables store SubscriptionId, though!):

| where SubscriptionId == "<sub id>"

 

union withsource=_TableName1 *
| where _TimeReceived > ago(1d)
| summarize count() by bin(_TimeReceived, 1m), Type
| extend counttemp = count_ / 60
| summarize
    ['Average Events per Second (eps)'] = avg(counttemp), ['Minimum eps'] = min(counttemp),
    ['Maximum eps'] = max(counttemp)
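
Putting the filter and the query together would look something like this (just a sketch: rows from tables that don't carry SubscriptionId will simply drop out of the results, so check the tables you care about first):

union withsource=_TableName1 *
| where SubscriptionId == "<sub id>"     // replace with your subscription id
| where _TimeReceived > ago(1d)
| summarize count() by bin(_TimeReceived, 1m), Type
| extend counttemp = count_ / 60         // events per minute -> events per second
| summarize
    ['Average Events per Second (eps)'] = avg(counttemp), ['Minimum eps'] = min(counttemp),
    ['Maximum eps'] = max(counttemp)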

 

Hello Clive, thank you for your reply. I'm new to gathering data from Azure; I mainly deal with ATP deployments and making sure all endpoints are covered by Defender. I have been asked to help determine the EPS for some subscriptions, and I have a couple of questions regarding your statement. I understand the need to specify which subscription.

 

Under Monitor > Logs, I have selected the scope to be a specific subscription. As for withsource=_TableName1, which table are you referring to? Is it AzureMetrics? Diagnostics? Activity?

 

Sorry if this should be obvious, but I'm just getting started on learning how to obtain logs/data from Azure. I did run a count against the three tables I saw; are those counts worth anything?

 

AzureActivity
| where SubscriptionId == "subscriptionId"
| count

@CliveWatson 

 

Hey Clive,

OK, I think I figured it out. The withsource=_TableName1 part is a way to run through all tables without naming a specific one, which lets you search every table available.
Also, after reviewing the _TimeReceived information in this article:
https://docs.microsoft.com/en-us/azure/azure-monitor/logs/data-ingestion-time#checking-ingestion-tim...
I was able to substitute as needed. I think I got the needed information. Thank you again.
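
For anyone else just getting started, a quick way to see what that _TableName1 column actually contains is something like this (just a rough look over the last hour):

union withsource=_TableName1 *
| where TimeGenerated > ago(1h)              // short window just to list the table names
| summarize Events = count() by _TableName1  // _TableName1 holds the name of the source table
| sort by Events desc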

@snteran 

 

I'm glad you figured it out. You can also do a similar thing in M365, in "Advanced Hunting". Rather than union, you can name a single table, or even use union with a wildcard, i.e.:

union withsource=MDTables Device*

 

union withsource=MDTables *
| where Timestamp > ago(1d)
| summarize count() by bin(Timestamp, 1m), MDTables
| extend EPS = count_ / 60
| summarize avg(EPS) by MDTables
| sort by avg_EPS desc
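
As a quick aside, naming a single table instead of the wildcard union might look something like this (DeviceEvents is just one example table):

DeviceEvents                                 // swap in whichever table you need
| where Timestamp > ago(1d)
| summarize count() by bin(Timestamp, 1m)
| extend EPS = count_ / 60
| summarize avg(EPS)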

// Also show as GBytes (estimated, using 500 bytes as a default average event size)


let bytes_ = 500;
union withsource=MDTables *
| where Timestamp > ago(1d)
| summarize count() by bin(Timestamp, 1m), MDTables
| extend EPS = count_ / 60
| summarize avg(EPS), estimatedGBytes = (avg(EPS) * bytes_) / (1024 * 1024 * 1024) by MDTables
| sort by toint(estimatedGBytes) desc
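
If a rough daily volume is more useful than a per-second figure, one variation (still just a sketch, keeping the same ~500 bytes per event assumption) is to count the raw events over the day and convert:

let bytes_ = 500;                            // assumed average event size in bytes
union withsource=MDTables *
| where Timestamp > ago(1d)
| summarize Events = count() by MDTables
| extend estimatedGBytesPerDay = (Events * bytes_) / (1024.0 * 1024 * 1024)
| sort by estimatedGBytesPerDay desc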

 

@CliveWatson 

 

That worked perfectly. I also added a "by" clause to get the EPS per table:

 

union withsource=_TableName1 *
| where TimeGenerated > ago(1d)
| summarize count() by bin(TimeGenerated, 1m), Type
| extend counttemp = count_ / 60
| summarize
    ['Average Events per Second (eps)'] = avg(counttemp), ['Minimum eps'] = min(counttemp),
    ['Maximum eps'] = max(counttemp)
    by ['Table Name'] = Type
 
It gave me the results broken out by table name; hopefully this is correct. A lot to learn. Hopefully sharing this will help others.
 
Cheers,