Detection tuning – “Making the tuning process simple - one step at a time.”
Published Nov 03 2021


Creating a new detection consists of identifying the security vulnerability, writing the detection, and tuning it. This delicate, continuous process of balancing between making sure nothing important is missed and reducing false and benign results can consume up to 60% of a detection engineer's time.
Microsoft Sentinel helps make this process as efficient as possible, reducing both the time it takes to tune a rule and the false positive (FP) rate in the customer environment.
Tuning consists of excluding specific entities or properties from the query's result set in order to remove false positives.
It requires the detection engineer to correlate the various false positives of the rule.
That correlation is easy when looking at a single entity, but when several entities or properties must be correlated, the task becomes significantly harder.


Detection tuning in Sentinel:

When looking at the detection tuning process, we are trying to answer two questions: what to tune, and how to tune it?

What to tune – find a candidate for tuning. This is based on the load the rule creates and its ratio of false positives (FP) to true positives (TP). Taking both factors into account allows us to understand where we should invest our limited SOC engineering time.
For generating the tile, we look at data over the last 14 days.
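To make the prioritization concrete, here is a minimal Python sketch of how load and FP rate could be combined into a single tuning-priority score. The scoring formula and all rule names and numbers are invented for illustration; they are not Sentinel's actual model.

```python
from dataclasses import dataclass

@dataclass
class RuleStats:
    name: str
    alerts_last_14d: int   # load the rule generates
    false_positives: int   # analyst-labeled FP incidents
    true_positives: int    # analyst-labeled TP incidents

def tuning_priority(r: RuleStats) -> float:
    """Hypothetical score: rules with high load and a high FP share
    are the best candidates for the limited tuning time."""
    labeled = r.false_positives + r.true_positives
    fp_rate = r.false_positives / labeled if labeled else 0.0
    return r.alerts_last_14d * fp_rate

rules = [
    RuleStats("Noisy port-scan rule", alerts_last_14d=120, false_positives=30, true_positives=10),
    RuleStats("Rare sign-in rule", alerts_last_14d=15, false_positives=1, true_positives=9),
]
for r in sorted(rules, key=tuning_priority, reverse=True):
    print(f"{r.name}: priority {tuning_priority(r):.1f}")
```

A noisy rule with a 75% FP rate scores far higher than a quiet, mostly-accurate one, which matches the intuition above: invest the engineer's time where both load and FP volume are high.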


How to tune –

Entity occurrence – provides statistics on the top 4 entities, showing how the rule behaved over the last 14 days. In the example below, we can see that the user “John.Snow” appeared in 50% of our alerts over that period.
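The underlying computation is a simple frequency count over the rule's alerts. A sketch in Python, with an invented alert list (the "John.Snow" figure mirrors the example above):

```python
from collections import Counter

# Hypothetical account entities pulled from the rule's alerts over the last 14 days.
alert_accounts = ["John.Snow", "John.Snow", "Arya.Stark", "Sam.Tarly"]

counts = Counter(alert_accounts)
for entity, n in counts.most_common(4):  # the blade shows the top 4 entities
    print(f"{entity}: {100 * n / len(alert_accounts):.0f}% of alerts")
```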


Entity exclusion tuning suggestion – by looking at closed-incident labeling, we try to automate the detection engineer’s way of thinking. Using machine learning models and data science modules, we analyze the labeling done by analysts and look for a common denominator among the entities in the false positive incidents.
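A very naive version of that "common denominator" search can be sketched as a set operation: find entities that appear in every false-positive incident but in no true-positive one. This is an illustrative simplification with invented data, not Sentinel's actual ML model.

```python
# Hypothetical closed incidents: analyst label plus the entities involved.
incidents = [
    {"label": "FalsePositive", "entities": {"10.1.2.3", "John.Snow"}},
    {"label": "FalsePositive", "entities": {"10.1.2.3", "Arya.Stark"}},
    {"label": "TruePositive",  "entities": {"198.51.100.7", "John.Snow"}},
]

fp_sets = [i["entities"] for i in incidents if i["label"] == "FalsePositive"]
tp_entities = set().union(*[i["entities"] for i in incidents if i["label"] == "TruePositive"])

# Common denominator: entities present in every FP incident and in no TP incident.
candidates = set.intersection(*fp_sets) - tp_entities
print(candidates)  # entities worth suggesting for exclusion
```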


Sample use case – “Palo Alto - possible internal to external port scanning”

After you've connected your data sources to Sentinel, you'll want to be notified when something suspicious occurs. That's why Sentinel provides out-of-the-box, built-in templates to help you create threat detection rules.
Rule templates were designed by Microsoft's team of security experts and analysts based on known threats, common attack vectors, and suspicious activity escalation chains. Rules created from these templates will automatically search across your environment for any activity that looks suspicious. Many of the templates can (and should) be customized to search for activities, or filter them out, according to your needs.

For example, we will take the “Palo Alto - possible internal to external port scanning” template.

This template was written to identify internal source IPs that have triggered 10 or more non-graceful TCP server resets from one or more destination IPs.
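The actual template is a KQL query, but its core logic can be sketched in Python: keep only events from internal (private) source IPs whose session ended in a non-graceful server reset, then flag sources that crossed the threshold. The log rows and the "tcp-rst-from-server" reason string are invented for illustration.

```python
from collections import Counter
from ipaddress import ip_address

# Hypothetical firewall log rows: (source IP, destination IP, session end reason).
events = [("10.0.0.5", f"203.0.113.{i}", "tcp-rst-from-server") for i in range(1, 12)]
events.append(("8.8.8.8", "203.0.113.1", "tcp-rst-from-server"))   # external source: ignored
events.append(("10.0.0.9", "203.0.113.2", "tcp-fin"))              # graceful close: ignored

resets = Counter()
for src, dst, reason in events:
    # Count only internal (private) sources with non-graceful server resets.
    if ip_address(src).is_private and reason == "tcp-rst-from-server":
        resets[src] += 1

THRESHOLD = 10
suspects = [src for src, n in resets.items() if n >= THRESHOLD]
print(suspects)  # internal IPs that look like they are port scanning outward
```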


When editing the rule, we will try to tune it as best we can using the in-rule simulation capabilities.


After this process we will create a rule from this template and let the rule run for a while in our environment.

This rule will create incidents over time; the analyst will investigate the incidents and label them.


The image above is an example of an incident created from the rule.

This incident was closed as false positive since the IP address is used for port scanning in our organization.

After a few labeling actions, an indicator will appear on the rule, telling us there is a recommendation to apply.


Note that these recommendations are generated once an hour.

We will open the rule details to see what insights are available.


We can see that over the last 14 days we had 7 incidents: 5 of them were true positives and 2 were false positives.
We have 1 recommendation, to exclude the IP address – which makes sense since, as we said, it is used for port scanning in our organization.



Applying the recommendations will add the following line into our detection:
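An exclusion of this kind is typically a single KQL filter appended to the rule query. Illustratively (the column name and IP value here are placeholders, not the actual recommendation):

```kusto
| where SourceIP != "10.1.2.3"
```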


We can see that not only was the entity excluded, but our graph also changed, since the false positive events were filtered out.
Because we know this entity is an indicator of false positives, applying the change was straightforward in this case; in general, though, we would want to test and examine the recommendation first.

Moving forward

What we just saw is the first step toward an automated tuning process.
Over time we plan to add more and more modules that will help with rule creation and tuning, which will eventually result in better MTTR and coverage.


Have an idea for additional tuning recommendations or insights? Post it in the comments section below!

Version history
Last update: Nov 04 2021