Interpreting Exploit Guard ASR audit alerts

In my previous blog, I talked about how you can leverage Windows Defender ATP’s Advanced hunting to monitor Attack Surface Reduction (ASR) alerts in audit mode and dig a little deeper into the potential application compatibility impact of enforcing more rules.

Like many app compat scenarios, however, it’s not exactly that easy. And, as one of my colleagues from Windows Defender Research was quick to point out, blocks are not guaranteed to cause application failure (for example, if the app expects and gracefully handles the result).

tl;dr

(For any non-nerds reading this – are there really non-nerds reading my blog? – that stands for too long; didn’t read.)

The easiest process to transition from audit to enforce is as follows:

Audit -> Exclude impacted apps -> Enforce

The more secure way to transition from audit to enforce is:

Audit -> Test potentially impacted apps -> Exclude verified impacted apps -> Enforce

The gory details

An audit alert being triggered does not guarantee that something bad has happened. Let’s look at an example from a (very small) environment we were running. We found two applications generating an ASR block signal, but we have to interpret what that signal means to understand whether each app was actually impacted.

In this case, we found two apps that appear to be impacted by a single ASR rule:


If we look up what that ASR rule actually is, we discover that it corresponds with this rule:

Block credential stealing from the Windows local security authority subsystem (lsass.exe)

This rule flags (and, in block mode, blocks) attempts by an app to open lsass process memory. Now, I’m reasonably sure that neither of the potentially impacted apps is actively trying to steal credentials and/or secrets. So, what legitimate task could they be doing that would flag this?

Well, the GoogleUpdate.exe process is probably looking to update a collection of software, walking the process list to see if any of it is already running (so it can offer to close it for the update and avoid a reboot). And taskmgr.exe is actively trying to enumerate all the processes on the device (i.e., doing its job).

One way to discover all of the running processes on a system is to capture a snapshot of them; for example, by calling CreateToolhelp32Snapshot and specifying TH32CS_SNAPALL for dwFlags. If you then want information on each process, you can open it: call the OpenProcess API and specify PROCESS_ALL_ACCESS for dwDesiredAccess, because, hey, just asking for everything is easiest, right?
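A minimal sketch of that pattern might look like the following. (This is my own illustrative code, not the exact sample referenced in this post; it’s Windows-only, needs the Windows SDK to build, and assumes a non-Unicode build for the printf of szExeFile.)

```c
// Sketch of the "ask for everything" enumeration pattern described above.
// Windows-only; compile against the Windows SDK (e.g., cl snapshot.c).
#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>

int main(void)
{
    // Take a snapshot covering everything the tool help library offers.
    HANDLE hSnap = CreateToolhelp32Snapshot(TH32CS_SNAPALL, 0);
    if (hSnap == INVALID_HANDLE_VALUE)
        return 1;

    PROCESSENTRY32 pe;
    pe.dwSize = sizeof(pe);  // Required before calling Process32First

    if (Process32First(hSnap, &pe)) {
        do {
            // The overly permissive request: PROCESS_ALL_ACCESS asks for
            // every right, even though we only want to read information.
            HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE,
                                       pe.th32ProcessID);
            printf("%-32s pid=%5lu open=%s\n", pe.szExeFile,
                   (unsigned long)pe.th32ProcessID,
                   hProc != NULL ? "ok" : "denied");
            if (hProc != NULL)
                CloseHandle(hProc);
        } while (Process32Next(hSnap, &pe));
    }

    CloseHandle(hSnap);
    return 0;
}
```

Run this with the lsass rule enforced and the OpenProcess call against lsass.exe comes back NULL – which, as written, the loop simply reports and moves past.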

That pattern is likely to show up a few times because, as it turns out, we wrote that exact code for you.

Following this pattern, when the loop reaches lsass.exe in the process list, that overly permissive request trips the rule and, in block mode, Windows Defender ATP will decline it. But, as you can see in the demo code, you can check for failure. If you check for a NULL handle and respond appropriately, then the failure doesn’t break your app.
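Here’s a hedged sketch of the defensive version of that loop body (again Windows-only, and the function name is mine): request only the rights a task-manager-style viewer actually needs, and treat a NULL handle as a normal outcome rather than a fatal error.

```c
// Defensive variant: least privilege plus graceful failure handling.
// Windows-only sketch; pid would come from the snapshot walk described above.
#include <windows.h>
#include <stdio.h>

// Returns TRUE if we got (and used) a handle, FALSE if we skipped the process.
BOOL QueryOneProcess(DWORD pid)
{
    // Ask only for what a process viewer needs, not PROCESS_ALL_ACCESS.
    HANDLE hProc = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
    if (hProc == NULL) {
        // Expected for protected processes (e.g., lsass.exe with this ASR
        // rule enforced). Note it and move on; don't fail the whole app.
        if (GetLastError() == ERROR_ACCESS_DENIED)
            printf("pid %lu: access denied, skipping\n", (unsigned long)pid);
        return FALSE;
    }

    // ... query whatever information you actually need here ...

    CloseHandle(hProc);
    return TRUE;
}

int main(void)
{
    // Demo: query ourselves (always permitted); in real code you would
    // call QueryOneProcess for each pid from the enumeration loop.
    QueryOneProcess(GetCurrentProcessId());
    return 0;
}
```

Requesting PROCESS_QUERY_LIMITED_INFORMATION instead of everything both narrows the blast radius if your handle leaks and succeeds against more processes in the first place.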

And, in the case of this ASR rule, we have seen zero application compatibility impact from switching to enforced mode. (Note: that doesn’t mean there will never be a compatibility issue. It’s just that, in this case, it appears to be statistically unlikely.) Yes, Windows Defender ATP will flag an alert and, in block mode, prevent you from being granted a handle, but doing so doesn’t necessarily break your app.

When should I test, and when shouldn’t I?

To answer this question, we must take this deeper understanding of a rule and turn it into action. In this case, we should look at the potential impact of enforcing this rule. If I exclude an application, then I ever so slightly increase my security risk: if a malicious actor were to try one of the techniques the rule protects against (in this case, accessing LSASS memory) through that application, the exclusion means we would no longer block it. Configuring ASR rules in block mode for all but a couple of apps is good, but configuring them in block mode for all apps is better (as long as nothing breaks)!

But, pragmatically, it may not be feasible to test absolutely everything every time. Testing fatigue is a real thing in today’s fast-moving technology world! So, I’d be tempted to evaluate the potential impact of the rule, and then decide if I want to block by default (if there is a very low potential for app impact) or exclude the app (if the potential for app impact is higher). If I choose the latter, I would also want to look at how I might be able to incorporate testing the next time I happen to be testing the app for some other reason. Additionally, keep in mind that you can customize the Windows Security Center, so you can supplement blocking with a link in the toast that pops up to help users more quickly submit feedback that something was indeed broken!

Learn more

Hopefully, by peeling the onion a bit, we’ve shed some light on the subtleties involved in making pragmatic decisions for securing your organization. Moving quickly can be better than doing nothing, but pausing briefly to evaluate your decision can often pay off in terms of security. Going forward, we want to continue giving you data points to help guide your risk/return decisions!

For more details on the full Exploit Guard stack, see Windows Defender Exploit Guard: Reduce the attack surface against next-generation malware.

For more details on app compat strategy in a faster-moving world (which applies to all changes, not just OS changes), see my posts on Windows 10 app compat strategy and Defining app tranches to drive your app compat testing.