A day in the life of a Defender Experts for XDR analyst
Published Sep 19, 2023

Introduction

This June, Microsoft officially launched Defender Experts for XDR, a new first-party managed extended detection and response (MXDR) service. Since the public preview of the service was announced last November, a frequently asked question has been “What actually happens in a day in the life of a Defender Experts analyst?” We can reveal that the analyst team spends their days in customer environments investigating and responding to incidents, threat hunting, and providing guidance on overarching security posture improvements. In fact, we even have a real-life Defender Experts analyst here to share some of their experiences! Read on for case studies that show what a day in the life of a Defender Experts analyst really looks like.

 

Incident investigation & response

Incident investigation and response is the core of our managed XDR service. Whenever an in-scope incident is triggered in a customer environment, an analyst will investigate the incident leveraging the Microsoft 365 Defender suite in concert with Microsoft Threat Intelligence. Once the investigation is complete, the analyst will publish a detailed investigation summary and remediation actions directly to the customer’s Microsoft 365 Defender portal.

 

What's in scope?

Defender Experts for XDR investigates medium and high severity incidents in the customer environment, excluding custom detections and compliance incidents. Incidents not in scope for XDR are still covered by managed threat hunting to help ensure that no potential attacker activity surfaced by low severity, informational, or custom detections goes unnoticed.

 

Now that we’ve established what Defender Experts for XDR covers, let’s dive into two case studies of investigations into real customer incidents. These case studies are presented by Phoebe Rogers, an analyst who has been part of the team since the very beginning of our public preview.

 

Adversary-in-the-middle (AiTM) phishing

This investigation began with a single suspicious sign-in attempt to one user account. Examining the authentication logs, I determined that this sign-in came from an internet service provider (ISP), browser, device, location, and IP address that were all anomalous relative to the user’s typical behavior. Most concerningly, I saw that the sign-in attempt was successful and included valid multifactor authentication (MFA) claims. I did not identify any threat intelligence detections on this IP from Microsoft or third-party sources, but the anomalous nature of the activity indicated that it was a true positive. An attacker had successfully gained access to the account and, through it, to internal corporate resources.

 


Figure 1. This graphic shows the flow of an adversary-in-the-middle phishing campaign at the stage where an attacker has authenticated and signed in via a stolen session cookie.
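 

As a rough illustration of the sign-in review described above (not the exact query used in this investigation), an analyst could pull a user’s recent sign-ins from the AADSignInEventsBeta table in advanced hunting and compare them against the user’s normal locations, devices, and IPs. The UPN and lookback window below are hypothetical placeholders, and column availability can vary by tenant:

// Illustrative sketch only - review recent sign-ins for one user and compare
// IPs, locations, and devices against that user's typical behavior.
// "user@contoso.com" is a hypothetical placeholder UPN.
AADSignInEventsBeta
| where Timestamp > ago(14d)
| where AccountUpn =~ "user@contoso.com"
| project Timestamp, Application, IPAddress, Country, City, DeviceName, ErrorCode
| sort by Timestamp desc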

 

Pivoting to determine the cause of the compromise, I observed that the user had clicked a URL in an inbound email just one minute prior to the sign-in. Based on the sender address and its authentication data, I could tell the email was from a well-known file sharing service provider notifying the recipient that someone had shared a file with them. Searching the email logs for the name of the ‘file sharer’ in subject lines, I found that the same email with some variations on the file name had been sent to over 40 users that day. Because these emails were from a legitimate file sharing service and included a link to that same service, they had been able to avoid detection by Microsoft Defender for Office 365, Microsoft’s email and collaboration security solution.

 

In the Defender for Office 365 data, I also saw that shortly after the initial set of emails was sent, an email address with the same name as the ‘file sharer’ sent emails containing the same content to multiple additional employees. In total, more than five different recipients were found to have clicked on the URL.

 

From this point, I was able to confirm within Microsoft Defender for Endpoint, Microsoft's endpoint security solution, that successful network connections to the file sharing service occurred from the devices of all users who clicked the URL. Knowing that there was a successful malicious sign-in on the account of one of these clickers, I then reviewed the authentication history for all of their accounts. From this review I identified successful sign-ins on multiple accounts from the same suspicious IP address. All included valid MFA claims, meaning that malicious actors had gained access to all of these accounts. In addition, I found that multiple clickers had downloaded the PDF lure from the file sharing service onto their workstations, one of whom even did so twice.
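 

A complementary sweep, sketched below with a placeholder IP address, could check which other accounts successfully authenticated from the suspicious IP. This is a starting point rather than the exact query used here; the assumption that an ErrorCode of 0 indicates a successful sign-in is noted in the comments:

// Illustrative sketch only - find every account that signed in successfully
// from the suspicious IP address. "x.x.x.x" is a placeholder value.
AADSignInEventsBeta
| where Timestamp > ago(7d)
| where IPAddress == "x.x.x.x"
| where ErrorCode == 0    // assumption: 0 indicates a successful sign-in
| summarize SignIns = count(), FirstSeen = min(Timestamp), LastSeen = max(Timestamp) by AccountUpn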

 

At this point in the investigation, I was ready to share my findings with the customer. I wrote an investigation summary covering all the information above, including the affected users and devices, phishing emails, and PDF lures. I also included custom advanced hunting queries (AHQs) to allow the customer to retrieve the most up-to-date data themselves within the Microsoft 365 Defender portal.

 

 

 

// AHQ - phishing emails
EmailEvents
| where Subject has "Email Subject"
| distinct NetworkMessageId

// AHQ - clickers
EmailEvents
| where Subject has "Email Subject"
| join kind=leftouter UrlClickEvents on NetworkMessageId
| where ActionType == "ClickAllowed"
| distinct AccountUpn

// AHQ - phishing file creations
DeviceFileEvents
| where Timestamp > ago(1d) and ActionType == "FileCreated" and FileName has "File Name"
| distinct DeviceName, DeviceId, FolderPath, SHA256

 

 

 

Finally, I published these recommended remediation actions to the customer portal:

  1. Delete the phishing emails from user inboxes.
  2. Reset passwords and revoke refresh tokens for known compromised accounts.
  3. Delete the PDF lures from the devices of the users that downloaded them.
  4. Run full AV scans on the devices of users that clicked on the URL.
  5. Create a custom indicator with a 90-day lifespan to block the IP address seen signing into the compromised accounts.

Shortly after publishing this, the customer reached out via chat requesting a walk-through of how to perform all these actions themselves. I quickly set up a call to explain in detail and reviewed with them the different ways to perform the remediation. A few days later, the file sharing service sent the recipients of the original email a friendly reminder that the file had been shared with them, leading to one final user clicking the URL.

 

In the absence of the detection, investigation, and response efforts by Defender Experts for XDR, the outcomes of this incident could have been far more severe. Luckily, multiple end users did report the initial email as phishing, but on its own the customer may not have detected the malicious sign-ins to multiple user accounts. This could have resulted in unauthorized data access, persistent account compromise, compromise of additional higher-privileged accounts, and/or fraud. See this Microsoft Security blog post about modern AiTM phishing attacks for more details.

 

I felt a real sense of satisfaction after resolving this incident. An attack like this targeting so many users and successfully compromising multiple accounts could have had major negative outcomes for the customer. But knowing that my investigation was immediate and thorough, and watching the remediation happen with my own eyes, I felt confident that the customer would stay secure another day.

 

Our next case study takes us away from email-based threats and into the world of Microsoft Defender for Identity.

 

Brute force attacks

A Microsoft Defender for Identity incident initiated this investigation, indicating ‘Suspected brute-force attack (Kerberos, NTLM) on multiple endpoints.’ The incident stated that actors on two internal IP addresses had generated suspicious failed sign-ins on over 30 accounts, eventually leading to successful sign-ins on over 15 accounts.

 

First analyzing the sources of the traffic, I determined that the authentication attempts were being relayed to the domain controller by two devices that I’ll refer to as gateway-01 and gateway-02. From there I queried the data on these devices and determined that they both had public-facing IP addresses, and that they in fact shared the same public-facing IP. Putting on my attacker hat, I conducted a quick port scan of the IP and found a network firewall sign-in portal. Mystery solved! The two source devices were simply the two backend devices sharing the load for the public-facing firewall. As legitimate (or illegitimate) sign-in attempts were submitted to the firewall portal, those requests were passed back to the domain controller for authentication.

 


Figure 2. This graphic shows a network firewall sign-in portal discovered during a port scan.

 

This configuration made the sign-in portal a prime target for password attacks. With some quick testing I determined that the portal permitted multiple failed sign-in attempts before locking out the target account. I knew from my prior experience in penetration testing that with a good user/employee list, a good set of passwords to guess, and a bit of patience, an attacker would have a reasonable shot at identifying legitimate employee credentials. This sort of password spraying attack appeared to be the cause of the activity that triggered the initial alert.
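 

To make a spray pattern like this visible at a glance, a query along the following lines can tally failed versus successful logons per account relayed through the gateways. This is a sketch that reuses the gateway names from this case and assumes the standard IdentityLogonEvents action type values:

// Illustrative sketch only - count failed vs. successful logons per account
// relayed through the gateway devices to surface a password spray pattern.
IdentityLogonEvents
| where Timestamp > ago(1d)
| where DeviceName in~ ("gateway-01", "gateway-02")
| summarize FailedAttempts = countif(ActionType == "LogonFailed"),
            SuccessfulLogons = countif(ActionType == "LogonSuccess") by AccountName
| where FailedAttempts > 0
| sort by FailedAttempts desc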

 

Analyzing the authentication logs, I identified more than 15 accounts with failed sign-in attempts followed by successful sign-ins. The remaining accounts had only failed sign-in attempts. With this information in hand, I published the summary of my investigation to the customer. The summary included the following AHQ for the customer to view the relevant authentication events themselves.

 

 

 

// AHQ – sign-in events on affected accounts
let users = AlertEvidence
| where AlertId in ("Alert ID #1", "Alert ID #2") and EntityType == "User"
| distinct AccountName;
IdentityLogonEvents
| where Timestamp > ago(1d)
| where DeviceName in ("gateway-01", "gateway-02", "Internal IP #1", "Internal IP #2")
| where AccountName in (users)

 

 

 

I also published the following remediation actions to the customer:

  1. Reset passwords for accounts with successful brute force sign-ins.
  2. Restrict public internet access to the firewall sign-in portal unless required for business purposes.

In the absence of the XDR investigation and response to this incident, the customer may have seen that the sign-ins originated from internal devices and simply resolved the incident. With multiple valid sets of domain credentials in hand, attackers could have gained access to corporate resources and/or internal infrastructure. See Brute Force (T1110) for examples of how real threat actors have used this technique to gain access to their targets.

 

Having reviewed case studies on incident investigation and response, let’s learn about how Defender Experts analysts perform threat hunting for our customers.

 

Threat hunting

Another key service provided by Defender Experts for XDR is managed threat hunting within customer environments. Leveraging a wide array of internal Microsoft threat intelligence and data sources, we scour customer environments daily for malicious and suspicious activity that has not generated alerts. This managed threat hunting surfaces suspicious activity utilizing a multitude of techniques, including:

  • proprietary threat intelligence
  • known suspicious activity patterns
  • anomalous activity patterns

Often there is some overlap in the results of these methods. The next case is one such example.

 

Masquerading malware

This case was surfaced through my analysis of anomalous network connection activity. Specifically, I found that a file unique to a single endpoint in the organization was initiating outbound network connections to various IP addresses, including many over non-standard ports. Notably, many of the IP addresses this file was connecting to had active threat intelligence detections for a variety of malicious activities.

 

The file in question was called aaaaaa.exe. This file was detected as malware by multiple threat intelligence sources, had low worldwide prevalence (~500 devices), and was unsigned. These data points indicated that the file was suspicious, but I needed more information. Searching across the organization, I found a second device with a different version of aaaaaa.exe which was signed and clean. This told me that the first aaaaaa.exe was not a legitimate internal tool at this organization, and that I was on the trail of something.
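 

For readers who want to try a similar prevalence and signer check themselves, the FileProfile() enrichment function in advanced hunting can attach global prevalence and certificate data to a file hash. The sketch below reuses the file name from this case; everything else is generic, and the enrichment columns returned depend on coverage for the hash:

// Illustrative sketch only - enrich executions of the suspicious binary with
// global prevalence and signer information via the FileProfile() function.
DeviceProcessEvents
| where Timestamp > ago(30d) and FileName =~ "aaaaaa.exe"
| distinct SHA1, DeviceId, DeviceName
| invoke FileProfile(SHA1, 500)
| project DeviceName, SHA1, GlobalPrevalence, Signer, IsCertificateValid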

 

Unfortunately, the file had been created on the device months previously. Due to data retention limits, this meant I would not be able to see the file creation event to determine where it came from. But I could determine that there had been multiple executions of aaaaaa.exe within the retention window, each of which resulted in calls to the following suspicious executables. As mentioned before, these executions also resulted in large numbers of suspicious outbound network connections.

 

File name: bbbbb.exe

Threat Intel: File is signed but has detections from multiple threat intel sources, one of which detects this file as CobaltStrike.

 

File name: ccccc.exe

Threat Intel: File is signed but is detected as a worm by one threat intel source.
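 

To see those follow-on executions, a query such as the sketch below lists every process spawned by the suspicious parent binary. The device ID is a placeholder, and the 30-day lookback is assumed to fit within the retention window:

// Illustrative sketch only - list processes spawned by the suspicious parent binary.
DeviceProcessEvents
| where Timestamp > ago(30d) and DeviceId == "Device ID"
| where InitiatingProcessFileName =~ "aaaaaa.exe"
| project Timestamp, FileName, FolderPath, ProcessCommandLine, SHA256
| sort by Timestamp asc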

 

After discovering all this information, I immediately sent a communication to the customer. In addition to the information above, I included the following AHQ so that the customer could review the network connections from these files.

 

 

 

// AHQ – network connections from suspicious executables
DeviceNetworkEvents
| where Timestamp > ago(10d) and DeviceId == "Device ID"
    and ActionType == "ConnectionSuccess"
    and InitiatingProcessFileName in~ ("aaaaaa.exe", "bbbbb.exe", "ccccc.exe")

 

 

 

Finally, I provided the following recommendations to the customer.

  1. Stop and quarantine aaaaaa.exe, bbbbb.exe, and ccccc.exe.
  2. Delete the [REDACTED] folder and all of its contents from the device.
  3. Run a full AV scan on the device.
  4. Create indicators to block the file SHA256 values.

In the absence of the Defender Experts for XDR discovery of this issue, the outcome could have been a complete device compromise and potentially a compromise of the extended customer network. These files may indeed have been malware dropped on the computer in order to gain control of it and beacon out to external command-and-control (C2) servers. In that case the device could have ended up being an entry point for a widespread compromise of the customer environment.

 

Now that we’ve gotten a taste of Defender Experts managed threat hunting, let’s learn how the XDR team helps customers improve their overall security posture.

 

Security posture improvements

A final key aspect of the Defender Experts for XDR service is collaboration with the customer on overarching security posture improvements. When not actively responding to incidents or hunting threats, Defender Experts analysts spend their time identifying vulnerabilities, misconfigurations, and other security issues in our customers’ environments. Any acute issues are, of course, reported to the customer for immediate remediation. Larger-scale and longer-term improvements are also evaluated and recommended to customers based on their risk profiles.
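 

As one illustrative example of this kind of proactive review (assuming the Defender Vulnerability Management tables are populated in the tenant’s advanced hunting data), a quick query can rank critical vulnerabilities by the number of affected devices to help prioritize the conversation with the customer:

// Illustrative sketch only - rank critical CVEs by how many devices are affected.
DeviceTvmSoftwareVulnerabilities
| where VulnerabilitySeverityLevel == "Critical"
| summarize AffectedDevices = dcount(DeviceId) by CveId, SoftwareName, RecommendedSecurityUpdate
| sort by AffectedDevices desc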

 

ETR override analysis

I performed this analysis as a result of multiple conversations with a customer. They had reported that their queue was full of alerts with the title ‘Phish delivered due to an ETR override.’ The customer stated that all of these alerts were caused by their continuous phishing simulations, but from my own experience investigating the customer’s incidents I had doubts that this was accurate. For background, ETR in this context stands for Exchange transport rule, more commonly known as a mail flow rule. These are rules that an organization has configured to identify and take action on emails with specific characteristics.

 

In order to have all the facts in future discussions with this customer, I began a deep-dive analysis on these alerts and their causes. The outcomes of this analysis, including queries that would allow the customer to reproduce the results in their own Microsoft 365 Defender portal, were as follows.

 

Over the previous 30 days, ~30% of the emails in ETR override alerts came from senders other than the phishing simulation provider. Over the previous 90 days, ~20% of emails in ETR override alerts came from senders other than the phishing simulation provider.

 

 

 

let _startTime = ago(30d);
let nmids = AlertInfo
| where Timestamp > _startTime and Title == "Phish delivered due to an ETR override"
| join kind=leftouter (AlertEvidence | where Timestamp > _startTime) on AlertId
| distinct NetworkMessageId;
//
EmailEvents
| where Timestamp > _startTime and NetworkMessageId in (nmids)
| summarize arg_max(Timestamp, *) by NetworkMessageId
| extend IsPhishSim = iff(InternetMessageId has "phishsimidentifier", true, false)
| summarize count() by IsPhishSim

 

 

 

Diving further into the non-phishing-simulation emails delivered due to ETR overrides, I found that over the previous 30 days nearly 100% of them were detected as phishing with high confidence. The vast majority of these originated from a small set of public IP addresses. These IPs all resolved to customer mail servers and were explicitly allowed by the organization’s inbound spam bypass mail flow rule (ETR).

 

 

 

let _startTime = ago(30d);
let nmids = AlertInfo
| where Timestamp > _startTime and Title == "Phish delivered due to an ETR override"
| join kind=leftouter (AlertEvidence | where Timestamp > _startTime) on AlertId
| distinct NetworkMessageId;
//
EmailEvents
| where Timestamp > _startTime and NetworkMessageId in (nmids)
| summarize arg_max(Timestamp, *) by NetworkMessageId
| extend IsPhishSim = iff(InternetMessageId has "phishsimidentifier", true, false)
| where not(IsPhishSim) and ThreatTypes has "Phish" and ConfidenceLevel has "High"
| summarize count() by SenderIPv4

 

 

 

Furthermore, the set of emails involved in ETR override alerts that did not come from the phishing simulation provider had gone on to generate over 50 additional alerts over the previous 30 days (not including the ETR override alerts themselves).

 

 

 

let _startTime = ago(30d);
let nmids = AlertInfo
| where Timestamp > _startTime and Title == "Phish delivered due to an ETR override"
| join kind=leftouter (AlertEvidence | where Timestamp > _startTime) on AlertId
| distinct NetworkMessageId;
//
let nonPhishSimNmids = EmailEvents
| where Timestamp > _startTime and NetworkMessageId in (nmids)
| summarize arg_max(Timestamp, *) by NetworkMessageId
| extend IsPhishSim = iff(InternetMessageId has "phishsimidentifier", true, false)
| where not(IsPhishSim)
| distinct NetworkMessageId;
//
AlertEvidence
| where Timestamp > _startTime and NetworkMessageId in (nonPhishSimNmids)
| join kind=leftouter (AlertInfo | where Timestamp > _startTime) on AlertId
| where Title != "Phish delivered due to an ETR override"
| distinct AlertId, Title

 

 

 

Based on this analysis, I recommended that the customer remove the explicit IP allowlist for spam filter bypass. This could be accomplished in the Exchange Admin Center (EAC) or via PowerShell by disabling the spam filter bypass rule or simply removing the IPs from it (see documentation). Leaving it in place would continue to allow phishing and spam emails to be delivered to employee inboxes without being fully filtered by Defender for Office 365. This in turn would lead to increased rates of phishing and spam emails being successfully delivered, and thereby an increased likelihood of employees succumbing to those attempts.

 

Conclusion

The life of a Defender Experts for XDR analyst is a busy one, and one that draws us into many different niches within the security field depending on the day. Every day, our mission is to keep our customers’ environments secure and help ensure they are protected from future threats. If you’re interested in learning more, see the Microsoft Defender Experts for XDR product documentation.

 
