Microsoft Detection and Response Team (DART)
Total Identity Compromise: Microsoft Incident Response lessons on securing Active Directory
When Microsoft Incident Response (formerly DART/CRSP) is engaged during an incident, almost all environments include an on-premises Active Directory component. In most of these engagements, threat actors have taken full control of Active Directory, i.e., total domain compromise. Total domain compromise often starts with the compromise of a regular non-privileged user rather than a domain admin. Threat actors can use that account to discover misconfigurations and attack paths in Active Directory that lead to full domain control. Oftentimes, threat actors leverage freely available tools such as AdFind, AD Explorer, or BloodHound to find attack paths through Active Directory environments. After total domain compromise, restoring trust back into Active Directory can take significant time and investment.

To aid in our investigations, Microsoft Incident Response leverages a custom-built Active Directory enumeration tool to retrieve metadata about users, groups, permissions, group policies and more. Microsoft Incident Response uses this data to not only aid in the investigation, but also to shape attacker eviction and compromise recovery plans and to provide best practice recommendations on taking back and maintaining positive identity control. In addition to the Microsoft Incident Response custom tool, there are other tools, such as Defender for Identity, and open-source tools such as BloodHound and PingCastle, that you can use to secure Active Directory in your own environment.

Across all industry verticals, Microsoft Incident Response often finds similar issues within Active Directory environments. In this blog, we will be highlighting some of the most common issues seen in on-premises Active Directory environments and provide guidance on how to secure those weaknesses. These include:

Initial Access – Weak password policies, excessive privilege and poor credential hygiene, insecure account configuration
Credential Access – Privileged credential exposure, Kerberoasting, insecure delegation configuration, Local Administrator Password Solution (LAPS) misconfiguration, excessive privilege via built-in groups
Privilege Escalation – Access control list (ACL) abuse, escalation via Exchange permissions, Group Policy abuse, insecure trust configuration, compromise of other Tier 0 assets

Initial Access

Weak Password Policies

It is not uncommon for Microsoft Incident Response to engage with customers where accounts have weak or easy-to-guess credentials, including those of privileged users such as Domain Admins. Simple password spray attacks can lead to the compromise of such accounts. If these standard user accounts are provided VPN or remote access without multi-factor authentication, the risk is increased: threat actors can connect to the VPN via devices in their control and begin reconnaissance of the environment remotely. From here, a threat actor can then attempt to escalate to Domain Admin privileges via weaknesses in Active Directory.

Recommendation

Where possible, Microsoft Incident Response recommends deploying passwordless authentication technology, such as Windows Hello for Business (which uses biometrics such as facial recognition or fingerprints) or FIDO2 security keys. Fully deploying passwordless authentication allows you to disallow password authentication, which eliminates password-based attack vectors (such as password sprays and phishing) for those users.
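Whichever path you take, it also helps to watch for the password spray activity that exploits weak passwords in the meantime. The query below is a minimal sketch, assuming Defender for Identity data is available in Advanced Hunting via the IdentityLogonEvents table; the time window and threshold are arbitrary starting points, not recommended values, and should be tuned for your environment.

// Sketch: surface possible password spray - one source IP failing logons against many accounts.
// Assumes the standard IdentityLogonEvents schema; thresholds are placeholders.
IdentityLogonEvents
| where Timestamp > ago(1d)
| where ActionType == "LogonFailed"
| summarize FailedAccounts = dcount(AccountUpn), Attempts = count(), SampleAccounts = make_set(AccountUpn, 10) by IPAddress, bin(Timestamp, 1h)
| where FailedAccounts > 20
| order by FailedAccounts desc

A similar aggregation against cloud sign-in logs can cover federated or remote access entry points if that telemetry is available in your workspace.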
If you are not yet ready to begin deploying passwordless solutions, you can increase the strength of your on-premises password policy through group policy and fine-grained password policies. Current guidance recommends longer (14 or more characters) but less complex passwords (no special characters or similar required), with users having to change them much less frequently. This discourages users from cycling through easy-to-guess passwords to satisfy complexity and rotation requirements, such as changing their password from 'Monday10' to 'Monday11'. If you are licensed for Azure Active Directory P1 or higher, you can also deploy Azure Active Directory Password Protection, which can disallow your users from using easy-to-guess passwords even in on-premises Active Directory. You can also ban custom words unique to your business, such as the name of your company or the city in which you operate.

For both passwordless and stronger password policies, if you work in a complex environment where the rollout of those solutions may take some time, start by targeting your Domain Admins and other privileged users. Your Tier 0 accounts can, and should, be held to a higher security standard.

Excessive Privilege and Poor Credential Hygiene

One of the most common issues found in Active Directory is accounts, particularly service accounts, being assigned too much privilege. In Microsoft Incident Response engagements, it is not uncommon to find multiple service accounts and named user accounts granted Domain Admin privileges. Additionally, service accounts used for applications or scripts are often granted local administrative access over all workstations and servers. Though this is an easy way to allow a product or script to function, these accounts then become a weak point in your security. Service accounts are attractive targets because the passwords are rarely rotated. Security controls for them are often weaker, and they can't be protected by MFA. Furthermore, the passwords for these accounts are often stored in clear text, whether that be sent in email, saved to text files on devices, or used in clear text in command line arguments. The combination of many Domain Admin accounts and poor technical controls over those accounts increases the risk of credential theft.

This excessive privilege also extends to too many users having local administrative rights on their own devices. If a compromised user does not have local administrative rights on their device, it is harder for a threat actor to continue to move laterally in the environment from that device.

Recommendation

There is no specific guidance on how many Domain Admin accounts should exist in each environment, as the requirements for privileged accounts will be unique for each environment. With that said, any requests to have additional Domain Admins should be scrutinized closely, with the preference always being to grant a lower level of privilege, particularly to service accounts. Even though adding service accounts to Domain Admins is an easy way to ensure an application works, most of these accounts can be assigned much less privilege and still function correctly. They can also often be granted access to only a subset of devices rather than all workstations and servers. If you don't have strong controls to govern secure credential practice for your most important accounts, then the more Domain Admin level accounts you add, the more risk you incur.
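One practical way to measure how far local administrative rights have spread is to count, per account, how many distinct devices that account has signed in to as a local administrator. The sketch below assumes Microsoft Defender for Endpoint is deployed and uses the DeviceLogonEvents table; the threshold is a placeholder.

// Sketch: find accounts (often service accounts) holding local admin rights across many devices.
// Assumes the Defender for Endpoint DeviceLogonEvents schema.
DeviceLogonEvents
| where Timestamp > ago(30d)
| where ActionType == "LogonSuccess" and IsLocalAdmin == true
| summarize DeviceCount = dcount(DeviceId), SampleDevices = make_set(DeviceName, 10) by AccountDomain, AccountName
| where DeviceCount > 25   // placeholder threshold - tune for your environment
| order by DeviceCount desc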
For service accounts, investigate whether Group Managed Service Accounts (gMSA), which can provide automatic password management, would be suitable for the workload.

Insecure Account Configuration

In Active Directory, misconfiguration can be a reason the security of individual user accounts is weaker than it could be. Some of the settings to scrutinize for proper configuration include:

Do not require Kerberos pre-authentication.
Store password using reversible encryption.
Password not required.
Password stored with weak encryption.

Enabling any of these settings drastically reduces the security of an account. An adversary can enumerate a directory with relative ease to find any accounts that have these flags. The credentials for these accounts may then be easier for a threat actor to retrieve.

Recommendation

Defender for Identity, via the Secure Score portal, provides an excellent summary of these risky account flags. For each configuration item, it lists which accounts are affected and how to remediate the issue. Other tooling such as BloodHound or PingCastle can also flag these account issues.

Credential Access

Privileged Credential Exposure

During cyber-attacks, adversaries often seek to obtain privileged credentials. These credentials are viewed as 'crown jewels' because they allow threat actors to complete their objectives. Once privileged credentials are obtained, they can be used to:

Add additional persistence mechanisms, such as scheduled tasks, installing services, or creating additional user accounts.
Disable or bypass endpoint antivirus or other security controls.
Deploy malware or ransomware.
Exfiltrate sensitive data.

As administrators log on to devices directly or connect to devices remotely to complete their day-to-day work, they may leave behind privileged credentials. Threat actors can leverage tools such as Mimikatz or secretsdump (part of the Impacket framework) to retrieve those credentials. As privileged users log on to more and more machines, attackers have additional opportunities to locate and extract those credentials. For example, if members of the Domain Admins group regularly log on to end user workstations to troubleshoot issues, then Domain Admin credentials may be exposed on each device, increasing a threat actor's chances of locating and extracting them.

To help customers understand this privileged credential spread, Microsoft Incident Response collects logon telemetry from the event logs on devices, signals from Microsoft Defender for Endpoint, or both. From that data, a map is created of Tier 0 accounts, such as Domain Admins, logging onto devices that are not considered Tier 0, such as member servers and workstations.

Figure 1: Map of Domain Admin login paths to non-Tier 0 servers.

In this visual, the green circles represent Domain Admins. The red dotted lines represent RDP logons to devices, while the black dotted lines represent network logons. In this example, we can see two domain admins that have logged onto three different servers. Should a threat actor compromise one of these three servers, there is potential for theft of a Domain Admin level credential. The larger the environment, the more this issue becomes apparent. If we add additional admins and other non-Tier 0 devices, we can see the immediate impact on our footprint.

Figure 2: Map of Domain Admin login paths to non-Tier 0 servers and other devices.

With additional Domain Admins and those admins logging onto other non-Tier 0 devices, credential exposure has increased significantly.
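If you want to approximate this kind of map yourself, Defender for Endpoint logon telemetry can be summarized in a similar way. The sketch below assumes the DeviceLogonEvents schema; the Tier 0 account names and domain controller names are hypothetical placeholders you would replace with your own inventory.

// Sketch: approximate the credential exposure map - interactive/RDP logons by Tier 0 accounts
// to devices that are not domain controllers. Account and device names below are placeholders.
let Tier0Accounts = dynamic(["da-jsmith", "da-akumar"]);      // replace with your Tier 0 account list
let DomainControllers = dynamic(["dc01", "dc02"]);            // replace with your DC names (match the DeviceName format in your tenant)
DeviceLogonEvents
| where Timestamp > ago(30d)
| where ActionType == "LogonSuccess"
| where LogonType in ("Interactive", "RemoteInteractive")
| where AccountName in~ (Tier0Accounts)
| where not(DeviceName has_any (DomainControllers))
| summarize Logons = count(), ExposedDevices = make_set(DeviceName, 50) by AccountName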
The goal of these diagrams is to help customers visualize where privileged credentials are being left on their network and to start thinking from the mindset of a threat actor. In large and complex Active Directory environments, these diagrams can become immense, and the number of endpoints climbs into the thousands.

Recommendation

Microsoft's solution to reduce privileged credential exposure is to implement the enterprise access model. This administration model seeks to reduce the spread of privileged credentials by restricting the devices that Domain Admins (and similar accounts) can log on to. In large and complex environments, it is safe to assume that some users and devices will be compromised. Your most privileged accounts should only access Tier 0 assets from hardened devices, known as privileged access workstations (PAWs). Using least-privileged access is a key part of Zero Trust principles. By reducing the opportunity to extract privileged credentials, we reduce the impact of compromise on a single device or user.

Deploying the enterprise access model is a journey, and every organization is at a different stage of that journey. Regardless of your current posture, you can always reduce privileged credential spread, both through technical controls and changes to the way staff work. This table in the Microsoft Learn documentation lists various logon types and whether credentials are left on the destination device. When administering remote systems, Microsoft Incident Response recommends using methods that do not leave credentials behind wherever possible.

Defender for Identity also maps these lateral movement paths, showing paths where compromise of a regular user can lead to domain compromise. These are integrated directly within user and computer objects in the Microsoft 365 Defender portal.

Figure 3: Defender for Identity page for mapping a user's lateral movement paths.

The highest risk users and computers are also shown in the Secure Score portal, allowing you to remediate the objects most at risk.

Kerberoasting

Kerberoasting is a technique used by threat actors to crack the passwords of accounts, generally service accounts, that have service principal names (SPNs) associated with them. If a regular user in Active Directory is compromised, a threat actor can use tools such as Rubeus to request a service ticket (which is encrypted using a key derived from the service account's password) for any account with an SPN configured. The threat actor can then extract this ticket from memory and attempt to crack the password offline. If they can crack the password, they can then authenticate as and assume the privileges of that service account. It is also not uncommon for Microsoft Incident Response to detect SPNs registered to privileged admin accounts or service accounts that have been added to privileged groups. Often these SPNs are configured for testing and then never removed, leaving them vulnerable to Kerberoasting.

Recommendation

Microsoft Incident Response recommends that you review all accounts configured with SPNs to ensure they are still required. For those that are actively in use, ensure the passwords associated with those accounts are extremely complex and rotated where practical. Defender for Identity includes logic to detect Kerberoasting activity in your environment. By taking signals from your domain controllers, Defender for Identity can help detect users enumerating your domain looking for Kerberoast-able accounts or attempts to actively exploit those accounts.
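If domain controller security events are also forwarded to a Log Analytics workspace, a classic complementary hunt is to look for Kerberos service ticket requests (event 4769) that use RC4 encryption, which many Kerberoasting tools request. This is a sketch only: it assumes the SecurityEvent table is populated from your domain controllers and that the ServiceName and TicketEncryptionType fields are parsed, and note that RC4 requests can also be legitimate in environments with older systems.

// Sketch: possible Kerberoasting - TGS requests (4769) using RC4 (0x17) for non-computer accounts.
// Assumes DC security events are collected into the SecurityEvent table in Log Analytics.
SecurityEvent
| where TimeGenerated > ago(7d)
| where EventID == 4769
| where TicketEncryptionType == "0x17"        // RC4 - commonly requested by Kerberoasting tools
| where ServiceName !endswith "$"             // ignore computer accounts
| where ServiceName !~ "krbtgt"
| summarize Requests = count(), Services = make_set(ServiceName, 20) by Account
| order by Requests desc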
Insecure Delegation Configuration

Unconstrained Kerberos delegation provides the ability for an entity to impersonate other users. This helps with authentication through multi-tier applications. For example, a web server running IIS may have delegation configured to access an SQL server, which stores the data for the web site. When you log onto the web server, the web server then uses delegation to authenticate to SQL on your behalf. In doing this, the user's Kerberos Ticket Granting Ticket (TGT) is stored in memory on the web server. If a threat actor compromises that web server, they could retrieve those tickets and impersonate any users that had logged on. If a Domain Admin happened to log on, then the threat actor would have access to a Domain Admin TGT and could assume full control of Active Directory.

Recommendation

Review all the users and devices that are enabled for delegation. These are available in the Defender for Identity section of Secure Score. If delegation is required, it should be restricted to only the required services, not fully unconstrained. Administrative accounts should never be enabled for delegation. You can prevent these privileged accounts from being targeted by enabling the 'Account is sensitive and cannot be delegated' flag on them. You can optionally add these accounts to the 'Protected Users' group. This group provides protections over and above just preventing delegation and makes them even more secure; however, it may cause operational issues, so it is worth testing in your environment.

Local Administrator Password Solution (LAPS)

Microsoft Incident Response often encounters situations where LAPS has not been deployed to an environment. LAPS is the Microsoft solution to automatically manage the password for the built-in Administrator account on Windows devices. When machines are built or imaged, they often have the same password for the built-in Administrator account. If this is never changed, a single password can give local administrative rights to all machines and may provide opportunities for lateral movement. LAPS solves this problem by ensuring each device has a unique local administrator password and rotates it regularly.

Additionally, even in cases where LAPS has been deployed, sometimes it has not been fully operationalized by the business. As a result, despite LAPS managing the local administrator account on these devices, there are still user groups that have local administrative rights over all the workstations or all the servers, or both. These groups can contain numerous users, generally belonging to the service desk or other operations staff. These operational staff then use their own accounts to administer those devices, rather than the LAPS credentials. IT configurations can also exist where a secondary, non-LAPS managed account still exists with an easy-to-guess password, defeating the benefit gained by deploying LAPS. As noted earlier, groups with broad administrative access give threat actors additional opportunity to compromise privileged credentials. Should an endpoint where one of these accounts has logged on become compromised, a threat actor could have credentials to compromise all the devices in the network.

Recommendation

It is important to not only deploy LAPS to endpoints, but to ensure that IT standard operating procedures are updated to ensure LAPS is used.
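One rough way to check whether LAPS is actually being used day to day, rather than staff falling back to their own domain accounts, is to compare how administrative logons are performed. The sketch below assumes Defender for Endpoint telemetry and assumes the LAPS-managed account is the built-in 'administrator'; adjust the name if the account has been renamed in your environment.

// Sketch: compare use of the (assumed LAPS-managed) built-in Administrator account
// with other accounts performing local-admin logons. Assumes the DeviceLogonEvents schema.
DeviceLogonEvents
| where Timestamp > ago(30d)
| where ActionType == "LogonSuccess" and IsLocalAdmin == true
| where LogonType in ("Interactive", "RemoteInteractive")
| extend AccountClass = iff(AccountName =~ "administrator", "Built-in local Administrator", "Other admin account")
| summarize Logons = count(), Devices = dcount(DeviceId) by AccountClass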
Operationalizing LAPS in this way allows companies to remove privilege from administrative accounts and reduce credential theft risk across the business. Additionally, it is crucial to understand which users can retrieve the LAPS password for use, since the ability to retrieve this password grants local administrative access to that device. The ability to read the LAPS password is controlled via the 'ms-Mcs-AdmPwd' attribute and should be audited to ensure access is only granted to users that require it.

Excessive Privilege via Built-In Groups

During incident response, companies often have alerting and monitoring in place for changes to groups like Domain and Enterprise Admins. These groups are widely known to hold the highest level of privilege in Active Directory. However, there are other privileged built-in groups that are attractive to threat actors and are often not held to the same level of scrutiny. Groups such as Account and Server Operators have wide-ranging privilege over your Active Directory. For example, by default Server Operators can log on to Domain Controllers, restart services, back up and restore files, and more.

Recommendation

It is recommended that, where possible, privileged built-in groups not contain any users. Instead, the appropriate privilege should be granted specifically to users that require it. Additionally, Microsoft Incident Response recommends reviewing the current membership of those groups and adding additional alerting for changes to them, in the same way you would alert on Domain or Enterprise Admin changes.

Privilege Escalation

Access Control List Abuse

Access control list (ACL) misconfiguration is one of the most common issues Microsoft Incident Response finds in Active Directory environments. Active Directory ACLs are exceptionally granular, complex, and easy to configure incorrectly. It is easy to reduce the security posture of your Active Directory environment without having any operational impact on your users. As ACLs are configured in your environment through business-as-usual activities, attack paths start to form. These attack paths create an escalation path from a low-privileged user to total domain control. A threat actor can take advantage of the paths created by the combination of excessive privilege and scope on ACLs.

Two common ACL permissions that Microsoft Incident Response sees regularly in Active Directory are:

GenericAll – this privilege is the same as Full Control access. If a user was compromised and that user had GenericAll over a highly privileged group, then the threat actor could add additional members to that group.
WriteDacl – this privilege allows manipulation of the ACL on an object. With this privilege a threat actor can change the ACL on an object such as a group. If a user was compromised and that user had WriteDacl over a highly privileged group, the threat actor could add a new ACL to that group. That new ACL could then give them access to add additional members to the group, such as themselves.

These permissions are often set at the top of the Active Directory hierarchy. They are also often applied to users and groups that do not require those permissions, effectively granting those group members full domain control. The members of these groups are very rarely secured in the same way that Domain and Enterprise Admins are.
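Whether the escalation path runs through operator groups or ACL abuse, it frequently ends with an unexpected addition to a privileged group, so membership changes are worth hunting for directly. The sketch below assumes Defender for Identity data is available in Advanced Hunting and uses the IdentityDirectoryEvents table; the ActionType string and the group list are assumptions you should validate against your own data.

// Sketch: membership changes to privileged or operator groups, from Defender for Identity telemetry.
// Group names and the ActionType value should be validated in your tenant.
IdentityDirectoryEvents
| where Timestamp > ago(30d)
| where ActionType == "Group Membership changed"
| where AdditionalFields has_any ("Domain Admins", "Enterprise Admins", "Account Operators", "Server Operators", "Backup Operators", "DnsAdmins")
| project Timestamp, ActionType, TargetAccountUpn, AdditionalFields
| order by Timestamp desc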
In addition to these ACL permissions, Microsoft Incident Response often detects insecure ACLs on the AdminSDHolder object, which is responsible for managing permissions on protected users and groups. If an adversary can manipulate the ACL on AdminSDHolder, it will be propagated to those protected users and groups when the SDProp process runs. The adversary will then have rights to change the membership of protected groups such as Domain Admins, allowing them to add themselves. The documentation for BloodHound describes these and several other ACL 'edges' that can be abused for privilege escalation.

Recommendation

Microsoft Incident Response recommends auditing permissions throughout your Active Directory environment using tools such as Defender for Identity, running sanctioned audits of attack paths using BloodHound, and remediating paths that can lead to domain compromise.

Escalation via Exchange Permissions

Prior to the use of corporate cloud email services such as Office 365, customers ran their own on-premises Exchange environments. Many customers still maintain a complete on-premises Exchange environment. On-premises Exchange and Active Directory have always been tied closely together, with Exchange maintaining high privilege through Active Directory. Even in environments that have migrated user mailboxes to Office 365, an on-premises Exchange footprint often remains. It may exist to manage users not yet migrated, for legacy applications that are not able to integrate with Office 365, or to service non-internet connected workloads. These on-premises Exchange environments often retain high privilege through Active Directory, and groups such as 'Exchange Trusted Subsystem' and 'Exchange Servers' can have a direct path to total domain control. On-premises Exchange is also often internet-facing to allow users to access resources such as Outlook Web Access. Like any internet-facing service, this increases the surface area for attack. If a threat actor can obtain SYSTEM privilege on an Exchange server and Exchange still retains excessive permissions in Active Directory, then it can lead to complete domain compromise.

Recommendation

It is possible to decouple the privilege held by Exchange in Active Directory by deploying the split permissions model for Exchange. By deploying this model, permissions for Active Directory and Exchange are separated. After deploying the Exchange split permissions model, there are operational changes required for staff who administer both Exchange and Active Directory. If you don't want to deploy the entire split permissions model, you can still reduce the permissions Exchange has in Active Directory by implementing the changes in the following Microsoft guidance. If you have completely migrated to Office 365 but maintain on-premises Exchange servers for ease of management, you may now be able to turn off those on-premises servers.

Group Policy ACL Abuse

Group Policy is often a tool used by threat actors to establish persistence (via the creation of scheduled tasks), create additional accounts, or deploy malware. It is also used as a ransomware deployment mechanism. If a threat actor has not yet compromised a Domain Admin, they may have been able to compromise an account that holds permissions over Group Policy Objects. For instance, the ability to create, update, or even link policies may have been delegated to other groups. If an existing Group Policy is configured to run a startup script, a threat actor can change the path of that script to have it execute a malicious payload.
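Both AdminSDHolder tampering and unexpected Group Policy edits surface as directory object modifications when Directory Service Changes auditing is enabled on domain controllers. The sketch below is an assumption-heavy starting point: it presumes event 5136 is being audited and collected into the SecurityEvent table, and it searches the raw EventData field rather than parsed columns.

// Sketch: directory object modifications (event 5136) touching AdminSDHolder or Group Policy containers.
// Assumes Directory Service Changes auditing is enabled and DC events reach the SecurityEvent table.
SecurityEvent
| where TimeGenerated > ago(30d)
| where EventID == 5136
| where EventData contains "AdminSDHolder" or EventData contains "groupPolicyContainer"
| project TimeGenerated, Computer, Account, Activity, EventData
| order by TimeGenerated desc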
If a group policy exists to disable endpoint security tooling as an exemption, a threat actor could leverage the permission to link policies by applying that policy to all devices in the environment. This would not require the threat actor to update the policy, just change its scope. Additionally, regular users can be given additional privileges via User Rights Assignments. These privileges are often not required by the users and are granted accidentally.

Recommendation

The ability to manipulate Group Policy is a highly privileged action, and users and groups with delegated responsibility to manage it should be held to the same standards as Domain Admins or similar. Ensure that permissions to create, update, and link group policies are in line with least privilege principles. In large and complex environments, the number of Group Policies in use can be overwhelming, and it is not always clear which policies apply to which users and devices. Using tools such as Resultant Set of Policy (RSoP), you can model your Group Policy objects to see the overall effect on your users and devices.

Insecure Trust Configuration

SID history is a capability in Active Directory to aid in domain migration. It allows Domain Admins to simplify migration by applying the SID history (in simple terms, a list of permissions) from the old account to the new account. This helps the user retain access to resources once they are migrated. An adversary can target this capability by inserting the SIDs of groups such as Domain Admins into the SID history of an account in the trusted forest and using that account to take control of the trusting forest. This can be especially relevant during mergers and acquisitions, where trusts between Active Directory environments are configured to allow migrations.

Recommendation

Active Directory trusts should only be configured when absolutely required. If they are part of an acquisition or migration, then they should be decommissioned once migration is complete. For trusts that need to remain for operational reasons, SID filtering and Selective Authentication should be configured to reduce the attack path from other domains and forests.

Compromise of Other Tier 0 Assets

Historically, domain controllers have been at the center of Tier 0 infrastructure. While that is still true, Tier 0 has now expanded to include several interconnected systems. As ways of working have evolved, so has the underlying technology required to drive modern identity systems. In line with that, your Tier 0 footprint has also evolved and may now include systems such as:

Active Directory Federation Services
Azure Active Directory Connect
Active Directory Certificate Services

It also includes any other services or infrastructure, including third-party providers, that form part of your identity trust chain, such as privileged access management and identity governance systems.

Figure 4: Example of how Tier 0 assets connect to an identity trust chain.

It is important that all systems that form part of your end-to-end identity chain are included in Tier 0, and that the security controls you apply to domain controllers also apply to these systems. Due to the interconnected nature of these systems, compromise of any one of them could lead to complete domain compromise. Only Tier 0 accounts should retain local administrative privileges over these systems and, where practical, access to them should be via a privileged access workstation.
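Returning to the SID history abuse described above: when SID history is written to an account, domain controllers record security events 4765 (SID history added to an account) and 4766 (an attempt to add SID history failed). If those events are collected into Log Analytics, a simple hunt looks like the sketch below; outside of active migrations these events should be rare.

// Sketch: SID history additions or attempted additions, which are suspicious outside of migrations.
// Assumes DC security events are collected into the SecurityEvent table.
SecurityEvent
| where TimeGenerated > ago(90d)
| where EventID in (4765, 4766)
| project TimeGenerated, Computer, EventID, Activity, Account, TargetAccount
| order by TimeGenerated desc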
Summary

In large and complex environments, Microsoft Incident Response often sees combinations of the above issues that reduce identity security posture significantly. These misconfigurations allow threat actors to elevate from a single non-privileged user all the way to your crown jewel Domain Admin accounts.

Figure 5: Kill chain showing how domain compromise can start from a single compromised user.

For example, in the above kill chain, the first user was compromised due to a weak password policy. Through the initial poor password policy, additional bad credential hygiene, and ACL misconfiguration, the domain was compromised. When you multiply all the combinations of user accounts, group access, and permissions in Active Directory, many paths can exist to Domain Admin.

Attacks on Active Directory are ever evolving, and this blog covers only some of the more common issues Microsoft Incident Response observes in customer environments. Ultimately, any changes made to Active Directory can either increase or decrease the risk that a threat actor can take control of your environment. To ensure that risk is consistently decreasing, Microsoft Incident Response recommends a constant cycle of the below:

Reduce Privilege – assign all access according to the principle of least privilege. Additionally, deploy the enterprise access model to reduce privileged credential exposure. Combined, these will reduce the likelihood that a single device or user compromise leads to total domain compromise.
Audit Current Posture – use tools such as Defender for Identity and sanctioned use of BloodHound and PingCastle to audit your current Active Directory security posture and remediate the issues both surfaced through those tools and described in this blog.
Monitor Changes – monitor for changes to your Active Directory environment that can reduce your security posture or expose additional attack paths to domain compromise.
Actively Detect – alert on potential signs of compromise using Defender for Identity or custom detection rules.

A message that Microsoft Incident Response often leaves customers with is that securing Active Directory requires continued governance. You can't 'deploy' Active Directory security and never have to look at it again. Active Directory security is about constant improvement and ensuring those misconfigurations and attack paths are mitigated before an adversary finds them.

Hunting for MFA manipulations in Entra ID tenants using KQL
Cloud security is a top priority for many organizations, especially given that threat actors are constantly looking for ways to compromise cloud accounts and access sensitive data. One of the common, and highly effective, methods that attackers use is changing the multi-factor authentication (MFA) properties for users in compromised tenants. This can allow the attacker to satisfy MFA requirements, disable MFA for other users, or enroll new devices for MFA. Some of these changes can be hard to detect and monitor, as they are typically performed as part of standard helpdesk processes and may be lost in the noise of all the other directory activities occurring in the Microsoft Entra audit log.

In this blog, we will show you how to use Kusto Query Language (KQL) to parse and hunt for MFA modifications in Microsoft Entra audit logs. We will explain the different types of MFA changes that can occur, how to identify them, and how to create user-friendly outputs that can help you investigate and respond to incidents involving these techniques. We will also share some tips and best practices for hunting for MFA anomalies, such as looking for unusual patterns, locations, or devices. By the end of this blog, you will have a better understanding of how to track MFA changes in compromised tenants using KQL queries and how to improve your cloud security posture.

Kusto to the rescue

Microsoft Entra audit logs record changes to MFA settings for a user. When a user's MFA details are changed, two log entries are created in the audit log. One is logged by the service "Authentication Methods" and category "UserManagement", where the activity name is descriptive (e.g., "User registered security info") but lacks details about what alterations were made. The other entry has the activity name "Update User" and shows the modified properties. This artifact is challenging because "Update User" is a very common operation and occurs in many different situations. Using the Microsoft Entra portal here can pose challenges due to the volume of data, especially in large tenants, but KQL can help simplify this task.

By default, Microsoft Entra audit logs are available through the portal for 30 days, regardless of the license plan; however, getting this data via KQL requires pre-configuration. In this blog, we provide ready-to-use KQL queries for both Azure Log Analytics and Microsoft Defender 365 Advanced Hunting, allowing you to analyze and find these scenarios in your own tenant.

Figure 1: Diagram of data flow of logs related to account manipulation

Table 1: Comparison between Azure Log Analytics and Defender 365 Advanced Hunting

- Interface: Azure Log Analytics – Azure Portal, but can be connected to Azure Data Explorer. Defender 365 Advanced Hunting – Defender 365 Portal.
- Retention: Azure Log Analytics – configurable. Defender 365 Advanced Hunting – 30 days.
- Pre-requisite: Azure Log Analytics – Log Analytics Workspace. Defender 365 Advanced Hunting – Microsoft Defender for Cloud Apps license.
- Cost: Azure Log Analytics – minimal cost. Defender 365 Advanced Hunting – no additional cost.
- Required configuration: Azure Log Analytics – diagnostic settings need to be configured in Microsoft Entra ID to send Audit Logs to Log Analytics. Defender 365 Advanced Hunting – the Microsoft 365 connector needs to be enabled in Microsoft Defender for Cloud Apps.
- Column containing modified properties: Azure Log Analytics – TargetResources. Defender 365 Advanced Hunting – RawEventData.

Know your data

There are 3 key MFA properties that can be changed, all of which can be found in the "Update User" details:

1. StrongAuthenticationMethod: The registered MFA methods for the user and the default method chosen.
The methods are represented as numbers ranging from 0 to 7 as follows:

Table 2: Mapping Strong Authentication Methods numbers to names

0 – TwoWayVoiceMobile: Two-way voice using mobile phone
1 – TwoWaySms: Two-way SMS message using mobile phone
2 – TwoWayVoiceOffice: Two-way voice using office phone
3 – TwoWayVoiceOtherMobile: Two-way voice using alternative mobile phone number
4 – TwoWaySmsOtherMobile: Two-way SMS message using alternative mobile phone number
5 – OneWaySms: One-way SMS message using mobile phone
6 – PhoneAppNotification: Notification-based MFA in the Microsoft Authenticator mobile app (code and notification)
7 – PhoneAppOTP: OTP-based 2FA in the Microsoft Authenticator mobile app, a third-party authenticator app without push notifications, or a hardware or software OATH token which requires the user to enter a code displayed in a mobile application or device (code only)

2. StrongAuthenticationUserDetails: User information for the following MFA methods:
- Phone Number
- Email
- Alternative Phone Number
- Voice Only Phone Number

3. StrongAuthenticationAppDetail: Information about the Microsoft Authenticator app registered by the user. This property contains many fields, but we are mainly interested in the following:
- Device Name: the name of the device that has the Authenticator app installed
- Device Token: a unique identifier for the device

Note: This information is available when the method used is PhoneAppNotification. For PhoneAppOTP, you will see DeviceName as NO_DEVICE and DeviceToken as NO_DEVICE_TOKEN, making it a popular choice for threat actors.

Let's go hunting!

Now that we know there are 3 different types of MFA properties that might be modified, and each one has a different format in the "Update User" activity, we require a different query for each type. Even though the queries may seem complex, the outcome is certainly nice!

Note: The KQL queries provided in this article do not have any time filters. Add time filters in the query or select them in the GUI as desired.

1. StrongAuthenticationMethod

JSON structure for modified properties:

"modifiedProperties": [{
    "displayName": "StrongAuthenticationMethod",
    "oldValue": "[{"MethodType":3,"Default":false},{"MethodType":7,"Default":true}]",
    "newValue": "[{"MethodType":6,"Default":true},{"MethodType":7,"Default":false}]"
}]

In the JSON above, we can compare the elements in the oldValue array against the newValue array to see which methods have been added or removed, and whether the Default method is different. By performing this comparison using KQL, we can extract the changed value, old value, and new value from each log entry and generate a friendly description alongside the Timestamp, Actor, and Target. If multiple properties were changed in the same operation, a separate row will be displayed for each in the output.

In Advanced Hunting:

//Advanced Hunting query to parse modified StrongAuthenticationMethod
let AuthenticationMethods = dynamic(["TwoWayVoiceMobile","TwoWaySms","TwoWayVoiceOffice","TwoWayVoiceOtherMobile","TwoWaySmsOtherMobile","OneWaySms","PhoneAppNotification","PhoneAppOTP"]);
let AuthenticationMethodChanges = CloudAppEvents
| where ActionType == "Update user."
    and RawEventData contains "StrongAuthenticationMethod"
| extend Target = tostring(RawEventData.ObjectId)
| extend Actor = tostring(RawEventData.UserId)
| mv-expand ModifiedProperties = parse_json(RawEventData.ModifiedProperties)
| where ModifiedProperties.Name == "StrongAuthenticationMethod"
| project Timestamp,Actor,Target,ModifiedProperties,RawEventData,ReportId;
let OldValues = AuthenticationMethodChanges
| extend OldValue = parse_json(tostring(ModifiedProperties.OldValue))
| mv-apply OldValue on (extend Old_MethodType=tostring(OldValue.MethodType),Old_Default=tostring(OldValue.Default) | sort by Old_MethodType);
let NewValues = AuthenticationMethodChanges
| extend NewValue = parse_json(tostring(ModifiedProperties.NewValue))
| mv-apply NewValue on (extend New_MethodType=tostring(NewValue.MethodType),New_Default=tostring(NewValue.Default) | sort by New_MethodType);
let RemovedMethods = AuthenticationMethodChanges
| join kind=inner OldValues on ReportId
| join kind=leftouter NewValues on ReportId,$left.Old_MethodType==$right.New_MethodType
| project Timestamp,ReportId,ModifiedProperties,Actor,Target,Old_MethodType,New_MethodType
| where Old_MethodType != New_MethodType
| extend Action = strcat("Removed (" , AuthenticationMethods[toint(Old_MethodType)], ") from Authentication Methods.")
| extend ChangedValue = "Method Removed";
let AddedMethods = AuthenticationMethodChanges
| join kind=inner NewValues on ReportId
| join kind=leftouter OldValues on ReportId,$left.New_MethodType==$right.Old_MethodType
| project Timestamp,ReportId,ModifiedProperties,Actor,Target,Old_MethodType,New_MethodType
| where Old_MethodType != New_MethodType
| extend Action = strcat("Added (" , AuthenticationMethods[toint(New_MethodType)], ") as Authentication Method.")
| extend ChangedValue = "Method Added";
let DefaultMethodChanges = AuthenticationMethodChanges
| join kind=inner OldValues on ReportId
| join kind=inner NewValues on ReportId
| where Old_Default != New_Default and Old_MethodType == New_MethodType and New_Default == "true"
| join kind=inner OldValues on ReportId
| where Old_Default1 == "true" and Old_MethodType1 != New_MethodType
| extend Old_MethodType = Old_MethodType1
| extend Action = strcat("Default Authentication Method was changed to (" , AuthenticationMethods[toint(New_MethodType)], ").")
| extend ChangedValue = "Default Method";
union RemovedMethods,AddedMethods,DefaultMethodChanges
| project Timestamp,Action,Actor,Target,ChangedValue,OldValue=case(isempty(Old_MethodType), "",strcat(Old_MethodType,": ", AuthenticationMethods[toint(Old_MethodType)])),NewValue=case(isempty(New_MethodType),"", strcat(New_MethodType,": ", AuthenticationMethods[toint(New_MethodType)]))
| distinct *

In Azure Log Analytics:

//Azure Log Analytics query to parse modified StrongAuthenticationMethod
let AuthenticationMethods = dynamic(["TwoWayVoiceMobile","TwoWaySms","TwoWayVoiceOffice","TwoWayVoiceOtherMobile","TwoWaySmsOtherMobile","OneWaySms","PhoneAppNotification","PhoneAppOTP"]);
let AuthenticationMethodChanges = AuditLogs
| where OperationName == "Update user" and TargetResources contains "StrongAuthenticationMethod"
| extend Target = tostring(TargetResources[0].userPrincipalName)
| extend Actor = case(isempty(parse_json(InitiatedBy.user).userPrincipalName),tostring(parse_json(InitiatedBy.app).displayName),tostring(parse_json(InitiatedBy.user).userPrincipalName))
| mv-expand ModifiedProperties = parse_json(TargetResources[0].modifiedProperties)
| where ModifiedProperties.displayName == "StrongAuthenticationMethod"
| project TimeGenerated,Actor,Target,TargetResources,ModifiedProperties,Id;
let OldValues = AuthenticationMethodChanges
| extend OldValue = parse_json(tostring(ModifiedProperties.oldValue))
| mv-apply OldValue on (extend Old_MethodType=tostring(OldValue.MethodType),Old_Default=tostring(OldValue.Default) | sort by Old_MethodType);
let NewValues = AuthenticationMethodChanges
| extend NewValue = parse_json(tostring(ModifiedProperties.newValue))
| mv-apply NewValue on (extend New_MethodType=tostring(NewValue.MethodType),New_Default=tostring(NewValue.Default) | sort by New_MethodType);
let RemovedMethods = AuthenticationMethodChanges
| join kind=inner OldValues on Id
| join kind=leftouter NewValues on Id,$left.Old_MethodType==$right.New_MethodType
| project TimeGenerated,Id,ModifiedProperties,Actor,Target,Old_MethodType,New_MethodType
| where Old_MethodType != New_MethodType
| extend Action = strcat("Removed (" , AuthenticationMethods[toint(Old_MethodType)], ") from Authentication Methods.")
| extend ChangedValue = "Method Removed";
let AddedMethods = AuthenticationMethodChanges
| join kind=inner NewValues on Id
| join kind=leftouter OldValues on Id,$left.New_MethodType==$right.Old_MethodType
| project TimeGenerated,Id,ModifiedProperties,Actor,Target,Old_MethodType,New_MethodType
| where Old_MethodType != New_MethodType
| extend Action = strcat("Added (" , AuthenticationMethods[toint(New_MethodType)], ") as Authentication Method.")
| extend ChangedValue = "Method Added";
let DefaultMethodChanges = AuthenticationMethodChanges
| join kind=inner OldValues on Id
| join kind=inner NewValues on Id
| where Old_Default != New_Default and Old_MethodType == New_MethodType and New_Default == "true"
| join kind=inner OldValues on Id
| where Old_Default1 == "true" and Old_MethodType1 != New_MethodType
| extend Old_MethodType = Old_MethodType1
| extend Action = strcat("Default Authentication Method was changed to (" , AuthenticationMethods[toint(New_MethodType)], ").")
| extend ChangedValue = "Default Method";
union RemovedMethods,AddedMethods,DefaultMethodChanges
| project TimeGenerated,Action,Actor,Target,ChangedValue,OldValue=case(isempty(Old_MethodType), "",strcat(Old_MethodType,": ", AuthenticationMethods[toint(Old_MethodType)])),NewValue=case(isempty(New_MethodType),"", strcat(New_MethodType,": ", AuthenticationMethods[toint(New_MethodType)]))
| distinct *

If we run the above queries, we get example output as below. In the output below, we can see a few examples of users who have had their MFA settings changed, who performed the change, and the old/new comparison, giving us areas to focus our attention on.

Figure 2: Example output from running the StrongAuthenticationMethods parsing query

2. StrongAuthenticationUserDetails

JSON structure for modified properties:

"ModifiedProperties": [{
    "Name": "StrongAuthenticationUserDetails",
    "NewValue": "[{"PhoneNumber": "+962 78XXXXX92","AlternativePhoneNumber": null,"Email": "contoso@contoso.com","VoiceOnlyPhoneNumber": null}]",
    "OldValue": "[{"PhoneNumber": "+962 78XXXXX92","AlternativePhoneNumber": null,"Email": null,"VoiceOnlyPhoneNumber": null}]"
}]

Again, we are interested in comparing values in OldValue and NewValue to see what details were changed, deleted, or updated. In the above example, we can see that Email was (null) in OldValue and (contoso@contoso.com) in NewValue, which means an email address was added to MFA details for this user.
In Advanced Hunting:

//Advanced Hunting query to parse modified StrongAuthenticationUserDetails
CloudAppEvents
| where ActionType == "Update user." and RawEventData contains "StrongAuthenticationUserDetails"
| extend Target = RawEventData.ObjectId
| extend Actor = RawEventData.UserId
| extend reportId = RawEventData.ReportId
| mv-expand ModifiedProperties = parse_json(RawEventData.ModifiedProperties)
| where ModifiedProperties.Name == "StrongAuthenticationUserDetails"
| extend NewValue = parse_json(replace_string(replace_string(tostring(ModifiedProperties.NewValue),"[",""),"]",""))
| extend OldValue = parse_json(replace_string(replace_string(tostring(ModifiedProperties.OldValue),"[",""),"]",""))
| mv-expand NewValue
| mv-expand OldValue
| where (tostring(bag_keys(OldValue)) == tostring(bag_keys(NewValue))) or (isempty(OldValue) and tostring(NewValue) !contains ":null") or (isempty(NewValue) and tostring(OldValue) !contains ":null")
| extend ChangedValue = tostring(bag_keys(NewValue)[0])
| extend OldValue = tostring(parse_json(OldValue)[ChangedValue])
| extend NewValue = tostring(parse_json(NewValue)[ChangedValue])
| extend OldValue = case(ChangedValue == "PhoneNumber" or ChangedValue == "AlternativePhoneNumber", replace_strings(OldValue,dynamic([' ','(',')']), dynamic(['','',''])), OldValue )
| extend NewValue = case(ChangedValue == "PhoneNumber" or ChangedValue == "AlternativePhoneNumber", replace_strings(NewValue,dynamic([' ','(',')']), dynamic(['','',''])), NewValue )
| where tostring(OldValue) != tostring(NewValue)
| extend Action = case(isempty(OldValue), strcat("Added new ",ChangedValue, " to Strong Authentication."),isempty(NewValue),strcat("Removed existing ",ChangedValue, " from Strong Authentication."),strcat("Changed ",ChangedValue," in Strong Authentication."))
| project Timestamp,Action,Actor,Target,ChangedValue,OldValue,NewValue

In Azure Log Analytics:

//Azure Log Analytics query to parse modified StrongAuthenticationUserDetails
AuditLogs
| where OperationName == "Update user" and TargetResources contains "StrongAuthenticationUserDetails"
| extend Target = TargetResources[0].userPrincipalName
| extend Actor = parse_json(InitiatedBy.user).userPrincipalName
| mv-expand ModifiedProperties = parse_json(TargetResources[0].modifiedProperties)
| where ModifiedProperties.displayName == "StrongAuthenticationUserDetails"
| extend NewValue = parse_json(replace_string(replace_string(tostring(ModifiedProperties.newValue),"[",""),"]",""))
| extend OldValue = parse_json(replace_string(replace_string(tostring(ModifiedProperties.oldValue),"[",""),"]",""))
| mv-expand NewValue
| mv-expand OldValue
| where (tostring(bag_keys(OldValue)) == tostring(bag_keys(NewValue))) or (isempty(OldValue) and tostring(NewValue) !contains ":null") or (isempty(NewValue) and tostring(OldValue) !contains ":null")
| extend ChangedValue = tostring(bag_keys(NewValue)[0])
| extend OldValue = tostring(parse_json(OldValue)[ChangedValue])
| extend NewValue = tostring(parse_json(NewValue)[ChangedValue])
| extend OldValue = case(ChangedValue == "PhoneNumber" or ChangedValue == "AlternativePhoneNumber", replace_strings(OldValue,dynamic([' ','(',')']), dynamic(['','',''])), OldValue )
| extend NewValue = case(ChangedValue == "PhoneNumber" or ChangedValue == "AlternativePhoneNumber", replace_strings(NewValue,dynamic([' ','(',')']), dynamic(['','',''])), NewValue )
| where tostring(OldValue) != tostring(NewValue)
| extend Action = case(isempty(OldValue), strcat("Added new ",ChangedValue, " to Strong Authentication."),isempty(NewValue),strcat("Removed existing ",ChangedValue, " from Strong Authentication."),strcat("Changed ",ChangedValue," in Strong Authentication."))
Authentication."),isempty(NewValue),strcat("Removed existing ",ChangedValue, " from Strong Authentication."),strcat("Changed ",ChangedValue," in Strong Authentication.")) | project TimeGenerated,Action,Actor,Target,ChangedValue,OldValue,NewValue After running the above queries, we get the output below. Here we can see phone numbers and emails being added/modified which may or may not be expected or desired. Figure 3: Example output from running the StrongAuthenticationUserDetails parsing query Further analysis: To hunt for anomalies, we can extend our query to look for MFA user details that have been added to multiple users by adding the following lines (for Log Analytics queries, replace Timestamp with TimeGenerated): | where isnotempty(NewValue) | summarize min(Timestamp),max(Timestamp),make_set(Target) by NewValue | extend UserCount = array_length(set_Target) | where UserCount > 1 The output looks like this: Here we can see that the phone number (+14424XXX657) has been added as MFA phone number to 3 different users between 2024-04-12 10:24:09 and 2024-04-17 11:24:09 and the email address (Evil@hellomail.net) has been added as MFA Email for 2 different users between 2024-04-12 10:24:09 and 2024-04-17 11:24:09. We can also monitor users who switch their phone number to a different country code than their previous one. We can achieve this by adding the following lines to the original KQL query, which checks if the first 3 characters of the new value are different from the old value (This may not give the desired results for US and Canada country codes): | where (ChangedValue == "PhoneNumber" or ChangedValue == "AlternativePhoneNumber") and isnotempty(OldValue) and isnotempty(NewValue) | where substring(OldValue,0,2) != substring(NewValue,0,2) 3. StrongAuthenticationAppDetail JSON structure for modified properties: "ModifiedProperties": [{ "Name": "StrongAuthenticationPhoneAppDetail", "NewValue": "[ { "DeviceName": "Samsung", "DeviceToken": "cH1BCUm_XXXXXXXXXXXXXX_F5VYZx3-xxPibuYVCL9xxxxdVR", "DeviceTag": "SoftwareTokenActivated", "PhoneAppVersion": "6.2401.0119", "OathTokenTimeDrift": 0, "DeviceId": "00000000-0000-0000-0000-000000000000", "Id": "384c3a59-XXXX-XXXX-XXXX-XXXXXXXX166d ", "TimeInterval": 0, "AuthenticationType": 3, "NotificationType": 4, "LastAuthenticatedTimestamp": "2024-XX-XXT09:20:16.4364195Z ", "AuthenticatorFlavor": null, "HashFunction": null, "TenantDeviceId": null, "SecuredPartitionId": 0, "SecuredKeyId": 0 }, { "DeviceName": "iPhone", "DeviceToken": "apns2-e947c2a3b41XXXXXXXXXXXXXXXXXXXXXXXXXXXXa1d3930", "DeviceTag": "SoftwareTokenActivated", "PhoneAppVersion": "6.8.7", "OathTokenTimeDrift": 0, "DeviceId": "00000000-0000-0000-0000-000000000000", "Id": "8da1XXXX-XXXX-XXXX-XXXX-XXXXXXa6028", "TimeInterval": 0, "AuthenticationType": 3, "NotificationType": 2, "LastAuthenticatedTimestamp": "2024-XX-XXT11:XX:XX.5184213Z", "AuthenticatorFlavor": null, "HashFunction": null, "TenantDeviceId": null, "SecuredPartitionId": 0, "SecuredKeyId": 0 }]", "OldValue": "[ { "DeviceName": "Samsung", "DeviceToken": " cH1BCUm_XXXXXXXXXXXXXX_F5VYZx3-xxPibuYVCL9xxxxdVR", "DeviceTag": "SoftwareTokenActivated", "PhoneAppVersion": "6.2401.0119", "OathTokenTimeDrift": 0, "DeviceId": "00000000-0000-0000-0000-000000000000", "Id": "384c3a59-XXXX-XXXX-XXXX-XXXXXXXX166d", "TimeInterval": 0, "AuthenticationType": 3, "NotificationType": 4, "LastAuthenticatedTimestamp": "2024-XX-XXT09:20:16.4364195Z", "AuthenticatorFlavor": null, "HashFunction": null, "TenantDeviceId": null, "SecuredPartitionId": 0, 
"SecuredKeyId": 0 }]" }] Just like with our other values, the goal is to contrast the values in OldValue and NewValue, this time paying attention to DeviceName and DeviceToken to see if the Authenticator App was set up on a different device or deleted for a current device for the user. From the JSON example above, we can infer that the user already had a device (Samsung) registered for Authenticator App and added another device (iPhone). In Advanced Hunting: //Advanced Hunting query to parse modified StrongAuthenticationPhoneAppDetail let DeviceChanges = CloudAppEvents | where ActionType == "Update user." and RawEventData contains "StrongAuthenticationPhoneAppDetail" | extend Target = tostring(RawEventData.ObjectId) | extend Actor = tostring(RawEventData.UserId) | mv-expand ModifiedProperties = parse_json(RawEventData.ModifiedProperties) | where ModifiedProperties.Name == "StrongAuthenticationPhoneAppDetail" | project Timestamp,Actor,Target,ModifiedProperties,RawEventData,ReportId; let OldValues= DeviceChanges | extend OldValue = parse_json(tostring(ModifiedProperties.OldValue)) | mv-apply OldValue on (extend Old_DeviceName=tostring(OldValue.DeviceName),Old_DeviceToken=tostring(OldValue.DeviceToken) | sort by tostring(Old_DeviceToken)); let NewValues= DeviceChanges | extend NewValue = parse_json(tostring(ModifiedProperties.NewValue)) | mv-apply NewValue on (extend New_DeviceName=tostring(NewValue.DeviceName),New_DeviceToken=tostring(NewValue.DeviceToken) | sort by tostring(New_DeviceToken)); let RemovedDevices = DeviceChanges | join kind=inner OldValues on ReportId | join kind=leftouter NewValues on ReportId,$left.Old_DeviceToken==$right.New_DeviceToken,$left.Old_DeviceName==$right.New_DeviceName | extend Action = strcat("Removed Authenticator App Device (Name: ", Old_DeviceName , ", Token: ", Old_DeviceToken , ") from Strong Authentication"); let AddedDevices = DeviceChanges | join kind=inner NewValues on ReportId | join kind=leftouter OldValues on ReportId,$left.New_DeviceToken==$right.Old_DeviceToken,$left.New_DeviceName==$right.Old_DeviceName | extend Action = strcat("Added Authenticator App Device (Name: ", New_DeviceName , ", Token: ", New_DeviceToken , ") to Strong Authentication"); union RemovedDevices,AddedDevices | where Old_DeviceToken != New_DeviceToken | project Timestamp,Action,Actor,Target,Old_DeviceName,Old_DeviceToken,New_DeviceName,New_DeviceToken | distinct * In Azure Log Analytics: //Azure Log Analytics query to parse modified StrongAuthenticationPhoneAppDetail let DeviceChanges = AuditLogs | where OperationName == "Update user" and TargetResources contains "StrongAuthenticationPhoneAppDetail" | extend Target = tostring(TargetResources[0].userPrincipalName) | extend Actor = case(isempty(parse_json(InitiatedBy.user).userPrincipalName),tostring(parse_json(InitiatedBy.app).displayName) ,tostring(parse_json(InitiatedBy.user).userPrincipalName)) | mvexpand ModifiedProperties = parse_json(TargetResources[0].modifiedProperties) | where ModifiedProperties.displayName == "StrongAuthenticationPhoneAppDetail" | project TimeGenerated,Actor,Target,TargetResources,ModifiedProperties,Id; let OldValues= DeviceChanges | extend OldValue = parse_json(tostring(ModifiedProperties.oldValue)) | mv-apply OldValue on (extend Old_DeviceName=tostring(OldValue.DeviceName),Old_DeviceToken=tostring(OldValue.DeviceToken) | sort by tostring(Old_DeviceToken)); let NewValues= DeviceChanges | extend NewValue = parse_json(tostring(ModifiedProperties.newValue)) | mv-apply NewValue on (extend 
let RemovedDevices = DeviceChanges
| join kind=inner OldValues on Id
| join kind=leftouter NewValues on Id,$left.Old_DeviceToken==$right.New_DeviceToken,$left.Old_DeviceName==$right.New_DeviceName
| extend Action = strcat("Removed Authenticator App Device (Name: ", Old_DeviceName , ", Token: ", Old_DeviceToken , ") from Strong Authentication");
let AddedDevices = DeviceChanges
| join kind=inner NewValues on Id
| join kind=leftouter OldValues on Id,$left.New_DeviceToken==$right.Old_DeviceToken,$left.New_DeviceName==$right.Old_DeviceName
| extend Action = strcat("Added Authenticator App Device (Name: ", New_DeviceName , ", Token: ", New_DeviceToken , ") to Strong Authentication");
union RemovedDevices,AddedDevices
| where Old_DeviceToken != New_DeviceToken
| project TimeGenerated,Action,Actor,Target,Old_DeviceName,Old_DeviceToken,New_DeviceName,New_DeviceToken
| distinct *

If we run the above query, we can find users who registered or removed the Authenticator app on or from a device, based on Device Name and Device Token.

Figure 4: Example output from running the StrongAuthenticationAppDetails parsing query

Further analysis:

Now that we know which devices were added for which users, we can hunt broadly for malicious activity. One example would be finding mobile devices that are being used by multiple users for the Authenticator app, using the Device Token field, which is unique per device. This can be achieved by appending the following lines to the query (for Log Analytics queries, replace Timestamp with TimeGenerated):

| where isnotempty(New_DeviceToken) and New_DeviceToken != "NO_DEVICE_TOKEN"
| summarize min(Timestamp),max(Timestamp),make_set(Target) by DeviceToken=New_DeviceToken, DeviceName=New_DeviceName
| extend UserCount = array_length(set_Target)
| where UserCount > 1

The output looks like this: it is evident that the Device Token (apns2-e947c2a3b41eae3fbd27aec9a1c2e62bxxxxxxxxxxxxx44ea5b9fee09a1d3930) has been registered for the Authenticator app by 3 different users between 2024-04-12 10:24:09 and 2024-04-17 11:24:09. This may indicate that a threat actor compromised these accounts and registered their device for MFA to establish persistence. Occasionally this is done legitimately by IT administrators; however, it must be said this is not a secure practice, unless both accounts belong to the same user.

In summary

With MFA now being widespread across the corporate world, threat actors are increasingly interested in manipulating MFA methods as part of their initial access strategy and are using token theft via attacker-in-the-middle scenarios, social engineering, or MFA prompt bombing to get their foot in the door. Following this initial access, Microsoft Incident Response invariably sees changes to the authentication methods on a compromised account. We trust this article has provided clarity on the architecture and various forms of MFA modifications in Microsoft Entra audit logs. These queries, whether they are utilized for threat detection or alert creation, can empower you to spot suspicious or undesirable activities relating to MFA in your organization, and take rapid action to assess and rectify possibly illegitimate scenarios.
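As a final step, it can help to put each suspicious MFA change into context by looking at how the affected user signed in around the same time; a method added minutes after a sign-in from an unfamiliar country or ISP is far more interesting than one added from a known corporate location. The sketch below is one way to do this in Log Analytics; it defines a simplified MfaChanges set inline (in practice you could substitute the richer output of the parsing queries above), assumes the SigninLogs table is available, and uses an arbitrary one-hour window.

// Sketch: join MFA-related "Update user" events with sign-ins for the same user within +/- 1 hour,
// to review the location, IP, and client used around the time of the change.
let MfaChanges = AuditLogs
| where OperationName == "Update user" and TargetResources contains "StrongAuthentication"
| extend Target = tostring(TargetResources[0].userPrincipalName)
| project ChangeTime = TimeGenerated, Target;
MfaChanges
| join kind=inner (
    SigninLogs
    | project SigninTime = TimeGenerated, UserPrincipalName, IPAddress, Location, AppDisplayName, ResultType
  ) on $left.Target == $right.UserPrincipalName
| where abs(datetime_diff("minute", ChangeTime, SigninTime)) <= 60
| project ChangeTime, Target, SigninTime, IPAddress, Location, AppDisplayName, ResultType
| order by ChangeTime desc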
Disclaimer: User Principal Names, GUIDs, email addresses, phone numbers, and device tokens in this article are for demonstration purposes and do not represent real data.

New Blog Post | Data Connectors for Azure Log Analytics and Data Explorer Now in Public Preview
Data Connectors for Azure Log Analytics and Data Explorer Now in Public Preview - Microsoft Community Hub

The Microsoft Defender EASM (Defender EASM) team is excited to share that new Data Connectors for Azure Log Analytics and Azure Data Explorer are now available in public preview. Defender EASM continuously discovers an incredible amount of up-to-the-minute attack surface data, so connecting and automating this data flow to all our customers' mission-critical systems that keep their organizations secure is essential. The new Data Connectors for Log Analytics and Azure Data Explorer can easily augment existing workflows by automating recurring exports of all asset inventory data, and of the set of potential security issues flagged as insights, to specified destinations, keeping other tools continually updated with the latest findings from Defender EASM.

Original Post: New Blog Post | Data Connectors for Azure Log Analytics and Data Explorer Now in Public Preview - Microsoft Community Hub

Fuzzy hashing logs to find malicious activity
During incident response, investigators aim to find all traces of threat actor activity in forensic logs. We introduce a new tool called JsonHash that investigators can use to discover groups of related activity. JsonHash is a fuzzy hash algorithm designed for finding similar log entries. JsonHash is explained through an example webshell threat hunting scenario.

Part 1: LockBit 2.0 ransomware bugs and database recovery attempts
Microsoft Incident Response (formerly DART/CRSP) researchers have uncovered “buggy code” and critical inconsistencies in the new version of the LockBit ransomware as a result of an engagement with a customer afflicted with LockBit 2.0. This post serves to illustrate the steps that Microsoft Incident Response researchers took to uncover this faulty crypto, and the efforts made to overcome and eventually restore, as much as was possible, the destroyed database files of this affected customer.