Cloud forensics: Why enabling Microsoft Azure Storage Account logs matters
Co-authors: Christoph Dreymann, Shiva P

Introduction

Azure Storage Accounts are frequently targeted by threat actors whose goal is to exfiltrate sensitive data to external infrastructure under their control. Because diagnostic logging is not always fully enabled by default, valuable evidence of their malicious actions may be lost. In this blog, we explore realistic attack scenarios and demonstrate the types of artifacts those activities generate. By properly enabling Microsoft Azure Storage Account logs, investigators gain a better understanding of the scope of an incident. That information can also guide remediation of the environment and help prevent data theft from occurring again.

Storage Account

A Storage Account provides scalable, secure, and highly available storage for storing and managing data objects. Because of the variety of sensitive data it can hold, it is a highly valued target for threat actors, who exploit misconfigurations, weak access controls, and leaked credentials to gain unauthorized access. Key risks include misuse of Shared Access Signature (SAS) tokens, which allows threat actors to access or modify exposed blob storage, and exposure of Storage Account keys, which can grant privileged access to the data plane. Investigating storage-related security incidents requires familiarity with Azure activity logs and diagnostic logs. The diagnostic log types for Storage Accounts are StorageBlob, StorageFile, StorageQueue, and StorageTable. These logs can help identify unusual access patterns, role changes, and unauthorized SAS token generation. This blog centers on the StorageBlob activity logs.

Storage Account logging

Logs for a Storage Account aren't enabled by default. These logs capture operations and requests, such as read, write, and delete actions on storage objects like blobs, queues, files, or tables.
NOTE: There are no license requirements to enable Storage Account logging, but Log Analytics charges based on ingestion and retention (Pricing - Azure Monitor | Microsoft Azure). More information on enabling logging for a Storage Account can be found here.

Notable fields

The log entries contain various fields that are useful not only during or after an incident, but also for general monitoring of a storage account during normal operations (for a full list, see what data is available in the Storage Logs). Once storage logging is enabled, one of the key tables within Log Analytics is StorageBlobLogs, which provides details about blob storage operations, including read, write, and delete actions. Key columns such as OperationName, AuthenticationType, StatusText, and UserAgentHeader capture essential information about these activities. The OperationName field identifies operations on a storage account, such as "PutBlob" for uploads or "DeleteBlob" and "DeleteFile" for deletions. The UserAgentHeader field offers valuable insight into the tools used to access blob storage. Access through the Azure portal is typically logged with a generic user agent indicating the application used, such as a web browser like Mozilla Firefox. In contrast, tools like AzCopy or Microsoft Azure Storage Explorer are explicitly identified in the logs. Analyzing the UserAgentHeader therefore provides crucial details about the access method, helping determine how the blob storage was accessed.

The following list includes additional investigation fields:

- TimeGenerated [UTC]: The date and time of the operation request.
- AccountName: Name of the Storage Account.
- OperationName: Name of the operation. A detailed list of StorageBlob operations can be found here.
- AuthenticationType: The type of authentication used to make the request.
- StatusCode: The HTTP status code for the request. If the request is interrupted, this value might be set to Unknown.
- StatusText: The status of the requested operation.
- Uri: Uniform resource identifier that is requested.
- CallerIpAddress: The IP address of the requester, including the port number.
- UserAgentHeader: The User-Agent header value.
- ObjectKey: The path of the object requested.
- RequesterUpn: User Principal Name of the requester.
- AuthenticationHash: Hash of the authentication token used during the request. A request authenticated with a SAS token includes a SAS signature specifying the hash derived from the signature part of the SAS token.

For a full list, see what data is available in the Storage Logs.

How a threat actor can access a Storage Account

Threat actors can access a Storage Account through Azure-assigned RBAC, a SAS token (including a user delegation SAS token), Azure Storage Account access keys, or anonymous access (if configured).

Storage Account Access Keys

Azure Storage Account access keys are shared secrets that enable full access to Azure storage resources. When a storage account is created, Azure generates two access keys; either can be used to authenticate to the storage account. These keys are permanent and do not have an expiration date. Storage Account Owners, as well as roles such as Contributor or any other role with the Microsoft.Storage/storageAccounts/listKeys/action permission, can retrieve and use these credentials to access the storage account. Access keys can be rotated or regenerated, but doing so unintentionally could disrupt applications or services that depend on the key for authentication. Additionally, this action invalidates any SAS tokens derived from that key, potentially revoking access to dependent workflows. Monitoring key rotations can help detect unexpected changes and mitigate disruptions.
Query: This query can help identify instances of account key rotations in the logs.

```kusto
AzureActivity
| where OperationNameValue has "MICROSOFT.STORAGE/STORAGEACCOUNTS/REGENERATEKEY/ACTION"
| where ActivityStatusValue has "Start"
| extend resource = parse_json(todynamic(Properties).resource)
| extend requestBody = parse_json(todynamic(Properties).requestbody)
| project TimeGenerated, OperationNameValue, resource, requestBody, Caller, CallerIpAddress
```

Shared Access Signature

SAS tokens offer a granular method for controlling access to Azure storage resources, specifying which actions are permitted on a resource and for how long. They can be generated for blobs, queues, tables, and file shares within a storage account, providing precise control over data access. A SAS token allows access via a signed URL. A Storage Account Owner can generate SAS tokens and connection strings for various resources within the storage account (e.g., blobs, containers, tables) without restriction. Additionally, roles with Microsoft.Storage/storageAccounts/listKeys/action rights can also generate SAS tokens. SAS tokens enable access to storage resources using tools such as Azure Storage Explorer, Azure CLI, or PowerShell. It is important to note that the logs do not indicate when a SAS token was generated [How a shared access signature works]. However, their usage can be inferred by tracking configuration changes that enable the "Allow storage account key access" option in environments where it had previously been disabled.
Figure 1: Configuration setting to enable account key access

Query: This query is designed to detect configuration changes made to enable access via storage account keys.

```kusto
AzureActivity
| where OperationNameValue has "MICROSOFT.STORAGE/STORAGEACCOUNTS/WRITE"
| where ActivityStatusValue has "Success"
| extend allowSharedKeyAccess = parse_json(tostring(parse_json(tostring(parse_json(Properties).responseBody)).properties)).allowSharedKeyAccess
| where allowSharedKeyAccess == "true"
```

User delegation Shared Access Signature

A user delegation SAS is a type of SAS token that is secured with Microsoft Entra ID credentials rather than the storage account key. For more details, see Authorize a user delegation SAS. To request a SAS token using the user delegation key, the identity must hold the Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey action (see Assign permissions with RBAC).

Azure Role-Based Access Control

A threat actor must identify a target identity that can assign roles or that already holds specific RBAC roles. To assign Azure RBAC roles, an identity must have Microsoft.Authorization/roleAssignments/write, which allows the assignment of roles necessary for accessing storage accounts. Some examples of roles that provide permissions to access data within a Storage Account (see Azure built-in roles for blob):

- Storage Account Contributor (Read, Write, Manage Access)
- Storage Blob Data Contributor (Read, Write)
- Storage Blob Data Owner (Read, Write, Manage Access)
- Storage Blob Data Reader (Read Only)

Additionally, to access blob data in the Azure portal, a user must also be assigned the Reader role (see Assign an Azure role). More information can be found in Azure built-in roles for Storage.
Anonymous Access

If the storage account setting 'Allow Blob anonymous access' is enabled and a container is created with anonymous read access, a threat actor could access the storage contents from the internet without any authorization.

Figure 2: Configuration settings for Blob anonymous access and container-level anonymous access.

Query: This query helps identify successful configuration changes that enable anonymous access.

```kusto
AzureActivity
| join kind=rightouter (
    AzureActivity
    | where TimeGenerated > ago(30d)
    | where OperationNameValue has "MICROSOFT.STORAGE/STORAGEACCOUNTS/WRITE"
    | where Properties has "allowBlobPublicAccess"
    | extend ProperTies = parse_json(Properties)
    | evaluate bag_unpack(ProperTies)
    | extend allowBlobPublicAccess = todynamic(requestbody).properties["allowBlobPublicAccess"]
    | where allowBlobPublicAccess has "true"
    | summarize by CorrelationId
) on CorrelationId
| extend ProperTies = parse_json(Properties)
| evaluate bag_unpack(ProperTies)
| extend allowBlobPublicAccess_req = todynamic(requestbody).properties["allowBlobPublicAccess"]
| extend allowBlobPublicAccess_res = todynamic(responseBody).properties["allowBlobPublicAccess"]
| extend allowBlobPublicAccess = case(
    allowBlobPublicAccess_req != "", allowBlobPublicAccess_req,
    allowBlobPublicAccess_res != "", allowBlobPublicAccess_res,
    "")
| project OperationNameValue, ActivityStatusValue, ResourceGroup, allowBlobPublicAccess, Caller, CallerIpAddress, ResourceProviderValue
```

Key notes regarding the authentication methods

When a user accesses Azure Blob Storage via the Azure portal, the interaction is authenticated using OAuth and authorized by the Azure RBAC roles configured for that user. In contrast, authentication with Azure Storage Explorer and AzCopy depends on the method used: if a user signs in interactively via the Azure portal or uses the device code flow, authentication appears as OAuth-based.
When using a SAS token, authentication is recorded as SAS-based for both tools. Access via Azure RBAC is logged in the Entra ID sign-in logs; however, activity related to SAS token usage does not appear in the sign-in logs, as a SAS token provides pre-authorized access. Log analysis should consider all operations in a session, since the initial actions can reveal the true authentication method: even OAuth-based access may show as SAS in the logs for subsequent operations.

The screenshot below illustrates three distinct cases, each showcasing a different pattern of authentication types used when accessing storage resources. In example '1', a SAS token is used consistently across various operations, making the SAS token the primary access method. Example '2' demonstrates a similar pattern, with OAuth (using an assigned Azure RBAC role) serving as the primary authentication method for all listed operations. Lastly, in example '3', operations start with OAuth authentication (using an assigned Azure RBAC role for authorization) and then switch to a SAS token, indicating mixed authentication types.

Figure 3: Different patterns of authentication types

Additionally, when certain applications such as Azure Storage Explorer are used with access key authentication, initial operations such as ListContainers and ListBlobs are logged with the authentication type "AccountKey". For subsequent actions like file uploads or downloads, however, the authentication type switches to SAS in the logs. To accurately determine whether an access key or a SAS token was used, it is important to correlate these actions with the earlier enumeration or sync activity in the logs.

With this understanding, let's proceed to analyze specific attack scenarios using Log Analytics tables such as StorageBlobLogs.

Attack scenario

This section examines the typical steps a threat actor might take when targeting a Storage Account.
We will specifically focus on the Azure Resource Manager layer, where Azure RBAC initially dictates what a threat actor can discover.

Enumeration

During enumeration, a threat actor's goal is to map out the available storage accounts. The range of this discovery is determined by the access privileges of the compromised identity. If that identity holds at least a minimal level of access (such as Reader) at the subscription level, it can view storage account resources without making any modifications. Importantly, this permission level does not grant access to the actual data stored within Azure Storage itself; a threat actor is limited to interacting only with the storage accounts that are visible to them. To access and download files from Blob Storage, a threat actor must know the names of containers (operation: ListContainers) and the files within those containers (operation: ListBlobs). All interactions with these storage elements are recorded in the StorageBlobLogs table. Containers and blobs can be listed by a threat actor with the appropriate access rights. If access is not authorized, attempts to do so will result in error codes shown in the StatusCode field. A high number of unauthorized attempts resulting in errors is a key indicator of suspicious activity or misconfiguration.
Figure 4: Failed attempts to list blobs/containers

Query: This query serves as a starting point for detecting a spike in unauthorized attempts to enumerate containers, blobs, files, or queues.

```kusto
union Storage*
| extend StatusCodeLong = tolong(StatusCode)
| where OperationName has_any ("ListBlobs", "ListContainers", "ListFiles", "ListQueues")
| summarize MinTime = min(TimeGenerated), MaxTime = max(TimeGenerated),
    OperationCount = count(),
    UnauthorizedAccess = countif(StatusCodeLong >= 400),
    OperationNames = make_set(OperationName),
    ErrorStatusCodes = make_set_if(StatusCode, StatusCodeLong >= 400),
    StorageAccountName = make_set(AccountName)
    by CallerIpAddress
| where UnauthorizedAccess > 0
```

Note: The UnauthorizedAccess filter must be adjusted based on your environment.

Data exfiltration

Let's use StorageBlobLogs to analyze two different attack scenarios.

Scenario 1: Compromised user has access to a storage account

In this scenario, the threat actor either compromises a user account with access to one or more storage accounts, or obtains a leaked access key or SAS token. With a compromised identity, the threat actor can enumerate all storage accounts the user has permissions to (as covered in Enumeration) or directly access a specific blob or container if the leaked key grants scoped access.

Account Access Keys (AccountKey) / SAS tokens

The threat actor might use the storage account's access keys or a SAS token retrieved through the compromised user account (provided it has the appropriate permissions), or the leaked credential itself may already be an access key or SAS token. Access keys grant complete control, while a SAS token can provide time-bound access, authorizing data transfers that enable the threat actor to view, upload, or download data at will.
Figure 5: Account key used to download/view data
Figure 6: SAS token used to download/view data

Query: This query helps identify cases where an account key or SAS token was used to download or view data from a storage account.

```kusto
StorageBlobLogs
| where OperationName has "GetBlob"
| where AuthenticationType in~ ("AccountKey", "SAS")
| where StatusText in~ ("Success", "AnonymousSuccess", "SASSuccess")
| project TimeGenerated, AccountName, OperationName, RequesterUpn, AuthenticationType, Uri, ObjectKey, StatusText, UserAgentHeader, CallerIpAddress, AuthenticationHash
```

User Delegation SAS

Available for Blob Storage only, a user delegation SAS functions similarly to a regular SAS token but is secured with Microsoft Entra ID credentials rather than the storage account key. The creation of a user delegation SAS is tracked as a corresponding "GetUserDelegationKey" entry in the StorageBlobLogs table.

Figure 7: User delegation key created

Query: This query helps identify the creation of a user delegation key. The RequesterUpn field provides the identity of the user account creating the key.

```kusto
StorageBlobLogs
| where OperationName has "GetUserDelegationKey"
| where StatusText in~ ("Success", "AnonymousSuccess", "SASSuccess")
| project TimeGenerated, AccountName, OperationName, RequesterUpn, Uri, CallerIpAddress, ObjectKey, AuthenticationType, StatusCode, StatusText
```

Figure 8: User delegation activity to download/read

Query: This query helps identify cases where a download/read action was initiated while authenticated via a user delegation key.

```kusto
StorageBlobLogs
| where AuthenticationType has "DelegationSas"
| where OperationName has "GetBlob"
| where StatusText in~ ("Success", "AnonymousSuccess", "SASSuccess")
| project Type, TimeGenerated, OperationName, AccountName, UserAgentHeader, ObjectKey, AuthenticationType, StatusCode, CallerIpAddress, Uri
```

The "GetUserDelegationKey" operation within StorageBlobLogs captures the identity responsible for generating a user delegation SAS token.
The AuthenticationHash field shows the key used to sign the SAS token. When the SAS token is used, any operation will include the same SAS signature hash, enabling you to correlate the various actions performed with this token even if the originating IP addresses differ.

Query: The following query extracts the SAS signature hash from the AuthenticationHash field. This helps track the token's usage, providing an audit trail to identify potentially malicious activity.

```kusto
StorageBlobLogs
| where AuthenticationType has "DelegationSas"
| extend SasSHASignature = extract(@"SasSignature\((.*?)\)", 1, AuthenticationHash)
| project Type, TimeGenerated, OperationName, AccountName, UserAgentHeader, ObjectKey, AuthenticationType, StatusCode, CallerIpAddress, SasSHASignature
```

In the next scenario, we examine how a threat actor already in control of a compromised identity uses Azure RBAC to assign permissions. With administrative privileges over a storage account, the threat actor can grant access to additional accounts and establish long-term access to the storage accounts.

Scenario 2: A user account controlled by the threat actor has elevated access to the Storage Account

An identity named Bob was identified as compromised due to a login from an unauthorized IP address. The investigation is triggered when the Azure sign-in logs reveal logins originating from an unexpected location. This account has Owner permissions on a resource group, allowing full access and role assignments in Azure RBAC. The threat actor grants access to another account they control, as shown in the AzureActivity logs.
The AzureActivity logs in the figure below show that the "Reader and Data Access" and "Storage Account Contributor" roles were assigned to Hacker2 for a Storage Account within Azure:

Figure 9: Assigning a role to a user

Query: This query helps identify whether a role has been assigned to a user.

```kusto
AzureActivity
| where Caller has "Bob"
| where OperationNameValue has "MICROSOFT.AUTHORIZATION/ROLEASSIGNMENTS/WRITE"
| extend RoleDefinitionIDProperties = parse_json(Properties)
| evaluate bag_unpack(RoleDefinitionIDProperties)
| extend RoleDefinitionIdExtracted = tostring(todynamic(requestbody).Properties.RoleDefinitionId)
| extend RoleDefinitionIdExtracted = extract(@"roleDefinitions/([a-f0-9-]+)", 1, RoleDefinitionIdExtracted)
| extend RequestedRole = case(
    RoleDefinitionIdExtracted == "ba92f5b4-2d11-453d-a403-e96b0029c9fe", "Storage Blob Data Contributor",
    RoleDefinitionIdExtracted == "b7e6dc6d-f1e8-4753-8033-0f276bb0955b", "Storage Blob Data Owner",
    RoleDefinitionIdExtracted == "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1", "Storage Blob Data Reader",
    RoleDefinitionIdExtracted == "db58b8e5-c6ad-4a2a-8342-4190687cbf4a", "Storage Blob Delegator",
    RoleDefinitionIdExtracted == "c12c1c16-33a1-487b-954d-41c89c60f349", "Reader and Data Access",
    RoleDefinitionIdExtracted == "17d1049b-9a84-46fb-8f53-869881c3d3ab", "Storage Account Contributor",
    "")
| extend roleAssignmentScope = tostring(todynamic(Authorization_d).evidence.roleAssignmentScope)
| extend AuthorizedFor = tostring(todynamic(requestbody).Properties.PrincipalId)
| extend AuthorizedType = tostring(todynamic(requestbody).Properties.PrincipalType)
| project TimeGenerated, RequestedRole, roleAssignmentScope, ActivityStatusValue, Caller, CallerIpAddress, CategoryValue, ResourceProviderValue, AuthorizedFor, AuthorizedType
```

Note: Refer to this resource for additional Azure built-in role IDs that can be used in this query.

The sign-in logs indicate that Hacker2 successfully accessed Azure from the same malicious IP address.
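That sign-in activity can be confirmed in the Microsoft Entra ID sign-in logs. The following is a minimal sketch (the IP placeholder and 30-day lookback are assumptions to adapt to your investigation):

```kusto
// Sketch: Entra ID sign-ins from a suspicious IP address.
// Replace <malicious-ip> with the IP observed in the AzureActivity/StorageBlobLogs entries.
SigninLogs
| where TimeGenerated > ago(30d)
| where IPAddress == "<malicious-ip>"
| project TimeGenerated, UserPrincipalName, AppDisplayName, ResultType, IPAddress, Location
```

Correlating the sign-in timestamps with the role-assignment events above helps establish whether the same session performed both the authentication and the privilege grant.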
We can examine StorageBlobLogs to determine whether the user accessed data in blob storage after the Storage Account roles were assigned to them. The activities within the blob storage show several entries attributed to the Hacker2 user, as shown in the figure below.

Figure 10: User access to blob storage

Query: This query helps identify access to blob storage from a malicious IP.

```kusto
StorageBlobLogs
| where TimeGenerated > ago(30d)
| where CallerIpAddress has {{IPv4}}
| extend ObjectName = ObjectKey
| project TimeGenerated, AccountName, OperationName, AuthenticationType, StatusCode, StatusText, RequesterUpn, CallerIpAddress, UserAgentHeader, ObjectName, Category
```

An analysis of StorageBlobLogs, as shown in the figure below, reveals that Hacker2 performed "StorageRead" operations on three files, indicating that data was accessed or downloaded from blob storage.

Figure 11: Blob Storage read/download activities

The UserAgentHeader suggests that the storage account was accessed through the Azure portal. Consequently, the SignInLogs can offer further detailed information.
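More broadly, the access tooling discussed earlier can be profiled in bulk by summarizing on UserAgentHeader and AuthenticationType. This is a starting-point sketch, not a definitive detection; the 7-day window is an assumption:

```kusto
// Sketch: profile which tools and authentication methods touched each
// storage account, using the UserAgentHeader and AuthenticationType fields.
StorageBlobLogs
| where TimeGenerated > ago(7d)
| summarize OperationCount = count(),
    Operations = make_set(OperationName),
    Callers = make_set(CallerIpAddress, 20)
    by AccountName, UserAgentHeader, AuthenticationType
| order by OperationCount desc
```

Rare or unexpected user agents (for example, AzCopy appearing in an environment that only uses the portal) stand out quickly in this view.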
Query: This query checks for read, write, or delete operations in blob storage and their access methods.

```kusto
StorageBlobLogs
| where TimeGenerated > ago(30d)
| where CallerIpAddress has {{IPv4}}
| where OperationName has_any ("PutBlob", "GetBlob", "DeleteBlob") and StatusText == "Success"
| extend Notes = case(
    OperationName == "PutBlob" and Category == "StorageWrite" and UserAgentHeader has "Microsoft Azure Storage Explorer", "Blob was written through Azure Storage Explorer",
    OperationName == "PutBlob" and Category == "StorageWrite" and UserAgentHeader has "AzCopy", "Blob was written through AzCopy command",
    OperationName == "PutBlob" and Category == "StorageWrite" and not(UserAgentHeader has_any ("AzCopy", "Microsoft Azure Storage Explorer")), "Blob was written through Azure portal",
    OperationName == "GetBlob" and Category == "StorageRead" and UserAgentHeader has "Microsoft Azure Storage Explorer", "Blob was read/downloaded through Azure Storage Explorer",
    OperationName == "GetBlob" and Category == "StorageRead" and UserAgentHeader has "AzCopy", "Blob was read/downloaded through AzCopy command",
    OperationName == "GetBlob" and Category == "StorageRead" and not(UserAgentHeader has_any ("AzCopy", "Microsoft Azure Storage Explorer")), "Blob was read/downloaded through Azure portal",
    OperationName == "DeleteBlob" and Category == "StorageDelete" and UserAgentHeader has "Microsoft Azure Storage Explorer", "Blob was deleted through Azure Storage Explorer",
    OperationName == "DeleteBlob" and Category == "StorageDelete" and UserAgentHeader has "AzCopy", "Blob was deleted through AzCopy command",
    OperationName == "DeleteBlob" and Category == "StorageDelete" and not(UserAgentHeader has_any ("AzCopy", "Microsoft Azure Storage Explorer")), "Blob was deleted through Azure portal",
    "")
| project TimeGenerated, AccountName, OperationName, AuthenticationType, StatusCode, CallerIpAddress, ObjectName = ObjectKey, Category, RequesterUpn, Notes
```

The log analysis confirms that the threat actor successfully
extracted data from a storage account.

Storage Account summary

Detecting misuse within a Storage Account can be challenging, as routine operations may hide malicious activity. Enabling logging is therefore essential for investigation, helping track access, especially when compromised identities or misused SAS tokens or keys are involved. Unusual changes in user permissions and irregularities in role assignments, which are documented in the Azure Activity logs, can signal unauthorized access, while Microsoft Entra ID sign-in logs can help identify compromised UPNs and suspicious IP addresses that tie into OAuth-based storage account access. By thoroughly analyzing Storage Account logs, which detail operation types and access methods, investigators can identify abuse and determine the scope of compromise. That not only helps when remediating the environment but can also provide guidance on preventing unauthorized data theft from occurring again.

Cloud forensics: Prepare for the worst - implement security baselines for forensic readiness in Azure
Forensic readiness in the cloud

Forensic readiness in the cloud refers to an organization's ability to collect, preserve, and analyze digital evidence in preparation for security incidents. It is increasingly important as more organizations migrate workloads to the cloud. Achieving an appropriate security posture ensures that organizations are adequately equipped for forensic investigations. This requires more than the mere presence of logs; logging and monitoring configurations must be thoughtfully scoped and proactively enabled. Additionally, the adoption of cloud environments presents unique challenges for forensic investigations. First, capturing the right evidence can be difficult due to the dynamic nature of cloud data. Second, under the shared responsibility model, organizations must work closely with their cloud providers to ensure preparedness for forensic investigations. Azure's multi-tenant architecture adds another layer of complexity, as data from multiple customers may reside on the same physical hardware; therefore, strict access controls and robust logging are essential. To maintain forensic readiness, organizations must implement comprehensive monitoring and logging across all cloud services so that evidence is available when needed.

Preparing your Azure environment for forensic readiness

When the Azure environment is set up correctly, with accurate logging in place, it becomes easier to quickly identify the scope of a security breach, trace the attacker's actions, and identify the tactics, techniques, and procedures (TTPs) employed by a threat actor. Through these measures, organizations can ensure that the data required to support forensic investigations is available, thereby ensuring compliance with auditing requirements, improving security, and resolving security incidents efficiently.
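A quick way to verify that logging is actually scoped and flowing as intended is to check what data types the Log Analytics workspace is ingesting. The sketch below uses the standard Usage table; the 7-day window is an illustrative assumption:

```kusto
// Sketch: forensic-readiness check of the workspace. Tables you expect
// (e.g., StorageBlobLogs, SigninLogs, AzureActivity) should appear here;
// a missing table usually means its diagnostic setting was never enabled.
Usage
| where TimeGenerated > ago(7d)
| summarize TotalIngestedMB = sum(Quantity) by DataType
| order by TotalIngestedMB desc
```

Running such a check periodically, rather than during an incident, is exactly what forensic readiness means in practice.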
With that granularity of log data in the environment, organizations are better equipped to respond to an incident when it occurs.

Case study: Forensic investigation disrupted due to lack of forensic readiness in Azure

In a recent cybersecurity incident, a large company utilizing Azure experienced a major setback in its forensic investigation. This case study outlines the critical steps and logs that were missed, leading to a disrupted investigation.

Step 1: Initial detection of the compromise

The organization's Security Operations Center (SOC) identified anomalous outbound traffic originating from a compromised Azure virtual machine (VM) named "THA-VM." Unfortunately, the absence of diagnostic settings significantly hindered the investigation. Without access to guest OS logs and data plane logs, the team was unable to gain deeper visibility into the threat actor's activities. The lack of critical telemetry, such as Windows event logs, Syslog, Network Security Group (NSG) flow logs, and resource-specific data plane access logs, posed a major challenge in assessing the full scope of the compromise. Had these diagnostic settings been properly configured, the investigation team would have been better equipped to uncover key indicators of compromise, including local account creation, process execution, command-and-control (C2) communications, and potential lateral movement.

Figure 1: Diagnostic settings not configured on the virtual machine resource

Step 2: Evidence collection challenges

During the forensic analysis of the compromised virtual machine, the team attempted to capture a snapshot of the OS disk but discovered that restore points had not been configured and no backups were available, severely limiting their ability to preserve and examine critical disk-based artifacts such as malicious binaries, tampered system files, or unauthorized persistence mechanisms.
Restore points, which are not enabled by default on Azure virtual machines, allow the creation of application-consistent or crash-consistent snapshots of all managed disks, including the OS disk. These snapshots are stored in a restore point collection and serve as a vital tool in forensic investigations, enabling analysts to preserve the exact state of a VM at a specific point in time and maintain evidence integrity throughout the investigation.

Step 3: Analysis of the storage blob

The team then turned to storage blobs after identifying unusual files that appeared to be associated with threat actor tool staging, such as scanning utilities and credential dumping tools. However, because diagnostic settings for the storage account had not been enabled, the investigators were unable to access essential data plane logs. These logs could have revealed who uploaded or accessed the blobs and when those actions occurred. Since storage diagnostics are not enabled by default in Azure, this oversight significantly limited visibility into attacker behavior and impeded efforts to reconstruct the timeline and scope of malicious activity, an essential component of any effective forensic investigation.

Step 4: Slow response and escalation

In the absence of tailored logging and monitoring configurations, response timelines were delayed, and the full incident response process was not initiated quickly enough to minimize the impact.

Step 5: Recovery and lessons learned

Despite the delays, the team pieced together elements of the story from the data they had available, but could not determine the initial access vector, largely because the necessary diagnostic data wasn't available.
This absence of forensic readiness highlights the importance of configuring diagnostic settings, enabling snapshots, and using centralized logging solutions like Microsoft Sentinel, which brings all this telemetry into a single pane of glass, providing real-time visibility and historical context in one place. This unified view enables faster incident detection, investigation, and response. Its built-in analytics and AI capabilities help surface anomalies that might otherwise go unnoticed, while retaining a searchable history of events for post-incident forensics.

Recommended practices for forensic readiness in Azure

The list below outlines key recommendations for deploying and administering workloads securely and effectively in Azure. Each recommendation is categorized by focus area and includes a best practice description, a specific action to take, and a reference to supporting documentation or resources to assist with implementation.

Identity and Access
- Enable MFA for all users: [ ] Enable Multi-Factor Authentication (MFA) for all Azure AD users. (Resource: MFA in Azure AD)
- Monitor access reviews and role assignments: [ ] Regularly review identities (SPNs, managed identities, users), role assignments, and permissions for anomalies. (Resource: Azure Identity Protection)
- Implement RBAC with least privilege: [ ] Use Role-Based Access Control (RBAC) and assign least-privilege roles to users. (Resource: Azure RBAC Overview)
- Configure PIM for privileged roles: [ ] Configure Privileged Identity Management (PIM) for all privileged roles; require approval for high-privilege roles. (Resource: PIM in Azure AD)
- Enable sign-in and audit logs: [ ] Ensure all sign-in activities and audit logs are enabled and logging in Azure AD. (Resource: Azure Entra (AD) Sign-In Logs)
- Conditional Access policies: [ ] Set Conditional Access policies to enforce MFA or access restrictions based on conditions like risk or location, protecting high-risk resources from unauthorized access. (Resource: Conditional Access in Azure Entra (AD))

Logging and Monitoring
- Enable Azure Monitor: [ ] Enable Azure Monitor to collect telemetry data from resources. (Resource: Azure Monitor Overview)
- Activate Microsoft Defender for Cloud: [ ] Activate and configure Microsoft Defender for Cloud for enhanced security monitoring. (Resource: Microsoft Defender for Cloud)
- Enable diagnostic logging for VMs and applications: [ ] Configure diagnostic logging for Azure VMs and other critical resources. (Resource: Azure Diagnostics Logging)
- Centralize logs in a Log Analytics workspace: [ ] Consolidate all logs into a Log Analytics workspace for centralized querying. (Resource: Log Analytics Workspace)
- Set audit log retention to 365+ days: [ ] Ensure audit logs are retained for a minimum of 365 days to meet forensic needs. (Resource: Audit Log Retention)
- Enable advanced threat detection: [ ] Enable Microsoft Defender for Cloud and Sentinel to detect anomalous behavior and security threats in real time. (Resource: Azure Sentinel Overview)

Data Protection
- Ensure data is encrypted at rest and in transit: [ ] Enable encryption for data at rest and in transit for all Azure resources. (Resource: Azure Encryption)
- Use Azure Key Vault for key management: [ ] Store and manage encryption keys, certificates, and secrets in Azure Key Vault. (Resource: Azure Key Vault)
- Rotate encryption keys regularly: [ ] Regularly rotate encryption keys, certificates, and secrets in Azure Key Vault. (Resource: Manage Keys in Key Vault)
- Configure immutable backups: [ ] Set up immutable backups for critical data to prevent tampering. (Resource: Immutable Blob Storage)
- Implement File Integrity Monitoring: [ ] Enable File Integrity Monitoring in Azure Defender for Storage to detect unauthorized modifications. (Resource: Azure Defender for Storage)

Network Security
- Configure Network Security Groups (NSGs): [ ] Set up NSGs to restrict inbound/outbound traffic for VMs and services. (Resource: Network Security Groups)
- Enable DDoS Protection: [ ] Implement DDoS Protection for critical resources to safeguard against distributed denial-of-service attacks. (Resource: DDoS Protection in Azure)
- Use VPNs or ExpressRoute for secure connectivity: [ ] Establish VPNs or ExpressRoute for secure, private network connectivity. (Resource: Azure VPN Gateway)

Incident Response
- Set up alerts for suspicious activities: [ ] Configure alerts for suspicious activities such as failed login attempts or privilege escalation. (Resource: Create Alerts in Azure Sentinel)
- Automate incident response: [ ] Automate incident response workflows using Azure Automation or Logic Apps. (Resource: Azure Logic Apps)
- Integrate threat intelligence with Sentinel: [ ] Integrate external threat intelligence feeds into Microsoft Sentinel to enrich detection capabilities. (Resource: Threat Intelligence in Azure Sentinel)
- Run advanced KQL queries for incident investigations: [ ] Use Kusto Query Language (KQL) queries in Sentinel to investigate and correlate incidents. (Resource: KQL Queries in Sentinel)
- Establish an incident response plan: [ ] Document and formalize your organization's incident response plan with clear steps and procedures. (Resource: Incident Response in Azure)

Policies and Processes
- Define a forensic readiness policy: [ ] Establish and document a forensic readiness policy that outlines roles, responsibilities, and procedures. (Resource: Azure Security Policies)
- Conduct administrator training: [ ] Provide regular training for administrators on security best practices, forensic procedures, and incident response. (Resource: Azure Security Training)

By using Microsoft's tools and implementing these recommended best practices, organizations can improve their forensic readiness and investigation capabilities in Azure. This approach not only helps in responding effectively to incidents but also enhances an organization's overall security posture. By staying ahead of potential threats and maintaining forensic readiness, you'll be better equipped to protect your organization and meet regulatory requirements.
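To make the "run advanced KQL queries" and "alerts for failed login attempts" recommendations concrete, a hunting sketch like the following surfaces bursts of failed sign-ins per account and source IP in Sentinel's SigninLogs table. The 7-day window, the 20-failure threshold, and the 1-hour bin are illustrative assumptions to tune per environment, not prescribed values.

```kusto
// Illustrative hunting sketch: bursts of failed sign-ins per account and IP.
// Threshold (20) and 1-hour bin are example values; tune per environment.
SigninLogs
| where TimeGenerated > ago(7d)
| where ResultType != "0"   // non-zero result codes indicate sign-in failures
| summarize FailedAttempts = count(), FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated)
    by UserPrincipalName, IPAddress, bin(TimeGenerated, 1h)
| where FailedAttempts > 20
| order by FailedAttempts desc
```

The same logic can back a scheduled analytics rule so that qualifying bursts raise an alert rather than waiting for a manual hunt.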
Conclusion

Forensic readiness in Azure is not a one-time effort; it is an ongoing commitment that involves proactive planning, precise configuration, and strong coordination across security, operations, and governance teams. Key practices such as enabling diagnostic logging, centralizing telemetry, enforcing least-privilege access, and developing cloud-tailored incident response playbooks are essential. Together, these measures improve your ability to detect, investigate, and respond to security incidents in a timely and effective manner.

Post-breach browser abuse: a new frontier for threat actors
Co-authors - Raae Wolfram | Sam Gardener

Once an attacker has gained access to a system, the browser becomes a rich source of credentials, a platform for persistence, and a stealthy channel for data exfiltration. This blog outlines key abuse techniques and provides actionable detection strategies using Microsoft Defender for Endpoint and Microsoft Defender XDR.

Why browsers matter after the breach

Post-compromise, browsers offer attackers:
- Access to credentials (cookies, tokens, autofill data)
- Control over peripherals (camera, microphone, location)
- A trusted execution environment for evasion
- A platform for persistence via extensions or debugging interfaces

These capabilities make browsers a high-value target even after initial access has been achieved.

Key abuse techniques and detection strategies

1. Credential theft via memory scraping

Attackers can extract sensitive data directly from browser memory using tools like Mimikittenz. Security team members can proactively hunt for threats with advanced hunting in Microsoft Defender.

Advanced hunting detection query:

let PROCESS_VM_READ=0x0010;
DeviceEvents
| where ActionType == "OpenProcessApiCall" and FileName in~ ("chrome.exe", "msedge.exe", "firefox.exe", "brave.exe", "opera.exe")
| project FileName, InitiatingProcessFileName, DesiredAccess=tolong(parse_json(AdditionalFields).DesiredAccess)
| where binary_and(DesiredAccess, PROCESS_VM_READ) != 0

Learn more about hunting queries: Overview - Advanced hunting - Microsoft Defender XDR | Microsoft Learn

2. TLS key logging for passive credential capture

Setting the SSLKEYLOGFILE environment variable allows attackers to dump TLS pre-master secrets, enabling decryption of HTTPS traffic.

Detection query:

DeviceRegistryEvents
| where RegistryKey =~ @"SYSTEM\CurrentControlSet\Control\Session Manager\Environment" and RegistryValueName =~ "SSLKEYLOGFILE"

3. Remote debugging port abuse

Chromium-based browsers support remote debugging via WebSocket.
Attackers can launch browsers with flags like --remote-debugging-port and control them programmatically.

Detection queries:

DeviceProcessEvents
| where FileName in~ ("chrome.exe", "msedge.exe", "brave.exe", "opera.exe") and ProcessCommandLine contains "--remote"

DeviceNetworkEvents
| where RemotePort in (9222, 9223, 9229)
| where RemoteIP == "127.0.0.1"
| where InitiatingProcessFileName !in~ ("chrome.exe", "msedge.exe", "brave.exe", "opera.exe")

DeviceProcessEvents
| where FileName has_any ("chrome", "msedge", "brave", "opera") and ProcessCommandLine contains "--remote"

4. Persistence via malicious extensions

Attackers can sideload or auto-update malicious extensions using enterprise policies or developer mode.

Detection queries:

DeviceProcessEvents
| where ProcessCommandLine has "--load-extension"
| where FileName in~ ("chrome.exe", "msedge.exe")

DeviceRegistryEvents
| where RegistryKey has "ExtensionInstallForcelist"
| where RegistryValueData has_any ("http", "crx")

5. Anomalous child process spawning

Unexpected child processes from browsers may indicate injection, persistence, or evasion.

Detection query:

DeviceProcessEvents
| where InitiatingProcessFileName in~ ("chrome.exe", "msedge.exe", "firefox.exe", "brave.exe", "opera.exe")
| where FileName !in~ ("chrome.exe", "msedge.exe", "firefox.exe")

Recommendations for defenders:
- Monitor for debugging flags in browser launch commands.
- Alert on unexpected registry or file modifications related to extensions.
- Track environment variable usage that affects browser behavior.
- Investigate RWX memory pages in browser processes.
- Use Defender for Endpoint to correlate these signals with broader attack chains.

Conclusion

Post-breach browser abuse is a growing concern that blends stealth, persistence, and credential access into a single threat vector. By understanding these techniques and implementing the detection strategies outlined above, defenders can close a critical visibility gap and better protect their environments.
See what our experts have to say. Watch the recorded webinar and download the presentation to learn more: Post-Breach Browsers: The Hidden Threat You're Overlooking.

Elevate your protection with expanded Microsoft Defender Experts coverage
Co-authors: Henry Yan, Sr. Product Marketing Manager and Sylvie Liu, Principal Product Manager

Security Operations Centers (SOCs) are under extreme pressure due to a rapidly evolving threat landscape, an increase in volume and frequency of attacks driven by AI, and a widening skills gap. To address these challenges, organizations across industries are relying on Microsoft Defender Experts for XDR and Microsoft Defender Experts for Hunting to bolster their SOC and stay ahead of emerging threats. We are committed to continuously enhancing Microsoft Defender Experts services to help our customers safeguard their organizations and focus on what matters most.

We are excited to announce the general availability of expanded Defender Experts coverage. With this update, Defender Experts for XDR and Defender Experts for Hunting now deliver around-the-clock protection and proactive threat hunting for your cloud workloads, starting with hybrid and multicloud servers in Microsoft Defender for Cloud. Additionally, third-party network signals from Palo Alto Networks, Zscaler, and Fortinet can now be used for incident enrichment in Defender Experts for XDR, enabling faster and more accurate detection and response.

Extend 24/7, expert-led defense and threat hunting to your hybrid and multicloud servers

As cloud adoption accelerates, the sophistication and frequency of cloud attacks are on the rise. According to IDC, in 2024, organizations experienced an average of more than nine cloud security incidents, with 89% reporting an increase year over year. Furthermore, cloud security is the leading skills gap, with almost 40% of respondents in the O'Reilly 2024 State of Security Survey identifying it as the top area in need of skilled professionals. Virtual machines (VMs) are the backbone of cloud infrastructure, used to run critical applications with sensitive data while offering flexibility, efficiency, and scalability.
This makes them attractive targets for attackers as compromised VMs can be used to potentially carry out malicious activities such as data exfiltration, lateral movement, and resource exploitation. Defender Experts for XDR now delivers 24/7, expert-led managed extended detection and response (MXDR) for your hybrid and multicloud servers in Defender for Cloud. Our security analysts will investigate, triage, and respond to alerts on your on-premises and cloud VMs across Microsoft Azure, Amazon Web Services, and Google Cloud Platform. With Defender Experts for Hunting, which is included in Defender Experts for XDR and also available as a standalone service, our expert threat hunters will now be able to hunt across hybrid and multicloud servers in addition to endpoints, identities, emails, and cloud apps, reducing blind spots and uncovering emerging cloud threats.

Figure 1: Incidents from servers in Defender for Cloud investigated by Defender Experts

Incident enrichment for improved detection accuracy and faster response

By enriching Defender incidents with third-party network signals from Palo Alto Networks (PAN-OS Firewall), Zscaler (Zscaler Internet Access and Zscaler Private Access), and Fortinet (FortiGate Next-Generation Firewall), our security analysts gain deeper insights into attack paths. The additional context helps Defender Experts for XDR identify patterns and connections across domains, enabling more accurate detection and faster response to threats.

Figure 2: Third-party enrichment data in Defender Experts for XDR report

In this hypothetical scenario, we explore how incident enrichment with third-party network signals helped Defender Experts for XDR uncover lateral movement and potential data exfiltration attempts.
Detection: Microsoft Defender for Identity flagged an "Atypical Travel" alert for User A, showing sign-ins from India and Germany within a short timeframe using different devices and IPs, suggesting possible credential compromise or session hijacking. However, initial identity and cloud reviews showed no signs of malicious activity.

Correlation: From incident enrichment with third-party network signals, Palo Alto firewall logs revealed attempts to access unauthorized remote tools, while Zscaler proxy data showed encrypted traffic to an unprotected legacy SharePoint server.

Investigation: Our security analysts uncovered that the attacker authenticated from a managed mobile device in Germany. Due to token reuse and a misconfigured Mobile Device Management profile, the device passed posture checks and bypassed Conditional Access, enabling access to internal SharePoint. Insights from third-party network signals helped Defender Experts for XDR confirm lateral movement and potential data exfiltration.

Response: Once malicious access was confirmed, Defender Experts for XDR initiated a coordinated response, revoking active tokens, isolating affected devices, and hardening mobile policies to enforce Conditional Access.

Flexible, cost-effective pricing

Defender Experts coverage of servers in Defender for Cloud is priced per server per month, with charges based on the total number of server hours each month. You have the flexibility to scale your servers as needed while ensuring cost effectiveness as you only pay for Defender Experts coverage based on resources you use. For example, if you have a total of 4000 hours across all servers protected by Defender for Cloud in June (June has a total of 720 hours), you will be charged for a total of 5.56 servers in June (4000/720 = 5.56). There is no additional charge for third-party network signal enrichment beyond the data ingestion charge through Microsoft Sentinel.
Please contact your Microsoft account representative for more information on pricing.

Get started today

Defender Experts coverage of servers in Defender for Cloud will be available as an add-on to Defender Experts for XDR and Defender Experts for Hunting. To enable coverage, you must have the following:
- Defender Experts for XDR or Defender Experts for Hunting license
- Defender for Servers Plan 1 or Plan 2 in Defender for Cloud

You only need a minimum of 1 Defender Experts for XDR or Defender Experts for Hunting license to enable coverage of all your servers in Defender for Cloud. If you are interested in purchasing Defender Experts for XDR or the add-on for Defender Experts coverage of servers in Defender for Cloud, please complete this interest form.

Third-party network signals for enrichment are available only for Defender Experts for XDR customers. To enable third-party network signals for enrichment, you must have the following:
- Microsoft Sentinel instance deployed
- Microsoft Sentinel onboarded to Microsoft Defender portal
- At least one of the supported network signals ingested through Sentinel built-in connectors:
  - Palo Alto Networks (PAN-OS Firewall)
  - Zscaler (Zscaler Internet Access and Zscaler Private Access)
  - Fortinet (FortiGate Next-Generation Firewall)

If you are an existing Defender Experts for XDR customer and are interested in enabling third-party network signals for enrichment, please reach out to your Service Delivery Manager.

Learn more
- Technical Documentation
- Microsoft Defender Experts for XDR
- Microsoft Defender Experts for Hunting
- Third-party network signals for enrichment
- Plan Defender for Servers deployment
- Defender Experts Ninja Training

Welcome to the Microsoft Defender Experts Ninja Hub!
Updated August 11, 2025

Microsoft Defender Experts for XDR

Microsoft Defender Experts for XDR is a managed extended detection and response (MXDR) service that triages, investigates, and responds to incidents for you to help stop cyberattackers and prevent future compromise. Defender Experts for XDR delivers human expertise to security teams quickly to help address coverage gaps and augment their overall security operations. The documentation links below provide more information on the service, requirements, and FAQs:
- What is Microsoft Defender Experts for XDR offering | Microsoft Learn
- Before you begin using Defender Experts for XDR | Microsoft Learn
- Get started with Microsoft Defender Experts for XDR | Microsoft Learn
- How to use the Microsoft Defender Experts for XDR service | Microsoft Learn
- Communicating with Microsoft Defender Experts | Microsoft Learn
- How to search the audit logs for actions performed by Defender Experts | Microsoft Learn
- Additional information related to Defender Experts for XDR | Microsoft Learn
- FAQs related to Microsoft Defender Experts for XDR | Microsoft Learn
- What is third-party network signal enrichment in Microsoft Defender Experts for XDR? | Microsoft Learn

Microsoft Defender Experts for Hunting

Microsoft Defender Experts for Hunting, which is included with Defender Experts for XDR or offered separately, proactively looks for threats 24/7/365 using unparalleled visibility of cross-domain telemetry and leading threat intelligence to extend your team's threat hunting capabilities and improve overall SOC response. The documentation links below provide more information on the service, requirements, and reporting:
- What is Microsoft Defender Experts for Hunting offering | Microsoft Learn
- Key infrastructure requirements for Microsoft Defender Experts for Hunting | Microsoft Learn
- How to subscribe to Microsoft Defender Experts for Hunting | Microsoft Learn
- Understand the Defender Experts for Hunting report in Microsoft Defender XDR | Microsoft Learn

Ninja Show episodes featuring Defender Experts
- Season 7, Episode 8: Day in the life of a SOC analyst
- Season 5, Episode 5: Improve your security posture with Microsoft Defender Experts for XDR
- Season 3, Episode 4: Defender Experts for Hunting Overview

On-demand event sessions and webinars featuring Defender Experts
- RSAC 2025 (NEW): Bolster your SOC with Microsoft's Managed Extended Detection and Response (MXDR)
- Webinar: MDR and Generative AI: Better Together - A conversation with guest speaker Jeff Pollard

Defender Experts videos
- Explainer Video: Microsoft Defender Experts for XDR
- Explainer Video (NEW): Microsoft Defender Experts for Hunting
- Video: Adversary in the Middle Hunting Story
- Video: Get started with onboarding | Microsoft Defender Experts for XDR
- Video: Get started with managed response | Microsoft Defender Experts for XDR
- Video: Get started with reporting | Microsoft Defender Experts for XDR

Deep dives from the Microsoft Security blog featuring Defender Experts
- Microsoft Copilot for Security provides immediate impact for the Microsoft Defender Experts team
- Detecting and mitigating a multi-stage AiTM phishing and BEC campaign
- Looking for the 'Sliver' lining: Hunting for emerging command-and-control frameworks
- One way Microsoft Defender Experts for Hunting prioritizes customer defense
- Phish, Click, Breach: Hunting for a Sophisticated Cyber Attack

Podcasts
- Microsoft Security Insights Show Episode 218: Michael Melone
- Microsoft Security Insights Show Episode 198: Raae Wolfram
- Microsoft Security Insights Show Episode 181: Brian Hooper and Phoebe Rogers: A day in the life of a Defender Experts for XDR analyst
- Microsoft Security Insights Show Episode 168: Steve Lee, Defender Experts

To learn more about Defender Experts, click here.

How Microsoft Defender Experts uses AI to cut through the noise
Microsoft Defender Experts manages and investigates incidents for some of the world's largest organizations. We understand the challenges facing our customers and are always looking for ways to respond quicker and scale our services to meet their needs.

Teaching AI to think like a security expert

We're leveraging AI to help Defender Experts expand our services and respond even faster to threats facing our customers. AI-based incident classification allows us to filter noise up front without compromising on detecting real threats. This AI-based capability is trained by security experts, built for precision, and designed to scale and act at speed. Our approach doesn't just rely on static rules or traditional filtering. Instead, our AI is powered by insights from hundreds of thousands of real investigations conducted by Defender Experts security analysts. These investigations form a goldmine of expert knowledge—how analysts think, what signals they trust, and how they separate benign and false positives from true threats.

We use historical intelligence to evaluate each new incident. AI-based incident classification looks at various signals, such as evidence, tenant details, context from IOCs, and TI information. It assigns a similarity score based on those signals. By using a similarity algorithm, the AI-based system compares each new incident to known outcomes from the past—deciding whether it closely resembles true positives, false positives, or is benign. At a certain threshold, it confidently assigns the grade. If the pattern matches past false positives, the system de-grades the incident as noise. If the pattern looks similar to a known higher-risk threat, it escalates it faster. This helps us focus first on what matters most—true, actionable threats, which results in quicker response times for our customers.

Human-centric and safe

We know that trust is everything in cybersecurity.
So even though AI helps us filter noise, we've built guardrails to make sure no real threats are missed:
- Tiered decisioning: Incidents that are classified as noise are reviewed by Defender Expert analysts to ensure they match the classification and other criteria for noise.
- Feedback loops: For continuous learning, anything classified as noise is sent to an analyst for validation so that there are no accidental misses of true threats. The feedback from them continuously improves the system.
- Transparency: Classification decisions are visible, helping analysts understand why something is marked as noise or not.

This approach strikes the right balance. AI does the heavy lifting up front, and our human security experts remain firmly in control of what is investigated.

Quicker response for our customers

AI-based incident classification in Defender Experts:
- 50% of noise is automatically triaged by AI-based incident classification with 100% precision
- Our experts respond faster to meaningful threats to our customer's environment.

"We no longer waste time chasing dead ends. The system helps us focus on what truly matters and our customers appreciate how quickly we can respond." — Defender Experts Tier 2 Analyst

What's next?

We're continuing to refine this system with more granular risk scoring per entity, deeper tenant-based similarity correlation, IOC-based weightage, and additional real-time feedback from Defender Experts analysts.

Final thoughts

AI alone isn't the answer—but AI guided by experts is a force multiplier. With AI-based incident classification, Defender Experts is showing what the future of SOCs can look like: faster, smarter, safer, and scalable. AI-based classification has helped reduce 50% of the noise from the analyst queue with 100% accuracy, saving analyst time so they can focus on what matters most. If you're a Defender Experts customer, you're already seeing the benefit of quicker response times to true security threats.
If you're a security leader struggling with alert overload, Microsoft Defender Experts for XDR, Microsoft's MXDR (managed extended detection and response) service, can deliver around-the-clock, expert-led protection. For more information, please visit Microsoft Defender Experts for XDR | Microsoft Security

Memory under siege: The silent evolution of credential theft
From memory dumps to filesystem browsing

Historically, threat groups like Lorenz have relied on tools such as Magnet RAM Capture to dump volatile memory for offline analysis. While this approach can be effective, it comes with significant operational overhead—dumping large memory files, transferring them, and parsing them with additional forensic tools is time-consuming. But adversaries are evolving. They are shifting toward real-time, low-footprint techniques like MemProcFS, a forensic tool that exposes system memory as a browsable virtual filesystem. When paired with Dokan, a user-mode library that enables filesystem mounting on Windows, MemProcFS can mount live memory—not just parse dumps—giving attackers direct access to volatile data in real time. This setup eliminates the need for traditional bulky memory dumps and allows attackers to interact with memory as if it were a local folder structure. The result is faster, more selective data extraction with minimal forensic trace.

With this capability, attackers can:
- Navigate memory like folders, skipping raw dump parsing
- Directly access processes like lsass.exe to extract credentials swiftly
- Evade traditional detection, as no dump files are written to disk

This marks a shift in post-exploitation tactics—precision, stealth, and speed now define how memory is harvested.

Sample directory structure of live system memory mounted using MemProcFS (attacker's perspective)

Case study

Microsoft Defender Experts, in late April 2025, observed this technique in an intrusion where a compromised user account was leveraged for lateral movement across the environment. The attacker demonstrated a high level of operational maturity, using stealthy techniques to harvest credentials and exfiltrate sensitive data.

Attack path summary as observed by Defender Experts

After gaining access, the adversary deployed Dokan and MemProcFS to mount live memory as a virtual filesystem.
This allowed them to interact with processes like lsass.exe in real time, extracting credentials without generating traditional memory dumps—minimizing forensic artifacts. To further their access, the attacker executed vssuirun.exe to create a Volume Shadow Copy, enabling access to locked system files such as SAM and SYSTEM. These files were critical for offline password cracking and privilege escalation. Once the data was collected, it was compressed into an archive and exfiltrated via an SSH tunnel.

Attackers compress the LSASS minidump from mounted memory into an archive for exfiltration

This case exemplifies how modern adversaries combine modular tooling, real-time memory interaction, and encrypted exfiltration to operate below the radar and achieve their objectives with precision.

Unmasking stealth: Defender Experts in action

The attack outlined above exemplifies the stealth and sophistication of today's threat actors—leveraging legitimate tools, operating in-memory, and leaving behind minimal forensic evidence. Microsoft Defender Experts successfully detected, investigated, and responded to this memory-resident threat by leveraging rich telemetry, expert-led threat hunting, and contextual analysis that goes far beyond automated detection. From uncovering evasive techniques like memory mounting and credential harvesting to correlating subtle signals across endpoints, Defender Experts bring human-led insight to the forefront of your cybersecurity strategy. Our ability to pivot quickly, interpret nuanced behaviors, and deliver tailored guidance ensures that even the most covert threats are surfaced and neutralized—before they escalate.

Detection guidance

The alert Memory forensics tool activity by Microsoft Defender for Endpoint might indicate threat activity associated with this technique.
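Because mounting live memory with MemProcFS requires the Dokan filesystem driver, driver-load telemetry offers another hunting angle alongside process-based detection. The sketch below is an assumption-laden starting point: it assumes the Dokan driver ships under its common file names (dokan1.sys / dokan2.sys), so the name filter should be validated and broadened for the builds seen in your environment.

```kusto
// Hunting sketch: loads of the Dokan user-mode filesystem driver, a possible
// precursor to mounting live memory with MemProcFS.
// Assumes the common Dokan driver file names; adjust for other builds.
DeviceEvents
| where ActionType == "DriverLoad"
| where FileName in~ ("dokan1.sys", "dokan2.sys") or FileName startswith "dokan"
| project Timestamp, DeviceName, FileName, SHA256, InitiatingProcessFileName, InitiatingProcessCommandLine
```

Dokan is also used by legitimate software, so hits should be correlated with the MemProcFS process query that follows rather than treated as malicious on their own.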
Microsoft Defender XDR customers can run the following query to identify suspicious use of MemProcFS:

DeviceProcessEvents
| where ProcessVersionInfoOriginalFileName has "MemProcFS"
| where ProcessCommandLine has_all (" -device PMEM")

Recommendations

To reduce exposure to this emerging technique, Microsoft Defender Experts recommend the following actions:
- Educate security teams on memory-based threats and the offensive repurposing of forensic tools.
- Monitor for memory mounting activity, especially virtual drive creation linked to unusual processes or users.
- Restrict execution of dual-use tools like MemProcFS via application control policies.
- Track filesystem driver installations, flagging Dokan usage as a potential precursor to memory access.
- Correlate SSH activity with data staging, especially when sensitive files are accessed or archived.
- Submit suspicious samples to the Microsoft Defender Security Intelligence (WDSI) portal for analysis.

Final thoughts

We all agree: memory is no longer just a post-incident artifact—it's the new frontline in credential theft. What we're witnessing isn't just a clever use of forensic tooling; it's a strategic shift in how adversaries interact with volatile data. By mounting live memory as a virtual filesystem, attackers gain real-time access to a wide range of sensitive information—not just credentials. From authentication tokens and encryption keys to in-memory malware, clipboard contents, and application data, memory has become a rich, dynamic source of intelligence. Tools like MemProcFS and Dokan enable adversaries to extract this data with speed, precision, and minimal forensic footprint—often without leaving behind the traditional signs defenders rely on. This evolution challenges defenders to go beyond surface-level detection.
We must monitor for subtle signs of memory access abuse, understand how legitimate forensic tools are being repurposed offensively, and treat memory as an active threat surface—not just a post-incident artifact.

To learn more about how our human-led managed security services can help you stay ahead of similar emerging threats, please visit Microsoft Defender Experts for XDR, our managed extended detection and response (MXDR) service, and Microsoft Defender Experts for Hunting (included in Defender Experts for XDR and as a standalone service), our managed threat hunting service.

Cloud forensics: Why enabling Microsoft Azure Key Vault logs matters
Co-authors - Christoph Dreymann - Abul Azed - Shiva P.

Introduction

As organizations increase their cloud adoption to accelerate AI readiness, Microsoft Incident Response has observed the rise of cloud-based threats as attackers seek to access sensitive data and exploit vulnerabilities stemming from misconfigurations often caused by rapid deployments. In this blog series, Cloud Forensics, we share insights from our frontline investigations to help organizations better understand the evolving threat landscape and implement effective strategies to protect their cloud environments. This blog post explores the importance of enabling and analyzing Microsoft Azure Key Vault logs in the context of security investigations. Microsoft Incident Response has observed cases where threat actors specifically targeted Key Vault instances. In the absence of proper logging, conducting thorough investigations becomes significantly more difficult. Given the highly sensitive nature of the data stored in Key Vault, it is a common target for malicious activity. Moreover, attacks against this service often leave minimal forensic evidence when verbose logging is not properly configured during deployment. We will walk through realistic attack scenarios, illustrating how these threats manifest in log data and highlighting the value of enabling comprehensive logging for detection.

Key Vault

Key Vault is a cloud service designed for secure storage and retrieval of critical secrets such as passwords or database connection strings. In addition to secrets, it can store other information such as certificates and cryptographic keys. To ensure effective monitoring of activities performed on a specific instance of Key Vault, it is essential to enable logging. When audit logging is not enabled, and there is a security breach, it is often difficult to ascertain which secrets were accessed without comprehensive logs.
Given the importance of the assets protected by Key Vault, it is imperative to enable logging during the deployment phase.

How to enable logging

Logging must be enabled separately for each Key Vault instance, either in the Microsoft Azure portal, the Azure command-line interface (CLI), or Azure PowerShell. How to enable logging can be found here. Alternatively, it can be configured against the default Log Analytics workspace via an Azure Policy. How to use this method can be found here. By directing these logs to a Log Analytics workspace, storage account, or event hub for security information and event management (SIEM) ingestion, they can be utilized for threat detection and, more importantly, to ascertain when an identity was compromised and which type of sensitive information was accessed through that compromised identity. Without this logging, it is difficult to confirm whether any material was accessed, and all stored material may therefore need to be treated as compromised. NOTE: There are no license requirements to enable logging within Key Vault, but Log Analytics charges based on ingestion and retention for usage of that service (Pricing - Azure Monitor | Microsoft Azure). Next, we will review the structure of the audit logs originating from the Key Vault instance. These logs are located in the AzureDiagnostics table.

Interesting fields

Below is a good starting query to begin investigating activity performed against a Key Vault instance:

AzureDiagnostics
| where ResourceType == 'VAULTS'

The "OperationName" field is of particular significance as it indicates the type of operation that took place. An overview of Key Vault operations can be found here. The "Identity" field includes details about the identity responsible for an activity, such as the object identifier and UPN. Lastly, the "callerIpAddress" field shows which IP address the requests originate from. The table below describes the fields highlighted and used in this article.

time — Date and time in UTC.
resourceId — The Key Vault resource ID uniquely identifies a Key Vault in Azure and is used for various operations and configurations.
callerIpAddress — IP address of the client that made the request.
Identity — Contains various information about the identity responsible for the request. The identity can be a "user," a "service principal," or a combination such as "user+appId" when the request originates from an Azure PowerShell cmdlet. Different fields are available based on this; the most important ones are identity_claim_upn_s (the UPN of a user), identity_claim_appid_g (the app ID), and identity_claim_idtyp_s (the type of identity used).
OperationName — The name of the operation, for instance SecretGet.
Resource — The Key Vault name.
ResourceType — Always "VAULTS".
requestUri_s — The requested Key Vault API call, which contains valuable information. Each API call has its own structure. For example, the SecretGet request URI is {vaultBaseUrl}/secrets/{secret-name}/{secret-version}?api-version=7.4. For more information, please see: https://learn.microsoft.com/en-us/rest/api/keyvault/?view=rest-keyvault-keys-7.4
httpStatusCode_d — Indicates whether an API call was successful.

A complete list of fields can be found here. To analyze further, we need to understand how a threat actor can access a Key Vault by examining the Access Policy and Azure role-based access control (RBAC) permission model used within it.

Access Policy permission model vs Azure RBAC

The Access Policy permission model operates solely on the data plane, specifically for Azure Key Vault. The data plane is the access pathway for creating, reading, updating, and deleting assets stored within the Key Vault instance. Via a Key Vault access policy, a security principal with appropriate control-plane privileges can grant individual data-plane permissions to users, groups, service principals, and managed identities at the Key Vault scope.
This model provides flexibility by granting access to keys, secrets, and certificates through specific permissions. However, it is considered a legacy authorization system native to Key Vault. Note: The Access Policies permission model has privilege escalation risks and lacks Privileged Identity Management support. It is not recommended for critical data and workloads. On the other hand, Azure RBAC operates on both Azure's control and data planes. It is built on Azure Resource Manager, allowing for centralized access management of Azure resources. Azure RBAC controls access by creating role assignments, which consist of a security principal, a role definition (a predefined set of permissions), and a scope (a group of resources or an individual resource). RBAC offers several advantages, including a unified access control model for Azure resources and integration with Privileged Identity Management. More information regarding Azure RBAC can be found here. Now, let's dive into how threat actors can gain access to a Key Vault.

How a threat actor can access a Key Vault

When a Key Vault is configured with the Access Policy permission model, privileges can be escalated under certain circumstances. If a threat actor gains access to an identity that has been assigned the Key Vault Contributor Azure RBAC role, the Contributor role, or any role that includes the 'Microsoft.KeyVault/vaults/write' permission, they can escalate their privileges by setting a Key Vault access policy to grant themselves data-plane access, which in turn allows them to read and modify the contents of the Key Vault. Modifying the permission model requires the 'Microsoft.Authorization/roleAssignments/write' permission, which is included in the Owner and User Access Administrator roles. Therefore, a threat actor cannot change the permission model without one of these roles.
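To make the two escalation paths above concrete, here is a minimal Python sketch. It is illustrative only: the flat `actions` list and the `escalation_risks` helper are assumptions for this example, not a real Azure SDK API, and real role definitions also use Actions/NotActions with wildcards.

```python
# Permissions tied to the escalation paths described above (simplified).
POLICY_WRITE = "Microsoft.KeyVault/vaults/write"                    # enough to add an access policy
ROLE_MODEL_WRITE = "Microsoft.Authorization/roleAssignments/write"  # needed to change the permission model

def escalation_risks(assignments):
    """Flag principals whose granted actions allow either escalation path.

    `assignments` is a list of {'principal': str, 'actions': [str, ...]}.
    """
    findings = []
    for assignment in assignments:
        actions = set(assignment["actions"])
        if POLICY_WRITE in actions:
            findings.append((assignment["principal"],
                             "can grant itself data-plane access via an access policy"))
        if ROLE_MODEL_WRITE in actions:
            findings.append((assignment["principal"],
                             "can change the vault's permission model"))
    return findings

# Illustrative assignments (principals and actions are made up).
assignments = [
    {"principal": "compromised-user@contoso.com", "actions": [POLICY_WRITE]},
    {"principal": "auditor@contoso.com", "actions": ["Microsoft.KeyVault/vaults/read"]},
]
for principal, risk in escalation_risks(assignments):
    print(f"{principal}: {risk}")
```

A production check would expand wildcard actions and subtract NotActions before comparing, but the decision logic stays the same.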
Any change to the authorization mode will be logged in the Activity Logs of the subscription, as shown in the figure below: If a new access policy is added, it will generate the following entry within the Azure Activity Log: When Azure RBAC is the permission model for a Key Vault, a threat actor must identify an identity within the Entra ID tenant that has access to sensitive information or one capable of assigning such permissions. Information about Azure RBAC roles for Key Vault access, specifically those that can access secrets, can be found here. A threat actor who has compromised an identity with an Owner role is authorized to manage all operations, including resources, access policies, and roles within the Key Vault. In contrast, a threat actor with a Contributor role can perform management operations but does not have access to keys, secrets, or certificates. This restriction applies when the RBAC model is used within a Key Vault. The following section will examine the typical actions performed by a threat actor after obtaining permissions.

Attack scenario

Let's review the common steps threat actors take after gaining initial access to Microsoft Azure. We will focus on the Azure Resource Manager layer (responsible for deploying and managing resources), as its Azure RBAC or Access Policy permissions determine what a threat actor can view or access within Key Vault(s).

Enumeration

Initially, threat actors aim to understand the organization's existing attack surface. As such, all Azure resources will be enumerated. The scope of this enumeration is determined by the access rights held by the compromised identity. If the compromised identity possesses access comparable to that of a Reader or a Key Vault Reader at the subscription level (reader permission is included in a variety of Azure RBAC roles), it can read numerous resource groups. Conversely, if the identity's access is restricted, it may only view a specific subset of resources, such as Key Vaults.
Consequently, a threat actor can only interact with those Key Vaults that are visible to them. Once the Key Vault name is identified, a threat actor can interact with the Key Vault, and these interactions will be logged within the AzureDiagnostics table.

List secrets / List certificates operation

With the Key Vault name, a threat actor could list secrets or certificates (operations SecretList and CertificateList) if they have the appropriate rights (while this does not return the final secret values, it reveals under which names the secrets or certificates can be retrieved). If not, access attempts would appear as unsuccessful operations within the httpStatusCode_d field, aiding in detecting such activities. Therefore, a high number of unauthorized operations on different Key Vaults could be an indicator of suspicious activity, as shown in the figure below: The following query assists in detecting potential unauthorized access patterns.

Query:

AzureDiagnostics
| where ResourceType == 'VAULTS' and OperationName != "Authentication"
| summarize MinTime = min(TimeGenerated), MaxTime = max(TimeGenerated), OperationCount = count(), UnauthorizedAccess = countif(httpStatusCode_d >= 400), OperationNames = make_set(OperationName), UnauthorizedStatusCodes = make_set_if(httpStatusCode_d, httpStatusCode_d >= 400), VaultName = make_set(Resource) by CallerIPAddress
| where OperationNames has_any ("SecretList", "CertificateList") and UnauthorizedAccess > 0

When a threat actor uses a browser for interaction, the VaultGet operation is usually the first action when accessing a Key Vault. This operation can also be performed via direct API calls and is not limited to browser use.

High-Privileged account store

Next, we assume a successful attempt to access a global admin password for Entra ID.

Analyzing Secret retrieval

When an individual knows the name of a Key Vault and holds SecretList and SecretGet access rights, they can list all the secrets stored within the Key Vault (operation SecretList).
In this instance, the secret includes a password. Upon identifying the secret name, the secret value can be retrieved (operation SecretGet). The image below illustrates what appears in the AzureDiagnostics table. The HTTP status code indicates that these actions were successful. The requestUri_s contains the name of the secret, such as "BreakGlassAccountTenant" for the SecretGet operation. With this information, one can ascertain which secret has been accessed. The requestUri_s format for the SecretGet operation is as follows: {vaultBaseUrl}/secrets/{secret-name}/{secret-version}?api-version=7.4 When the browser accesses the Key Vault service through the Azure portal, additional API calls are often involved due to the various views within the Key Vault service in Azure. The figure below illustrates this process. When someone accesses a specific Key Vault via a browser, the VaultGet operation is followed by SecretList. To further distinguish actions, SecretListVersions will also appear, as the Key Vault service shows different versions of a secret; its presence may indicate direct browser usage. The final SecretGet operation retrieves the actual secret. During normal Key Vault use, SecretList operations can be accompanied by SecretGet operations. This is less common for emergency accounts, since these accounts are infrequently used. Setting up alerts for when certain secrets are retrieved can assist in identifying unusual activity.

Entra ID Application certificate store

In addition to storing secrets, certificates that provide access to Entra ID applications can also be managed within a Key Vault. When creating an Entra ID application with a certificate for authentication, you can automatically store that certificate within a Key Vault of your choice. Access to such certificates could allow a threat actor to leverage the access rights of the associated Entra ID application and gain access to Entra ID.
For instance, if the Entra ID application possesses significant permissions, the extracted certificate could be utilized to exercise those permissions. Various Entra ID roles can be leveraged to elevate privileges; however, for this scenario, we assume the targeted application holds the "RoleManagement.ReadWrite.Directory" permission. Consequently, the Entra ID application would have the capability to assign the Global Admin role to a user account controlled by the threat actor. We have also described this scenario here.

Analyzing Certificate retrieval

The figure below outlines the procedure for a threat actor to download a certificate and its private key using the Key Vault API. First, the CertificateList operation displays all certificates within a Key Vault. Next, the SecretGet operation retrieves a specific certificate along with its private key (the SecretGet operation is required to obtain both the certificate and its private key). When a threat actor uses the browser through the Azure portal, the sequence of actions should resemble those in the figure below: When a certificate object is selected within a specific Key Vault view, all certificates are displayed (operation CertificateList). Upon selecting a particular certificate in this view, the operations CertificateGet and CertificateListVersions are executed. Subsequently, when a specific version is selected, the CertificateGet operation is invoked again. When "Download in PFX/PEM format" is selected, the SecretGet operation downloads the certificate and private key within the browser. With the downloaded certificate, the threat actor can sign in as the Entra ID application and utilize its assigned permissions.

Key Vault summary

Detecting misuse of a Key Vault instance can be challenging, as operations like SecretGet can be legitimate. A threat actor might easily disguise their activities among those of legitimate users.
Nevertheless, unusual attributes, such as IP addresses or peculiar access patterns, could serve as indicators. If an identity is known to be compromised and has utilized Key Vaults, the Key Vault logs must be checked to determine what has been accessed in order to respond appropriately.

Coming up next

Stay tuned for the next blog in the Cloud Forensics series. If you haven't already, please read our previous blog about hunting with Microsoft Graph activity logs.

From social engineering to rogue VMs: The emerging tradecraft in human-directed ransomware attacks
Co-authors - Ateesh Rajak - Balaji Venkatesh

Overview

What if an attacker didn't need malware, phishing kits, or exploits to break into your environment—just a convincing voice and a tool you already trust? That's exactly the play we're seeing. Ransomware operators and hands-on-keyboard intruders are skipping traditional phishing lures and going straight to the human. By impersonating IT support over phone or Microsoft Teams, they convince users to launch Microsoft Quick Assist, handing over remote access under the guise of troubleshooting. There's no payload at this point—only manipulation. Once access is established, the attacker downloads and executes a VBScript that launches a QEMU-based rogue virtual machine on the target system. This VM provides an isolated, persistent environment where the attacker can perform internal reconnaissance, collect credentials, move laterally, and lay the groundwork for ransomware deployment—all while staying outside the visibility of host-based security tools. These aren't opportunistic intrusions. This is calculated tradecraft—a multi-stage operation that begins with trust, escalates with virtualization-based stealth, and often culminates in data exfiltration, lateral movement, or ransomware deployment. The real risk? Attackers are no longer just bypassing defenses—they're building infrastructure within enterprise environments. Read this blog to learn about this emerging attack technique as well as how Defender Experts can help protect your organization.

Attack Flow: Social Engineering Meets Hypervisor Abuse

This attack chain combines psychological manipulation with technical evasion, enabling attackers to quietly establish footholds in victim environments. Recent incidents observed by Defender Experts highlight the use of this tradecraft against organizations in the pharmaceutical and consumer goods sectors.
Stage One: Distraction and Deception

The intrusion begins with an email bombing campaign, flooding the target's inbox with hundreds of nuisance messages. Shortly afterward, the user receives a Microsoft Teams message or PSTN call from someone impersonating IT support. "We noticed issues with your mailbox. Let me help you fix it." The victim is guided to launch Microsoft Quick Assist, granting the attacker remote access to the device without raising suspicion.

Stage Two: Remote Execution and Rogue VM Deployment

With remote access established, the attacker executes initial reconnaissance to enumerate host, network, and domain details. They then download and execute a VBScript, often hosted on cloud storage platforms such as Google Drive, which spins up a QEMU-based virtual machine on the endpoint. This VM becomes an isolated operational enclave—fully controlled by the attacker and invisible to traditional EDR and host-based telemetry. Note: Defender Experts have observed attackers leveraging QEMU's flexible command line options to evade detection. By frequently changing parameters like RAM size, network setup (e.g., -netdev/-device vs. -nic), and using configuration files instead of inline arguments, attackers bypass static detection rules based on command signatures.

Stage Three: Persistence and Expansion

Within the rogue VM, the attacker performs the following actions:
- Executes internal network scans
- Establishes command-and-control (C2) communication through the VM's virtual NIC
- Initiates lateral movement
- Stores payloads and tools within disk images (.qcow2, .iso, .img) to maintain persistence

Because all post-compromise activity takes place within the guest VM, most host monitoring solutions are unable to observe these behaviors—allowing attackers to operate undetected.

Why This Technique Matters

The use of rogue virtual machines in active intrusions represents a significant evolution in attacker tradecraft.
This method enables:
- Host-level evasion: Traditional endpoint agents cannot monitor activities inside virtual machines, reducing detection coverage.
- Persistent access: VMs can survive reboots and maintain remote shell capabilities.
- Stealth infrastructure: Malicious traffic originating from within the VM often blends into normal network activity.
- Reduced forensic artifacts: Since activity is isolated to the guest OS, forensic artifacts on the host are minimal—making incident reconstruction difficult.

Organizations lacking behavioral monitoring and layered defense strategies may miss early indicators of compromise until after significant impact.

How Defender Experts Adds Defense-in-Depth Value

Defender Experts goes beyond Defender detections to surface rogue VM–based intrusions, especially when attackers rely on trusted tools and human manipulation instead of malware. Defender Experts bridges this gap by delivering expert-led detection and response at every critical phase of the intrusion:
- Teams Phishing Detection: Defender Experts monitors for suspicious Microsoft Teams messages sent from anomalous or newly created identities—flagging potential social engineering activity early.
- Quick Assist Misuse Monitoring: When a Teams phishing message leads to remote access via Quick Assist, Defender Experts identifies and correlates this as part of an active intrusion, even in the absence of malware.
- QEMU Execution Detection: Defender Experts hunting queries spotlight scripted QEMU launches—detecting virtual machine deployment before lateral movement begins.
- AnyDesk and Persistence Tooling: Defender Experts observes signs of persistence via unauthorized tools like AnyDesk and correlates these with pre-compromise behavior.

By connecting these discrete signals—Teams phishing, Quick Assist abuse, QEMU execution, and persistence setup—Defender Experts offers a unified picture of emerging tradecraft.
Customers benefit from:
- Early human-led detection before ransomware or data exfiltration occurs
- Tailored hunting queries and response guidance mapped to real-world threats

Defender Experts doesn't just detect individual behaviors—it maps the entire intrusion kill chain and guides customers through containment and recovery.

Detection Guidance

Although visibility is limited inside the rogue VM, defenders can detect the setup process. The following advanced hunting query can help identify suspicious VM launches initiated via scripting engines:

DeviceProcessEvents
| where InitiatingProcessFileName in~ ("powershell.exe", "wscript.exe", "cscript.exe")
| where ProcessVersionInfoInternalFileName has "qemu" and ProcessCommandLine !has "qemu" // Renamed execution of the QEMU emulator

This query focuses on scripted launches of a renamed QEMU binary—processes whose internal file version information identifies QEMU while the visible command line avoids the string—a sign of programmatic VM deployment via Windows scripting engines.

Recommendations

To reduce exposure to this emerging technique, Defender Experts recommends the following actions:
- User awareness training: Educate employees on recognizing vishing and social engineering tactics.
- Disable or control remote access tools: Block or uninstall Microsoft Quick Assist if unused. Organizations using Microsoft Intune can adopt Remote Help, which offers enhanced security and authentication controls.
- Enable behavioral network monitoring: Unusual internal scan activity or unexpected outbound traffic may signal VM-based operations.
- Proactively hunt for rogue VM activity:
  - Use the hunting query above to identify scripted QEMU executions
  - Isolate affected hosts to prevent further C2 or lateral movement
  - Remove VBScript files, QEMU executables, and disk images (.qcow2, .img, .iso)
  - Rebuild compromised systems using trusted images and rotate credentials
- Submit samples to Microsoft for analysis: Upload suspicious scripts and binaries to the Microsoft Defender Security Intelligence (WDSI) portal for deep inspection.
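The renamed-QEMU logic from the hunting query above can also be applied offline to exported DeviceProcessEvents rows. The following Python sketch is illustrative only—`is_renamed_qemu` is not Microsoft tooling, and the sample event values are made up:

```python
def is_renamed_qemu(event: dict) -> bool:
    """Mirror of the advanced hunting query: flag a process launched by a
    Windows scripting engine whose internal file name identifies QEMU while
    the visible command line avoids the string "qemu" (i.e., a renamed binary)."""
    scripting_engines = {"powershell.exe", "wscript.exe", "cscript.exe"}
    return (
        event["InitiatingProcessFileName"].lower() in scripting_engines
        and "qemu" in event["ProcessVersionInfoInternalFileName"].lower()
        and "qemu" not in event["ProcessCommandLine"].lower()
    )

# Illustrative event: a VBScript-launched, renamed QEMU binary.
event = {
    "InitiatingProcessFileName": "wscript.exe",
    "ProcessVersionInfoInternalFileName": "qemu-system-x86_64.exe",
    "ProcessCommandLine": "svchost64.exe -m 4096 -nic user,model=virtio",
}
print(is_renamed_qemu(event))  # prints True
```

A launch where the command line still contains "qemu" would return False, which matches the query's intent of surfacing only renamed executions.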
Conclusion

This technique represents more than just a clever evasion strategy—it marks a significant shift in adversary tradecraft. Attackers are no longer solely focused on bypassing antivirus or executing malware payloads. Instead, they are building persistent infrastructure within enterprise environments by abusing trusted tools and user workflows. By combining social engineering with virtualization-based stealth, these intrusions enable threat actors to extend dwell time, reduce detection surface, and operate below the radar of traditional response mechanisms. This activity underscores the importance of behavioral monitoring, layered defenses, and user awareness. What appears to be a routine IT interaction may, in reality, be the entry point for a full-fledged rogue virtual machine—and a persistent threat operating in plain sight. To learn more about how our human-led managed security services can help you stay ahead of similar emerging threats, please visit Microsoft Defender Experts for XDR, our managed extended detection and response (MXDR) service, and Microsoft Defender Experts for Hunting (included in Defender Experts for XDR), our managed threat hunting service.