Cloud forensics: Why enabling Microsoft Azure Key Vault logs matters
Co-authors: Christoph Dreymann, Abul Azed, Shiva P.

Introduction

As organizations increase their cloud adoption to accelerate AI readiness, Microsoft Incident Response has observed a rise in cloud-based threats as attackers seek to access sensitive data and exploit vulnerabilities stemming from misconfigurations, often caused by rapid deployments. In this blog series, Cloud Forensics, we share insights from our frontline investigations to help organizations better understand the evolving threat landscape and implement effective strategies to protect their cloud environments.

This blog post explores the importance of enabling and analyzing Microsoft Azure Key Vault logs in the context of security investigations. Microsoft Incident Response has observed cases where threat actors specifically targeted Key Vault instances. In the absence of proper logging, conducting thorough investigations becomes significantly more difficult. Given the highly sensitive nature of the data stored in Key Vault, it is a common target for malicious activity. Moreover, attacks against this service often leave minimal forensic evidence when verbose logging is not properly configured during deployment. We will walk through realistic attack scenarios, illustrating how these threats manifest in log data and highlighting the value of enabling comprehensive logging for detection.

Key Vault

Key Vault is a cloud service designed for secure storage and retrieval of critical secrets such as passwords or database connection strings. In addition to secrets, it can store other information such as certificates and cryptographic keys. To ensure effective monitoring of activities performed on a specific instance of Key Vault, it is essential to enable logging. When audit logging is not enabled and a security breach occurs, it is often difficult to ascertain which secrets were accessed.
Given the importance of the assets protected by Key Vault, it is imperative to enable logging during the deployment phase.

How to enable logging

Logging must be enabled separately for each Key Vault instance, either in the Microsoft Azure portal, the Azure command-line interface (CLI), or Azure PowerShell. How to enable logging can be found here. Alternatively, it can be configured within the default Log Analytics workspace as an Azure Policy. How to use this method can be found here. By directing these logs to a Log Analytics workspace, storage account, or event hub for security information and event management (SIEM) ingestion, they can be utilized for threat detection and, more importantly, to ascertain when an identity was compromised and which type of sensitive information was accessed through that compromised identity. Without this logging, it is difficult to confirm whether any material has been accessed, and it may therefore need to be treated as compromised.

NOTE: There are no license requirements to enable logging within Key Vault, but Log Analytics charges based on ingestion and retention for usage of that service (Pricing - Azure Monitor | Microsoft Azure).

Next, we will review the structure of the audit logs originating from the Key Vault instance. These logs are located in the AzureDiagnostics table.

Interesting fields

Below is a good starting query to begin investigating activity performed against a Key Vault instance:

AzureDiagnostics
| where ResourceType == 'VAULTS'

The "OperationName" field is of particular significance, as it indicates the type of operation that took place. An overview of Key Vault operations can be found here. The "Identity" field includes details about the identity responsible for an activity, such as the object identifier and UPN. Lastly, the "callerIpAddress" field shows which IP address the requests originate from. The list below describes the fields highlighted and used in this article.

time: Date and time in UTC.
resourceId: The Key Vault resource ID, which uniquely identifies a Key Vault in Azure and is used for various operations and configurations.
callerIpAddress: IP address of the client that made the request.
Identity: The identity structure includes various information. The identity can be a "user," a "service principal," or a combination such as "user+appId" when the request originates from an Azure PowerShell cmdlet. Different fields are available based on this. The most important ones are:
  identity_claim_upn_s: Specifies the UPN of a user.
  identity_claim_appid_g: Contains the app ID.
  identity_claim_idtyp_s: Shows what type of identity was used.
OperationName: The name of the operation, for instance SecretGet.
Resource: The Key Vault name.
ResourceType: Always "VAULTS".
requestUri_s: The requested Key Vault API call, which contains valuable information. Each API call has its own structure. For example, the SecretGet request URI is: {vaultBaseUrl}/secrets/{secret-name}/{secret-version}?api-version=7.4. For more information, please see: https://learn.microsoft.com/en-us/rest/api/keyvault/?view=rest-keyvault-keys-7.4
httpStatusCode_d: Indicates whether an API call was successful.

A complete list of fields can be found here. To analyze further, we need to understand how a threat actor can access a Key Vault by examining the Access Policy and Azure role-based access control (RBAC) permission models used within it.

Access Policy permission model vs Azure RBAC

The Access Policy permission model operates solely on the data plane, specifically for Azure Key Vault. The data plane is the access pathway for creating, reading, updating, and deleting assets stored within the Key Vault instance. Via a Key Vault access policy, a security principal with appropriate control-plane privileges can assign individual permissions and grant data-plane access to security principals such as users, groups, service principals, and managed identities, at the scope of an individual Key Vault.
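During an investigation, the requestUri_s field can be parsed to recover which secret was touched. The following Python sketch is a hypothetical helper (not part of any Microsoft tooling) that extracts the secret name and version from a SecretGet request URI of the shape shown above:

```python
from urllib.parse import urlparse

def parse_secret_get_uri(request_uri: str):
    """Extract (secret-name, secret-version) from a SecretGet requestUri_s value.

    Expected shape: {vaultBaseUrl}/secrets/{secret-name}/{secret-version}?api-version=...
    The version segment is optional; returns None for non-secret operations.
    """
    path = urlparse(request_uri).path            # drop host and query string
    parts = [p for p in path.split("/") if p]    # drop empty segments
    if not parts or parts[0] != "secrets":
        return None                              # e.g. a keys or certificates call
    name = parts[1] if len(parts) > 1 else None
    version = parts[2] if len(parts) > 2 else None
    return (name, version)

uri = "https://myvault.vault.azure.net/secrets/BreakGlassAccountTenant/abc123?api-version=7.4"
print(parse_secret_get_uri(uri))  # ('BreakGlassAccountTenant', 'abc123')
```

Applied over exported AzureDiagnostics rows, this turns raw request URIs into a list of accessed secret names for scoping a compromise.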
This model provides flexibility by granting access to keys, secrets, and certificates through specific permissions. However, it is considered a legacy authorization system native to Key Vault.

Note: The Access Policies permission model has privilege escalation risks and lacks Privileged Identity Management support. It is not recommended for critical data and workloads.

Azure RBAC, on the other hand, operates on both Azure's control and data planes. It is built on Azure Resource Manager, allowing for centralized access management of Azure resources. Azure RBAC controls access by creating role assignments, each of which consists of a security principal, a role definition (a predefined set of permissions), and a scope (a group of resources or an individual resource). RBAC offers several advantages, including a unified access control model for Azure resources and integration with Privileged Identity Management. More information regarding Azure RBAC can be found here.

Now, let's dive into how threat actors can gain access to a Key Vault.

How a threat actor can access a Key Vault

When a Key Vault is configured with the Access Policy permission model, privileges can be escalated under certain circumstances. If a threat actor gains access to an identity that has been assigned the Key Vault Contributor Azure RBAC role, the Contributor role, or any role that includes the 'Microsoft.KeyVault/vaults/write' permission, they can escalate their privileges by setting a Key Vault access policy to grant themselves data-plane access, which in turn allows them to read and modify the contents of the Key Vault. Changing the permission model itself requires the 'Microsoft.Authorization/roleAssignments/write' permission, which is included in the Owner and User Access Administrator roles. Therefore, a threat actor cannot change the permission model without one of these roles.
Any change to the authorization mode will be logged in the Activity Logs of the subscription, as shown in the figure below. If a new access policy is added, it will generate a corresponding entry within the Azure Activity Log.

When Azure RBAC is the permission model for a Key Vault, a threat actor must identify an identity within the Entra ID tenant that has access to sensitive information, or one capable of assigning such permissions. Information about Azure RBAC roles for Key Vault access, specifically those that can access secrets, can be found here. A threat actor that has compromised an identity with the Owner role is authorized to manage all operations, including resources, access policies, and roles within the Key Vault. In contrast, a threat actor with the Contributor role can perform management operations but does not have access to keys, secrets, or certificates. This restriction applies when the RBAC model is used within a Key Vault.

The following section will examine the typical actions performed by a threat actor after gathering permissions.

Attack scenario

Let's review the common steps threat actors take after gaining initial access to Microsoft Azure. We will focus on the Azure Resource Manager layer (responsible for deploying and managing resources), as its Azure RBAC or Access Policy permissions determine what a threat actor can view or access within Key Vault(s).

Enumeration

Initially, threat actors aim to understand the organization's existing attack surface. To that end, all Azure resources will be enumerated. The scope of this enumeration is determined by the access rights held by the compromised identity. If the compromised identity possesses access comparable to that of a Reader or a Key Vault Reader at the subscription level (reader permission is included in a variety of Azure RBAC roles), it can read numerous resource groups. Conversely, if the identity's access is restricted, it may only view a specific subset of resources, such as Key Vaults.
Consequently, a threat actor can only interact with those Key Vaults that are visible to them. Once the Key Vault name is identified, a threat actor can interact with the Key Vault, and these interactions will be logged within the AzureDiagnostics table.

List secrets / List certificates Operation

With the Key Vault name, a threat actor could list secrets or certificates (operations SecretList and CertificateList) if they have the appropriate rights. While these operations do not return the final secret, they reveal under which names secrets or certificates can be retrieved. If the threat actor lacks the rights, the access attempts would appear as unsuccessful operations within the httpStatusCode_d field, aiding in detecting such activities. Therefore, a high number of unauthorized operations on different Key Vaults could be an indicator of suspicious activity, as shown in the figure below. The following query assists in detecting potential unauthorized access patterns:

AzureDiagnostics
| where ResourceType == 'VAULTS' and OperationName != "Authentication"
| summarize MinTime = min(TimeGenerated), MaxTime = max(TimeGenerated), OperationCount = count(), UnauthorizedAccess = countif(httpStatusCode_d >= 400), OperationNames = make_set(OperationName), FailedStatusCodes = make_set_if(httpStatusCode_d, httpStatusCode_d >= 400), VaultName = make_set(Resource) by CallerIPAddress
| where OperationNames has_any ("SecretList", "CertificateList") and UnauthorizedAccess > 0

When a threat actor uses a browser for interaction, the VaultGet operation is usually the first action when accessing a Key Vault. This operation can also be performed via direct API calls and is not limited to browser use.

High-Privileged account store

Next, we assume a successful attempt to access a global admin password for Entra ID.

Analyzing Secret retrieval

When an individual has the identifier of a Key Vault and holds the SecretList and SecretGet access rights, they can list all the secrets stored within the Key Vault (OperationName SecretList).
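The aggregation logic of the query above can also be prototyped offline against exported log rows. The Python sketch below is illustrative only (the dictionary keys mirror the AzureDiagnostics columns used in this article, and the sample values are invented); it groups events by caller IP and flags callers whose list operations hit unauthorized responses:

```python
from collections import defaultdict

# Sample exported log rows (illustrative values only)
events = [
    {"CallerIPAddress": "203.0.113.7", "OperationName": "SecretList",
     "httpStatusCode_d": 403, "Resource": "kv-prod"},
    {"CallerIPAddress": "203.0.113.7", "OperationName": "CertificateList",
     "httpStatusCode_d": 403, "Resource": "kv-hr"},
    {"CallerIPAddress": "198.51.100.2", "OperationName": "SecretGet",
     "httpStatusCode_d": 200, "Resource": "kv-prod"},
]

def suspicious_callers(events, threshold=1):
    """Mimic the KQL summarize/countif: per caller IP, collect operations and
    count unauthorized (>= 400) responses; flag callers that attempted list
    operations and hit at least `threshold` unauthorized responses."""
    by_ip = defaultdict(lambda: {"ops": set(), "unauthorized": 0, "vaults": set()})
    for e in events:
        stats = by_ip[e["CallerIPAddress"]]
        stats["ops"].add(e["OperationName"])
        stats["vaults"].add(e["Resource"])
        if e["httpStatusCode_d"] >= 400:
            stats["unauthorized"] += 1
    return {
        ip: stats for ip, stats in by_ip.items()
        if stats["ops"] & {"SecretList", "CertificateList"}
        and stats["unauthorized"] >= threshold
    }

print(sorted(suspicious_callers(events)))  # ['203.0.113.7']
```

The second caller performs only a successful SecretGet and is therefore not flagged, matching the intent of the KQL filter.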
In this instance, the secret includes a password. Upon identifying the secret name, the secret value can be retrieved (OperationName SecretGet). The image below illustrates what appears in the AzureDiagnostics table. The HTTP status code indicates that these actions were successful. The requestUri contains the name of the secret, such as "BreakGlassAccountTenant" for the SecretGet operation. With this information, one can ascertain which secret has been accessed. The requestUri_s format for the SecretGet operation is as follows: {vaultBaseUrl}/secrets/{secret-name}/{secret-version}?api-version=7.4

When the Key Vault service is accessed through the Azure portal, additional API calls are often involved due to the various views within the Key Vault service. The figure below illustrates this process. When someone accesses a specific Key Vault via a browser, the VaultGet operation is followed by SecretList. To further distinguish actions, SecretListVersion will also appear, as the Key Vault service shows different versions of a secret; its presence may indicate direct browser usage. The final SecretGet operation retrieves the actual secret. During normal Key Vault usage, SecretList operations can be accompanied by SecretGet operations. This is less common for emergency accounts, since these accounts are infrequently used. Setting up alerts for when certain secrets are retrieved can assist in identifying unusual activity.

Entra ID Application certificate store

In addition to storing secrets, certificates that provide access to Entra ID applications can also be managed within a Key Vault. When creating an Entra ID application with a certificate for authentication, you can automatically store that certificate within a Key Vault of your choice. Access to such certificates could allow a threat actor to leverage the access rights of the associated Entra ID application and gain access to Entra ID.
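The browser-driven sequence described above (VaultGet, then SecretList, then SecretListVersion, then the final SecretGet) can be checked for programmatically. This is a hypothetical sketch, assuming you have an ordered list of OperationName values for a single caller; it tests for the portal-style subsequence while tolerating interleaved operations:

```python
def matches_portal_pattern(operations):
    """Return True if the caller's ordered operations contain the portal-style
    subsequence VaultGet -> SecretList -> SecretListVersion -> SecretGet.
    Other operations may be interleaved between the pattern steps."""
    pattern = ["VaultGet", "SecretList", "SecretListVersion", "SecretGet"]
    i = 0
    for op in operations:
        if i < len(pattern) and op == pattern[i]:
            i += 1
    return i == len(pattern)

ops = ["VaultGet", "KeyList", "SecretList", "SecretListVersion", "SecretGet"]
print(matches_portal_pattern(ops))            # True
print(matches_portal_pattern(["SecretGet"]))  # False
```

A lone SecretGet without the preceding portal views is more consistent with scripted or direct API access, which is a different (and sometimes more suspicious) profile.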
For instance, if the Entra ID application possesses significant permissions, the extracted certificate could be utilized to exercise those permissions. Various Entra ID roles can be leveraged to elevate privileges; for this scenario, we assume the targeted application holds the "RoleManagement.ReadWrite.Directory" permission. Consequently, the Entra ID application would have the capability to assign the Global Admin role to a user account controlled by the threat actor. We have also described this scenario here.

Analyzing Certificate retrieval

The figure below outlines the procedure for a threat actor to download a certificate and its private key using the Key Vault API. First, the CertificateList operation displays all certificates within a Key Vault. Next, the SecretGet operation retrieves a specific certificate along with its private key (the SecretGet operation is required to obtain both the certificate and its private key).

When a threat actor uses the browser through the Azure portal, the sequence of actions should resemble those in the figure below. When a Certificate object is selected within a specific Key Vault view, all certificates are displayed (operation: CertificateList). Upon selecting a particular certificate in this view, the operations CertificateGet and CertificateListVersions are executed. Subsequently, when a specific version is selected, the CertificateGet operation is invoked again. When "Download in PFX/PEM format" is selected, the SecretGet operation downloads the certificate and private key within the browser. With the downloaded certificate, the threat actor can sign in as the Entra application and utilize the assigned permissions.

Key Vault summary

Detecting misuse of a Key Vault instance can be challenging, as operations like SecretGet can be legitimate. A threat actor might easily blend their activities in among those of legitimate users.
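The tell-tale step in the certificate scenario is a SecretGet that follows certificate-view operations for the same caller: in portal flows this corresponds to "Download in PFX/PEM format", which returns the certificate together with its private key. A hypothetical Python sketch over a caller's ordered operations:

```python
def flag_certificate_exports(operations):
    """Flag indices where a SecretGet follows a CertificateGet for the same
    caller; in portal flows this corresponds to 'Download in PFX/PEM format',
    which returns the certificate together with its private key."""
    flagged = []
    seen_certificate_get = False
    for i, op in enumerate(operations):
        if op == "CertificateGet":
            seen_certificate_get = True
        elif op == "SecretGet" and seen_certificate_get:
            flagged.append(i)
    return flagged

ops = ["CertificateList", "CertificateGet", "CertificateListVersions",
       "CertificateGet", "SecretGet"]
print(flag_certificate_exports(ops))  # [4]
```

Flagged indices can then be joined back to requestUri_s to name the exported certificate and to Identity to name the account that pulled it.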
Nevertheless, unusual attributes, such as IP addresses or peculiar access patterns, could serve as indicators. If an identity is known to be compromised and has utilized Key Vaults, the Key Vault logs must be checked to determine what has been accessed, so that the organization can respond appropriately.

Coming up next

Stay tuned for the next blog in the Cloud Forensics series. If you haven't already, please read our previous blog about hunting with Microsoft Graph activity logs.

Enhancing Threat Hunting with Microsoft Defender Experts Plugin
In today's rapidly evolving digital landscape, cybersecurity threats are becoming increasingly sophisticated, requiring organizations to adopt proactive measures to safeguard their assets. Recognizing this need, Microsoft has introduced the Defender Experts Plugin, a powerful addition available through the Copilot for Security GitHub repository. This plugin is designed to elevate your cybersecurity defenses by integrating proactive threat hunting capabilities across your entire organization, including Office 365, cloud applications, and identity platforms.

What is Defender Experts for Hunting?

Defender Experts for Hunting is a specialized managed service from Microsoft that provides proactive, human-led threat hunting across a broad range of organizational environments. Unlike automated detection, this service involves active threat hunting by Microsoft's seasoned security experts, who analyze activities across endpoints, cloud applications, email, and identity platforms. Defender Experts for Hunting focuses on detecting advanced threats and human adversary behaviors, particularly those involving sophisticated or "hands-on-keyboard" attacks, and provides organizations with detailed alerts, expert guidance, and remediation recommendations.

Overview of the Plugin

Microsoft's Defender Experts Plugin is a comprehensive threat hunting tool that expands traditional security boundaries. It goes beyond endpoints to investigate Office 365, cloud applications, and identity platforms, where Microsoft's seasoned security professionals build detections to investigate suspicious activities. The plugin specializes in tracking sophisticated threats, especially those posed by human adversaries and hands-on-keyboard attacks. The plugin is skills-based, leaning on KQL advanced hunting queries (AHQs) to scan across Defender tables for risky behaviors and suspicious activities, with support for tables such as CloudAppEvents, EmailEvents, EmailAttachmentInfo, and AADSignIn.
These queries are not a one-off; Defender Experts will continue to contribute to the plugin in line with our normal research efforts. Some of the threat detection "skills" included in this plugin are:

Suspicious Use of AzureHound: Flags potentially unauthorized data gathering using AzureHound on devices.
Reconnaissance Activity Using Network Logs: Detects reconnaissance behavior by analyzing network logs and identifying specific command-line activity.
Cobalt Strike DNS Beaconing: Detects suspicious DNS queries associated with Cobalt Strike beacons.

By leveraging Microsoft's Defender Experts Plugin, organizations can benefit from the deep expertise and proactive approach of Microsoft's security researchers. This tool ensures that potential threats are not only identified but also thoroughly investigated and addressed, with the eventual addition of Promptbooks further enhancing the overall security posture of the organization. Furthermore, the integration of the Defender Experts Plugin with the Copilot for Security GitHub repository allows for seamless collaboration and information sharing among the greater security community.

Step-by-Step Guided Walkthrough

Getting started with the Defender Experts Security Copilot Plugin is straightforward:

1. Download the Defender Experts plugin (YAML) from GitHub.
2. Access Security Copilot.
3. In the bottom-left corner, click the Plugins icon.
4. Under Custom upload, select Upload plugin.
5. Upload the Defender Experts Plugin.
6. Click Add to finalize.
7. Find the plugin under Custom.
8. Your installation will now include specialized prompts in Defender Experts, with skills tailored for effective collaboration with Copilot for Security's capabilities.

Conclusion

The Defender Experts Plugin is a vital addition to any organization's cybersecurity arsenal.
By incorporating proactive threat hunting and leveraging the expertise of Microsoft's security analysts, this plugin helps organizations stay ahead of potential threats and maintain a robust security posture. Embrace this powerful tool and take your cybersecurity defenses to the next level. Let's get started securing your environment with Defender Experts for Hunting!

If you're interested in learning more about our Defender Experts services, visit the following resources:

Microsoft Defender Experts for XDR web page
Microsoft Defender Experts for XDR docs page
Microsoft Defender Experts for Hunting web page
Microsoft Defender Experts for Hunting docs page

From prevention to recovery: Microsoft Unified's holistic cybersecurity approach
Author: Paul Saigar

The latest Microsoft Digital Defense Report states that 80 percent of organizations have attack paths that expose critical assets. Furthermore, Microsoft has observed a 2.75x year-over-year increase in ransomware attacks among our customers. Cyber-enabled financial fraud is also rising globally. According to our report, the daily traffic volume for tech scams, a type of fraud that tricks users by impersonating legitimate services or using fake tech support and ads, has skyrocketed by 400 percent since 2022. This is a stark contrast to the 180 percent increase in malware and 30 percent increase in phishing over the same period. Microsoft is committed to helping organizations meet this growing challenge with a suite of integrated technologies and services designed to let customers operate with confidence.

Microsoft Unified services and the role of Microsoft IR (incident response)

Microsoft IR is backed by our elite Detection and Response Team (DART) and is an essential component of Microsoft's overall cybersecurity offering for customers. This team consists of highly skilled cybersecurity professionals with extensive backgrounds in threat hunting and intelligence, digital forensics, and tactical recovery, with experience in handling both proactive and reactive incident response. DART's approach is twofold: it focuses on immediate incident response and on pre-emptive measures to prevent security breaches before they occur.

Proactive measures: Microsoft IR, backed by DART, conducts comprehensive assessments of organizational security infrastructures, seeking out vulnerabilities and potential threats. By evaluating the security readiness of identity and endpoint management systems, our DART experts provide customized recommendations to enhance security measures.

Reactive strategies: In the event of a cybersecurity incident, DART's response is swift and effective.
The team engages directly with the threat, isolating affected systems to prevent further damage while conducting a thorough analysis to identify the source and nature of the attack. Recovery processes are implemented to restore integrity to the affected systems and data. Throughout the cybersecurity response, our DART experts provide continuous support and updates to ensure stakeholders are informed and prepared for necessary actions. This comprehensive approach is supported by Microsoft's vast threat intelligence, which analyzes 78 trillion security signals daily, and by state-of-the-art technologies, including proprietary tools and widely recognized solutions such as the Microsoft Defender suite and Microsoft Sentinel. The depth of expertise within DART ensures it is equipped to manage complex cyber threats efficiently, making the team a trusted and vital component of our cybersecurity offering.

Expanding Microsoft Unified's cybersecurity offering

Recognizing the critical need for rapid and robust incident management, Microsoft IR, our Cybersecurity Incident Response (CIR) service, is being offered through Microsoft Unified. This offering provides access to our global network of cybersecurity experts, who offer onsite and remote support, ensuring comprehensive coverage and swift action. Our CIR offering also integrates seamlessly with our broader Microsoft Unified framework.

Initial contact: Our Unified team serves as the first line of contact for triage and validation of suspected cybersecurity incidents, providing timely and efficient incident isolation and remediation.

Escalated response: When an incident escalates beyond initial containment, our CIR team takes comprehensive control, ensuring extensive investigation, containment, and recovery.

The suite of services that make up CIR includes prioritized response times, with DART experts available within two hours to address security incidents.
It also includes comprehensive services ranging from threat investigation, digital forensics, and malware analysis to complete recovery and remediation efforts. Organizations can also access proactive compromise assessments that delve deep into their environments to unearth vulnerabilities, potential indicators of compromise, and potential attack vectors, and that inform roadmaps to bolster their defenses. These services are complemented by regular threat intelligence briefings tailored to specific industry and geographical threats to keep organizations informed and prepared.

Engage with Microsoft Unified

Microsoft Unified provides an indispensable resource for organizations aiming to enhance their cybersecurity readiness. We integrate proactive assessments with rapid, effective incident response capabilities to equip businesses with the necessary tools and expertise to confront and mitigate cyber threats. To learn more about how Microsoft can help protect your organization from cyber threats, visit our Microsoft Unified page. To learn more about Microsoft IR (incident response), please visit the Microsoft Incident Response page.

From social engineering to rogue VMs: The emerging tradecraft in human-directed ransomware attacks
Co-authors: Ateesh Rajak, Balaji Venkatesh

Overview

What if an attacker didn't need malware, phishing kits, or exploits to break into your environment, just a convincing voice and a tool you already trust? That's exactly the play we're seeing. Ransomware operators and hands-on-keyboard intruders are skipping traditional phishing lures and going straight to the human. By impersonating IT support over phone or Microsoft Teams, they convince users to launch Microsoft Quick Assist, handing over remote access under the guise of troubleshooting. There's no payload at this point, only manipulation.

Once access is established, the attacker downloads and executes a VBScript that launches a QEMU-based rogue virtual machine on the target system. This VM provides an isolated, persistent environment where the attacker can perform internal reconnaissance, collect credentials, move laterally, and lay the groundwork for ransomware deployment, all while staying outside the visibility of host-based security tools.

These aren't opportunistic intrusions. This is calculated tradecraft: a multi-stage operation that begins with trust, escalates with virtualization-based stealth, and often culminates in data exfiltration, lateral movement, or ransomware deployment. The real risk? Attackers are no longer just bypassing defenses; they're building infrastructure within enterprise environments. Read this blog to learn about this emerging attack technique as well as how Defender Experts can help protect your organization.

Attack Flow: Social Engineering Meets Hypervisor Abuse

This attack chain combines psychological manipulation with technical evasion, enabling attackers to quietly establish footholds in victim environments. Recent incidents observed by Defender Experts highlight the use of this tradecraft against organizations in the pharmaceutical and consumer goods sectors.
Stage One: Distraction and Deception

The intrusion begins with an email bombing campaign, flooding the target's inbox with hundreds of nuisance messages. Shortly afterward, the user receives a Microsoft Teams message or PSTN call from someone impersonating IT support: "We noticed issues with your mailbox. Let me help you fix it." The victim is guided to launch Microsoft Quick Assist, granting the attacker remote access to the device without raising suspicion.

Stage Two: Remote Execution and Rogue VM Deployment

With remote access established, the attacker performs initial reconnaissance to enumerate host, network, and domain details. They then download and execute a VBScript, often hosted on cloud storage platforms such as Google Drive, which spins up a QEMU-based virtual machine on the endpoint. This VM becomes an isolated operational enclave, fully controlled by the attacker and invisible to traditional EDR and host-based telemetry.

Note: Defender Experts have observed attackers leveraging QEMU's flexible command-line options to evade detection. By frequently changing parameters like RAM size and network setup (e.g., -netdev/-device vs. -nic), and by using configuration files instead of inline arguments, attackers bypass static detection rules based on command signatures.

Stage Three: Persistence and Expansion

Within the rogue VM, the attacker performs the following actions:

Executes internal network scans
Establishes command-and-control (C2) communication through the VM's virtual NIC
Initiates lateral movement
Stores payloads and tools within disk images (.qcow2, .iso, .img) to maintain persistence

Because all post-compromise activity takes place within the guest VM, most host monitoring solutions are unable to observe these behaviors, allowing attackers to operate undetected.

Why This Technique Matters

The use of rogue virtual machines in active intrusions represents a significant evolution in attacker tradecraft.
This method enables:

Host-level evasion: Traditional endpoint agents cannot monitor activities inside virtual machines, reducing detection coverage.
Persistent access: VMs can survive reboots and maintain remote shell capabilities.
Stealth infrastructure: Malicious traffic originating from within the VM often blends into normal network activity.
Reduced forensic artifacts: Since activity is isolated to the guest OS, forensic artifacts on the host are minimal, making incident reconstruction difficult.

Organizations lacking behavioral monitoring and layered defense strategies may miss early indicators of compromise until after significant impact.

How Defender Experts Adds Defense-in-Depth Value

Defender Experts goes beyond Defender detections to surface rogue VM-based intrusions, especially when attackers rely on trusted tools and human manipulation instead of malware. Defender Experts bridges this gap by delivering expert-led detection and response at every critical phase of the intrusion:

Teams Phishing Detection: Defender Experts monitors for suspicious Microsoft Teams messages sent from anomalous or newly created identities, flagging potential social engineering activity early.
Quick Assist Misuse Monitoring: When a Teams phishing message leads to remote access via Quick Assist, Defender Experts identifies and correlates this as part of an active intrusion, even in the absence of malware.
QEMU Execution Detection: Defender Experts hunting queries spotlight scripted QEMU launches, detecting virtual machine deployment before lateral movement begins.
AnyDesk and Persistence Tooling: Defender Experts observes signs of persistence via unauthorized tools like AnyDesk and correlates these with pre-compromise behavior.

By connecting these discrete signals (Teams phishing, Quick Assist abuse, QEMU execution, and persistence setup), Defender Experts offers a unified picture of emerging tradecraft.
Customers benefit from:

Early human-led detection before ransomware or data exfiltration occurs
Tailored hunting queries and response guidance mapped to real-world threats

Defender Experts doesn't just detect individual behaviors; it maps the entire intrusion kill chain and guides customers through containment and recovery.

Detection Guidance

Although visibility is limited inside the rogue VM, defenders can detect the setup process. The following advanced hunting query can help identify suspicious VM launches initiated via scripting engines:

DeviceProcessEvents
| where InitiatingProcessFileName in~ ("powershell.exe", "wscript.exe", "cscript.exe")
| where ProcessVersionInfoInternalFileName has "qemu" and ProcessCommandLine !has "qemu" // Renamed execution of the QEMU emulator

This query focuses on renamed, script-launched invocations of QEMU: the binary's internal file name still reads "qemu" while the command line does not, a sign of programmatic VM deployment via Windows scripting engines.

Recommendations

To reduce exposure to this emerging technique, Defender Experts recommends the following actions:

User awareness training: Educate employees on recognizing vishing and social engineering tactics.
Disable or control remote access tools: Block or uninstall Microsoft Quick Assist if unused. Organizations using Microsoft Intune can adopt Remote Help, which offers enhanced security and authentication controls.
Enable behavioral network monitoring: Unusual internal scan activity or unexpected outbound traffic may signal VM-based operations.
Proactively hunt for rogue VM activity:
  - Use the hunting query above to identify scripted QEMU executions
  - Isolate affected hosts to prevent further C2 or lateral movement
  - Remove VBScript files, QEMU executables, and disk images (.qcow2, .img, .iso)
  - Rebuild compromised systems using trusted images and rotate credentials
Submit samples to Microsoft for analysis: Upload suspicious scripts and binaries to the Microsoft Defender Security Intelligence (WDSI) portal for deep inspection.
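The renamed-binary heuristic behind the hunting query above can also be prototyped over exported process events. The Python sketch below is illustrative only; the dictionary keys mirror the DeviceProcessEvents columns used in the query, and the sample rows are invented:

```python
SCRIPT_ENGINES = {"powershell.exe", "wscript.exe", "cscript.exe"}

def flag_renamed_qemu(events):
    """Mirror of the KQL hunting query: flag process events launched by a
    scripting engine where the binary's internal file name contains 'qemu'
    but the command line does not (i.e., the emulator was renamed)."""
    hits = []
    for e in events:
        if (e["InitiatingProcessFileName"].lower() in SCRIPT_ENGINES
                and "qemu" in e["ProcessVersionInfoInternalFileName"].lower()
                and "qemu" not in e["ProcessCommandLine"].lower()):
            hits.append(e)
    return hits

events = [
    # Renamed QEMU launched from a VBScript host: should be flagged
    {"InitiatingProcessFileName": "wscript.exe",
     "ProcessVersionInfoInternalFileName": "qemu-system-x86_64",
     "ProcessCommandLine": "updater.exe -m 2048 -nic user"},
    # QEMU launched interactively under its own name: not flagged
    {"InitiatingProcessFileName": "explorer.exe",
     "ProcessVersionInfoInternalFileName": "qemu-system-x86_64",
     "ProcessCommandLine": "qemu-system-x86_64.exe -m 2048"},
]
print(len(flag_renamed_qemu(events)))  # 1
```

As with the KQL version, the internal file name check is what survives renaming, since that metadata is embedded in the binary rather than chosen by the attacker at launch time.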
Conclusion

This technique represents more than just a clever evasion strategy; it marks a significant shift in adversary tradecraft. Attackers are no longer solely focused on bypassing antivirus or executing malware payloads. Instead, they are building persistent infrastructure within enterprise environments by abusing trusted tools and user workflows. By combining social engineering with virtualization-based stealth, these intrusions enable threat actors to extend dwell time, reduce their detection surface, and operate below the radar of traditional response mechanisms. This activity underscores the importance of behavioral monitoring, layered defenses, and user awareness. What appears to be a routine IT interaction may, in reality, be the entry point for a full-fledged rogue virtual machine, and a persistent threat operating in plain sight.

To learn more about how our human-led managed security services can help you stay ahead of similar emerging threats, please visit Microsoft Defender Experts for XDR, our managed extended detection and response (MXDR) service, and Microsoft Defender Experts for Hunting (included in Defender Experts for XDR), our managed threat hunting service.