Microsoft Incident Response (IR)

Microsoft Defender Experts - S.T.A.R. Series
Co-author: Samantha Gardener

To stay ahead of today's sophisticated cyber threats, organizations must embrace a proactive defense strategy built on three pillars: emerging trends, adaptive strategies, and actionable insights. Threat actors are increasingly leveraging AI-driven attacks, supply chain compromises, and identity-based exploits. Modern strategies focus on zero trust principles, continuous threat hunting, and advanced threat intelligence to predict and neutralize risks before they escalate. By integrating real-time analytics, automated response capabilities, and cross-platform visibility, security teams can transform insights into decisive action, helping ensure resilience against evolving attack vectors and safeguarding critical assets in an ever-changing landscape.

Our popular S.T.A.R. webinar series features panels of our experts who discuss trends, strategies, and insights that will help you defend against today's sophisticated threats.

Gain Expert Insights: Learn from Microsoft Defender Experts who share their knowledge on the latest threats and trends in cybersecurity.
Bolster Your Security Program: Receive actionable guidance and strategies to effectively combat emerging threats and strengthen defenses.
Meet the Experts: Get to know the Defender Experts and understand their roles in safeguarding organizations.

For additional insights, some episodes are accompanied by informative blogs that even include real-world threat hunting patterns.

Microsoft Defender Experts - S.T.A.R. series episodes

Episode 1 - November 2024
Crafting Chaos: The Amplified Tactics of Social Engineering - Hunt, Halt, and Evict

Description
Explore amplified tactics of social engineering with our Defender Experts. We cover Quick Assist email spam floods, RMM tool abuse, and the ClickFix PowerShell copy/paste technique.
We highlight how attackers leverage legitimate services like SharePoint, Dropbox, and Google Drive for phishing campaigns.

Key Topics:
Quick Assist Email Spam Flood: Abusing Quick Assist to gain initial access and deploy ransomware.
RMM Tools: Increased abuse of RMM tools for delivering trojans or infostealers.
ClickFix PowerShell Copy/Paste: Users tricked into copying and pasting malicious code.
Abuse of File Hosting Platforms: Using legitimate services for phishing campaigns.
Advanced Hunting Queries: KQL queries for detecting suspicious activities.

Video Link
Episode 1 - Crafting Chaos: The Amplified Tactics of Social Engineering - Hunt, Halt, and Evict

Episode 2 - February 2025
Rise of Infostealers, ClickFix, and More

Description
Delve into the latest threat landscape, featuring notorious actors like Hazel Sandstorm, Sangria Tempest, and Midnight Blizzard. Understand the insidious ClickFix technique, a social engineering marvel that exploits users' natural tendency to click prompts and buttons. Learn more about the growing trend of renamed binaries and how adversaries are using them to evade detection.

Key Topics:
Infostealers Unveiled: Functions and examples of infostealers like LummaStealer, DarkGate, and DanaBot.
ClickFix Technique: Combining phishing, malvertising, and malicious scripting.
Identity Compromise: Techniques like AiTM, BiTM, and BiTB attacks.
Advanced Hunting Queries: KQL queries for detecting suspicious activities.

Video Link
Episode 2 - Rise of Infostealers, ClickFix, and More

Episode 3 - June 2025
The Case Against ClickFix

Description
Deep dive into the ClickFix technique, a rising social engineering threat that manipulates users into executing malicious scripts through fake prompts like CAPTCHA verifications.
Key Topics:
How adversaries are leveraging ClickFix to deploy infostealers, remote access tools, and loaders, while evading detection through renamed binaries and obfuscated scripting.
Technique: ClickFix combines phishing, malvertising, and drive-by compromises with fake CAPTCHA overlays. Users are tricked into copying and executing malicious commands via the Windows Run dialog.
Compromise: ClickFix mimics identity compromise tactics by hijacking user trust, using spoofed interfaces, clipboard hijacking, and executing obfuscated scripts via LOLBins like PowerShell, mshta, and rundll32.
Advanced Hunting Queries (AHQs): Suspicious RunMRU registry entries; use of LOLBins and obfuscated PowerShell commands; indicators such as shortened URLs, fake CAPTCHA text, and encoded payloads.

Video Link
Episode 3 - The Case Against ClickFix

Episode 4 - Aug 2025
Post-Breach Browsers: The Hidden Threat You're Overlooking

Description
Modern browsers aren't just attack entry points; they're post-breach goldmines. In this episode, Microsoft Defender Experts are joined by JBO, the architect behind cross-platform research at Microsoft Defender and a leading voice in offensive security, exploitation, and vulnerability research.

Key Topics:
Post-Breach Tradecraft: How adversaries weaponize browser memory, debugging ports, and extensions to maintain access and evade detection.
Detection That Cuts Through the Noise: Spot stealthy abuse such as anomalous COM calls, rogue child processes, and TLS key leaks.
Expert-Led Defense: JBO and the Defender Experts team bring real-world insights from the frontlines, including techniques used to uncover and mitigate browser-based threats across Windows, macOS, and Linux.

If you think browser security ends at patching, think again. This episode is your essential guide to defending against the post-breach browser threatscape.
Video Link
Episode 4 - Post-Breach Browsers: The Hidden Threat You're Overlooking

Learn more – read the blog
Post-breach browser abuse: a new frontier for threat actors | Microsoft Community Hub
Modern browsers are among the most complex and trusted applications on any endpoint. While they are often discussed in the context of initial access (through phishing, drive-by downloads, or zero-day exploits), this post focuses on a less explored but increasingly relevant threat vector: post-breach browser abuse.

Episode 5 – October 2025
TCC You Later: Spotlights Metadata Mischief in macOS

Description
Threat actors are exploiting overlooked macOS features. Join our experts as they discuss trends, strategies, and insights that will help you defend against this new attack vector.

Key Topics:
Understand how AI features and Spotlight indexing expose sensitive metadata, while weaknesses in TCC controls increase exploitation potential.
Learn how unsigned Spotlight plugins can bypass privacy safeguards, granting access to confidential files and Apple Intelligence data.
Defend better by strengthening detection for anomalous Spotlight activity, enforcing patching, and managing updates through Intune for proactive defense.

Video Link
Episode 5 - TCC You Later: Spotlights Metadata Mischief in macOS

Episode 6 – February 2025
Shai Hulud 2.0 - Breaking the Supply Chain Chaos Engine

Description
Together with our MISA partner, Ontinue, we unlock supply-chain attacks and drill into campaigns like "Shai Hulud." Learn how attackers abuse trust and developer workflows, why detection is challenging, and gain practical insight on using Microsoft Defender to strengthen CI/CD and supply-chain security.

Key Topics:
Evolution of Software Supply-Chain Attacks: NotPetya, CCleaner, ASUS ShadowHammer, SolarWinds, 3CX, to XZ Utils.
NPM Ecosystem Risks & Abuse: Why attackers target Node Package Manager (NPM).
Breaking Down 'Shai Hulud': Attack flow and detection, scripts, lifecycle hooks, automated propagation.
Why Detection Is Hard: Trust abuse vs. exploit abuse.
Defend Better: Hunting queries, GitHub security, Defender tools.
Locking Down Your Supply Chain: CI/CD hardening, credential rotation, SBOM-based scanning, agentless code scanning, optimizing Defender.

Video Link
Episode 6 - Shai Hulud 2.0: Breaking the Supply Chain Chaos Engine

Learn more – read the blog
The invisible attack surface: hunting AI threats in Defender XDR | Microsoft Community Hub
As organizations embed AI across their business, the same technology that drives productivity also introduces a new class of risk: prompts that can be manipulated, data that can be leaked, and AI systems that can be tricked into doing things they shouldn't. Attackers are already testing these boundaries, and defenders need visibility into how AI is being used - not just where it's deployed.

Follow us for more S.T.A.R. episodes - Microsoft Defender Experts for XDR | Microsoft Security

Cloud forensics: Forensic readiness and incident response in Azure Virtual Desktop
Co-authors: Dan Weinstock and Christoph Dreymann

Azure Virtual Desktop (AVD) has rapidly become a core tool for enabling remote work at scale. Consequently, it's also emerging as a target for threat actors. Recent Microsoft Incident Response engagements show that threat actors are exploiting AVD deployments for lateral movement and persistence. By hijacking legitimate AVD user accounts, they gain what is essentially a trusted "endpoint" inside the network without having to install malware.

Why does this matter to incident responders? AVD intrusions can be stealthy and fast-moving, and responders must be prepared to detect and investigate suspicious AVD usage quickly. AVD's architecture is not the same as a typical endpoint or server, so traditional forensic approaches alone may fall short. This blog post focuses on how to build forensic readiness in AVD and outlines investigation strategies to handle AVD-related incidents. We'll cover the unique forensic challenges AVD presents, methods to collect critical data, and best practices that allow incident responders to approach AVD investigations with confidence.

Real-World Threat Actor Behaviors in AVD
To put theory into context, consider some real threat actor behaviors that Microsoft Incident Response – the Detection and Response Team (DART) – has observed in AVD environments:
Threat actors used stolen helpdesk credentials with AVD access to log in from a foreign IP address.
In another case, threat actors compromised identities and leveraged their AVD sessions to pivot into on-premises resources, using the AVD virtual machine (VM) as a stepping stone to RDP into other machines for lateral movement.
Another threat actor accessed the browser in AVD to search internal SharePoint sites for sensitive data and intellectual property.

Once threat actors hijack a legitimate, often poorly monitored AVD session, they gain a privileged execution environment without needing to deploy traditional malware.
This enables threat actors to perform identity discovery, pivot across cloud/on-premises boundaries, exfiltrate data, and stage ransomware operations, underscoring the necessity for robust logging, monitoring, and least-privilege configurations in virtual desktop environments.

Forensic Challenges Unique to AVD
Incident responders should build forensic readiness by combining cloud-native and traditional methods to investigate, enabling core logs for visibility, and understanding how profiles are distributed across VMs and remote storage.

AVD from the Lens of a Threat Hunter
An understanding of AVD architecture is essential for detection and response. AVD's distinctive management of user sessions, profiles, and application delivery presents specific benefits and complexities when monitoring adversary activity. The system has the following fundamental components:

Session Hosts – AVD's multi-session architecture allows multiple users (or threat actors) to share a single underlying VM. Because these hosts are often ephemeral – frequently spun up and deallocated – offline evidence such as event logs or memory artifacts can be lost quickly without proactive collection. When FSLogix is in use, user profiles are stored as VHDs in remote storage rather than on the session host; browsing history, downloads, registry hives (e.g., NTUSER.DAT), MRU lists, and startup files live in these offloaded containers.

Workspaces, Application Groups & Host Pools – Workspaces serve as the top-level container, grouping resources and providing users with a single entry point to their virtual desktops and applications. Application Groups determine which apps or desktops are published and define who can access them, shaping both user experience and permissions. Host Pools, on the other hand, are collections of session hosts – virtual machines that supply the compute power needed for user sessions.
Azure Storage Accounts & FSLogix VHDs – because FSLogix stores each user's profile as a VHD or VHDX file on Azure Files or a similar storage solution, investigators must know how to identify and export these VHDs and analyze them using tools like Autopsy or Eric Zimmerman's utilities to recover user-centric artifacts (e.g., browser history, downloads, NTUSER.DAT registry hives). This process is crucial when FSLogix is enabled, as no user data remains on the AVD host after logout.

The following illustration demonstrates the relationship between the key components discussed above. Further information can be found within Azure Virtual Desktop for the enterprise - Azure Architecture Center. This clarifies that AVD is not simply "another VM"; rather, it is an Infrastructure-as-a-Service (IaaS) offering with a corresponding shared responsibility model, in which customers are accountable for identity management and the deployment of individual Azure resources.

Logging Considerations
Many actions, such as user session brokering and application publishing, are logged not on the VM but in Azure – if logging is enabled. By default, Azure Monitor diagnostic logs for AVD are not turned on. If an organization hasn't enabled these, responders may lack crucial logs showing when sessions started and ended, which applications were opened, or from what client IP address. Entra ID handles AVD authentication, and therefore Entra ID sign-in logs will provide further details and should be retained appropriately as well. Forensic implication: misconfigured or missing diagnostic settings can hide evidence.

Log Sources
The following sources provide comprehensive insights in the event of an incident (artifact/log source – location and access method):

Host forensic artifacts – on the AVD session host VM.
AVD diagnostic logs (WVD* tables in Log Analytics) – Azure Log Analytics workspace (if AVD diagnostic settings were enabled for host pools, workspaces, and application groups). Access via Kusto queries in the Logs blade or via Microsoft Sentinel. Key tables: WVDFeeds (shows what published applications were visible, along with ClientIP and ClientType), WVDConnections (shows the client IP address, ClientType, ConnectionType, ResourceAlias, and which host the user connected to), and WVDCheckpoints (shows which published app was used and the path for execution of the published app).
Entra ID sign-in logs – Entra ID (Azure AD) tenant logs. Access via Azure Portal > Entra ID > Sign-in logs, or query the SigninLogs table in Sentinel/Log Analytics.
Endpoint detection and response (EDR) telemetry, for example Microsoft Defender for Endpoint (MDE) – for MDE specifically, use the Microsoft 365 Defender portal (Devices section) or Advanced Hunting queries (DeviceProcessEvents, DeviceNetworkEvents). This requires the AVD session host VMs to be onboarded as monitored devices in MDE.*
Network flow logs (NSG and firewall logs) – Azure Network Watcher's NSG flow logs (if enabled on the VM's subnet/NIC), stored in a storage account or Log Analytics, and/or Azure Firewall logs if an Azure Firewall is in the path. These may also be surfaced via Microsoft Sentinel.
FSLogix user profile container (VHD/VHDX) – stored on Azure Storage (Azure Files share or NetApp Files). Access by locating the user's profile .vhd(x) in the storage account and downloading it (e.g., via Azure Storage Explorer or by generating an SAS URL).

* Onboard non-persistent virtual desktop infrastructure (VDI) devices - Microsoft Defender for Endpoint | Microsoft Learn

Enabling Diagnostic Logs
Enabling diagnostic logs on AVD Azure resources offers valuable insight into session activity and remote application usage for detecting anomalies, tracing threat actor activity, and supporting forensic investigations. For threat hunting, the most relevant log tables include:
WVDFeeds – shows which published applications were available, including ClientIP and ClientType details.
WVDConnections – lists information such as the client IP address, ClientType, ConnectionType, ResourceAlias, and host connection data.
WVDCheckpoints – indicates which published application was accessed and supplies its execution path.

You can enable these logs via the diagnostic settings for the Host Pool, Application Group, and Workspace Azure resources. If an incident occurs, these logs help determine exactly what a threat actor interacted with, which is vital for assessing whether additional logs should be collected.

Forensic Data Collection Strategies
When an incident is suspected in an AVD environment, investigators need a plan for acquiring and preserving evidence:

Live vs. Offline Collection – disk and memory artifacts can be captured either from a running system or from an offline snapshot. Live collection can recover volatile data like memory (RAM) but might miss user profile content when FSLogix is in use. In such cases, focus on offline acquisition by generating a snapshot of the relevant disk or file share and downloading the associated VHD. For live or offline captures, you can leverage widely used forensic tools such as Velociraptor (for remote collection across endpoints) or disk forensics tools like Autopsy to extract and analyze forensic data.

Disk Snapshots – to acquire disks for offline collection and analysis, create a snapshot of the AVD host VM disk and export it to your designated collection and analysis environment. The high-level steps are captured here.

Storage Account VHD Extraction – if FSLogix is enabled, user profiles are stored as VHD containers in Azure Storage. To review user data such as browser history, downloads, NTUSER.DAT, MRU lists, startup folders, and other profile artifacts, download the VHD that matches the username within the Azure Storage account. More about storage account forensics can be found within Cloud forensics: Why enabling Microsoft Azure Storage Account logs matters.
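As a hedged sketch of this last step: if Azure Files diagnostic logging is flowing into Log Analytics, a query along these lines can show who touched a given profile container before you export it. The "<<UserName>>" value is a placeholder for the profile folder or VHD name of interest, and the exact OperationName values depend on whether access was via SMB or REST.

```kusto
// Illustrative only: assumes StorageFileLogs is enabled on the storage account
// hosting the FSLogix profile share and routed to this workspace.
StorageFileLogs
| where TimeGenerated >= ago(30d)
| where ObjectKey has "<<UserName>>"   // placeholder: profile folder or .vhd(x) name
| project TimeGenerated, OperationName, ObjectKey, CallerIpAddress, UserAgentHeader
| sort by TimeGenerated asc
```

Unexpected caller IP addresses or user agents against a profile VHD can indicate that someone other than the legitimate user, or the AVD session host itself, accessed the container.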
Artifact Analysis: What to Look for on a Host
Once the user's profile VHD and the VM host disk snapshot are acquired, analysts should focus on:

Browser Data – cached web history, downloads, and session cookies can reveal malicious downloads or credential collection attempts. Threat actors frequently exploit post-breach browser access to scrape credentials, hijack sessions, or implant malicious extensions.
User Registry Hives – files like NTUSER.DAT contain valuable user-specific configuration settings and MRU (Most Recently Used) lists that can reveal executed commands or opened documents.
Startup Items and Temporary Directories – malicious payloads or scripts often persist by storing components in startup folders or system temp directories. These may be visible only within the user VHD if FSLogix is in use.
Jump Lists and Shell Bags – these artifacts cache file access and folder navigation actions, offering timelines and insights into threat actor activities.

Threat Hunting in AVD Environments
Threat hunting in AVD environments spans three critical dimensions: identity, the Azure platform, and host/endpoint artifacts. Each layer provides unique signals and telemetry that, when correlated, help uncover malicious activity and reduce dwell time.

Identity – investigate authentication patterns, privilege escalations, and anomalous sign-ins across Entra ID and related identity providers.
Azure – examine control-plane operations, resource configurations, and subscription-level activities for indicators of compromise or persistence mechanisms.
Host/Endpoint Artifacts – analyze session hosts for forensic traces such as browser history, file events, and registry changes that may reveal lateral movement or payload deployment.

By combining insights from these components, defenders can build a threat-hunting strategy tailored to AVD's distributed architecture.
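One way to correlate the identity and Azure layers is to join Entra ID sign-ins against AVD connection events on user and time. The sketch below assumes both SigninLogs and WVDConnections land in the same Log Analytics workspace; the 10-minute window and the exact column matching are illustrative and should be tuned to your environment.

```kusto
// Hedged sketch: pair AVD sign-ins with the session host each one landed on.
SigninLogs
| where TimeGenerated >= ago(7d)
| where AppDisplayName has "Azure Virtual Desktop"
| project SigninTime = TimeGenerated, UserPrincipalName = tolower(UserPrincipalName), IPAddress, ResultType
| join kind=inner (
    WVDConnections
    | where TimeGenerated >= ago(7d)
    | where State == "Connected"
    | project ConnTime = TimeGenerated, UserName = tolower(UserName), SessionHostName, ClientSideIPAddress
) on $left.UserPrincipalName == $right.UserName
| where abs(datetime_diff("minute", SigninTime, ConnTime)) <= 10  // illustrative window
| project SigninTime, ConnTime, UserPrincipalName, IPAddress, ClientSideIPAddress, SessionHostName
```

Mismatches between the sign-in IP and the AVD client-side IP, or sign-ins that pair with an unexpected session host, are the kind of cross-layer anomaly this correlation is meant to surface.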
Core Hunting Objectives
When hunting in AVD environments, start with three fundamental questions:

Where did the threat actor log in from? – identify unusual or foreign IP addresses and devices accessing your host pools.
What did they do once inside? – track actions within the session host, such as which applications were launched, commands executed, or files accessed.
Where did they go next? – look for lateral movement to other resources (cloud or on-premises) and evidence of persistence (e.g., credential artifacts, backdoor accounts).

These questions form the basis for iterative, hypothesis-driven hunts, allowing you to "follow the thread" and adapt as you uncover new evidence – be it new accounts, unexpected machines, or suspicious process trees. The following diagram illustrates a standard hunting process from the initial event through hunting across the entire environment and cross-domain correlation.

Today's security tools can make this process easier. Microsoft Sentinel combines Entra ID logs, AVD diagnostics, and MDE events in one place, so you can use KQL queries and workbooks to link related data, such as joining SigninLogs and WVDConnections on the same user and timestamp. With MDE's Advanced Hunting feature, you can query across endpoints; for example, you might review all AVD host devices for certain indicators or activity patterns, such as listing processes that ran during a suspected breach. The aim is to build a complete picture of the incident.

Sample Hunting Queries
Below are example Sentinel queries tailored to the core hunting objectives. Adapt them to your environment and replace dummy values with real user names, host identifiers, or IP ranges.

Where did the threat actor log in from?
Hunt for the suspicious user in the Entra sign-in logs:

SigninLogs
| where TimeGenerated >= ago(7d) //update to reflect specific time period of interest
| where UserPrincipalName has "<<Suspicious User>>"
| where AppDisplayName has "Azure Virtual Desktop"
| project-reorder TimeGenerated, UserPrincipalName, AppDisplayName, ResultType, ClientAppUsed, DeviceDetail, IPAddress, UserAgent
| summarize FirstDate=min(TimeGenerated), LastDate=max(TimeGenerated), count() by UserPrincipalName, AppDisplayName, ClientAppUsed, IPAddress, UserAgent

Sign-in events to AVD always use the same Entra applications; both the application name and ID can be hunted on.

What did they do once inside?
To investigate a potential security incident within AVD, it is important to review the user activity within the diagnostic logs. Next, we look into the diagnostic log tables WVDFeeds, WVDConnections, and WVDCheckpoints to get a better understanding of what has happened:

WVDFeeds
Displays the published applications that are accessible, as well as the ClientIP and ClientType information.

WVDFeeds
| where TimeGenerated >= ago(7d) //update to reflect specific time period of interest
| where UserName has "<<UserName>>"
//splitting the IP as the ClientIP is shown with port, i.e. 1.1.1.1:54282
| extend ClientIP = tostring(split(ClientSideIPAddress, ":")[0])
| summarize FirstDate = min(TimeGenerated), LastDate = max(TimeGenerated), count() by UserName, ClientOS, ClientType, ClientIP

WVDConnections
Displays the client IP address, client type, connection type (Desktop or App), resource alias, and the host (from the host pool) to which the client connected.
WVDConnections
| where TimeGenerated >= ago(7d) //update to reflect specific time period of interest
| where UserName has "<<UserName>>"
| project-reorder TimeGenerated, UserName, ClientSideIPAddress, ClientOS, ConnectionType, SessionHostName, SessionHostPoolType, State
| sort by TimeGenerated asc

WVDCheckpoints
Determines which published application was utilized. It also shows the operation conducted when a session was started.

WVDCheckpoints
| where TimeGenerated >= ago(7d) //update to reflect specific time period of interest
| extend ConnectionStage = tostring(Parameters.connectionStage)
| extend AppFileName = tostring(Parameters.filename) //only populated for a published app, not a desktop session
| extend Operation = Name
| project-reorder TimeGenerated, UserName, ActivityType, Source, Operation, AppFileName, ConnectionStage

Advanced Hunting in Microsoft Defender for Endpoint
Now that the AVD host has been identified, you can dive deeper by focusing on EDR and host forensic logs. If MDE Advanced Hunting is enabled, the following tables often help in understanding what happened on a system:
DeviceNetworkEvents
DeviceProcessEvents
DeviceRegistryEvents

Further tables within MDE and the Device Timeline help to build a full picture of what happened.

Lateral Movement
After gaining access to an AVD session host, threat actors often attempt lateral movement to expand their foothold or establish persistence. This can involve pivoting to other session hosts, Azure resources, or even on-premises systems.
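One common pattern, an RDP pivot from the session host, can be sketched as an Advanced Hunting query. This is illustrative only: "<<SessionHostName>>" is a placeholder for the host identified earlier, and port 3389 covers classic RDP but not every lateral movement technique.

```kusto
// Hedged sketch: outbound RDP connections initiated from a suspect AVD session host.
DeviceNetworkEvents
| where Timestamp >= ago(7d) //update to reflect specific time period of interest
| where DeviceName startswith "<<SessionHostName>>"   // placeholder host name
| where RemotePort == 3389 and ActionType == "ConnectionSuccess"
| project Timestamp, DeviceName, InitiatingProcessAccountName, InitiatingProcessFileName, RemoteIP, RemotePort
| sort by Timestamp asc
```

Hits initiated by unexpected accounts or by processes other than the usual remoting clients warrant pivoting onto the destination machines.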
Focus on the following tables/log sources:
Firewall logs (on-premises / cloud)
DeviceNetworkEvents (cloud: Defender)
DeviceProcessEvents (cloud: Defender)
DeviceRegistryEvents (cloud: Defender)
NSG logs (diagnostic logs)
Windows event logs (collected host logs)

Best Practices for AVD Forensic Readiness
The following measures enhance forensic processes for Azure Virtual Desktop:

Implement an endpoint detection and response product – deploy an endpoint detection and response (EDR) solution to provide advanced detection and response capabilities on AVD VMs.
Enable and export comprehensive logging – activate diagnostic logging for all host pools, workspaces, and application groups. Ensure these logs are exported to a Log Analytics workspace or a security information and event management (SIEM) platform such as Microsoft Sentinel.
Develop forensic workflows for AVD – establish and document clear forensic workflows for AVD incident response. This should cover the entire process, from detecting suspicious activity, through live or offline evidence acquisition, to verifying FSLogix profile stores, exporting VHDs, and analyzing collected artifacts.

Security Recommendations for Azure Virtual Desktop
When implementing AVD, it is vital to follow recommended security practices. These strategies complement the best practices for forensic readiness and ensure that both proactive and reactive security measures are in place.

Review Microsoft AVD security guidance – utilize the official Microsoft documentation for Azure Virtual Desktop security to guide your deployment and operational security policies. The resource provides up-to-date recommendations on securing your AVD environment.
Enable MFA for all users – implement multi-factor authentication (MFA) for every user accessing Azure Virtual Desktop to add a layer of security beyond passwords alone. Where possible, implement phishing-resistant MFA for maximum protection.
If full adoption isn't practical, prioritize high-risk accounts and critical roles first.
Enable Conditional Access – utilize Conditional Access policies to proactively manage and mitigate risks before granting access to your AVD environment. This helps ensure only legitimate users and trusted devices can connect to your resources. Regularly review Conditional Access policies for exceptions and minimize them wherever possible. Implement detection and alerting for any changes to Conditional Access policy configurations to maintain a strong security posture.

More information can be found in the following article from Microsoft - Security recommendations for Azure Virtual Desktop - Azure Virtual Desktop | Microsoft Learn

Conclusion
As the threat landscape evolves, cloud-hosted virtual desktops become both enablers of productivity and channels for threat actor persistence. By applying robust logging configurations, understanding the unique forensic challenges of FSLogix and AVD, and enhancing your threat hunting capabilities, you can achieve higher levels of forensic readiness. This preparedness not only accelerates investigations when incidents occur but also strengthens the overall security posture of your organization. Proactive planning today is essential to respond to tomorrow's threats with confidence and clarity.

Microsoft Incident Response works hand-in-hand with insurers, brokers, and law firms
During a cyberattack, speed and coordination can make all the difference. It's not just about technical expertise; it's about having the right people working together when every second matters. Successful incident response today often means bringing together technical responders, insurers, insurance brokers, and legal experts into a single, focused team. Microsoft Incident Response (IR) fits directly into that framework, delivering containment and recovery services while keeping cyber insurance and legal needs in mind.

For decades, Microsoft Incident Response has been on the front lines of some of the world's most complex and high-profile cyber incidents. With direct access to Microsoft's engineering resources and the support of Microsoft Threat Intelligence, Microsoft IR offers unmatched capabilities to organizations navigating a cyber crisis.

Comprehensive and coordinated incident response
Major insurance carriers like Chubb and Beazley recognize Microsoft IR as an approved incident response vendor. That means when customers file claims, reactive incident response services are often reimbursable - an important reassurance for organizations already facing the financial strain of a costly cyber event. That standing is the result of years of collaboration and consistently strong outcomes for joint customers, making it easier for organizations to move fast when it counts.

Insurance brokers play a key role too. Firms like Marsh, Lockton, and Gallagher work closely with customers to include IR services in new and existing policies. Their familiarity with our offerings helps ensure IR can be activated quickly, without unexpected policy hurdles slowing down the response.

Legal partners are another crucial piece of the puzzle. Many cyber insurance policies come with pre-appointed law firms to help organizations meet legal compliance obligations and mitigate the risk of lawsuits and regulatory enforcement actions.
They also help to protect the legal posture of organizations that are managing cybersecurity incidents. Firms such as Mullen Coughlin, Constangy Brooks Smith & Prophete, and Buchanan Ingersoll & Rooney have extensive experience partnering with Microsoft IR. Microsoft also partners with a broader network of legal firms beyond formal insurance panels. Firms like Hunton Andrews Kurth, Goodwin, Mayer Brown, and Morrison Foerster, among others, have collaborated with Microsoft to help clients respond swiftly and strategically. Microsoft teams are well-versed in aligning with legal counsel to support regulatory compliance and effective response strategies.

By working hand-in-hand with insurers, brokers, and law firms, Microsoft Incident Response helps organizations investigate, contain, and recover from cyber incidents with speed and precision. This coordinated approach doesn't just minimize risk - it also reduces downtime and streamlines the overall recovery process. When the right players are already in place, organizations don't have to waste precious time figuring out the next steps. They know who to call, how to act, and what comes next. Whether customers bring Microsoft IR into their incident response plan directly or through their insurance provider, the goal remains the same: a smooth, confident path to recovery.

Learn more
To learn more, see the datasheet to explore how Microsoft Incident Response teams with industry-recognized law firms and leading insurance providers and brokers to help ensure organizations have a streamlined and comprehensive response to cyber incidents.

Microsoft Defender Experts Disrupt Jasper Sleet's Insider Access Campaign
By: Mukta Agarwal and Parth Jamodkar

Threat actors are increasingly infiltrating organizations by securing legitimate jobs, often through falsified credentials or insider recruitment. Recently, Microsoft Defender Experts, powered by Microsoft Threat Intelligence, successfully thwarted a sophisticated campaign by Jasper Sleet (formerly Storm-0287), a North Korean state-sponsored threat actor known for stealthy infiltration tactics. Rather than compromising victims directly, these actors pose as job applicants or contractors, infiltrating organizations to gain long-term insider access under false identities.

Organizations in the information technology segment throughout the United States have been the primary targets for Jasper Sleet. However, as this threat actor has grown more sophisticated and expanded their reach, other industries including consumer retail, healthcare, financial services, critical manufacturing, and energy across different regions have also become targets.

The challenge

Jasper Sleet leveraged social engineering and identity fraud to bypass traditional security controls. By impersonating remote IT contractors, the actors blend into legitimate workflows, using shared devices for MFA and VPN services to mask their origin. These tactics enabled persistence through long-lived sessions and authentication tokens, creating a high risk of privilege abuse and data exfiltration if left undetected.

Indicators observed during investigation

Shared devices for MFA: During authentication analysis, it was observed that a single device was repeatedly used to complete multifactor authentication (MFA) for multiple user accounts within the same tenant. MFA is intended to enforce strong identity assurance by binding authentication to a unique user-device pairing. When this control is circumvented, it raises a security concern.
Further investigation revealed consistent technical signals across these events, including identical session identifiers, common ISP, and geolocation. Such behavior is strongly indicative of fraudulent personas operating from shared workstations, pooled environments such as Azure Virtual Desktop (AVD), or compromised devices.

Suspicious login patterns: Initial access attempts originated from US or Western IP addresses to simulate remote work, followed by logins from Russian, Chinese, or other Asian IPs, often linked to VPN services such as Astrill. Certain sessions exhibited AVD-like login patterns originating from Russian-based instances.

Session persistence: Long-lived authentication sessions (such as non-password-based cookies and tokens) allow malicious insiders to maintain access even after password resets, as these tokens often remain valid without re-authentication.

Together, these behaviors pointed to a stealthy, long-term operation designed to blend in with legitimate activity and maintain persistent access through pre-existing privileges associated with IT roles.

Defender Experts response

Defender Experts took a proactive approach by meticulously analyzing the suspicious behavioral patterns and successfully uncovering multiple customers who had been impacted by the Jasper Sleet campaign. Upon identification, we immediately reached out to the affected organizations through managed response and Defender Experts Notifications, ensuring they were promptly informed about the threat and the necessary actions to be taken. Recognizing the potential for broader impact, we also issued proactive threat advisories to all Defender Experts customers. We actively engaged with the customers, initiating collaborative sessions to validate the attack vectors, discuss findings, and review the recommended steps.
This open channel of communication fostered a collective defense posture, where shared intelligence and real-time feedback between Defender Experts and customer teams amplified the speed and effectiveness of the response. Actions included:

Immediate alerts to affected organizations via Defender Experts Notifications titled “Microsoft Defender Experts: Potential malicious activity linked to threat actor observed in your environment”, which included observed indicators and recommendations.

Proactive threat advisories to all Defender Experts customers (subject: Microsoft Defender Experts Threat Advisory: Jasper Sleet), informing them of key adversarial tactics and impact, and encouraging them to remain vigilant and review their own environments for similar indicators of compromise.

Direct collaboration with customers to facilitate joint sessions aimed at validating attack vectors and discussing findings.

Outcome and impact

The coordinated and transparent partnership between Defender Experts and our customers played a critical role in containing the threat before it could escalate. Customers responded promptly by disabling compromised contract employee accounts, effectively mitigating the risk of further misuse. Through these actions, customers addressed immediate risks and strengthened their long-term security posture by implementing best practices and incorporating lessons learned from the incident.

This case highlights the significant value of robust collaboration between Defender Experts and customers in countering sophisticated, targeted cyberattacks, demonstrating that collective efforts enhance the ability to defend against evolving threats. Customers reported that timely alerts and expert guidance prevented downstream compromise and strengthened their security posture.
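The shared-device MFA indicator described earlier can also be hunted directly. The following is a hedged sketch, assuming Microsoft Entra sign-in logs are ingested into a Log Analytics workspace as the SigninLogs table; the threshold of three accounts per device is an illustrative assumption to tune for your environment, not a Defender Experts detection rule.

```kql
// Hedged sketch: surface devices used to satisfy MFA for many distinct accounts.
// Assumes Entra ID sign-in logs in the SigninLogs table.
SigninLogs
| where TimeGenerated > ago(30d)
| where AuthenticationRequirement == "multiFactorAuthentication"
| extend DeviceId = tostring(DeviceDetail.deviceId)
| where isnotempty(DeviceId)
| summarize
    DistinctUsers = dcount(UserPrincipalName),
    Users = make_set(UserPrincipalName),
    SourceIPs = make_set(IPAddress)
    by DeviceId
| where DistinctUsers > 3   // one device completing MFA for several accounts is suspicious
| order by DistinctUsers desc
```

Results can be pivoted against sign-in geolocation (for example, the LocationDetails field) to check for the US-to-Asia login pattern also described above.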
Some of the testimonials from the customers:

“A big thank you to the Microsoft team for all their efforts and coordination in detecting this, which has been immensely helpful.”

“Appreciate Defender Experts for finding this. You guys just signed next year’s renewal with this.”

Key takeaways (public safe metrics)

Impact Area | Metric
Alerts and logs analyzed | >10,000 correlated logs and events
Organizations protected | 40+ enterprise tenants
Defender proactive notifications | 200+ sent
Time to notify | <30 minutes, immediately after first detection

Reference

Jasper Sleet: North Korean remote IT workers’ evolving tactics to infiltrate organizations | Microsoft Security Blog

Stay vigilant - Stay protected

Learn more about how Microsoft Defender Experts can help safeguard your organization against sophisticated threats. Partner with Microsoft Defender Experts to stay ahead of advanced threats and protect your organization with confidence.

Microsoft Defender Experts for XDR - Expert-led monitoring and response across your extended detection and response stack.

Microsoft Defender Experts for Hunting - Proactive threat hunting to identify and stop attacks before they impact your business.

Meet DART, the people behind Microsoft Incident Response
When threat actors infiltrate a company to steal documents and other critical business information, Microsoft Incident Response - the Detection and Response Team (DART) - responds. With more than 4,500 engagements in 2024 and more than a millennium of combined experience, these responders are the calm in the storm when a compromise occurs. Whether they’re confronting ransomware, nation-state actors, or zero-day exploits, DART blends their knowledge and years of experience with agility, empathy, and storytelling to guide organizations from disruption to clarity. This blog introduces the people behind DART, which includes forensic analysts, infrastructure specialists, and threat hunters whose mission is to contain attacks, restore trust, and build resilience for the future.

DART

DART isn’t just a lifeline during a crisis; they’re strategic partners in building cyber resilience before threat actors strike. Through a suite of proactive services, the team helps organizations identify vulnerabilities, harden critical systems, and prepare for the unexpected. Proactive offerings like compromise assessments, identity reviews, and threat briefings allow customers to uncover hidden risks and receive tailored recommendations without the pressure of an active breach. By leveraging Microsoft’s vast threat intelligence and engineering access, organizations gain insights into emerging tactics and trends, ensuring their security posture evolves ahead of the threat landscape. In short, DART empowers defenders to out-think threat actors by mapping potential pathways, closing gaps, and rehearsing responses - so when the real thing happens, they’re ready.

Cyber Frontier: Real-World Insights from Microsoft & DART

The cyber threat landscape is constantly evolving, forcing defenders to rethink traditional security models. Over the past year, DART has observed shifts in tactics driven by the rise of AI and the relentless pursuit of identity compromise.
Threat actors are no longer just exploiting vulnerabilities; they’re weaponizing automation, scaling social engineering, and monetizing stolen access in ways that blur the line between cybercrime and organized business models. In this new paradigm, resilience, global collaboration, and anticipatory defense aren’t optional; they’re survival.

Microsoft Digital Defense Report 2025[1] ranks the largest causes of cyberattacks and prominent trends:

Phishing remained the most common initial access method. While there were many changes in the threat landscape, multifactor authentication (MFA) still blocks over 99% of unauthorized access attempts, making it the single most important security measure an organization can implement.

While phishing remains a common initial access method, campaigns are moving away from simple phishing as defense approaches evolve. Threat actors increasingly leverage multi-stage attack chains that mix technical exploits and social engineering to gain unauthorized access and maintain persistence.

In more than 90% of cases where cyberattacks progressed to the ransom stage, the threat actor had leveraged unmanaged devices in the network. This highlights the impact of gaps in endpoint visibility on threat actor progression.

Just as businesses are leveraging generative AI for productivity gains, threat actors are also using AI to augment traditional cyberattacks by automating activities from vulnerability discovery to deepfake content generation. As AI adoption becomes increasingly commonplace, these tools’ access to proprietary and sensitive data has also made AI systems an attractive target for malicious activity.

Nation-state actor and ransomware-as-a-service (RaaS) actor activity also evolved, moving away from centralized command-and-control servers and instead leveraging covert, peer-to-peer networks to avoid detection and survive takedowns by redistributing workloads across participants.
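The unmanaged-device statistic above is directly huntable in your own tenant. The following is a hedged sketch, assuming Microsoft Defender for Endpoint device discovery is enabled and the advanced hunting DeviceInfo table is available; treat it as a starting inventory, not a complete coverage audit.

```kql
// Hedged sketch: list discovered devices that are not onboarded to
// Microsoft Defender for Endpoint (i.e., unmanaged endpoints).
DeviceInfo
| where Timestamp > ago(7d)
| summarize arg_max(Timestamp, *) by DeviceId   // keep the latest record per device
| where OnboardingStatus != "Onboarded"
| project Timestamp, DeviceId, DeviceName, OSPlatform, OnboardingStatus
```

Devices surfaced here are the kind of visibility gaps the report associates with ransom-stage progression, so they are strong candidates for onboarding or network isolation.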
Meet some of the DARTians

DART members are a diverse and highly skilled team of individuals with the depth of expertise and experience to solve problems quickly. They continually adjust their actions based on what they learn. They can lock down a network, isolate an endpoint, or remove a threat actor before damage is done. CISOs, security teams, and our IR people share this common passion, commitment, and goal - to protect organizations against threats. We recently interviewed several of our team members about how they got into IR work, their most memorable engagements, advice they have for customers, and more. Here are some key highlights from those interviews:

How did you get your start?

“My journey began in 2005 when I enlisted in the US Marines and was assigned to the data field, managing servers, routers, and switches, and ensuring the network kept running smoothly. That technical foundation soon led me to a cybersecurity role for the Marine Corps at Quantico, where I first tasted what it meant to defend against real threats. As a Security Operations Center (SOC) analyst, I quickly discovered that cybersecurity is both broad and deep. It opened my eyes to how vulnerabilities are exploited and how a simple oversight can become a major incident.” – Edwin

“My path into cybersecurity wasn’t exactly a straight line. During the early parts of my career, I worked in business advisory and enterprise sales and was pursuing a path to an MBA from one of the major business schools. While corporate strategy and incident response may feel worlds apart, being able to bridge both has been my tactical advantage in working with our customers to demystify the events that take place in the cyber trenches I now call home.” – Pru

“The idea of tracking digital adversaries and solving complex puzzles always fascinated me, so when an opportunity arose in 2014 to join a Cyber Warfare unit, I took it.
It felt like stepping into a story told in the movies, only the stakes were real, and the learning was relentless.” – Max

Most memorable DART engagement moments

“We quickly gained trusted advisor status and were brought into the investigation. Together, we uncovered internet-facing servers with weak permissions that allowed the threat actor to pivot between on-premises and cloud environments using privileged accounts. In the end, we shut down the threat actor’s access, rebuilt compromised systems, reduced excessive permissions, and delivered a roadmap to strengthen defenses.” – Adrian

Advice for customers

“Nearly every major incident we see starts with compromised identity: weak credentials, stolen passwords, or missed phishing alerts. Enabling multi-factor authentication (MFA) is critical.” – Max

“Proactive defense pays off: Tools like Secure Score and strategies such as the tiering model can drastically reduce risk. Don’t wait until an incident to play catch-up and try to fortify defenses while also putting out a fire; it is best to act now.” – Edwin

“It may sound simple or even boring, but over 99% of cyberattacks can be prevented with MFA in place. It’s the digital equivalent of locking your front door and having an alarm system, and it’s easy and inexpensive to implement.” – Pru

“Teamwork is essential: No one succeeds in cybersecurity alone. Collaboration, both within your team and across organizations, is often the deciding factor in a successful response.” – Adrian

“Tools powered by AI, like Microsoft Copilot, have become invaluable in helping to quickly analyze suspicious processes, understand command lines, and connect the dots in complex investigations.” – Max

Building a safer digital world, together

DART doesn’t just solve technical problems - they help people through some of the most challenging moments of their careers. Every compromise is a puzzle, every engagement a chance to restore trust and build resilience.
What drives them isn’t just the scale or complexity of the work - it’s the impact they can make, one incident at a time.

Learn more

To learn more about Microsoft’s Incident Response capabilities and DART, please visit Microsoft Incident Response | Microsoft Security.

To read more about Microsoft Incident Response in action, read our latest installment of the Cyberattack Series, Retail at risk: How one alert uncovered a persistent cyberthreat.

Sources

1 Microsoft Digital Defense Report 2025 | Microsoft

Cloud forensics: Why enabling Microsoft Azure Key Vault logs matters
Co-authors: Christoph Dreymann, Abul Azed, Shiva P.

Introduction

As organizations increase their cloud adoption to accelerate AI readiness, Microsoft Incident Response has observed the rise of cloud-based threats as attackers seek to access sensitive data and exploit vulnerabilities stemming from misconfigurations often caused by rapid deployments. In this blog series, Cloud Forensics, we share insights from our frontline investigations to help organizations better understand the evolving threat landscape and implement effective strategies to protect their cloud environments.

This blog post explores the importance of enabling and analyzing Microsoft Azure Key Vault logs in the context of security investigations. Microsoft Incident Response has observed cases where threat actors specifically targeted Key Vault instances. In the absence of proper logging, conducting thorough investigations becomes significantly more difficult. Given the highly sensitive nature of the data stored in Key Vault, it is a common target for malicious activity. Moreover, attacks against this service often leave minimal forensic evidence when verbose logging is not properly configured during deployment. We will walk through realistic attack scenarios, illustrating how these threats manifest in log data and highlighting the value of enabling comprehensive logging for detection.

Key Vault

Key Vault is a cloud service designed for secure storage and retrieval of critical secrets such as passwords or database connection strings. In addition to secrets, it can store other information such as certificates and cryptographic keys. To ensure effective monitoring of activities performed on a specific instance of Key Vault, it is essential to enable logging. When audit logging is not enabled and there is a security breach, it is often difficult to ascertain which secrets were accessed without comprehensive logs.
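As a quick coverage check, a hedged sketch like the following (assuming Key Vault diagnostic logs are already routed to a Log Analytics workspace) shows which vaults are emitting audit events and how recently:

```kql
// Hedged sketch: verify which Key Vault instances are actually sending
// diagnostic logs to this workspace, and when they last did so.
AzureDiagnostics
| where ResourceType == 'VAULTS'
| summarize FirstLog = min(TimeGenerated), LastLog = max(TimeGenerated), Events = count() by Resource
| order by LastLog desc
```

Vaults absent from this output either have logging disabled or send logs to a different destination, and are the ones worth fixing before an incident rather than during one.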
Given the importance of the assets protected by Key Vault, it is imperative to enable logging during the deployment phase.

How to enable logging

Logging must be enabled separately for each Key Vault instance, either in the Microsoft Azure portal, the Azure command-line interface (CLI), or Azure PowerShell. How to enable logging can be found here. Alternatively, it can be configured within the default Log Analytics workspace as an Azure Policy. How to use this method can be found here.

By directing these logs to a Log Analytics workspace, storage account, or event hub for security information and event management (SIEM) ingestion, they can be utilized for threat detection and, more importantly, to ascertain when an identity was compromised and which type of sensitive information was accessed through that compromised identity. Without this logging, it is difficult to confirm whether any material has been accessed, and it may therefore need to be treated as compromised.

NOTE: There are no license requirements to enable logging within Key Vault, but Log Analytics charges based on ingestion and retention for usage of that service (Pricing - Azure Monitor | Microsoft Azure).

Next, we will review the structure of the audit logs originating from the Key Vault instance. These logs are located in the AzureDiagnostics table.

Interesting fields

Below is a good starting query to begin investigating activity performed against a Key Vault instance:

AzureDiagnostics
| where ResourceType == 'VAULTS'

The "operationName" field is of particular significance, as it indicates the type of operation that took place. An overview of Key Vault operations can be found here. The "Identity" field includes details about the identity responsible for an activity, such as the object identifier and UPN. Lastly, the "callerIpAddress" field shows which IP address the requests originate from.

The table below displays the fields highlighted and used in this article.

Field name | Description
time | Date and time in UTC.
resourceId | The Key Vault resource ID; uniquely identifies a Key Vault in Azure and is used for various operations and configurations.
callerIpAddress | IP address of the client that made the request.
Identity | The identity structure includes various information. The identity can be a "user," a "service principal," or a combination such as "user+appId" when the request originates from an Azure PowerShell cmdlet. Different fields are available based on this. The most important ones are: identity_claim_upn_s (specifies the UPN of a user), identity_claim_appid_g (contains the appid), and identity_claim_idtyp_s (shows what type of identity was used).
OperationName | The name of the operation, for instance SecretGet.
Resource | Key Vault name.
ResourceType | Always "VAULTS".
requestUri_s | The requested Key Vault API call; contains valuable information. Each API call has its own structure. For example, the SecretGet request URI is: {vaultBaseUrl}/secrets/{secret-name}/{secret-version}?api-version=7.4. For more information, please see: https://learn.microsoft.com/en-us/rest/api/keyvault/?view=rest-keyvault-keys-7.4
httpStatusCode_d | Indicates if an API call was successful.

A complete list of fields can be found here.

To analyze further, we need to understand how a threat actor can access a Key Vault by examining the Access Policy and Azure role-based access control (RBAC) permission models used within it.

Access Policy permission model vs Azure RBAC

The Access Policy permission model operates solely on the data plane, specifically for Azure Key Vault. The data plane is the access pathway for creating, reading, updating, and deleting assets stored within the Key Vault instance. Via a Key Vault access policy, you can assign individual permissions and grant access to security principals such as users, groups, service principals, and managed identities, at the Key Vault scope, with appropriate control plane privileges.
This model provides flexibility by granting access to keys, secrets, and certificates through specific permissions. However, it is considered a legacy authorization system native to Key Vault.

Note: The Access Policy permission model has privilege escalation risks and lacks Privileged Identity Management support. It is not recommended for critical data and workloads.

On the other hand, Azure RBAC operates on both Azure's control and data planes. It is built on Azure Resource Manager, allowing for centralized access management of Azure resources. Azure RBAC controls access by creating role assignments, which consist of a security principal, a role definition (a predefined set of permissions), and a scope (a group of resources or an individual resource). RBAC offers several advantages, including a unified access control model for Azure resources and integration with Privileged Identity Management. More information regarding Azure RBAC can be found here.

Now, let’s dive into how threat actors can gain access to a Key Vault.

How a threat actor can access a Key Vault

When a Key Vault is configured with the Access Policy permission model, privileges can be escalated under certain circumstances. If a threat actor gains access to an identity that has been assigned the Key Vault Contributor Azure RBAC role, the Contributor role, or any role that includes 'Microsoft.KeyVault/vaults/write' permissions, they can escalate their privileges by setting a Key Vault access policy to grant themselves data plane access, which in turn allows them to read and modify the contents of the Key Vault. Modifying the permission model requires the 'Microsoft.Authorization/roleAssignments/write' permission, which is included in the Owner and User Access Administrator roles. Therefore, a threat actor cannot change the permission model without one of these roles.
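Because this escalation path runs through a control-plane write, it can be hunted in the subscription Activity Logs. The following is a hedged sketch, assuming Activity Logs are ingested into the AzureActivity table; the operation name corresponds to the access policy write operation ('Microsoft.KeyVault/vaults/accessPolicies/write'), and the status filtering may need adjusting to your schema.

```kql
// Hedged sketch: surface access policy writes against Key Vaults,
// the mechanism by which a 'vaults/write' holder grants themselves data plane access.
AzureActivity
| where TimeGenerated > ago(14d)
| where OperationNameValue =~ "Microsoft.KeyVault/vaults/accessPolicies/write"
| project TimeGenerated, Caller, CallerIpAddress, ResourceGroup, _ResourceId, ActivityStatusValue
| order by TimeGenerated desc
```

Access policy writes by callers who do not normally administer the vault, or from unfamiliar IP addresses, warrant immediate review.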
Any change to the authorization mode will be logged in the Activity Logs of the subscription, as shown in the figure below:

If a new access policy is added, it will generate the following entry within the Azure Activity Log:

When Azure RBAC is the permission model for a Key Vault, a threat actor must identify an identity within the Entra ID tenant that has access to sensitive information, or one capable of assigning such permissions. Information about Azure RBAC roles for Key Vault access, specifically those that can access secrets, can be found here.

A threat actor that has compromised an identity with an Owner role is authorized to manage all operations, including resources, access policies, and roles within the Key Vault. In contrast, a threat actor with a Contributor role can handle management operations but does not have access to keys, secrets, or certificates. This restriction applies when the RBAC model is used within a Key Vault.

The following section will examine the typical actions performed by a threat actor after gathering permissions.

Attack scenario

Let's review the common steps threat actors take after gaining initial access to Microsoft Azure. We will focus on the Azure Resource Manager layer (responsible for deploying and managing resources), as its Azure RBAC or Access Policy permissions determine what a threat actor can view or access within Key Vault(s).

Enumeration

Initially, threat actors aim to understand the organization's existing attack surface. As such, all Azure resources will be enumerated. The scope of this enumeration is determined by the access rights held by the compromised identity. If the compromised identity possesses access comparable to that of a Reader or a Key Vault Reader at the subscription level (reader permission is included in a variety of Azure RBAC roles), it can read numerous resource groups. Conversely, if the identity's access is restricted, it may only view a specific subset of resources, such as Key Vaults.
Consequently, a threat actor can only interact with those Key Vaults that are visible to them. Once the Key Vault name is identified, a threat actor can interact with the Key Vault, and these interactions will be logged within the AzureDiagnostics table.

List secrets / List certificates operation

With the Key Vault name, a threat actor could list secrets or certificates (Operations: SecretList and CertificateList) if they have the appropriate rights (while this is not the final secret, it indicates under which name the secret or certificate can be retrieved). If not, access attempts would appear as unsuccessful operations within the httpStatusCode_d field, aiding in detecting such activities. Therefore, a high number of unauthorized operations on different Key Vaults could be an indicator of suspicious activity, as shown in the figure below.

The following query assists in detecting potential unauthorized access patterns:

AzureDiagnostics
| where ResourceType == 'VAULTS' and OperationName != "Authentication"
| summarize
    MinTime = min(TimeGenerated),
    MaxTime = max(TimeGenerated),
    OperationCount = count(),
    UnauthorizedAccess = countif(httpStatusCode_d >= 400),
    OperationNames = make_set(OperationName),
    make_set_if(httpStatusCode_d, httpStatusCode_d >= 400),
    VaultName = make_set(Resource)
    by CallerIPAddress
| where OperationNames has_any ("SecretList", "CertificateList") and UnauthorizedAccess > 0

When a threat actor uses a browser for interaction, the VaultGet operation is usually the first action when accessing a Key Vault. This operation can also be performed via direct API calls and is not limited to browser use.

High-privileged account store

Next, we assume a successful attempt to access a global admin password for Entra ID.

Analyzing secret retrieval

When an individual has the identifier of a Key Vault and has SecretList and SecretGet access rights, they can list all the secrets stored within the Key Vault (OperationName SecretList).
In this instance, this secret includes a password. Upon identifying the secret name, the secret value can be retrieved (OperationName SecretGet). The image below illustrates what appears in the AzureDiagnostics table. The HTTP status code indicates that these actions were successful. The requestUri contains the name of the secret, such as "BreakGlassAccountTenant" for the SecretGet operation. With this information, one can ascertain which secret has been accessed.

The requestUri_s format for the SecretGet operation is as follows:

{vaultBaseUrl}/secrets/{secret-name}/{secret-version}?api-version=7.4

When the browser accesses the Key Vault service through the Azure portal, additional API calls are often involved due to the various views within the Key Vault service in Azure. The figure below illustrates this process. When someone accesses a specific Key Vault via a browser, the VaultGet operation is followed by SecretList. To further distinguish actions, SecretListVersion will also be used, as the Key Vault service shows different versions of a secret, which may indicate direct browser usage. The final SecretGet operation retrieves the actual secret.

When using the Key Vault, SecretList operations can be accompanied by SecretGet operations. This is less common for emergency accounts, since these accounts are infrequently used. Setting up alerts when certain secrets are retrieved can assist in identifying unusual activity.

Entra ID Application certificate store

In addition to storing secrets, certificates that provide access to Entra ID applications can also be managed within a Key Vault. When creating an Entra ID application with a certificate for authentication, you can automatically store that certificate within a Key Vault of your choice. Access to such certificates could allow a threat actor to leverage the access rights of the associated Entra ID application and gain access to Entra ID.
For instance, if the Entra ID application possesses significant permissions, the extracted certificate could be utilized to exercise those permissions. Various Entra ID roles can be leveraged to elevate privileges; however, for this scenario, we assume the targeted application holds the "RoleManagement.ReadWrite.Directory" permission. Consequently, the Entra ID application would have the capability to assign the Global Admin role to a user account controlled by the threat actor. We have also described this scenario here.

Analyzing certificate retrieval

The figure below outlines the procedure for a threat actor to download a certificate and its private key using the Key Vault API. First, the CertificateList operation displays all certificates within a Key Vault. Next, the SecretGet operation retrieves a specific certificate along with its private key (the SecretGet operation is required to obtain both the certificate and its private key).

When a threat actor uses the browser through the Azure portal, the sequence of actions should resemble those in the figure below. When a certificate object is selected within a specific Key Vault view, all certificates are displayed (Operation: CertificateList). Upon selecting a particular certificate in this view, the operations CertificateGet and CertificateListVersions are executed. Subsequently, when a specific version is selected, the CertificateGet operation will be invoked again. When "Download in PFX/PEM format" is selected, the SecretGet operation downloads the certificate and private key within the browser. With the downloaded certificate, the threat actor can sign in as the Entra application and utilize the assigned permissions.

Key Vault summary

Detecting misuse of a Key Vault instance can be challenging, as operations like SecretGet can be legitimate. A threat actor might easily masquerade their activities among legitimate users.
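Because legitimate SecretGet traffic can be heavy, it helps to scope an investigation to which secrets a given identity actually retrieved. The following is a hedged sketch that extracts the secret name from the requestUri_s format documented above; the extraction regex is an assumption to validate against your own log data.

```kql
// Hedged sketch: list which secrets each identity successfully retrieved,
// extracting the secret name from the documented requestUri_s format.
AzureDiagnostics
| where ResourceType == 'VAULTS'
| where OperationName == "SecretGet" and httpStatusCode_d == 200
| extend SecretName = extract(@"/secrets/([^/?]+)", 1, requestUri_s)
| summarize
    Retrievals = count(),
    SourceIPs = make_set(CallerIPAddress),
    Vaults = make_set(Resource)
    by SecretName, identity_claim_upn_s
| order by Retrievals desc
```

Retrievals of sensitive names (for example, break-glass account secrets) by unexpected identities or from unfamiliar IP addresses are strong candidates for follow-up.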
Nevertheless, unusual attributes, such as IP addresses or peculiar access patterns, could serve as indicators. If an identity is known to be compromised and has utilized Key Vaults, the Key Vault logs must be checked to determine what has been accessed in order to respond appropriately.

Coming up next

Stay tuned for the next blog in the Cloud Forensics series. If you haven’t already, please read our previous blog about hunting with Microsoft Graph activity logs.

Sploitlight: Hunting Beyond the Patch
Many people aren’t aware that Microsoft security isn't just about Microsoft, it’s also about the platforms supporting the products we build. This means our reach extends across all operating systems: iOS, Android, Linux, and macOS! In early 2025 Microsoft disclosed CVE-2025-31199, a macOS vulnerability that abused Spotlight, macOS’s metadata importer framework to bypass Transparency, Consent, and Control (TCC). After the Defender team reported this to Apple, a patch was released that closed the hole. But, the underlying behavior behind the threat still matters to Microsoft! Once attackers learn that trusted macOS services can be redirected, they will reuse the method for nefarious purposes, so it is important to track them down. The next variant won’t look the same, and Spotlight is a commonly targeted service. [1] So, in this article, we teach you how to hunt beyond the patch! Why Hunt for Sploitlight Spotlight importers (.mdimporter) extend macOS indexing. They normally process metadata for search visibility. Attackers can twist that design to index protected files, extract sensitive data, or trigger code execution, perhaps with elevated system trust and privileges. Even with the patch in place, the same logic paths remain valuable targets for attackers. We recommend hunting for patterns around importers, indexing behavior, and TCC privileged binaries to help detect attempts to rebuild this chain of abuse. Advanced Hunting Queries (AHQs) 1. 
Detect Unusual Spotlight Importer Activity

Looking for manual invocations of mdimport may tip you off to attacker activity.

// Broad: any mdimport invocation
DeviceProcessEvents
| where ProcessCommandLine contains "mdimport"

// Granular: mdimport invoked with common flags
DeviceProcessEvents
| where ProcessCommandLine contains "mdimport"
| extend mdimportFlag = extract(@"-(\w+)", 1, ProcessCommandLine)
| where isnotempty(mdimportFlag)
| where mdimportFlag in~ ("r", "i", "t", "L")

Why it’s important: A Spotlight plugin being developed or tested will be called from the command line using the mdimport utility. For a wide-sweeping query, just search for mdimport alone. To get more granular, search for it together with common parameters such as "r", "i", "t", or "L".

2. Investigate Anomalous Spotlight Activity

Use this query to monitor Spotlight activity in the background.

DeviceProcessEvents
| where FileName in~ ("mdworker", "mdworker_shared")

Why it’s important: The Advanced Hunting portal creates timelines that let you quickly zoom in on abnormal behavior, and peaks can show when new Spotlight plugins are invoked.

Defender Recommendations

Establish a baseline of normal Spotlight activity before setting detection thresholds. Tag importer activity by TCC domain to surface unexpected access. Correlate unsigned importer drops with system events such as privilege escalation or installer execution. Deploy these AHQs in Microsoft Defender XDR or Sentinel for continuous telemetry review.

The Bigger Picture

The point isn’t to memorize CVEs. It’s to understand the logic that made them possible and look for it everywhere else. Threat actors don’t repeat exploits; they repeat success patterns. Visibility is the only real control. If a process touches data, moves it, or indexes it, it’s part of your attack surface. Treat it that way. 👉 Join the Defender Experts S.T.A.R.
Forum to see Sploitlight detection strategies and live hunting demonstrations: Defender Experts Webinar Series

[1] References:
https://theevilbit.github.io/posts/macos_persistence_spotlight_importers/
https://www.blackhat.com/docs/us-15/materials/us-15-Wardle-Writing-Bad-A-Malware-For-OS-X.pdf
https://newosxbook.com/home.html
https://www.microsoft.com/en-us/security/blog/2025/07/28/sploitlight-analyzing-a-spotlight-based-macos-tcc-vulnerability/

Cloud shadows: How attackers exploit Azure’s elasticity for stealth and scale
Threats like password spray or adversary-in-the-middle (AiTM) are routine and too easily overlooked in an endless stream of security alerts. But what if these routine threats are only a small part of a much deeper, more sophisticated attack? Since June 2025, Microsoft Defender Experts has been closely monitoring a sophisticated and continuously evolving attack campaign targeting poorly managed Azure cloud environments. What sets these threats apart is their use of Azure’s elasticity and interconnected structure, which allows users and attackers alike to move more easily through multi-tenant environments and avoid basic detection. By specifically targeting student and Pay-As-You-Go accounts that are improperly secured and poorly monitored, adversaries can rapidly move across tenants, weaponize ephemeral resources, and manipulate quotas, constructing a resilient and dynamic ecosystem. Their methods blend so seamlessly with legitimate cloud activity that they frequently evade basic threat detection methods, taking full advantage of trusted cloud features to ensure persistence and scale. The campaigns demonstrate how today’s adversaries can transform even a single compromised credential into a sprawling and complex attack across multiple tenants. Attackers no longer simply establish static footholds; instead, every compromised account becomes a possible springboard, every tenant a new beachhead. Their arsenal is thoroughly cloud-native: rapidly deploying short-lived virtual machines, registering OAuth applications for ongoing access, manipulating service quotas to expand their attack infrastructure, and abusing machine learning workspaces for covert activity. The result is an attack ecosystem that’s agile, elusive, and built to endure in the fast-moving world of the cloud.

Why are these attacks worth watching?
These attacks represent a strategic evolution in threat actor behavior: blending into legitimate cloud activity, evading traditional detection, and exploiting the very features that drive business agility. The scale, adaptability, and persistence demonstrated in this campaign are a wake-up call: defenders must look beyond the surface, understand the full lifecycle of cloud-native attacks, and be prepared to counter adversaries who are already mastering the art of stealth and scale. This blog doesn’t just recount what happened; it breaks down the anatomy of a cloud-scale attack. Whether you're a security analyst, cloud architect, or threat hunter, the goal is to help you recognize the signs, understand the methods, and prepare your defenses. With the cloud, organizations benefit from scale, global access, and agility. But if not properly secured, those same attributes benefit threat actors.

Resource development: exploiting the weakest links

Microsoft Defender Experts has observed ongoing, large-scale campaigns against Azure environments. Student and Pay-As-You-Go (PAYG) accounts were exploited due to poor security hygiene. These accounts often lacked essential protections: weak or default passwords, no multi-factor authentication (MFA), and no active security monitoring or Defender for Cloud subscription. Initial access was achieved via adversary-in-the-middle (AiTM) attacks or password sprays against Azure User Profile Application (UPA) accounts, commonly using infrastructure hosted by M247 Europe SRL & LTD (New York) and Latitude.

Weaponizing ephemeral infrastructure

Once access was established using a compromised account, the attacker created new Resource Groups and deployed short-lived Virtual Machines (VMs). These VMs ran for as little as 3–4 hours and up to 1–2 days before being deleted. This approach enabled rapid rotation of attack infrastructure, a minimal forensic footprint, and evasion of long-term detection.
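Short-lived VMs of this kind can be surfaced by pairing create and delete operations in AzureActivity. A sketch, with an illustrative lifetime threshold that is not taken from the investigation itself:

```kusto
// Pair VM create and delete events and flag VMs that lived less than 2 days.
let Creates = AzureActivity
    | where OperationNameValue =~ "Microsoft.Compute/virtualMachines/write"
    | where ActivityStatusValue == "Success"
    | project VmId = _ResourceId, CreateTime = TimeGenerated, Caller;
let Deletes = AzureActivity
    | where OperationNameValue =~ "Microsoft.Compute/virtualMachines/delete"
    | where ActivityStatusValue == "Success"
    | project VmId = _ResourceId, DeleteTime = TimeGenerated;
Creates
| join kind=inner Deletes on VmId
| extend Lifetime = DeleteTime - CreateTime
| where Lifetime between (0h .. 2d)
| project VmId, Caller, CreateTime, DeleteTime, Lifetime
| order by Lifetime asc
```

Legitimate automation also creates and tears down VMs quickly, so treat hits as leads to correlate with risky sign-ins rather than verdicts.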
From these ephemeral VMs, large-scale password spray attacks were launched (predominantly using the user agents BAV2ROPC, python-requests/2.32.3, and python-requests/2.32.4) against thousands of accounts across multiple Azure tenants. Operating within Azure’s ecosystem helped the campaign stay below conventional alerting thresholds. Alerts that did occur were often dismissed as false positives or benign because they originated from legitimate Azure-associated IP addresses.

Scaling through multi-hop and multitenant techniques

The sophistication of this campaign lies in its multi-hop and multitenant architecture. Multi-hop: the attacker used compromised Azure VMs to pivot and launch attacks on other accounts, masking their origin and complicating attribution. Multitenant: by controlling multiple Azure tenants, attackers distribute their operations, scale attacks, and maintain resilience against takedowns. This cross-tenant movement within the Azure environment allows attackers to expand their footprint more easily, making detection more challenging.

Impact: spam, financial fraud, phishing, and sextortion campaigns

Following each successful password spray attack, the campaign expanded across compromised Azure tenants. Using access gained from earlier stages, the attacker repurposed virtual machines within these tenants to send large volumes of phishing and scam emails. These phishing campaigns were carefully crafted to deceive users in compromised tenants, often leading to financial fraud involving URL shorteners like rebrand.ly, redirecting victims to fraudulent, non-work-related websites such as those with personal interest, entertainment, or leisure activity content.
On those fake sites, users were prompted to complete surveys or questionnaires, provide personal information, or download malicious Android APKs such as FM WhatsApp or Yo WhatsApp. Note: the APK is a re-signed WhatsApp clone trojan that exploits elevated WhatsApp permissions to harvest private data (contacts, files) while mimicking legitimate registration by communicating with official servers to evade detection. Its malicious actions are triggered via commands hosted in a compromised GitHub repo (xiaoqaingkeke/Stat), indicating a GitHub-based C2. In some cases, victims were lured into entering their mobile numbers for chat services or installing additional video calling apps, further expanding the attacker’s reach and enabling data harvesting and even extortion.

Persistence and expansion

Privileged access and the infrastructure the attacker compromised, built, and used in this campaign are worthless if the attacker cannot maintain control. To maintain and strengthen their foothold, the adversary deployed multiple persistence mechanisms. Below is a summary of the persistence techniques used by the attacker, as observed by Microsoft Defender Experts across compromised tenants during the investigation.

Abuse of OAuth applications

Once access to an Azure tenant was obtained, the campaign escalated by registering OAuth applications. Two distinct types of applications were observed. Azure CLI–themed apps (named like "Azure-CLI-2025-06-DD-HH-MM-SS" and "Azure CLI") were registered with the compromised tenant as owner. The attacker added password credentials and created service principals for these apps to enable persistent backdoors (and even attempted to re-enable a disabled subscription). In one instance, two custom Azure CLI apps were used to authenticate to Azure Databricks so access would survive account disables.
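Credential additions like those described above are recorded in the Entra ID audit log. A hunting sketch against the AuditLogs table (the name filter mirrors the themed app names seen in this campaign and is illustrative; drop it to review all credential additions):

```kusto
// Surface new credentials added to service principals, focusing on
// Azure CLI-themed application names observed in the campaign.
AuditLogs
| where OperationName == "Add service principal credentials"
| extend Target = tostring(TargetResources[0].displayName)
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| where Target startswith "Azure-CLI-" or Target =~ "Azure CLI"
| project TimeGenerated, OperationName, Actor, Target, CorrelationId
```

Because legitimate `az ad sp create-for-rbac` runs produce similarly named apps, correlate hits with the actor's sign-in risk before acting.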
The attacker also registered a malicious custom application named MyNewApp, which was used to send large volumes of phishing emails. The campaign was successfully traced by analyzing Microsoft Graph API calls, which revealed delivery and engagement patterns.

Quota manipulation

To amplify the campaign’s infrastructure, the attacker exploited compromised credentials to submit service tickets requesting quota increases for Azure VM families: a request was made to raise the quota for the DaV4 VM family to 960 cores across multiple regions, and a guest account, added during the attack, submitted a similar request for the EIADSv5 VM family. These actions reflect a deliberate effort to scale up the virtual machine farm, enabling broader password spray operations and phishing campaigns. Notably, the VM farm created by the compromised user was dismantled within three hours, while the farm initiated by the guest account remained active for a full day. This highlights the risk of guest access persistence, which often remains unless explicitly revoked.

Advanced abuse in Azure: ML workspaces, Key Vaults, and beyond

The recent campaign against a poorly managed, monitored, and configured Azure environment was marked by a sophisticated, multi-stage attack that leveraged the elasticity and trusted features of cloud-native infrastructure for stealth and scale. The attacker’s operations were not limited to simple credential theft or cross-tenant movement; they demonstrated advanced abuse of Azure’s Machine Learning (ML) services, notebook proxies, Key Vaults, and blob storage to automate, persist, and exfiltrate at scale.

ML workspaces and notebook proxies: a stealthy execution layer

The attacker repeatedly created Machine Learning workspaces (Microsoft.MachineLearningServices/workspaces/write) and deployed notebook proxies (Microsoft.Notebooks/NotebookProxies/write) using both compromised user accounts and invited guest identities.
Attackers can abuse Azure ML to run cryptominers or malicious jobs disguised as training, poison or deploy compromised models, use workspaces and notebooks for persistent remote code execution, and exfiltrate data via linked storage. They scale with automated pipelines and quota requests, all while blending into normal AI workflows to evade detection.

Blob storage exploitation: payload staging and data exfiltration

Simultaneously, the attacker provisioned blob storage containers (Microsoft.Storage/storageAccounts/blobServices/containers/write) to stage payloads, host malicious scripts, and store sensitive datasets. The global accessibility and high availability of blob storage made it an ideal channel for covert data exfiltration and operational agility, minimizing the likelihood of detection.

Key Vault manipulation: securing persistence

The creation and modification of Key Vaults (Microsoft.KeyVault/vaults/write) suggests a deliberate effort to store secrets, credentials, and access tokens. That allowed the attacker to automate interactions with other Azure services and maintain long-term persistence. By embedding themselves into the fabric of cloud identity and access management, they ensured continued access even if initial entry points were remediated.

Damage statistics from the campaign controlled by a single attacker machine

The impact? Staggering. In a matter of days, a single attacker machine was able to:
Target nearly 1.9 million global users and compromise over 51,000 accounts.
Infiltrate 35 Azure tenants and abuse 36 subscriptions.
Spin up 154 virtual machines, with 86 used specifically for password spray attacks.
Raise over 800,000 Defender alerts, flooding security teams and masking true malicious activity.
Send 2.6 million spam emails.
Abuse Azure’s machine learning services, register malicious OAuth apps, and manipulate quotas to scale up attacks, all while maintaining persistence and evading detection.
Recommendations

Harden identity to prevent attackers from exploiting low-hanging student subscriptions. Enforce MFA and password protection, as many of these users never enroll in MFA. Investigate and auto-remediate risky users and sign-ins; enable token protection (where available) to reduce the blast radius of stolen cookies. Microsoft’s public AiTM guidance consolidates these defenses, and XDR’s AiTM disruption revokes cookies and disables users during active compromise.

Constrain abuse pathways in Azure. Apply least-privilege RBAC, review guest invitations, and monitor for role promotions on a schedule and via near-real-time analytics, as outlined in Microsoft’s subscription compromise post. Watch for subscription directory/transfer changes and couple them with approval-style processes; remember that a transfer can move management (and thus logs) while billing may not change by default.

Treat quota as a credit limit and instrument alerts for large, fast, or multi-region quota consumption to spot bursts (legitimate or not). Microsoft’s ML quota docs explain defaults, VM family splits (e.g., N-series GPUs default to zero), and how to request increases.

If you suspect your subscription is being misused

Start an investigation using Microsoft’s playbooks (password spray) and the hunting queries below; prioritize containment of accounts with risky sign-ins and recent ARM writes. If you’re a CSP partner, subscribe to Unauthorized Party Abuse (UPA) alerts and follow the documented response steps for compromised Azure subscriptions. These alerts help surface anomalous consumption and abuse earlier. Clean up tenants and subscriptions you don’t need and understand transfer/cancellation mechanics ("Protect tenants and subscriptions from abuse and fraud attacks"). This both reduces your attack surface and simplifies response.
Report abuse (e.g., spam, DoS, brute force, malware) observed from Azure IPs or URLs via the MSRC reporting portal; this ensures the platform teams can act on infrastructure being used against others.

A practical hunting mini playbook

1) Azure resource writes, role assignments, etc. (last 24h) from high-risk sign-in accounts.

let RiskySignin = SigninLogs
| where TimeGenerated > ago(24h)
| where RiskLevelAggregated == "high"
| project RiskTime = TimeGenerated, UserPrincipalName, IPAddress;
AzureActivity
| where TimeGenerated > ago(24h)
| where OperationNameValue has_any (
    "Microsoft.MachineLearningServices/workspaces/write",
    "Microsoft.MachineLearningServices/workspaces/computes/write",
    "Microsoft.Compute/virtualMachines/extensions/write",
    "Microsoft.Authorization/roleAssignments/write",
    "Microsoft.Resources/subscriptions/resourceGroups/write",
    // Optional: include the VM create/update itself (not just extensions)
    "Microsoft.Compute/virtualMachines/write"
  )
  or (ActivityStatusValue == "Success" and OperationNameValue == "Microsoft.Subscription/aliases/write")
| extend CallerIP = coalesce(CallerIpAddress, tostring(parse_json(Properties).callerIpAddress))
| join kind=inner (RiskySignin) on $left.Caller == $right.UserPrincipalName
| where TimeGenerated between (RiskTime .. RiskTime + 2h)
| summarize Ops = count(), DistinctOps = dcount(OperationNameValue) by Caller, CallerIP, bin(TimeGenerated, 30m)
| order by Ops desc

2) Azure Activity (Sentinel): support ticket creation before ML service deployment, indicating quota abuse.

// The query below shows risky users writing support tickets that involve quota increases
let RiskySignin = SigninLogs
| where TimeGenerated > ago(24h)
| where RiskLevelAggregated == "high"
| project RiskTime = TimeGenerated, UserPrincipalName, IPAddress;
AzureActivity
| where TimeGenerated > ago(24h)
| where OperationNameValue has_any ("supportTickets/write", "usages/write")
| project QuotaTime = TimeGenerated, Caller, CallerIpAddress = tostring(parse_json(Properties).callerIpAddress)
| join kind=inner (RiskySignin) on $left.Caller == $right.UserPrincipalName
| where QuotaTime between (RiskTime .. RiskTime + 2h)

In conclusion

The cloud offers organizations many important benefits. Unfortunately, threat actors are leveraging cloud attributes such as elasticity, scale, and interconnectedness to orchestrate persistent, multitenant attacks that evade traditional defenses. As demonstrated, even a single compromised account can rapidly escalate into a widespread attack affecting thousands of users and tenants. To counter these evolving threats, defenders must adopt proactive measures: enforce strong identity controls, monitor for suspicious activity, limit privileges, and regularly audit cloud resources. Ultimately, maintaining vigilance and adapting security practices to the dynamic nature of cloud environments such as Azure is essential to protect against increasingly stealthy and scalable adversaries and to make your cloud more secure.

What's next?

Join us at Microsoft Ignite in San Francisco on November 17–21, or online, November 18–20, for deep dives and practical labs to help you maximize your Microsoft Defender investments and to get more from the Microsoft capabilities you already use.
Security is a core focus at Ignite this year, with the Security Forum on November 17th, deep-dive technical sessions, theater talks, and hands-on labs designed for security leaders and practitioners.

Featured sessions

BRK237 – Identity Under Siege: Modern ITDR from Microsoft. Join experts in Identity and Security to hear how Microsoft is streamlining collaboration across teams and helping customers better protect, detect, and respond to threats targeting your identity fabric.
BRK240 – Endpoint security in the AI era: What's new in Defender. Discover how Microsoft Defender’s AI-powered endpoint security empowers you to do more, better, faster.
BRK236 – Your SOC’s ally against cyber threats, Microsoft Defender Experts. See how Defender Experts detect, halt, and manage threats for you, with real-world outcomes and demos.
LAB541 – Defend against threats with Microsoft Defender. Get hands-on with Defender for Office 365 and Defender for Endpoint, from onboarding devices to advanced attack mitigation.

Explore and filter the full security catalog by topic, format, and role: aka.ms/SessionCatalogSecurity.

Why attend?

Ignite is the place to learn about the latest Defender capabilities, including new agentic AI integrations and unified threat protection. We will also share future-facing innovations in Defender as part of our ongoing commitment to autonomous defense.

Security Forum—Make day 0 count (November 17)

Kick off with an immersive, in-person pre-day focused on strategic security discussions and real-world guidance from Microsoft leaders and industry experts. Select Security Forum during registration. Register for Microsoft Ignite >

Cloud forensics: Why enabling Microsoft Azure Storage Account logs matters
Co-authors: Christoph Dreymann, Shiva P

Introduction

Azure Storage Accounts are frequently targeted by threat actors. Their goal is to exfiltrate sensitive data to external infrastructure under their control. Because diagnostic logging is not always fully enabled by default, valuable evidence of their malicious actions may be lost. In this blog, we will explore realistic attack scenarios and demonstrate the types of artifacts those activities generate. By properly enabling Microsoft Azure Storage Account logs, investigators gain a better understanding of the scope of an incident. The information can also guide remediation of the environment and help prevent data theft from recurring.

Storage Account

A Storage Account provides scalable, secure, and highly available storage for storing and managing data objects. Due to the variety of sensitive data that can be stored, it is another highly valued target for threat actors. Threat actors exploit misconfigurations, weak access controls, and leaked credentials to gain unauthorized access. Key risks include Shared Access Signature (SAS) token misuse, which allows threat actors to access or modify exposed blob storage, and Storage Account key exposure, which could grant privileged access to the data plane. Investigating storage-related security incidents requires familiarity with Azure activity logs and diagnostic logs. Diagnostic log types for Storage Accounts are StorageBlob, StorageFile, StorageQueue, and StorageTable. These logs can help identify unusual access patterns, role changes, and unauthorized SAS token generation. This blog is centered around StorageBlob activity logs.

Storage Account logging

The logs for a Storage Account aren’t enabled by default. These logs capture operations, requests, and usage, such as read, write, and delete actions on storage objects like blobs, queues, files, or tables.
NOTE: There are no license requirements to enable Storage Account logging, but Log Analytics charges based on ingestion and retention (Pricing - Azure Monitor | Microsoft Azure). More information on enabling logging for a Storage Account can be found here.

Notable fields

The log entries contain various fields which are of use not only during or after an incident, but also for general monitoring of a storage account during normal operations (for a full list, see what data is available in the Storage Logs). Once storage logging is enabled, one of the key tables within Log Analytics is StorageBlobLogs, which provides details about blob storage operations, including read, write, and delete actions. Key columns such as OperationName, AuthenticationType, StatusText, and UserAgentHeader capture essential information about these activities. The OperationName field identifies operations on a storage account, such as "PutBlob" for uploads or "DeleteBlob" and "DeleteFile" for deletions. The UserAgentHeader field offers valuable insight into the tools used to access blob storage. Accessing blob storage through the Azure portal is typically logged with a generic user agent, which indicates the application used to perform the access, such as a web browser like Mozilla Firefox. In contrast, tools like AzCopy or Microsoft Azure Storage Explorer are explicitly identified in the logs. Analyzing the UserAgentHeader provides crucial details about the access method, helping determine how the blob storage was accessed. The following table includes additional investigation fields:

Field name | Description
TimeGenerated [UTC] | The date and time of the operation request.
AccountName | Name of the Storage Account.
OperationName | Name of the operation. A detailed list of StorageBlob operations can be found here.
AuthenticationType | The type of authentication that was used to make this request.
StatusCode | The HTTP status code for the request. If the request is interrupted, this value might be set to Unknown.
StatusText | The status of the requested operation.
Uri | Uniform resource identifier that is requested.
CallerIpAddress | The IP address of the requester, including the port number.
UserAgentHeader | The User-Agent header value.
ObjectKey | Provides the path of the object requested.
RequesterUpn | User Principal Name of the requester.
AuthenticationHash | Hash of the authentication token used during a request. A request authenticated with a SAS token includes a SAS signature specifying the hash derived from the signature part of the SAS token.

For a full list, see what data is available in the Storage Logs.

How a threat actor can access a Storage Account

Threat actors can access a Storage Account through Azure-assigned RBAC, a SAS token (including a user delegation SAS token), Azure Storage Account keys, and anonymous access (if configured).

Storage Account Access Keys

Azure Storage Account access keys are shared secrets that enable full access to Azure storage resources. When creating a storage account, Azure generates two access keys; both can be used for authentication with the storage account. These keys are permanent and do not have an expiration date. Storage Account Owners and roles such as Contributor, or any other role with the assigned action Microsoft.Storage/storageAccounts/listKeys/action, can retrieve and use these credentials to access the storage account. Account access keys can be rotated/regenerated, but if done unintentionally, this could disrupt applications or services that depend on the key for authentication. Additionally, this action invalidates any SAS tokens derived from that key, potentially revoking access to dependent workflows. Monitoring key rotations can help detect unexpected changes and mitigate disruptions.
Query: This query can help identify instances of account key rotations in the logs.

AzureActivity
| where OperationNameValue has "MICROSOFT.STORAGE/STORAGEACCOUNTS/REGENERATEKEY/ACTION"
| where ActivityStatusValue has "Start"
| extend resource = parse_json(todynamic(Properties).resource)
| extend requestBody = parse_json(todynamic(Properties).requestbody)
| project TimeGenerated, OperationNameValue, resource, requestBody, Caller, CallerIpAddress

Shared Access Signature

SAS tokens offer a granular method for controlling access to Azure storage resources. SAS tokens specify the permitted actions on a resource and their duration. They can be generated for blobs, queues, tables, and file shares within a storage account, providing precise control over data access. A SAS token allows access via a signed URL. A Storage Account Owner can generate SAS tokens and connection strings for various resources within the storage account (e.g., blobs, containers, tables) without restrictions. Additionally, roles with Microsoft.Storage/storageAccounts/listKeys/action rights can also generate SAS tokens. SAS tokens enable access to storage resources using tools such as Azure Storage Explorer, Azure CLI, or PowerShell. It is important to note that the logs do not indicate when a SAS token was generated [How a shared access signature works]. However, their usage can be inferred by tracking configuration changes that enable the use of the storage account keys option, which is disabled by default.
Figure 1: Configuration setting to enable account key access

Query: This query is designed to detect configuration changes made to enable access via storage account keys.

AzureActivity
| where OperationNameValue has "MICROSOFT.STORAGE/STORAGEACCOUNTS/WRITE"
| where ActivityStatusValue has "Success"
| extend allowSharedKeyAccess = parse_json(tostring(parse_json(tostring(parse_json(Properties).responseBody)).properties)).allowSharedKeyAccess
| where allowSharedKeyAccess == "true"

User delegated Shared Access Signature

A user delegation SAS is a type of SAS token that is secured using Microsoft Entra ID credentials rather than the storage account key. For more details, see Authorize a user delegation SAS. To request a SAS token using the user delegation key, the identity must possess the Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey action (see Assign permissions with RBAC).

Azure Role-Based Access Control

A threat actor must identify a target (an identity) that can assign roles or already holds specific RBAC roles. To assign Azure RBAC roles, an identity must have Microsoft.Authorization/roleAssignments/write, which allows the assignment of roles necessary for accessing storage accounts. Some examples of roles that provide permissions to access data within a Storage Account (see Azure built-in roles for blob): Storage Account Contributor (Read, Write, Manage Access), Storage Blob Data Contributor (Read, Write), Storage Blob Data Owner (Read, Write, Manage Access), Storage Blob Data Reader (Read Only). Additionally, to access blob data in the Azure portal, a user must also be assigned the Reader role (see Assign an Azure role). More information about Azure built-in roles for a Storage Account can be found here: Azure built-in roles for Storage.
Anonymous Access

If the storage account configuration 'Allow Blob anonymous access' is set to enabled and a container is created with anonymous read access, a threat actor could access the storage contents from the internet without any authorization.

Figure 2: Configuration settings for Blob anonymous access and container-level anonymous access.

Query: This query helps identify successful configuration changes to enable anonymous access.

AzureActivity
| join kind=rightouter (
    AzureActivity
    | where TimeGenerated > ago(30d)
    | where OperationNameValue has "MICROSOFT.STORAGE/STORAGEACCOUNTS/WRITE"
    | where Properties has "allowBlobPublicAccess"
    | extend ProperTies = parse_json(Properties)
    | evaluate bag_unpack(ProperTies)
    | extend allowBlobPublicAccess = todynamic(requestbody).properties["allowBlobPublicAccess"]
    | where allowBlobPublicAccess has "true"
    | summarize by CorrelationId
  ) on CorrelationId
| extend ProperTies = parse_json(Properties)
| evaluate bag_unpack(ProperTies)
| extend allowBlobPublicAccess_req = todynamic(requestbody).properties["allowBlobPublicAccess"]
| extend allowBlobPublicAccess_res = todynamic(responseBody).properties["allowBlobPublicAccess"]
| extend allowBlobPublicAccess = case(allowBlobPublicAccess_req != "", allowBlobPublicAccess_req, allowBlobPublicAccess_res != "", allowBlobPublicAccess_res, "")
| project OperationNameValue, ActivityStatusValue, ResourceGroup, allowBlobPublicAccess, Caller, CallerIpAddress, ResourceProviderValue

Key notes regarding the authentication methods

When a user accesses Azure Blob Storage via the Azure portal, the interaction is authenticated using OAuth and is authorized by the Azure RBAC role configuration for the given user. In contrast, authentication using Azure Storage Explorer and AzCopy depends on the method used: if a user interactively signs in via the Azure portal or utilizes the device code flow, authentication appears as OAuth-based.
When using a SAS token, authentication is recorded as SAS-based for both tools. Access via Azure RBAC is logged in Entra ID sign-in logs; however, activity related to SAS token usage does not appear in the sign-in logs, as a SAS token provides pre-authorized access. Log analysis should consider all operations, since the initial actions can reveal the true authentication method: even OAuth-based access may show as SAS in the logs. The screenshot below illustrates three distinct cases, each showcasing different patterns of authentication types used when accessing storage resources. In the first example, a SAS token is consistently used across various operations; the SAS token is the primary access method. The example highlighted as ‘2’ demonstrates a similar pattern, with OAuth (using an assigned Azure RBAC role) serving as the primary authentication method for all listed operations. Lastly, in example ‘3’, operations start with OAuth authentication (using an assigned Azure RBAC role for authorization) and then switch to a SAS token, indicating mixed authentication types.

Figure 3: Different patterns of authentication types

Additionally, when using certain applications such as Azure Storage Explorer with account access key authentication, the initial operations such as ListContainers and ListBlobs are logged with the authentication type reported as "AccountKey". However, for subsequent actions like file uploads or downloads, the authentication type switches to SAS in the logs. To accurately determine whether an account access key or SAS was used, it's important to correlate these actions with the earlier enumeration or sync activity within the logs. With this understanding, let’s proceed to analyze specific attack scenarios using Log Analytics tables such as StorageBlobLogs.

Attack scenario

This section will examine the typical steps that a threat actor might take when targeting a Storage Account.
We will specifically focus on the Azure Resource Manager layer, where Azure RBAC initially dictates what a threat actor can discover.

Enumeration

During enumeration, a threat actor's goal is to map out the available storage accounts. The range of this discovery is determined by the access privileges of the compromised identity. If that identity holds at least a minimal level of access (similar to Reader) at the subscription level, it can view storage account resources without making any modifications. Importantly, this permission level does not grant access to the actual data stored within Azure Storage itself; a threat actor is limited to interacting only with those storage accounts that are visible to them.

To access and download files from Blob Storage, a threat actor must know the names of containers (Operation: ListContainers) and the files within those containers (Operation: ListBlobs). All interactions with these storage elements are recorded in the StorageBlobLogs table. A threat actor with the appropriate access rights can list containers or the blobs within a container; if access is not authorized, the attempts result in error codes shown in the StatusCode field. A high number of unauthorized attempts resulting in errors is a key indicator of suspicious activity or misconfiguration.
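The unauthorized-enumeration heuristic just described can be sketched as a per-IP failure count. This is a minimal illustration over invented rows, keeping the same operation names and the HTTP >= 400 convention used in this article's hunting queries.

```python
from collections import Counter

# the enumeration operations of interest, as named in StorageBlobLogs
ENUM_OPS = {"ListBlobs", "ListContainers", "ListFiles", "ListQueues"}

# Hypothetical sketch: count failed (status >= 400) enumeration attempts
# per caller IP so spikes stand out.
def unauthorized_by_ip(rows):
    failures = Counter()
    for r in rows:
        if r["OperationName"] in ENUM_OPS and int(r["StatusCode"]) >= 400:
            failures[r["CallerIpAddress"]] += 1
    return failures

# invented sample rows: two denials from one IP, one success from another
rows = [
    {"OperationName": "ListContainers", "StatusCode": "403", "CallerIpAddress": "203.0.113.7"},
    {"OperationName": "ListBlobs", "StatusCode": "404", "CallerIpAddress": "203.0.113.7"},
    {"OperationName": "ListBlobs", "StatusCode": "200", "CallerIpAddress": "198.51.100.2"},
]
print(unauthorized_by_ip(rows))  # Counter({'203.0.113.7': 2})
```

As with the KQL version, the threshold at which a failure count becomes suspicious depends on your environment.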
Figure 4: Failed attempts to list blobs/containers

Query: This query serves as a starting point for detecting a spike in unauthorized attempts to enumerate containers, blobs, files, or queues.

```kusto
union Storage*
| extend StatusCodeLong = tolong(StatusCode)
| where OperationName has_any ("ListBlobs", "ListContainers", "ListFiles", "ListQueues")
| summarize MinTime = min(TimeGenerated),
            MaxTime = max(TimeGenerated),
            OperationCount = count(),
            UnauthorizedAccess = countif(StatusCodeLong >= 400),
            OperationNames = make_set(OperationName),
            ErrorStatusCodes = make_set_if(StatusCode, StatusCodeLong >= 400),
            StorageAccountName = make_set(AccountName)
    by CallerIpAddress
| where UnauthorizedAccess > 0
```

Note: The UnauthorizedAccess filter threshold must be adjusted based on your environment.

Data exfiltration

Let's use StorageBlobLogs to analyze two different attack scenarios.

Scenario 1: Compromised user has access to a storage account

In this scenario, the threat actor either compromises a user account with access to one or more storage accounts or obtains a leaked Account Access Key or SAS token. With a compromised identity, the threat actor can either enumerate all storage accounts the user has permissions to (as covered in Enumeration) or directly access a specific blob or container if the leaked key grants scoped access.

Account Access Keys (AccountKey) / SAS tokens

The threat actor might use the storage account's access keys or a SAS token retrieved through the compromised user account (provided they have the appropriate permissions), or the leaked credential itself may already be an Account Access Key or SAS token. Access keys grant complete control over the storage account, while a SAS token grants time-bound access; either can authorize data transfers, enabling the threat actor to view, upload, or download data at will.
Figure 5: Account key used to download/view data

Figure 6: SAS token used to download/view data

Query: This query helps identify cases where an AccountKey or SAS was used to download/view data from a storage account.

```kusto
StorageBlobLogs
| where OperationName has "GetBlob"
| where AuthenticationType in~ ("AccountKey", "SAS")
| where StatusText in~ ("Success", "AnonymousSuccess", "SASSuccess")
| project TimeGenerated, AccountName, OperationName, RequesterUpn, AuthenticationType, Uri, ObjectKey, StatusText, UserAgentHeader, CallerIpAddress, AuthenticationHash
```

User Delegation SAS

Available for Blob Storage only, a User Delegation SAS functions similarly to a regular SAS but is secured with Microsoft Entra ID credentials rather than the storage account key. The creation of a User Delegation SAS is tracked as a corresponding "GetUserDelegationKey" entry in the StorageBlobLogs table.

Figure 7: User-delegation key created

Query: This query helps identify the creation of a user-delegation key. The RequesterUpn field provides the identity of the user account creating the key.

```kusto
StorageBlobLogs
| where OperationName has "GetUserDelegationKey"
| where StatusText in~ ("Success", "AnonymousSuccess", "SASSuccess")
| project TimeGenerated, AccountName, OperationName, RequesterUpn, Uri, CallerIpAddress, ObjectKey, AuthenticationType, StatusCode, StatusText
```

Figure 8: User-delegation activity to download/read

Query: This query helps identify cases where a download/read action was initiated while authenticated via a user-delegation key.

```kusto
StorageBlobLogs
| where AuthenticationType has "DelegationSas"
| where OperationName has "GetBlob"
| where StatusText in~ ("Success", "AnonymousSuccess", "SASSuccess")
| project Type, TimeGenerated, OperationName, AccountName, UserAgentHeader, ObjectKey, AuthenticationType, StatusCode, CallerIpAddress, Uri
```

The "GetUserDelegationKey" operation within StorageBlobLogs captures the identity responsible for generating a User Delegation SAS token.
The AuthenticationHash field shows the key used to sign the SAS token. When the SAS token is used, all operations include the same SAS signature hash, enabling you to correlate the various actions performed with this token even if the originating IP addresses differ.

Query: The following query extracts the SAS signature hash from the AuthenticationHash field. This helps to track the token's usage, providing an audit trail to identify potentially malicious activity.

```kusto
StorageBlobLogs
| where AuthenticationType has "DelegationSas"
| extend SasSHASignature = extract(@"SasSignature\((.*?)\)", 1, AuthenticationHash)
| project Type, TimeGenerated, OperationName, AccountName, UserAgentHeader, ObjectKey, AuthenticationType, StatusCode, CallerIpAddress, SasSHASignature
```

In the next scenario, we examine how a threat actor already in control of a compromised identity uses Azure RBAC to assign permissions. With administrative privileges over a storage account, the threat actor can grant access to additional accounts and establish long-term access to the storage account.

Scenario 2: A user account controlled by the threat actor has elevated access to the storage account

An identity named Bob was identified as compromised due to a sign-in from an unauthorized IP address; the investigation is triggered when the Azure sign-in logs reveal logins originating from an unexpected location. This account has Owner permissions on a resource group, allowing full access and role assignments in Azure RBAC. The threat actor grants access to another account they control, as shown in the AzureActivity logs.
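The signature-hash correlation in the query above can be mirrored offline. Below is a minimal sketch that applies the same `SasSignature(...)` extraction pattern to exported rows and groups caller IPs by token; the row shapes and function name are invented.

```python
import re

# same extraction pattern as the KQL extract() above
SIG_RE = re.compile(r"SasSignature\((.*?)\)")

# Hypothetical sketch: group operations by the SAS signature hash embedded
# in AuthenticationHash, so actions made with the same token can be tied
# together even when source IPs differ.
def group_by_signature(rows):
    groups = {}
    for r in rows:
        m = SIG_RE.search(r["AuthenticationHash"])
        if m:
            groups.setdefault(m.group(1), []).append(r["CallerIpAddress"])
    return groups

# invented sample rows: the same token used from two different IPs
rows = [
    {"AuthenticationHash": "SasSignature(abc123)", "CallerIpAddress": "203.0.113.7"},
    {"AuthenticationHash": "SasSignature(abc123)", "CallerIpAddress": "198.51.100.2"},
]
print(group_by_signature(rows))  # {'abc123': ['203.0.113.7', '198.51.100.2']}
```

Both IPs map to the same signature, which is exactly the pivot an investigator needs when a delegated token has leaked.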
The AzureActivity logs in the figure below show that the "Reader and Data Access" and "Storage Account Contributor" roles were assigned to Hacker2 for a storage account within Azure:

Figure 9: Assigning a role to a user

Query: This query helps identify whether a role has been assigned to a user.

```kusto
AzureActivity
| where Caller has "Bob"
| where OperationNameValue has "MICROSOFT.AUTHORIZATION/ROLEASSIGNMENTS/WRITE"
| extend RoleDefinitionIDProperties = parse_json(Properties)
| evaluate bag_unpack(RoleDefinitionIDProperties)
| extend RoleDefinitionIdExtracted = tostring(todynamic(requestbody).Properties.RoleDefinitionId)
| extend RoleDefinitionIdExtracted = extract(@"roleDefinitions/([a-f0-9-]+)", 1, RoleDefinitionIdExtracted)
| extend RequestedRole = case(
    RoleDefinitionIdExtracted == "ba92f5b4-2d11-453d-a403-e96b0029c9fe", "Storage Blob Data Contributor",
    RoleDefinitionIdExtracted == "b7e6dc6d-f1e8-4753-8033-0f276bb0955b", "Storage Blob Data Owner",
    RoleDefinitionIdExtracted == "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1", "Storage Blob Data Reader",
    RoleDefinitionIdExtracted == "db58b8e5-c6ad-4a2a-8342-4190687cbf4a", "Storage Blob Delegator",
    RoleDefinitionIdExtracted == "c12c1c16-33a1-487b-954d-41c89c60f349", "Reader and Data Access",
    RoleDefinitionIdExtracted == "17d1049b-9a84-46fb-8f53-869881c3d3ab", "Storage Account Contributor",
    "")
| extend roleAssignmentScope = tostring(todynamic(Authorization_d).evidence.roleAssignmentScope)
| extend AuthorizedFor = tostring(todynamic(requestbody).Properties.PrincipalId)
| extend AuthorizedType = tostring(todynamic(requestbody).Properties.PrincipalType)
| project TimeGenerated, RequestedRole, roleAssignmentScope, ActivityStatusValue, Caller, CallerIpAddress, CategoryValue, ResourceProviderValue, AuthorizedFor, AuthorizedType
```

Note: Refer to this resource for additional Azure built-in role IDs that can be used in this query.

The sign-in logs indicate that Hacker2 successfully accessed Azure from the same malicious IP address.
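The role-ID-to-name mapping from the query's case() expression can also be kept as a reusable lookup when triaging role-assignment events outside of KQL. This is a minimal sketch using only the IDs already listed in the query above; the function and sample call are invented.

```python
# Role definition IDs and names taken from the case() expression in the
# query above; extend this map with additional built-in role IDs as needed.
ROLE_NAMES = {
    "ba92f5b4-2d11-453d-a403-e96b0029c9fe": "Storage Blob Data Contributor",
    "b7e6dc6d-f1e8-4753-8033-0f276bb0955b": "Storage Blob Data Owner",
    "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1": "Storage Blob Data Reader",
    "db58b8e5-c6ad-4a2a-8342-4190687cbf4a": "Storage Blob Delegator",
    "c12c1c16-33a1-487b-954d-41c89c60f349": "Reader and Data Access",
    "17d1049b-9a84-46fb-8f53-869881c3d3ab": "Storage Account Contributor",
}

def role_name(role_definition_id):
    # fall back to the raw GUID for roles not in the triage map
    return ROLE_NAMES.get(role_definition_id, role_definition_id)

print(role_name("2a2b9908-6ea1-4ae2-8e65-a410df84e7d1"))  # Storage Blob Data Reader
```

Keeping the map in one place avoids re-deriving role names every time an unfamiliar GUID appears in a role-assignment event.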
Since specific storage account roles were assigned to Hacker2, we can examine StorageBlobLogs to determine whether the user accessed data in the blob storage. The activities within the blob storage include several entries attributed to the Hacker2 user, as shown in the figure below.

Figure 10: User access to blob storage

Query: This query helps identify access to blob storage from a malicious IP.

```kusto
StorageBlobLogs
| where TimeGenerated > ago(30d)
| where CallerIpAddress has {{IPv4}}
| extend ObjectName = ObjectKey
| project TimeGenerated, AccountName, OperationName, AuthenticationType, StatusCode, StatusText, RequesterUpn, CallerIpAddress, UserAgentHeader, ObjectName, Category
```

An analysis of the StorageBlobLogs, as shown in the figure below, reveals that Hacker2 performed a "StorageRead" operation on three files, indicating that data was accessed or downloaded from blob storage.

Figure 11: Blob storage read/download activities

The UserAgentHeader suggests that the storage account was accessed through the Azure portal. Consequently, the SignInLogs can offer further detailed information.
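The user-agent attribution used in this investigation can be sketched as a small classifier. This is a simplified illustration with invented sample headers; the substrings match those checked in this article's queries (Azure Storage Explorer, AzCopy, with everything else treated as the Azure portal).

```python
# Hypothetical sketch: attribute a blob operation to a client tool from its
# UserAgentHeader, using the same substrings the hunting queries check.
def access_tool(user_agent):
    if "Microsoft Azure Storage Explorer" in user_agent:
        return "Azure Storage Explorer"
    if "AzCopy" in user_agent:
        return "AzCopy"
    # browser-style agents are treated as Azure portal access
    return "Azure portal"

print(access_tool("AzCopy/10.21.2"))                 # AzCopy
print(access_tool("Mozilla/5.0 (Windows NT 10.0)"))  # Azure portal
```

Note that a user agent is attacker-controllable, so this attribution is a triage aid rather than proof of the tool used.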
Query: This query checks for read, write, or delete operations in blob storage and identifies their access methods.

```kusto
StorageBlobLogs
| where TimeGenerated > ago(30d)
| where CallerIpAddress has {{IPv4}}
| where OperationName has_any ("PutBlob", "GetBlob", "DeleteBlob") and StatusText == "Success"
| extend Notes = case(
    OperationName == "PutBlob" and Category == "StorageWrite" and UserAgentHeader has "Microsoft Azure Storage Explorer", "Blob was written through Azure Storage Explorer",
    OperationName == "PutBlob" and Category == "StorageWrite" and UserAgentHeader has "AzCopy", "Blob was written through AzCopy command",
    OperationName == "PutBlob" and Category == "StorageWrite" and not(UserAgentHeader has_any ("AzCopy", "Microsoft Azure Storage Explorer")), "Blob was written through Azure portal",
    OperationName == "GetBlob" and Category == "StorageRead" and UserAgentHeader has "Microsoft Azure Storage Explorer", "Blob was read/downloaded through Azure Storage Explorer",
    OperationName == "GetBlob" and Category == "StorageRead" and UserAgentHeader has "AzCopy", "Blob was read/downloaded through AzCopy command",
    OperationName == "GetBlob" and Category == "StorageRead" and not(UserAgentHeader has_any ("AzCopy", "Microsoft Azure Storage Explorer")), "Blob was read/downloaded through Azure portal",
    OperationName == "DeleteBlob" and Category == "StorageDelete" and UserAgentHeader has "Microsoft Azure Storage Explorer", "Blob was deleted through Azure Storage Explorer",
    OperationName == "DeleteBlob" and Category == "StorageDelete" and UserAgentHeader has "AzCopy", "Blob was deleted through AzCopy command",
    OperationName == "DeleteBlob" and Category == "StorageDelete" and not(UserAgentHeader has_any ("AzCopy", "Microsoft Azure Storage Explorer")), "Blob was deleted through Azure portal",
    "")
| project TimeGenerated, AccountName, OperationName, AuthenticationType, StatusCode, CallerIpAddress, ObjectName = ObjectKey, Category, RequesterUpn, Notes
```

The log analysis confirms that the threat actor successfully
extracted data from a storage account.

Storage Account summary

Detecting misuse within a storage account can be challenging, as routine operations may hide malicious activities. Enabling logging is therefore essential for investigation, as it helps track access, especially when compromised identities or misused SAS tokens or keys are involved. Unusual changes in user permissions and irregularities in role assignments, which are documented in the Azure Activity logs, can signal unauthorized access, while Microsoft Entra ID sign-in logs can help identify compromised UPNs and suspicious IP addresses that tie into OAuth-based storage account access. By thoroughly analyzing storage account logs, which detail operation types and access methods, investigators can identify abuse and determine the scope of compromise. That not only helps when remediating the environment but can also provide guidance on preventing unauthorized data theft from occurring again.