# Cloud forensics: Prepare for the worst - implement security baselines for forensic readiness in Azure
## Forensic readiness in the cloud

Forensic readiness in the cloud refers to an organization's ability to collect, preserve, and analyze digital evidence in preparation for security incidents. It is increasingly important as more organizations migrate workloads to the cloud, and achieving an appropriate security posture ensures that organizations are adequately equipped for forensic investigations. This requires more than the mere presence of logs; logging and monitoring configurations must be thoughtfully scoped and proactively enabled.

Cloud environments also present unique challenges for forensic investigations. First, capturing the right evidence can be difficult due to the dynamic nature of cloud data. Second, under the shared responsibility model, organizations must work closely with their cloud providers to ensure preparedness for forensic investigations. Azure's multi-tenant architecture adds another layer of complexity, as data from multiple customers may reside on the same physical hardware, making strict access controls and robust logging essential. To maintain forensic readiness, organizations must implement comprehensive monitoring and logging across all cloud services so that evidence is available when needed.

## Preparing your Azure environment for forensic readiness

When the Azure environment is set up correctly, with accurate logging in place, it becomes easier to quickly identify the scope of a security breach, trace the attacker's actions, and identify the tactics, techniques, and procedures (TTPs) employed by a threat actor. By implementing these measures, organizations ensure that the data required to support forensic investigations is available, which in turn supports compliance with auditing requirements, improves security, and allows security incidents to be resolved efficiently. With that granularity of log data in the environment, organizations are far better equipped to respond to an incident when it occurs.

## Case study: Forensic investigation disrupted by a lack of forensic readiness in Azure

In a recent cybersecurity incident, a large company utilizing Azure experienced a major setback in its forensic investigation. This case study outlines the critical steps and logs that were missed, leading to a disrupted investigation.

### Step 1: Initial detection of the compromise

The organization's Security Operations Centre (SOC) identified anomalous outbound traffic originating from a compromised Azure virtual machine (VM) named "THA-VM." Unfortunately, the absence of diagnostic settings significantly hindered the investigation. Without access to guest OS logs and data plane logs, the team was unable to gain deeper visibility into the threat actor's activities. The lack of critical telemetry, such as Windows Event Logs, Syslog, Network Security Group (NSG) flow logs, and resource-specific data plane access logs, posed a major challenge in assessing the full scope of the compromise. Had these diagnostic settings been properly configured, the investigation team would have been better equipped to uncover key indicators of compromise, including local account creation, process execution, command-and-control (C2) communications, and potential lateral movement. A readiness check like the sketch below can reveal such telemetry gaps before an incident occurs.

Figure 1: Diagnostic settings not configured on the virtual machine resource
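As a hedged illustration of such a readiness check (not from the original post), the following query, written against standard Log Analytics tables (Heartbeat, Event, Syslog), lists machines that send agent heartbeats but no recent Windows Event or Syslog telemetry. Table availability depends on which agents and data collection rules are deployed in your workspace, so treat this as a sketch rather than a definitive audit.

```kusto
// Sketch: find VMs that are alive (heartbeating) but emit no recent
// Windows Event or Syslog data - a hint that diagnostic/agent data
// collection may not be fully configured for them.
let reporting = union
    (Event | where TimeGenerated > ago(7d) | distinct Computer),
    (Syslog | where TimeGenerated > ago(7d) | distinct Computer);
Heartbeat
| where TimeGenerated > ago(7d)
| distinct Computer
| join kind=leftanti (reporting) on Computer
```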
### Step 2: Evidence collection challenges

During the forensic analysis of the compromised virtual machine, the team attempted to capture a snapshot of the OS disk but discovered that restore points had not been configured and no backups were available, severely limiting their ability to preserve and examine critical disk-based artefacts such as malicious binaries, tampered system files, or unauthorized persistence mechanisms. Restore points, which are not enabled by default on Azure virtual machines, allow the creation of application-consistent or crash-consistent snapshots of all managed disks, including the OS disk. These snapshots are stored in a restore point collection and serve as a vital tool in forensic investigations, enabling analysts to preserve the exact state of a VM at a specific point in time and maintain evidence integrity throughout the investigation process.

### Step 3: Analysis of the storage blob

The team then turned to storage blobs after identifying unusual files that appeared to be associated with threat actor tool staging, such as scanning utilities and credential dumping tools. However, because diagnostic settings for the storage account had not been enabled, the investigators were unable to access essential data plane logs. These logs could have revealed who uploaded or accessed the blobs and when those actions occurred. Since storage diagnostics are not enabled by default in Azure, this oversight significantly limited visibility into attacker behavior and impeded efforts to reconstruct the timeline and scope of malicious activity, an essential component of any effective forensic investigation.

### Step 4: Slow response and escalation

In the absence of tailored logging and monitoring configurations, response timelines were delayed, and the full incident response process that the situation required was not initiated quickly enough to minimize impact.

### Step 5: Recovery and lessons learned

Despite the delays, the team pieced together elements of the story from the data they had available, but could not determine the initial access vector, largely because the necessary diagnostic data wasn't there. This absence of forensic readiness highlights the importance of configuring diagnostic settings, enabling snapshots, and using a centralized logging solution like Microsoft Sentinel, which brings all of this telemetry into a single pane of glass, providing real-time visibility and historical context in one place. This unified view enables faster incident detection, investigation, and response. Its built-in analytics and AI capabilities help surface anomalies that might otherwise go unnoticed, while retaining a searchable history of events for post-incident forensics.

## Recommended practices for forensic readiness in Azure

The table below outlines key recommendations for deploying and administering workloads securely and effectively in Azure. Each recommendation is categorized by focus area and includes a best practice, a specific action to take, and a reference to supporting documentation or resources to assist with implementation.

| Category | Best practice | Recommended action | Resource/Link |
|---|---|---|---|
| Identity and Access | Enable MFA for all users. | [ ] Enable Multi-Factor Authentication (MFA) for all Azure AD users. | MFA in Azure AD |
| | Monitor access reviews and role assignments. | [ ] Regularly review identities (SPNs, managed identities, users), role assignments, and permissions for anomalies. | Azure Identity Protection |
| | Implement RBAC with least privilege. | [ ] Use Role-Based Access Control (RBAC) and assign least-privilege roles to users. | Azure RBAC Overview |
| | Configure PIM for privileged roles. | [ ] Configure Privileged Identity Management (PIM) for all privileged roles; require approval for high-privilege roles. | PIM in Azure AD |
| | Enable sign-in and audit logs. | [ ] Ensure all sign-in activities and audit logs are enabled in Azure AD. | Azure Entra (AD) Sign-In Logs |
| | Use Conditional Access policies to protect high-risk resources from unauthorized access. | [ ] Set Conditional Access policies to enforce MFA or access restrictions based on conditions like risk or location. | Conditional Access in Azure Entra (AD) |
| Logging and Monitoring | Enable Azure Monitor. | [ ] Enable Azure Monitor to collect telemetry data from resources. | Azure Monitor Overview |
| | Activate Microsoft Defender for Cloud. | [ ] Activate and configure Microsoft Defender for Cloud for enhanced security monitoring. | Microsoft Defender for Cloud |
| | Enable diagnostic logging for VMs and applications. | [ ] Configure diagnostic logging for Azure VMs and other critical resources. | Azure Diagnostics Logging |
| | Centralize logs in a Log Analytics workspace. | [ ] Consolidate all logs into a Log Analytics workspace for centralized querying. | Log Analytics Workspace |
| | Set audit log retention to 365+ days. | [ ] Ensure audit logs are retained for a minimum of 365 days to meet forensic needs. | Audit Log Retention |
| | Enable advanced threat detection. | [ ] Enable Microsoft Defender for Cloud and Sentinel to detect anomalous behavior and security threats in real time. | Azure Sentinel Overview |
| Data Protection | Ensure data is encrypted at rest and in transit. | [ ] Enable encryption for data at rest and in transit for all Azure resources. | Azure Encryption |
| | Use Azure Key Vault for key management. | [ ] Store and manage encryption keys, certificates, and secrets in Azure Key Vault. | Azure Key Vault |
| | Rotate encryption keys regularly. | [ ] Regularly rotate encryption keys, certificates, and secrets in Azure Key Vault. | Manage Keys in Key Vault |
| | Configure immutable backups. | [ ] Set up immutable backups for critical data to prevent tampering. | Immutable Blob Storage |
| | Implement File Integrity Monitoring. | [ ] Enable File Integrity Monitoring in Azure Defender for Storage to detect unauthorized modifications. | Azure Defender for Storage |
| Network Security | Configure Network Security Groups (NSGs). | [ ] Set up NSGs to restrict inbound/outbound traffic for VMs and services. | Network Security Groups |
| | Enable DDoS Protection. | [ ] Implement DDoS Protection for critical resources to safeguard against distributed denial-of-service attacks. | DDoS Protection in Azure |
| | Use VPNs or ExpressRoute for secure connectivity. | [ ] Establish VPNs or ExpressRoute for secure, private network connectivity. | Azure VPN Gateway |
| Incident Response | Set up alerts for suspicious activities. | [ ] Configure alerts for suspicious activities such as failed login attempts or privilege escalation. | Create Alerts in Azure Sentinel |
| | Automate incident response. | [ ] Automate incident response workflows using Azure Automation or Logic Apps. | Azure Logic Apps |
| | Integrate threat intelligence with Sentinel. | [ ] Integrate external threat intelligence feeds into Microsoft Sentinel to enrich detection capabilities. | Threat Intelligence in Azure Sentinel |
| | Run advanced KQL queries for incident investigations. | [ ] Use Kusto Query Language (KQL) queries in Sentinel to investigate and correlate incidents. | KQL Queries in Sentinel |
| | Establish an incident response plan. | [ ] Document and formalize your organization's incident response plan with clear steps and procedures. | Incident Response in Azure |
| Policies and Processes | Define a forensic readiness policy. | [ ] Establish and document a forensic readiness policy that outlines roles, responsibilities, and procedures. | Azure Security Policies |
| | Conduct administrator training. | [ ] Provide regular training for administrators on security best practices, forensic procedures, and incident response. | Azure Security Training |

By using Microsoft's tools and implementing these recommended best practices, organizations can improve their forensic readiness and investigation capabilities in Azure. This approach not only helps in responding effectively to incidents but also strengthens an organization's overall security posture. By staying ahead of potential threats and maintaining forensic readiness, you'll be better equipped to protect your organization and meet regulatory requirements.

## Conclusion

Forensic readiness in Azure is not a one-time effort; it is an ongoing commitment that involves proactive planning, precise configuration, and strong coordination across security, operations, and governance teams. Key practices such as enabling diagnostic logging, centralizing telemetry, enforcing least-privilege access, and developing cloud-tailored incident response playbooks are essential. Together, these measures improve your ability to detect, investigate, and respond to security incidents in a timely and effective manner.

# How Microsoft Defender Experts uses AI to cut through the noise
Microsoft Defender Experts manages and investigates incidents for some of the world's largest organizations. We understand the challenges facing our customers and are always looking for ways to respond quicker and scale our services to meet their needs.

## Teaching AI to think like a security expert

We're leveraging AI to help Defender Experts expand our services and respond even faster to threats facing our customers. AI-based incident classification allows us to filter noise up front without compromising on detecting real threats. This capability is trained by security experts, built for precision, and designed to scale and act at speed.

Our approach doesn't just rely on static rules or traditional filtering. Instead, our AI is powered by insights from hundreds of thousands of real investigations conducted by Defender Experts security analysts. These investigations form a goldmine of expert knowledge: how analysts think, what signals they trust, and how they separate benign and false positives from true threats.

We use this historical intelligence to evaluate each new incident. AI-based incident classification looks at various signals, such as evidence, tenant details, context from IOCs, and threat intelligence, and assigns a similarity score based on those signals. Using a similarity algorithm, the system compares each new incident to known outcomes from the past, deciding whether it closely resembles true positives, false positives, or benign activity. At a certain threshold, it confidently assigns the grade. If the pattern matches past false positives, the system grades the incident as noise; if the pattern looks similar to a known higher-risk threat, it escalates it faster. This helps us focus first on what matters most: true, actionable threats, which results in quicker response times for our customers. The sketch below illustrates the general idea.
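To make the similarity idea concrete, here is a deliberately simplified KQL sketch against Microsoft Sentinel's SecurityIncident table. The real Defender Experts pipeline is internal and far richer (entity-level signals, IOC context, tenant details); matching only on incident title and thresholding at 0.8 are illustrative assumptions, not the production logic.

```kusto
// Simplified illustration: score new incidents by how often similar past
// incidents (matched here only by title) were graded as false positives.
let history = SecurityIncident
    | where TimeGenerated > ago(90d)
    | where Classification in ("FalsePositive", "BenignPositive", "TruePositive")
    | summarize arg_max(TimeGenerated, Classification) by IncidentNumber, Title
    | project Title, PastClassification = Classification;
SecurityIncident
| where TimeGenerated > ago(1d) and Classification == "Undetermined"
| join kind=inner (history) on Title
| summarize Matches = count(),
            FalsePositiveMatches = countif(PastClassification == "FalsePositive")
    by IncidentNumber, Title
| extend NoiseScore = todouble(FalsePositiveMatches) / Matches
| where NoiseScore > 0.8  // illustrative threshold; flagged items still go to an analyst
```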
## Human-centric and safe

We know that trust is everything in cybersecurity. So even though AI helps us filter noise, we've built guardrails to make sure no real threats are missed:

- Tiered decisioning: incidents that are classified as noise are reviewed by Defender Experts analysts to ensure they match the classification and other criteria for noise.
- Feedback loops: for continuous learning, anything classified as noise is sent to an analyst for validation so there are no accidental misses of true threats. Their feedback continuously improves the system.
- Transparency: classification decisions are visible, helping analysts understand why something was or wasn't marked as noise.

This approach strikes the right balance. AI does the heavy lifting up front, and our human security experts remain firmly in control of what is investigated.

## Quicker response for our customers

AI-based incident classification in Defender Experts means:

- 50% of noise is automatically triaged by AI-based incident classification with 100% precision.
- Our experts respond faster to meaningful threats in our customers' environments.

"We no longer waste time chasing dead ends. The system helps us focus on what truly matters and our customers appreciate how quickly we can respond." - Defender Experts Tier 2 Analyst

## What's next?

We're continuing to refine this system with more granular risk scoring per entity, deeper tenant-based similarity correlation, IOC-based weighting, and additional real-time feedback from Defender Experts analysts.

## Final thoughts

AI alone isn't the answer, but AI guided by experts is a force multiplier. With AI-based incident classification, Defender Experts is showing what the future of the SOC can look like: faster, smarter, safer, and scalable. AI-based classification has reduced the noise in the analyst queue by 50% with 100% accuracy, saving analyst time so they can focus on what matters most.

If you're a Defender Experts customer, you're already seeing the benefit of quicker response times to true security threats. If you're a security leader struggling with alert overload, Microsoft Defender Experts for XDR, Microsoft's MXDR (managed extended detection and response) service, can deliver around-the-clock, expert-led protection. For more information, please visit Microsoft Defender Experts for XDR | Microsoft Security.

# Elevate your protection with expanded Microsoft Defender Experts coverage
Co-authors: Henry Yan, Sr. Product Marketing Manager, and Sylvie Liu, Principal Product Manager

Security Operations Centers (SOCs) are under extreme pressure due to a rapidly evolving threat landscape, an increase in the volume and frequency of attacks driven by AI, and a widening skills gap. To address these challenges, organizations across industries rely on Microsoft Defender Experts for XDR and Microsoft Defender Experts for Hunting to bolster their SOC and stay ahead of emerging threats. We are committed to continuously enhancing Microsoft Defender Experts services to help our customers safeguard their organizations and focus on what matters most.

We are excited to announce the general availability of expanded Defender Experts coverage. With this update, Defender Experts for XDR and Defender Experts for Hunting now deliver around-the-clock protection and proactive threat hunting for your cloud workloads, starting with hybrid and multicloud servers in Microsoft Defender for Cloud. Additionally, third-party network signals from Palo Alto Networks, Zscaler, and Fortinet can now be used for incident enrichment in Defender Experts for XDR, enabling faster and more accurate detection and response.

## Extend 24/7, expert-led defense and threat hunting to your hybrid and multicloud servers

As cloud adoption accelerates, the sophistication and frequency of cloud attacks are on the rise. According to IDC, in 2024 organizations experienced an average of more than nine cloud security incidents, with 89% reporting a year-over-year increase. Furthermore, cloud security is the leading skills gap, with almost 40% of respondents in the O'Reilly 2024 State of Security Survey identifying it as the top area in need of skilled professionals.

Virtual machines (VMs) are the backbone of cloud infrastructure, used to run critical applications with sensitive data while offering flexibility, efficiency, and scalability. This makes them attractive targets, as compromised VMs can be used to carry out malicious activities such as data exfiltration, lateral movement, and resource exploitation.

Defender Experts for XDR now delivers 24/7, expert-led managed extended detection and response (MXDR) for your hybrid and multicloud servers in Defender for Cloud. Our security analysts will investigate, triage, and respond to alerts on your on-premises and cloud VMs across Microsoft Azure, Amazon Web Services, and Google Cloud Platform. With Defender Experts for Hunting, which is included in Defender Experts for XDR and also available as a standalone service, our expert threat hunters can now hunt across hybrid and multicloud servers in addition to endpoints, identities, emails, and cloud apps, reducing blind spots and uncovering emerging cloud threats.

Figure 1: Incidents from servers in Defender for Cloud investigated by Defender Experts

## Incident enrichment for improved detection accuracy and faster response

By enriching Defender incidents with third-party network signals from Palo Alto Networks (PAN-OS Firewall), Zscaler (Zscaler Internet Access and Zscaler Private Access), and Fortinet (FortiGate Next-Generation Firewall), our security analysts gain deeper insight into attack paths. The additional context helps Defender Experts for XDR identify patterns and connections across domains, enabling more accurate detection and faster response to threats.
Figure 2: Third-party enrichment data in Defender Experts for XDR report

In this hypothetical scenario, we explore how incident enrichment with third-party network signals helped Defender Experts for XDR uncover lateral movement and potential data exfiltration attempts.

- Detection: Microsoft Defender for Identity flagged an "Atypical Travel" alert for User A, showing sign-ins from India and Germany within a short timeframe using different devices and IPs, suggesting possible credential compromise or session hijacking. However, initial identity and cloud reviews showed no signs of malicious activity.
- Correlation: Through incident enrichment with third-party network signals, Palo Alto firewall logs revealed attempts to access unauthorized remote tools, while Zscaler proxy data showed encrypted traffic to an unprotected legacy SharePoint server.
- Investigation: Our security analysts uncovered that the attacker authenticated from a managed mobile device in Germany. Due to token reuse and a misconfigured Mobile Device Management profile, the device passed posture checks and bypassed Conditional Access, enabling access to internal SharePoint. Insights from third-party network signals helped Defender Experts for XDR confirm lateral movement and potential data exfiltration.
- Response: Once malicious access was confirmed, Defender Experts for XDR initiated a coordinated response, revoking active tokens, isolating affected devices, and hardening mobile policies to enforce Conditional Access.

## Flexible, cost-effective pricing

Defender Experts coverage of servers in Defender for Cloud is priced per server per month, with charges based on the total number of server hours each month. You can scale your servers as needed while keeping costs proportional, because you only pay for Defender Experts coverage based on the resources you use. For example, if you have a total of 4,000 hours across all servers protected by Defender for Cloud in June (a month with 720 hours), you will be charged for 5.56 servers that month (4,000 / 720 = 5.56). There is no additional charge for third-party network signal enrichment beyond the data ingestion charge through Microsoft Sentinel. Please contact your Microsoft account representative for more information on pricing.

## Get started today

Defender Experts coverage of servers in Defender for Cloud is available as an add-on to Defender Experts for XDR and Defender Experts for Hunting. To enable coverage, you must have the following:

- A Defender Experts for XDR or Defender Experts for Hunting license
- Defender for Servers Plan 1 or Plan 2 in Defender for Cloud

You only need a minimum of one Defender Experts for XDR or Defender Experts for Hunting license to enable coverage for all your servers in Defender for Cloud. If you are interested in purchasing Defender Experts for XDR or the add-on for Defender Experts coverage of servers in Defender for Cloud, please complete this interest form. Third-party network signals for enrichment are available only to Defender Experts for XDR customers.

To enable third-party network signals for enrichment, you must have the following:

- A Microsoft Sentinel instance deployed
- Microsoft Sentinel onboarded to the Microsoft Defender portal
- At least one of the supported network signals ingested through Sentinel built-in connectors: Palo Alto Networks (PAN-OS Firewall), Zscaler (Zscaler Internet Access and Zscaler Private Access), or Fortinet (FortiGate Next-Generation Firewall)

A quick ingestion check such as the sketch below can confirm these signals are flowing before you rely on them for enrichment.
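As a hedged sketch (not from the original post): the CEF-based connectors for these products typically land events in Sentinel's CommonSecurityLog table, so a query along these lines can verify recent ingestion. The exact DeviceVendor strings vary by product and connector configuration, so adjust the filter to what you actually see in your workspace.

```kusto
// Verify that supported third-party network signals arrived recently.
// DeviceVendor values shown here are assumptions; check your own data.
CommonSecurityLog
| where TimeGenerated > ago(24h)
| where DeviceVendor has_any ("Palo Alto", "Zscaler", "Fortinet")
| summarize Events = count(), LastSeen = max(TimeGenerated)
    by DeviceVendor, DeviceProduct
```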
If you are an existing Defender Experts for XDR customer and are interested in enabling third-party network signals for enrichment, please reach out to your Service Delivery Manager.

## Learn more

Technical documentation:

- Microsoft Defender Experts for XDR
- Microsoft Defender Experts for Hunting
- Third-party network signals for enrichment
- Plan Defender for Servers deployment
- Defender Experts Ninja Training

# Cloud forensics: Why enabling Microsoft Azure Storage Account logs matters
Co-authors: Christoph Dreymann, Shiva P

## Introduction

Azure Storage Accounts are frequently targeted by threat actors whose goal is to exfiltrate sensitive data to external infrastructure under their control. Because diagnostic logging is not always fully enabled by default, valuable evidence of their malicious actions may be lost. In this blog, we explore realistic attack scenarios and demonstrate the types of artifacts those activities generate. By properly enabling Microsoft Azure Storage Account logs, investigators gain a better understanding of the scope of an incident. The information can also guide remediation of the environment and help prevent data theft from recurring.

## Storage Account

A Storage Account provides scalable, secure, and highly available storage for storing and managing data objects. Because of the variety of sensitive data it can hold, it is a highly valued target for threat actors, who exploit misconfigurations, weak access controls, and leaked credentials to gain unauthorized access. Key risks include Shared Access Signature (SAS) token misuse, which allows threat actors to access or modify exposed blob storage, and Storage Account key exposure, which can grant privileged access to the data plane.

Investigating storage-related security incidents requires familiarity with Azure activity logs and diagnostic logs. The diagnostic log types for Storage Accounts are StorageBlob, StorageFile, StorageQueue, and StorageTable. These logs can help identify unusual access patterns, role changes, and unauthorized SAS token generation. This blog centers on StorageBlob activity logs.

## Storage Account logging

Logs for a Storage Account aren't enabled by default. These logs capture operations, requests, and usage, such as read, write, and delete actions/requests on storage objects like blobs, queues, files, or tables.

NOTE: There are no license requirements to enable Storage Account logging, but Log Analytics charges based on ingestion and retention (Pricing - Azure Monitor | Microsoft Azure). More information on enabling logging for a Storage Account can be found here.

## Notable fields

The log entries contain various fields that are useful not only during or after an incident but also for general monitoring of a storage account during normal operations (for a full list, see what data is available in the Storage Logs). Once storage logging is enabled, one of the key tables within Log Analytics is StorageBlobLogs, which provides details about blob storage operations, including read, write, and delete actions. Key columns such as OperationName, AuthenticationType, StatusText, and UserAgentHeader capture essential information about these activities.

The OperationName field identifies operations on a storage account, such as "PutBlob" for uploads or "DeleteBlob" and "DeleteFile" for deletions. The UserAgentHeader field offers valuable insight into the tools used to access blob storage. Access through the Azure portal is typically logged with a generic user agent indicating the application used, such as a web browser like Mozilla Firefox. In contrast, tools like AzCopy or Microsoft Azure Storage Explorer are explicitly identified in the logs. Analyzing the UserAgentHeader therefore provides crucial detail about the access method, helping determine how the blob storage was accessed. The sketch below shows a quick way to profile this.
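As a hedged starting point (not part of the original post), a simple aggregation over StorageBlobLogs gives a baseline of how an account is normally accessed, which makes anomalous tools or authentication types stand out:

```kusto
// Baseline sketch: profile blob operations by tool and authentication type.
StorageBlobLogs
| where TimeGenerated > ago(7d)
| summarize Operations = count()
    by OperationName, AuthenticationType, UserAgentHeader
| order by Operations desc
```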
The following table describes additional investigation fields.

| Field name | Description |
|---|---|
| TimeGenerated [UTC] | The date and time of the operation request. |
| AccountName | Name of the Storage Account. |
| OperationName | Name of the operation. A detailed list for StorageBlob operations can be found here. |
| AuthenticationType | The type of authentication that was used to make this request. |
| StatusCode | The HTTP status code for the request. If the request is interrupted, this value might be set to Unknown. |
| StatusText | The status of the requested operation. |
| Uri | Uniform resource identifier that is requested. |
| CallerIpAddress | The IP address of the requester, including the port number. |
| UserAgentHeader | The User-Agent header value. |
| ObjectKey | The path of the object requested. |
| RequesterUpn | User Principal Name of the requester. |
| AuthenticationHash | Hash of the authentication token used during a request. A request authenticated with a SAS token includes a SAS signature specifying the hash derived from the signature part of the SAS token. |

For a full list, see what data is available in the Storage Logs.

## How a threat actor can access a Storage Account

Threat actors can access a Storage Account through Azure-assigned RBAC, a SAS token (including a user delegation SAS token), Azure Storage Account keys, or anonymous access (if configured).

### Storage Account access keys

Azure Storage Account access keys are shared secrets that enable full access to Azure storage resources. When creating a storage account, Azure generates two access keys; either can be used to authenticate to the storage account. These keys are permanent and do not expire. Storage Account owners, as well as roles such as Contributor or any other role granted the Microsoft.Storage/storageAccounts/listKeys/action permission, can retrieve and use these credentials to access the storage account.

Account access keys can be rotated/regenerated, but if this is done unintentionally, it can disrupt applications or services that depend on the key for authentication. This action also invalidates any SAS tokens derived from that key, potentially revoking access for dependent workflows. Monitoring key rotations can help detect unexpected changes and mitigate disruptions.

Query: This query can help identify instances of account key rotations in the logs.

```kusto
AzureActivity
| where OperationNameValue has "MICROSOFT.STORAGE/STORAGEACCOUNTS/REGENERATEKEY/ACTION"
| where ActivityStatusValue has "Start"
| extend resource = parse_json(todynamic(Properties).resource)
| extend requestBody = parse_json(todynamic(Properties).requestbody)
| project TimeGenerated, OperationNameValue, resource, requestBody, Caller, CallerIpAddress
```

### Shared Access Signature

SAS tokens offer a granular method for controlling access to Azure storage resources. A SAS token specifies the permitted actions on a resource and their duration. Tokens can be generated for blobs, queues, tables, and file shares within a storage account, providing precise control over data access. A SAS token allows access via a signed URL. A Storage Account owner can generate SAS tokens and connection strings for various resources within the storage account (e.g., blobs, containers, tables) without restrictions. Additionally, roles with Microsoft.Storage/storageAccounts/listKeys/action rights can also generate SAS tokens. SAS tokens enable access to storage resources using tools such as Azure Storage Explorer, the Azure CLI, or PowerShell.
It is important to note that the logs do not indicate when a SAS token was generated (see How a shared access signature works). However, SAS usage can be inferred by tracking configuration changes that enable the storage account key access option, which is disabled by default.

Figure 1: Configuration setting to enable account key access

Query: This query is designed to detect configuration changes made to enable access via storage account keys.

```kusto
AzureActivity
| where OperationNameValue has "MICROSOFT.STORAGE/STORAGEACCOUNTS/WRITE"
| where ActivityStatusValue has "Success"
| extend allowSharedKeyAccess = parse_json(tostring(parse_json(tostring(parse_json(Properties).responseBody)).properties)).allowSharedKeyAccess
| where allowSharedKeyAccess == "true"
```

### User delegation Shared Access Signature

A user delegation SAS is a type of SAS token that is secured with Microsoft Entra ID credentials rather than the storage account key. For more details, see Authorize a user delegation SAS. To request a SAS token using the user delegation key, the identity must hold the Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey action (see Assign permissions with RBAC).

### Azure Role-Based Access Control

A threat actor must identify a target (an identity) that can assign roles or already holds suitable RBAC roles. To assign Azure RBAC roles, an identity must have Microsoft.Authorization/roleAssignments/write, which allows assignment of the roles necessary for accessing storage accounts. Examples of roles that grant access to data within a Storage Account (see Azure built-in roles for blob):

- Storage Account Contributor (read, write, manage access)
- Storage Blob Data Contributor (read, write)
- Storage Blob Data Owner (read, write, manage access)
- Storage Blob Data Reader (read only)

Additionally, to access blob data in the Azure portal, a user must also be assigned the Reader role (see Assign an Azure role). More information about built-in roles can be found in Azure built-in roles for Storage.

### Anonymous access

If the storage account setting 'Allow Blob anonymous access' is enabled and a container is created with anonymous read access, a threat actor can access the storage contents from the internet without any authorization.

Figure 2: Configuration settings for Blob anonymous access and container-level anonymous access
Query: This query helps identify successful configuration changes that enable anonymous access.

```kusto
AzureActivity
| join kind=rightouter (AzureActivity
    | where TimeGenerated > ago(30d)
    | where OperationNameValue has "MICROSOFT.STORAGE/STORAGEACCOUNTS/WRITE"
    | where Properties has "allowBlobPublicAccess"
    | extend ProperTies = parse_json(Properties)
    | evaluate bag_unpack(ProperTies)
    | extend allowBlobPublicAccess = todynamic(requestbody).properties["allowBlobPublicAccess"]
    | where allowBlobPublicAccess has "true"
    | summarize by CorrelationId) on CorrelationId
| extend ProperTies = parse_json(Properties)
| evaluate bag_unpack(ProperTies)
| extend allowBlobPublicAccess_req = todynamic(requestbody).properties["allowBlobPublicAccess"]
| extend allowBlobPublicAccess_res = todynamic(responseBody).properties["allowBlobPublicAccess"]
| extend allowBlobPublicAccess = case(allowBlobPublicAccess_req != "", allowBlobPublicAccess_req, allowBlobPublicAccess_res != "", allowBlobPublicAccess_res, "")
| project OperationNameValue, ActivityStatusValue, ResourceGroup, allowBlobPublicAccess, Caller, CallerIpAddress, ResourceProviderValue
```

## Key notes regarding the authentication methods

When a user accesses Azure Blob Storage via the Azure portal, the interaction is authenticated using OAuth and authorized by the Azure RBAC role configuration for that user. In contrast, authentication with Azure Storage Explorer and AzCopy depends on the method used: if a user signs in interactively via the Azure portal or uses the device code flow, authentication appears OAuth-based; when using a SAS token, authentication is recorded as SAS-based for both tools. Access via Azure RBAC is logged in Entra ID sign-in logs; activity related to SAS token usage, however, does not appear in the sign-in logs because a SAS provides pre-authorized access. Log analysis should therefore consider all operations, since the initial actions can reveal the true authentication method; even OAuth-based access may show as SAS in later log entries.

The screenshot below illustrates three distinct cases, each showing a different pattern of authentication types when accessing storage resources. In the first, a SAS token is used consistently across all operations, making it the primary access method. The second example shows a similar pattern, with OAuth (authorized via an assigned Azure RBAC role) serving as the primary authentication method for all listed operations. In the third example, operations start with OAuth authentication (using an assigned Azure RBAC role for authorization) and then switch to a SAS token, indicating mixed authentication types.

Figure 3: Different patterns of authentication types

Additionally, when certain applications such as Azure Storage Explorer authenticate with account access keys, initial operations such as ListContainers and ListBlobs are logged with the authentication type "AccountKey", while subsequent actions like file uploads or downloads are logged with the authentication type SAS. To accurately determine whether an account access key or a SAS was used, correlate these actions with the earlier enumeration or sync activity in the logs; the sketch below shows one way to surface such mixed patterns.
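As a hedged helper (not part of the original post), this query flags callers whose sessions mix several authentication types, which is often where the AccountKey-then-SAS and OAuth-then-SAS patterns described above show up:

```kusto
// Sketch: surface caller IPs whose blob operations mix authentication types.
StorageBlobLogs
| where TimeGenerated > ago(7d)
| summarize AuthTypes = make_set(AuthenticationType),
            Operations = make_set(OperationName),
            FirstSeen = min(TimeGenerated) by CallerIpAddress
| where array_length(AuthTypes) > 1
```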
With this understanding, let's proceed to analyze specific attack scenarios using Log Analytics tables such as StorageBlobLogs.

## Attack scenario

This section examines the typical steps a threat actor might take when targeting a Storage Account. We focus specifically on the Azure Resource Manager layer, where Azure RBAC initially dictates what a threat actor can discover.

### Enumeration

During enumeration, a threat actor's goal is to map out the available storage accounts. The range of this discovery is determined by the access privileges of the compromised identity. If that identity holds at least a minimum level of access (similar to Reader) at the subscription level, it can view storage account resources without making any modifications. Importantly, this permission level does not grant access to the actual data stored within Azure Storage, so a threat actor is limited to interacting only with those storage accounts that are visible to them.

To access and download files from blob storage, a threat actor must learn the names of containers (operation: ListContainers) and the files within those containers (operation: ListBlobs). All interactions with these storage elements are recorded in the StorageBlobLogs table. Containers and blobs can be listed by a threat actor with the appropriate access rights; if access is not authorized, attempts will produce error codes in the StatusCode field. A high number of unauthorized attempts resulting in errors is a key indicator of suspicious activity or misconfiguration.

Figure 4: Failed attempts to list blobs/containers

Query: This query serves as a starting point for detecting a spike in unauthorized attempts to enumerate containers, blobs, files, or queues.

```kusto
union Storage*
| extend StatusCodeLong = tolong(StatusCode)
| where OperationName has_any ("ListBlobs", "ListContainers", "ListFiles", "ListQueues")
| summarize MinTime = min(TimeGenerated), MaxTime = max(TimeGenerated),
            OperationCount = count(),
            UnauthorizedAccess = countif(StatusCodeLong >= 400),
            OperationNames = make_set(OperationName),
            ErrorStatusCodes = make_set_if(StatusCode, StatusCodeLong >= 400),
            StorageAccountName = make_set(AccountName) by CallerIpAddress
| where UnauthorizedAccess > 0
```

Note: The UnauthorizedAccess filter threshold must be adjusted for your environment.

### Data exfiltration

Let's use StorageBlobLogs to analyze two different attack scenarios.

#### Scenario 1: Compromised user has access to a storage account

In this scenario, the threat actor either compromises a user account with access to one or more storage accounts or obtains a leaked account access key or SAS token. With a compromised identity, the threat actor can enumerate all storage accounts the user has permissions to (as covered in Enumeration) or directly access a specific blob or container if the leaked key grants scoped access.

**Account access keys (AccountKey)/SAS tokens**

The threat actor might use the storage account's access keys or a SAS token retrieved through the compromised user account (given appropriate permissions), or the leaked credential itself may already be an account access key or SAS token. Access keys grant complete control, while a SAS provides time-bound access, authorizing data transfers that let the actor view, upload, or download data at will.
Figure 5: Account key used to download/view data

Figure 6: SAS token used to download/view data

Query: This query helps identify cases where an account key or SAS was used to download/view data from a storage account.

```kusto
StorageBlobLogs
| where OperationName has "GetBlob"
| where AuthenticationType in~ ("AccountKey", "SAS")
| where StatusText in~ ("Success", "AnonymousSuccess", "SASSuccess")
| project TimeGenerated, AccountName, OperationName, RequesterUpn, AuthenticationType, Uri, ObjectKey, StatusText, UserAgentHeader, CallerIpAddress, AuthenticationHash
```

**User delegation SAS**

Available for blob storage only, a user delegation SAS functions like a regular SAS but is secured with Microsoft Entra ID credentials rather than the storage account key. The creation of a user delegation SAS is tracked as a corresponding "GetUserDelegationKey" entry in the StorageBlobLogs table.

Figure 7: User delegation key created

Query: This query helps identify creation of a user delegation key. The RequesterUpn field provides the identity of the user account creating the key.

```kusto
StorageBlobLogs
| where OperationName has "GetUserDelegationKey"
| where StatusText in~ ("Success", "AnonymousSuccess", "SASSuccess")
| project TimeGenerated, AccountName, OperationName, RequesterUpn, Uri, CallerIpAddress, ObjectKey, AuthenticationType, StatusCode, StatusText
```

Figure 8: User delegation activity to download/read

Query: This query helps identify cases where a download/read action was performed while authenticated via a user delegation key.

```kusto
StorageBlobLogs
| where AuthenticationType has "DelegationSas"
| where OperationName has "GetBlob"
| where StatusText in~ ("Success", "AnonymousSuccess", "SASSuccess")
| project Type, TimeGenerated, OperationName, AccountName, UserAgentHeader, ObjectKey, AuthenticationType, StatusCode, CallerIpAddress, Uri
```

The "GetUserDelegationKey" operation in StorageBlobLogs captures the identity responsible for generating a user delegation SAS token, and the AuthenticationHash field shows the key used to sign the SAS token. When the SAS token is used, subsequent operations include the same SAS signature hash, enabling you to correlate the various actions performed with that token even if the originating IP addresses differ.

Query: The following query extracts the SAS signature hash from the AuthenticationHash field. This helps track the token's usage, providing an audit trail to identify potentially malicious activity.

```kusto
StorageBlobLogs
| where AuthenticationType has "DelegationSas"
| extend SasSHASignature = extract(@"SasSignature\((.*?)\)", 1, AuthenticationHash)
| project Type, TimeGenerated, OperationName, AccountName, UserAgentHeader, ObjectKey, AuthenticationType, StatusCode, CallerIpAddress
```

In the next scenario, we examine how a threat actor already in control of a compromised identity uses Azure RBAC to assign permissions. With administrative privileges over a storage account, the threat actor can grant access to additional accounts and establish long-term access to the storage accounts.

#### Scenario 2: A threat actor-controlled user account has elevated access to the Storage Account

An identity named Bob was identified as compromised due to a login from an unauthorized IP address. The investigation was triggered when Azure sign-in logs revealed logins originating from an unexpected location. This account has Owner permissions on a resource group, allowing full access and role assignment in Azure RBAC. The threat actor grants access to another account they control, as shown in the AzureActivity logs.
The AzureActivity logs in the figure below show that the "Reader and Data Access" and "Storage Account Contributor" roles were assigned to Hacker2 for a Storage Account within Azure:

Figure 9: Assigning a role to a user

Query: This query helps identify whether a role has been assigned to a user.

```kusto
AzureActivity
| where Caller has "Bob"
| where OperationNameValue has "MICROSOFT.AUTHORIZATION/ROLEASSIGNMENTS/WRITE"
| extend RoleDefintionIDProperties = parse_json(Properties)
| evaluate bag_unpack(RoleDefintionIDProperties)
| extend RoleDefinitionIdExtracted = tostring(todynamic(requestbody).Properties.RoleDefinitionId)
| extend RoleDefinitionIdExtracted = extract(@"roleDefinitions/([a-f0-9-]+)", 1, RoleDefinitionIdExtracted)
| extend RequestedRole = case(
    RoleDefinitionIdExtracted == "ba92f5b4-2d11-453d-a403-e96b0029c9fe", "Storage Blob Data Contributor",
    RoleDefinitionIdExtracted == "b7e6dc6d-f1e8-4753-8033-0f276bb0955b", "Storage Blob Data Owner",
    RoleDefinitionIdExtracted == "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1", "Storage Blob Data Reader",
    RoleDefinitionIdExtracted == "db58b8e5-c6ad-4a2a-8342-4190687cbf4a", "Storage Blob Delegator",
    RoleDefinitionIdExtracted == "c12c1c16-33a1-487b-954d-41c89c60f349", "Reader and Data Access",
    RoleDefinitionIdExtracted == "17d1049b-9a84-46fb-8f53-869881c3d3ab", "Storage Account Contributor",
    "")
| extend roleAssignmentScope = tostring(todynamic(Authorization_d).evidence.roleAssignmentScope)
| extend AuthorizedFor = tostring(todynamic(requestbody).Properties.PrincipalId)
| extend AuthorizedType = tostring(todynamic(requestbody).Properties.PrincipalType)
| project TimeGenerated, RequestedRole, roleAssignmentScope, ActivityStatusValue, Caller, CallerIpAddress, CategoryValue, ResourceProviderValue, AuthorizedFor, AuthorizedType
```

Note: Refer to this resource for additional Azure built-in role IDs that can be used in this query.

The sign-in logs indicate that Hacker2 successfully accessed Azure from the same malicious IP address. We can examine StorageBlobLogs to determine whether the user accessed blob storage data after the Storage Account roles were assigned. The activities within the blob storage show several entries attributed to the Hacker2 user, as shown in the figure below.

Figure 10: User access to blob storage

Query: This query helps identify access to blob storage from a malicious IP.

```kusto
StorageBlobLogs
| where TimeGenerated > ago(30d)
| where CallerIpAddress has {{IPv4}}
| extend ObjectName = ObjectKey
| project TimeGenerated, AccountName, OperationName, AuthenticationType, StatusCode, StatusText, RequesterUpn, CallerIpAddress, UserAgentHeader, ObjectName, Category
```

An analysis of the StorageBlobLogs, as shown in the figure below, reveals that Hacker2 performed a "StorageRead" operation on three files, indicating that data was accessed or downloaded from blob storage.

Figure 11: Blob storage read/download activities

The UserAgentHeader suggests that the storage account was accessed through the Azure portal. Consequently, the SignInLogs can offer further detail, as sketched below.
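As a hedged illustration of that pivot (not from the original post), if Entra ID sign-in logs are exported to the same workspace, a query along these lines ties the portal access back to an interactive session; `<malicious IP>` is a placeholder:

```kusto
// Sketch: pivot to Entra ID sign-in logs for the suspicious caller IP.
SigninLogs
| where TimeGenerated > ago(30d)
| where IPAddress == "<malicious IP>"
| project TimeGenerated, UserPrincipalName, AppDisplayName, ResultType, Location
```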
Query: This query checks for read, write, or delete operations in blob storage and their access methods.

```kusto
StorageBlobLogs
| where TimeGenerated > ago(30d)
| where CallerIpAddress has {{IPv4}}
| where OperationName has_any ("PutBlob", "GetBlob", "DeleteBlob") and StatusText == "Success"
| extend Notes = case(
    OperationName == "PutBlob" and Category == "StorageWrite" and UserAgentHeader has "Microsoft Azure Storage Explorer", "Blob was written through Azure Storage Explorer",
    OperationName == "PutBlob" and Category == "StorageWrite" and UserAgentHeader has "AzCopy", "Blob was written through AzCopy command",
    OperationName == "PutBlob" and Category == "StorageWrite" and not(UserAgentHeader has_any ("AzCopy", "Microsoft Azure Storage Explorer")), "Blob was written through Azure portal",
    OperationName == "GetBlob" and Category == "StorageRead" and UserAgentHeader has "Microsoft Azure Storage Explorer", "Blob was read/downloaded through Azure Storage Explorer",
    OperationName == "GetBlob" and Category == "StorageRead" and UserAgentHeader has "AzCopy", "Blob was read/downloaded through AzCopy command",
    OperationName == "GetBlob" and Category == "StorageRead" and not(UserAgentHeader has_any ("AzCopy", "Microsoft Azure Storage Explorer")), "Blob was read/downloaded through Azure portal",
    OperationName == "DeleteBlob" and Category == "StorageDelete" and UserAgentHeader has "Microsoft Azure Storage Explorer", "Blob was deleted through Azure Storage Explorer",
    OperationName == "DeleteBlob" and Category == "StorageDelete" and UserAgentHeader has "AzCopy", "Blob was deleted through AzCopy command",
    OperationName == "DeleteBlob" and Category == "StorageDelete" and not(UserAgentHeader has_any ("AzCopy", "Microsoft Azure Storage Explorer")), "Blob was deleted through Azure portal",
    "")
| project TimeGenerated, AccountName, OperationName, AuthenticationType, StatusCode, CallerIpAddress, ObjectName = ObjectKey, Category, RequesterUpn, Notes
```

The log analysis confirms that the threat actor successfully extracted data from a storage account.

## Storage Account summary

Detecting misuse within a Storage Account can be challenging, as routine operations may hide malicious activity. Enabling logging is therefore essential for investigation, helping track access, especially when compromised identities or misused SAS tokens or keys are involved. Unusual changes in user permissions and irregularities in role assignments, documented in the Azure Activity Logs, can signal unauthorized access, while Microsoft Entra ID sign-in logs can help identify compromised UPNs and suspicious IP addresses that tie into OAuth-based storage account access. By thoroughly analyzing Storage Account logs, which detail operation types and access methods, investigators can identify abuse and determine the scope of compromise. That not only helps when remediating the environment but can also guide prevention of unauthorized data theft in the future.

# Memory under siege: The silent evolution of credential theft
## From memory dumps to filesystem browsing

Historically, threat groups like Lorenz have relied on tools such as Magnet RAM Capture to dump volatile memory for offline analysis. While this approach can be effective, it comes with significant operational overhead: dumping large memory files, transferring them, and parsing them with additional forensic tools is time-consuming.

But adversaries are evolving. They are shifting toward real-time, low-footprint techniques like MemProcFS, a forensic tool that exposes system memory as a browsable virtual filesystem. When paired with Dokan, a user-mode library that enables filesystem mounting on Windows, MemProcFS can mount live memory, not just parse dumps, giving attackers direct access to volatile data in real time. This setup eliminates the need for traditional bulky memory dumps and allows attackers to interact with memory as if it were a local folder structure. The result is faster, more selective data extraction with minimal forensic trace.

With this capability, attackers can:

- Navigate memory like folders, skipping raw dump parsing
- Directly access processes like lsass.exe to extract credentials swiftly
- Evade traditional detection, as no dump files are written to disk

This marks a shift in post-exploitation tactics: precision, stealth, and speed now define how memory is harvested.

Sample directory structure of live system memory mounted using MemProcFS (attacker's perspective)

## Case study

In late April 2025, Microsoft Defender Experts observed this technique in an intrusion where a compromised user account was leveraged for lateral movement across the environment. The attacker demonstrated a high level of operational maturity, using stealthy techniques to harvest credentials and exfiltrate sensitive data.

Attack path summary as observed by Defender Experts

After gaining access, the adversary deployed Dokan and MemProcFS to mount live memory as a virtual filesystem. This allowed them to interact with processes like lsass.exe in real time, extracting credentials without generating traditional memory dumps and thereby minimizing forensic artifacts. To further their access, the attacker executed vssuirun.exe to create a Volume Shadow Copy, enabling access to locked system files such as SAM and SYSTEM. These files were critical for offline password cracking and privilege escalation. Once the data was collected, it was compressed into an archive and exfiltrated via an SSH tunnel.

Attackers compress the LSASS minidump from mounted memory into an archive for exfiltration

This case exemplifies how modern adversaries combine modular tooling, real-time memory interaction, and encrypted exfiltration to operate below the radar and achieve their objectives with precision.

## Unmasking stealth: Defender Experts in action

The attack outlined above exemplifies the stealth and sophistication of today's threat actors: leveraging legitimate tools, operating in memory, and leaving behind minimal forensic evidence. Microsoft Defender Experts successfully detected, investigated, and responded to this memory-resident threat by leveraging rich telemetry, expert-led threat hunting, and contextual analysis that goes far beyond automated detection. From uncovering evasive techniques like memory mounting and credential harvesting to correlating subtle signals across endpoints, Defender Experts brings human-led insight to the forefront of your cybersecurity strategy. Our ability to pivot quickly, interpret nuanced behaviors, and deliver tailored guidance ensures that even the most covert threats are surfaced and neutralized before they escalate.

## Detection guidance

The alert "Memory forensics tool activity" in Microsoft Defender for Endpoint might indicate threat activity associated with this technique. Microsoft Defender XDR customers can run the following query to identify suspicious use of MemProcFS:

```kusto
DeviceProcessEvents
| where ProcessVersionInfoOriginalFileName has "MemProcFS"
| where ProcessCommandLine has_all (" -device PMEM")
```

A companion sketch for spotting Dokan driver installs follows below.
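As a hedged companion (not part of the original guidance), you could also watch for installation of the Dokan filesystem driver, which this technique depends on. The service and file names below are assumptions based on Dokan's public releases, not confirmed indicators, so validate against your own telemetry:

```kusto
// Sketch: look for Dokan driver/service installs that may precede
// MemProcFS live-memory mounting. Names are assumptions, not confirmed IOCs.
DeviceEvents
| where ActionType == "ServiceInstalled"
| extend ServiceName = tostring(AdditionalFields.ServiceName)
| where ServiceName contains "dokan" or FileName contains "dokan"
| project Timestamp, DeviceName, ServiceName, FileName, InitiatingProcessAccountName
```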
## Recommendations

To reduce exposure to this emerging technique, Microsoft Defender Experts recommend the following actions:

- Educate security teams on memory-based threats and the offensive repurposing of forensic tools.
- Monitor for memory mounting activity, especially virtual drive creation linked to unusual processes or users.
- Restrict execution of dual-use tools like MemProcFS via application control policies.
- Track filesystem driver installations, flagging Dokan usage as a potential precursor to memory access.
- Correlate SSH activity with data staging, especially when sensitive files are accessed or archived.
- Submit suspicious samples to the Microsoft Defender Security Intelligence (WDSI) portal for analysis.

## Final thoughts

We can all agree: memory is no longer just a post-incident artifact; it is the new frontline in credential theft. What we're witnessing isn't just a clever use of forensic tooling, it's a strategic shift in how adversaries interact with volatile data. By mounting live memory as a virtual filesystem, attackers gain real-time access to a wide range of sensitive information, not just credentials. From authentication tokens and encryption keys to in-memory malware, clipboard contents, and application data, memory has become a rich, dynamic source of intelligence. Tools like MemProcFS and Dokan enable adversaries to extract this data with speed, precision, and minimal forensic footprint, often without leaving behind the traditional signs defenders rely on.

This evolution challenges defenders to go beyond surface-level detection. We must monitor for subtle signs of memory access abuse, understand how legitimate forensic tools are being repurposed offensively, and treat memory as an active threat surface, not just a post-incident artifact.

To learn more about how our human-led managed security services can help you stay ahead of similar emerging threats, please visit Microsoft Defender Experts for XDR, our managed extended detection and response (MXDR) service, and Microsoft Defender Experts for Hunting (included in Defender Experts for XDR and available as a standalone service), our managed threat hunting service.