Protect your storage resources against blob-hunting
Published Feb 06 2023 11:33 AM


 

 

How to detect, investigate and prevent blob-hunting

 

 

Why is it important to understand blob-hunting?

 

1. Exfiltrating sensitive information from misconfigured resources is one of the top three threats to cloud storage services*, and threat actors continuously hunt storage objects because it’s easy, cheap, and there’s much to find. In some cases, they target your storage accounts specifically.

 

2. Most people think they don’t have misconfigured storage resources. Most people do. Misconfiguration by end-users is a common problem; if you are safe today, there might still be a mistake tomorrow.

 

3. There are quick and effective ways to harden your security posture and prevent these threats from happening.

 

 * Cloud storage services such as Azure Blob Storage, Amazon S3, and GCP Cloud Storage

 

Threat actors use tools to exfiltrate sensitive information from exposed storage resources open to unauthenticated public access. This process is called blob-hunting, also known as Container Enumeration on Leaky Buckets. It is a common collection tactic: easy to do, cheap to carry out, requiring no authentication, and supported by no shortage of open-source tools that facilitate and automate the process.

 

Numerous data breaches across storage services in all cloud providers originated from data mistakenly exposed to public access, either through configuration errors in the access settings of storage objects or by uploading sensitive content to an already publicly accessible storage container.

 

Some tools can help detect storage resources open to public access, but there are always human errors, and prevention alone is not enough.

 

Error continues to be a dominant trend and is responsible for 13% of breaches

- 2022 Data Breach Investigations Report by Verizon

 

This is where Microsoft Defender for Storage comes into play: it detects blob-hunting attempts and other malicious activities by monitoring unusual activities from unexpected sources. It alerts you in time with the relevant information to help you understand what happened, and it helps you harden your configurations to prevent attacks from happening in the future.

 

This post will cover the top blob-hunting questions and explain how Microsoft Defender for Storage detects and prevents this type of threat:

  1. What blob-hunting is and how it is achieved
  2. Top methods and tools used by threat actors
  3. How Microsoft Defender for Storage helps detect and prevent these attempts
  4. How to investigate blob-hunting attempts and what red flags to monitor
  5. How to protect storage accounts against sensitive information leakage
  6. How to use Microsoft Sentinel to look for blob-hunting attempts proactively

To better understand Azure Storage, how it’s built, and its access policies, you can go to the Background - Azure Storage accounts and access levels section at the bottom of this post.

 

 

What’s blob-hunting, and how exactly is it achieved?

 

Blob-hunting is the act of guessing the URL of containers or blobs open to unauthenticated public access with the intent of exposing data from them. The following conditions must be met to successfully expose and exfiltrate data from storage accounts, and they are controlled by the owners of the storage accounts (or the users/applications with the appropriate permissions):

 

[Image: Blob access conditions]

 

  1. Public network access to the storage account is enabled for all networks (allow internet access).
  2. The storage account configuration settings allow public access.
  3. The blob container access level allows public access (set to ‘Container’ or ’Blob’ level).
  4. The threat actor correctly guessed the URL of the container or the blob:
    • Storage account name
    • Container name
    • Blob name

There are several ways to expose blobs, with different starting points:

 

[Image: Blob exposure paths]

 

  1. The first starting point is brute-force guessing the names of storage accounts and discovering them when there’s little or no prior knowledge of their existence.
  2. The second starting point begins after threat actors already know the names of storage accounts. For example, attackers can find names online through search engines and can then start brute-force guessing the names of the containers.
  3. The third starting point is brute-forcing the way into the blobs by guessing the entire URL – the account, container, and blob names. This is usually the case when the container access level is set to 'Blob', so threat actors can't discover and enumerate blobs but can access them if they have the full URL. When this is the case, it usually means that the threat actor has targeted the resources and has information about them.

If threat actors have discovered the storage account, they can start brute-force guessing container names. If containers are found and their access level is set to ‘Container’, the attackers can enumerate (list) all blobs within them and exfiltrate that data.

 

The last starting point is interesting: if threat actors somehow found a blob URL, or blobs were exposed while brute-force guessing the full URL, they now know that the account and container names they guessed or found exist. From that point, they can discover and expose other containers and, specifically, other blobs within the discovered container.

 

While there are other attack vectors, these are the main ones. We will focus on the steps of exposing the storage account name, container name, and blob name.

 

 

A breakdown of the blob-hunting process implemented by threat actors

 

Finding storage account names

 

Publicly accessible storage accounts have a public endpoint URL (more information in the Background section), which means it's possible to guess storage account names by performing DNS queries on the URL and examining the response:

https://<<storage-account-name>>.blob.core.windows.net

There are multiple ways to query DNS; the simplest are the nslookup command-line tool or the Resolve-DnsName cmdlet in PowerShell, for example:

 

[Image: Resolve-DnsName in PowerShell]

 

Threat actors enumerate multiple accounts at a time by automating the search for storage accounts with scripts that use a combination of custom/generic wordlists, DNS queries, and search engine APIs to guess and find storage accounts.
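
 

As a minimal PowerShell sketch of that approach (the candidate names in the wordlist are hypothetical examples, not real accounts):

# Probe candidate storage account names by resolving their public blob endpoints.
$candidates = @("contoso", "contosoprod", "contosobackup")
foreach ($name in $candidates) {
    $fqdn = "$name.blob.core.windows.net"
    # Names that don't resolve are silently skipped
    if (Resolve-DnsName -Name $fqdn -ErrorAction SilentlyContinue) {
        Write-Output "Storage account exists: $fqdn"
    }
}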

 

The following is an example of enumeration using the dnscan Python script in combination with a custom wordlist:

 

[Image: Python wordlist-based DNS subdomain scanner]

 

It is also possible to find storage accounts using search engines and indexing services, with techniques such as Google Dorking or tools like Shodan. The following is a basic example of Google Dorking; by adding more filters, threat actors can pinpoint the search for sensitive information:

 

[Image: Google Dorking search example]

 

Exposing container names

 

Once the storage account name is known, threat actors can start looking for containers open to public access. As with the storage account names, to map and expose containers, threat actors manually guess the container names or use wordlists of known names that usually imply containers that store sensitive data, such as: 'audit', 'dbbackup', 'vulnerability-assessment', etc. The following is a wordlist example of possible container names taken from one of the blob-hunting tools:

 

[Image: container-name wordlist example]

 

This is where the container access level comes into play: it determines whether a container can be listed. If a discovered container's access level is set to 'Container', anyone can list all the blobs within it.

 

Threat actors use GET requests against the Blob service's REST API to validate that containers exist. List Blobs and Get Container Properties are the most common operations used to check whether a container exists and is open to public access.

 

These operation types are quite different: if the container access level allows it, List Blobs returns all the blobs within the container, while Get Container Properties returns the container's properties without listing the blobs within it (a smaller signature).

 

Using the container URL, it is also possible to guess container names with REST requests from the browser or other API platforms. For example, the following GET request verifies that the container exists and returns all the stored blobs. You can test it yourself:

https://mediaprod1.blob.core.windows.net/audio?restype=container&comp=list

 

[Image: List Blobs GET request response]

 

Note: If the access level is set to ‘Blob’, the blobs within the container can be publicly accessible, but querying the container and performing operations on it (such as listing the blobs or getting the container's properties) will return a 404 ContainerNotFound error code, which helps mask the blobs within the container.
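
 

As a minimal sketch using the example URL above, the same probe can be scripted in PowerShell; a 404 response comes back for private, ‘Blob’-level, and non-existent containers alike:

# Probe a container with an unauthenticated List Blobs request.
$uri = "https://mediaprod1.blob.core.windows.net/audio?restype=container&comp=list"
try {
    $resp = Invoke-WebRequest -Uri $uri -UseBasicParsing
    Write-Output "Container is public and listable (HTTP $($resp.StatusCode))"
}
catch {
    # Private, 'Blob'-level, and non-existent containers all return 404 (ContainerNotFound)
    Write-Output "Container not listable: $($_.Exception.Message)"
}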

 

Exposing and enumerating blob names

 

Once threat actors discover the names of storage accounts and the containers within them, they can start trying to expose the data stored in the blobs (objects). They first try the List Blobs operation, which lists all the blobs within the container if its access level permits it (set to 'Container'). If the container access level is set to 'Blob', listing the container will not work, leaving threat actors with the option of brute-force guessing the blob names (the account and container names are already known).

In cases where the blob containers are not discoverable, threat actors can try brute-force guessing the full URL, but it will be harder.

 

The flow of a full blob-hunting attack

 

Common blob-hunting attacks are automated using dedicated tools such as feroxbuster, MicroBurst, and Gobuster. These tools allow easy discovery of storage account names, and if these attempts are successful, threat actors can follow two approaches:

  1. Guess container names, exposing containers open to public access when the container's access level is set to ‘Container’, and then exposing the blobs within.
  2. Brute-force guess blob URLs to expose specific blobs when the container's access level is set to 'Blob', which does not give away the container's existence, name, or properties and does not allow listing the blobs inside it, even if the container name is known. After public blob URLs have been exposed, threat actors can exfiltrate the data.

The following is a basic example of using MicroBurst to guess container names of a known account (from our example – ‘mediaprod1’) by using a generic wordlist, exposing blobs by listing the blobs of the exposed containers, and downloading an exposed blob:

 

[Image: Enumerating blob containers of exposed storage accounts with wordlists]
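
 

A hedged sketch of that flow using MicroBurst's Invoke-EnumerateAzureBlobs (exact parameter names may vary between versions, and the blob name below is a hypothetical example):

# Load MicroBurst and guess container names for the known 'mediaprod1' account
# using a wordlist file; publicly listable containers and their blobs are reported.
Import-Module .\MicroBurst.psm1
Invoke-EnumerateAzureBlobs -Base "mediaprod1" -Folders .\containers-wordlist.txt
# Once a blob URL is exposed, the data can be downloaded anonymously.
Invoke-WebRequest -Uri "https://mediaprod1.blob.core.windows.net/audio/track01.mp3" -OutFile .\track01.mp3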

 

Who can hunt blobs?

 

Blob-hunting is easy to achieve, cheap in terms of resource usage, does not require authentication, and can originate from any local machine or VM. In some cases, the blob-hunting activities originate from cloud resources.

 

Blob-hunting can be an ad-hoc activity or a continuous effort to search the web for exposed cloud storage resources in services like Azure Blob Storage, Amazon S3, and GCP Cloud Storage. There are searchable websites with databases of exposed content, intended for checking your own infrastructure for breaches. Unfortunately, these sites also attract malicious actors who wish to take advantage of the data found there or from similar sources.

 

It is not uncommon to find that the source of blob-hunting activity is infected machines controlled as part of botnets. It is also common for threat actors to mask their identity behind Tor exit nodes, which hides the true source and makes it difficult to investigate the activity and connect it to other activities.

 

 

How does Microsoft Defender for Storage help detect and prevent these blob-hunting attempts?

 

Microsoft Defender for Storage detects blob hunters trying to discover resources open to public access and attempting to expose blobs with sensitive data, so that you can block them and remediate your posture. The service does this by continuously analyzing the telemetry stream generated by Azure Storage services, without needing diagnostic logs to be enabled, accessing the data, or impacting performance. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud with details on the suspicious activity, the threat actor, the access method, affected resources, performed operation types, MITRE ATT&CK tactic, potential causes, proper investigation steps, and instructions on how to remediate the threat and improve the security posture. These alerts can also be exported to any SIEM solution.

 

[Image: Misconfigured storage accounts]

 

The following security alerts are a subset of the Microsoft Defender for Storage detection suite and can be triggered in different stages of the full blob-hunting attack path. These alerts inform you if malicious attempts to expose blobs were carried out, if someone accessed the containers, and if data was exfiltrated. They also provide a heads-up if containers with potentially sensitive information are misconfigured.

 

 

Successful and failed scanning attempts detection

 

There are three flavors of scanning-related (blob-hunting) alerts. They usually indicate a collection attack, where the threat actor tries to list blobs by guessing container names in the hope of finding open storage containers with sensitive data in them:

 

  • "Publicly accessible storage containers successfully discovered” detects successful discoveries of publicly open storage containers in the storage account performed by a scanning script or tool. An example of the alert (screenshot taken from Defender for Cloud in the Azure Portal):

    [Screenshot: alert example in Defender for Cloud]

  • "Publicly accessible storage containers unsuccessfully scanned" detects a series of failed attempts to scan for publicly open storage containers performed in the last hour. Detecting failed threat actor attempts means detecting early.
  • "Publicly accessible storage containers with potentially sensitive data have been exposed” detects the successful scanning of containers with names indicating they might contain sensitive data. Containers are flagged as potentially sensitive by comparing their names to container names that statistically have low public exposure, suggesting they might store sensitive information.

Scanning alerts contain information on the scanning source, what was scanned successfully, and what failed attempts were made to scan private or non-existent containers. The alert also indicates if the scanning activity originated from a Tor exit node or if the IP address is suspicious because it is associated with other malicious activities (data enriched by Microsoft Threat Intelligence).

 

 

Unusual unauthenticated access to containers detection

 

  • “Unusual unauthenticated access to a storage container” detects unusual unauthenticated read access to storage accounts that are usually accessed with authentication. Access is considered unusual when a storage account that is open to public access but, judging by its access history, has received only authenticated read requests suddenly receives unauthenticated ones. In the scope of blob-hunting, this might indicate that a threat actor has accessed the account after successfully exposing blobs.
  • "Unusual application accessed a storage account” detects unusual applications that access the account compared to recent activity. In the scope of blob-hunting, this might indicate that a threat actor has accessed the account after successfully exposing blobs.

 

 

Data exfiltration detection

 

There are two flavors to the data exfiltration detection alert. In the scope of blob-hunting attacks, the alerts are triggered if unusual exfiltration activities occur after successful scanning attempts:

  • "Unusual amount of data extracted from a storage account (amount of data anomaly)” detects unusually large amounts of data extracted from the account compared to recent activity.
  • "Unusual amount of data extracted from a storage account (number of blobs anomaly)” detects an unusually large number of blobs extracted from the account compared to recent activity.

 

 

Container access level misconfiguration detection

 

The following alert is triggered on possible access level configuration errors to prevent public exposure of sensitive data:

  • "Storage account with potentially sensitive data has been detected with a publicly exposed container” indicates a possible misconfiguration: the access policy of a container whose name is usually attributed to private containers storing sensitive information has been changed from ‘Private’ to ‘Public’, allowing unauthenticated access.

 

To learn more, visit the Microsoft Defender for Storage security alerts documentation.

 

 

How to investigate blob-hunting attempts, and what red flags to look for

 

By examining the storage account's data plane logs, you will notice that blob-hunting activities are characterized by repeated anonymous (unauthenticated) attempts to get information from storage resources by guessing URLs. Most of these attempts result in 404 error codes (resource not found), but they may also be successful, meaning that storage containers have been discovered and even possibly that blobs have been enumerated.

 

The following are the general steps we recommend for investigating blob-hunting-related alerts. If resource (diagnostic) logs are enabled on the compromised storage account, they help deepen the investigation:

 

  1. Look for who is responsible for the activity to rule out the possibility of a false positive
    • Look at the actor information inside the alert: source IP address, location, ASN, organization, and User Agent. Indicators of known applications or users can result from a faulty application that performed multiple failed read attempts to different containers. If this is the case, you can ignore the alert.
    • Examine if there's threat intelligence information within the IP entity. If so, Microsoft flagged this IP address as suspicious, and the address is associated with direct or indirect malicious activities.


    • In most cases, you should not rule out familiar or private IP addresses too quickly; they may indicate compromised identities or a breached environment. That said, since blob-hunting requires no authentication, it is unlikely that the source originated from your environment. If the activity repeats from an unknown source, this might be a true positive blob-hunting activity.

  2. Damage control – If you could not rule out a false positive, assume the activity is malicious. Start with damage control and, in case there was a data breach, perform quick mitigation steps:

    • Look at the “List of containers successfully scanned” field in the alert to understand which containers were successfully discovered.

    • Is there sensitive data inside the discovered containers?

    • Are there other publicly open containers within the same account that may contain sensitive information?

    • See if the container access level was changed from ‘Private’ to ‘Public’ and its access level is misconfigured. You can also check whether you received a “Storage account with potentially sensitive data has been detected with a publicly exposed container” alert before this alert – this may indicate that content in the container is sensitive.

  3. Look for what the threat actor did
    • Determine whether the containers that have been discovered were accessed after the discovery. In the alert, you can look at the "Size of extracted data" field to understand if the threat actor downloaded content from the container. You can also look at the "Operation Types" to understand the other operations the threat actor performed during that activity.
    • If the containers were only discovered and not accessed, it does not mean access attempts won't happen later. Ensure there's no sensitive content inside, no applications or users that might write sensitive content in the future, and that the access level to the container is the intended access level.


    • Examine the storage account “change analysis” workbook to see if any suspicious changes were made to the account. You can access it from the Azure Portal by going to the Workbooks blade of the storage account and clicking on the “Changes (preview)” workbook. You can find more information here.
    • Examine the Activity logs to see if someone performed unusual control plane operations on the account, such as listing storage access keys. These operations require authenticated access and help you understand whether a possible larger-scale breach of the account occurred. The log also displays the identity of the user who performed the operations.


    • Look for more alerts that may be related. An example is if there’s an alert indicating a possible configuration error in the access level of the container. Start from the same container, then the storage account, and move up to higher levels. 


  4. Investigate further (in case you have diagnostic logs enabled) 
    • Query the diagnostic logs for all activities originating from the IP address across all your storage accounts. Do not limit the investigation to a container within a specific account. The IP address is hard to spoof, but requests can originate from Tor exit nodes, and the address can change during the threat actor's activity, resulting in multiple IP addresses.
    • Check for other suspicious activities that originated from other IP addresses. In some cases, you may be able to match other IP addresses with the same user agent from the alert (it is useful when the user agent is unusual). It can give you an indication of other blob-hunting-related activities. But be aware that threat actors can change the User Agent quite easily, so don't rely on it too much when filtering information. It can also change during different operations on the container (such as exfiltrating data with different tools). 


  5.  Investigate further (in case you don’t have diagnostic logs enabled) 

    • If you don’t have diagnostic logs enabled on the resource, you can still detect anonymous requests from client applications using Azure Metrics Explorer. This helps you understand whether there were unauthenticated requests, how many, and when. 

    • Using the filter, you can look for unauthenticated requests (Authentication Type), look for repeated failed attempts (Response Type), and filter by operation type so you can detect successful anonymous GetBlob operations after a series of failed unauthenticated requests. Note that the metrics information does not include context on the source of the requests and does not let you filter at the container level. 

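    As a hedged sketch, a similar signal can be pulled with the Az PowerShell module; the resource ID below is a placeholder, and the metric and dimension names are assumptions to verify against your environment:

    # Count anonymous blob transactions over the last day, in hourly buckets.
    $resourceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/mediaprod1/blobServices/default"
    Get-AzMetric -ResourceId $resourceId `
        -MetricName "Transactions" `
        -StartTime (Get-Date).AddDays(-1) `
        -TimeGrain 01:00:00 `
        -AggregationType Total `
        -MetricFilter "Authentication eq 'Anonymous'"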

 

 

Red flags to pay attention to during the investigation process

 

If any of these signals arise during the investigation process, a faster escalation is required to prevent a possible data breach:

  1. Discovered containers contain sensitive data, or they have names/tags/properties indicating they might contain sensitive information.
  2. Data has been extracted from the containers.
  3. There is threat intelligence information on the source IP address in the security alert – this makes the IP address suspicious.
  4. Prior to the scanning alert, the "Storage account with potentially sensitive data has been detected with a publicly exposed container" alert was triggered on the container. This may indicate a possible misconfiguration of a container with sensitive data inside.
  5. At the time of the scanning alert or after it, one or more of the following alerts were triggered: unusual unauthenticated access, unusual application, or data exfiltration alerts – these may indicate that the account was accessed and that data exfiltration occurred.

 

 

How to protect your storage account against blob-hunting

 

In under an hour, you can significantly improve your posture with prevention steps that help protect your storage resources against blob-hunting:

  1. Microsoft Defender for Storage provides security recommendations that help you identify and quickly block public access to multiple storage accounts at a time. For example: 

    [Screenshot: security recommendation in Defender for Cloud]

  2. If public access is not a requirement for your business application, you can and should block all unauthenticated public access at the account level: open the configuration blade of the storage account and disable public access: 

    [Screenshot: disabling public access in the storage account configuration blade]
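
    The same change can be scripted; a minimal sketch with the Az PowerShell module (the resource group name is a placeholder):

    # Disable all anonymous blob access at the account level.
    Set-AzStorageAccount -ResourceGroupName "<resource-group>" `
        -Name "mediaprod1" `
        -AllowBlobPublicAccess $false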

    You can check the public access setting for multiple accounts in your subscription with Azure Resource Graph Explorer in the Azure portal:

    [Screenshot: Azure Resource Graph Explorer query results]
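
    The same check can be run from PowerShell; a minimal sketch using the Az.ResourceGraph module (the query treats a missing allowBlobPublicAccess property as potentially open, since older accounts defaulted to allowing it):

    # Find storage accounts that allow (or may allow) public blob access.
    Search-AzGraph -Query @"
    resources
    | where type == 'microsoft.storage/storageaccounts'
    | extend allowBlobPublicAccess = tobool(properties.allowBlobPublicAccess)
    | where allowBlobPublicAccess == true or isnull(allowBlobPublicAccess)
    | project name, resourceGroup, subscriptionId, allowBlobPublicAccess
    "@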

    If public access to the account is required:
    1. Start by identifying the currently open containers. You can do this in the Azure Portal or Storage Explorer by checking whether each container's access level is the intended one. You can also list all the containers and their access levels with a PowerShell script.

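      A minimal sketch of such a script, assuming the Az.Storage module and sufficient permissions on each account:

      # List every container and its public access level across the subscription.
      # PublicAccess is Off (private), Blob, or Container.
      Get-AzStorageAccount | ForEach-Object {
          $ctx = $_.Context
          Get-AzStorageContainer -Context $ctx | Select-Object `
              @{ n = "StorageAccount"; e = { $ctx.StorageAccountName } },
              Name,
              PublicAccess
      }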
      There are also dedicated open-source tools such as BlobHunter and Az-Blob-Attacker that help find open containers within your environment.

    2. Minimize the number of containers that allow public access.

    3. Reduce the access level to 'Blob' from 'Container' wherever possible. It will make the process of hunting blobs and exposing them much more difficult.

    4. Make sure no sensitive information is inside containers that allow public access.

    5. Manage the remaining containers that allow public access by ensuring that applications or users cannot upload sensitive information and that users with write permissions know that the uploaded data will be publicly accessible.

    6. Consider changing the names of the containers to unrecognizable names (you can use randomly generated names as well) or adding random prefixes/suffixes to the container names. Changing the names will limit the effectiveness of blob-hunting tools based on word lists.

    7. If you do not wish to receive scanning alerts, you can apply suppression rules to dismiss them at your desired scope.

  3. Enable Diagnostic Settings on the account. Logs help you monitor the account and perform detailed investigations. They are disabled by default and incur additional cost.

  4. Follow the instructions to prevent anonymous public read access to containers and blobs.

  5. Consider allowing traffic only from specific virtual networks and IP addresses to secure and control the level of network access to your storage accounts by configuring firewalls and virtual networks.


    If the alerts are recurring on the same IP addresses, consider blocking them with the networking rules.

  6. You can also configure Azure Monitor alert rules that notify you when a certain number of anonymous requests are made against your storage account. 

    [Screenshot: creating an alert rule]

     

  7. Consider applying an Azure Resource Manager Read-only lock to prevent users from modifying the configuration of storage accounts.
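
    A minimal sketch of applying such a lock with the Az module (note that a ReadOnly lock also blocks POST-based management operations such as listing account keys, which can affect applications):

    # Apply a ReadOnly management lock to the storage account.
    New-AzResourceLock -LockLevel ReadOnly `
        -LockName "storage-config-lock" `
        -ResourceName "mediaprod1" `
        -ResourceType "Microsoft.Storage/storageAccounts" `
        -ResourceGroupName "<resource-group>"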

For more security best practices for Blob storage, visit the Security recommendations for Blob storage documentation.

 

 

How to proactively look for blob-hunting with Microsoft Sentinel

 

When diagnostic settings are enabled, you can proactively hunt blob enumeration activity using Microsoft Sentinel. The following two queries can be executed within Microsoft Sentinel to detect suspicious enumeration activity.

 

The first query combines the IP address and User Agent to create a unique identifier. This identifier is then used to detect enumeration activity by aggregating activity based on the unique user identifier into sessions. By default, this hunting query will detect any single user who has enumerated at least 10 files and has a failure rate of over 50%. When calculating the sessions of activity using row_window_session(), the query will group any requests that occur within 30 seconds of each other and span a maximum time window of 12 hours. Each parameter can be modified at the top of the query depending on your hunting requirements.

 

let maxTimeBetweenRequests = 30s;
let maxWindowTime = 12h;
let timeRange = 30d;
let minFileCount = 10;
let failureRateThreshold = 0.5;
let authTypes = dynamic(["Anonymous"]);
//
StorageBlobLogs
| where TimeGenerated > ago(timeRange)
// Collect anonymous requests to storage
| where AuthenticationType has_any(authTypes)
| where Uri !endswith "favicon.ico"
| where Category =~ "StorageRead"
// Process the filepath out of the request URI
| extend FilePath = array_slice(split(split(Uri, "?")[0], "/"), 3, -1)
| extend FullPath = strcat("/", strcat_array(FilePath, "/"))
// Extract the IP address, removing the port used
| extend CallerIpAddress = tostring(split(CallerIpAddress, ":")[0])
// Ignore private IP addresses
| where not(ipv4_is_private(CallerIpAddress))
// Combine the IP address and User Agent into a single user identifier
| extend UserIdentifier = hash_sha256(strcat(CallerIpAddress, UserAgentHeader))
| project
    TimeGenerated,
    AccountName,
    FullPath,
    CallerIpAddress,
    UserAgentHeader,
    StatusCode,
    UserIdentifier
| order by UserIdentifier asc, TimeGenerated asc
| serialize
// Generate sessions of access activity, where each request is within maxTimeBetweenRequests of the previous one and the session doesn't last longer than maxWindowTime
| extend SessionStarted = row_window_session(TimeGenerated, maxWindowTime, maxTimeBetweenRequests, UserIdentifier != prev(UserIdentifier))
// Summarize the results using the Session start time
| summarize Paths=make_list(FullPath), Statuses=make_set(StatusCode), FailedRequestsCount=countif(toint(StatusCode) >= 400),
    DistinctPathCount=dcount(FullPath), AllRequestsCount=count(), SessionEnded=max(TimeGenerated)
    by SessionStarted, UserIdentifier, AccountName, CallerIpAddress, UserAgentHeader
// Keep sessions that enumerated at least minFileCount files with a failure rate over failureRateThreshold
| where DistinctPathCount >= minFileCount
| where toreal(FailedRequestsCount) / AllRequestsCount > failureRateThreshold
| extend ["Duration (Mins)"] = datetime_diff("minute", SessionEnded, SessionStarted)
| order by DistinctPathCount desc
| project-reorder
    SessionStarted,
    SessionEnded,
    ['Duration (Mins)'],
    AccountName,
    CallerIpAddress,
    UserAgentHeader,
    DistinctPathCount,
    AllRequestsCount

 

 

IP address and User Agent are the only user identifiers available when investigating anonymous access, and both can be manipulated by the attacker. The attacker can trivially change the User Agent when constructing the request; IP addresses are much harder to spoof. For this reason, threat actors have moved to residential proxy services, which allow them to use a different IP address with each request. Because these services route traffic through residential IP addresses, they are difficult to identify as part of a proxy or VPN network.

 

The second query does not rely on grouping activity based on the user's IP or User Agent. Instead, this query produces sessions of candidate scanning activity using the row_window_session() function. These results alone are interesting, and in some instances, the time between access can be reduced to as short as 1 second to detect enumeration activity spanning multiple IP addresses.

After sessions have been identified, the query exploits another aspect of enumeration by checking that each request in the session made a request to a unique file name. By avoiding the use of IP address and User Agent, this query can identify candidate scanning activity originating from a threat actor using volatile IP addresses.

 

 

let maxTimeBetweenRequests = 30s;
let maxWindowTime = 12h;
let timeRange = 30d;
let authTypes = dynamic(["Anonymous"]);
//
StorageBlobLogs
| where TimeGenerated > ago(timeRange)
// Collect anonymous requests to storage
| where AuthenticationType has_any(authTypes)
| where Uri !endswith "favicon.ico"
| where Category =~ "StorageRead"
// Process the filepath out of the request URI
| extend FilePath = array_slice(split(split(Uri, "?")[0], "/"), 3, -1)
| extend FullPath = strcat("/", strcat_array(FilePath, "/"))
// Extract the IP address, removing the port used
| extend CallerIpAddress = tostring(split(CallerIpAddress, ":")[0])
// Ignore private IP addresses
| where not(ipv4_is_private(CallerIpAddress))
| project
    TimeGenerated,
    AccountName,
    FullPath,
    CallerIpAddress,
    UserAgentHeader,
    StatusCode
| order by TimeGenerated asc 
| serialize 
// Generate sessions of access activity, where each request is within maxTimeBetweenRequests of the previous one and the session doesn't last longer than maxWindowTime
| extend SessionStarted = row_window_session(TimeGenerated, maxWindowTime, maxTimeBetweenRequests, AccountName != prev(AccountName))
| order by TimeGenerated asc
// Summarize the results using the Session start time
| summarize Paths=make_list(FullPath), Statuses=make_set(StatusCode), CallerIPs=make_list(CallerIpAddress),
    DistinctPathCount=dcount(FullPath), AllRequestsCount=count(), CallerIPCount=dcount(CallerIpAddress), CallerUACount=dcount(UserAgentHeader), SessionEnded=max(TimeGenerated)
    by SessionStarted, AccountName
// Validate that each path visited is unique, scanners will generally try files once
| where DistinctPathCount > 1 and DistinctPathCount == AllRequestsCount
| order by DistinctPathCount
| extend ["Duration (Mins)"] = datetime_diff("minute", SessionEnded, SessionStarted)
| project-reorder
    SessionStarted,
    SessionEnded,
    ['Duration (Mins)'],
    AccountName,
    DistinctPathCount,
    AllRequestsCount,
    CallerIPCount,
    CallerUACount

 

 

Microsoft Sentinel also makes it possible to identify storage accounts where public access is allowed. The following query identifies storage accounts with public access or public network access enabled.

 

 

AzureActivity
| where TimeGenerated > ago(30d)
// Extract storage write events
| where OperationNameValue =~ "MICROSOFT.STORAGE/STORAGEACCOUNTS/WRITE"
| where ActivityStatusValue =~ "Start"
// Extract public access details from the properties
| extend RequestProperties = parse_json(tostring(Properties_d["requestbody"]))["properties"]
| extend PublicAccess = RequestProperties["allowBlobPublicAccess"]
| extend PublicNetworkAccess = RequestProperties["publicNetworkAccess"]
| extend ResourceId = iff(isnotempty(_ResourceId), _ResourceId, ResourceId)
| extend StorageAccount = split(ResourceId, "/")[-1]
| project
    TimeGenerated,
    Account=tostring(StorageAccount),
    ResourceId,
    OperationNameValue,
    PublicAccess,
    PublicNetworkAccess,
    RequestProperties,
    ActivityStatusValue
| where isnotempty(PublicAccess)
| summarize
    arg_max(TimeGenerated, PublicAccess, PublicNetworkAccess)
    by Account
| where PublicAccess == true
| project LastStatus=TimeGenerated, Account, PublicAccess, PublicNetworkAccess
| order by LastStatus

 

 

 

 

Background - Azure Storage accounts and access levels

 

Azure Storage accounts store data objects, including blobs, file shares, queues, tables, and disks. The storage account provides a unique namespace for the data to be accessible from anywhere globally. Data in the storage account is durable, highly available, secure, and massively scalable.

 

Azure Blob Storage is one of the most popular services used in storage accounts. It's Microsoft's object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data which doesn't adhere to a particular data model or definition, such as text or binary data.

 

The cloud provider's APIs make it easy to retrieve data directly from the storage service, and threat actors leverage it to collect and exfiltrate sensitive information from open resources.

 

Blob storage offers three types of resources:

  • Storage account – provides a unique namespace for the data; no two storage accounts can have the same name.
  • Container in storage accounts – organizes a set of blobs, like a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs.
  • Blob in containers – There are three types of blobs: block blobs that store text and binary data, append blobs that are made up of blocks like block blobs but are optimized for append operations, and page blobs that store random-access files. Page blobs store virtual hard drive (VHD) files and serve as disks for Azure virtual machines.

Let's take the example used throughout this post. We created a storage account named "mediaprod1" with three containers named "pics", "vids", and "audio", holding blobs representing pictures, videos, and audio files. The following diagram shows the relationship between the resources: 

[Diagram: relationship between the storage account, containers, and blobs]

 

This is how it looks in the Azure Portal:

[Screenshot: Containers list]

  List of blobs within the ‘pics’ container: 

[Screenshot: Blobs list]


The following is important for our topic because it is exactly what threat actors exploit. Every blob stored in the account has an address that combines the account name, the blob service endpoint, the container name, and the blob name. Together, these form the endpoint URL that allows access to the blob. The structure is as follows:

https://<<storage-account-name>>.blob.core.windows.net/<<container-name>>/<<blob-name>>

If we take our example, the URL to one of the blobs in the ‘mediaprod1’ account looks like this:


[Image: Blob URL breakdown]

 

Accessing the data in storage accounts

Data is stored in blobs, and access to that data is determined by the networking rules, storage account access configuration, and the access level to the container that stores the data.

 

Storage accounts are configured by default to allow public access from the Internet, but it is possible to block it. Containers can be set to three different access levels, letting resource owners determine whether the data can be accessed unauthenticated (also known as anonymous access) or only with authentication, which requires the storage account key, a SAS token, or Azure AD to access the container and blob information.

 

The three access levels to containers:

  • Container – Open to public access. Blobs and container data can be read without authentication. It is also possible to enumerate (list) all the blobs within the container without authentication if the storage account and container names are known.


     

  • Blob – Semi-open to public access. It's impossible to get the container information or enumerate the blobs within it without authentication. 


    But blob data can be read without authentication (anonymously) with its URL, meaning that threat actors can guess the full URL (account name, container name, and blob name) and access the data. 

    This access level is still open to public access but is more restricted than the ‘Container’ access level.

 

  • Private (default) – Requires authentication to access blobs, container data, and enumerate blobs within the container. This is the most secure container access level.
    In the screenshots below, you can see the container access level configuration of a blob container (in the Azure Portal):

    [Screenshot: container access level configuration]
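
    For completeness, a minimal sketch of inspecting and tightening a container's access level with the Az.Storage module (the account key is a placeholder; the Set Container ACL operation requires Shared Key authorization):

    # Inspect the current access level of the 'pics' container, then set it to private.
    $ctx = New-AzStorageContext -StorageAccountName "mediaprod1" -StorageAccountKey "<account-key>"
    (Get-AzStorageContainer -Name "pics" -Context $ctx).PublicAccess
    # 'Off' corresponds to the 'Private' access level (no anonymous access)
    Set-AzStorageContainerAcl -Name "pics" -Context $ctx -Permission Off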

 

 

 

MITRE ATT&CK® tactics and techniques covered in this post

Cloud Infrastructure Discovery (Technique T1580)
An adversary may attempt to discover available infrastructure and resources within an infrastructure-as-a-service (IaaS) environment. This includes computing resources such as instances, virtual machines, and snapshots, as well as resources of other services, including storage and database services.
Cloud Storage Object Discovery (Technique T1619)

Adversaries may enumerate objects in cloud storage infrastructure and use this information during automated discovery to shape follow-on behaviors, including requesting all or specific objects from cloud storage. After identifying available storage services, adversaries may access the contents/objects stored in cloud infrastructure.

Cloud service providers offer APIs allowing users to enumerate objects stored within cloud storage. Examples include ListObjectsV2 in AWS and List Blobs in Azure.
Data from Cloud Storage Object (Technique T1530)

Adversaries may access data objects from improperly secured cloud storage. These solutions differ from other storage solutions (such as SQL or Elasticsearch) because there is no overarching application. Data from these solutions can be retrieved directly using the cloud provider's APIs.

 

 

 

Learn More

 

  1. Learn more about the threat matrix for storage services.

  2. Get started and learn more about the capabilities and features of Microsoft Defender for Storage.

  3. Watch the “Defender for Cloud in the Field - Defender for Storage” YouTube episode to learn more about the threat landscape for Azure Storage and how Microsoft Defender for Storage can help detect and mitigate these threats.

  4. Visit the Microsoft Defender for Cloud website to learn more about the plans and capabilities.

  5. Subscribe to our YouTube series for product deep dives.

  6. Follow us at @MSThreatProtect for the latest news and updates on cybersecurity.

 

 
