This article describes how to get container-level statistics in Azure Blob Storage and how to work with the information provided by blob inventory.
The approach presented here uses Azure Databricks and is best suited to storage accounts that contain a large amount of data.
By the end of this article, you will be able to create a script to calculate:
The total number of blobs in a container and the total container capacity
The total number of snapshots in a container and the total snapshots capacity
The total number of blob versions in a container and the total versions capacity
The total number of blobs in a container by BlobType and by Content-Type
This approach is based on two steps:
Use Blob Inventory to collect data from the storage account
Use Azure Databricks to analyze the data collected with Blob Inventory
Each step has a theoretical introduction and a practical example.
The Azure Storage blob inventory feature provides an overview of your containers, blobs, snapshots, and blob versions within a storage account. Use the inventory report to understand various attributes of blobs and containers, such as your total data size, age, encryption status, immutability policy, legal hold, and so on. The report provides an overview of your data for business and compliance requirements (please see more here: Azure Storage blob inventory).
The steps to enable inventory reports are presented here: Enable Azure Storage blob inventory reports.
Please keep in mind the following:
It can take up to 24 hours for an inventory run to complete.
This practical example is intended to help the user understand the theoretical introduction.
Please see below the steps I followed to create an inventory rule:
As mentioned above, it can take up to 24 hours for an inventory run to complete. After 24 hours, you can see that the inventory rule was executed on August 17th.
The generated file is almost 11 MiB. Please keep in mind that a file of this size can still be opened with Excel; Azure Databricks should be used when regular tools like Excel are not able to read the file.
Azure Databricks is a data analytics platform optimized for the Microsoft Azure cloud services platform. Azure Databricks offers three environments for developing data intensive applications: Databricks SQL, Databricks Data Science & Engineering, and Databricks Machine Learning (What is Azure Databricks?).
Please be aware of Azure Databricks Pricing before deciding to use it.
To start working with Azure Databricks we need to create and deploy an Azure Databricks workspace, and we also need to create a cluster. Please find here a QuickStart to Run a Spark job on Azure Databricks Workspace using the Azure portal.
Now that we have an Azure Databricks workspace and a cluster, we will use Azure Databricks to read the CSV file generated by the inventory rule created above and to calculate the container stats.
To connect the Azure Databricks workspace to the storage account where the blob inventory file is stored, we have to create a notebook. Please follow the steps presented here: Run a Spark job, with focus on steps 1 and 2.
With the notebook created, it is necessary to access the blob inventory file. Please copy the blob inventory file to the root of the container/filesystem.
If you have an ADLS Gen2 (hierarchical namespace enabled) account, please follow step 1. If you do not, please follow step 2:
# Step 1: ADLS Gen2 account (hierarchical namespace enabled) - reads through the abfss driver and the dfs endpoint
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
import pyspark.sql.functions as F
storage_account_name = "StorageAccountName"
storage_account_key = "StorageAccountKey"
container = "ContainerName"
blob_inventory_file = "blob_inventory_file_name"
# Set spark configuration
spark.conf.set("fs.azure.account.key.{0}.dfs.core.windows.net".format(storage_account_name), storage_account_key)
# Read blob inventory file
df = spark.read.csv("abfss://{0}@{1}.dfs.core.windows.net/{2}".format(container, storage_account_name, blob_inventory_file), header='true', inferSchema='true')
# Step 2: storage account without hierarchical namespace - reads through the wasbs driver and the blob endpoint
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
import pyspark.sql.functions as F
storage_account_name = "StorageAccountName"
storage_account_key = "StorageAccountKey"
container = "ContainerName"
blob_inventory_file = "blob_inventory_file_name"
# Set spark configuration
spark.conf.set("fs.azure.account.key.{0}.blob.core.windows.net".format(storage_account_name), storage_account_key)
# Read blob inventory file
df = spark.read.csv("wasbs://{0}@{1}.blob.core.windows.net/{2}".format(container, storage_account_name, blob_inventory_file), header='true', inferSchema='true')
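Before calculating the stats, it may be helpful to confirm that the inventory file was loaded correctly and that the columns used in the queries below (Content-Length, BlobType, Snapshot, VersionId, Content-Type) are present. A minimal check, assuming the df DataFrame created in the cell above:
# Inspect the inferred schema and a few sample rows (the available columns
# depend on the schema fields selected when the inventory rule was created)
df.printSchema()
display(df.limit(10))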
Please see below how to calculate the container stats with Azure Databricks. For each statistic, the code sample is presented first, followed by the result of the code execution.
Calculate the total number of blobs in the container
print("Total number of blobs in the container:", df.count())
Calculate the total container capacity (in bytes)
display(df.agg({'Content-Length': 'sum'}).withColumnRenamed("sum(Content-Length)", "Total Container Capacity (in bytes)"))
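The sum above is expressed in bytes. If a more readable unit is preferred, a small variation of the same aggregation (a sketch, assuming the df DataFrame and the F alias from the cell that read the inventory file) converts it to GiB:
# Convert the summed Content-Length from bytes to GiB (1 GiB = 1024^3 bytes)
display(df.agg((F.sum("Content-Length") / (1024 ** 3)).alias("Total Container Capacity (GiB)")))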
Calculate the total number of snapshots in the container
from pyspark.sql.functions import *
print("Total number of snapshots in the container:", df.where(~(col("Snapshot")).like("Null")).count())
Calculate the total container snapshots capacity (in bytes)
dfT = df.where(~(col("Snapshot")).like("Null"))
display(dfT.agg({'Content-Length': 'sum'}).withColumnRenamed("sum(Content-Length)", "Total Container Snapshots Capacity (in bytes)"))
Calculate the total number of versions in the container
from pyspark.sql.functions import *
print("Total number of versions in the container:", df.where(~(col("VersionId")).like("Null")).count())
Calculate the total container versions capacity (in bytes)
dfT = df.where(~(col("VersionId")).like("Null"))
display(dfT.agg({'Content-Length': 'sum'}).withColumnRenamed("sum(Content-Length)", "Total Container Versions Capacity (in bytes)"))
Calculate the total number of blobs in the container by BlobType
display(df.groupBy('BlobType').count().withColumnRenamed("count", "Total number of blobs in the container by BlobType"))
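If a capacity breakdown per BlobType is also useful, the same groupBy can aggregate Content-Length instead of counting rows (a sketch under the same assumptions as the previous cells):
# Sum Content-Length per BlobType to see how the capacity is distributed
display(df.groupBy("BlobType").agg(F.sum("Content-Length").alias("Total capacity by BlobType (in bytes)")))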
Calculate the total number of blobs in the container by Content-Type
display(df.groupBy('Content-Type').count().withColumnRenamed("count", "Total number of blobs in the container by Content-Type"))
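The individual queries above can also be combined into a single pass over the inventory file by using conditional aggregation. The sketch below reuses the same "Null" convention applied to the Snapshot and VersionId columns in the previous cells, and assumes the df DataFrame and the F alias defined earlier:
# Build a single summary row with blob, snapshot, and version counts and capacities
summary = df.agg(
    F.count("*").alias("Total blobs"),
    F.sum("Content-Length").alias("Total capacity (bytes)"),
    F.sum(F.when(~F.col("Snapshot").like("Null"), 1).otherwise(0)).alias("Total snapshots"),
    F.sum(F.when(~F.col("Snapshot").like("Null"), F.col("Content-Length")).otherwise(0)).alias("Snapshots capacity (bytes)"),
    F.sum(F.when(~F.col("VersionId").like("Null"), 1).otherwise(0)).alias("Total versions"),
    F.sum(F.when(~F.col("VersionId").like("Null"), F.col("Content-Length")).otherwise(0)).alias("Versions capacity (bytes)"),
)
display(summary)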
When you finish, please go to Compute and stop the cluster to avoid extra costs.