Azure Storage Accounts provide a reliable and scalable solution for storing blobs, files, queues, and more. While deploying storage accounts and containers is straightforward with infrastructure-as-code (IaC) tools like Bicep, creating a folder structure inside Blob containers requires additional scripting support.
Overview:
This blog demonstrates how to:
- Deploy an Azure Storage Account and Blob containers using Bicep
- Create a folder-like structure inside those containers using PowerShell
This approach is ideal for cloud engineers and DevOps professionals seeking end-to-end automation for structured storage provisioning.
Bicep natively supports the creation of:
- Storage Accounts
- Containers (via the blobServices resource)
However, folders (directories) inside containers are not first-class resources in ARM/Bicep — they're created by uploading a blob with a virtual path, e.g., folder1/blob.txt.
So how can we automate the creation of these folder structures without manually uploading dummy blobs?
You can check out the blog "Designing Reusable Bicep Modules: A Databricks Example" for a good reference on how to structure the storage account pattern. It covers reusable module design and shows how to keep things clean and consistent.
1. Deploy an Azure Storage Account and Blob containers using Bicep
You can provision a Storage Account and its associated Blob Containers with just a few lines of Bicep, as the sketch after the steps below shows.
The process involves:
- Defining the Microsoft.Storage/storageAccounts resource for the Storage Account.
- Adding a nested blobServices/containers resource to create Blob containers within it.
- Using parameters to dynamically assign names, access tiers, and network rules.
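Here is a minimal sketch of that module. The parameter names (storageAccountName, containerNames), the Standard_LRS SKU, and the API versions are illustrative assumptions, not the exact values from a production template:

```bicep
@description('Globally unique storage account name')
param storageAccountName string

@description('Blob containers to create inside the account')
param containerNames array = [
  'raw'
  'curated'
]

param location string = resourceGroup().location

resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  properties: {
    accessTier: 'Hot'
    isHnsEnabled: true // hierarchical namespace, needed by az storage fs in the next section
  }
}

resource blobServices 'Microsoft.Storage/storageAccounts/blobServices@2023-01-01' = {
  parent: storageAccount
  name: 'default'
}

// One container per entry in the containerNames parameter
resource containers 'Microsoft.Storage/storageAccounts/blobServices/containers@2023-01-01' = [for containerName in containerNames: {
  parent: blobServices
  name: containerName
}]

output storageAccountName string = storageAccount.name
output containerNames array = containerNames
```

Enabling isHnsEnabled up front matters here, because the folder-creation step in the next section relies on Data Lake Storage Gen2 semantics.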
2. Create a folder-like structure inside those containers using PowerShell
To create a directory structure inside those containers at deployment time, combine Bicep with a deployment script that executes az storage fs directory create. Note that this command targets Azure Data Lake Storage Gen2, so the storage account must have the hierarchical namespace enabled; directories are then real objects rather than simulated prefixes. A sketch of the module follows the list below.
In this setup:
- A Microsoft.Resources/deploymentScripts resource is used.
- The az storage fs directory create command creates virtual folders inside containers.
- Access is authenticated with an account key fetched securely via storageAccount.listKeys().
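Here is a minimal sketch of such a module, assuming parameter names like storageAccountName, containerName, and directoryNames, and illustrative API and CLI versions; adjust these to your environment:

```bicep
param storageAccountName string
param containerName string
param directoryNames array
param location string = resourceGroup().location

resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' existing = {
  name: storageAccountName
}

resource createDirectories 'Microsoft.Resources/deploymentScripts@2023-08-01' = {
  name: 'create-directories-${containerName}'
  location: location
  kind: 'AzureCLI'
  properties: {
    azCliVersion: '2.52.0'
    retentionInterval: 'PT1H'
    environmentVariables: [
      { name: 'ACCOUNT_NAME', value: storageAccountName }
      { name: 'FILE_SYSTEM', value: containerName }
      { name: 'DIRS', value: join(directoryNames, ' ') }
      // Account key is passed as a secure value, per the listKeys() approach above
      { name: 'ACCOUNT_KEY', secureValue: storageAccount.listKeys().keys[0].value }
    ]
    // Loops over the space-separated directory list and creates each one
    scriptContent: '''
      for dir in $DIRS; do
        az storage fs directory create \
          --account-name "$ACCOUNT_NAME" \
          --file-system "$FILE_SYSTEM" \
          --name "$dir" \
          --account-key "$ACCOUNT_KEY"
      done
    '''
  }
}
```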
Parameter Flow and Integration
The solution uses Bicep’s module linking capabilities:
- The main module outputs the Storage Account name and container names.
- These outputs are passed as parameters to the deployment script module.
- The script loops through each container and folder, calling az storage fs directory create for each path; the wiring is sketched below.
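Put together, the wiring in main.bicep could look roughly like this; the module paths (modules/storage.bicep, modules/folders.bicep) and parameter names are assumptions for illustration:

```bicep
param storageAccountName string
param containerNames array = [
  'raw'
]
param folderNames array = [
  'incoming'
  'processed'
]

// Deploys the storage account and containers, and exposes their names as outputs
module storage 'modules/storage.bicep' = {
  name: 'storage'
  params: {
    storageAccountName: storageAccountName
    containerNames: containerNames
  }
}

// One deployment-script module per container, consuming the storage module's outputs
module folders 'modules/folders.bicep' = [for container in containerNames: {
  name: 'folders-${container}'
  params: {
    storageAccountName: storage.outputs.storageAccountName
    containerName: container
    directoryNames: folderNames
  }
}]
```

Passing storage.outputs.storageAccountName (rather than the raw parameter) also gives Bicep an implicit dependency, so the script only runs once the account and containers exist.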
And that's the final setup: the storage account and containers are deployed, with the directories created inside them. Everything's all set!
Conclusion
This approach is especially useful in enterprise environments where storage structures must be provisioned consistently across environments.
You can extend this pattern further to:
- Tag blobs/folders
- Assign RBAC roles (see the sketch after this list)
- Handle folder-level metadata
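As an example of the RBAC extension, here is a hedged sketch that assigns the built-in Storage Blob Data Contributor role at the account scope; the principalId parameter is an assumption you would supply (the object ID of a user, group, or managed identity):

```bicep
param storageAccountName string

@description('Object ID of the user, group, or managed identity to grant access')
param principalId string

resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' existing = {
  name: storageAccountName
}

// Built-in role definition ID for "Storage Blob Data Contributor"
var blobDataContributorId = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'ba92f5b4-2d11-453d-a403-e96b0029c9fe')

resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  // Role assignment names must be GUIDs; derive one deterministically for idempotency
  name: guid(storageAccount.id, principalId, blobDataContributorId)
  scope: storageAccount
  properties: {
    roleDefinitionId: blobDataContributorId
    principalId: principalId
  }
}
```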
Have you faced similar challenges with Azure Storage provisioning? Share your experience or drop a comment!