Azure Storage
Granting List-Only permissions for users in Azure Storage Account using ABAC
In this blog, we'll explore how to configure list-only permissions for specific users in Azure Storage, allowing them to view the structure of files and directories without accessing or downloading their contents. Granting list-only permissions on an Azure Storage container path lets users list files and directories without reading or downloading their contents. While RBAC manages access at the container or account level, ABAC offers more granular control by leveraging attributes such as resource metadata, user roles, or environmental factors, enabling customized access policies that meet specific requirements.

Disclaimer: Please test this solution before implementing it for your critical data.

Pre-requisites:
Azure Storage GPv2 / ADLS Gen2 storage account.
Make sure you have enough permissions (Microsoft.Authorization/roleAssignments/write) to assign roles to users, for example the Owner or User Access Administrator role.

Note: If you want to grant list-only permission on a particular container, ensure that the permission is applied specifically to that container. This limits the scope of access to just the intended container and enhances security by minimizing unnecessary permissions. In this example, however, I am demonstrating how to implement the role for the entire storage account. This setup allows users to list files and directories across all containers within the storage account, which might be suitable for scenarios requiring broader access.

Action: Follow the steps below to create a Storage Blob Data Reader role assignment with specific conditions using the Azure portal.

Step 1: Sign in to the Azure portal with your credentials. Go to the storage account where you would like the role to be implemented/scoped. Select Access Control (IAM) -> Add -> Add role assignment.

Step 2: On the Roles tab, select (or search for) Storage Blob Data Reader and click Next. On the Members tab, select User, group, or service principal to assign the role to one or more Azure AD users, groups, or service principals. Click Select members, then find and select the users, groups, or service principals; you can type in the Select box to search the directory by display name or email address. Select the user and continue with Step 3 to configure conditions.

Step 3: The Storage Blob Data Reader role provides access to list and read/download blobs, so we need to add appropriate conditions to restrict the read/download operations. On the Conditions tab, click Add condition; the Add role assignment condition page appears. In the Add action section, click Add action; the Select an action pane appears, showing a filtered list of data actions based on the role assignment that will be the target of your condition. Check the box next to Read a blob, then click Select.

Step 4: Add the build expression in such a way that it evaluates to false, so that the result depends entirely on the action condition above, and save. On the Review + assign tab, click Review + assign to assign the role with the condition. After a few moments, the security principal is assigned the role.

Please note: Along with the above permission, I have given the user Reader permission at the storage account level. You could grant the Reader permission at the resource, resource group, or subscription level instead. There are two planes to consider when granting permissions to a user: the management plane and the data plane.
The management plane consists of operations related to the storage account itself, such as listing the storage accounts in a subscription or retrieving and regenerating storage account keys. Data plane access refers to the ability to read, write, or delete data inside the containers. For more info, please refer to: https://docs.microsoft.com/en-us/azure/role-based-access-control/role-definitions#management-and-dat... To understand the built-in roles available for Azure resources, please refer to: https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles Hence, it is important to grant at least the Reader role at the management plane level so the user can test the setup in the Azure portal.

Step 5: Test the condition (ensure that the authentication method is set to Azure AD User Account and not Access key). The user can list the blobs inside the container, but reading/downloading a blob fails.

Related documentation:
What is Azure attribute-based access control (Azure ABAC)? | Microsoft Learn
Azure built-in roles - Azure RBAC | Microsoft Learn
Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal - Azure ABAC - Azure Storage | Microsoft Learn
Add or edit Azure role assignment conditions using the Azure portal - Azure ABAC | Microsoft Learn
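If you prefer to script this instead of using the portal, the same role assignment and condition can be created with Azure CLI. The sketch below is illustrative, not the exact string the portal generates: the user identity, subscription, resource group, account name, and the always-false container-name comparison are all placeholders you would replace and test in your own environment.

```azurecli
# Minimal sketch: assign Storage Blob Data Reader with an ABAC condition.
# The condition targets the "Read a blob" data action and pairs it with an
# expression that always evaluates to false (a container name that does not
# exist), so blob reads/downloads are denied while listing remains allowed.
az role assignment create \
  --role "Storage Blob Data Reader" \
  --assignee "<user-object-id-or-upn>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>" \
  --condition "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'container-that-does-not-exist'))" \
  --condition-version "2.0"
```

As with the portal flow, pair this with at least the Reader role at the management plane so the user can browse to the account in the Azure portal.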
Why is the Azure Monitor chart showing dashed lines for the Availability metric?

Have you ever noticed the dashed lines on the "Availability" Azure Monitor metric for your storage account? What's that about? Is your storage account down? Why are there sections with a dashed line and other sections with a solid line?

It turns out that Azure metrics charts use a dashed line style to indicate a missing value (also known as a "null value") between two known time-grain data points. In other words, this behavior is by design, and it is useful for identifying missing data points. The line chart is a good choice for visualizing trends in high-density metrics, but it can be difficult to interpret for metrics with sparse values, especially when correlating values with the time grain is important. The dashed line makes these charts easier to read, but if your chart is still unclear, consider viewing the metric with a different chart type. For example, a scatter chart for the same metric clearly shows each time grain by plotting a dot only when there is a value and skipping the data point altogether when the value is missing, which makes it much clearer where the missing data points are.

But why would there be any missing data points for the "Availability" metric in the first place? The Azure Storage resource provider only reports availability data to Azure Monitor when there is ongoing activity, that is, when it is processing requests against any of its services (blob, queue, file, table). If you are not sending any requests to your storage account, you should expect the "Availability" metric to show large sections of dashed lines and understand that this is by design. This is critical to know, because otherwise you might conclude from a chart full of gaps that your storage account has been "unavailable" for several hours. Looking at the scatter chart instead makes it clear that the account simply saw long periods of inactivity: at one point a single data point showed reduced availability, followed by another gap reflecting a period of no activity, after which the account started receiving requests again and the "Availability" metric returned to 100%.

References
Chart shows dashed line: https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-troubleshoot#chart-shows-dashed-line
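If you want to see the raw data points behind the chart, you can also query the metric from the command line. This is a minimal sketch with a placeholder resource ID; time grains with no traffic simply come back without a value, which is exactly what the dashed segments represent.

```azurecli
# List the account-level Availability metric at a one-hour grain.
# Hours with no requests return no value (null), which the portal
# renders as dashed line segments.
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>" \
  --metric "Availability" \
  --interval PT1H \
  --aggregation Average \
  --output table
```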
Converting Page or Append Blobs to Block Blobs with ADF

In certain scenarios, a storage account may contain a significant number of page blobs in the hot access tier that are infrequently accessed or retained solely for backup purposes. To optimise costs, it is desirable to transition these page blobs to the archive tier. However, as indicated in the following documentation - https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview - the ability to set the access tier is only available for block blobs; this functionality is not supported for append or page blobs.

The Azure Blob Storage connector in Azure Data Factory can copy data from block, append, or page blobs, but it can only write data to block blobs: https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage?tabs=data-factory#supported-capabilities
Note: No extra configuration is required to set the blob type on the destination. By default, the ADF copy activity creates blobs as block blobs.

In this blog, we will look at how to use Azure Data Factory to copy page blobs to block blobs. Please note that this is applicable to append blobs as well. Let's take a look at the steps.

Step 1: Creating the ADF instance
Create an Azure Data Factory resource in the Azure portal, referring to the following document: https://learn.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory After creation, click on "Launch Studio".

Step 2: Creating the datasets
Create two datasets by navigating to Author -> Datasets -> New dataset. These datasets are used as the source and sink for the ADF copy activity. Select "Azure Blob Storage", click Continue, then select "Binary" and continue.

Step 3: Creating the linked service
Create a new linked service and provide the name of the storage account that contains the page blobs, then provide the file path where the page blobs are located. You also need a dataset for the destination: repeat the dataset-creation steps above to create a destination dataset that writes the blobs to the target storage account as block blobs.
Note: You can use the same or a different storage account for the destination dataset; set it as per your requirements.

Step 4: Configuring a Copy data pipeline
Once the two datasets are created, create a new pipeline and, under the "Move and Transform" section, drag and drop the "Copy data" activity. Under the Source and Sink sections, select from the drop-downs the source and destination datasets created in the previous steps. Select the "Recursively" option and publish the changes.
Note: You can configure the filters and copy behaviour as per your requirements.

Step 5: Debugging and validating
Now that the configuration is complete, click on "Debug". If the pipeline activity ran successfully, you should see a "Succeeded" status in the output section. Verify the blobs in the destination storage account: the blob type should show as Block blob and the access tier as Hot.

After converting the blobs to block blobs, several methods are available to change their access tier to archive, including a blob lifecycle management policy, storage actions, or Az CLI / PowerShell scripts.

Conclusion
Utilising ADF enables the conversion of page or append blobs to block blobs, after which any standard method, such as an LCM policy or storage actions, may be used to change the access tier to archive.
This strategy offers a more streamlined and efficient solution compared to developing custom code or scripts.

Reference links:
https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview
https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage?tabs=data-factory#supported-capabilities
https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview
https://learn.microsoft.com/en-us/azure/storage-actions/storage-tasks/storage-task-quickstart-portal
https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview
https://learn.microsoft.com/en-us/azure/storage/blobs/archive-blob?tabs=azure-powershell#bulk-archive
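As a concrete example of the Az CLI route mentioned above, here is a minimal sketch for archiving one of the converted blobs. The account, container, and blob names are placeholders; for large volumes, a lifecycle management policy or storage task remains the more practical option.

```azurecli
# Move a converted (block) blob to the Archive tier.
# set-tier only works on block blobs, which is why the page/append
# blobs had to be copied with ADF first.
az storage blob set-tier \
  --account-name "<storage-account>" \
  --container-name "<container>" \
  --name "<blob-name>" \
  --tier Archive \
  --auth-mode login
```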
Purview Webinars

REGISTER FOR ALL WEBINARS HERE

Upcoming Microsoft Purview Webinars
JULY 15 (8:00 AM) Microsoft Purview | How to Improve Copilot Responses Using Microsoft Purview Data Lifecycle Management
Join our non-technical webinar and hear the unique, real-life case study of how a large global energy company successfully implemented Microsoft automated retention and deletion across the entire M365 landscape. You will learn how the company used Microsoft Purview Data Lifecycle Management to achieve a step up in information governance and retention management across a complex matrix organization, paving the way for the safe introduction of Gen AI tools such as Microsoft Copilot.

2025 Past Recordings
JUNE 10 Unlock the Power of Data Security Investigations with Microsoft Purview
MAY 8 Data Security - Insider Threats: Are They Real?
MAY 7 Data Security - What's New in DLP?
MAY 6 What's New in MIP?
APR 22 eDiscovery New User Experience and Retirement of Classic
MAR 19 Unlocking the Power of Microsoft Purview for ChatGPT Enterprise
MAR 18 Inheriting Sensitivity Labels from Shared Files to Teams Meetings
MAR 12 Microsoft Purview AMA - Data Security, Compliance, and Governance
JAN 8 Microsoft Purview AMA | Blog Post

📺 Subscribe to our Microsoft Security Community YouTube channel for ALL Microsoft Security webinar recordings, and more!
Rehydrating Archived Blobs via Storage Task Actions

Azure Storage Actions is a fully managed platform designed to automate data management tasks for Azure Blob Storage and Azure Data Lake Storage. You can use it to perform common data operations on millions of objects across multiple storage accounts without provisioning extra compute capacity and without writing code. Storage task actions can be used to rehydrate archived blobs to any tier as required. Please note that there is currently no option to set the rehydration priority; it defaults to Standard.

Note: Azure Storage Actions is generally available in the following public regions: https://learn.microsoft.com/en-us/azure/storage-actions/overview#supported-regions It is currently in PREVIEW in the following regions: https://learn.microsoft.com/en-us/azure/storage-actions/overview#regions-supported-at-the-preview-level

Below are the steps to achieve the rehydration.

Create a task:
In the Azure portal, search for Storage tasks. Under Services, select Storage tasks - Azure Storage Actions. On the Azure Storage Actions | Storage Tasks page, select Create. Fill in all the required details and click Next to open the Conditions page. Add conditions such as the ones shown if you want to rehydrate to the Cool tier.

Add the assignment:
Select Add assignment. In the Select scope section, select your subscription and storage account and name the assignment. In the Role assignment section, in the Role drop-down list, select Storage Blob Data Owner to assign that role to the system-assigned managed identity of the storage task. In the Filter objects section, specify a filter if you want the task to run on specific objects rather than the whole storage account. In the Trigger details section, choose how the task runs and then select the container where you'd like to store the execution reports. Select Add. On the Tags tab, select Next. On the Review + create tab, select Review + create. When the task is deployed, the "Your deployment is complete" page appears. Select Go to resource to open the Overview page of the storage task.

Enable the task assignment:
Storage task assignments are disabled by default. Enable assignments from the Assignments page: select Assignments, select the assignment, and then select Enable. The task assignment is queued and will run at the specified time.

Monitoring the runs:
After the task completes running, you can view the results of the run. With the Assignments page still open, select View task runs. Select the View report link to download a report. You can also view these comma-separated reports in the container that you specified when you configured the assignment.

Reference links:
About Azure Storage Actions - Azure Storage Actions | Microsoft Learn
Storage task best practices - Azure Storage Actions | Microsoft Learn
Known issues and limitations with storage tasks - Azure Storage Actions | Microsoft Learn
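Since storage task actions do not currently expose the rehydration priority, it may help to know that the same rehydration can be requested per blob from the command line, where the priority is configurable. A minimal sketch with placeholder names:

```azurecli
# Rehydrate a single archived blob to the Cool tier with High priority.
# Storage task actions default to Standard priority; set-tier lets you choose.
az storage blob set-tier \
  --account-name "<storage-account>" \
  --container-name "<container>" \
  --name "<archived-blob>" \
  --tier Cool \
  --rehydrate-priority High \
  --auth-mode login
```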
Azure Storage Options - A Guide to Choosing the right storage option

At the heart of this post is Kairos IMS, an innovative Impact Management System designed to empower human-serving nonprofits and social impact organizations. Co-developed by the Urban League of Broward County and our trusted technology partner, Impactful, Kairos IMS reduces administrative burdens, enhances holistic care, and enables organizations to leverage data for increased agility and seamless service delivery. In this blog series, we'll take a closer look at the powerful technologies that fuel Kairos IMS, from Azure services to security frameworks, offering insight into how modern infrastructure supports mission-driven impact. Click here to learn more.

This guide provides a nonprofit-friendly breakdown of the main Azure Storage types, what they're good for, and how to choose based on your needs and budget.

The 4 Main Types of Azure Storage
Azure offers four primary types of storage:

Storage Type | What It Stores | Best For
Blob Storage | Unstructured data: images, videos, PDFs | Media files, documents, backups
File Storage | Shared files accessible via SMB protocol | Team file shares, legacy apps, migrations
Table Storage | NoSQL key-value data | Lightweight data like logs or sensor data
Queue Storage | Messages for task automation | Background tasks, app-to-app communication

Let's break them down in more detail, with nonprofit use cases.

🟣 1. Azure Blob Storage (Binary Large Object)
What it is: A flexible place to store unstructured data, like documents, images, and videos.
Use cases for nonprofits: uploading program videos or workshop recordings for your community; storing scanned forms, reports, or grant applications; keeping secure backups of sensitive files.
Cost tip: You can save money using the Cool or Archive tiers for files you rarely access.

🔵 2. Azure File Storage
What it is: A cloud-based shared file system that acts like a network drive.
Use cases for nonprofits: replacing on-premises file servers; collaborating across teams in remote or hybrid environments; making legacy nonprofit software cloud-accessible.
Bonus: It integrates easily with Windows using standard SMB protocols, so your team won't need to learn anything new.

🟢 3. Azure Table Storage
What it is: A NoSQL storage option for simple key-value pairs.
Use cases for nonprofits: storing lightweight data like newsletter sign-ups or app usage logs; a low-cost alternative to a full database.
Note: It's not for complex queries; this is basic storage, great for lightweight scenarios.

🟡 4. Azure Queue Storage
What it is: A messaging system that lets apps send and receive messages asynchronously.
Use cases for nonprofits: automating tasks, like sending thank-you emails after an online donation; managing volunteer registration workflows.
You probably won't use this directly, but if your IT team or a consultant is building an app for you, it might be part of the backend.

How to Choose: A Quick Guide for Nonprofits
Need | Best Option
Store and access documents, images, or videos | Blob Storage
Share files across staff or locations | File Storage
Store structured data (like a simple database) | Table Storage
Automate tasks between services | Queue Storage
Long-term storage or backups (low cost) | Blob Storage (Archive Tier)
Replacing an on-site file server | File Storage

💡 Cost-Saving Tips for Nonprofits
Use your Azure credits: Eligible nonprofits get $3,500 in free Azure credits annually via Microsoft for Nonprofits.
Pick the right tier: Blob Storage offers Hot, Cool, and Archive tiers based on how often you access data.
Turn on auto-delete or lifecycle rules: Save money by setting old files to auto-delete or move to a cheaper tier (a scripted example appears at the end of this post).

Final Thoughts
Azure Storage offers powerful tools to help your nonprofit stay secure, organized, and scalable. Choosing the right option ensures your team has access to the files and data they need without overspending. Whether you're working with an IT volunteer, a cloud consultant, or just learning it yourself, knowing the basics of Azure Storage puts your organization in a stronger position to grow and serve your community.
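Following up on the lifecycle-rule tip above, here is a minimal sketch of what such a rule can look like when applied with Azure CLI. The thresholds, account name, and resource group are illustrative placeholders, not recommendations; adjust them to your own retention needs.

```azurecli
# Define a lifecycle rule: move block blobs to Cool after 30 days without
# modification and delete them after 365 days.
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "tier-and-expire-old-blobs",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "delete": { "daysAfterModificationGreaterThan": 365 }
          }
        },
        "filters": { "blobTypes": [ "blockBlob" ] }
      }
    }
  ]
}
EOF

# Apply the policy to the storage account.
az storage account management-policy create \
  --account-name "<storage-account>" \
  --resource-group "<resource-group>" \
  --policy @policy.json
```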
Introducing Kairos: A New Era of Case Management for Nonprofits

Why Kairos, Why Now?
Nonprofits have long struggled with fragmented systems, manual processes, and limited access to enterprise-grade technology. Kairos changes that. Built on Microsoft Azure and designed specifically for nonprofits, Kairos offers:
Streamlined Case Management: From intake to closure, every step is digitized and intuitive.
Data-Driven Insights: Real-time dashboards and analytics help teams make smarter decisions.
Custom Workflows: Tailored to the unique needs of each organization, not the other way around.
Collaboration at Scale: Seamless coordination across departments, partners, and service providers.

And it's not just theory. During the recent soft launch, over 70 Urban Leaguers from 30 affiliates joined a live demo led by the Urban League of Broward County's own Daela Holness, showcasing how Kairos is already transforming service delivery.

Built by the Community, for the Community
This isn't a top-down tech deployment. It's a co-creation effort led by voices from across the nonprofit ecosystem. Our team recognized a critical need: nonprofits must own their data. Through deep conversations with nonprofit leaders and frontline staff, we envisioned a system that wouldn't just manage cases but empower entire organizations. Kairos was designed to serve every department, every program, and every team, so they can serve their communities faster, smarter, and more collaboratively. With Kairos, nonprofits can track families and services across programs, not in silos. That's why we call it an impact management system, not just a case management system. It's about seeing the full picture, breaking down barriers, and building stronger, more connected communities.

What's Next?
This blog is just the beginning. We have published a series of deep dives into the technologies powering Kairos, from Azure services and Power BI dashboards to secure document management. Whether you're a nonprofit leader, a technologist, or a curious changemaker, there's something here for you.

Explore the Series
Below is a link to over 20 blogs that talk about the tech behind Kairos and how it fits into the broader nonprofit tech landscape. If you are just getting started in understanding technology, these will explain the resources required for the application, especially if you're considering the deployable model: Kairos IMS Blog Resources. Take a look at the Kairos website to learn more.
What Nonprofits Need to Know About Cloud Storage Redundancy

At the heart of this post is Kairos IMS, an innovative Impact Management System designed to empower human-serving nonprofits and social impact organizations. Co-developed by the Urban League of Broward County and our trusted technology partner, Impactful, Kairos IMS reduces administrative burdens, enhances holistic care, and enables organizations to leverage data for increased agility and seamless service delivery. In this blog series, we'll take a closer look at the powerful technologies that fuel Kairos IMS, from Azure services to security frameworks, offering insight into how modern infrastructure supports mission-driven impact. Click here to learn more.

What Is Azure Storage Redundancy?
Azure storage redundancy refers to how your data is copied and stored across multiple physical locations to keep it safe and accessible, even if hardware fails or a data center goes offline. Think of it as creating backup copies in real time, so if one server goes down, another picks up right where it left off. Azure offers several redundancy options, each with a different level of protection and cost:
Locally Redundant Storage (LRS): Data is replicated three times within a single data center. Great for budget-conscious orgs; the cheapest option.
Zone-Redundant Storage (ZRS): Data is stored across three different availability zones in the same region. Offers higher resilience; mid-tier pricing.
Geo-Redundant Storage (GRS): Data is copied to a secondary region hundreds of miles away. Ideal for disaster recovery; higher cost.
Read-Access Geo-Redundant Storage (RA-GRS): Like GRS, but you can read from the secondary region even if the primary one is down.

Why Redundancy Matters for Nonprofits
Nonprofits are often targets of cyberattacks and also operate in environments where internet outages or power failures can occur. Redundancy ensures that:
You don't lose important grant or donor data.
Services like SharePoint or hosted databases stay online.
You can continue serving your community even in unexpected situations.

Using Your $2,000 in Azure Credits Wisely
Microsoft offers approved nonprofits $2,000 in Azure credits each year through its Microsoft for Nonprofits program. Here's how you can use those credits for storage redundancy:
Start small with LRS or ZRS for frequently used files or backups.
Use GRS for mission-critical data like financial or compliance documents.
Back up virtual machines or databases with geo-redundancy for restore-anywhere capabilities.
Pair with Azure Backup or Site Recovery for additional resilience.
Tip: Monitor your credit usage in the Azure Cost Management and Billing dashboard so you don't overspend.

Getting Started
If your nonprofit already has an Azure subscription through Microsoft's grant, you're ready to go. Here's what to do next:
Log into the Azure portal with admin credentials.
Navigate to Storage Accounts > + Create.
Choose your region and desired redundancy level.
Configure the Advanced, Networking, Data protection, Encryption, and Tags settings, then select Review + create to go over your configuration.
Select Create to make your storage account.
Start uploading files or connecting services like Microsoft 365 or backup tools.

If you're unsure which redundancy level is right for your nonprofit, a good starting point is to use LRS for general storage and reserve GRS for the most critical data. Storage redundancy isn't just a technical term: it's peace of mind.
With Azure and your nonprofit credits, you can build a more resilient and secure digital foundation without spending out of pocket. Not sure how to get started? Microsoft has nonprofit partners and tech support that can help you make the most of your credits. Your mission is too important to risk downtime, so let's make sure your data is always safe and accessible.
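If you would rather script the "Getting Started" steps above than click through the portal, a minimal Azure CLI sketch looks like this. The account name, resource group, and region are placeholders, and the SKU is where you pick the redundancy level.

```azurecli
# Create a general-purpose v2 storage account with geo-redundant storage.
# Swap the SKU for Standard_LRS, Standard_ZRS, or Standard_RAGRS as needed.
az storage account create \
  --name "<storage-account>" \
  --resource-group "<resource-group>" \
  --location eastus \
  --kind StorageV2 \
  --sku Standard_GRS
```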
Protecting Your Mission: How Azure's Point-in-Time Restore Keeps Nonprofit Data Safe

At the heart of this post is Kairos IMS, an innovative Impact Management System designed to empower human-serving nonprofits and social impact organizations. Co-developed by the Urban League of Broward County and our trusted technology partner, Impactful, Kairos IMS reduces administrative burdens, enhances holistic care, and enables organizations to leverage data for increased agility and seamless service delivery. In this blog series, we'll take a closer look at the powerful technologies that fuel Kairos IMS, from Azure services to security frameworks, offering insight into how modern infrastructure supports mission-driven impact. Click here to learn more.

As nonprofits continue to embrace cloud technology to enhance their day-to-day operations and better serve their communities, protecting critical data becomes more important than ever. Whether it's donor records, program data, or volunteer tracking, the risk of accidental deletion or corruption is real. That's why features like Point-in-Time Restore (PITR) in Microsoft Azure play a vital role in ensuring your data stays safe and your mission stays on track.

What Is Point-in-Time Restore?
Point-in-Time Restore is a feature in Azure that allows you to recover a database to a specific moment in the past, down to the second. Think of it like hitting "rewind" on your database. Whether it's due to human error, application issues, or malicious activity, PITR provides a safety net by allowing you to restore data to a time before the incident occurred.

Services in Azure that support Point-in-Time Restore:
Azure SQL Database: the most common use case. PITR allows you to restore a database to any second within the retention period (up to 35 days by default).
Azure Database for PostgreSQL - Single Server: supports PITR with up to 35-day retention.
Azure Database for MySQL - Single Server: also supports PITR for recovering from accidental changes.
Azure Cosmos DB (with continuous backup): PITR is available if you enable continuous backup; you can restore to any point within the past 30 days.

What PITR is not available for (as of now):
Azure Blob Storage (uses versioning and soft delete instead)
Azure Files
Azure Virtual Machines (use backup snapshots and a Recovery Services vault)
Azure Key Vault and Azure App Service (require other recovery strategies)

Why Nonprofits Should Care About PITR
Nonprofits often operate with limited IT staff and budgets, making automated and reliable data protection solutions essential. Here's how PITR benefits your organization:
Peace of Mind: Mistakes happen. PITR ensures you can recover from accidental deletions or changes without major downtime.
Minimal Disruption: Restore your Azure SQL Database or other supported resources without disrupting other parts of your cloud environment.
Compliance Support: If you handle donor information or health records, maintaining recoverability helps with data protection regulations.

How Does PITR Work in Azure?
Azure automatically creates full database backups every week, differential backups every 12-24 hours, and transaction log backups every 5-10 minutes. With PITR, you can choose any point within your retention period (up to 35 days by default) and restore your data to that exact moment. The restored database is created as a new copy, so you don't overwrite the existing data unless you choose to.

Use Case Example
Imagine your nonprofit is using an Azure SQL Database to track volunteer hours. One day, someone accidentally runs a script that deletes an entire table.
With PITR, you can restore the database to just before the incident, recovering your data without losing more than a few minutes' worth of work.

Steps to Perform a Point-in-Time Restore
Go to the Azure portal and type SQL Database into the search bar.
Navigate to your SQL database.
Click Restore from the toolbar.
Select Point-in-time.
Choose the desired restore point time.
Provide a new name for the restored database.
Configure any other desired settings, then Review + create.
Select Create.
That's it: Azure takes care of the heavy lifting. (For a command-line alternative, see the sketch at the end of this post.)

Tips for Nonprofits
Review retention settings: Ensure your database's PITR retention period aligns with your backup and compliance policies.
Test your restores: Regularly verify that you can perform a PITR to reduce surprises during real emergencies.
Educate your team: Train staff on best practices for data entry and deletion to reduce the risk of needing restores.

Data loss doesn't have to be catastrophic. Azure's Point-in-Time Restore is a powerful, low-effort way for nonprofits to stay resilient and mission-focused. It enables you to recover swiftly from setbacks and continue serving your community without unnecessary delays. Happy Restoring!
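For teams comfortable with the command line, the portal steps above can also be done with a single Azure CLI command. This is a minimal sketch: the server, resource group, and timestamp are placeholders, and the database name simply echoes the volunteer-hours example earlier; the restore always lands in a new database.

```azurecli
# Restore an Azure SQL database to a specific point in time.
# The result is a new database; the original is left untouched.
az sql db restore \
  --resource-group "<resource-group>" \
  --server "<sql-server-name>" \
  --name "VolunteerHours" \
  --dest-name "VolunteerHours-restored" \
  --time "2025-05-01T14:30:00Z"
```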
Lifecycle Management of Blobs (Deletion) using Automation Tasks

Background:
We often encounter scenarios where we need to delete blobs that have been idle in a storage account for an extended period. For a small number of blobs, deletion can be handled easily using the Azure portal, Storage Explorer, or inline scripts such as PowerShell or Azure CLI. However, in most cases we deal with a large volume of blobs, making manual deletion impractical. In such situations, it's essential to leverage automation tools to streamline the deletion process. One effective option is using Automation Tasks, which can help schedule and manage blob deletions efficiently.
Note: Behind the scenes, an automation task is actually a logic app resource that runs a workflow, so the Consumption pricing model of Logic Apps applies to automation tasks.

Scenarios where Automation Tasks are helpful:
You need to automate the deletion of blobs older than a specific time, in days, weeks, or months.
You don't want to put in much manual effort and would rather have simple UI-based steps to achieve your goal.
You have system containers and you want to act on them. LCM (lifecycle management) can also be leveraged to automate deletion of older blobs; however, LCM cannot be used to delete blobs from system containers.
You have to work on page blobs.

Setup Automation Tasks:
Let's walk through how to achieve our goal.
Navigate to the desired storage account, scroll down to the "Automation" section, select the "Tasks" blade, and then click "Add Task" from the top or bottom panel.
On the next page, click "Select".
Nothing needs to change on the page that opens, so just click "Next: Configure" and move to the next screen.
Fill in the new page as per your requirement. I have added a sample, which you can adapt for your own containers; 'sample' is a folder inside the container '$web'. The "Expiration Age" field means that blobs older than this number of days will be deleted. In this example, blobs older than 180 days would be deleted; similarly, the value can be configured in weeks or months.
Once these steps are complete, proceed with the creation of the task. After the task is created, you can click "View Run" to see the run history. If you want to modify the task, click on the task's name (for example, "mytask") and re-configure it.

This isn't sufficient on its own; we also need to update some of the steps in the logic app that was created behind the task, saving them before re-running the app.
a) Go to the logic app and navigate to the "Logic app designer" blade.
b) Click the "+" sign and select "Add an action".
c) On the page that opens, search for "List blobs (V2)" and select it.
d) Choose "Enter custom value" and enter your storage account name.
e) Confirm the values for the "List blobs (V2)" action.
f) Now navigate to the "For each" condition.
g) Delete the "Delete blob" action and replace it with "Delete blob (V2)".
h) Configure the "Delete blob (V2)" action accordingly.
i) With all steps ready, save the logic app and click "Run". You should observe the run completing successfully.

Impact due to Firewall:
The above steps work when your storage account is configured for public access.
However, when the firewall is enabled, you need to provide the necessary permissions; otherwise you will encounter 403 "Authorization Failure" errors. There is no issue creating the task, but you will see failures when you check the runs.

To overcome this limitation, navigate to your logic app, generate a managed identity for the app, and grant that identity the "Storage Blob Data Contributor" role.

Step 1. Enable managed identity:
In the Azure portal, go to your logic app resource. Under Settings, select Identity. In the Identity pane, under System assigned, select On and then Save. This step registers the system-assigned identity with Microsoft Entra ID, represented by an object ID.

Step 2. Assign the necessary role:
Open the Azure storage account in the Azure portal. Select Access control (IAM) > Add > Add role assignment. Assign a role like Storage Blob Data Contributor, which includes write access for blobs in an Azure Storage container, to the managed identity. Under Assign access to, select Managed identity > Add members, and choose your logic app's identity. Save and refresh, and you will see the new role configured on your storage account.

Remember that if the storage account and logic app are in different regions, you should add another step in the storage account firewall: whitelist the logic app instance in the "Resource instances" list.

Conclusion:
There are multiple ways to act on blobs, and the right choice depends on your requirements, feasibility, and other factors such as familiarity with the feature and pricing. However, if you want to act upon system containers like $logs or $web, Automation Tasks are one of the most helpful features you can use to achieve your goal.
Note: At the time of writing this blog, this feature is still in preview, so check for any limitations that might impact you before implementing it in your production environment.

References:
Create automation tasks to manage and monitor Azure resources - Azure Logic Apps | Microsoft Learn
Optimize costs by automatically managing the data lifecycle - Azure Blob Storage | Microsoft Learn
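As an appendix to the firewall section, the role assignment for the logic app's managed identity can also be scripted. This is a minimal sketch with placeholder names; it assumes the system-assigned identity has already been enabled, as described in Step 1 above.

```azurecli
# Look up the system-assigned identity of the logic app behind the automation task.
principal_id=$(az resource show \
  --resource-group "<resource-group>" \
  --name "<logic-app-name>" \
  --resource-type "Microsoft.Logic/workflows" \
  --query "identity.principalId" --output tsv)

# Grant that identity Storage Blob Data Contributor on the storage account.
az role assignment create \
  --assignee-object-id "$principal_id" \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```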