Azure Storage
Granting List-Only permissions for users in Azure Storage Account using ABAC
In this blog, we’ll explore how to configure list-only permissions for specific users in Azure Storage, allowing them to view the structure of files and directories without accessing or downloading their contents. Granting list-only permissions on an Azure Storage container path lets those users list files and directories without reading or downloading the data. While RBAC manages access at the container or account level, ABAC offers more granular control by leveraging attributes such as resource metadata, user roles, or environmental factors, enabling customized access policies that meet specific requirements.

Disclaimer: Please test this solution before implementing it for your critical data.

Pre-Requisites:
Azure Storage GPv2 / ADLS Gen2 storage account.
Enough permissions (Microsoft.Authorization/roleAssignments/write) to assign roles to users, such as Owner or User Access Administrator.

Note: If you want to grant list-only permission on a particular container, ensure that the role assignment is applied specifically to that container. This limits the scope of access to just the intended container and enhances security by minimizing unnecessary permissions. However, in this example, I am demonstrating the role for the entire storage account. This setup allows users to list files and directories across all containers within the storage account, which might be suitable for scenarios requiring broader access.

Action: You can follow the steps below to create a Storage Blob Data Reader assignment with specific conditions using the Azure portal.

Step 1: Sign in to the Azure portal with your credentials. Go to the storage account where you would like the role to be scoped. Select Access Control (IAM) -> Add -> Add role assignment.

Step 2: On the Roles tab, select (or search for) Storage Blob Data Reader and click Next. On the Members tab, select User, group, or service principal to assign the selected role to one or more Azure AD users, groups, or service principals. Click Select members, then find and select the users, groups, or service principals; you can type in the Select box to search the directory by display name or email address. Select the user and continue with Step 3 to configure conditions.

Step 3: Storage Blob Data Reader provides access to list and read/download blobs, so we need to add an appropriate condition to restrict the read/download operations. On the Conditions tab, click Add condition; the Add role assignment condition page appears. In the Add action section, click Add action. The Select an action pane appears; it is a filtered list of data actions based on the role assignment that will be the target of your condition. Check the box next to Read a blob, then click Select.

Step 4: Build the expression so that it evaluates to false, so that the result depends entirely on the action selected above: requests matching Read a blob are denied, while listing is unaffected. Save the condition. On the Review + assign tab, click Review + assign to assign the role with the condition. After a few moments, the security principal is assigned the role.
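The post drives this through the portal. For reference, a roughly equivalent Azure CLI sketch is shown below; the scope, the user object ID, and the never-true container-name check ('no-such-container') are placeholders and assumptions, not values from the post. The idea is that the condition only targets the Read a blob action and always evaluates to false for it, so listing remains allowed.

# Sketch: assign Storage Blob Data Reader with an ABAC condition that blocks blob reads but leaves listing intact
az role assignment create \
  --role "Storage Blob Data Reader" \
  --assignee-object-id "<user-object-id>" \
  --assignee-principal-type User \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
  --condition "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'no-such-container'))" \
  --condition-version "2.0"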
Please Note: Along with the above permission, I have given the user the Reader role at the storage account level. You could grant Reader at the resource, resource group, or subscription level instead. Permissions for a user fall into two planes: the Management plane and the Data plane.

The Management plane covers operations on the storage account itself, such as listing the storage accounts in a subscription and retrieving or regenerating the storage account keys. Data plane access refers to reading, writing, or deleting the data inside the containers. For more info, please refer to: https://docs.microsoft.com/en-us/azure/role-based-access-control/role-definitions#management-and-dat... To understand the built-in roles available for Azure resources, please refer to: https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles Hence, it is important to give at least the Reader role at the Management plane level to be able to test this in the Azure portal.

Step 5: Test the condition (ensure that the authentication method is set to Azure AD User Account and not Access key). The user can list the blobs inside the container, while the download/read blob operation fails.

Related documentation:
What is Azure attribute-based access control (Azure ABAC)? | Microsoft Learn
Azure built-in roles - Azure RBAC | Microsoft Learn
Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal - Azure ABAC - Azure Storage | Microsoft Learn
Add or edit Azure role assignment conditions using the Azure portal - Azure ABAC | Microsoft Learn
Why is the Azure Monitor chart showing dashed lines for the Availability metric?

Have you ever noticed the dashed lines on the "Availability" Azure Monitor metric for your Storage Account? What's that about? Is your Storage Account down? Why do we see sections with a dashed line and other sections with a solid line?

Well, it turns out that Azure metrics charts use a dashed line style to indicate a missing value (also known as a "null value") between two known time grain data points. Meaning, this behavior is by design, yes, by design, and it is useful for identifying missing data points. The line chart is a superior choice for visualizing trends of high-density metrics, but it may be difficult to interpret for metrics with sparse values, especially when correlating values with the time grain is important. The dashed line makes reading these charts easier, but if your chart is still unclear, consider viewing the metric with a different chart type. For example, a scatter plot chart for the same metric clearly shows each time grain by only drawing a dot when there is a value and skipping the data point altogether when the value is missing; in my case this is what I get just after selecting the scatter chart:

So, by using the scatter chart it is a lot clearer where the missing data points are. But why would there be any missing data points for the "Availability" Azure Monitor metric in the first place? Well, it turns out that the Azure Storage resource provider only reports Availability data to Azure Monitor when there is ongoing activity, meaning when it is processing requests on any of its services (blob, queue, file, table). If you are not sending any requests to your Storage Account, you should expect the "Availability" Azure Monitor metric to show large sections of dashed lines and understand that this is by design. This is critical because, if you don't know this, you may think that the following chart shows that your Storage Account has been "unavailable" for several hours:

However, by taking a look at the scatter chart, we know that there was a lot of inactivity on this Storage Account, that at some point only one data point was showing "no availability", after which there was another important gap again reflecting a period of no activity, and that the Storage Account then started receiving requests and the "Availability" Azure Monitor metric showed 100% again:
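If you want to see the raw data points behind the chart, including the gaps, you can pull the metric directly. A hypothetical Azure CLI sketch, with the resource ID as a placeholder:

# Sketch: list hourly Availability datapoints for the last day; hours with no traffic simply have no value
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
  --metric Availability --aggregation Average --interval PT1H --offset 1d \
  --output table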
References
Chart shows dashed line https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-troubleshoot#chart-shows-dashed-line

Converting Page or Append Blobs to Block Blobs with ADF

In certain scenarios, a storage account may contain a significant number of page blobs in the hot access tier that are infrequently accessed or retained solely for backup purposes. To optimise costs, it is desirable to transition these page blobs to the archive tier. However, as indicated in the following documentation - https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview - the ability to set the access tier is only available for block blobs; this functionality is not supported for append or page blobs.

The Azure Blob Storage connector in Azure Data Factory can copy data from block, append, or page blobs, but it can write data only to block blobs: https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage?tabs=data-factory#supported-capabilities

Note: No extra configuration is required to set the blob type on the destination. By default, the ADF copy activity creates blobs as block blobs.

In this blog, we will see how to use Azure Data Factory to copy page blobs to block blobs. Please note that this also applies to append blobs. Let's take a look at the steps.

Step 1: Creating the ADF instance
Create an Azure Data Factory resource in the Azure portal referring to the following document - https://learn.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory After creation, click on "Launch Studio" as shown below.

Step 2: Creating datasets
Create two datasets by navigating to Author -> Datasets -> New dataset. These datasets are used as the source and sink for the ADF copy activity. Select "Azure Blob Storage" -> click Continue -> select "Binary" and continue.

Step 3: Creating the linked service
Create a new linked service and provide the storage account name which contains the page blobs. Provide the file path where the page blobs are located. You also need to create another dataset for the destination: repeat the same steps to create a destination dataset that the blobs will be copied to as block blobs.
Note: You can use the same or a different storage account for the destination dataset. Set it as per your requirements.

Step 4: Configuring a Copy data pipeline
Once the two datasets are created, create a new pipeline and, under the "Move and Transform" section, drag and drop the "Copy data" activity as shown below. Under the Source and Sink sections, select from the drop-downs the source and destination datasets respectively which were created in the previous steps. Select the "Recursively" option and publish the changes.
Source:
Sink:
Note: You can configure the filters and copy behaviour as per your requirements.

Step 5: Debugging and validating
Now that the configuration is complete, click on "Debug". If the pipeline activity ran successfully, you should see a "Succeeded" status in the output section as below. Verify the blob type of the blobs in the destination storage account; it should show the blob type as Block blob and the access tier as Hot.

After converting the blobs to block blobs, several methods are available to change their access tier to archive. These include implementing a blob lifecycle management policy, utilizing storage actions, or using Az CLI or PowerShell scripts, as sketched below.
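For example, here is a one-off Azure CLI sketch that archives the copied block blobs in a destination container. The account and container names are placeholders, and for very large containers you would want to handle paging and batching rather than a simple loop:

# Sketch: archive every block blob that the copy activity produced (placeholder names)
az storage blob list --account-name <destaccount> --container-name <destcontainer> \
  --query "[?properties.blobType=='BlockBlob'].name" -o tsv --auth-mode login |
while read -r blobname; do
  az storage blob set-tier --account-name <destaccount> --container-name <destcontainer> \
    --name "$blobname" --tier Archive --auth-mode login
done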
Conclusion
Utilising ADF enables the conversion of page or append blobs to block blobs, after which any standard method such as an LCM policy or storage actions may be used to change the access tier to archive. This strategy offers a more streamlined and efficient solution compared to developing custom code or scripts.

Reference links:
https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview
https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage?tabs=data-factory#supported-capabilities
https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview
https://learn.microsoft.com/en-us/azure/storage-actions/storage-tasks/storage-task-quickstart-portal
https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview
https://learn.microsoft.com/en-us/azure/storage/blobs/archive-blob?tabs=azure-powershell#bulk-archive
Rehydrating Archived Blobs via Storage Task Actions

Azure Storage Actions is a fully managed platform designed to automate data management tasks for Azure Blob Storage and Azure Data Lake Storage. You can use it to perform common data operations on millions of objects across multiple storage accounts without provisioning extra compute capacity and without requiring you to write code. Storage task actions can be used to rehydrate archived blobs into any tier as required. Please note that there is currently no option to set the rehydration priority; it defaults to Standard (a per-blob alternative where the priority can be set is sketched at the end of this post).

Note: Azure Storage Actions is generally available in the following public regions: https://learn.microsoft.com/en-us/azure/storage-actions/overview#supported-regions It is currently in preview in the following regions: https://learn.microsoft.com/en-us/azure/storage-actions/overview#regions-supported-at-the-preview-level

Below are the steps to achieve the rehydration.

Create a task:
In the Azure portal, search for Storage tasks. Under Services, select Storage tasks - Azure Storage Actions. On the Azure Storage Actions | Storage Tasks page, select Create. Fill in all the required details and click Next to open the Conditions page. Add conditions as below if you want to rehydrate to the Cool tier.

Add the assignment:
Select Add assignment. In the Select scope section, select your subscription and storage account and name the assignment. In the Role assignment section, in the Role drop-down list, select Storage Blob Data Owner to assign that role to the system-assigned managed identity of the storage task. In the Filter objects section, specify a filter if you want the task to run on specific objects only, or leave it to cover the whole storage account. In the Trigger details section, choose when the task runs and then select the container where you'd like to store the execution reports. Select Add. In the Tags tab, select Next. In the Review + create tab, select Review + create. When the task is deployed, the "Your deployment is complete" page appears. Select Go to resource to open the Overview page of the storage task.

Enable the task assignment:
Storage task assignments are disabled by default. Enable assignments from the Assignments page: select Assignments, select the assignment, and then select Enable. The task assignment is queued and will run at the specified time.

Monitoring the runs:
After the task completes running, you can view the results of the run. With the Assignments page still open, select View task runs. Select the View report link to download a report. You can also view these comma-separated reports in the container that you specified when you configured the assignment.
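For individual blobs, the same rehydration can also be done outside a storage task, and there the priority can be chosen. A hypothetical Azure CLI sketch, where the account, container, and blob names are placeholders:

# Sketch: rehydrate one archived blob to the Cool tier with a chosen priority
az storage blob set-tier --account-name <account> --container-name <container> \
  --name <archived-blob-name> --tier Cool --rehydrate-priority High --auth-mode login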
Reference links:
About Azure Storage Actions - Azure Storage Actions | Microsoft Learn
Storage task best practices - Azure Storage Actions | Microsoft Learn
Known issues and limitations with storage tasks - Azure Storage Actions | Microsoft Learn

Lifecycle Management of Blobs (Deletion) using Automation Tasks

Background: We often encounter scenarios where we need to delete blobs that have been idle in a storage account for an extended period. For a small number of blobs, deletion can be handled easily using the Azure portal, Storage Explorer, or inline scripts such as PowerShell or Azure CLI (a quick script sketch is included after the walkthrough below). However, in most cases we deal with a large volume of blobs, making manual deletion impractical. In such situations, it is essential to leverage automation tools to streamline the deletion process. One effective option is Automation Tasks, which can help schedule and manage blob deletions efficiently.

Note: Behind the scenes, an automation task is actually a logic app resource that runs a workflow, so the Consumption pricing model of Logic Apps applies to automation tasks.

Scenarios where Automation Tasks are helpful:
You need to automate deletion of blobs that are older than a specific time, in days, weeks, or months.
You don't want to put in much manual effort and prefer simple UI-based steps to achieve your goal.
You have system containers and want to act on them. LCM (Lifecycle Management) can also be leveraged to automate deletion of older blobs; however, LCM cannot be used to delete blobs from system containers.
You have to work on page blobs.

Setting up Automation Tasks:
Let's walk through how to achieve our goal. Navigate to the desired storage account, scroll down to the "Automation" section, select the "Tasks" blade, and then click on "Add Task" from the top or bottom panel (highlighted in the image). On the next page click "Select" (highlighted in the image). The new page which opens should look as below; there isn't anything we are doing here, so let's just click "Next: Configure" (highlighted in the image) and move to the next screen. The new page needs to be filled in as per your requirement; I have added a sample, and you can also use it on your own containers ('sample' is a folder inside the container '$web'). The "Expiration Age" field means that blobs older than this number of days will be deleted. In the above screenshot, blobs older than 180 days would be deleted. Similarly, we can configure values in weeks or months as well. Once we are through with these steps, proceed with creation of the task. Once the task is created it looks as below. You can click on "View Run" to see the run history. In case you want to modify the task, click on your task's name; for example, in the above screenshot I can modify it by clicking the "mytask" link and re-configuring the task.

Now this isn't sufficient. We need to update some of the steps of the logic app that was created behind the scenes, so we edit a few steps and save them before re-running the app.

a) Go to the logic app and navigate to the "Logic App Designer" blade.
b) Click on the "+" sign as shown below and "Add an Action".
c) Once the new page opens, search for "List Blobs (V2)" and select it.
d) Choose "Enter custom value" and enter your storage account name.
e) The values would look as shown below.
f) Now let's navigate to the "For Each" condition.
g) Delete the "Delete blob" action too and replace it with "Delete blob (V2)".
h) The "Delete Blob (V2)" looks as below.
i) With all steps ready, let's save the logic app and click on "Run". You should observe the run passing successfully.
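As the Background section notes, a small one-off cleanup can also be done with an inline script instead of a task. A hypothetical Azure CLI sketch against the $web container; the cutoff date and account name are placeholders, and the --dryrun flag lets you preview what would be removed before deleting anything:

# Sketch: preview, then delete, blobs in $web that have not been modified since the cutoff date
az storage blob delete-batch --account-name <account> --source '$web' \
  --if-unmodified-since 2025-01-01T00:00:00Z --auth-mode login --dryrun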
Impact due to Firewall:
The above steps work when your storage account is configured for public access. However, when the firewall is enabled, you need to provide the necessary permissions; otherwise you will encounter 403 "Authorization Failure" errors. There will be no issue creating the task, but you will see failures when you check the runs. Example:

To overcome this limitation, navigate to your logic app, generate a managed identity for the app, and give that identity the "Storage Blob Data Contributor" role.

Step 1. Enable Managed Identity: In the Azure portal, go to your Logic App resource. Under Settings, select Identity. In the Identity pane, under System assigned, select On and Save. This step registers the system-assigned identity with Microsoft Entra ID, represented by an object ID.

Step 2. Assign the Necessary Role: Open the Azure Storage account in the Azure portal. Select Access control (IAM) > Add > Add role assignment. Assign a role like Storage Blob Data Contributor, which includes write access for blobs in an Azure Storage container, to the managed identity. Under Assign access to, select Managed identity > Add members, and choose your Logic App's identity. Save and refresh, and you will see the new role configured on your storage account.

Remember that if the storage account and the logic app are in different regions, you should add another step in the firewall of the storage account: whitelist the logic app instance in the "Resource instances" list as shown below.

Conclusion: There are multiple ways to act on blobs, and the one you choose depends on your requirements, feasibility, and other factors such as familiarity with the feature and pricing. However, if you want to act upon system containers like $logs or $web, Automation Tasks are one of the most helpful features you can use to achieve your goal.

Note: At the time of writing this blog, this feature is still in preview, so check whether there are any limitations that might impact you before implementing it in your production environment.

References:
Create automation tasks to manage and monitor Azure resources - Azure Logic Apps | Microsoft Learn
Optimize costs by automatically managing the data lifecycle - Azure Blob Storage | Microsoft Learn
Using Azure Monitor Workbook to calculate Azure Storage Used Capacity for all storage accounts

In this blog, we will explore how to use an Azure Monitor workbook to collect and analyze metrics for all or selected storage accounts within a given subscription. We will walk through the steps to set up the workbook, configure metrics, and visualize storage account data to gain valuable insights into usage.

Introduction
For an individual blob storage account, we can calculate the used capacity, transaction count, or blob count by using PowerShell, the metrics available in the portal, or blob inventory reports. However, if we need to perform the same activity on all the storage accounts under a given subscription and create a consolidated report, it becomes a huge task. For such cases, blob inventory reports are not of much help, since they work at the individual storage account level, and a PowerShell script could time out. In such scenarios, an Azure Monitor workbook is the solution: it collects monitoring data for all the storage accounts under a given subscription and presents a consolidated report. Azure Monitor workbooks are a powerful tool that allows you to collect, analyze, and act on telemetry data from your Azure resources, in this case storage accounts. For more information on Azure Monitor workbooks, see: Azure Workbooks overview - Azure Monitor | Microsoft Learn

Prerequisites
1. An Azure Storage account under an active subscription.
2. An Azure Monitor workspace, or you can use the Monitor resource directly.

Now, let's set up the Azure Monitor workbook to collect the storage account telemetry.

1. Accessing an Azure Monitor workbook
We can access it from Monitor -> Workbooks or from Log Analytics -> Workbooks. You can refer to the screenshot below. In the Azure portal, select Monitor > Workbooks from the menu on the left, or, in a Log Analytics workspace page, select Workbooks at the top of the page.

2. Configure data collection
Once the workbook is selected, you need to configure data collection. You can select either one storage account or multiple storage accounts from a given subscription. Under Workbook, click on "+New", then click on "+Add" and select the "+Add metric" option. Once the page loads, select the Resource Type as Storage account. Set the metric granularity based on your requirement. Now we need to select the metrics that we want to capture; for this blog, let us look at the used capacity of the storage account. You can use the metric option as below. Once you save the metric, you can run it and you will be able to see a graph as below.

3. Export the report as an Excel sheet
We can export the metrics shown as an Excel sheet as well. You can refer to the screenshot below (a CLI cross-check of the same numbers is sketched after this step).
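If you want to cross-check the workbook numbers from the command line, the same UsedCapacity metric can be pulled per account. A hypothetical Azure CLI sketch that loops over every storage account in the current subscription; the JMESPath query simply picks the latest hourly average:

# Sketch: latest UsedCapacity (in bytes) for each storage account in the subscription
for id in $(az storage account list --query "[].id" -o tsv); do
  echo "==> $id"
  az monitor metrics list --resource "$id" --metric UsedCapacity \
    --interval PT1H --aggregation Average \
    --query "value[0].timeseries[0].data[-1].average" -o tsv
done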
Benefits of Using Azure Monitor Workbook for Azure Blob Storage

1. Enhanced Visibility
Azure Monitor workbooks provide enhanced visibility into the metrics of your Azure Blob Storage accounts. By collecting and analyzing metric data, you can gain insights into key performance indicators such as storage capacity, transaction rates, and latency. This visibility allows you to proactively manage your storage account resources across the subscription.

2. Proactive Monitoring
By setting up alerts and notifications, Azure Monitor workbooks enable proactive monitoring of your Azure Blob Storage accounts. You can define thresholds for critical metrics and receive notifications when these thresholds are exceeded. This proactive approach helps you quickly detect and address potential issues before they impact your applications and services.

3. Centralized Monitoring
Azure Monitor workbooks allow you to centralize the monitoring of all your Azure resources, including Azure Blob Storage accounts. This centralized approach simplifies the management of your monitoring infrastructure and provides a single pane of glass for viewing and analyzing metric data.

Conclusion
By following the steps outlined in this guide, you can set up data collection, alerts, and analysis to gain valuable insights into your storage environment. We hope this blog provided you with information on how to gain more insight into your storage account environment for a given subscription. Happy Learning!
Recovering Large Number of Soft-deleted Blobs Using Storage Actions

Recovering soft-deleted blobs in Azure can often be a daunting task, especially when dealing with a large number of deletions. Traditional methods such as using the Azure Portal, Azure Storage Explorer, or Azure Storage Browser involve repetitive and time-consuming steps that are not ideal for mass recovery operations. Fortunately, Azure Storage Actions provide a more streamlined and efficient approach to handle these scenarios.

Enable SFTP on Azure File Share using ARM Template and upload files using WinScp
SFTP is a very widely used protocol which many organizations use today for transferring files within their organization or across organizations. Creating a VM based SFTP is costly and high-maintenance. ACI service is very inexpensive and requires very little maintenance, while data is stored in Azure Files which is a fully managed SMB service in cloud. This template demonstrates an creating a SFTP server using Azure Container Instances (ACI). The template generates two resources: storage account is the storage account used for persisting data, and contains the Azure Files share sftp-group is a container group with a mounted Azure File Share. The Azure File Share will provide persistent storage after the container is terminated. ARM Template for creation of SFTP with New Azure File Share and a new Azure Storage account Resources.json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "metadata": { "_generator": { "name": "bicep", "version": "0.4.63.48766", "templateHash": "17013458610905703770" } }, "parameters": { "storageAccountType": { "type": "string", "defaultValue": "Standard_LRS", "metadata": { "description": "Storage account type" }, "allowedValues": [ "Standard_LRS", "Standard_ZRS", "Standard_GRS" ] }, "storageAccountPrefix": { "type": "string", "defaultValue": "sftpstg", "metadata": { "description": "Prefix for new storage account" } }, "fileShareName": { "type": "string", "defaultValue": "sftpfileshare", "metadata": { "description": "Name of file share to be created" } }, "sftpUser": { "type": "string", "defaultValue": "sftp", "metadata": { "description": "Username to use for SFTP access" } }, "sftpPassword": { "type": "securestring", "metadata": { "description": "Password to use for SFTP access" } }, "location": { "type": "string", "defaultValue": "[resourceGroup().location]", "metadata": { "description": "Primary location for resources" } }, "containerGroupDNSLabel": { "type": "string", "defaultValue": "[uniqueString(resourceGroup().id, deployment().name)]", "metadata": { "description": "DNS label for container group" } } }, "functions": [], "variables": { "sftpContainerName": "sftp", "sftpContainerGroupName": "sftp-group", "sftpContainerImage": "atmoz/sftp:debian", "sftpEnvVariable": "[format('{0}:{1}:1001', parameters('sftpUser'), parameters('sftpPassword'))]", "storageAccountName": "[take(toLower(format('{0}{1}', parameters('storageAccountPrefix'), uniqueString(resourceGroup().id))), 24)]" }, "resources": [ { "type": "Microsoft.Storage/storageAccounts", "apiVersion": "2019-06-01", "name": "[variables('storageAccountName')]", "location": "[parameters('location')]", "kind": "StorageV2", "sku": { "name": "[parameters('storageAccountType')]" } }, { "type": "Microsoft.Storage/storageAccounts/fileServices/shares", "apiVersion": "2019-06-01", "name": "[toLower(format('{0}/default/{1}', variables('storageAccountName'), parameters('fileShareName')))]", "dependsOn": [ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]" ] }, { "type": "Microsoft.ContainerInstance/containerGroups", "apiVersion": "2019-12-01", "name": "[variables('sftpContainerGroupName')]", "location": "[parameters('location')]", "properties": { "containers": [ { "name": "[variables('sftpContainerName')]", "properties": { "image": "[variables('sftpContainerImage')]", "environmentVariables": [ { "name": "SFTP_USERS", "secureValue": "[variables('sftpEnvVariable')]" } ], "resources": { "requests": { "cpu": 1, 
"memoryInGB": 1 } }, "ports": [ { "port": 22, "protocol": "TCP" } ], "volumeMounts": [ { "mountPath": "[format('/home/{0}/upload', parameters('sftpUser'))]", "name": "sftpvolume", "readOnly": false } ] } } ], "osType": "Linux", "ipAddress": { "type": "Public", "ports": [ { "port": 22, "protocol": "TCP" } ], "dnsNameLabel": "[parameters('containerGroupDNSLabel')]" }, "restartPolicy": "OnFailure", "volumes": [ { "name": "sftpvolume", "azureFile": { "readOnly": false, "shareName": "[parameters('fileShareName')]", "storageAccountName": "[variables('storageAccountName')]", "storageAccountKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value]" } } ] }, "dependsOn": [ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]" ] } ], "outputs": { "containerDNSLabel": { "type": "string", "value": "[format('{0}.{1}.azurecontainer.io', reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName'))).ipAddress.dnsNameLabel, reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName')), '2019-12-01', 'full').location)]" } } } Parameters.json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", "contentVersion": "1.0.0.0", "parameters": { "storageAccountType": { "value": "Standard_LRS" }, "storageAccountPrefix": { "value": "sftpstg" }, "fileShareName": { "value": "sftpfileshare" }, "sftpUser": { "value": "sftp" }, "sftpPassword": { "value": null }, "location": { "value": "[resourceGroup().location]" }, "containerGroupDNSLabel": { "value": "[uniqueString(resourceGroup().id, deployment().name)]" } } } ARM Template to Enable SFTP for an Existing Azure File Share in Azure Storage account Resources.json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "metadata": { "_generator": { "name": "bicep", "version": "0.4.63.48766", "templateHash": "16190402726175806996" } }, "parameters": { "existingStorageAccountResourceGroupName": { "type": "string", "metadata": { "description": "Resource group for existing storage account" } }, "existingStorageAccountName": { "type": "string", "metadata": { "description": "Name of existing storage account" } }, "existingFileShareName": { "type": "string", "metadata": { "description": "Name of existing file share to be mounted" } }, "sftpUser": { "type": "string", "defaultValue": "sftp", "metadata": { "description": "Username to use for SFTP access" } }, "sftpPassword": { "type": "securestring", "metadata": { "description": "Password to use for SFTP access" } }, "location": { "type": "string", "defaultValue": "[resourceGroup().location]", "metadata": { "description": "Primary location for resources" } }, "containerGroupDNSLabel": { "type": "string", "defaultValue": "[uniqueString(resourceGroup().id, deployment().name)]", "metadata": { "description": "DNS label for container group" } } }, "functions": [], "variables": { "sftpContainerName": "sftp", "sftpContainerGroupName": "sftp-group", "sftpContainerImage": "atmoz/sftp:debian", "sftpEnvVariable": "[format('{0}:{1}:1001', parameters('sftpUser'), parameters('sftpPassword'))]" }, "resources": [ { "type": "Microsoft.ContainerInstance/containerGroups", "apiVersion": "2019-12-01", "name": "[variables('sftpContainerGroupName')]", "location": "[parameters('location')]", "properties": { "containers": [ { "name": 
"[variables('sftpContainerName')]", "properties": { "image": "[variables('sftpContainerImage')]", "environmentVariables": [ { "name": "SFTP_USERS", "secureValue": "[variables('sftpEnvVariable')]" } ], "resources": { "requests": { "cpu": 1, "memoryInGB": 1 } }, "ports": [ { "port": 22, "protocol": "TCP" } ], "volumeMounts": [ { "mountPath": "[format('/home/{0}/upload', parameters('sftpUser'))]", "name": "sftpvolume", "readOnly": false } ] } } ], "osType": "Linux", "ipAddress": { "type": "Public", "ports": [ { "port": 22, "protocol": "TCP" } ], "dnsNameLabel": "[parameters('containerGroupDNSLabel')]" }, "restartPolicy": "OnFailure", "volumes": [ { "name": "sftpvolume", "azureFile": { "readOnly": false, "shareName": "[parameters('existingFileShareName')]", "storageAccountName": "[parameters('existingStorageAccountName')]", "storageAccountKey": "[listKeys(extensionResourceId(format('/subscriptions/{0}/resourceGroups/{1}', subscription().subscriptionId, parameters('existingStorageAccountResourceGroupName')), 'Microsoft.Storage/storageAccounts', parameters('existingStorageAccountName')), '2019-06-01').keys[0].value]" } } ] } } ], "outputs": { "containerDNSLabel": { "type": "string", "value": "[format('{0}.{1}.azurecontainer.io', reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName'))).ipAddress.dnsNameLabel, reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName')), '2019-12-01', 'full').location)]" } } } Parameters.json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", "contentVersion": "1.0.0.0", "parameters": { "existingStorageAccountResourceGroupName": { "value": null }, "existingStorageAccountName": { "value": null }, "existingFileShareName": { "value": null }, "sftpUser": { "value": "sftp" }, "sftpPassword": { "value": null }, "location": { "value": "[resourceGroup().location]" }, "containerGroupDNSLabel": { "value": "[uniqueString(resourceGroup().id, deployment().name)]" } } } Deploy the ARM Templates using PowerShell or Azure CLI or Custom Template deployment using Azure Portal. Choose the subscription you want to create the sftp service in Create a new Resource Group It will automatically create a storage account Give a File Share Name Provide a SFTP user name Provide a SFTP password Wait till the deployment is done successfully Click on the container sftp-group Copy the FQDN from the container group Download WinScp from WinSCP :: Official Site :: Download Provide Hostname : FQDN for ACI; Port Number: 22; User Name and Password Click on Login 13. Drag and drop a file from the left side to the Right side. 14. Now, go to the Storage Account and Navigate to File share. The file appears on the file share.11KViews2likes5CommentsHow to configure directory level permission for SFTP local user
SFTP is a feature supported on Azure Blob Storage with a hierarchical namespace (ADLS Gen2 storage account). As documented, the permission system used by the SFTP feature is different from the normal permission system in an Azure Storage account: it uses a form of identity management called local users. Normally, the permissions you can set up on local users while creating them are at the container level. But in real use cases, it is common to need multiple local users where each local user only has permission on one specific directory. In this scenario, using ACLs (access control lists) for local users is a great solution. In this blog, we'll set up an environment using ACLs for local users and see how it meets this aim.

Attention! As mentioned in the Caution part of the document, ACLs for local users are supported but still in preview. Please do not use this for your production environment.

Preparation
Before configuring local users and ACLs, the following things are already prepared:
One ADLS Gen2 storage account (in this example, it's called zhangjerryadlsgen2).
A container (testsftp) with two directories (dir1 and dir2).
One file uploaded into each directory (test1.txt and test2.txt).
The file system in this blog looks like:

Aim
The aim is to have user1, which can only list files saved in dir1, and user2, which can only list files saved in dir2. Both of them should be unable to do any other operations in the matching directory (dir1 for user1 and dir2 for user2) and should be unable to do any operations in the root directory or in the other directory.

Configuring local users
From the Azure portal, it's easy to enable the SFTP feature and create local users. Here, besides user1 and user2, an additional user is also necessary; it will be used as the administrator to assign ACLs for user1 and user2. In this blog, it's called admin. While creating the admin, its landing directory should be the root directory of the container and all permissions should be granted. While creating user1 and user2, since their permissions will be controlled by ACLs, the containers and permissions should be left empty and the "Allow ACL authorization" option should be checked. The landing directory should be set to the directory the user should have permission on later (in this blog, user1 is on dir1 and user2 is on dir2).
User1:
User2:
After the local users are created, one more step is needed before configuring ACLs: note down the user IDs of user1 and user2. Clicking the created local user opens a page for editing the local user, which includes the user ID. In this blog, the user ID of user1 is 1002 and the user ID of user2 is 1003.

Configuring ACLs
Before starting to configure ACLs, it is necessary to clarify which permissions to assign. As explained in this document, ACLs contain three different permissions: Read (R), Write (W), and Execute (X). And in the "Common scenarios related to ACL permissions" part of the same document, there is a table which contains most operations and their corresponding required permissions. Since the aim of this blog is to allow user1 only to list dir1, according to the table, the correct permission for user1 is X on the root directory, plus R and X on dir1 (for user2, it's X on the root directory, plus R and X on dir2). After clarifying the needed permissions, the next step is to assign ACLs.
The first step is to connect to the Storage Account using SFTP as admin. (In this blog, a PowerShell session with the OpenSSH client is used, but that's not the only way; you can use any other SFTP client to connect to the Storage Account.)

Since ACLs for local users cannot be assigned to one specific named user, and the owner of the root directory is a built-in user controlled by Azure, the easiest way here is to give the X permission to all "other" users. (For the concept of other users, please refer to this document.)

The next step is to assign the R and X permissions. For the same reason, it is not possible to simply give R and X to all other users again, because then user1 would also have R and X permissions on dir2, which does not match the aim. The best way here is to change the owner of each directory: change the owner of dir1 to user1 and the owner of dir2 to user2. (This way, user1 will not have permission to touch dir2.)

After the above configuration, when connecting to the Storage Account by SFTP using user1 and user2, only the list operation under the corresponding directory is allowed.
User1:
User2: (The following test result proves that only the list operation under /dir2 is allowed. All other operations return a permission denied or not found error.)

About the landing directory
What happens if all other configurations are correct but the landing directory is configured as the root directory for user1 or user2? The answer is quite simple: the configuration will still work, but it will impact the user experience. To show the result of that case, one more local user called user3 with user ID 1005 is created, but its landing directory is configured like admin's, which is the root directory. The ACL permissions assigned to it are the same as for user2 (the owner of dir2 is changed to user3). When connecting to the Storage Account by SFTP using user3, it lands in the root directory. But per the ACL configuration, it only has permission to list files in dir2, so operations in the root directory and dir1 are expected to fail. To apply further operations, the user needs to prefix paths with dir2/ in the command or run cd dir2 first.
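Putting the pieces together, the admin session for the user1/user2 setup above might look roughly like the following. This is only a sketch: it assumes the Azure SFTP endpoint accepts chmod and chown from your client, as the post's admin-session steps imply, the account and container names come from the example above, and the octal mode 711 is just one way to express "execute for others".

# connect as the admin local user (user name format: <storage-account>.<local-user>)
sftp zhangjerryadlsgen2.admin@zhangjerryadlsgen2.blob.core.windows.net

# inside the sftp session:
chmod 711 /        # root directory: owner rwx, group and others get execute only
chown 1002 /dir1   # make user1 (ID 1002) the owner of dir1
chown 1003 /dir2   # make user2 (ID 1003) the owner of dir2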
How to secure the configuration file while using blobfuse2 for security compliance

Overview: Blobfuse2 is used to mount an Azure Storage account as a file system on a Linux machine. To establish the connection with the storage account via blobfuse2 and to authenticate the requests against it, we make use of a configuration file. The configuration file contains the storage account details, the container to be mounted, and the authentication mode to be used. The configuration yaml file includes parameters for blobfuse2 settings. In general, the details saved in the configuration file are in plain text. Hence, if any user accesses the configuration file, they would be able to read sensitive information related to the storage account, such as the storage account access keys or SAS token.

Let's say that, for security reasons, you want to safeguard the configuration file from bad actors and prevent the leak of your storage account's sensitive details. In such a scenario, you can make use of the blobfuse2 secure command. Using blobfuse2 secure, we can encrypt or decrypt the configuration file and get or set details in the encrypted configuration file. We will be securing the configuration file using a passphrase; do save the passphrase, as it is needed for the decrypt, get, and set commands.

Note: At present, configuration file encryption is available in blobfuse2 only.

Let us discuss the blobfuse2 secure command in detail and how we can mount blobfuse2 using the encrypted config file. For a holistic view of the blobfuse2 secure command, in this blog we have initially mounted blobfuse2 using a plain-text configuration file. The blobfuse2 mount was successful, and to show the contents of the configuration file we performed the "cat" command. Please refer to the screenshot below for the same.
Command used is: sudo blobfuse2 mount ~/<mountpath_name> --config-file=./config.yaml

Create an encrypted configuration yaml file:
Let us secure the configuration file using the blobfuse2 secure encrypt command. Performing the "dir" command, we can see the configuration file before and after encryption. Please refer to the screenshot below for further details.
Command used is: blobfuse2 secure encrypt --config-file=./config.yaml --passphrase={passphrasesample} --output-file=./encryptedconfig.yaml

Now, let us perform the blobfuse2 mount command using the encrypted configuration file that we created in the above step. Refer to the screenshot below for further details.
Command: sudo blobfuse2 mount ~/<mountpath_name> --config-file=./encryptedconfig.yaml --passphrase={passphrasesample} --secure-config

Note: Once the configuration file is encrypted, the original configuration file is deleted. Hence, if there is any blobfuse2 mount that was done prior to the encryption of the configuration file, ensure that the blobfuse2 mount is using the correct configuration file.
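Before looking at get and set, it helps to picture what the protected settings look like. The post does not show its config.yaml, so the following is only a hypothetical minimal example of the kind of plain-text content that blobfuse2 secure encrypt protects; the account name, key, container, and cache path are placeholders, and file_cache.path is the only key the post itself references.

# Sketch of a minimal plain-text config.yaml (placeholder values)
cat > ./config.yaml <<'EOF'
components:
  - libfuse
  - file_cache
  - attr_cache
  - azstorage
file_cache:
  path: /tmp/blobfuse2cache
azstorage:
  type: block
  account-name: <storage-account-name>
  account-key: <storage-account-key>
  mode: key
  container: <container-name>
EOF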
Fetch a parameter from the encrypted configuration file:
Let's say that you want to get a particular parameter from the encrypted config file. If we view the file with the "cat" command, the encrypted data is not readable; hence, we need to use the blobfuse2 secure get command. Perform the blobfuse2 secure get command to get the details from the encrypted config file. Please refer to the screenshot below for further details.
Command used is: blobfuse2 secure get --config-file=./encryptedconfig.yaml --passphrase={passphrasesample} --key=file_cache.path

Set a parameter in the encrypted configuration file:
If you want to set a new parameter in the encrypted configuration file, you can use the blobfuse2 secure set command. Please refer to the screenshot below for further details.
Command used is: blobfuse2 secure set --config-file=./encryptedconfig.yaml --passphrase={passphrasesample} --key=logging.log_level --value=log_debug

Decrypt the configuration yaml file:
Now that we know how to encrypt the configuration file, let's understand how to use the blobfuse2 secure command to decrypt it. Please refer to the screenshot below for further details.
Command used is: blobfuse2 secure decrypt --config-file=./encryptedconfig.yaml --passphrase={passphrasesample} --output-file=./decryptedconfig.yaml
We can see the contents of the decrypted configuration file using the "cat" command. In this way, we can secure the config file used for blobfuse2 and meet our security requirement.

References:
If you face any issues with blobfuse2 troubleshooting, you can refer to the blog here: How to troubleshoot blobfuse2 issues | Microsoft Community Hub
For blobfuse2 secure commands, you can refer to the link here: How to use the 'blobfuse2 secure' command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file - Azure Storage | Microsoft Learn

Hope this article turns out helpful! Happy Learning!