azure files
Simplifying file share management and control for Azure Files
Azure Files makes it easy to run your file shares in the cloud without the overhead of on-premises file servers or NAS devices. Until now, managing file shares in Azure has also meant managing storage accounts - an extra layer of management that brings capacity planning, shared settings, and scaling challenges along with it. To simplify this experience, we're excited to announce the preview of a new file share-centric management model for Azure Files. This shift means you can focus on just the part you care about - creating and using file shares with your applications - without the overhead of storage account management.

With the new management model, you can now:

- Deploy file shares as top-level resources using simple automation.
- Configure granular, secure access per share.
- Monitor and scale per share with added flexibility.
- Take advantage of simplified, transparent pricing with provisioned v2.

Let's look at how this works.

A new way to manage file shares

With the Microsoft.FileShares management model, file shares are now top-level Azure resources, just like virtual machines, disks, or virtual networks. This allows file shares to integrate seamlessly with Azure's ecosystem of tools, including templates, policies, tags, and cost management.

With file shares as top-level Azure resources, you no longer need to puzzle over which storage account settings actually apply. Each file share comes with only the settings that matter, so you can manage it directly without extra layers of complexity. The result is a simpler, more intuitive experience where you stay focused on your workload, not the infrastructure underneath.

Per-share settings unlock a new level of granular control: each file share can now have its own networking and security rules, tailored to the workload it supports. The result is isolation and flexibility - security without compromise.

Provisioning and billing are also simplified in this model, because you no longer need to capacity-plan file shares against the storage, IOPS, and throughput limits of the storage accounts hosting them. Each file share scales independently up to Azure Files' limits, so growth in one file share doesn't impact any others. And because Azure billing always works on a per-resource basis, every file share stands on its own as a separate billable item. That makes costs easy to track, allocate, or charge back to the right project, department, or customer. Combined with the provisioned v2 billing model for Azure Files, the result is transparent pricing: you provision exactly what you need for each share and can attribute the cost with precision.

In this first release, you can create and manage NFS file shares on SSD, with support for SMB file shares planned for the future.

Built to scale

Azure Files supports a diverse customer base, ranging from small businesses managing a few shares to large enterprises deploying thousands. It accommodates both traditional file share workloads with long-lived, persistent data and dynamic container workloads that provision and decommission file shares frequently. No matter the scenario, our goal is the same: Azure Files should adapt to your workload, not the other way around. These principles are baked directly into the new model, ensuring that you don't need to create additional subscriptions to work around management limitations, and that there is sufficient scalability and performance to meet demanding workloads. In preview, you can create up to 1,000 file shares per subscription per region.
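Because each share is an ordinary top-level ARM resource, creating shares at this scale is scriptable with the same tooling you use for other resources. Below is a hedged sketch using the generic az resource create command; the resource type path and the property names in the JSON payload are assumptions for illustration, so check the Microsoft.FileShares documentation for the actual schema.

```bash
# Hedged sketch: create an NFS file share as a top-level resource (preview).
# The property names in the JSON payload are illustrative assumptions, not a
# confirmed schema.

RESOURCE_GROUP="rg-fileshares-demo"   # placeholder names
SHARE_NAME="share01"
LOCATION="westus2"

az group create --name "$RESOURCE_GROUP" --location "$LOCATION"

az resource create \
  --resource-group "$RESOURCE_GROUP" \
  --name "$SHARE_NAME" \
  --resource-type "Microsoft.FileShares/fileShares" \
  --location "$LOCATION" \
  --properties '{
    "protocol": "NFS",
    "mediaTier": "SSD",
    "provisionedStorageGiB": 1024
  }'
# For a preview resource provider you may also need to pass an explicit
# --api-version; the portal's "Export template" view shows the current one.
```

The same pattern carries over to Bicep or ARM templates, Azure Policy, tags, and cost management, since the share is addressable like any other resource.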
Raw resource counts don't mean much if the management service can't keep pace. Just as important, the new model significantly raises the management service limits compared to the storage account model. For most customers, this makes management throttling much less likely, even at scale (see Azure Files scale targets for information on both Microsoft.FileShares and Microsoft.Storage request limits). As we work toward general availability, we plan to further increase both resource and request limits to help customers operate at scale without running into throttling or needing to shard file shares across multiple subscriptions.

Speed matters just as much as scale, and in preview, provisioning a file share has typically been faster than provisioning through a storage account. In our in-house testing, file shares deployed using the new model provisioned about 2x faster than classic file shares, and we intend to keep improving those numbers as we work toward general availability.

Get started today

You can start creating file share resources today in preview, which is open to everyone. Just go to the Azure portal, search for "file shares", and click "+ Create".

A few important notes about what's not yet available in preview:

- The new management model is only supported on NFS shares for now, not SMB shares (on either SSD or HDD).
- NFS file shares using customer-managed keys (CMK), file share soft delete, and AKS integration via the CSI driver are not yet available but are planned for general availability.
- The initial preview is available in a limited set of regions; we will expand this list as we work toward general availability. See regional availability for a complete list.

To learn more, please see:

- Planning for an Azure Files deployment
- How to create a file share (Microsoft.FileShares)
- Azure Files scale targets

Lower costs and boost flexibility with Azure Files provisioned v2
For enterprise IT professionals and startup developers alike, cost efficiency for file storage is top of mind. Whether you're running mission-critical databases, production-scale applications like SAP, or cloud-native applications using file storage on AKS, your storage infrastructure should adapt to your workload - not the other way around. To bring you this flexibility, we introduced the provisioned v2 model for the HDD (standard) media tier in 2024. Today, we are excited to announce that we're extending the provisioned v2 model to the SSD (premium) media tier.

Provisioned v2 is designed to give you more control, better performance alignment, and significant cost savings across a wide range of customer scenarios - by decoupling performance from capacity, lowering the minimum share size to 32 GiB, and increasing the maximum share size to 256 TiB. With provisioned v2 you can dynamically scale your file share capacity or performance up or down as needed, without any downtime, based on your workload pattern.

Right-sized performance for every workload

Whether you are running general-purpose file shares, DevOps tooling, AI workflows, or databases, you can benefit from the provisioned v2 model. Here are some examples:

- Database workloads such as SQL Server, Oracle®, MongoDB, and enterprise platforms like SAP and EPIC require high IOPS and throughput but minimal storage. With provisioned v2, you can secure the performance you need without excess storage, resulting in substantial cost savings.
- Containerized workloads, like Azure Kubernetes Service (AKS), often use very small file shares to achieve shared storage between volumes. With provisioned v2, we've decreased the minimum share size from 100 GiB to 32 GiB and enabled you to provision just the minimum IOPS and throughput that's included for free. This means the minimum file share cost in Azure Files drops from $16/month to just $3.20/month - an 80% cost savings!
- Workloads that require fast fetch of infrequently used data, like media files, where the workload needs IOPS/throughput only occasionally but requires the low latency on retrieval that you can only get on SSD media. With provisioned v2, we've increased the maximum share size from 100 TiB to 256 TiB, enabling larger-than-ever file shares on Azure Files. And the flexible provisioning afforded by provisioned v2 lets you dramatically decrease bundled IOPS/throughput to match the archive's requirements.

Let's take a deeper look at these savings with some sample workloads:

| Workload scenario | Provisioned v1 | Provisioned v2 | Cost savings |
| --- | --- | --- | --- |
| Workload using defaults for IOPS and throughput (10 TiB storage, ~13K IOPS, ~1.1 GiB/sec throughput) | $1,638.40/month | $1,341.09/month | 18% |
| Relational database workload (4 TiB storage, 20K IOPS, 1.5 GiB/sec throughput) | $2,720/month | $925.42/month | 66% |
| Hot archive for multimedia (100 TiB storage, 15K IOPS, 2 GiB/sec throughput) | $16,384.00/month | $10,641.93/month | 35% |

To learn more about how to compare your costs between the provisioned v2 and provisioned v1 models, see understanding the costs of the provisioned v2 model. All pricing comparisons use West US 2 prices for locally redundant storage (LRS).

Top reasons to give provisioned v2 a try

If you haven't looked at Azure Files before, now is the best time to get started. Here's why you should consider making the move to Azure Files with provisioned v2 SSD:

- Affordable, with low entry costs starting at just $3.20/month.
- Flexible and customizable to fit a wide range of requirements.
- Easy-to-understand, predictable pricing.
- Support for high IOPS and low-latency performance, ideal for performance-critical applications that require sustained throughput and rapid data access.
- Support for unpredictable or burst-heavy usage patterns, ensuring smooth performance under variable demand.
- Scalable sizing options, with SSD file shares ranging from 32 GiB to 256 TiB - well-suited for everything from small-footprint workloads to very large shares.

How it works

With the provisioned v2 model, IOPS and throughput are recommended to you based on the amount of provisioned storage you select, but the recommendation is completely overridable. If your workload needs more IOPS or throughput than the default recommendation, you can provision more without having to provision a lot of extra storage. And if your workload needs less than the default recommendation, you can decrease the provisioned IOPS and throughput all the way down to the minimums for a file share. Best of all, you don't have to get this right at file share creation: if you don't know your performance requirements, or your workload's patterns change over time, you can dynamically scale your file share's provisioning up or down as needed, without any downtime. A provisioned v2 file share also gives you all the telemetry needed to monitor your workload's IOPS and throughput usage, enabling you to continuously tune the file share to your workload.

Getting started is easy

Provisioned v2 for SSD is available right now, in all public cloud regions (see provisioned v2 availability for details). Simply select "Azure Files" for primary service, "Premium" for performance, and "Provisioned v2" for billing when creating your storage account in the Azure portal. To learn more about how to get started, see:

- Azure Files pricing page
- Understanding the provisioned v2 model | Microsoft Learn
- How to create an Azure file share | Microsoft Learn

Secure Linux workloads using Azure Files with Encryption in Transit
Encryption in Transit (EiT) overview

As organizations increasingly move to cloud environments, safeguarding data both at rest and in transit is essential for protecting sensitive information from emerging threats and for maintaining regulatory compliance. Azure Files already offers encryption at rest using Microsoft-managed or customer-managed keys for NFS file shares. Today, we're excited to announce the general availability of Encryption in Transit (EiT) for NFS file shares.

By default, Azure encrypts data moving across regions. In addition, all clients accessing Azure Files NFS shares must be within the scope of a trusted virtual network (VNet) to ensure secure access to applications. However, data transferred between resources within a VNet remains unencrypted. Enabling EiT ensures that all reads and writes to NFS file shares within the VNet are encrypted, providing an additional layer of security. With EiT, enterprises running production-scale applications on Azure Files NFS shares can now meet their end-to-end compliance requirements.

Feedback from the NFS community and Azure customers emphasized the need for an encryption approach that is easy to deploy, portable, and scalable. TLS enables a streamlined deployment model for NFS with EiT while minimizing configuration complexity, maintaining protocol transparency, and avoiding operational overhead. The result is a more secure, performant, and standards-compliant solution that integrates seamlessly into existing NFS workflows. With EiT, customers can encrypt all NFS traffic using the latest and most secure version of TLS, TLS 1.3, achieving enterprise-grade security effortlessly.

TLS provides three core security guarantees:

- Confidentiality: Data is encrypted, preventing eavesdropping.
- Authentication: The client verifies the server via certificates during the handshake to establish trust.
- Integrity: TLS ensures that information arrives unchanged, adding protection against data corruption or bit flips in transit.

TLS encryption for Azure Files is delivered via stunnel, a trusted, open-source proxy designed to add TLS encryption to existing client-server communications without modifying the applications themselves. It has been widely used across industries for many years for its robust security and transparent in-transit encryption.

AZNFS Mount Helper for Seamless Setup

EiT client setup and mounting of NFS volumes may seem like a daunting task, but we have made it easier with the AZNFS mount helper tool.

- Simplicity and resiliency: AZNFS is a simple, open-source tool, maintained and supported by Microsoft, that automates stunnel setup and NFS volume mounting over a secure TLS tunnel. AZNFS's built-in watchdog and auto-reconnect logic protect TLS mounts, ensuring high availability during unexpected connectivity interruptions. Sample AZNFS mount commands, customized to your NFS volume, are available in the Azure portal (see Fig 1).

  Fig 1. Azure portal view to configure AZNFS for Azure clients using EiT

- Standardized and flexible: Mounting with AZNFS applies the Microsoft-recommended performance, security, and reliability mount options by default, while providing the flexibility to adjust these settings to fit your workload. For example, while TLS is the default, you can override it to use non-TLS connections for scenarios like testing or debugging.
- Broad Linux compatibility: AZNFS is available through Microsoft's package repository for major Linux distributions, including Ubuntu, Red Hat, SUSE, AlmaLinux, Oracle Linux, and more.
- Seamless upgrades: The AZNFS package updates automatically in the background without affecting active mount connections. You don't need maintenance windows or downtime to perform upgrades.

The illustration below shows how EiT transmits data securely between clients and NFS volumes over trusted networks.

Fig 2. EiT setup flow and secure data transfer for NFS shares

Enterprise Workloads and Platform Support

EiT is compatible with applications running on a wide range of platforms, including Linux VMs in Azure, on-premises Linux servers, VM scale sets, and Azure Batch, ensuring compatibility with major Linux distributions for cloud, hybrid, and on-premises deployments.

- Azure Kubernetes Service (AKS): The preview of NFS EiT in AKS will be available shortly. In the meantime, the upstream Azure Files CSI driver includes AZNFS integration, which can be manually configured to enable EiT for NFS volumes backing stateful container workloads.
- SAP: SAP systems are central to many business operations and handle sensitive data like financial information, customer details, and proprietary data. Securing this confidential data within the SAP environment, including its central services, is a critical concern. NFS volumes used in central services are single points of failure, making their security and availability crucial. This blog post on SAP deployments on Azure provides guidance on using EiT-enabled NFS volumes for SAP deployment scenarios to make them even more secure. SAP tested EiT for their SAP RISE deployments and shared positive feedback:

  "The NFS Encryption in Transit preview has been a key enabler for running RISE customers mission critical workloads on Azure Files, helping us meet high data in transit encryption requirements without compromising performance or reliability. It has been critical in supporting alignment with strict security architectures and control frameworks—especially for regulated industries like financial services and healthcare. We're excited to see this capability go GA and look forward to leveraging it at scale." - Ventsislav Ivanov, IT Architecture Chief Expert, SAP

- Compliance-centric verticals: As part of our preview, customers in industry verticals including financial services, insurance, and retail leveraged EiT to address their data confidentiality and compliance needs. One such customer, Standard Chartered, a major global financial institution, highlighted its benefits:

  "The NFS Encryption in Transit preview has been a key enabler for migrating one of our on-premises applications to Azure. It allowed us to easily run tests in our development and staging environments while maintaining strict compliance and security for our web application assets. Installation of the required aznfs package was seamless, and integration into our bootstrap script for virtual machine scale set automation went smoothly. Additionally, once we no longer needed to disable the HTTPS requirement on our storage account, no further changes were necessary to our internal Terraform modules—making the experience nearly plug-and-play. We're excited to see this capability reach general availability." - Mohd Najib, Azure Cloud Engineer, Standard Chartered

Regional availability and pricing

Encryption in Transit GA with TLS 1.3 is rolling out globally and is now available in most regions.
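To make the client-side setup described above concrete, here is a hedged sketch of installing AZNFS and mounting an NFS share over TLS on a Linux VM. The storage account, share, and mount path names are placeholders, and the Azure portal generates the exact commands for your share.

```bash
# Hedged sketch - storage account, share, and mount path are placeholders.
# Install the AZNFS mount helper (package name 'aznfs') from Microsoft's
# package repository; use yum/zypper on RHEL/SLES instead of apt.
sudo apt-get install -y aznfs

sudo mkdir -p /mnt/myshare

# Mount over TLS (the default for aznfs; stunnel is set up automatically).
sudo mount -t aznfs \
  mystorageacct.file.core.windows.net:/mystorageacct/myshare /mnt/myshare \
  -o vers=4,minorversion=1,sec=sys,noresvport

# The aznfswatchdog background service keeps the stunnel processes healthy.
findmnt /mnt/myshare
```

For a persistent mount, the same endpoint and options go into /etc/fstab with the aznfs file system type and the _netdev option, as shown in the SAP-focused walkthrough later on this page.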
EiT can be enabled on both new and existing storage accounts and Azure Files NFS shares. There is no additional cost for enabling EiT.

Next Steps to Secure Your Workloads

- Explore more: How to encrypt data in transit for NFS shares | Microsoft Learn
- Mandate security: Enable "Secure transfer required" on all your storage accounts with NFS volumes to mandate EiT for an additional layer of protection.
- Enforce at scale: Use Azure Policy to enforce EiT across your subscription.

Please reach out to the team at AzureFiles@microsoft.com with any questions and feedback.

Azure Files NFS Encryption In Transit for SAP on Azure Systems
Azure Files NFS volumes now support encryption in transit via TLS. With this enhancement, Azure Files NFS v4.1 offers the robust security that modern enterprises require, ensuring all traffic between clients and servers is fully encrypted without compromising performance. Azure Files NFS data can now be encrypted end to end: at rest, in transit, and across the network.

Using stunnel, an open-source TLS wrapper, Azure Files encrypts the TCP stream between the NFS client and Azure Files with strong AES-GCM encryption, without needing Kerberos. This ensures data confidentiality while eliminating the need for complex setups or external authentication systems like Active Directory.

The AZNFS utility package simplifies encrypted mounts by installing and setting up stunnel on the client (Azure VMs). The AZNFS mount helper mounts the NFS shares with TLS support. The mount helper initializes a dedicated stunnel client process for each storage account's IP address. The stunnel client process listens on a local port for inbound traffic and forwards the encrypted NFS client traffic to port 2049, where the NFS server is listening. The AZNFS package also runs a background job called aznfswatchdog. It ensures that stunnel processes are running for each storage account and cleans up after all shares from a storage account are unmounted. If a stunnel process terminates unexpectedly, the watchdog restarts it. For more details, refer to the following document: How to encrypt data in transit for NFS shares.

Availability in Azure Regions

All regions that support Azure Premium Files now support encryption in transit.

Supported Linux releases

For SAP on Azure environments, Azure Files NFS Encryption in Transit (EiT) is available on the following operating system releases:

- SLES for SAP 15 SP4 onwards
- RHEL for SAP 8.6 onwards (EiT is currently not supported for file systems managed by Pacemaker clusters on RHEL.)

Refer to SAP Note 1928533 for operating system supportability for SAP on Azure systems.

How to deploy Encryption in Transit (EiT) for Azure Files NFS Shares

Refer to the SAP on Azure deployment planning guide about using Azure Premium Files NFS and SMB for SAP workloads. As described in the planning guide, the following are the supported uses of Azure Files NFS shares for SAP workloads, and EiT can be used in all of these scenarios:

- sapmnt volume for a distributed SAP system
- transport directory for the SAP landscape
- /hana/shared for HANA scale-out; review carefully the considerations for sizing /hana/shared, as an appropriately sized /hana/shared volume contributes to system stability
- file interface between your SAP landscape and other applications

1. Deploy the Azure Files NFS storage account. Refer to the standard documentation for creating the Azure Files storage account, file share, and private endpoint: Create an NFS Azure file share. Note: you can enforce EiT for all the file shares in a storage account by enabling the 'secure transfer required' option.

2. Deploy the mount helper (AZNFS) package on the Linux VM. Follow the instructions for your Linux distribution to install the package.

3. Create the directories to mount the file shares:

   mkdir -p <full path of the directory>

4. Mount the NFS file share. Refer to the section for mounting an Azure Files NFS EiT file share on Linux VMs. To mount the file share permanently, add the mount entry to '/etc/fstab':
```
vi /etc/fstab

sapnfs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 aznfs noresvport,vers=4,minorversion=1,sec=sys,_netdev 0 0

# Mount the file systems
mount -a
```

Notes:

- The file systems above are an example to explain the mount command syntax.
- When adding an NFS mount entry to /etc/fstab, the fstype is normally "nfs". However, to use the AZNFS mount helper and EiT, the fstype must be "aznfs", which is not a native OS file system type; at boot time the system may try to mount these entries before the aznfs watchdog is active, and they can fail. Always add the "_netdev" option to these /etc/fstab entries so the shares are mounted on reboot only after the required services (like the network) are active.
- You can add the "notls" option to the mount command if you don't want to use EiT but just want to use the AZNFS mount helper to mount the file system. Note that you cannot mix EiT and non-EiT methods for different Azure Files NFS file systems in the same Azure VM; mount commands may fail if both methods are used in the same VM.
- The mount helper supports private-endpoint-based connections for Azure Files NFS EiT.
- If the SAP VM is custom domain joined, you can use a custom DNS FQDN or short name for the file share in '/etc/fstab', as defined in DNS. To verify hostname resolution, use the 'nslookup <hostname>' and 'getent hosts <hostname>' commands.

5. Mount the NFS file share as a Pacemaker cluster resource for SAP Central Services. In a high availability setup of SAP Central Services, the file system may be used as a resource in a Pacemaker cluster and needs to be mounted using Pacemaker cluster commands. In the Pacemaker commands that set up the file system as a cluster resource, change the mount type from 'nfs' to 'aznfs'. It's also recommended to use '_netdev' in the options parameter. The following SAP Central Services setup scenarios use Azure Files NFS as a Pacemaker file system resource, and Azure Files NFS EiT can be used in both:

   - Azure VMs high availability for SAP NW on SLES with NFS on Azure Files
   - Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files

   For SUSE Linux: SLES 15 SP4 (for SAP) and higher releases recognise 'aznfs' as a file system type in the Pacemaker resource agent. SUSE recommends the simple mount approach for high availability setups of SAP Central Services, in which all file systems are mounted via '/etc/fstab' only.

   For RHEL Linux: RHEL 8.6 (for SAP) and higher releases will recognise 'aznfs' as a file system type in the Pacemaker resource agent. At the time of writing, 'aznfs' is not yet recognised by the Filesystem resource agent on RHEL, so this setup can't be used at the moment.

6. For SAP HANA scale-out with HSR: Azure Files NFS EiT can be used for SAP HANA scale-out with HSR as described in the docs below.

   - SAP HANA scale-out with HSR and Pacemaker on SLES
   - SAP HANA scale-out with HSR and Pacemaker on RHEL

   Mount the '/hana/shared' file system with EiT by defining the file system type as 'aznfs' in '/etc/fstab'. It's also recommended to use '_netdev' in the options parameter.

   For SUSE Linux: In the "Create file system resource" section with the SAP HANA high availability "SAPHanaSR-ScaleOut" package, where a dummy file system cluster resource is created to monitor and report failures of the '/hana/shared' file system, continue to follow the documented steps as-is, with 'fstype=nfs4'.
   The '/hana/shared' file system will still use EiT as defined in '/etc/fstab'. For the SAP HANA high availability "SAPHanaSR-angi" package, no further actions are needed to use Azure Files NFS EiT.

   For RHEL Linux: In the "Create file system resource" section, replace the file system type 'nfs' with 'aznfs' in the Pacemaker resource configuration for the '/hana/shared' file systems.

7. Validate in-transit data encryption for Azure Files NFS. Refer to the "Verify that the in-transit data encryption succeeded" section to check and confirm that EiT is working.

Summary

Go ahead with EiT! Simplified deployment of Encryption in Transit for Azure Files Premium NFS (locally redundant storage / zone-redundant storage) will strengthen the security footprint of production and non-production SAP on Azure environments.

From GlusterFS to Azure Files: A Real-World Migration Story
A few weeks ago, we received a call familiar to many cloud architects - a customer with a massive GlusterFS deployment impacted by Red Hat's end-of-support deadline (December 2024) wondering: "What now?". With hundreds of terabytes across their infrastructure serving both internal teams and external customers, moving away from GlusterFS became a business continuity imperative. Having worked with numerous storage migrations over the years, I could already see the late nights ahead for their team if they simply tried to recreate their existing architecture in the cloud. So, we rolled up our sleeves and dug into their environment to find a better way forward.

The GlusterFS challenge

GlusterFS emerged in 2005 as a groundbreaking open-source distributed file system that solved horizontal scaling problems at a time when enterprise storage had to work around mechanical device limitations. Storage administrators traditionally created pools of drives limited to single systems and difficult to expand without major downtime. GlusterFS addressed this by distributing storage across physical servers, each maintaining its own redundant storage. Red Hat's acquisition of GlusterFS (Red Hat to Acquire Gluster) in 2011 brought enterprise legitimacy, but its architecture reflected a pre-cloud world with significant limitations:

- Costly local/geo replication due to limited site/WAN bandwidth
- Upgrades requiring outages and extensive planning
- Overhead from OS patching and maintaining compliance standards
- Constant "backup babysitting" for offsite tape rotation
- 24/7 on-call staffing for potential "brick" failures

Indeed, during our initial discussions, the customer's storage team lead half-jokingly mentioned having a special ringtone for middle-of-the-night "brick" failure alerts. We also noticed that they were running their share exports on SMB 3.0 and NFS 3.0, which is considered "slightly" deprecated today.

Note: In GlusterFS, a "brick" is the basic storage unit - a directory on a disk contributing to the overall volume that enables scalable, distributed storage.

Why Azure Files made perfect sense

Given the challenges our customer faced in maintaining redundancy and the administration effort involved, they needed a turnkey solution to manage their data. Azure Files provided a fully managed file share service in the cloud, offering SMB, NFS, and REST-based shares, with on-demand scaling, integrated backups, and automated failover.

GlusterFS was designed for large-scale distributed storage systems. With Azure Files, GlusterFS customers can take advantage of up to 100 TiB on premium file shares or 256 TiB with provisioned v2 HDD, 10 GBps of throughput, and up to 10K IOPS for demanding workloads. The advantages of Azure Files don't end at performance. As customers migrate from GlusterFS to Azure Files, these additional benefits come out of the box:

- Azure Backup integration
- One-click redundancy configuration upgrades
- Built-in monitoring via Azure Monitor
- HIPAA, PCI DSS, and GDPR compliance
- Enterprise security through granular access control and encryption (in transit and at rest)

The financial reality

At a high level, we found that migrating to Azure Files was about 3x cheaper than migrating to an equivalent VM-based setup running GlusterFS. We compared a self-managed 3-node GlusterFS cluster (running SMB 3.0) on Azure VMs with Premium SSD v2 disks against Azure Files premium tier (SMB 3.11).

Note: All disks on the VMs use Premium SSD v2 for best cost savings. Region: East US 2.
| Component | GlusterFS on Azure VMs with Premium SSD v2 disks | Azure Files Premium |
| --- | --- | --- |
| Compute: 3 x D16ads v5 VMs (16 vCPUs, 64 GiB RAM) | $685.75 | N/A |
| VM OS disks (P10) | $15.42 | N/A |
| Storage: 100 TB | $11,398.18 | $10,485.75 |
| Provisioned throughput (storage only) | 2,400 MBps | 10,340 MBps |
| Provisioned IOPS (storage only) | 160,000 | 102,400 |
| Additional storage for replication (~200%) | $22,796.37 | N/A |
| Backup & DR: backup solution (30 days, ZRS redundancy) | $16,343.04 | $4,608.00 |
| Monthly total | $51,238.76 | $15,094.75 |

As the table illustrates, even before we factor in administration costs, Azure Files has a compelling financial advantage. We also recently released the provisioned v2 billing model for the Azure Files HDD tier, which provides fine-grained cost management and can scale up to 256 TiB.

With GlusterFS running on-premises, customers must also take into account various administrative overheads, which go away with Azure Files:

| Factors | Current (GlusterFS) | Azure Files |
| --- | --- | --- |
| Management & maintenance | Significant | None |
| Storage administration personnel | 15-20 hours/week | Minimal |
| Rebalancing operations | Required | Automatic |
| Failover effort | Required | Automatic |
| Capacity planning | Required | Automatic |
| Scaling complexity | High | None |
| Implementation of security controls | Required | Included |

The migration journey

We developed a phased approach tailored to the customer's risk tolerance, starting with lower-priority workloads as a pilot:

- Phase 1: Assessment (2-3 weeks) - Inventory GlusterFS environments and analyse workloads; define requirements and select the appropriate Azure Files tier; develop the migration strategy.
- Phase 2: Pilot migration (1-2 weeks) - Set up Azure Files and test connectivity; migrate test workloads and refine the process.
- Phase 3: Production migration (variable) - Execute transfers using appropriate tools (AzCopy, Robocopy, rsync/fpsync); implement incremental sync and validate data integrity (see the copy sketch near the end of this post).
- Phase 4: Optimization (1-2 weeks) - Fine-tune performance and implement monitoring; decommission legacy infrastructure.

Results that matter

Working with Tata Consultancy Services (TCS) as our migration partner, the customer ran a POC migrating from a three-node RHEL 8 environment with a 1 TB SMB (GlusterFS) share to an Azure storage account with premium file shares. The source share was limited to ~1,500 IOPS and had 20+ subfolders, each reserved for application access, which made administrative tasks challenging. The application subfolder structure was mapped to individual Azure file shares as part of migration planning. In addition, each share was secured using on-premises Active Directory domain controller-based share authentication. Migration was done using Robocopy in mirror mode, with the SMB shares mounted on Windows clients.

The migration delivered significant benefits:

- Dramatically improved general-purpose performance from migrating HDD-based shares to SSD (~1,500 IOPS shared at the source vs. 3,000 IOPS / 200 MBps base performance per share).
- Meeting and exceeding the customer's current RTO and RPO requirements (15 minutes).
- Noticeable performance gains reported for SQL Server workloads.
- Flexibility to resize each share up to the Azure Files maximum limit, independent of the noisy neighbours in the previous configuration.
- Significantly reduced TCO (at 33% of the cost of an equivalent VM-based deployment) with higher base performance.

What this means for your GlusterFS environment

If you're facing the GlusterFS support deadline, this is an opportunity to modernize your file storage approach.
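For the copy step called out in Phase 3, the right tool depends on the protocol: the POC above used Robocopy in mirror mode over SMB, while NFS shares are typically copied with rsync or fpsync. Below is a hedged sketch of an rsync-based bulk copy plus incremental cutover sync; the mount paths are placeholders.

```bash
# Hedged sketch - mount paths are placeholders. Source is the existing
# GlusterFS volume mounted locally; target is an NFS Azure file share mounted
# with the AZNFS helper or a standard NFS v4.1 mount.

SRC=/mnt/glusterfs/projects
DST=/mnt/azurefiles/projects

# Initial bulk copy, preserving permissions, ownership, and timestamps.
rsync -a --numeric-ids --info=progress2 "$SRC"/ "$DST"/

# Incremental sync at cutover; --delete mirrors deletions to the target.
rsync -a --numeric-ids --delete "$SRC"/ "$DST"/

# For very large trees, fpsync (from the fpart package) can run several
# rsync workers in parallel to shorten the copy window.
```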
Azure Files offers a chance to eliminate infrastructure headaches through simplified management, robust security, seamless scalability, and compelling economics. Looking to begin your own migration? Reach out to us at azurefiles@microsoft.com, contact your Microsoft representatives, or explore our Azure Files documentation to learn more about capabilities and migration paths.

Enhance Your Linux Workloads with Azure Files NFS v4.1: Secure, Scalable, and Flexible
Enhance your Linux workloads with Azure Files NFS v4.1, an enterprise-grade solution. With new support for in-transit encryption and RESTful access, it delivers robust security and flexible data access for mission-critical and data-intensive applications.

Announcing General Availability of Next-generation Azure Data Box Devices
Today, we're excited to announce the general availability of Azure Data Box 120 and Azure Data Box 525, our next-generation compact, NVMe-based Data Box devices. These devices are currently available for customers to order in the US, US Gov, Canada, EU, and UK Azure regions, with broader availability coming soon. Since the preview announcement at Ignite '24, we have successfully ingested petabytes of data across multiple orders, serving customers in a range of industry verticals. Customers have expressed delight over the reliability and efficiency of the new devices, with up to a 10x improvement in data transfer rates, highlighting them as a valuable and essential asset for large-scale data migration projects.

These new device offerings reflect insights gained from working with our customers over the years and understanding their evolving data transfer needs. They incorporate several improvements to accelerate offline data transfers to Azure, including:

- Fast copy - built with NVMe drives for high-speed transfers and improved reliability, with support for faster network connections.
- Ease of use - a larger capacity offering (525 TB) in a compact form factor for easy handling.
- Resilient - ruggedized devices built to withstand rough conditions during transport.
- Secure - enhanced physical, hardware, and software security features.
- Broader availability - presence planned in more Azure regions, meeting local compliance standards and regulations.

What's new?

Improved Speed & Efficiency

- NVMe-based devices offer faster data transfer rates, providing a 10x improvement in data transfer speeds to the device compared to previous-generation devices. With a dataset comprised of mostly large (TB-sized) files, on average half a petabyte can be copied to the device in under two days.
- High-speed transfers to Azure, with data upload up to 5x faster for medium to large files, reducing the lead time for your data to become accessible in the Azure cloud.
- Improved networking, with support for up to 100 GbE connections, compared to 10 GbE on the older generation of devices.
- Two options with usable capacities of 120 TB and 525 TB in a compact form factor meeting OSHA requirements.
- Devices ship via next-day air in most regions.

Learn more about the performance improvements on Data Box 120 and Data Box 525.

Enhanced Security

The new devices come with several new physical, hardware, and software security enhancements. This is in addition to the built-in Azure security baseline for Data Box and the Data Box service security measures currently supported by the service.

- Secure boot functionality with hardware root of trust and Trusted Platform Module (TPM) 2.0.
- Custom tamper-proof screws and a built-in intrusion detection system to detect unauthorized device access.
- AES 256-bit BitLocker software encryption for data at rest is available today. Hardware encryption via the RAID controller, which will be enabled by default on these devices, is coming soon. Once available, customers can enable double encryption through both software and hardware encryption to meet their sensitive data transfer requirements.
- These ISTA 6A-compliant devices are built to withstand rough conditions during shipment while keeping both the device and your data safe and intact.

Learn more about the enhanced security features on Data Box 120 and Data Box 525.

Broader Azure region coverage

A recurring request from our customers has been wider regional availability of higher-capacity devices to accelerate large migrations.
We're happy to share that Azure Data Box 525 will be available across the US, US Gov, EU, UK, and Canada, with broader presence in EMEA and APAC regions coming soon. This marks a significant improvement in the availability of a large-capacity device compared to the current Data Box Heavy, which is available only in the US and Europe.

What our customers have to say

For the last several months, we've been working directly with customers of all industries and sizes to leverage the next-generation devices for their data migration needs. Customers love the larger capacity with form-factor familiarity, seamless setup, and faster copy.

"We utilized Azure Data Box for a bulk migration of Unix archive data. The data, originating from IBM Spectrum Protect, underwent pre-processing before being transferred to Azure blobs via the NFS v4 protocol. This offline migration solution enabled us to efficiently manage our large-scale data transfer needs, ensuring a seamless transition to the Azure cloud. Azure Data Box proved to be an indispensable tool in handling our specialized migration scenario, offering a reliable and efficient method for data transfer." - ST Microelectronics Backup & Storage team

"This new offering brings significant advantages, particularly by simplifying our internal processes. With deployments ranging from hundreds of terabytes to even petabytes, we previously relied on multiple regular Data Box devices—or occasionally Data Box Heavy devices—which required extensive operational effort. The new solution offers sizes better aligned with our needs, allowing us to achieve optimal results with fewer logistical steps. Additionally, the latest generation is faster and provides more connectivity options at data centre premises, enhancing both efficiency and flexibility for large-scale data transfers." - Lukasz Konarzewski, Senior Data Architect, Commvault

"We have had a positive experience overall with the new Data Box devices to move our data to Azure Blob storage. The devices offer easy plug and play installation, detailed documentation especially for the security features and good data copy performance. We would definitely consider using it again for future large data migration projects." - Bas Boeijink, Cloud Engineer, Eurofiber Cloud Infra

Upcoming changes to older SKU availability

Note that in regions where the next-gen devices are available, new orders for Data Box 80 TB and Data Box Heavy devices cannot be placed after May 31, 2025. We will, however, continue to process and support all existing orders.

Order your device today!

The devices are currently available for customers to order in the US, Canada, EU, UK, and US Gov Azure regions. We will continue to expand to more regions in the upcoming months. Azure Data Box provides customers with one of the most cost-effective solutions for data migration, offering competitive pricing with the lowest cost per TB among offline data transfer solutions. You can learn more about pricing across regions by visiting our pricing page.

You can use the Azure portal to select the SKU suitable for your migration needs and place the order. Learn more about the all-new Data Box devices here. We are committed to continuing to deliver innovative solutions that lower the barrier for bringing data to Azure. Your feedback is important to us.
Tell us what you think about the new Azure Data Box devices by writing to us at DataBoxPM@microsoft.com - we can't wait to hear from you.

Supercharge Azure Files performance for metadata-intensive workloads
Handling millions - or billions - of small files is business as usual for many cloud workloads. But behind the scenes, it's not just about reading and writing data: it's the constant file opens, closes, directory listings, and existence checks that really impact performance. These metadata operations may seem small, but they're critical, and they can become a major bottleneck if they're not fast. From AI/ML workloads on Azure Kubernetes Service, to web apps like Moodle, to CI/CD pipelines and virtual desktops, many applications are metadata intensive. And when every millisecond counts, latency in these operations can drag down the entire experience.

That's why we're excited to introduce a major boost in metadata performance:

- Applications experience up to 55% lower latency and 2-3x more consistent response times, ensuring greater reliability.
- Workloads with high metadata interaction, such as AI/ML pipelines, see the biggest gains, with 3x higher parallel metadata IOPS for improved efficiency and scalability.
- Removing metadata bottlenecks allows more data operations too: we've seen workloads increase data IOPS and throughput by up to 60%.

In Azure Files SSD (premium), this enhancement accelerates metadata operations for both the SMB and REST protocols, benefiting new and existing file shares at no extra cost. Whether you're running critical business applications, scaling DevOps workflows, or supporting thousands of virtual desktop users, Azure Files is now faster, more scalable, and optimized for your most demanding workloads.

Metadata Caching accelerates real-world solutions

GIS on Azure Virtual Desktop

GIS (geographic information system) workloads are crucial for analyzing and managing spatial data, supporting industries like urban planning, agriculture, and disaster management. By visualizing spatial relationships, GIS helps organizations make better decisions about infrastructure and resource management. Azure Virtual Desktop (AVD) is a popular choice for hosting GIS workloads in the cloud. These workloads often experience performance bottlenecks due to frequent interactions with large volumes of smaller files on shared file storage. Metadata caching reduces latency and accelerates file interactions, such as opening and closing these files, enabling faster data access and improving GIS job execution in virtual desktop environments.

Customers like Suncor Energy are already experiencing the impact of metadata caching in GIS workloads:

"Enabling Metadata Cache in Azure Files SSD (premium) significantly improved geospatial (GIS) workload performance, reducing execution time by 43.18%. This enhancement boosts throughput and IOPS, increasing the value of Azure Files." - Colin Stuckless, Suncor Energy Inc.

Moodle Web Services

Moodle is a comprehensive learning management system (LMS) that combines server hosting, databases (such as MySQL or PostgreSQL), file storage (using Azure Files SSD), and PHP-based web servers. It's designed to facilitate course management, allowing instructors to upload materials, assignments, and quizzes. Moodle issues frequent read/write requests for course materials, assignments, and user interactions, generating a high volume of metadata lookups, particularly when accessing shared content or navigating large course repositories.

With metadata caching, Moodle operates faster and more efficiently. Response times have improved by 33%, reducing wait times for students and instructors when accessing materials or submitting work.
These enhancements also boost Moodle's scalability, enabling it to support 3x more students and user sessions without compromising performance. Even during peak usage, when many users are active simultaneously, Moodle remains stable and responsive. As a result, students can access resources and submit work more quickly, while instructors can manage larger courses and assignments more effectively.

GitHub Actions on Azure Kubernetes Service (AKS)

GitHub Actions is a powerful automation tool seamlessly integrated with GitHub, enabling developers to build, test, and deploy code directly from their repositories. By leveraging Azure Kubernetes Service (AKS), GitHub Actions automates tasks through workflows defined in YAML files, facilitating efficient container orchestration, scaling, and deployment of applications within a Kubernetes environment. These workflows can be triggered by various events, such as code pushes, pull requests, or scheduled times, streamlining the development process and enhancing efficiency. These operations generate a high volume of metadata lookups, as each workflow execution involves checking for updated dependencies, accessing cached libraries, and continuously writing execution logs. Metadata caching significantly reduces the time required to retrieve and process metadata, resulting in quicker build artifact handling and smoother, more efficient deployment cycles. As a result, pipeline execution is 57% faster, allowing developers to build and deploy in half the time.

How to get started

You can supercharge your Azure Files performance by enabling metadata caching for your applications today, at no extra cost. To get started, register your subscription for the Metadata Cache feature using the Azure portal or PowerShell to enable metadata caching on all new and existing accounts. Metadata Cache is now generally available in multiple regions, with more being added as we expand coverage. For regional availability, please visit the following link.

General Availability: Vaulted backup for Azure Files - Boost your data security and compliance
We are thrilled to announce the general availability (GA) of vaulted backup support in Azure Backup for Azure Files standard tier, to help you seamlessly protect data and applications hosted on Azure file shares. With this release, you can now leverage vaulted backup integration to protect standard SMB file shares. Azure Backup's vaulted support for Azure file shares provides enhanced data protection, with the ability to configure snapshot and vaulted backups in a single policy targeting a secure backup location (a Recovery Services vault), and supports regional recovery. Vaulted backup provides advanced protection capabilities, such as ransomware protection and the ability to restore even when the file share is deleted, which are missing with snapshot-only backup. The vaulted backup solution also integrates seamlessly with Azure File Sync, allowing File Sync customers to protect data tiered to the cloud long-term in a cost-effective manner. In this blog post, let's explore how Azure Backup can enable a robust data protection solution for businesses migrating and hosting applications on Azure file shares.

Security: Protection against ransomware

Ransomware and malware attacks continue to be a major threat to organizations worldwide, often leaving businesses at the mercy of cybercriminals demanding hefty ransom payments in exchange for access to their encrypted data. Vaulted backups provide a vital line of defense, ensuring that organizations can recover their data without giving in to ransom demands. Key features offered by vaulted backups that protect against ransomware:

- Isolation: Vaulted backup data is isolated from your production storage accounts and stored in a separate tenant managed by Microsoft. This isolation helps safeguard your data against unauthorized tampering and ensures that your backups remain intact.
- Advanced security: Features like vault lock, multi-user authorization, and soft delete add additional layers of protection, ensuring that backups are immune to malicious deletion or tampering.
- Governance and security posture: Azure Backup integrates with the Business Continuity and Disaster Recovery (BCDR) security posture, allowing you to better manage and govern the security of your backups. This ensures that your backups meet the right level of protection and are recoverable when you need them most.

Regulatory and compliance

Azure file shares enable users in industries like legal, finance, and health to store crucial business data. To comply with regulations and compliance checks, you need offsite backups with long-term retention, which snapshots alone can't provide. With vaulted backup, users can move snapshots to a Recovery Services vault in the same Azure region as their primary storage, with options for cross-regional replication. This setup allows backup data to be retained for up to 99 years in low-cost, highly secure, immutable storage, meeting regulatory and compliance requirements during audits and legal holds.

Furthermore, with the introduction of the new cross-subscription backup capability, organizations can allocate backup data to dedicated subscriptions. This feature allows customers to consolidate all backups into a single subscription, enhancing cost management and ensuring independent access control. It enables organizations to retain control over their data protection strategy while ensuring that each department or project adheres to its specific regulatory and security requirements.
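If you prefer to script this setup rather than use the portal, the following is a hedged sketch using the Azure CLI; the resource names and policy name are placeholders, and you should confirm the current az backup parameters in the documentation. The portal-based flow is summarized under "Getting started" later in this post.

```bash
# Hedged sketch - resource names are placeholders; verify current parameters
# in the Azure CLI docs before relying on this.

RG="rg-backup-demo"
VAULT="rsv-files-demo"
SA="mystorageacct"
SHARE="myshare"
POLICY="DailyVaultedPolicy"   # a policy defining snapshot + vault tiers and retention

# 1. Create a Recovery Services vault.
az backup vault create --resource-group "$RG" --name "$VAULT" --location westus2

# 2. Create (or reuse) a backup policy for Azure file shares - omitted here;
#    the policy can be authored in the portal or with 'az backup policy create'.

# 3. Enable protection for the file share with that policy.
az backup protection enable-for-azurefileshare \
  --resource-group "$RG" \
  --vault-name "$VAULT" \
  --policy-name "$POLICY" \
  --storage-account "$SA" \
  --azure-file-share "$SHARE"
```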
Enterprise ready

Vaulted backup support now enables adherence to the widely accepted 3-2-1 backup rule for Azure Files protection. Azure Backup is well integrated with the Azure Business Continuity Center, which offers centralized management for visibility, job monitoring, alerts, and reporting.

How does 3-2-1 backup help? Human errors, insider threats, or stolen credentials can lead to critical data loss. File share snapshots serve as the first line of defense to restore your data. In cases where snapshots are not available, vaulted backups, stored securely outside of your primary storage account, provide an additional protected copy of your data. Additionally, the backup copy can be replicated to another region using geo-redundant storage (GRS). A backup policy lets you manage the schedule and retention for both snapshots and vault copies. In the event of deletion, whether accidental or malicious, the restore process is first attempted from snapshots. If snapshots are unavailable, recovery proceeds from the vault. If the primary region is down, you can restore from the secondary region with the cross-region restore option.

Getting started

Here are three simple steps to help you get started with configuring vaulted backup for Azure file shares:

1. Create a Recovery Services vault: A vault is a management entity that stores backups and allows you to access and manage them.
2. Create a backup policy: A backup policy enables you to configure the frequency and retention of backups based on your business requirements.
3. Select the storage account and file shares to back up: You can choose to back up all file shares or select specific file shares from the selected storage account, depending on the criticality of the data they contain.

Learn more about vaulted backup for file shares here.

Pricing and availability

Vaulted backup for Azure file shares (standard) is generally available in these regions. Vaulted backup for premium file shares will continue to be in public preview. You will incur a protected instance fee and charges for backup storage for both standard and premium shares from 1 April 2025. To learn about pricing, refer to the Azure file share backup pricing page.

Contact us

If you have questions or feedback, please reach out to us at AskAzureBackupTeam@microsoft.com.

Enable ADDS authentication fails
I am trying to enable ADDS authentication using the AzFilesHybrid module. When I execute the command:

```
Join-AzStorageAccount `
  -ResourceGroupName $ResourceGroupName `
  -StorageAccountName $StorageAccountName `
  -SamAccountName $SamAccountName `
  -DomainAccountType $DomainAccountType `
  -OrganizationalUnitDistinguishedName $OuDistinguishedName
```

it keeps progressing with the below warning, but the progress gets stuck without completing or showing any error message.