Azure Files
Lower costs and boost flexibility with Azure Files provisioned v2
For enterprise IT professionals and startup developers alike, cost efficiency for file storage is top of mind. Whether you're running mission-critical databases, production-scale applications like SAP, or cloud-native applications using file storage on AKS, your storage infrastructure should adapt to your workload - not the other way around. To put this flexibility in your hands, we introduced the provisioned v2 model for the HDD (standard) media tier in 2024. Today, we are excited to announce that we're extending the provisioned v2 model to the SSD (premium) media tier.

Provisioned v2 is designed to give you more control, better performance alignment, and significant cost savings across a wide range of customer scenarios by decoupling performance from capacity, lowering the minimum share size to 32 GiB, and increasing the maximum share size to 256 TiB. With provisioned v2, you can dynamically scale your file share capacity or performance up or down as your workload pattern requires, without any downtime.

Right-sized performance for every workload

Whether you are running general-purpose file shares, DevOps tooling, AI workflows, or databases, you can benefit from the provisioned v2 model. Here are some examples:

- Database workloads such as SQL Server, Oracle®, MongoDB, and enterprise platforms like SAP and Epic require high IOPS and throughput but minimal storage. With provisioned v2, you can secure the performance you need without excess storage, resulting in substantial cost savings.
- Containerized workloads, like Azure Kubernetes Service (AKS), often use very small file shares to achieve shared storage between volumes. With provisioned v2, we've decreased the minimum share size from 100 GiB to 32 GiB and enabled you to provision just the minimum IOPS and throughput that's included for free. This means the minimum file share cost in Azure Files drops from $16/month to just $3.20/month - an 80% cost savings!
- Workloads that require fast fetches of infrequently used data, like media files, need IOPS and throughput only occasionally but demand the low retrieval latency that only SSD storage media can provide. With provisioned v2, we've increased the maximum share size from 100 TiB to 256 TiB, enabling larger than ever file shares on Azure Files. And the flexible provisioning afforded by provisioned v2 lets you dramatically decrease bundled IOPS and throughput to match the archive's requirements.

Let's take a deeper look at these savings with some sample workloads:

| Workload scenario | Provisioned v1 | Provisioned v2 | Cost savings |
|---|---|---|---|
| Workload using defaults for IOPS and throughput (10 TiB storage, ~13K IOPS, ~1.1 GiB/sec throughput) | $1,638.40/month | $1,341.09/month | 18% |
| Relational database workload (4 TiB storage, 20K IOPS, 1.5 GiB/sec throughput) | $2,720.00/month | $925.42/month | 66% |
| Hot archive for multimedia (100 TiB storage, 15K IOPS, 2 GiB/sec throughput) | $16,384.00/month | $10,641.93/month | 35% |

To learn more about how to compare your costs between the provisioned v2 and provisioned v1 models, see understanding the costs of the provisioned v2 model. All pricing comparisons are shown using the West US 2 prices for locally redundant storage (LRS).

Top reasons to give provisioned v2 a try

If you haven't looked at Azure Files before, now is the best time to get started. Here's why you should consider making the move to Azure Files with provisioned v2 SSD:

- Affordable, with low entry costs starting at just $3.20/month.
- Flexible and customizable to fit a wide range of requirements.
- Easy-to-understand and predictable pricing.
- Support for high IOPS and low-latency performance, ideal for performance-critical applications that require sustained throughput and rapid data access.
- Support for unpredictable or burst-heavy usage patterns, ensuring smooth performance under variable demand.
- Scalable sizing options, with SSD file shares ranging from 32 GiB to 256 TiB - well suited for workloads with smaller footprints.

How it works

With the provisioned v2 model, IOPS and throughput are recommended to you based on the amount of provisioned storage you select; however, you can completely override this recommendation. If your workload needs more IOPS or throughput than the default recommendation, you can provision more without having to provision extra storage. And if your workload needs less, you can decrease the provisioned IOPS and throughput all the way down to the minimums for a file share. Best of all, you don't have to get this right at file share creation: if you don't know your performance requirements, or your workload's patterns change over time, you can dynamically scale your file share's provisioning up or down as needed, without any downtime. A provisioned v2 file share also gives you all the telemetry needed to monitor your workload's used IOPS and throughput, enabling you to continuously tune your file share to your workload.

Getting started is easy

Provisioned v2 for SSD is available right now, in all public cloud regions (see provisioned v2 availability for details). Simply select "Azure Files" for primary service, "Premium" for performance, and "Provisioned v2" for billing when creating your storage account in the Azure portal. If you prefer the command line, a sketch follows.
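The following is a minimal Azure CLI sketch of the same flow, not the definitive syntax: the `PremiumV2_LRS` SKU name and the `--provisioned-iops`/`--provisioned-bandwidth-mibps` parameters are assumptions based on recent CLI versions, and all resource names and values are illustrative. Verify against the linked documentation before relying on them.

```bash
# Create a storage account on the provisioned v2 billing model for
# the SSD (premium) media tier (SKU name assumed: PremiumV2_LRS).
az storage account create \
  --name mystorageaccount \
  --resource-group myresourcegroup \
  --location westus2 \
  --kind FileStorage \
  --sku PremiumV2_LRS

# Create a 1 TiB share and override the recommended performance
# (parameter names are assumptions; values are illustrative).
az storage share-rm create \
  --storage-account mystorageaccount \
  --resource-group myresourcegroup \
  --name myshare \
  --quota 1024 \
  --provisioned-iops 10000 \
  --provisioned-bandwidth-mibps 1024

# Later, scale provisioning up or down without downtime as the
# share's telemetry reveals its real IOPS/throughput needs.
az storage share-rm update \
  --storage-account mystorageaccount \
  --resource-group myresourcegroup \
  --name myshare \
  --provisioned-iops 4000 \
  --provisioned-bandwidth-mibps 256
```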
To learn more about how to get started, see:

- Azure Files pricing page
- Understanding the provisioned v2 model | Microsoft Learn
- How to create an Azure file share | Microsoft Learn

Secure Linux workloads using Azure Files with Encryption in Transit

Encryption in Transit (EiT) overview

As organizations increasingly move to cloud environments, safeguarding data security both at rest and in transit is essential for protecting sensitive information from emerging threats and maintaining regulatory compliance. Azure Files already offers encryption at rest using Microsoft-managed or customer-managed keys for NFS file shares. Today, we're excited to announce the General Availability of Encryption in Transit (EiT) for NFS file shares.

By default, Azure encrypts data moving across regions. In addition, all clients accessing Azure Files NFS shares are required to be within the scope of a trusted virtual network (VNet) to ensure secure access to applications. However, data transferred between resources within a VNet remains unencrypted. Enabling EiT ensures that all reads and writes to NFS file shares within the VNet are encrypted, providing an additional layer of security. With EiT, enterprises running production-scale applications on Azure Files NFS shares can now meet their end-to-end compliance requirements.

Feedback from the NFS community and Azure customers emphasized the need for an encryption approach that is easy to deploy, portable, and scalable. TLS enables a streamlined deployment model for NFS with EiT while minimizing configuration complexity, maintaining protocol transparency, and avoiding operational overhead. The result is a more secure, performant, and standards-compliant solution that integrates seamlessly into existing NFS workflows. With EiT, customers can now encrypt all NFS traffic using the latest and most secure version of TLS, TLS 1.3, achieving enterprise-grade security effortlessly. TLS provides three core security guarantees:

- Confidentiality: Data is encrypted, preventing eavesdropping.
- Authentication: The client verifies the server via certificates during the handshake to establish trust.
- Integrity: TLS ensures that information arrives safely and unchanged, adding protection against data corruption or bit flips in transit.

TLS encryption for Azure Files is delivered via stunnel, a trusted, open-source proxy designed to add TLS encryption to existing client-server communications without modifying the applications themselves. It has been widely used across industries for many years for its robust security and transparent in-transit encryption.

AZNFS Mount Helper for Seamless Setup

EiT client setup and mounting for NFS volumes may seem like a daunting task, but we have made it easier with the AZNFS mount helper tool (a sample mount is sketched after this list):

- Simplicity and resiliency: AZNFS is a simple, open-source tool, maintained and supported by Microsoft, that automates stunnel setup and NFS volume mounting over a secure TLS tunnel. AZNFS's built-in watchdog auto-reconnect logic protects TLS mounts, ensuring high availability during unexpected connectivity interruptions. Sample AZNFS mount commands, customized to your NFS volume, are available in the Azure portal (screenshot below).

Fig 1. Azure portal view to configure AZNFS for Azure clients using EiT

- Standardized and flexible: Mounting with AZNFS incorporates the Microsoft-recommended performance, security, and reliability mount options by default while providing the flexibility to adjust these settings to fit your workload. For example, while TLS is the default selection, you can override it for non-TLS connections in scenarios like testing or debugging.
- Broad Linux compatibility: AZNFS is available through Microsoft's package repository for major Linux distributions, including Ubuntu, Red Hat, SUSE, AlmaLinux, Oracle Linux, and more.
- Seamless upgrades: The AZNFS package updates automatically in the background without affecting active mount connections. You will not need any maintenance windows or downtime to perform upgrades.
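As a rough sketch of the flow AZNFS automates, the commands below install the package and mount a share over TLS on an Ubuntu client that already has Microsoft's package repository configured. The account and share names are placeholders, and the `notls` override is an assumption based on the opt-out behavior described above; the portal-generated command for your share is the authoritative version.

```bash
# Install the AZNFS mount helper (Ubuntu example; other distributions
# use their native package manager against Microsoft's repository).
sudo apt-get install -y aznfs

# Mount the NFS share through the TLS tunnel that the aznfs mount
# type sets up via stunnel (TLS is the default).
sudo mkdir -p /mnt/myshare
sudo mount -t aznfs \
  mystorageaccount.file.core.windows.net:/mystorageaccount/myshare \
  /mnt/myshare \
  -o vers=4,minorversion=1,sec=sys

# For testing or debugging only: opt out of TLS (assumed option name).
# sudo mount -t aznfs <server>:/<account>/<share> /mnt/test -o notls
```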
The illustration below shows how EiT transmits data securely between clients and NFS volumes over trusted networks.

Fig 2. EiT setup flow and secure data transfer for NFS shares

Enterprise Workloads and Platform Support

EiT is compatible with applications running on a wide range of platforms, including Linux VMs in Azure, on-premises Linux servers, VM scale sets, and Azure Batch, ensuring compatibility with major Linux distributions for cloud, hybrid, and on-premises deployments.

- Azure Kubernetes Service (AKS): The preview of NFS EiT in AKS will be available shortly. In the meantime, the upstream Azure Files CSI Driver includes AZNFS integration, which can be manually configured to enable EiT for NFS volumes backing stateful container workloads.
- SAP: SAP systems are central to many business operations and handle sensitive data like financial information, customer details, and proprietary data. Securing this confidential data within the SAP environment, including its central services, is a critical concern. The NFS volumes used in central services are single points of failure, making their security and availability crucial. This blog post on SAP deployments on Azure provides guidance on using EiT-enabled NFS volumes in SAP deployment scenarios to make them even more secure. SAP tested EiT for their SAP RISE deployments and shared positive feedback:

"The NFS Encryption in Transit preview has been a key enabler for running RISE customers' mission-critical workloads on Azure Files, helping us meet high data-in-transit encryption requirements without compromising performance or reliability. It has been critical in supporting alignment with strict security architectures and control frameworks—especially for regulated industries like financial services and healthcare. We're excited to see this capability go GA and look forward to leveraging it at scale."
Ventsislav Ivanov, IT Architecture Chief Expert, SAP

- Compliance-centric verticals: During our preview, customers in industry verticals including financial services, insurance, and retail leveraged EiT to address their data confidentiality and compliance needs. One such customer, Standard Chartered, a major global financial institution, highlighted its benefits:

"The NFS Encryption in Transit preview has been a key enabler for migrating one of our on-premises applications to Azure. It allowed us to easily run tests in our development and staging environments while maintaining strict compliance and security for our web application assets. Installation of the required aznfs package was seamless, and integration into our bootstrap script for virtual machine scale set automation went smoothly. Additionally, once we no longer needed to disable the HTTPS requirement on our storage account, no further changes were necessary to our internal Terraform modules—making the experience nearly plug-and-play. We're excited to see this capability reach general availability."
Mohd Najib, Azure Cloud Engineer, Standard Chartered

Regional availability and pricing

Encryption in Transit GA with TLS 1.3 is rolling out globally and is now available in most regions.
EiT can be enabled on both new and existing storage accounts and Azure Files NFS shares. There is no additional cost for enabling EiT.

Next Steps to Secure Your Workloads

- Explore more: How to encrypt data in transit for NFS shares | Microsoft Learn
- Mandate security: Enable "Secure Transfer Required" on all your storage accounts with NFS volumes to mandate EiT for an additional layer of protection (a CLI sketch follows this list).
- Enforce at scale: Enable Azure Policy to enforce EiT across your subscription.
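As a small sketch of the "Mandate security" step, the following Azure CLI command turns on the secure-transfer requirement for an account; the account and resource group names are placeholders.

```bash
# Require secure transfer on the storage account so that only
# encrypted connections are accepted.
az storage account update \
  --name mystorageaccount \
  --resource-group myresourcegroup \
  --https-only true
```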
Please reach out to the team at AzureFiles@microsoft.com for any questions and feedback.

From GlusterFS to Azure Files: A Real-World Migration Story

A few weeks ago, we received a call familiar to many cloud architects: a customer with a massive GlusterFS deployment impacted by Red Hat's end-of-support deadline (December 2024), wondering, "What now?" With hundreds of terabytes across their infrastructure serving both internal teams and external customers, moving away from GlusterFS became a business continuity imperative. Having worked with numerous storage migrations over the years, I could already see the late nights ahead for their team if they simply tried to recreate their existing architecture in the cloud. So, we rolled up our sleeves and dug into their environment to find a better way forward.

The GlusterFS challenge

GlusterFS emerged in 2005 as a groundbreaking open-source distributed file system that solved horizontal scaling problems at a time when enterprise storage had to work around mechanical device limitations. Storage administrators traditionally created pools of drives limited to single systems that were difficult to expand without major downtime. GlusterFS addressed this by distributing storage across physical servers, each maintaining its own redundant storage. Red Hat's acquisition of GlusterFS (Red Hat to Acquire Gluster) in 2011 brought enterprise legitimacy, but its architecture reflected a pre-cloud world with significant limitations:

- Costly local/geo replication due to limited site/WAN bandwidth
- Upgrades requiring outages and extensive planning
- Overhead from OS patching and maintaining compliance standards
- Constant "backup babysitting" for offsite tape rotation
- 24/7 on-call staffing for potential "brick" failures

Indeed, during our initial discussions, the customer's storage team lead half-jokingly mentioned having a special ringtone for middle-of-the-night "brick" failure alerts. We also noticed that they were running the share exports on SMB 3.0 and NFS 3.0, something considered "slightly" deprecated today.

Note: In GlusterFS, a "brick" is the basic storage unit—a directory on a disk contributing to the overall volume that enables scalable, distributed storage.

Why Azure Files made perfect sense

Given the redundancy and administration challenges our customer faced, they required a turnkey solution to manage their data. Azure Files provided a fully managed file share service in the cloud, offering SMB, NFS, and REST-based shares with on-demand scaling, integrated backups, and automated failover. GlusterFS was designed for large-scale distributed storage systems; with Azure Files, GlusterFS customers can take advantage of up to 100 TiB of premium file storage or 256 TiB with provisioned v2 HDD, 10 GiB/sec of throughput, and up to 100K IOPS for demanding workloads.

The advantages of Azure Files don't just end at performance. As customers migrate from GlusterFS to Azure Files, they get these additional benefits out of the box:

- Azure Backup integration
- One-click redundancy configuration upgrades
- Built-in monitoring via Azure Monitor
- HIPAA, PCI DSS, and GDPR compliance
- Enterprise security through granular access control and encryption (in transit and at rest)

The financial reality

At a high level, we found that migrating to Azure Files was 3x cheaper than migrating to an equivalent VM-based setup running GlusterFS. We compared a self-managed three-node GlusterFS cluster (running SMB 3.0) on Azure VMs with Premium SSD v2 disks against Azure Files premium tier (SMB 3.1.1).

Note: All disks on the VMs use Premium SSD v2 for the best cost savings. Region: East US 2.
| Component | GlusterFS on Azure VMs with Premium SSD v2 disks | Azure Files premium |
|---|---|---|
| Compute: 3 x D16ads v5 VMs (16 vCPUs, 64 GiB RAM) | $685.75 | N/A |
| VM OS disks (P10) | $15.42 | N/A |
| Storage (100 TB) | $11,398.18 | $10,485.75 |
| Provisioned throughput (storage only) | 2,400 MBps | 10,340 MBps |
| Provisioned IOPS (storage only) | 160,000 | 102,400 |
| Additional storage for replication (~200%) | $22,796.37 | N/A |
| Backup and DR: backup solution (30 days, ZRS redundancy) | $16,343.04 | $4,608.00 |
| Monthly total | $51,238.76 | $15,094.75 |

As the table illustrates, even before factoring in administration costs, Azure Files has a compelling financial advantage. We also recently released the "provisioned v2" billing model for the Azure Files HDD tier, which provides fine-grained cost management and can scale up to 256 TiB! With GlusterFS running on-premises, customers must also account for the administrative overheads below, which go away with Azure Files:

| Factor | Current (GlusterFS) | Azure Files |
|---|---|---|
| Management and maintenance | Significant | None |
| Storage administration personnel | 15-20 hours/week | Minimal |
| Rebalancing operations | Required | Automatic |
| Failover effort | Required | Automatic |
| Capacity planning | Required | Automatic |
| Scaling complexity | High | None |
| Implementation of security controls | Required | Included |

The migration journey

We developed a phased approach tailored to the customer's risk tolerance, starting with lower-priority workloads as a pilot:

- Phase 1: Assessment (2-3 weeks). Inventory GlusterFS environments and analyze workloads, define requirements, select the appropriate Azure Files tier, and develop the migration strategy.
- Phase 2: Pilot Migration (1-2 weeks). Set up Azure Files, test connectivity, migrate test workloads, and refine the process.
- Phase 3: Production Migration (variable). Execute transfers using appropriate tools (AzCopy, Robocopy, rsync/fpsync), implement incremental sync, and validate data integrity; a sample AzCopy flow is sketched after this list.
- Phase 4: Optimization (1-2 weeks). Fine-tune performance, implement monitoring, and decommission legacy infrastructure.
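As a hedged illustration of Phase 3, the sketch below uses AzCopy (assuming a recent version) to seed an Azure file share from a mounted GlusterFS volume and then run incremental sync passes before cutover. Paths, account names, and the SAS token are placeholders; Robocopy or rsync/fpsync would fill the same role on other platforms.

```bash
# Initial bulk copy from the mounted GlusterFS volume to the Azure
# file share (SAS token elided; generate one with write permissions).
azcopy copy "/mnt/glusterfs/appdata" \
  "https://mystorageaccount.file.core.windows.net/myshare?<SAS>" \
  --recursive

# Incremental sync passes before cutover: only changed files move.
azcopy sync "/mnt/glusterfs/appdata" \
  "https://mystorageaccount.file.core.windows.net/myshare?<SAS>" \
  --recursive
```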
Results that matter

Working with Tata Consultancy Services (TCS) as our migration partner, the customer ran a proof of concept migrating a three-node RHEL 8 environment with a 1 TB SMB (GlusterFS) share to Azure Files premium. The source share was limited to ~1,500 IOPS and had 20+ subfolders, each reserved for application access, which made administrative tasks challenging. As part of migration planning, the application subfolder structure was remapped to individual Azure Files shares. In addition, each share was secured with on-premises Active Directory domain controller-based share authentication. The migration was done using Robocopy in mirror mode, with the SMB shares mounted on Windows clients.

The migration delivered significant benefits:

- Dramatically improved general-purpose performance from moving HDD-based shares to SSD (1,500 IOPS shared at the source vs. 3,000 IOPS / 200 MBps baseline performance per share)
- Met and exceeded the customer's RTO and RPO requirements (15 minutes)
- Noticeable performance gains reported by the customer for SQL Server workloads
- Flexibility to resize each share up to the Azure Files maximum, independent of noisy neighbors, unlike the previous configuration
- Significantly reduced TCO (33% of the cost of an equivalent VM-based deployment) with higher baseline performance

What this means for your GlusterFS environment

If you're facing the GlusterFS support deadline, this is an opportunity to modernize your file storage approach. Azure Files offers a chance to eliminate infrastructure headaches through simplified management, robust security, seamless scalability, and compelling economics. Looking to begin your own migration? Reach out to us at azurefiles@microsoft.com, contact your Microsoft representatives, or explore our Azure Files documentation to learn more about capabilities and migration paths.
Enhance Your Linux Workloads with Azure Files NFS v4.1: Secure, Scalable, and Flexible

Enhance your Linux workloads with Azure Files NFS v4.1, an enterprise-grade solution. With new support for in-transit encryption and RESTful access, it delivers robust security and flexible data access for mission-critical and data-intensive applications.
Announcing General Availability of Next generation Azure Data Box Devices

Today, we're excited to announce the General Availability of Azure Data Box 120 and Azure Data Box 525, our next-generation compact, NVMe-based Data Box devices. These devices are currently available for customers to order in the US, US Gov, Canada, EU, and UK Azure regions, with broader availability coming soon.

Since the preview announcement at Ignite '24, we have ingested petabytes of data across multiple orders, serving customers in various industry verticals. Customers have praised the reliability and efficiency of the new devices, with up to 10x improvement in data transfer rates, highlighting them as a valuable asset for large-scale data migration projects. These new offerings reflect insights gained from working with our customers over the years and understanding their evolving data transfer needs. They incorporate several improvements to accelerate offline data transfers to Azure, including:

- Fast copy: Built with NVMe drives for high-speed transfers and improved reliability, with support for faster network connections
- Ease of use: A larger capacity offering (525 TB) in a compact form factor for easy handling
- Resilient: Ruggedized devices built to withstand rough conditions during transport
- Secure: Enhanced physical, hardware, and software security features
- Broader availability: Presence planned in more Azure regions, meeting local compliance standards and regulations

What's new?

Improved Speed & Efficiency

- NVMe-based devices offer a 10x improvement in data transfer speeds to the device compared to previous-generation devices. With a dataset composed mostly of large (TB-sized) files, half a petabyte can on average be copied to the device in under two days.
- High-speed transfers to Azure, with data upload up to 5x faster for medium to large files, reducing the lead time before your data becomes accessible in the Azure cloud.
- Improved networking, with support for up to 100 GbE connections, compared to 10 GbE on the older generation of devices.
- Two options with usable capacities of 120 TB and 525 TB in a compact form factor meeting OSHA requirements.
- Devices ship next-day air in most regions.

Learn more about the performance improvements on Data Box 120 and Data Box 525.

Enhanced Security

The new devices come with several new physical, hardware, and software security enhancements, in addition to the built-in Azure security baseline for Data Box and the Data Box service security measures already in place:

- Secure boot functionality with hardware root of trust and Trusted Platform Module (TPM) 2.0.
- Custom tamper-proof screws and a built-in intrusion detection system to detect unauthorized device access.
- AES 256-bit BitLocker software encryption for data at rest is currently available. Hardware encryption via the RAID controller, enabled by default on these devices, is coming soon. Once available, customers can combine software and hardware encryption for double encryption to meet sensitive data transfer requirements.
- These ISTA 6A-compliant devices are built to withstand rough conditions during shipment while keeping both the device and your data safe and intact.

Learn more about the enhanced security features on Data Box 120 and Data Box 525.

Broader Azure region coverage

A recurring request from our customers has been wider regional availability of higher-capacity devices to accelerate large migrations.
We're happy to share that Azure Data Box 525 will be available across the US, US Gov, EU, UK, and Canada, with broader presence in the EMEA and APAC regions coming soon. This marks a significant improvement in the availability of a large-capacity device compared to the current Data Box Heavy, which is available only in the US and Europe.

What our customers have to say

For the last several months, we've been working directly with customers of all industries and sizes to apply the next-generation devices to their data migration needs. Customers love the larger capacity with form-factor familiarity, seamless setup, and faster copy.

"We utilized Azure Data Box for a bulk migration of Unix archive data. The data, originating from IBM Spectrum Protect, underwent pre-processing before being transferred to Azure blobs via the NFS v4 protocol. This offline migration solution enabled us to efficiently manage our large-scale data transfer needs, ensuring a seamless transition to the Azure cloud. Azure Data Box proved to be an indispensable tool in handling our specialized migration scenario, offering a reliable and efficient method for data transfer."
ST Microelectronics Backup & Storage team

"This new offering brings significant advantages, particularly by simplifying our internal processes. With deployments ranging from hundreds of terabytes to even petabytes, we previously relied on multiple regular Data Box devices—or occasionally Data Box Heavy devices—which required extensive operational effort. The new solution offers sizes better aligned with our needs, allowing us to achieve optimal results with fewer logistical steps. Additionally, the latest generation is faster and provides more connectivity options at data centre premises, enhancing both efficiency and flexibility for large-scale data transfers."
Lukasz Konarzewski, Senior Data Architect, Commvault

"We have had a positive experience overall with the new Data Box devices to move our data to Azure Blob storage. The devices offer easy plug and play installation, detailed documentation especially for the security features and good data copy performance. We would definitely consider using it again for future large data migration projects."
Bas Boeijink, Cloud Engineer, Eurofiber Cloud Infra

Upcoming changes to older SKU availability

Note that in regions where the next-gen devices are available, new orders for Data Box 80 TB and Data Box Heavy devices cannot be placed after May 31, 2025. We will, however, continue to process and support all existing orders.

Order your device today!

The devices are currently available for customers to order in the US, Canada, EU, UK, and US Gov Azure regions. We will continue to expand to more regions in the upcoming months. Azure Data Box provides customers with one of the most cost-effective solutions for data migration, offering competitive pricing with the lowest cost per TB among offline data transfer solutions. You can learn more about pricing across regions by visiting our pricing page.

You can use the Azure portal to select the SKU suitable for your migration needs and place the order. Learn more about the all-new Data Box devices here. We are committed to delivering innovative solutions that lower the barrier for bringing data to Azure. Your feedback is important to us.
Tell us what you think about the new Azure Data Box devices by writing to us at DataBoxPM@microsoft.com – we can't wait to hear from you.
Supercharge Azure Files performance for metadata-intensive workloads

Handling millions—or billions—of small files is business as usual for many cloud workloads. But behind the scenes, it's not just about reading and writing data—it's the constant file opens, closes, directory listings, and existence checks that really impact performance. These metadata operations may seem small, but they're critical, and they can become a major bottleneck if they're not fast. From AI/ML workloads on Azure Kubernetes Service, to web apps like Moodle, to CI/CD pipelines and virtual desktops, many applications are metadata intensive. And when every millisecond counts, latency in these operations can drag down the entire experience.

That's why we're excited to introduce a major boost in metadata performance. Applications experience up to 55% lower latency and 2-3x more consistent response times, ensuring greater reliability. Workloads with high metadata interaction, such as AI/ML pipelines, see the biggest gains, with 3x higher parallel metadata IOPS for improved efficiency and scalability. Removing metadata bottlenecks enables more data operations too: we've seen workloads increase data IOPS and throughput by up to 60%. In Azure Files SSD (premium), this enhancement accelerates metadata operations for both the SMB and REST protocols, benefiting new and existing file shares at no extra cost. Whether you're running critical business applications, scaling DevOps workflows, or supporting thousands of virtual desktop users, Azure Files is now faster, more scalable, and optimized for your most demanding workloads.

Metadata Caching accelerates real-world solutions

GIS on Azure Virtual Desktop

GIS (Geographic Information System) workloads are crucial for analyzing and managing spatial data, supporting industries like urban planning, agriculture, and disaster management. By visualizing spatial relationships, GIS helps organizations make better decisions about infrastructure and resource management. Azure Virtual Desktop (AVD) is a popular choice for hosting GIS workloads in the cloud. These workloads often experience performance bottlenecks due to frequent interactions with large volumes of smaller files on shared file storage. Metadata caching reduces latency and accelerates file interactions—such as opening and closing these files—enabling faster data access and improving GIS job execution in virtual desktop environments. Customers, like Suncor Energy, are already experiencing the impact of Metadata Caching in GIS workloads.

"Enabling Metadata Cache in Azure Files SSD (premium) significantly improved geospatial (GIS) workload performance, reducing execution time by 43.18%. This enhancement boosts throughput and IOPS, increasing the value of Azure Files." — Colin Stuckless, Suncor Energy Inc.

Moodle Web Services

Moodle is a comprehensive learning management system (LMS) that combines server hosting, databases (such as MySQL or PostgreSQL), file storage (using Azure Files SSD), and PHP-based web servers. It's designed to facilitate course management, allowing instructors to upload materials, assignments, and quizzes. Moodle issues frequent read/write requests for course materials, assignments, and user interactions, generating a high volume of metadata lookups, particularly when accessing shared content or navigating large course repositories. With Metadata Caching, Moodle operates faster and more efficiently. Response times have improved by 33%, reducing wait times for students and instructors when accessing materials or submitting work.
These enhancements also boost Moodle's scalability, enabling it to support 3x more students and user sessions without compromising performance. Even during peak usage, when many users are active simultaneously, Moodle remains stable and responsive. As a result, students can access resources and submit work more quickly, while instructors can manage larger courses and assignments more effectively.

GitHub Actions on Azure Kubernetes Service (AKS)

GitHub Actions is a powerful automation tool seamlessly integrated with GitHub, enabling developers to build, test, and deploy code directly from their repositories. By leveraging Azure Kubernetes Service (AKS), GitHub Actions automates tasks through workflows defined in YAML files, facilitating efficient container orchestration, scaling, and deployment of applications within a Kubernetes environment. These workflows can be triggered by various events, such as code pushes, pull requests, or scheduled times, streamlining the development process and enhancing efficiency.

These operations generate a high volume of metadata lookups, as each workflow execution involves checking for updated dependencies, accessing cached libraries, and continuously writing execution logs. Metadata caching significantly reduces the time required to retrieve and process metadata, resulting in quicker build artifact handling and smoother, more efficient deployment cycles. As a result, pipeline execution is 57% faster, allowing developers to build and deploy in nearly half the time!

How to get started

You can supercharge your Azure Files performance by enabling metadata caching for your applications today, at no extra cost. So don't wait! To get started, register your subscription with the Metadata Cache feature using the Azure portal or PowerShell; this enables metadata caching on all new and existing accounts in the subscription.
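For reference, a subscription feature registration along these lines can also be expressed with the Azure CLI. This is a sketch only: the feature name used below is a placeholder rather than the verified flag name, so check the Azure Files documentation for the exact value.

```bash
# Register the subscription for the metadata caching feature.
# NOTE: "MetadataCache" is a placeholder feature name, not verified;
# the documentation lists the actual flag to register.
az feature register --namespace Microsoft.Storage --name MetadataCache

# Wait for the state to become "Registered", then refresh the provider.
az feature show --namespace Microsoft.Storage --name MetadataCache \
  --query properties.state
az provider register --namespace Microsoft.Storage
```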
Metadata Cache is now generally available in multiple regions, with more being added as coverage expands. For regional availability, please visit the following link.

General Availability: Vaulted backup for Azure Files - Boost your data security and compliance

We are thrilled to announce the General Availability (GA) of vaulted backup support in Azure Backup for Azure Files standard tier, helping you seamlessly protect the data and applications hosted on Azure file shares. With this release, you can leverage vaulted backup to protect standard SMB file shares. Vaulted backup for Azure file shares provides enhanced data protection: the ability to configure snapshot and vaulted backups in a single policy, a secure backup location (a Recovery Services vault), and support for regional recovery. Vaulted backup provides advanced protection capabilities, like ransomware protection and the ability to restore even when the file share is deleted, that snapshot-only backup lacks. The solution also integrates seamlessly with Azure File Sync, allowing File Sync customers to protect data tiered to the cloud long-term in a cost-effective manner. In this blog post, let's explore how Azure Backup enables a robust data protection solution for businesses migrating to and hosting applications on Azure file shares.

Security: Protection against Ransomware

Ransomware and malware attacks continue to be a major threat to organizations worldwide, often leaving businesses at the mercy of cybercriminals demanding hefty ransom payments in exchange for access to their encrypted data. Vaulted backups provide a vital line of defense, ensuring that organizations can recover their data without giving in to ransom demands. Key features of vaulted backups that protect against ransomware:

- Isolation: Vaulted backup data is isolated from your production storage accounts and stored in a separate tenant managed by Microsoft. This isolation helps safeguard your data against unauthorized tampering and ensures that your backups remain intact.
- Advanced security: Features like vault lock, multi-user authorization, and soft delete add further layers of protection, ensuring that backups are immune to malicious deletion or tampering.
- Governance and security posture: Azure Backup integrates with the Business Continuity and Disaster Recovery (BCDR) security posture, allowing you to better manage and govern the security of your backups. This ensures that your backups meet the right level of protection and are recoverable when you need them most.

Regulatory and compliance

Azure file shares enable users in industries like legal, finance, and health to store crucial business data. Complying with regulations and compliance checks requires offsite backups with long-term retention, which snapshots alone cannot provide. With vaulted backup, users can move snapshots to a Recovery Services vault in the same Azure region as their primary storage, with options for cross-regional replication. This setup allows backup data to be retained for up to 99 years in low-cost, highly secure, immutable storage, meeting regulatory and compliance requirements during audits and legal holds. Furthermore, with the new cross-subscription backup capability, organizations can allocate backup data to dedicated subscriptions. This allows customers to consolidate all backups into a single subscription, enhancing cost management and ensuring independent access control. It enables organizations to retain control over their data protection strategy while ensuring that each department or project adheres to its specific regulatory and security requirements.
Enterprise Ready

Vaulted backup support now enables adherence to the widely accepted 3-2-1 backup rule for protecting Azure Files. Azure Backup is integrated with the Azure Business Continuity Center, which offers centralized management for visibility, job monitoring, alerts, and reporting.

How does 3-2-1 backup help? Human errors, insider threats, or stolen credentials can lead to critical data loss. File share snapshots serve as the first line of defense for restoring your data. Where snapshots are not available, vaulted backups, stored securely outside your primary storage account, provide an additional protected copy. The backup copy can also be replicated to another region using geo-redundant storage (GRS). A backup policy lets you manage the schedule and retention for both snapshots and vault copies. In the event of deletion, whether accidental or malicious, the restore process is first attempted from snapshots; if snapshots are unavailable, recovery proceeds from the vault. If the primary region is down, you can restore from the secondary region with the cross-region restore option.

Getting started

Here are three simple steps to configure vaulted backup for Azure file shares (a CLI sketch follows this list):

1. Create a Recovery Services vault: A vault is a management entity that stores backups and allows you to access and manage them.
2. Create a backup policy: The backup policy configures the frequency and retention of backups based on your business requirements.
3. Select the storage account and file shares to back up: You can back up all file shares or select specific ones from the chosen storage account, depending on the criticality of the data they contain.
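As a rough sketch of those three steps in the Azure CLI, where all names are placeholders and the policy is assumed to already define the snapshot/vault schedule you want:

```bash
# 1. Create a Recovery Services vault to hold the backups.
az backup vault create \
  --name myvault \
  --resource-group myresourcegroup \
  --location eastus2

# 2. A backup policy controls frequency and retention. Here we assume
#    a policy named "DailyVaulted" already exists in the vault
#    (policies can be created from JSON with `az backup policy create`).

# 3. Enable protection for a specific file share under that policy.
az backup protection enable-for-azurefileshare \
  --vault-name myvault \
  --resource-group myresourcegroup \
  --policy-name DailyVaulted \
  --storage-account mystorageaccount \
  --azure-file-share myshare
```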
Learn more about vaulted backup for file shares here.

Pricing and availability

Vaulted backup for Azure Files standard is generally available in these regions. Vaulted backup for premium file shares continues to be in public preview. You will incur a protected-instance fee and charges for backup storage for both standard and premium shares from April 1, 2025. To learn about pricing, refer to the Azure Files share backup pricing page.

Contact us

If you have questions or feedback, please reach out to us at AskAzureBackupTeam@microsoft.com.

Enable ADDS authentication fails

I am trying to enable ADDS authentication using the AzFilesHybrid module. When I execute the command below, it keeps showing progress (with a warning), but it stalls without ever completing or reporting an error.

```powershell
# Join the storage account to on-premises AD DS (AzFilesHybrid module)
Join-AzStorageAccount `
    -ResourceGroupName $ResourceGroupName `
    -StorageAccountName $StorageAccountName `
    -SamAccountName $SamAccountName `
    -DomainAccountType $DomainAccountType `
    -OrganizationalUnitDistinguishedName $OuDistinguishedName
```
Hybrid File Tiering Addresses Top CIO Priorities of Risk Control and Cost Optimization

This article describes how you can leverage Komprise Intelligent Tiering for Azure with any on-premises file storage platform and Azure Blob Storage to reduce your cost by 70% and shrink your ransomware attack surface. Note: This article has been co-authored by Komprise and Microsoft.

Unstructured data plays a big role in today's IT budgets and risk factors

Unstructured data, which is any data that does not fit neatly into a database or tabular format, has been growing exponentially and is now projected by analysts to make up over 80% of business information. Unstructured data is commonly referred to as file data, the terminology used in the rest of this article. File data has caught some IT leaders by surprise because it now consumes a significant portion of IT budgets with no sign of slowing down. File data is expensive to manage and retain because it is typically stored and protected by replication to an identical storage platform, which can be very expensive at scale. We will now review how you can easily identify hot and cold data and transparently tier cold files to Azure to cut costs and shrink ransomware exposure with Komprise.

Why file data is factoring into CIO priorities

CIOs are prioritizing cost optimization, risk management, and revenue improvement as key priorities for their data; 56% chose cost optimization as their top priority according to the 2024 Komprise State of Unstructured Data Management survey. This is because file data is often retained for decades, its growth rate is in double digits, and it can easily reach petabytes. Keeping a primary copy, a backup copy, and a DR copy means three or more copies of a large volume of file data, which becomes prohibitively expensive. On the other hand, file data has largely been untapped in terms of value, but businesses are now realizing its importance for training and fine-tuning AI models. Smart solutions are required to balance these competing requirements.

Why file data is vulnerable to ransomware attacks

File data is arguably the most difficult data to protect against ransomware attacks because it is open to many different users, groups, and applications. This increases risk because a single user's or group's mistake can lead to a ransomware infection. If an infected file is shared and accessed again, the infection can quickly spread across the network undetected. As ransomware lurks, the risk increases. For these reasons, you cannot ignore file data when creating a ransomware defense strategy.

How to leverage Azure to cut the cost and inherent risk of file data retention

You can cut costs and shrink the ransomware attack surface of file data using Azure, even when you still require on-premises access to your files. The key is reducing the amount of file data that is actively accessed and thus exposed to ransomware attacks. Since 80% of file data is typically cold and has not been accessed in months (see Demand for cold data storage heats up | TechTarget), transparently offloading these files to immutable storage through hybrid tiering cuts both costs and risks. Hybrid tiering offloads entire files from the data storage, snapshot, backup, and DR footprints, while your users continue to see and access the tiered files without any change to your application processes or user behavior.
Unlike storage tiering, which is typically offered by the storage vendor and places filesystem-controlled blocks of files in Azure, hybrid tiering operates at the file level and transparently offloads the entire file to Azure, leaving behind a link that looks and behaves like the file itself. Hybrid tiering offloads cold files to Azure to cut costs and shrink the ransomware attack surface:

- Cut 70%+ costs: By offloading cold files, not blocks, hybrid tiering can shrink the amount of data you are storing and backing up by 80%, which cuts costs proportionately. As the example below shows, you can cut 70% of file storage and backup costs by using hybrid tiering.

| Assumption | Value |
|---|---|
| Amount of data on NAS (TB) | 1,024 |
| % cold data | 80% |
| Annual data growth rate | 30% |
| On-prem NAS cost/GB/mo | $0.07 |
| Backup cost/GB/mo | $0.04 |
| Azure Blob Cool cost/GB/mo | $0.01 |
| Komprise Intelligent Tiering for Azure/GB/mo | $0.008 |

| Item | On-prem NAS | On-prem NAS + Azure Intelligent Tiering |
|---|---|---|
| Data in on-premises NAS (TB) | 1,024 | 205 |
| Snapshots | 30% | 30% |
| Cost of on-prem NAS, primary site | $1,064,960 | $212,992 |
| Cost of on-prem NAS, DR site | $1,064,960 | $212,992 |
| Backup cost | $460,800 | $42,598 |
| Data on Azure Blob Cool (TB) | 0 | 819 |
| Cost of Azure Blob Cool | $0 | $201,327 |
| Cost of Komprise | N/A | $100,000 |
| Total cost for 1 PB per year | $2,590,720 | $769,909 |
| Savings/PB/yr | | $1,820,811 (70%) |

- Shrink the ransomware attack surface by 80%: Offloading cold files to immutable Azure Blob storage removes them from the active attack surface, eliminating 80% of the storage, DR, and backup costs while also providing a potential recovery path if the cold files get infected. Because Komprise tiers to immutable Azure Blob with versioning, even if someone tried to infect a cold file, it would be saved as a new version, enabling recovery from an older version (an Azure CLI sketch of these Azure-side settings follows the benefits list below). Learn more about Azure Immutable Blob storage here.

In addition to cost savings and improved ransomware defense, the benefits of hybrid cloud tiering using Komprise and Azure are:

- Leverage Existing Storage Investment: You can continue to use your existing NAS storage and Komprise to tier cold files to Azure. Users and applications continue to see and access the files as if they were still on-premises.
- Leverage Azure Data Services: Komprise maintains file-object duality with its patented Transparent Move Technology (TMT), which means tiered files can be viewed and accessed in Azure as objects, allowing you to use Azure Data Services natively. This enables you to leverage the full power of Azure with your enterprise file data.
- Works Across Heterogeneous Vendor Storage: Komprise works across all your file and object storage to analyze and transparently tier data to Azure file and object tiers.
- Ongoing Lifecycle Management in Azure: Komprise continues to manage the data lifecycle in Azure, so as data gets colder, it can move from the Azure Blob Cool to Cold to Archive tier based on policies you control.
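The Azure-side settings referenced above (blob versioning plus a container immutability policy) can be sketched with the Azure CLI as follows. The account, container, and retention values are placeholders, and Komprise's own tiering configuration is separate from these commands.

```bash
# Enable blob versioning so an overwrite of a tiered file becomes a
# new version instead of destroying the prior copy.
az storage account blob-service-properties update \
  --account-name mystorageaccount \
  --resource-group myresourcegroup \
  --enable-versioning true

# Add a time-based immutability policy to the container that holds
# tiered files (retention period in days is illustrative).
az storage container immutability-policy create \
  --account-name mystorageaccount \
  --resource-group myresourcegroup \
  --container-name komprise-tier \
  --period 30
```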
Azure and Komprise customers are already using hybrid tiering to improve their ransomware posture while reducing costs. A great example is Katten.

Global law firm saves $900,000 per year and achieves resilient ransomware defense with Komprise and Azure

Katten Muchin Rosenman LLP (Katten) is a full-service law firm delivering legal services across more than a dozen practice areas and sectors, including Aviation, Construction, Energy, Education, Entertainment, Healthcare, and Real Estate. Like many other large law firms, Katten has seen an average 20% annual growth in file data, requiring additional on-premises storage capacity every 12-18 months. Focused on managing storage costs in an environment where data grows exponentially but cannot be deleted, Katten needed a solution that could provide deep data insights and the ability to move file data as it ages to immutable object storage in the cloud, for greater cost savings and ransomware protection. Katten implemented hybrid tiering using Komprise Intelligent Tiering to Azure and leveraged immutable Blob storage to not only save $900,000 annually but also improve its ransomware defense posture. Read how Katten Law does hybrid tiering to Azure using Komprise.

Summary: Hybrid tiering helps CIOs optimize file costs and cut ransomware risks

Cost optimization and risk management are top CIO priorities, and file data is a major contributor to both costs and ransomware risks. Organizations are leveraging Komprise to tier cold files to Azure while continuing to use their on-premises file storage NAS. This provides a low-risk approach that cuts costs by 70% and shrinks the ransomware attack surface by 80%, with no disruption to users and apps.

Next steps

To learn more and get a customized assessment of your savings, visit the Azure Marketplace listing or contact azure@komprise.com.
How to Save 70% on File Data Costs

In the final entry in our series on lowering file storage costs, DarrenKomprise shares how Komprise can help lower on-premises and Azure-based file storage costs. Komprise and Azure offer you a means to optimize unstructured data costs now and in the future!