Protect your Storage accounts using network security perimeter - now generally available
We are excited to announce the general availability of network security perimeter support for Azure Storage accounts. A network security perimeter allows organizations to define a logical network isolation boundary for Platform-as-a-Service (PaaS) resources like Azure Storage accounts that are deployed outside your organization's virtual networks. This restricts public network access to PaaS resources by default and provides secure communication between resources within the perimeter. Explicit inbound and outbound rules allow access to authorized resources.

Securing data within storage accounts requires a multi-layered approach, encompassing network access controls, authentication, and authorization mechanisms. Network access controls for storage accounts fall into two broad categories: access from PaaS resources and access from all other resources. For access from PaaS resources, organizations can leverage either broad controls through Azure "trusted services" or granular access using resource instance rules. For other resources, access control may involve IP-based firewall rules, virtual network rules, or private endpoints. However, the complexity of managing all of these can pose significant challenges when scaled across large enterprises. Misconfigured firewalls, public network exposure on storage accounts, or excessively permissive policies heighten the risk of data exfiltration. It is often challenging to audit these risks at the application or storage account level, making it difficult to identify open exfiltration paths across all PaaS resources in an environment.

Network security perimeters offer an effective solution to these concerns. First, by grouping assets such as Azure Key Vault and Azure Monitor in the same perimeter as your storage accounts, communications between these resources are secured while public access is disabled by default, thereby preventing data exfiltration to unauthorized destinations. Second, they centralize the management of network access controls across numerous PaaS resources at scale by providing a single pane of glass. This approach promotes consistency in settings and reduces administrative overhead, thereby minimizing the potential for configuration errors. Additionally, they provide comprehensive control over both inbound and outbound access across all the associated PaaS resources.

How do network security perimeters protect Azure Storage Accounts?

Network security perimeters support granular resource access using profiles. All inbound and outbound rules are defined on a profile, and the profile can be applied to one or more resources within the perimeter. Network security perimeters provide two primary operating modes: "Transition" mode (formerly referred to as "Learning" mode) and "Enforced" mode. "Transition" mode acts as an initial phase when onboarding a PaaS resource into any network security perimeter. When combined with logging, this mode enables you to analyze current access patterns without disrupting existing connectivity. In "Enforced" mode, all defined perimeter rules replace all resource-specific rules, except for private endpoints. After analyzing logs in "Transition" mode, you can tweak your perimeter rules as necessary and then switch to "Enforced" mode.

Benefits of network security perimeters

- Secure resource-to-resource communication: Resources in the same perimeter communicate securely, keeping data internal and blocking unauthorized transfers.
  For example, an application's storage account and its associated database, when part of the same perimeter, can communicate securely, while communication to the storage account from a database outside the perimeter is blocked.
- Centralized network isolation: Administrators can centrally manage firewall and resource access policies across all their PaaS resources in a single pane of glass, streamlining operations and minimizing errors.
- Prevent data exfiltration: Centralized access control and logging of inbound and outbound network access attempts across all resources within a perimeter enables comprehensive visibility for compliance and auditing purposes and helps address data exfiltration.
- Seamless integration with existing Azure features: Network security perimeter works in conjunction with private endpoints by allowing private endpoint traffic to storage accounts within a perimeter.

There is no additional cost to using network security perimeter.

Real-world customer scenarios

Let us explore how network security perimeters specifically strengthen the security and management of Azure Storage accounts through common applications.

Create a Secure Boundary for Storage Accounts

A leading financial organization sought to enhance the protection of sensitive client data stored in Azure Storage accounts. The company used Azure Monitor with a Log Analytics workspace to collect and centralize logs from all storage accounts, enabling constant monitoring and alerts for suspicious activity. This supported compliance and rapid incident response. They also used Azure Key Vault to access customer-managed encryption keys. They configured network access controls on each communication path from these resources to the storage account. They disabled public network access and employed a combination of virtual network (VNet) rules, firewall rules, private endpoints, and service endpoints. However, this created significant overhead that had to be continuously managed as additional resources required access to the storage account.

To address this, the company implemented network security perimeters and blocked public and untrusted access to their storage account by default. By placing the specific Azure Key Vault and Log Analytics workspace within the same network security perimeter as the storage account, the organization achieved a secure boundary around their data in an efficient manner. Additionally, to let an authorized application access this data, they defined an inbound access rule in the profile governing their storage account, thereby restricting the application's access to only the required PaaS resources.

Prevent Data Exfiltration from Storage Accounts

One of the most dangerous data exfiltration attacks occurs when an attacker obtains the credentials to a user account with access to an Azure Storage account, perhaps through phishing or credential stuffing. In a traditional setup, this attacker could potentially connect from anywhere on the internet and initiate large-scale data exfiltration to external servers, putting sensitive business or customer information at risk. With a network security perimeter in place, however, only resources within the perimeter or authorized external resources can access the storage account, drastically limiting the attacker's options. Even with valid credentials, network security perimeter rules block the attacker's attempts to connect from an unapproved network or from unapproved machines within a compromised network.
Furthermore, the perimeter enforces strict outbound traffic controls: storage accounts inside the perimeter cannot send data to any external endpoint unless a specific outbound rule permits it. Restricting inbound access and tightly controlling outbound data flows enhances the security of sensitive data in Azure Storage accounts. The presence of robust network access control on top of storage account credentials creates multiple hurdles for attackers to overcome and significantly reduces the risk of both unauthorized access and data exfiltration.

Unified Access Management across the entire Storage estate

A large retailer found it difficult to manage multiple Azure Storage accounts. Typically, updating firewall rules or access permissions involved making repeated changes for each account or using complex scripts to automate the process. This approach not only increased the workload but also raised the risk of inconsistent settings or misconfigurations, which could potentially expose data. With network security perimeter, the retailer grouped storage accounts under a perimeter, sometimes placing subsets of accounts under different perimeters. For accounts requiring special permissions within a single perimeter, the organization created separate profiles to customize inbound and outbound rules specific to them. Administrators could now define and update access policies at the profile level, with rules immediately enforced across every storage account and other resources associated with the profile. Updates applied consistently to all resources, both for blocking public internet access and for allowing specific internal subscriptions, thus reducing gaps and simplifying operations. The network security perimeter also provided a centralized log of all network access attempts on storage accounts, eliminating the need for security teams to pull logs separately from each account. It showed which calls accessed which accounts, when, and from where, starting immediately after enabling logs in "Transition" mode and continuing into "Enforced" mode. This streamlined approach enhanced the organization's compliance reporting, accelerated incident response, and improved understanding of information flow across the cloud storage environment.

Getting started

Explore this Quickstart guide to implement a network security perimeter and configure the right profiles for your storage accounts; a minimal CLI sketch also follows below. For guidance on usage and limitations related to Storage accounts, refer to the documentation. There is no additional cost for using network security perimeter. As you begin, consider which storage accounts to group under a perimeter, and how to segment profiles for special access needs within the perimeter.
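As a rough illustration of the setup flow, the sketch below uses the Azure CLI to create a perimeter, a profile, and an association for a storage account in Learning ("Transition") mode. This is a minimal sketch based on the preview `nsp` CLI extension; resource names and IDs are placeholders, and exact parameter shapes may differ across CLI versions, so treat the Quickstart guide as authoritative.

```bash
# Assumes the "nsp" Azure CLI extension; names and IDs below are placeholders.
az extension add --name nsp

# Create the perimeter and a profile that will hold inbound/outbound access rules.
az network perimeter create --name myPerimeter --resource-group myRG --location eastus
az network perimeter profile create --name storageProfile --resource-group myRG \
  --perimeter-name myPerimeter

# Associate a storage account with the profile, starting in Learning (Transition) mode;
# after reviewing access logs, re-run with --access-mode Enforced.
az network perimeter association create --name storageAssoc --resource-group myRG \
  --perimeter-name myPerimeter --access-mode Learning \
  --private-link-resource "{id:/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/mystorageacct}" \
  --profile "{id:/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Network/networkSecurityPerimeters/myPerimeter/profiles/storageProfile}"
```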
TLS 1.0 and 1.1 support will be removed for new & existing Azure storage accounts starting Feb 2026

To meet evolving technology and regulatory needs and align with security best practices, we are removing support for Transport Layer Security (TLS) 1.0 and 1.1 for both existing and new storage accounts in all clouds. TLS 1.2 will be the minimum supported TLS version for Azure Storage starting February 2026.

Azure Storage currently supports TLS 1.0 and 1.1 (for backward compatibility) and TLS 1.2 on public HTTPS endpoints. TLS 1.2 is more secure and faster than older TLS versions, which do not support modern cryptographic algorithms and cipher suites. Many Azure Storage customers are already using TLS 1.2, and we are sharing this guidance to expedite the transition for customers still on TLS 1.0 and 1.1. Customers must secure their infrastructure by using TLS 1.2+ with Azure Storage by Jan 31, 2026. The older TLS versions (1.0 and 1.1) are being deprecated and removed to meet evolving standards (FedRAMP, NIST) and to provide improved security for our customers.

This change will impact both existing and new storage accounts using TLS 1.0 and 1.1. To avoid disruptions to your applications connecting to Azure Storage, you must migrate to TLS 1.2 and remove dependencies on TLS versions 1.0 and 1.1 by Jan 31, 2026. Learn more about how to migrate to TLS 1.2.

As a best practice, we also recommend using Azure Policy to enforce a minimum TLS version (a CLI sketch appears at the end of this post). Learn more here about how to enforce a minimum TLS version for all incoming requests. If you already use Azure Policy to enforce the TLS version, the minimum supported version after this change rolls out will be TLS 1.2.

Help and Support

If you have questions, get answers from community experts in Microsoft Q&A. If you have a support plan and you need technical help, create a support request:

- For Issue type, select Technical.
- For Subscription, select your subscription.
- For Service, select My services.
- For Service type, select Blob Storage.
- For Resource, select the Azure resource you are creating a support request for.
- For Summary, type a description of your issue.
- For Problem type, select Connectivity.
- For Problem subtype, select Issues using TLS.
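Returning to the enforcement guidance above, here is a hedged sketch that pins an existing account to TLS 1.2 and defines a custom deny policy for accounts below that version. Account and policy names are placeholders; a built-in policy definition for minimum TLS version is also available if you prefer not to author your own.

```bash
# Set the minimum TLS version on an existing storage account (placeholder names).
az storage account update \
  --name mystorageacct --resource-group myRG \
  --min-tls-version TLS1_2

# Sketch of a custom Azure Policy that denies storage accounts below TLS 1.2.
az policy definition create --name deny-old-tls --mode Indexed \
  --display-name "Storage accounts must use TLS 1.2 or higher" \
  --rules '{
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.Storage/storageAccounts" },
        { "field": "Microsoft.Storage/storageAccounts/minimumTlsVersion", "notEquals": "TLS1_2" }
      ]
    },
    "then": { "effect": "deny" }
  }'
```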
Enhance Your Data Protection Strategy with Azure Elastic SAN's Newest Backup Options

As organizations adopt Azure Elastic SAN for scalable high-performance storage, we are pleased to announce the public preview of backup support via Azure Backup and Commvault. These fully managed solutions simplify data protection for Elastic SAN volumes, automating backup scheduling, restore point management, and data recovery. They help safeguard your volumes against data loss scenarios such as accidental deletions, ransomware, and application errors. Both the integration of Azure Backup for Elastic SAN and Commvault's integration with Elastic SAN are in public preview and available for everyone to use. Both of these integrations are powered by Elastic SAN's crash-consistent, storage-native snapshots. Learn more about Elastic SAN here!

Azure Backup Release Highlights

The public preview of Azure Backup for Elastic SAN introduces several important capabilities designed to enhance your data protection strategy:

Operational Tier Backup with Independent Lifecycle

Each backup operation creates a Managed Disk Incremental Snapshot of your Elastic SAN volume. These snapshots are stored in locally redundant storage (LRS) in supported regions and exist independently of the original volume's lifecycle. This means your backups remain available for recovery even if the original Elastic SAN volume is deleted, ensuring reliable data protection. Elastic SAN volumes can be restored from these managed disk snapshots backed up by Azure Backup. Vaulting, immutability, and other capabilities are on the roadmap and will be incorporated into subsequent releases.

Daily Restore Points

Azure Backup supports up to 450 restore points with a daily backup schedule. This high number of restore points provides robust short-term retention, allowing you to recover data quickly to any previous state within the retention period. It significantly reduces the risk of data loss due to accidental deletions or other incidents. Retaining backups beyond 450 days is not available in this preview.

Simplified Management

Customers pick the number of daily backups they want to retain, and Azure Backup does the rest, including creating new backups and deleting the oldest backups to match the retention setting. Configuration and monitoring are integrated with the Azure Business Continuity Center, giving you a unified and streamlined management experience. This automation allows you to focus on your core business activities while Azure handles the complexity of data protection.

Important Cost Information

During the public preview, the following cost structure applies:

- The Azure Backup Protected Instance Fee for Elastic SAN volumes is not charged.
- Charges for Managed Disk Incremental Snapshots in the operational tier apply at standard Azure rates. The first snapshot that is exported will be a full snapshot.

In summary, Azure Backup for Elastic SAN delivers a powerful and comprehensive backup solution. With features such as independent lifecycle backups, high-frequency restore points, and simplified management, you can confidently protect your Elastic SAN volumes from a range of data loss scenarios. Try this new capability to experience enhanced data protection for your workloads; a minimal CLI sketch follows below.
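As a rough sketch of how the Azure Backup flow could be automated, the commands below create a Backup vault using the `dataprotection` CLI extension. This is a hedged illustration: vault creation is standard, but the policy and backup-instance configuration for Elastic SAN (including its datasource type string) is preview surface we have not verified, so follow the preview documentation for those steps.

```bash
# Assumes the "dataprotection" CLI extension; names and settings are placeholders.
az extension add --name dataprotection

# Create a Backup vault to hold Elastic SAN restore points.
az dataprotection backup-vault create \
  --resource-group myRG --vault-name myBackupVault --location eastus \
  --type SystemAssigned \
  --storage-setting "[{type:'LocallyRedundant',datastore-type:'VaultStore'}]"

# Next (per the preview docs): create a daily backup policy and a backup instance
# pointing at the Elastic SAN volume -- the exact Elastic SAN datasource type is
# preview-specific and intentionally not shown here.
```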
Commvault Release Highlights

Protecting Azure Elastic SAN Volumes with Commvault

Our partners at Commvault continue to deliver meaningful innovation through deep integration with Microsoft Azure. In the writeup below, Commvault showcases how Azure Elastic SAN volumes are now protected within Commvault's platform, bringing unified, enterprise-grade protection to performance-intensive Elastic SAN workloads. If you're exploring scalable, resilient cloud storage with built-in data protection, this is a valuable read.

Here are highlights from Commvault on their added support for Elastic SAN protection:

Thanks to a close partnership between Commvault and Microsoft, organizations can now take advantage of robust backup and recovery for Azure Elastic SAN storage. This deep integration means you benefit from the trusted protection and unified management of Commvault's platform, now extended to Azure's high-performance, scalable Elastic SAN solution. As a result, you can easily safeguard your mission-critical workloads in the cloud while enjoying the flexibility, centralized management, and resilience that Elastic SAN provides.

Designed for Scalable, Resilient Cloud Environments

With Commvault's integration, organizations can protect Azure Elastic SAN volumes attached to Azure virtual machines (VMs) using the same trusted platform they rely on for comprehensive data protection of many other Azure resources. Key capabilities include:

- Snapshot-based protection: IntelliSnap support enables rapid, low-impact backups that minimize performance impact on production systems. The number of snapshots that can be retained is configurable based on your storage plan. Commvault offers a day-based retention plan that defaults to 30 days but can be extended indefinitely. Alternatively, you can retain Elastic SAN snapshots based on a snapshot count.
- Flexible recovery options: Full-VM and attach-disk restores are supported, including cross-region backups and restores. In cross-region restores, Elastic SAN volumes are automatically restored as managed disks.
- Broad platform compatibility: Both Windows- and Linux-based VMs are supported. Elastic SAN volume discovery requires PowerShell on Windows or Python 3 on Linux.

Deployment and Configuration Considerations

For optimal performance and streamlined protection workflows, enterprises should consider the following implementation guidance:

- App-consistent restore points for Elastic SAN volumes are not currently supported.
- Attach-disk restores to a VM will result in managed disks regardless of source (primary or secondary copy).
- Elastic SAN (ESAN) volumes need to be connected to the VM via iSCSI.

Accelerate Cloud Confidence with Commvault

Azure Elastic SAN represents a significant advancement in cloud storage architecture. With Commvault's integrated protection, enterprises can deploy this powerful capability to help make sure their data remains secure, recoverable, and compliant. To learn more about protecting Azure workloads – including Elastic SAN – contact your Commvault account team or visit our Azure protection documentation. The requirements for using this integration can be found here.

Conclusion

The Azure Elastic SAN team is committed to supporting your backup needs and giving you peace of mind as you run workloads on Azure. With both Azure Backup and Commvault integrations, you have flexible options designed for different scenarios: Azure Backup is best suited for Azure-native, single-volume snapshots, offering a 450-day retention period, seamless integration, and simplicity within the Azure ecosystem.
Commvault, on the other hand, excels at providing backups for multiple volumes attached to the same VM, as well as advanced enterprise features like granular recovery, an indefinite retention period, and robust retention management. If you have any questions about which solution is right for you, please contact us at AzElasticSAN-Ex@microsoft.com; we're happy to help.
How Microsoft Azure and Qumulo Deliver a Truly Cloud-Native File System for the Enterprise

Disclaimer: The following is a post authored by our partner Qumulo. Qumulo has been a valued partner in the Azure Storage ecosystem for many years and we are happy to share details on their unique approach to solving challenges of scalable filesystems!

Whether you're training massive AI models, running HPC simulations in life sciences, or managing unstructured media archives at scale, performance is everything. Qumulo and Microsoft Azure deliver the cloud-native file system built to handle the most data-intensive workloads, with the speed, scalability, and simplicity today's innovators demand. But supporting modern workloads at scale is only part of the equation. Qumulo and Microsoft have resolved one of the most entrenched and difficult challenges in modernizing the enterprise data estate: empowering file data with high performance across a global workforce without impacting the economics of unstructured data storage.

According to Gartner, global end-user spending on public cloud services is set to surpass $1 trillion by 2027. That staggering figure reflects more than just a shift in IT budgets—it signals a high-stakes race for relevance. CIOs, CTOs, and other tech-savvy execs are under relentless pressure to deliver the capabilities that keep businesses profitable and competitive. Whether they're ready or not, the mandate is clear: modernize fast enough to keep up with disruptors, many of whom are using AI and lean teams to move at lightning speed. To put it simply, grow margins without getting outpaced by a two-person startup using AI in a garage. That's the challenge leaders face every day.

Established enterprises must contend with the duality of maintaining successful existing operations and the potential disruption to those operations by a more agile business model that offers insight into the next wave of customer desires and needs. Nevertheless, established enterprises have a winning move - unleash the latent productivity increases and decision-making power hidden within years, if not decades, worth of data. Thoughtful CIOs, CTOs, and CXOs have elected to move slowly in these areas due to the tyranny of quarterly results and the risk of short-term costs reflecting poorly on the present at the expense of the future. In this sense, adopting innovative technologies forced organizations to choose between self-disruption with long-term benefits or non-disruptive technologies with long-term disruption risk. When it comes to network-attached storage, CXOs were forced to accept non-disruptive technologies because the risk was too high. This trade-off is no longer required.

Microsoft and Qumulo have addressed this challenge in the realm of unstructured file data technologies by delivering a cloud-native architecture that combines proven Azure primitives with Qumulo's suite of file storage solutions. Now, those patient CXOs, waiting to adopt hardened technologies, can shift their file data paradigm into Azure while improving business value, data portability, and reducing the financial burden on their business units.
Today, organizations ranging from 50,000+ employees with global offices to teams of a few dozen with unstructured data-centric operations have discovered the incredible performance increases, data availability, accessibility, and economic savings realized when file data moves into Azure using one of two Qumulo solutions:

- Option 1 — Azure Native Qumulo (ANQ) is a fully managed file service that delivers truly elastic capacity, throughput, and IOPS, along with all the enterprise features of your on-premises NAS and a TCO to match.
- Option 2 — Cloud Native Qumulo (CNQ) on Microsoft Azure is a self-hosted file data service that offers the performance and scale your most demanding workloads require, at a comparable total cost of ownership to on-premises storage.

Both CNQ on Microsoft Azure and ANQ offer the flexibility and capacity of object storage while remaining fully compatible with file-based workflows. As data platforms purpose-built for the cloud, CNQ and ANQ provide three key characteristics:

- Elasticity — Performance and capacity can scale independently, both up and down, dynamically.
- Boundless Scale — Virtually no limitations on file system size or file count, with full multi-protocol support.
- Utility-Based Pricing — Like Microsoft Azure, Qumulo operates on a pay-as-you-go model, charging only for resources used without requiring pre-provisioned capacity or performance.

The collaboration between Qumulo's cloud-native file solutions and the Microsoft Azure ecosystem enables seamless migration of a wide range of workflows, from large-scale archives to high-performance computing (HPC) applications, from on-premises environments to the cloud. For example, a healthcare organization running a fully cloud-hosted Picture Archiving and Communication System (PACS) alongside a Vendor Neutral Archive (VNA) can leverage Cloud Native Qumulo (CNQ) to manage medical imaging data in Azure. CNQ offers a HIPAA-compliant, highly durable, and cost-efficient platform for storing both active and infrequently accessed diagnostic images, enabling secure access while optimizing storage costs.

With Azure's robust cloud infrastructure, organizations can design a cloud file solution that scales to meet virtually any size or performance requirement, while unlocking new possibilities in cloud-based AI and HPC workloads. Further, using the Qumulo Cloud Data Fabric, the enterprise can connect geographically separated data sources within one unified, strictly consistent (POSIX-compliant), secure, and high-performance file system. As organizational needs evolve — whether new workloads are added or existing workloads expand — Cloud Native Qumulo or Azure Native Qumulo can easily scale to meet performance demands while maintaining predictable economics that fit existing or shrinking budgets.

About Azure Native Qumulo and Cloud Native Qumulo on Azure

Azure Native Qumulo (ANQ) and Cloud Native Qumulo (CNQ) enable organizations to leverage a fully customizable, multi-protocol solution that dynamically scales to meet workload performance requirements. Engineered specifically for the cloud, ANQ is designed for simplicity of operation and automatic scalability as a fully managed service.
CNQ offers the same great technology, directly leveraging cloud-native resources like Azure Virtual Machines (VMs), Azure Networking, and Azure Blob Storage to provide a scalable platform that adapts to the evolving needs of today's workloads. The difference is that CNQ deploys entirely in the enterprise tenant, allows direct control over the underlying infrastructure, and requires a somewhat higher level of internal expertise to operate.

Azure Native Qumulo and Cloud Native Qumulo on Azure also deliver a fully dynamic file storage platform that is natively integrated with the Microsoft Azure backend. Here's what sets ANQ and CNQ apart:

- Elastic Scalability — Each ANQ and CNQ instance on Azure Blob Storage can automatically scale to exabyte-level storage within a single namespace by simply adding data. On Microsoft Azure, performance adjustments are straightforward: just add or remove compute instances to instantly boost throughput or IOPS, all without disruption and within minutes. Plus, you pay only for the capacity and compute resources you use.
- Deployed in Minutes — ANQ deploys from the Azure Portal, CLI, or PowerShell, just like a native service. CNQ runs in your own Azure virtual network and can be deployed via Terraform. You can select the compute type that best matches your workload's performance requirements and build a complete file data platform on Azure in under six minutes for a three-node cluster.
- Automatic TCO Management — Facilitated through services like Komprise Intelligent Tiering for Azure and Azure Blob Storage access tiers, which optimize storage costs and manage the data lifecycle. By analyzing data access patterns, these systems move files or objects to appropriate tiers, reducing costs for infrequently accessed data. Additionally, all data written to CNQ is compressed to ensure maximum cost efficiency.

ANQ automatically adapts to your workload requirements, and CNQ's fully customizable architecture can be configured to meet the specific throughput and IOPS requirements of virtually any file or object-based workload. You can purchase either ANQ or CNQ through a pay-as-you-go model, eliminating the need to pre-provision cloud file services: simply pay for what you use. ANQ and CNQ deliver comparable performance and services to on-premises file storage at a similar TCO.

Qumulo's cloud-native architecture redefines cloud storage by decoupling capacity from performance, allowing both to be adjusted independently and on demand. This provides the flexibility to modify components such as compute instance type, compute instance count, and cache disk capacity — enabling rapid, non-disruptive performance adjustments. This architecture, which includes the innovative Predictive Cache, delivers exceptional elasticity and virtually unlimited capacity. It ensures that businesses can efficiently manage and scale their data storage as their needs evolve, without compromising performance or reliability. ANQ and CNQ retain all the core Qumulo functionalities — including real-time analytics, robust data protection, security, and global collaboration.

Example architecture

In the example architecture, we see a solution that uses Komprise to migrate file data from third-party NAS systems to ANQ. Komprise provides platform-agnostic file migration services at massive scale in heterogeneous NAS environments.
This solution facilitates the seamless migration of file data between mixed storage platforms, providing high-performance data movement, ensuring data integrity, and empowering you to successfully complete data migration projects from your legacy NAS to an ANQ instance.

Figure: Azure Native Qumulo's exabyte-scale file data platform and Komprise

Beyond inherent scalability and dynamic elasticity, ANQ and CNQ support enterprise-class data management features such as snapshots, replication, and quotas. ANQ and CNQ also offer multi-protocol support — NFS, SMB, FTP, and FTP-S — for file sharing and storage access (a generic client mount sketch appears at the end of this section). Additionally, Azure supports a wide range of protocols for various services: for authentication and authorization it commonly uses OAuth 2.0, OpenID Connect, and SAML, and for IoT device communication it supports MQTT, AMQP, and HTTPS. By enabling shared access to the same data via all protocols, ANQ and CNQ support collaborative and mixed-use workloads, eliminating the need to import file data into object storage. Qumulo consistently delivers low time-to-first-byte latencies of 1–2 ms, offering a combined file and object platform for even the most performance-intensive AI and HPC workloads.

ANQ and CNQ can run in all Azure regions (although ANQ operates best in regions with three availability zones), allowing your on-premises data centers to take advantage of Azure's scalability, reliability, and durability. ANQ and CNQ can also be dynamically reconfigured without taking services offline, so you can adjust performance — temporarily or permanently — as workloads change. An ANQ or CNQ instance deployed initially as a disaster recovery or archive target can be converted into a high-performance data platform in seconds, without redeploying the service or migrating hosted data.

If you already use Qumulo storage on-premises or in other cloud platforms, Qumulo's Cloud Data Fabric enables seamless data movement between on-premises, edge, and Azure-based deployments. Connect portals between locations to build a Global Namespace and instantly extend your on-premises data to Azure's portfolio of cloud-native applications, such as Microsoft Copilot, AI Studio, Microsoft Fabric, and high-performance compute and GPU services for burst rendering or various HPC engines. Cloud Data Fabric moves files through a large-scale data pipeline instantly and seamlessly. Use Qumulo's continuous replication engine to enable disaster recovery scenarios, or combine replication with Qumulo's cryptographically locked snapshot feature to protect older versions of critical data from loss or ransomware.

ANQ and CNQ leverage Azure Blob's 11-nines durability to achieve a highly available file system and utilize multiple availability zones for even greater availability — without the added costs typically associated with replication in other file systems.
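Since ANQ and CNQ expose standard file protocols, clients connect with ordinary OS tooling. The sketch below shows a generic NFSv4.1 mount from a Linux client; the endpoint and export path are placeholders, and Qumulo's documentation covers the recommended mount options per workload.

```bash
# Placeholder endpoint and export -- an ANQ/CNQ instance exposes standard NFS exports,
# so a stock Linux mount works; check Qumulo docs for recommended options.
sudo mkdir -p /mnt/qumulo
sudo mount -t nfs anq-instance.example.com:/files /mnt/qumulo -o vers=4.1
```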
Conclusion

The future of enterprise storage isn't just in the cloud — it's in smart, cloud-native infrastructure that scales with your business, not against it. Azure Native Qumulo (ANQ) and Cloud Native Qumulo (CNQ) on Microsoft Azure aren't just upgrades to legacy storage — they're a reimagining of what file systems can do in a cloud-first world. Whether you're running AI workloads, scaling HPC environments, or simply looking to escape the limitations of aging on-prem NAS, ANQ and CNQ give you the power to do it without compromise. With elastic performance, utility-based pricing, and native integration with Azure services, Qumulo doesn't just support modernization — it accelerates it.

To help you unlock these benefits, the Qumulo team is offering a free architectural assessment tailored to your environment and workloads. If you're ready to lead, not lag, and want to explore how ANQ and CNQ can transform your enterprise storage, reach out today by emailing Azure@qumulo.com. Let's build the future of your data infrastructure together.
Secure Linux workloads using Azure Files with Encryption in Transit

Encryption in Transit (EiT) overview

As organizations increasingly move to cloud environments, safeguarding data security both at rest and in transit is essential for protecting sensitive information from emerging threats and for maintaining regulatory compliance. Azure Files already offers encryption at rest using Microsoft-managed or customer-managed keys for NFS file shares. Today, we're excited to announce the General Availability of Encryption in Transit (EiT) for NFS file shares.

By default, Azure encrypts data moving across regions. In addition, all clients accessing Azure Files NFS shares are required to be within the scope of a trusted virtual network (VNet) to ensure secure access to applications. However, data transferred between resources within a VNet remains unencrypted. Enabling EiT ensures that all reads and writes to the NFS file shares within the VNet are encrypted, providing an additional layer of security. With EiT, enterprises running production-scale applications with Azure Files NFS shares can now meet their end-to-end compliance requirements.

Feedback from the NFS community and Azure customers emphasized the need for an encryption approach that is easy to deploy, portable, and scalable. TLS enables a streamlined deployment model for NFS with EiT while minimizing configuration complexity, maintaining protocol transparency, and avoiding operational overhead. The result is a more secure, performant, and standards-compliant solution that integrates seamlessly into existing NFS workflows. With EiT, customers can now encrypt all NFS traffic using the latest and most secure version of TLS, TLS 1.3, achieving enterprise-grade security effortlessly. TLS provides three core security guarantees:

- Confidentiality: Data is encrypted, preventing eavesdropping.
- Authentication: The client verifies the server via certificates during the handshake to establish trust.
- Integrity: TLS ensures that information arrives safely and unchanged, adding protection against data corruption or bit flips in transit.

TLS encryption for Azure Files is delivered via stunnel, a trusted, open-source proxy designed to add TLS encryption to existing client-server communications without modifying the applications themselves. It has been widely used across industries for many years for its robust security and transparent, in-transit encryption.

AZNFS Mount Helper for Seamless Setup

EiT client setup and mounting of NFS volumes may seem like a daunting task, but we have made it easier using the AZNFS mount helper tool.

- Simplicity and Resiliency: AZNFS is a simple, open-source tool, maintained and supported by Microsoft, that automates stunnel setup and NFS volume mounting over a secure TLS tunnel. AZNFS's built-in watchdog auto-reconnect logic protects the TLS mounts, ensuring high availability during unexpected connectivity interruptions. Sample AZNFS mount commands, customized to your NFS volume, are available in the Azure portal (screenshot below).

Fig 1. Azure portal view to configure AZNFS for Azure clients using EiT

- Standardized and flexible: Mounting with AZNFS incorporates the Microsoft-recommended performance, security, and reliability mount options by default while providing flexibility to adjust these settings to fit your workload. For example, while TLS is the default selection, you can override it to use non-TLS connections for scenarios like testing or debugging.
- Broad Linux compatibility: AZNFS is available through Microsoft's package repository for major Linux distributions, including Ubuntu, RedHat, SUSE, Alma Linux, Oracle Linux, and more.
- Seamless upgrades: The AZNFS package updates automatically in the background without affecting active mount connections. You will not need maintenance windows or downtime to perform upgrades.

The illustration below shows how EiT helps transmit data securely between clients and NFS volumes over trusted networks.

Fig 2. EiT set up flow and secure data transfer for NFS shares
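For orientation, a TLS-protected mount with AZNFS typically looks like the sketch below on an Azure Linux VM. The package setup, account, and share names are placeholders, and the `notls` opt-out option is our reading of the helper's documentation; verify both against the how-to guide linked under Next Steps.

```bash
# Install the AZNFS mount helper from Microsoft's package repository
# (Ubuntu shown; package setup differs per distribution).
sudo apt-get install -y aznfs

# Mount an Azure Files NFSv4.1 share; AZNFS establishes the stunnel TLS tunnel by default.
sudo mkdir -p /mnt/myshare
sudo mount -t aznfs mystorageacct.file.core.windows.net:/mystorageacct/myshare /mnt/myshare \
  -o vers=4,minorversion=1,sec=sys

# Testing/debugging only: opt out of TLS with the notls mount option (assumption).
# sudo mount -t aznfs ... -o vers=4,minorversion=1,sec=sys,notls
```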
Enterprise Workloads and Platform Support

EiT is compatible with applications running on a wide range of platforms, including Linux VMs in Azure, on-premises Linux servers, VM scale sets, and Azure Batch, ensuring compatibility with major Linux distributions for cloud, hybrid, and on-premises deployments.

Azure Kubernetes Service (AKS): The preview of NFS EiT in AKS will be available shortly. In the meantime, the upstream Azure Files CSI Driver includes AZNFS integration, which can be manually configured to enable EiT for NFS volumes with stateful container workloads.

SAP: SAP systems are central to many business operations and handle sensitive data like financial information, customer details, and proprietary data. Securing this confidential data within the SAP environment, including its central services, is a critical concern. NFS volumes used in central services are single points of failure, making their security and availability crucial. This blog post on SAP deployments on Azure provides guidance on using EiT-enabled NFS volumes for SAP deployment scenarios to make them even more secure. SAP tested EiT for their SAP RISE deployments and shared positive feedback:

"The NFS Encryption in Transit preview has been a key enabler for running RISE customers mission critical workloads on Azure Files, helping us meet high data in transit encryption requirements without compromising performance or reliability. It has been critical in supporting alignment with strict security architectures and control frameworks—especially for regulated industries like financial services and healthcare. We're excited to see this capability go GA and look forward to leveraging it at scale." Ventsislav Ivanov, IT Architecture Chief Expert, SAP

Compliance-centric verticals: As part of our preview, customers in industry verticals including financial services, insurance, and retail leveraged EiT to address their data confidentiality and compliance needs. One such customer, Standard Chartered, a major global financial institution, highlighted its benefits:

"The NFS Encryption in Transit preview has been a key enabler for migrating one of our on-premises applications to Azure. It allowed us to easily run tests in our development and staging environments while maintaining strict compliance and security for our web application assets. Installation of the required aznfs package was seamless, and integration into our bootstrap script for virtual machine scale set automation went smoothly. Additionally, once we no longer needed to disable the HTTPS requirement on our storage account, no further changes were necessary to our internal Terraform modules—making the experience nearly plug-and-play. We're excited to see this capability reach general availability" Mohd Najib, Azure Cloud Engineer, Standard Chartered

Regional availability and pricing

Encryption in Transit GA with TLS 1.3 is rolling out globally and is now available in most regions. EiT can be enabled on both new and existing storage accounts and Azure Files NFS shares. There is no additional cost for enabling EiT.

Next Steps to Secure Your Workloads

- Explore More: How to encrypt data in transit for NFS shares | Microsoft Learn
- Mandate Security: Enable "Secure Transfer Required" on all your storage accounts with NFS volumes to mandate EiT for an additional layer of protection (a one-line CLI sketch follows below).
- Enforce at Scale: Enable Azure Policy for enforcing EiT across your subscription.

Please reach out to the team at AzureFiles@microsoft.com for any questions and feedback.
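As a small illustration of the "Mandate Security" step above, the secure-transfer setting can be flipped with a single CLI call; the account and resource group names are placeholders.

```bash
# Require secure transfer ("Secure Transfer Required") on a storage account.
az storage account update --name mystorageacct --resource-group myRG --https-only true
```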
From GlusterFS to Azure Files: A Real-World Migration Story

A few weeks ago, we received a call familiar to many cloud architects—a customer with a massive GlusterFS deployment impacted by Red Hat's end-of-support deadline (December 2024), wondering: "What now?". With hundreds of terabytes across their infrastructure serving both internal teams and external customers, moving away from GlusterFS became a business continuity imperative. Having worked with numerous storage migrations over the years, I could already see the late nights ahead for their team if they simply tried to recreate their existing architecture in the cloud. So, we rolled up our sleeves and dug into their environment to find a better way forward.

The GlusterFS challenge

GlusterFS emerged in 2005 as a groundbreaking open-source distributed file system that solved horizontal scaling problems at a time when enterprise storage had to work around mechanical device limitations. Storage administrators traditionally created pools of drives limited to single systems that were difficult to expand without major downtime. GlusterFS addressed this by allowing distributed storage across physical servers, each maintaining its own redundant storage. Red Hat's acquisition of GlusterFS (Red Hat to Acquire Gluster) in 2011 brought enterprise legitimacy, but its architecture reflected a pre-cloud world with significant limitations:

- Costly local/geo replication due to limited site/WAN bandwidth
- Upgrades requiring outages and extensive planning
- Overhead from OS patching and maintaining compliance standards
- Constant "backup babysitting" for offsite tape rotation
- 24/7 on-call staffing for potential "brick" failures

Indeed, during our initial discussions, the customer's storage team lead half-jokingly mentioned having a special ringtone for middle-of-the-night "brick" failure alerts. We also noticed that they were running the share exports on SMB 3.0 and NFS 3.0, something which is considered "slightly" deprecated today.

Note: In GlusterFS, a "brick" is the basic storage unit—a directory on a disk contributing to the overall volume that enables scalable, distributed storage.

Why Azure Files made perfect sense

With the challenges our customer faced in maintaining redundancies and handling administration effort, they required a turnkey solution to manage their data. Azure Files provided them a fully managed file share service in the cloud, offering SMB, NFS, and REST-based shares, with on-demand scaling, integrated backups, and automated failover.

GlusterFS was designed for large-scale distributed storage systems. With Azure Files, GlusterFS customers can take advantage of up to 100 TiB of Premium file or 256 TiB of Provisioned v2 HDD capacity, 10 GBps of throughput, and up to 10K IOPS for demanding workloads. The advantages of Azure Files don't just end at performance. As customers migrate from GlusterFS to Azure Files, these are the additional benefits out of the box:

- Azure Backup integration
- One-click redundancy configuration upgrades
- Built-in monitoring via Azure Monitor
- HIPAA, PCI DSS, and GDPR compliance
- Enterprise security through granular access control and encryption (in transit and at rest)

The financial reality

At a high level, we found that migrating to Azure Files was 3x cheaper than migrating to an equivalent VM-based setup running GlusterFS. We compared a self-managed three-node GlusterFS cluster (running SMB 3.0) on Azure VMs via Provisioned v2 disks with Azure Files - Premium tier (SMB 3.11). Note: All VM disks use Provisioned v2 for best cost savings. Region: East US 2.
| Component | GlusterFS on Azure VMs with Premium SSD v2 Disk | Azure Files Premium |
|---|---|---|
| Compute: 3 x D16ads v5 VMs (16 vCPUs, 64 GiB RAM) | $685.75 | N/A |
| VM OS Disks (P10) | $15.42 | N/A |
| Storage: 100 TB | $11,398.18 | $10,485.75 |
| Provisioned Throughput (storage only) | 2,400 MBps | 10,340 MBps |
| Provisioned IOPS (storage only) | 160,000 | 102,400 |
| Additional Storage for Replication (~200%) | $22,796.37 | N/A |
| Backup & DR: Backup Solution (30 days, ZRS redundancy) | $16,343.04 | $4,608.00 |
| Monthly Total | $51,238.76 | $15,094.75 |

As the table illustrates, even before we factor in administration costs, Azure Files already has a compelling financial advantage. We also recently released the "Provisioned v2" billing model for Azure Files - HDD tier, which provides fine-grained cost management and can scale up to 256 TiB! With GlusterFS running on-premises, customers must also account for various administrative overheads, which are taken away with Azure Files:

| Factor | Current (GlusterFS) | Azure Files |
|---|---|---|
| Management & Maintenance | Significant | None |
| Storage Administration Personnel | 15-20 hours/week | Minimal |
| Rebalancing Operations | Required | Automatic |
| Failover Effort | Required | Automatic |
| Capacity Planning | Required | Automatic |
| Scaling Complexity | High | None |
| Implementation of Security Controls | Required | Included |

The migration journey

We developed a phased approach tailored to the customer's risk tolerance, starting with lower-priority workloads as a pilot:

Phase 1: Assessment (2-3 weeks)
- Inventory GlusterFS environments and analyse workloads
- Define requirements and select the appropriate Azure Files tier
- Develop migration strategy

Phase 2: Pilot Migration (1-2 weeks)
- Set up Azure Files and test connectivity
- Migrate test workloads and refine the process

Phase 3: Production Migration (variable)
- Execute transfers using appropriate tools (AzCopy, Robocopy, rsync/fpsync)
- Implement incremental sync and validate data integrity

Phase 4: Optimization (1-2 weeks)
- Fine-tune performance and implement monitoring
- Decommission legacy infrastructure

Results that matter

Working with Tata Consultancy Services (TCS) as our migration partner, the customer ran a POC migrating from a three-node RHEL 8 environment with a 1 TB SMB (GlusterFS) share to Azure Files Premium. The source share was limited to ~1,500 IOPS and had 20+ subfolders, each reserved for application access, which made administrative tasks challenging. The application subfolder structure was remapped to individual Azure Files shares as part of the migration planning process. In addition, each share was secured using on-premises Active Directory domain controller-based share authentication. The migration was done using Robocopy in mirror mode, with the SMB shares mounted on Windows clients (a representative command sketch follows the list below). The migration delivered significant benefits:

- Dramatically improved general-purpose performance due to migration of HDD-based shares to SSD (1,500 IOPS shared at source vs 3,000 IOPS / 200 MBps base performance per share)
- Meeting and exceeding the customer's current RTO and RPO requirements (15 min)
- Noticeable performance gains reported by the customer for SQL Server workloads
- Flexibility to resize each share up to the Azure Files maximum limit, independent of noisy neighbours as previously configured
- Significantly reduced TCO (at 33% of the cost of an equivalent VM-based deployment) with higher base performance
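For reference, a Robocopy mirror-mode transfer of an SMB share to an Azure Files share typically looks like the sketch below; the UNC paths, thread count, and log location are illustrative placeholders rather than the exact commands used in this POC.

```bat
:: Illustrative sketch -- source and destination UNC paths are placeholders.
:: /MIR mirrors the tree (including deletions); /COPY:DATSOU preserves data, attributes,
:: timestamps, ACLs, owner, and auditing info; /MT enables multithreaded copying.
robocopy \\glusterfs-smb\share \\mystorageacct.file.core.windows.net\myshare ^
    /MIR /COPY:DATSOU /DCOPY:DAT /MT:32 /R:2 /W:5 /LOG:C:\logs\migration.log
```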
What this means for your GlusterFS environment

If you're facing the GlusterFS support deadline, this is an opportunity to modernize your file storage approach. Azure Files offers a chance to eliminate infrastructure headaches through simplified management, robust security, seamless scalability, and compelling economics. Looking to begin your own migration? Reach out to us at azurefiles@microsoft.com, contact your Microsoft representatives, or explore our Azure Files documentation to learn more about capabilities and migration paths.
Enhance Your Linux Workloads with Azure Files NFS v4.1: Secure, Scalable, and Flexible

Enhance your Linux workloads with Azure Files NFS v4.1, an enterprise-grade solution. With new support for in-transit encryption and RESTful access, it delivers robust security and flexible data access for mission-critical and data-intensive applications.
Protect Azure Data Lake Storage with Vaulted Backups

We are thrilled to announce a limited public preview of vaulted backups for Azure Data Lake Storage. This is available now for test workloads and we'd like to get your feedback. Vaults are secure, encrypted copies of your data, enabling restoration to an alternate location in cases of accidental or malicious deletion. Vaulted backups are fully isolated from the source data, ensuring continuity for your business operations even in scenarios where the source data is compromised. This fully managed solution leverages the Azure Backup service to manage backups with automated retention and scheduling. By creating a backup policy, you can define a backup schedule and retention period. Based on this policy, the Azure Backup service generates recovery points and manages the lifecycle of backups seamlessly.

Ways vaulted backups protect your data:

- Isolation from Production Data – Vaulted backups are stored in a separate, Microsoft-managed tenant, preventing attackers from accessing both primary and backup data.
- Strict Access Controls – Backup management requires distinct permissions, ensuring segregation of duties and reducing insider threats.
- Advanced Security Features – With features like soft delete, immutability, and encryption, vaulted backups safeguard data against unauthorized modifications and premature deletions. Even if attackers compromise the primary storage account, backups remain secure within the vault, preserving data integrity and ensuring compliance.
- Alternate Location Recovery – Vaulted backups provide a reliable recovery solution by enabling restoration to an alternate storage account, ensuring business continuity even when the original account is inaccessible. This capability also allows organizations to create separate data copies for purposes such as testing, development, or analytics, without disrupting production environments.
- Granular Recovery – With vaulted backups, you can restore the entire storage account or specific containers based on your needs. You can also use prefix matching to recover select blobs.

With the growing frequency and sophistication of cyberattacks, protecting your data against loss or corruption is more critical than ever. Consider the following example use cases where having vaulted backups can save the day.

Enhanced Protection Against Ransomware Attacks

Ransomware attacks can encrypt critical data, complicating recovery unless a ransom is paid. Vaulted backups offer an independent and secure recovery solution, allowing you to restore data without succumbing to attackers' demands.

Accidental or Malicious Storage Account Deletion

Human errors, insider threats, or compromised credentials can result in the deletion of entire storage accounts. Vaulted backups provide a crucial layer of protection by storing backups in Microsoft-managed storage, independent of your primary storage account. This ensures that an additional copy of your data remains intact, even if the original storage account is accidentally or maliciously deleted.

Compliance Regulations

Certain industries mandate offsite backups and long-term data retention to meet regulatory standards. Vaulted backups enable organizations to comply by offering offsite backup storage within the same Azure region as the primary storage account. With vaulted backups, data can be retained for up to 10 years.

Getting started

To enroll in the preview, fill out this form. For more details, refer to this article.
Vaulted backups can be configured for block blobs within HNS-enabled, standard general-purpose v2 ADLS storage accounts in the regions specified here. Support for additional regions will be added incrementally. Currently, this preview is recommended exclusively for testing purposes. The Azure Backup protected instance fee and the vaulted backup storage fees are not currently charged. Now is a great time to give vaulted backups a try!

Contact us

If you have questions or feedback, please reach out to us at AskAzureBackupTeam@microsoft.com.
Building a Scalable Web Crawling and Indexing Pipeline with Azure Storage and AI Search

In the ever-evolving world of data management, keeping search indexes up to date with dynamic data can be challenging. Traditional approaches, such as manual or scheduled indexing, are resource-intensive, delay-prone, and difficult to scale. Azure Blob Trigger combined with an AI Search indexer offers a cutting-edge solution to overcome these challenges, enabling real-time, scalable, and enriched data indexing. This blog explores how Blob Trigger, integrated with Azure Cognitive Search, transforms the indexing process by automating workflows and enriching data with AI capabilities. It highlights the step-by-step process of configuring Blob Storage, creating Azure Functions for triggers, and seamlessly connecting with an AI-powered search index. The approach leverages Azure's event-driven architecture, ensuring efficient and cost-effective data management.
Announcing General Availability of Next generation Azure Data Box Devices

Today, we're excited to announce the General Availability of Azure Data Box 120 and Azure Data Box 525, our next-generation compact, NVMe-based Data Box devices. These devices are currently available for customers to order in the US, US Gov, Canada, EU, and UK Azure regions, with broader availability coming soon. Since the preview announcement at Ignite '24, we have successfully ingested petabytes of data across multiple orders, serving customers in various industry verticals. Customers have expressed delight over the reliability and efficiency of the new devices, with up to 10x improvement in data transfer rates, highlighting them as a valuable and essential asset for large-scale data migration projects.

These new device offerings reflect insights gained from working with our customers over the years and understanding their evolving data transfer needs. They incorporate several improvements to accelerate offline data transfers to Azure, including:

- Fast copy - Built with NVMe drives for high-speed transfers and improved reliability, with support for faster network connections
- Ease of use - Larger capacity offering (525 TB) in a compact form factor for easy handling
- Resilient - Ruggedized devices built to withstand rough conditions during transport
- Secure - Enhanced physical, hardware, and software security features
- Broader availability - Presence planned in more Azure regions, meeting local compliance standards and regulations

What's new?

Improved Speed & Efficiency

- NVMe-based devices offer faster data transfer rates, providing a 10x improvement in data transfer speeds to the device compared to previous-generation devices. With a dataset comprised of mostly large (TB-sized) files, on average half a petabyte can be copied to the device in under two days.
- High-speed transfers to Azure, with data upload up to 5x faster for medium to large files, reducing the lead time for your data to become accessible in the Azure cloud.
- Improved networking with support for up to 100 GbE connections, compared to 10 GbE on the older generation of devices.
- Two options with usable capacity of 120 TB and 525 TB in a compact form factor meeting OSHA requirements.
- Devices ship next-day air in most regions.

Learn more about the performance improvements on Data Box 120 and Data Box 525.

Enhanced Security

The new devices come with several new physical, hardware, and software security enhancements. This is in addition to the built-in Azure security baseline for Data Box and the Data Box service security measures currently supported by the service.

- Secure boot functionality with hardware root of trust and Trusted Platform Module (TPM) 2.0.
- Custom tamper-proof screws and a built-in intrusion detection system to detect unauthorized device access.
- AES 256-bit BitLocker software encryption for data at rest is currently available. Hardware encryption via the RAID controller, which will be enabled by default on these devices, is coming soon. Once available, customers can enable double encryption through both software and hardware encryption to meet their sensitive data transfer requirements.
- These ISTA 6A-compliant devices are built to withstand rough conditions during shipment while keeping both the device and your data safe and intact.

Learn more about the enhanced security features on Data Box 120 and Data Box 525.

Broader Azure region coverage

A recurring request from our customers has been wider regional availability of higher-capacity devices to accelerate large migrations.
We're happy to share that Azure Data Box 525 will be available across the US, US Gov, EU, UK, and Canada, with broader presence in EMEA and APAC regions coming soon. This marks a significant improvement in the availability of a large-capacity device compared to the current Data Box Heavy, which is available only in the US and Europe.

What our customers have to say

For the last several months, we've been working directly with our customers of all industries and sizes to leverage the next-generation devices for their data migration needs. Customers love the larger capacity with form-factor familiarity, seamless setup, and faster copy.

"We utilized Azure Data Box for a bulk migration of Unix archive data. The data, originating from IBM Spectrum Protect, underwent pre-processing before being transferred to Azure blobs via the NFS v4 protocol. This offline migration solution enabled us to efficiently manage our large-scale data transfer needs, ensuring a seamless transition to the Azure cloud. Azure Data Box proved to be an indispensable tool in handling our specialized migration scenario, offering a reliable and efficient method for data transfer." – ST Microelectronics Backup & Storage team

"This new offering brings significant advantages, particularly by simplifying our internal processes. With deployments ranging from hundreds of terabytes to even petabytes, we previously relied on multiple regular Data Box devices—or occasionally Data Box Heavy devices—which required extensive operational effort. The new solution offers sizes better aligned with our needs, allowing us to achieve optimal results with fewer logistical steps. Additionally, the latest generation is faster and provides more connectivity options at data centre premises, enhancing both efficiency and flexibility for large-scale data transfers." - Lukasz Konarzewski, Senior Data Architect, Commvault

"We have had a positive experience overall with the new Data Box devices to move our data to Azure Blob storage. The devices offer easy plug and play installation, detailed documentation especially for the security features and good data copy performance. We would definitely consider using it again for future large data migration projects." – Bas Boeijink, Cloud Engineer, Eurofiber Cloud Infra

Upcoming changes to older SKUs availability

Note that in regions where the next-gen devices are available, new orders for Data Box 80 TB and Data Box Heavy devices cannot be placed after May 31, 2025. We will, however, continue to process and support all existing orders.

Order your device today!

The devices are currently available for customers to order in the US, Canada, EU, UK, and US Gov Azure regions. We will continue to expand to more regions in the upcoming months. Azure Data Box provides customers with one of the most cost-effective solutions for data migration, offering competitive pricing with the lowest cost per TB among offline data transfer solutions. You can learn more about the pricing across various regions by visiting our pricing page. You can use the Azure portal to select the SKU suitable for your migration needs and place the order. Learn more about the all-new Data Box devices here. We are committed to continuing to deliver innovative solutions to lower the barrier for bringing data to Azure. Your feedback is important to us. Tell us what you think about the new Azure Data Box devices by writing to us at DataBoxPM@microsoft.com – we can't wait to hear from you.