Azure NetApp Files
How to Save 70% on File Data Costs
In the final entry in our series on lowering file storage costs, DarrenKomprise shares how Komprise can help lower on-premises and Azure-based file storage costs. Komprise and Azure offer you a means to optimize unstructured data costs now and in the future!

Hybrid File Tiering Addresses Top CIO Priorities of Risk Control and Cost Optimization
This article describes how you can leverage Komprise Intelligent Tiering for Azure with any on-premises file storage platform and Azure Blob Storage to reduce your costs by 70% and shrink your ransomware attack surface.

Note: This article has been co-authored by Komprise and Microsoft.

Unstructured data plays a big role in today's IT budgets and risk factors

Unstructured data, which is any data that does not fit neatly into a database or tabular format, has been growing exponentially and is now projected by analysts to be over 80% of business information. Unstructured data is commonly referred to as file data, which is the terminology used for the rest of this article.

File data has caught some IT leaders by surprise because it now consumes a significant portion of IT budgets, with no sign of slowing down. File data is expensive to manage and retain because it is typically stored and protected by replication to an identical storage platform, which can be very expensive at scale. We will now review how you can easily identify hot and cold data and transparently tier cold files to Azure to cut costs and shrink ransomware exposure with Komprise.

Why file data is factoring into CIO priorities

CIOs are prioritizing cost optimization, risk management and revenue improvement as key priorities for their data; 56% chose cost optimization as their top priority according to the 2024 Komprise State of Unstructured Data Management survey. This is because file data is often retained for decades, its growth rate is in the double digits, and it can easily reach petabytes. Keeping a primary copy, a backup copy and a DR copy means three or more copies of this large volume of file data, which becomes prohibitively expensive. On the other hand, file data has largely been untapped in terms of value, but businesses are now realizing the importance of file data for training and fine-tuning AI models. Smart solutions are required to balance these competing requirements.

Why file data is vulnerable to ransomware attacks

File data is arguably the most difficult data to protect against ransomware attacks because it is open to many different users, groups and applications. This increases risk because a single user's or group's mistake can lead to a ransomware infection. If an infected file is shared and accessed again, the infection can quickly spread across the network undetected, and the longer ransomware lurks, the greater the risk. For these reasons, you cannot ignore file data when creating a ransomware defense strategy.

How to leverage Azure to cut the cost and inherent risk of file data retention

You can cut costs and shrink the ransomware attack surface of file data using Azure even when you still require on-premises access to your files. The key is reducing the amount of file data that is actively accessed and thus exposed to ransomware attacks. Since 80% of file data is typically cold and has not been accessed in months (see Demand for cold data storage heats up | TechTarget), transparently offloading these files to immutable storage through hybrid tiering cuts both costs and risks. Hybrid tiering offloads entire files from the data storage, snapshot, backup and DR footprints while your users continue to see and access the tiered files without any change to your application processes or user behavior.

Unlike storage tiering, which is typically offered by the storage vendor and places filesystem-controlled blocks of files in Azure, hybrid tiering operates at the file level: it transparently offloads the entire file to Azure while leaving behind a link that looks and behaves like the file itself.
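To make the link-based pattern concrete, here is a toy sketch in Python. It is not Komprise's implementation (Komprise uses its patented Transparent Move Technology); it only illustrates the idea of moving a whole file to cold storage and leaving a stand-in at the original path. All paths are hypothetical.

```python
import shutil
from pathlib import Path

def tier_file(hot_path: Path, cold_dir: Path) -> None:
    """Move a whole file to cold storage, leaving a link at the original path."""
    cold_dir.mkdir(parents=True, exist_ok=True)
    cold_path = cold_dir / hot_path.name
    shutil.move(str(hot_path), str(cold_path))  # offload the entire file, not blocks
    hot_path.symlink_to(cold_path)              # the original path keeps working

# Hypothetical example: applications still open /data/projects/old_report.csv.
tier_file(Path("/data/projects/old_report.csv"), Path("/coldstore"))
```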
Hybrid tiering offloads cold files to Azure to cut costs and shrink the ransomware attack surface:

Cut 70%+ costs: By offloading cold files, not blocks, hybrid tiering can shrink the amount of data you are storing and backing up by 80%, which cuts costs proportionately. As shown in the example below, you can cut 70% of file storage and backup costs by using hybrid tiering (the short script after the benefits list below reproduces this arithmetic).

Assumptions
- Amount of data on NAS (TB): 1,024
- % cold data: 80%
- Annual data growth rate: 30%
- On-prem NAS cost/GB/mo: $0.07
- Backup cost/GB/mo: $0.04
- Azure Blob Cool cost/GB/mo: $0.01
- Komprise Intelligent Tiering for Azure/GB/mo: $0.008

                                      On-Prem NAS     On-Prem NAS + Azure Intelligent Tiering
Data in on-premises NAS (TB)          1,024           205
Snapshots                             30%             30%
Cost of on-prem NAS, primary site     $1,064,960      $212,992
Cost of on-prem NAS, DR site          $1,064,960      $212,992
Backup cost                           $460,800        $42,598
Data on Azure Blob Cool (TB)          0               819
Cost of Azure Blob Cool               $0              $201,327
Cost of Komprise                      n/a             $100,000
Total cost for 1 PB per year          $2,590,720      $769,909
Savings/PB/yr                                         $1,820,811 (70%)

Shrink ransomware attack surface by 80%: Offloading cold files to immutable Azure Blob removes them from the active attack surface, eliminating 80% of the storage, DR and backup costs while also providing a potential recovery path if the cold files get infected. Because Komprise tiers to immutable Azure Blob with versioning, even if someone tried to infect a cold file, it would be saved as a new version, enabling recovery from an older version. Learn more about Azure Immutable Blob storage here.

In addition to cost savings and improved ransomware defense, the benefits of hybrid cloud tiering using Komprise and Azure are:

- Leverage Existing Storage Investment: You can continue to use your existing NAS storage while Komprise tiers cold files to Azure. Users and applications continue to see and access the files as if they were still on-premises.
- Leverage Azure Data Services: Komprise maintains file-object duality with its patented Transparent Move Technology (TMT), which means the tiered files can be viewed and accessed in Azure as objects, allowing you to use Azure Data Services natively. This enables you to leverage the full power of Azure with your enterprise file data.
- Works Across Heterogeneous Vendor Storage: Komprise works across all your file and object storage to analyze and transparently tier data to Azure file and object tiers.
- Ongoing Lifecycle Management in Azure: Komprise continues to manage the data lifecycle in Azure, so as data gets colder, it can move from the Azure Blob Cool tier to the Cold and Archive tiers based on policies you control.
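As a sanity check on the savings table above, this minimal Python sketch sums the per-year figures straight from the table (the dollar values are inputs copied from the table, not derived here) and confirms the 70% savings:

```python
# Annual costs for ~1 PB, copied from the comparison table above.
on_prem_only = {
    "primary_nas": 1_064_960,
    "dr_nas": 1_064_960,
    "backup": 460_800,
}
with_hybrid_tiering = {
    "primary_nas": 212_992,
    "dr_nas": 212_992,
    "backup": 42_598,
    "azure_blob_cool": 201_327,
    "komprise": 100_000,
}

before = sum(on_prem_only.values())        # $2,590,720
after = sum(with_hybrid_tiering.values())  # $769,909
savings = before - after                   # $1,820,811
print(f"Savings: ${savings:,} ({savings / before:.0%})")  # -> Savings: $1,820,811 (70%)
```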
Azure and Komprise customers are already using hybrid tiering to improve their ransomware posture while reducing costs. A great example is Katten.

Global law firm saves $900,000 per year and achieves resilient ransomware defense with Komprise and Azure

Katten Muchin Rosenman LLP (Katten) is a full-service law firm delivering legal services across more than a dozen practice areas and sectors, including Aviation, Construction, Energy, Education, Entertainment, Healthcare and Real Estate. Like many other large law firms, Katten has been seeing an average 20% annual growth in storage for file-related data, resulting in the need to add on-premises storage capacity every 12 to 18 months. With a focus on managing data storage costs in an environment where data grows exponentially but cannot be deleted, Katten needed a solution that could provide deep data insights and the ability to move file data to immutable object storage in the cloud as it ages, for greater cost savings and ransomware protection. Katten implemented hybrid tiering using Komprise Intelligent Tiering to Azure and leveraged Immutable Blob storage, not only saving $900,000 annually but also improving its ransomware defense posture. Read how Katten does hybrid tiering to Azure using Komprise.

Summary: Hybrid tiering helps CIOs optimize file costs and cut ransomware risks

Cost optimization and risk management are top CIO priorities. File data is a major contributor to both costs and ransomware risks. Organizations are leveraging Komprise to tier cold files to Azure while continuing to use their on-premises file storage NAS. This provides a low-risk approach with no disruption to users and apps while cutting costs by 70% and shrinking the ransomware attack surface by 80%.

Next steps

To learn more and get a customized assessment of your savings, visit the Azure Marketplace listing or contact azure@komprise.com.
Microsoft Ignite 2023: What is new in file shares for enterprise workloads?

We are announcing new capabilities for Azure Files and Azure NetApp Files that continue to improve the file share experience for enterprise workloads. You can now enjoy more flexibility, security, and performance for your data storage needs.
Migrate the critical file data you need to power your applications

When you migrate applications to Azure, you cannot leave file data behind. The Azure File Migration program can help you migrate data from NFS, SMB, and S3 sources to Azure Storage services in less time, with less risk, and with no headaches. Learn how to take advantage of this program and about the fundamentals of file migration in this post.
Leverage Azure NetApp Files for RStudio workloads

R is an open-source language used for statistical computing and graphics. It's used in the statistical analysis of genetics, natural language processing, analysing financial data, and more. R provides an interactive command line experience.

RStudio is an integrated development environment (IDE) for the R language. The free version provides code editing tools, an integrated debugging experience, and package development tools. It includes a console and a syntax-highlighting editor that supports direct code execution, as well as tools for plotting, history, debugging and workspace management. RStudio is available in open source and commercial editions and runs on the desktop (Windows, Mac, and Linux) or in a browser connected to RStudio Server or RStudio Workbench (Debian/Ubuntu, Red Hat/CentOS, and SUSE Linux). An RStudio deployment requires shared file storage for project files and user-specific files, for example code, documents, user configuration and session data.

RStudio Connect is a publishing platform for the work your teams create in R and Python. Share Shiny applications, R Markdown reports, Plumber APIs, dashboards, plots, Jupyter Notebooks, and more in one convenient place. Use push-button publishing from the RStudio IDE, scheduled execution of reports, and flexible security policies to bring the power of data science to your entire enterprise. RStudio Connect manages uploaded content within the server's data directory, and this data directory must be a shared location.

RStudio Team is a bundle of RStudio professional products for doing statistical data analysis, sharing data products, and managing packages. Most of the RStudio products require a high-performance, low-latency shared storage option. In particular, configurations that load-balance across two or more nodes need a networked storage solution. Shared storage is used to persist content such as project files and application data across your network; RStudio recommends and supports the NFS protocol.

Azure NetApp Files is a first-party Azure service for migrating and running the most demanding enterprise file workloads in the cloud: native SMBv3.0 and NFS (v3.0 and v4.1) file shares, databases, SAP, and high-performance computing applications, with no code changes. R workloads are characterised by high IOPS and throughput requirements. Azure NetApp Files can handle from ~130,000 pure random write IOPS to ~460,000 pure random read IOPS at extremely low latency, with up to 4.4 GBps of throughput. This makes it particularly suitable for hosting R workloads.

With Azure NetApp Files, you can set up a native NFSv3 or NFSv4.1 volume as described below.

Pre-requisites
- You need access to the Azure portal and an active subscription to provision resources.
- You must have already set up a capacity pool.
- A subnet must be delegated to Azure NetApp Files (see the programmatic sketch after this section).
- The NFS client should be in the same VNet or a peered VNet as the Azure NetApp Files volume. Connecting from outside the VNet is supported; however, it will introduce additional latency and decrease overall performance.
- Ensure that the NFS client is up to date and running the latest updates for the operating system.

Deciding which NFS version to use

NFSv3 can handle a wide variety of use cases and is commonly deployed in most enterprise applications. You should validate which version (NFSv3 or NFSv4.1) your application requires and create your volume using the appropriate version. For example, if you use Apache ActiveMQ, file locking with NFSv4.1 is recommended over NFSv3.
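As referenced in the prerequisites, the subnet delegation can also be done in code. A minimal sketch, assuming the azure-mgmt-network and azure-identity Python SDKs; all resource names and the address prefix are placeholders, and method and model names should be verified against the SDK version you install:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Delegation, Subnet

# Placeholder subscription ID and resource names.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create (or update) a subnet delegated to Microsoft.NetApp/volumes.
client.subnets.begin_create_or_update(
    "rstudio-rg",      # resource group
    "rstudio-vnet",    # virtual network
    "anf-delegated",   # subnet name
    Subnet(
        address_prefix="10.0.1.0/24",
        delegations=[Delegation(name="anf", service_name="Microsoft.NetApp/volumes")],
    ),
).result()
```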
Security

Support for UNIX mode bits (read, write, and execute) is available for NFSv3 and NFSv4.1. Root-level access is required on the NFS client to mount NFS volumes.

Steps

1. Create an NFS volume

Click the Volumes blade from the Capacity Pools blade. Click + Add volume to create a volume. In the Create a Volume window, click Create, and provide information for the following fields under the Basics tab:

- Volume name: Specify the name for the volume that you are creating. A volume name must be unique within each capacity pool. It must be at least three characters long, must begin with a letter, and can contain only letters, numbers, underscores ('_'), and hyphens ('-'). You cannot use default or bin as the volume name.
- Capacity pool: Specify the capacity pool where you want the volume to be created.
- Quota: Specify the amount of logical storage that is allocated to the volume. The Available quota field shows the amount of unused space in the chosen capacity pool that you can use towards creating a new volume. The size of the new volume must not exceed the available quota.
- Throughput (MiB/s): If the volume is created in a manual QoS capacity pool, specify the throughput you want for the volume. If the volume is created in an auto QoS capacity pool, the value displayed in this field is (quota x service level throughput).
- Virtual network: Specify the Azure virtual network (VNet) from which you want to access the volume. The VNet you specify must have a subnet delegated to Azure NetApp Files. The Azure NetApp Files service can be accessed only from the same VNet or from a VNet that is in the same region as the volume through VNet peering. You can also access the volume from your on-premises network through ExpressRoute.
- Subnet: Specify the subnet that you want to use for the volume. The subnet you specify must be delegated to Azure NetApp Files. If you have not delegated a subnet, you can click Create new on the Create a Volume page. Then, in the Create Subnet page, specify the subnet information and select Microsoft.NetApp/volumes to delegate the subnet for Azure NetApp Files. In each VNet, only one subnet can be delegated to Azure NetApp Files.

If you want to apply an existing snapshot policy to the volume, click Show advanced section to expand it, specify whether you want to hide the snapshot path, and select a snapshot policy in the pull-down menu. For information about creating a snapshot policy, see Manage snapshot policies.

Click Protocol, and then complete the following actions:

- Select NFS as the protocol type for the volume.
- Specify a unique file path for the volume. This path is used when you create mount targets. The requirements for the path are as follows:
  - It must be unique within each subnet in the region.
  - It must start with an alphabetical character.
  - It can contain only letters, numbers, or dashes (-).
  - Its length must not exceed 80 characters.
- Select the version (NFSv3 or NFSv4.1) for the volume.
- If you are using NFSv4.1, indicate whether you want to enable Kerberos encryption for the volume.
- Optionally, configure an export policy for the NFS volume.

Click Review + Create to review the volume details, and then click Create to create the volume. The volume you created appears in the Volumes page. A volume inherits its subscription, resource group, and location attributes from its capacity pool. To monitor the volume deployment status, you can use the Notifications tab.
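If you prefer to script volume creation rather than use the portal, here is a minimal sketch assuming the azure-mgmt-netapp Python SDK. Every name, ID, and size below is a placeholder, and model and method names should be checked against the SDK version you install:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import Volume

client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create an NFSv3 volume in an existing capacity pool (placeholder names).
volume = client.volumes.begin_create_or_update(
    resource_group_name="rstudio-rg",
    account_name="rstudio-anf-account",
    pool_name="rstudio-pool",
    volume_name="rstudio-shared",
    body=Volume(
        location="eastus",
        creation_token="rstudio-shared",  # the unique file path used in mount targets
        usage_threshold=4 * 1024**4,      # 4 TiB quota, expressed in bytes
        subnet_id=(
            "/subscriptions/<subscription-id>/resourceGroups/rstudio-rg"
            "/providers/Microsoft.Network/virtualNetworks/rstudio-vnet"
            "/subnets/anf-delegated"
        ),
        protocol_types=["NFSv3"],         # or ["NFSv4.1"]
    ),
).result()
print(volume.name, volume.provisioning_state)
```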
2. Mount the ANF NFS volume on your RStudio clients

Click the Volumes blade, and then select the volume that you want to mount. Click Mount instructions on the selected volume, and then follow the instructions to mount the volume. (A scripted equivalent appears at the end of this article.)

R analytics jobs may require different levels of underlying storage performance depending on their processing algorithms. With Azure NetApp Files, you can tune storage performance non-disruptively as requirements change. See Dynamically change the service level of a volume: https://docs.microsoft.com/en-us/azure/azure-netapp-files/dynamic-change-volume-service-level

This article is authored by: Sudev Kurur, Cloud Solutions Architect at NetApp
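Finally, for clients where you script the mount step rather than follow the portal's Mount instructions, here is a hedged sketch. The mount IP, export path, and mount options are placeholders; use the exact values shown in your volume's Mount instructions blade (root access is required, as noted above):

```python
import subprocess

# Placeholders: copy the real values from the volume's Mount instructions blade.
MOUNT_IP = "10.0.1.4"            # ANF mount target IP
EXPORT_PATH = "rstudio-shared"   # the volume's file path ("creation token")
MOUNT_POINT = "/mnt/rstudio-shared"

subprocess.run(["sudo", "mkdir", "-p", MOUNT_POINT], check=True)
# Options follow common Azure NetApp Files NFSv3 guidance; use vers=4.1 for NFSv4.1.
subprocess.run(
    ["sudo", "mount", "-t", "nfs",
     "-o", "rw,hard,rsize=262144,wsize=262144,vers=3,tcp",
     f"{MOUNT_IP}:/{EXPORT_PATH}", MOUNT_POINT],
    check=True,
)
```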