azure blob
How to Save 70% on File Data Costs
In the final entry in our series on lowering file storage costs, DarrenKomprise shares how Komprise can help lower on-premises and Azure-based file storage costs. Komprise and Azure offer you a means to optimize unstructured data costs now and in the future!
Hybrid File Tiering Addresses Top CIO Priorities of Risk Control and Cost Optimization

This article describes how you can leverage Komprise Intelligent Tiering for Azure with any on-premises file storage platform and Azure Blob Storage to reduce your costs by 70% and shrink your ransomware attack surface.

Note: This article has been co-authored by Komprise and Microsoft.

Unstructured data plays a big role in today's IT budgets and risk factors

Unstructured data, which is any data that does not fit neatly into a database or tabular format, has been growing exponentially and is now projected by analysts to be over 80% of business information. Unstructured data is commonly referred to as file data, which is the terminology used for the rest of this article.

File data has caught some IT leaders by surprise because it is now consuming a significant portion of IT budgets with no sign of slowing down. File data is expensive to manage and retain because it is typically stored and protected by replication to an identical storage platform, which can be very expensive at scale. We will now review how you can easily identify hot and cold data and transparently tier cold files to Azure to cut costs and shrink ransomware exposure with Komprise.

Why file data is factoring into CIO priorities

CIOs are prioritizing cost optimization, risk management and revenue improvement as key priorities for their data. 56% chose cost optimization as their top priority according to the 2024 Komprise State of Unstructured Data Management survey. This is because file data is often retained for decades, its growth rate is in double digits, and it can easily reach petabytes. Keeping a primary copy, a backup copy and a DR copy means storing three or more copies of a large volume of file data, which becomes prohibitively expensive. On the other hand, file data has largely been untapped in terms of value, but businesses are now realizing the importance of file data to train and fine-tune AI models. Smart solutions are required to balance these competing requirements.

Why file data is vulnerable to ransomware attacks

File data is arguably the most difficult data to protect against ransomware attacks because it is open to many different users, groups and applications. This increases risk because a single user's or group's mistake can lead to a ransomware infection. If the file is shared and accessed again, the infection can quickly spread across the network undetected. The longer ransomware lurks, the greater the risk. For these reasons, you cannot ignore file data when creating a ransomware defense strategy.

How to leverage Azure to cut the cost and inherent risk of file data retention

You can cut costs and shrink the ransomware attack surface of file data using Azure even when you still require on-premises access to your files. The key is reducing the amount of file data that is actively accessed and thus exposed to ransomware attacks. Since 80% of file data is typically cold and has not been accessed in months (see Demand for cold data storage heats up | TechTarget), transparently offloading these files to immutable storage through hybrid tiering cuts both costs and risks. Hybrid tiering offloads entire files from the data storage, snapshot, backup and DR footprints while your users continue to see and access the tiered files without any change to your application processes or user behavior.
Unlike storage tiering, which is typically offered by the storage vendor and places blocks of files in Azure while they remain controlled by the storage filesystem, hybrid tiering operates at the file level: it transparently offloads the entire file to Azure while leaving behind a link that looks and behaves like the file itself.

Hybrid tiering offloads cold files to Azure to cut costs and shrink the ransomware attack surface:

Cut 70%+ costs: By offloading cold files, not blocks, hybrid tiering can shrink the amount of data you are storing and backing up by 80%, which cuts costs proportionately. As shown in the example below, you can cut 70% of file storage and backup costs by using hybrid tiering.

Assumptions:

| Assumption | Value |
| --- | --- |
| Amount of Data on NAS (TB) | 1,024 |
| % Cold Data | 80% |
| Annual Data Growth Rate | 30% |
| On-Prem NAS Cost/GB/Mo | $0.07 |
| Backup Cost/GB/Mo | $0.04 |
| Azure Blob Cool Cost/GB/Mo | $0.01 |
| Komprise Intelligent Tiering for Azure/GB/Mo | $0.008 |

| | On-Prem NAS | On-Prem NAS + Azure Intelligent Tiering |
| --- | --- | --- |
| Data in On-Premises NAS (TB) | 1,024 | 205 |
| Snapshots | 30% | 30% |
| Cost of On-Prem NAS Primary Site | $1,064,960 | $212,992 |
| Cost of On-Prem NAS DR Site | $1,064,960 | $212,992 |
| Backup Cost | $460,800 | $42,598 |
| Data on Azure Blob Cool (TB) | 0 | 819 |
| Cost of Azure Blob Cool | $0 | $201,327 |
| Cost of Komprise | $0 | $100,000 |
| Total Cost for 1PB per Year | $2,590,720 | $769,909 |
| SAVINGS/PB/Yr | | $1,820,811 (70%) |

Shrink the ransomware attack surface by 80%: Offloading cold files to immutable Azure Blob removes them from the active attack surface, eliminating 80% of the storage, DR and backup costs while also providing a potential recovery path if the cold files get infected. Because Komprise tiers to immutable Azure Blob with versioning, even if someone tried to infect a cold file, the change would be saved as a new version, enabling recovery from an older version. Learn more about Azure Immutable Blob storage here.
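The article does not prescribe how the immutable target is configured, but as a hedged sketch, this is roughly what enabling blob versioning and a time-based immutability policy could look like with Azure CLI. The account name, resource group, container name and retention period below are placeholder assumptions, not values from this article:

```
# Enable blob versioning so overwrites are retained as recoverable versions
# (account and resource group names are hypothetical placeholders)
az storage account blob-service-properties update \
    --account-name mystorageaccount \
    --resource-group myresourcegroup \
    --enable-versioning true

# Apply a time-based immutability policy to the container holding tiered files
# (container name and 365-day retention period are assumptions for illustration)
az storage container immutability-policy create \
    --account-name mystorageaccount \
    --resource-group myresourcegroup \
    --container-name tiered-cold-files \
    --period 365
```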
In addition to cost savings and improved ransomware defense, the benefits of Hybrid Cloud Tiering using Komprise and Azure are:

- Leverage Existing Storage Investment: You can continue to use your existing NAS storage and Komprise to tier cold files to Azure. Users and applications continue to see and access the files as if they were still on-premises.
- Leverage Azure Data Services: Komprise maintains file-object duality with its patented Transparent Move Technology (TMT), which means the tiered files can be viewed and accessed in Azure as objects, allowing you to use Azure Data Services natively. This enables you to leverage the full power of Azure with your enterprise file data.
- Works Across Heterogeneous Vendor Storage: Komprise works across all your file and object storage to analyze and transparently tier data to Azure file and object tiers.
- Ongoing Lifecycle Management in Azure: Komprise continues to manage the data lifecycle in Azure, so as data gets colder, it can move from Azure Blob Cool to Cold to Archive tiers based on policies you control.

Azure and Komprise customers are already using hybrid tiering to improve their ransomware posture while reducing costs. A great example is Katten.

Global law firm saves $900,000 per year and achieves resilient ransomware defense with Komprise and Azure

Katten Muchin Rosenman LLP (Katten) is a full-service law firm delivering legal services across more than a dozen practice areas and sectors, including Aviation, Construction, Energy, Education, Entertainment, Healthcare and Real Estate.

Like many other large law firms, Katten has been seeing an average 20% annual growth in storage for file-related data, resulting in the need to add on-premises storage capacity every 12-18 months. With a focus on managing data storage costs in an environment where data is growing exponentially but cannot be deleted, Katten needed a solution that could provide deep data insights and the ability to move file data as it ages to immutable object storage in the cloud for greater cost savings and ransomware protection.

Katten implemented hybrid tiering using Komprise Intelligent Tiering to Azure and leveraged Immutable Blob storage to not only save $900,000 annually but also improve its ransomware defense posture. Read how Katten does hybrid tiering to Azure using Komprise.

Summary: Hybrid tiering helps CIOs optimize file costs and cut ransomware risks

Cost optimization and risk management are top CIO priorities. File data is a major contributor to both costs and ransomware risks. Organizations are leveraging Komprise to tier cold files to Azure while continuing to use their on-premises NAS file storage. This provides a low-risk approach with no disruption to users and apps while cutting costs by 70% and shrinking the ransomware attack surface by 80%.

Next steps

To learn more and get a customized assessment of your savings, visit the Azure Marketplace listing or contact azure@komprise.com.
Control geo failover for ADLS and SFTP with unplanned failover

We are excited to announce the General Availability of customer managed unplanned failover for Azure Data Lake Storage and storage accounts with SSH File Transfer Protocol (SFTP) enabled.

What is Unplanned Failover?

With customer managed unplanned failover, you are in control of initiating your failover. Unplanned failover allows you to switch your storage endpoints from the primary region to the secondary region. During an unplanned failover, write requests are redirected to the secondary region, which then becomes the new primary region. Because an unplanned failover is designed for scenarios where the primary region is experiencing an availability issue, unplanned failover happens without the primary region fully completing replication to the secondary region. As a result, during an unplanned failover there is a possibility of data loss. This loss depends on the amount of data that has yet to be replicated from the primary region to the secondary region.

Each storage account has a 'last sync time' property, which indicates the last time a full synchronization between the primary and the secondary region was completed. Any data written between the last sync time and the current time may only be partially replicated to the secondary region, which is why unplanned failover may incur data loss.

Unplanned failover is intended to be utilized during a true disaster where the primary region is unavailable. Therefore, once completed, the data in the original primary region is erased, the account is changed to locally redundant storage (LRS), and your applications can resume writing data to the storage account. If the previous primary region becomes available again, you can convert your account back to geo-redundant storage (GRS). Migrating your account from LRS to GRS will initiate a full data replication from the new primary region to the secondary, which incurs geo-bandwidth costs.

If your scenario involves failing over while the primary region is still available, consider planned failover. Planned failover can be utilized in scenarios including planned disaster recovery testing or recovering from non-storage related outages. Unlike unplanned failover, the storage service endpoints must be available in both the primary and secondary regions before a planned failover can be initiated. This is because planned failover is a 3-step process that includes: (1) making the current primary read-only, (2) syncing all the data to the secondary (ensuring no data loss), and (3) swapping the primary and secondary regions so that writes are now in the new region. In contrast with unplanned failover, planned failover maintains the geo-redundancy of the account, so planned failback does not require a full data copy.

To learn more about planned failover and how it works, view Public Preview: Customer Managed Planned Failover for Azure Storage | Microsoft Community Hub

To learn more about each failover option and the primary use case for each, view Azure storage disaster recovery planning and failover - Azure Storage | Microsoft Learn

How to get started?

Getting started is simple. To learn more about the step-by-step process to initiate an unplanned failover, review the documentation: Initiate a storage account failover - Azure Storage | Microsoft Learn
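As a minimal sketch of that flow with Azure CLI (the account and resource group names are placeholder assumptions), you can read the last sync time to gauge how much data may be at risk, then initiate the failover:

```
# Check the last sync time to estimate how much recent data may not yet
# be replicated to the secondary region (names are hypothetical)
az storage account show \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --expand geoReplicationStats \
    --query geoReplicationStats.lastSyncTime \
    --output tsv

# Initiate the customer managed unplanned failover to the secondary region
az storage account failover \
    --name mystorageaccount \
    --resource-group myresourcegroup
```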
Feedback

If you have questions or feedback, reach out at storagefailover@service.microsoft.com

On-premises-first hybrid workflows in healthcare. Why start with digital pathology?

Traditionally, digital pathology solutions have been on-premises only, and they will always require on-premises components. That doesn't mean they can't take advantage of cloud services. Read how one of our ISV solutions, Tiger Technology, solves this challenge. This blog post describes one of the ways digital pathology can be implemented in a hybrid manner.
Public Preview: Use Azure Blob Storage on Windows as a file share using Network File System (NFS) 3.0

The Azure Blob Storage team is announcing the Public Preview of the capability to use Blob Storage on Windows using the Network File System (NFS) 3.0 protocol, while the capability to access Blob Storage using NFS on Linux is generally available (GA).
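As a rough sketch of what mounting can look like once NFS 3.0 is enabled on the storage account (the account and container names are placeholder assumptions, and the exact Windows syntax may differ slightly from the official documentation):

```
# Windows (preview): after installing the Windows NFS client feature,
# mount the container as a drive letter; names are hypothetical placeholders.
mount -o nolock,sec=sys \\mystorageaccount.blob.core.windows.net\mystorageaccount\mycontainer Z:

# Linux (GA): the equivalent NFS 3.0 mount of the same Blob container.
sudo mkdir -p /mnt/blobnfs
sudo mount -o sec=sys,vers=3,nolock,proto=tcp \
    mystorageaccount.blob.core.windows.net:/mystorageaccount/mycontainer /mnt/blobnfs
```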
When to use AzCopy versus Azure PowerShell or Azure CLI

In this article you will learn the difference between API operations on Azure in the control plane and the data plane, how various tools such as AzCopy, Azure PowerShell, and Azure CLI leverage these APIs, and which tool fits best for your workload. All these tools are CLI-based and work across platforms including Windows, Linux, and macOS.
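To make the control plane versus data plane split concrete, here is a minimal sketch; the account, resource group, container and local path are placeholder assumptions. The first command manages the storage account itself through Azure Resource Manager, while the second moves the data, which is the kind of bulk transfer AzCopy is optimized for:

```
# Control plane: create the storage account (an Azure Resource Manager operation)
az storage account create \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --location eastus \
    --sku Standard_LRS

# Data plane: bulk-copy local files into a container using AzCopy
# (the SAS token placeholder must be replaced with a real token)
azcopy copy "./localdata" \
    "https://mystorageaccount.blob.core.windows.net/mycontainer?<SAS-token>" \
    --recursive
```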
Storage migration: Combine Azure Storage Mover and Azure Data Box

Learn how to leverage the combination of Azure Storage Mover and Azure Data Box for offline migration plus online catch-up. This post also discusses a popular migration strategy, helping you predict and minimize any downtime of workloads.