azure blob
Control geo failover for ADLS and SFTP with unplanned failover.
We are excited to announce the General Availability of customer-managed unplanned failover for Azure Data Lake Storage and storage accounts with SSH File Transfer Protocol (SFTP) enabled.

What is Unplanned Failover?

With customer-managed unplanned failover, you are in control of initiating your failover. Unplanned failover allows you to switch your storage endpoints from the primary region to the secondary region. During an unplanned failover, write requests are redirected to the secondary region, which then becomes the new primary region. Because unplanned failover is designed for scenarios where the primary region is experiencing an availability issue, it proceeds without the primary region fully completing replication to the secondary region. As a result, an unplanned failover can incur data loss; the amount depends on how much data had yet to be replicated from the primary region to the secondary region. Each storage account has a 'last sync time' property, which indicates the last time a full synchronization between the primary and the secondary region was completed. Any data written between the last sync time and the current time may be only partially replicated to the secondary region, which is why unplanned failover may incur data loss.

Unplanned failover is intended for a true disaster where the primary region is unavailable. Once it completes, the data in the original primary region is erased, the account is changed to locally redundant storage (LRS), and your applications can resume writing data to the storage account. If the previous primary region becomes available again, you can convert your account back to geo-redundant storage (GRS). Migrating your account from LRS to GRS initiates a full data replication from the new primary region to the secondary, which incurs geo-bandwidth costs.

If your scenario involves failing over while the primary region is still available, consider planned failover instead. Planned failover can be used in scenarios such as planned disaster recovery testing or recovering from non-storage-related outages. Unlike unplanned failover, the storage service endpoints must be available in both the primary and secondary regions before a planned failover can be initiated. This is because planned failover is a three-step process: (1) make the current primary read-only, (2) sync all data to the secondary (ensuring no data loss), and (3) swap the primary and secondary regions so that writes go to the new primary. In contrast with unplanned failover, planned failover maintains the geo-redundancy of the account, so planned failback does not require a full data copy.

To learn more about planned failover and how it works, see Public Preview: Customer Managed Planned Failover for Azure Storage | Microsoft Community Hub. To learn more about each failover option and the primary use case for each, see Azure storage disaster recovery planning and failover - Azure Storage | Microsoft Learn.

How to get started?

Getting started is simple. To learn more about the step-by-step process to initiate an unplanned failover, review the documentation: Initiate a storage account failover - Azure Storage | Microsoft Learn.

Feedback

If you have questions or feedback, reach out at storagefailover@service.microsoft.com.
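Before initiating an unplanned failover, it is worth checking how far behind the secondary is, since anything written after the last sync time may be lost. A minimal Azure CLI sketch, assuming hypothetical account and resource group names; expanding geoReplicationStats is the documented way to surface the last sync time:

```
# Show when the secondary last completed a full sync with the primary.
# 'mydemoaccount' and 'my-rg' are placeholder names.
az storage account show \
    --name mydemoaccount \
    --resource-group my-rg \
    --expand geoReplicationStats \
    --query geoReplicationStats.lastSyncTime \
    --output tsv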
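Initiating the failover itself is a single control-plane call, and returning to geo-redundancy afterwards is a SKU update. A hedged sketch with the same placeholder names; the second command triggers the full (billable) re-replication described above:

```
# Fail over to the secondary region; afterwards the account is LRS
# in the new primary region.
az storage account failover --name mydemoaccount --resource-group my-rg

# Later, once the original region is healthy, restore geo-redundancy.
# This starts a full copy to the new secondary and incurs geo-bandwidth costs.
az storage account update --name mydemoaccount --resource-group my-rg --sku Standard_GRS
```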
On-premises-first hybrid workflows in healthcare. Why start with digital pathology?

Traditionally, digital pathology solutions have been on-premises-only, and they will always require on-premises components. That doesn't mean they can't take advantage of cloud services. Read how one of our ISVs, Tiger Technology, solves this challenge. This blog post describes one of the ways digital pathology can be implemented in a hybrid manner.
Public Preview: Use Azure Blob Storage on Windows as a file share using Network File System (NFS) 3.0

The Azure Blob Storage team is announcing the Public Preview of the capability to use Blob Storage on Windows via the Network File System (NFS) 3.0 protocol, while the capability to access Blob Storage using NFS on Linux is already generally available (GA).
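The typical flow on Windows is to install the NFS client and then mount the container's NFS export as a drive letter. A minimal sketch under stated assumptions: the account and container names are placeholders, and the exact export path layout should be verified against the preview documentation:

```
# Install the NFS client feature (Windows Server; run PowerShell as admin).
Install-WindowsFeature -Name NFS-Client

# Mount the container's NFS 3.0 export as drive Z:.
# The \\<account>.blob.core.windows.net\<account>\<container> path layout
# is an assumption; check the preview docs for your setup.
mount -o nolock \\mystorageaccount.blob.core.windows.net\mystorageaccount\mycontainer Z:
```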
When to use AzCopy versus Azure PowerShell or Azure CLI

In this article you will learn the difference between API operations on Azure in the control plane and the data plane, how tools such as AzCopy, Azure PowerShell, and Azure CLI use these APIs, and which tool fits your workload best. All these tools are CLI-based and work across platforms, including Windows, Linux, and macOS.
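To make the distinction concrete, here is a hedged sketch: the first command is a control-plane (Azure Resource Manager) operation that manages the account itself, while the second is a data-plane operation that moves the actual blobs. The account, container, local path, and SAS token are all placeholders:

```
# Control plane: create the storage account (an ARM operation).
az storage account create --name mydemoaccount --resource-group my-rg \
    --location westus --sku Standard_LRS

# Data plane: bulk-upload files to a container via the Blob service endpoint.
# AzCopy is optimized for high-throughput transfers like this one.
azcopy copy "C:\data\*" \
    "https://mydemoaccount.blob.core.windows.net/mycontainer?<SAS>" --recursive
```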
Storage migration: Combine Azure Storage Mover and Azure Data Box

Learn how to combine Azure Storage Mover and Azure Data Box for offline migration plus online catch-up. This post also discusses a popular migration strategy that helps you predict and minimize workload downtime.
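The seed-then-catch-up pattern is not specific to any one tool. As a sketch of the concept only (the post itself uses Storage Mover for the online pass), an AzCopy sync after the Data Box seed transfers just what changed at the source; the share path, account name, and SAS are placeholders:

```
# After the Data Box seed lands in the target container, an online catch-up
# pass transfers only files that changed at the source since the seed copy.
azcopy sync "\\fileserver\share" \
    "https://mydemoaccount.blob.core.windows.net/mycontainer?<SAS>"
```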
Protecting access to storage account with backups and archived data

Hello experts, I've been trying to understand how to protect backups and archived data stored in Azure Blob Storage. What is the way to protect that data in case global admin rights get compromised? I understand that the data is encrypted, etc., but in the scenario above, what could add an additional layer of protection to make sure that even if a global admin account is compromised, it will not be easy to access that critical data?
Azure Storage Container - Soft Delete Monitoring

Hi All, Can someone let me know if there is a way to export all the soft-deleted items in my container on a daily basis into a CSV, or any file format I could connect Power BI to? I want to monitor all active and soft-deleted items in a Power BI report I have created, and at the moment I can't seem to find a way to get a list of all the items that have been deleted. Thanks
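One hedged approach, sketched with the Azure CLI: list blobs including soft-deleted ones, filter to the deleted entries, and write them to a delimited file that Power BI can ingest. The account and container names are placeholders, and the exact deletedTime property path may vary by CLI version:

```
# List blobs including soft-deleted ones (--include d) and keep only the
# deleted entries, written as tab-separated values for Power BI to ingest.
az storage blob list \
    --account-name mydemoaccount \
    --container-name mycontainer \
    --include d \
    --query "[?deleted].[name, properties.deletedTime]" \
    --output tsv > soft-deleted-blobs.tsv
```

Scheduling this daily (for example, via a cron job or an Azure Automation runbook) would give Power BI a regularly refreshed file to pull from.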