Migrating storage from on-premises can be challenging. That’s why we are on a mission to make your migrations as simple as possible. We've developed robust solutions that enable you to transfer your files and folders to Azure, tailored to meet your specific migration needs.
At times, the optimal approach is to migrate your files and folders over your network from on-premises to Azure. For such cases, we provide Azure Storage Mover, a fully managed migration service. Learn more
Alternatively, migrating data offline might be more suitable: Azure Data Box allows you to transport terabytes of data to Azure swiftly, affordably, and dependably. You will receive a specialized Data Box storage device to load with your data and send directly to an Azure data center. Learn more
Did you know these two services can be combined into an effective file and folder migration solution, one you can use to predict and minimize downtime for your workloads?
Offline migration, online catch-up
Using Azure Data Box likely saved you a significant amount of bandwidth. However, any workload active on your source storage probably made changes while your Data Box was in transit to Azure.
Consequently, you'll also need to bring those changes to your cloud storage before a workload can be cut over to it.
Catch-up copies typically need minimal bandwidth since most of the data already resides in Azure, and only the delta needs to be transferred. Azure Storage Mover is an excellent tool for this purpose.
Storage Mover jobs are built to detect the differences between your on-premises storage and your cloud storage. Storage Mover then efficiently transfers any updates and new files that weren't captured by your Data Box transfer.
Making the most of your upload bandwidth is crucial. For instance, if only a file's metadata (such as permissions) has changed, Storage Mover uploads only the new metadata instead of the entire file content.
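To make this concrete, here is a minimal Python sketch of this kind of delta detection: it compares size, modification time, and permission bits to choose between a full upload, a metadata-only update, or a skip. The `CloudEntry` record and `plan_action` helper are hypothetical illustrations, not Storage Mover's actual implementation.

```python
import os
import stat
from dataclasses import dataclass

@dataclass
class CloudEntry:
    """Hypothetical record of a file as it already exists in Azure."""
    size: int
    mtime: float
    mode: int  # permission bits

def plan_action(local_path: str, cloud: CloudEntry | None) -> str:
    """Decide what a catch-up pass should do for one source file."""
    st = os.stat(local_path)
    if cloud is None:
        return "upload"           # new since the bulk copy: send full content
    if st.st_size != cloud.size or st.st_mtime > cloud.mtime:
        return "upload"           # content likely changed: send full content
    if stat.S_IMODE(st.st_mode) != cloud.mode:
        return "metadata-only"    # only permissions changed: send metadata
    return "skip"                 # unchanged: nothing to transfer
```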
Storage Mover's copy modes, merge and mirror, allow you to tailor your cloud storage updates to your specific needs.
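For a simplified picture of the difference between the two modes (a sketch of the semantics over sets of file names, not the service's code): merge keeps files that exist only in the target, while mirror removes them so the target exactly matches the source.

```python
def apply_copy_mode(source: set[str], target: set[str], mode: str) -> dict[str, set[str]]:
    """Illustrative semantics of merge vs. mirror copy modes.

    merge:  copy source files to the target, keep extra target files.
    mirror: copy source files to the target, delete extra target files.
    """
    copy = source  # files to create or update in the target
    delete = (target - source) if mode == "mirror" else set()
    return {"copy": copy, "delete": delete}

# Example: 'old.log' exists only in the target.
src = {"a.txt", "b.txt"}
tgt = {"a.txt", "old.log"}
print(apply_copy_mode(src, tgt, "merge"))   # old.log is kept
print(apply_copy_mode(src, tgt, "mirror"))  # old.log is deleted
```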
Of course, Storage Mover can also be used independently of Data Box. In that case, you'd migrate entirely over your network. Using Data Box can bring both time and bandwidth savings, but it isn't needed in every migration scenario.
Minimizing and predicting workload downtime
When transitioning on-premises workloads to Azure Storage, you typically aim to:
- Reduce the duration your on-premises application is offline during the switch.
- Establish a predictable downtime period for users and business operations reliant on the workload.
Azure Storage Mover is designed to assist in achieving both goals.
The idea behind this approach is that you migrate your data from source to target several times.
Exactly how long this first, bulk copy takes depends on many factors and is hard to predict. Therefore, it's not advisable to take workloads that depend on this data offline before initiating this bulk copy step. Instead, keep your workloads active on the source data.
Keeping your workloads active on the source constantly introduces changes and new files. It may even prevent some of your files from being migrated because they are in use. But that's OK. As soon as your bulk migration finishes, you start a catch-up migration job. Now, only the changes that have occurred since the initial bulk migration started need to be transferred. This catch-up job will likely complete more quickly, since fewer bytes need to be transferred across your network.
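For a rough sense of why the catch-up pass is so much faster, consider some back-of-the-envelope transfer times. The data sizes, link speed, and efficiency factor below are illustrative assumptions, not figures from this article.

```python
def transfer_days(data_tb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Days needed to move data_tb terabytes over a link_gbps link
    that achieves the given effective utilization."""
    seconds = (data_tb * 8e12) / (link_gbps * 1e9 * efficiency)
    return seconds / 86400

print(transfer_days(100, 1))  # bulk copy over the wire: ~11.6 days
print(transfer_days(2, 1))    # catch-up delta only: ~0.23 days (~5.6 hours)
```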
A further speed-up migration job is optional. Initiate it immediately after the preceding catch-up job completes.
Because that job concluded more quickly than the initial bulk migration job, there was less time for changes to accumulate. Consequently, this speed-up job should complete even more quickly.
Multiple speed-up jobs can be executed consecutively. Eventually, you'll reach a point where a job's processing time no longer decreases and has hit its minimum for the given namespace. At this stage, almost no data needs to be transferred over the network, and the majority of the time is spent determining whether a file requires migration. Additional local compute cores and RAM can be beneficial here.
Once your speed-up copy job(s) no longer finish any faster than the preceding ones, it's probable that you've reached the minimum that the combination of your namespace (number of files) and the local compute resources allow for.
This implies that one more job will probably complete in a similar timeframe. You have identified a predictable, minimal downtime window for the workload(s) that depend on this namespace.
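In control-flow terms, the convergent n-pass strategy boils down to repeating catch-up passes until the run time plateaus. This Python sketch shows that loop; `run_migration_pass()` is a hypothetical stand-in for one Storage Mover job, not a real API.

```python
import time

def run_migration_pass() -> None:
    """Hypothetical stand-in for one Storage Mover catch-up job."""
    ...

def converge(max_passes: int = 10, tolerance: float = 0.10) -> float:
    """Run catch-up passes until the duration stops shrinking.

    Returns the plateau duration in seconds: a realistic estimate of
    the downtime a final cut-over pass will need.
    """
    previous = float("inf")
    for n in range(1, max_passes + 1):
        start = time.monotonic()
        run_migration_pass()
        duration = time.monotonic() - start
        print(f"pass {n}: {duration:.0f}s")
        # Stop once a pass is no longer meaningfully faster than the last.
        if duration >= previous * (1 - tolerance):
            return duration
        previous = duration
    return previous
```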
Now you can take your workload offline, run one final cut-over migration job to capture any remaining changes, and point your workload at its new Azure storage. And just like that, you are up and running again.
It's important to note the limitations of this method.
An extensive collection of small files with a high change rate might necessitate longer downtime. Moreover, this technique won't capture files that are in constant use until the final cut-over migration job. If there's a considerable number of such files or their total size is large, achieving a predictable minimum with this method is hardly feasible.
Consequently, this method is not suitable for migrating active database files, for example. The convergent, n-pass migration strategy is designed for general-purpose namespaces. For databases or files that are always open, it's best to use specialized migration tools tailored for those specific workloads.
Ready to get started?
Data Box:
- Documentation home
- Which Data Box device is right for me?
- Training: Import data offline with Data Box
Storage Mover: