Migrate databases from SQL Server to SQL Managed Instance using Log Replay Service (Preview)
Published Feb 18 2021
This article explains how to manually configure database migration from SQL Server 2008-2019 to SQL Managed Instance using Log Replay Service (LRS), currently in public preview. LRS is a cloud service enabled for SQL Managed Instance and based on SQL Server log shipping technology. Use LRS when you have complex custom migrations and hybrid architectures, when you need more control, when there is little tolerance for downtime, or when Azure Database Migration Service (DMS) cannot be used.

 

Note that both DMS and LRS use the same underlying migration technology and the same APIs. With the release of LRS, we are further enabling complex custom migrations and hybrid architectures between on-premises SQL Server and SQL Managed Instance.

 

When to use Log Replay Service

 

The LRS cloud service can be used directly with PowerShell or CLI cmdlets, or the API, to manually build and orchestrate database migrations to SQL Managed Instance. You might want to consider using the LRS cloud service in some of the following cases:

 

  • More control is needed for your database migration project
  • There is little tolerance for downtime on migration cutover
  • The DMS executable cannot be installed in your environment
  • The DMS executable does not have file access to local database backups
  • No access to the host OS is available, or there are no Administrator privileges
  • Networking ports cannot be opened between your environment and Azure
  • There are network throttling or proxy blocking issues in your environment
  • Backups are stored directly to Azure Blob Storage using the TO URL option
  • There is a need to use differential backups
Note: The recommended automated way to migrate databases from SQL Server to SQL Managed Instance is Azure DMS. That service uses the same LRS cloud service at the back end, with log shipping in NORECOVERY mode. Consider using LRS manually to orchestrate migrations when Azure DMS does not fully support your scenario.


How it works

 

Building a custom solution using LRS to migrate databases to the cloud requires several orchestration steps shown in the diagram and outlined in the table below.

 

The migration consists of making full database backups on SQL Server with CHECKSUM enabled, and copying the backup files to Azure Blob Storage. LRS is used to restore the backup files from Azure Blob Storage to SQL Managed Instance. Azure Blob Storage serves as intermediary storage between SQL Server and SQL Managed Instance.

 

LRS monitors Azure Blob Storage for any new differential or log backups added after the full backup has been restored, and automatically restores any new files added. The progress of backup files being restored on SQL Managed Instance can be monitored using the service, and the process can be aborted if necessary.

 

LRS does not require a specific backup file naming convention, as it scans all files placed on Azure Blob Storage and constructs the backup chain from reading the file headers only. Databases are in a "restoring" state during the migration process, as they are restored in NORECOVERY mode, and cannot be used for reading or writing until the migration has fully completed.

 

When migrating several databases, backups for each database need to be placed in a separate folder on Azure Blob Storage. LRS needs to be started separately for each database, and a different path to each separate Azure Blob Storage folder needs to be specified.
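As a sketch, the required one-folder-per-database layout can be staged locally before upload; the database and file names below are illustrative only:

```shell
# Illustrative only: one local folder per database, mirroring the
# per-database folder layout required inside the blob container.
mkdir -p backups/SampleDB backups/SalesDB
touch backups/SampleDB/SampleDB_full.bak backups/SampleDB/SampleDB_log.trn
touch backups/SalesDB/SalesDB_full.bak
ls backups
```

Each folder would then be uploaded as-is to the container, and LRS would be started once per folder.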

 

LRS can be started in autocomplete or continuous mode. When started in autocomplete mode, the migration completes automatically when the last backup file specified has been restored. When started in continuous mode, the service continuously restores any new backup files added, and the migration completes only on manual cutover. It is recommended that the application and workload are stopped and a final log-tail backup taken before the manual cutover is executed. The final cutover step completes restoring the last backup file and brings the database online for read and write use on SQL Managed Instance.

 

Once LRS is stopped, either automatically on autocomplete or manually on cutover, the restore process cannot be resumed for a database that was brought online on SQL Managed Instance. To restore additional backup files after the migration has completed, the database needs to be deleted and the entire backup chain restored from scratch by restarting LRS.

 

[Diagram: log-replay-service-conceptual.png]

 

Operation Details
1. Copy database backups from SQL Server to Azure Blob Storage.

- Copy full, differential, and log backups from SQL Server to Azure Blob Storage container using Azcopy or Azure Storage Explorer.

- Use any file names, as LRS does not require a specific file naming convention.
- In migrating several databases, a separate folder is required for each database.

2. Start the LRS service in the cloud.

- The service can be started with a choice of cmdlets:
PowerShell Start-AzSqlInstanceDatabaseLogReplay, or
CLI az sql midb log-replay start.

- Start LRS separately for each different database pointing to a different backup folder on Azure Blob Storage.

- Once started, the service will take backups from the Azure Blob Storage container and start restoring them on SQL Managed Instance.
- In case LRS was started in continuous mode, once all initially uploaded backups are restored, the service will watch for any new files uploaded to the folder and will continuously apply logs based on the LSN chain until the service is stopped.

2.1. Monitor the operation progress.

- Progress of the restore operation can be monitored with a choice of cmdlets:
PowerShell Get-AzSqlInstanceDatabaseLogReplay, or
CLI az sql midb log-replay show.
2.2. Stop/abort the operation if needed.

- In case the migration process needs to be aborted, the operation can be stopped with a choice of cmdlets:
PowerShell Stop-AzSqlInstanceDatabaseLogReplay, or
CLI az sql midb log-replay stop.

- This will result in the deletion of the database being restored on SQL Managed Instance.
- Once stopped, LRS cannot be resumed for a database. The migration process needs to be restarted from scratch.

3. Cutover to the cloud when ready.

- Stop the application and the workload. Take the last log-tail backup and upload to Azure Blob Storage.

- Complete the cutover by initiating the LRS complete operation with a choice of cmdlets:
PowerShell Complete-AzSqlInstanceDatabaseLogReplay, or
CLI az sql midb log-replay complete.

- This will cause the LRS service to restore the last backup file. The service will stop, and the database will come online for read and write use on SQL Managed Instance.

- Repoint the application connection string from SQL Server to SQL Managed Instance and start the application. You will need to orchestrate this step yourself, either through a manual connection string change in your application, or automatically (for example, if your application can read the connection string from a property or a database).
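As an illustration of the repointing step, a typical connection string change might look like the following; the server and user names here are hypothetical, and the managed instance host follows the <instancename>.<dnszone>.database.windows.net format:

```
Before (SQL Server):          Server=sqlserver01.contoso.com;Database=SampleDB;User Id=appuser;Password=<secret>;
After (SQL Managed Instance): Server=managedinstance01.<dnszone>.database.windows.net;Database=SampleDB;User Id=appuser;Password=<secret>;
```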

Requirements for getting started

 

SQL Server side

  • SQL Server 2008-2019
  • Full backup of databases (one or multiple files)
  • Differential backup (one or multiple files)
  • Log backups (not split for a transaction log file)
  • CHECKSUM must be enabled for backups (mandatory)

Azure side

  • PowerShell Az.SQL module version 2.16.0, or above (install, or use Azure Cloud Shell)
  • CLI version 2.19.0, or above (install)
  • Azure Blob Storage container provisioned
  • SAS security token with Read and List only permissions generated for the blob storage container

RBAC permissions

  • Subscription Owner role, or
  • Azure operator needs to have the Managed Instance Contributor RBAC Role, or
  • Custom role with the following permission:
    Microsoft.Sql/managedInstances/databases/*

Migrating multiple databases

  • Backup files for different databases must be placed in separate folders on Azure Blob Storage.
  • LRS needs to be started separately for each database pointing to an appropriate folder on Azure Blob Storage.
  • LRS can support up to 100 simultaneous restore processes per single SQL Managed Instance.
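Since LRS is started once per database, starting several migrations can be scripted. The sketch below is an untested template using the CLI in continuous mode; the database names, resource group, instance name, container path, and SAS token are placeholders you would substitute:

```shell
# Hypothetical: start one continuous-mode LRS migration per database,
# each pointing at its own folder inside the same blob container.
for db in SampleDB SalesDB; do
  az sql midb log-replay start -g mygroup --mi myinstance -n "$db" \
    --storage-uri "https://<storageaccount>.blob.core.windows.net/<containername>/$db" \
    --storage-sas "<sas-token>" &
done
wait  # wait for all start requests to be acknowledged
```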

Best practices

The following are highly recommended as best practices:

  • Run Data Migration Assistant to validate your databases are ready to be migrated to SQL Managed Instance.
  • Split full and differential backups into multiple files, instead of a single file.
  • Enable backup compression.
  • Use Cloud Shell to execute scripts as it will always be updated to the latest cmdlets released.
  • Plan to complete the migration within 36 hours of starting the LRS service. This is a grace period preventing system-managed software patches once LRS has been started.

Important: 

  • A database being restored using LRS cannot be used until the migration process has completed. This is because the underlying technology is restore in NORECOVERY mode.
  • Read-only access to databases during the migration is not supported by LRS.
  • Once the migration has completed, either through autocomplete or on manual cutover, the migration process is finalized, as LRS does not support resuming the restore.

 

Steps to execute

 

Create Azure Blob container

 

Azure Blob Storage is used as intermediary storage for backup files between SQL Server and SQL Managed Instance. Follow these steps to create an Azure Blob Storage container:

 

  1. Create a storage account
  2. Create a blob container inside the storage account

 

Make backups on the SQL Server

 

Backups on the SQL Server can be made with either of the following two options:

 

  • Option 1: Back up to local disk storage, then upload the files to Azure Blob Storage, in case your environment restricts direct backup to Azure Blob Storage.
  • Option 2: Back up directly to Azure Blob Storage with the "TO URL" option in T-SQL, in case your environment and security procedures allow you to do so.

First, modify the database to use the full recovery mode.

T-SQL

-- To permit log backups, before the full database backup, modify the database to use the full recovery model.

USE master

ALTER DATABASE SampleDB

SET RECOVERY FULL

GO

 

Then, make backups on SQL Server and ensure that the CHECKSUM option is enabled; this is mandatory for LRS to start. It is also recommended that the COMPRESSION option is enabled.

 

Proceed with one of the two options - backing up to the local disk, or directly to Azure Blob Storage, depending on your circumstances.

 

Option 1: Make backup on the local disk

 

Use the sample code below to make full, differential, and log backups on SQL Server to a local disk. Backups made on local storage will then need to be copied to Azure Blob Storage.

T-SQL

-- Example on how to make full database backup to the local disk

BACKUP DATABASE [SampleDB]
TO DISK='C:\BACKUP\SampleDB_full.bak'
WITH INIT, COMPRESSION, CHECKSUM

GO

 

-- Example on how to make differential database backup to the local disk

BACKUP DATABASE [SampleDB]
TO DISK='C:\BACKUP\SampleDB_diff.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM
GO

 

-- Example on how to make the log backup

BACKUP LOG [SampleDB]
TO DISK='C:\BACKUP\SampleDB_log.trn'
WITH CHECKSUM
GO

 

Note: You can use any file name structure for the backup files. LRS does not require a specific backup file naming convention, as it scans all files placed on Azure Blob Storage and constructs the backup chain from reading the file headers only, regardless of their file names.

 

Upload backups from SQL Server to Azure Blob Storage

 

Use any of the following approaches to upload backups from the local disk to Azure Blob Storage:

 

  • Using SQL Server native BACKUP TO URL functionality.
  • Using Azcopy, or Azure Storage Explorer to copy backups to a blob container.
  • Using Storage Explorer in Azure Portal.
  • Schedule an Agent job on SQL Server to continuously make backups to Azure Blob Storage.
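As a sketch of the AzCopy approach, uploading a local backup folder might look like the following; the local path, storage account, and container name are placeholders, and the SAS token appended to the destination URL must allow writes (this upload token is separate from the Read/List-only token used by LRS itself):

```shell
# Hypothetical AzCopy upload of a local backup folder into a
# per-database folder inside the blob container.
azcopy copy "C:\BACKUP\SampleDB" \
  "https://<storageaccount>.blob.core.windows.net/<containername>?<write-sas-token>" \
  --recursive
```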

 

Option 2: Make backups from SQL Server directly to Azure Blob Storage

 

In case your operating procedures and networking allow backups directly to Azure Blob Storage, this option is faster in ensuring your backups are stored on Azure Blob Storage and ready to be used by LRS.

 

Generate a SAS token for write access to Azure Blob Storage in the Azure portal. Use the code sample below to add the access credential for the blob storage on SQL Server. Note that the SECRET value ('sharedaccesssignature') needs to be replaced with the actual SAS token, while the IDENTITY must remain the literal string 'SHARED ACCESS SIGNATURE'.

T-SQL

/* Example:
USE master
CREATE CREDENTIAL [https://msfttutorial.blob.core.windows.net/containername]
WITH IDENTITY='SHARED ACCESS SIGNATURE'
, SECRET = 'sharedaccesssignature'
GO */

 

USE master
CREATE CREDENTIAL [https://<storageaccount>.blob.core.windows.net/<containername>]
-- this name must match the container path, start with https and must not contain a forward slash at the end
WITH IDENTITY='SHARED ACCESS SIGNATURE'
-- this is a mandatory string and should not be changed
, SECRET = 'sharedaccesssignature'
-- this is the shared access signature key that you obtained in section 1.
GO

 

Then use the below code sample to make the backup directly to the Azure Blob Storage URL.

T-SQL

-- Example on how to make full database backup to Azure Blob Storage
BACKUP DATABASE [SampleDB]
TO URL = 'https://<storageaccount>.blob.core.windows.net/<containername>/SampleDB_full.bak'
WITH INIT, COMPRESSION, CHECKSUM
GO

 

-- Example on how to make differential database backup to Azure Blob Storage
BACKUP DATABASE [SampleDB]
TO URL = 'https://<storageaccount>.blob.core.windows.net/<containername>/SampleDB_diff.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM
GO

 

-- Example on how to make the log backup to Azure Blob Storage
BACKUP LOG [SampleDB]
TO URL = 'https://<storageaccount>.blob.core.windows.net/<containername>/SampleDB_log.trn'
WITH CHECKSUM
GO

 

For additional details on how to make backups from SQL Server to Azure Blob Storage, see Tutorial: Use Azure Blob Storage service with SQL Server.

 

Create SAS authentication token with List and Read permissions on Azure Blob Storage for LRS

 

Please pay particular attention to this section, as the #1 issue with not being able to start LRS is not having a valid SAS authentication token for Azure Blob Storage, or not copying it properly.

 

LRS needs permissions to access backup files on Azure Blob Storage. Generate a SAS authentication token for the entire storage container (not a specific file or files) with Read and List permissions only, following these steps:

 

  • Access storage account using Azure portal
  • Navigate to Storage Explorer (#1 in the screenshot below)
  • Expand Blob Container
  • Right click on the blob container (#2 in the screenshot below)
  • Select Get Shared Access Signature (#3 in the screenshot below)

[Screenshot: SAS TOKEN 01.PNG]

 

  • Select the token expiry timeframe. (#4 in the screenshot below)
  • Select the time zone for the token: UTC or your local time zone (your browser's time). (#5 in the screenshot below)
    • Please note that the time zones of the SAS token's start and expiry time and of your SQL Managed Instance might be different.
    • Ensure that the SAS token is valid, taking time zones into consideration, for the entire duration of your migration.
    • As a best practice, if possible, set the validity to start somewhat earlier and end somewhat later than your planned migration window, to avoid issues with different time zones.
  • Ensure Read and List only permissions are selected. (#6 in the screenshot below)
    • No other permissions must be selected, otherwise LRS will not be able to start. This security requirement is by design (LRS must not be able to write to your storage container).
  • Click on the Create button. (#7 in the screenshot below)

[Screenshot: SAS TOKEN 02.PNG]

 

The SAS authentication token will be generated with the time validity that you specified. Note that you will need the URI version of the generated token, as shown in the screenshot below.

[Screenshot: The generated token.PNG]

 

Copy parameters from SAS token generated

 

To properly use the SAS token to start LRS, we need to understand its structure. The URI of the generated SAS token consists of two parts, 1. StorageContainerUri and 2. StorageContainerSasToken, separated by a question mark (?), as shown in the image below.

 

[Image: Storage container URI.png]

 

  • The first part, starting with "https://" and running until the question mark (?), is used as the StorageContainerUri parameter that is fed as an input to LRS. It gives LRS information about the folder where the database backup files are stored.
  • The second part, starting after the question mark (?) with, in this example, "sp=", and running all the way to the end of the string, is the StorageContainerSasToken parameter. This is the actual signed authentication token, valid for the duration of the time specified. Note that this part does not necessarily need to start with "sp=" as shown; your case might differ.

Important to note is that you copy the first part, before the question mark, as the StorageContainerUri parameter, and the second part, after the question mark, as the StorageContainerSasToken parameter. The question mark (?) itself is not included in either of the parts.
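The split can be sketched in shell; the URI below is a fabricated example with the same shape as the generated token, split at the first question mark:

```shell
# Example SAS URI (fabricated values, following the structure shown above)
SAS_URI='https://storage4dani.blob.core.windows.net/lrs-container?sp=rl&st=2021-02-27T20:38:25Z&se=2021-02-28T20:38:25Z&sv=2020-02-10&sr=c&sig=REDACTED'

# StorageContainerUri: everything before the first '?'
STORAGE_CONTAINER_URI="${SAS_URI%%\?*}"
# StorageContainerSasToken: everything after the first '?', '?' excluded
STORAGE_CONTAINER_SAS_TOKEN="${SAS_URI#*\?}"

echo "$STORAGE_CONTAINER_URI"
echo "$STORAGE_CONTAINER_SAS_TOKEN"
```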

 

Copy parameters as follows:

 

1. Copy the first part of the token starting from https:// all the way until the question mark (?) and use it as StorageContainerUri parameter in PowerShell or CLI for starting LRS, as shown in the screenshot below.

[Screenshot: SAS TOKEN - part 1.PNG]

 

2. Copy the second part of the token, starting after the question mark (?) and running to the end of the string, and use it as the StorageContainerSasToken parameter in PowerShell or CLI for starting LRS, as shown in the screenshot below.

 

[Screenshot: SAS TOKEN - part 2.PNG]

 

Important: 

  • Permissions for the SAS token for Azure Blob Storage need to be Read and List only. If any other permissions are granted for the SAS authentication token, starting the LRS service will fail. These security requirements are by design.
  • The token must have the appropriate time validity. Please ensure time zones between the generated token and the managed instance are taken into consideration.
  • Ensure that the StorageContainerUri parameter for PowerShell or CLI is copied from the URI of the generated token, starting from https:// until the question mark (?). Do not include the question mark.
  • Ensure that the StorageContainerSasToken parameter for PowerShell or CLI is copied from the URI of the generated token, starting after the question mark (?) until the end of the string. Do not include the question mark.


Log in to Azure and select subscription


Use the following PowerShell cmdlet to log in to Azure:

PowerShell

Login-AzAccount

 

Select the appropriate subscription where your SQL Managed Instance resides using the following PowerShell cmdlet:

PowerShell

Select-AzSubscription -SubscriptionId <subscription ID>

 

Start the migration


The migration is started by starting the LRS service. The service can be started in autocomplete, or continuous mode. When started in autocomplete mode, the migration will complete automatically when the last backup file specified has been restored. This option requires the start command to specify the filename of the last backup file. When LRS is started in continuous mode, the service will continuously restore any new backup files added, and the migration will complete on the manual cutover only.

 

Start LRS in autocomplete mode

 

To start LRS service in autocomplete mode, use the following PowerShell, or CLI commands. Specify the last backup file name with -LastBackupName parameter. Upon restoring the last backup file name specified, the service will automatically initiate a cutover.

 

Start LRS in autocomplete mode - PowerShell example:

PowerShell

Start-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
-InstanceName "ManagedInstance01" `
-Name "ManagedDatabaseName" `
-Collation "SQL_Latin1_General_CP1_CI_AS" `
-StorageContainerUri "https://storage4dani.blob.core.windows.net/lrs-container" `
-StorageContainerSasToken "sp=rl&st=2021-02-27T20:38:25Z&se=2021-02-28T20:38:25Z&sv=2020-02-10&sr=c&sig=itBHvhmJxcGrArSka7ra%2Fbgj3YMlYEhQH773jmt5E%2F0%3D" `
-AutoCompleteRestore `
-LastBackupName "last_backup.bak"

 

Start LRS in autocomplete mode - CLI example:

CLI

az sql midb log-replay start -g mygroup --mi myinstance -n mymanageddb -a --last-bn "last_backup.bak" \
--storage-uri "https://storage4dani.blob.core.windows.net/lrs-container" \
--storage-sas "sp=rl&st=2021-02-27T20:38:25Z&se=2021-02-28T20:38:25Z&sv=2020-02-10&sr=c&sig=itBHvhmJxcGrArSka7ra%2Fbgj3YMlYEhQH773jmt5E%2F0%3D"

 

Start LRS in continuous mode

 

Start LRS in continuous mode - PowerShell example:

PowerShell

Start-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
-InstanceName "ManagedInstance01" `
-Name "ManagedDatabaseName" `
-Collation "SQL_Latin1_General_CP1_CI_AS" `
-StorageContainerUri "https://storage4dani.blob.core.windows.net/lrs-container" `
-StorageContainerSasToken "sp=rl&st=2021-02-27T20:38:25Z&se=2021-02-28T20:38:25Z&sv=2020-02-10&sr=c&sig=itBHvhmJxcGrArSka7ra%2Fbgj3YMlYEhQH773jmt5E%2F0%3D"

 

Start LRS in continuous mode - CLI example:

 

CLI

az sql midb log-replay start -g mygroup --mi myinstance -n mymanageddb \
--storage-uri "https://storage4dani.blob.core.windows.net/lrs-container" \
--storage-sas "sp=rl&st=2021-02-27T20:38:25Z&se=2021-02-28T20:38:25Z&sv=2020-02-10&sr=c&sig=itBHvhmJxcGrArSka7ra%2Fbgj3YMlYEhQH773jmt5E%2F0%3D"

 

Scripting LRS start in continuous mode

 

The PowerShell and CLI clients that start LRS in continuous mode are synchronous: the client waits for the API response to report success or failure in starting the job, and during this wait the command does not return control to the command prompt. If you are scripting the migration experience and require the LRS start command to return control immediately so the rest of the script can continue, you can run PowerShell as a background job with the -AsJob switch. For example:

 

PowerShell

$lrsjob = Start-AzSqlInstanceDatabaseLogReplay <required parameters> -AsJob

 

When you start a background job, a job object returns immediately, even if the job takes an extended time to finish. You can continue to work in the session without interruption while the job runs. For details on running PowerShell as a background job, see the PowerShell Start-Job documentation.

 

Similarly, to start a CLI command on Linux as a background process, use the ampersand (&) sign at the end of the LRS start command.

 

CLI

az sql midb log-replay start <required parameters> &

 

Important: Once LRS has been started, any system-managed software patches will be halted for the next 36 hours. After this window passes, the next automated software patch will automatically stop the ongoing LRS. In that case, the migration cannot be resumed and needs to be restarted from scratch.

 

Monitor the migration progress

 

To monitor the migration operation progress, use the following PowerShell command:

PowerShell

Get-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
-InstanceName "ManagedInstance01" `
-Name "ManagedDatabaseName"

 

To monitor the migration operation progress, use the following CLI command:

CLI

az sql midb log-replay show -g mygroup --mi myinstance -n mymanageddb


Stop the migration

In case you need to stop the migration, use the following cmdlets. Stopping the migration deletes the restoring database on SQL Managed Instance, due to which it will not be possible to resume the migration.

 

To stop/abort the migration process, use the following PowerShell command:

PowerShell

Stop-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
-InstanceName "ManagedInstance01" `
-Name "ManagedDatabaseName"

 

To stop/abort the migration process, use the following CLI command:

 

CLI

az sql midb log-replay stop -g mygroup --mi myinstance -n mymanageddb

 

Complete the migration (continuous mode)

 

In case LRS was started in continuous mode, once you have ensured that all backups have been restored, initiating the cutover will complete the migration. Upon cutover completion, the database will be migrated and ready for read and write access.

 

To complete the migration process in LRS continuous mode, use the following PowerShell command:

 

PowerShell

Complete-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
-InstanceName "ManagedInstance01" `
-Name "ManagedDatabaseName" `
-LastBackupName "last_backup.bak"

 

To complete the migration process in LRS continuous mode, use the following CLI command:

 

CLI

az sql midb log-replay complete -g mygroup --mi myinstance -n mymanageddb --last-backup-name "last_backup.bak"

 

Successful migration (example)

 

Upon completion of restoring all backups (if LRS was started in autocomplete mode), or on manual completion (if LRS was started in continuous mode), PowerShell or CLI will show the success of the operation.

 

Shown in the screenshot below is the successful completion of LRS started in autocomplete mode, restoring the AdventureWorks2019 database on SQL Managed Instance.

[Screenshot: Successful restore 01.PNG]

On completion of the process, successfully restored databases will be available for read and write operations on SQL Managed Instance. Shown in the screenshot below is an example of the AdventureWorks2019 database successfully restored on SQL Managed Instance.

[Screenshot: Successful restore 02.PNG]

 

Functional limitations

 

Functional limitations of Log Replay Service (LRS) are:

 

  • Database being restored cannot be used for read-only access during the migration process.
  • System-managed software patches are blocked for 36 hours after LRS is started. Upon expiry of this time window, the next software update will stop LRS. In such a case, LRS needs to be restarted from scratch.
  • LRS requires databases on the SQL Server to be backed up with CHECKSUM option enabled.
  • SAS token for use by LRS needs to be generated for the entire Azure Blob Storage container, and must have Read and List permissions only.
  • Backup files for different databases must be placed in separate folders on Azure Blob Storage.
  • LRS needs to be started separately for each database pointing to separate folders with backup files on Azure Blob Storage.
  • LRS can support up to 100 simultaneous restore processes per single SQL Managed Instance.

 

Troubleshooting

 

Once you start LRS, use the monitoring cmdlets (PowerShell Get-AzSqlInstanceDatabaseLogReplay, or CLI az sql midb log-replay show) to see the status of the operation. If after some time LRS fails to start with an error, check for some of the most common issues:

 

  • Does a database with the same name as the one you are migrating from SQL Server already exist on SQL Managed Instance? If so, resolve this conflict by renaming one of the databases.
  • Did you back up the database on SQL Server using the CHECKSUM option?
  • Are the permissions on the SAS token Read and List only for the LRS service?
  • Did you copy the StorageContainerUri parameter from the generated token URI starting from "https://" until the question mark (?), without including the question mark?
  • Did you copy the StorageContainerSasToken from the generated token URI starting after the question mark (?), not including the question mark itself, all the way to the end of the string?
  • Is the SAS token validity window applicable for the time of starting and completing the migration? Did you take into consideration differences in time zones between SQL Managed Instance and the SAS token? Try regenerating the SAS token with an extended validity window before and after the current date.
  • Are the database name, resource group name, and managed instance name spelled correctly?
  • In case of autocomplete, did you specify a valid file name for the last backup file?
  • If scripting the LRS start in continuous mode, did you start it as a background process with the "-AsJob" switch so that PowerShell returns execution immediately to the next script command?

 

More resources

 

To read about new improvements (June 2022) to the migration experience using LRS, see Resumable restore improves SQL Managed Instance database migrations experience.

 

Disclaimer

 

Please note that products and options presented in this article are subject to change. This article reflects the Log Replay Service option available for Azure SQL Managed Instance in June 2022.

 

Closing remarks

 

If you find this article useful, please like it on this page and share through social media.

 

To share this article, you can use the Share button below, or this short link: https://aka.ms/mi-logshipping.

Last update: Jul 14 2022