Perform data migration from Azure SQL Managed Instance to Azure SQL Database using Azure Data Factory
Overview
In this post, I’ll show you how to perform a data migration from Azure SQL Managed Instance (SQL MI) to Azure SQL Database (SQL DB) using a combination of schema + security migration steps and Azure Data Factory (ADF) for moving the data. In an earlier post (Data Migration - Azure SQL MI and Azure SQL DB | Microsoft Community Hub), I briefly covered several approaches for moving data from SQL MI to SQL DB. This article focuses on an approach that works well when you want a repeatable data-movement pipeline but still need to migrate database objects and users as part of a complete cutover. ADF is a good fit when you need a repeatable pipeline or when the source data volume is large. The sections below walk you through the key configuration steps to copy data from SQL MI to SQL DB using ADF.
Note: If you want to migrate in the opposite direction (Azure SQL Database → SQL Managed Instance), you can follow the same general flow in this article—just swap which platform is the source and which is the destination and adjust any feature-compatibility work accordingly.
The steps below assume that you are migrating data over a private endpoint so that all traffic remains on a secure, private network path. In this configuration, you deploy a Windows VM to host a Self‑Hosted Integration Runtime (SHIR). If your requirements permit data movement over the public network, you can instead use the Azure Integration Runtime (Azure IR), which removes the need for a Windows VM and SHIR. In that case, Azure IR handles the data movement, and traffic flows over the public network rather than through a private endpoint.
High-level steps
- Assess compatibility and feature differences (SQL MI vs SQL DB)
- Migrate schema and database objects (tables, views, procs, functions, etc.)
- Migrate users and permissions (and map logins to users where applicable)
- Deploy Azure Data Factory (ADF)
- (If using private endpoints) Provision a Windows VM to host SHIR and register it in ADF
- Create linked services (connections) for SQL MI and SQL DB
- Run the data copy using the Copy Data tool (or a pipeline), validate, and cut over
Pre-migration assessment (SQL MI → SQL DB)
There is no automated assessment tool available to evaluate readiness for migrating between Azure SQL Managed Instance and Azure SQL Database. As a result, pre‑migration assessment is a manual exercise. Before migrating, you should validate high‑level compatibility, including supported features, removal of instance‑level or cross‑database dependencies, security model alignment, target service tier sizing, and application connectivity behavior. Differences in platform capabilities and architecture should be reviewed early to avoid migration or post‑migration issues.
Here is a checklist to consider:
- ☐ Feature compatibility reviewed (no unsupported MI features in use)
- ☐ No instance‑level or cross‑database dependencies required
- ☐ Security model validated (database‑scoped users and authentication)
- ☐ Target SQL Database service tier identified and sized
- ☐ Application connectivity updated (connection strings, retries, HA behavior)
- ☐ Proof‑of‑concept testing completed with a representative database
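Because the assessment is manual, it can help to turn the feature review into a simple, repeatable gate. The sketch below is illustrative only: the feature names and the blocked list are assumptions standing in for whatever inventory you build for your own databases, not an official compatibility matrix.

```python
# Hypothetical pre-migration gate: flag SQL MI features that Azure SQL Database
# does not support. The feature list below is illustrative, not exhaustive;
# build your own from the official feature-comparison documentation.
UNSUPPORTED_IN_SQL_DB = {
    "sql_agent_jobs",
    "cross_database_queries",
    "linked_servers",
    "clr_assemblies",
    "service_broker_cross_instance",
    "database_mail",
}

def assess(features_in_use: set[str]) -> list[str]:
    """Return the blocking features, sorted for a stable report."""
    return sorted(features_in_use & UNSUPPORTED_IN_SQL_DB)

# Example: a database using SQL Agent jobs and columnstore indexes.
blockers = assess({"sql_agent_jobs", "columnstore_indexes"})
```

Running this against each source database gives you a short remediation list to work through before scheduling the migration.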
Migrate schema and database objects
ADF is primarily a data movement service, so migrate your (*1) schema and programmable objects first. Common options include deploying a DACPAC (SqlPackage) to Azure SQL Database, using SQL Server Data Tools (SSDT) to publish a database project, or generating scripts from SSMS (tables, views, stored procedures, functions, synonyms, sequences, user-defined types, etc.). Ensure the target schema matches the source (data types, collations, constraints) before you start the bulk data copy.
(*1) In the Copy Data tool step (described later), ADF can auto-create the table schemas (and views) based on the source tables. So, you can skip migrating the schema upfront; however, programmable objects (stored procedures, functions, etc.) and other non-schema objects still have to be migrated.
- Create the target Azure SQL Database (and choose the correct server, tier, and size).
- Deploy the schema and objects to SQL DB (DACPAC/SSDT/scripts).
- Create any supporting objects needed for the load (for example, schemas; filegroups don’t apply in SQL DB).
- Decide when to create indexes and foreign keys: for very large loads, creating them after the data copy can significantly speed up ingestion.
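One way to script the DACPAC route is to drive SqlPackage from a small wrapper: extract a DACPAC from SQL MI, then publish it to SQL DB. The sketch below only builds the command lines; the server names, database name, and file path are placeholders, and it assumes SqlPackage is installed and that you supply authentication (prefer Entra ID over embedded credentials) before running.

```python
# Sketch: build SqlPackage command lines to extract a DACPAC from SQL MI and
# publish it to Azure SQL Database. All names below are placeholders.
import subprocess

def sqlpackage_args(action: str, server: str, database: str, dacpac: str) -> list[str]:
    # Extract reads from a Source server and writes a Target file;
    # Publish reads a Source file and writes to a Target server.
    side = "Source" if action == "Extract" else "Target"
    file_role = "TargetFile" if action == "Extract" else "SourceFile"
    return [
        "sqlpackage",
        f"/Action:{action}",
        f"/{side}ServerName:{server}",
        f"/{side}DatabaseName:{database}",
        f"/{file_role}:{dacpac}",
    ]

extract = sqlpackage_args("Extract", "your-mi.abc123.database.windows.net",
                          "WideWorldImporters", "wwi.dacpac")
publish = sqlpackage_args("Publish", "your-server.database.windows.net",
                          "WideWorldImporters", "wwi.dacpac")
# Once connectivity and authentication are in place:
# subprocess.run(extract, check=True)
# subprocess.run(publish, check=True)
```

Publishing the DACPAC before the bulk copy gives you the full, feature-equivalent schema that ADF’s auto-create option cannot produce (see Notes and Observations below).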
Deploy ADF
Follow the steps in Create an Azure Data Factory - Azure Data Factory | Microsoft Learn to deploy ADF. After deployment, launch Azure Data Factory Studio; we’ll use it to configure the integration runtime, linked services, and the copy activity.
Provision Windows VM – used to host SHIR
This Azure VM hosts the Self-hosted Integration Runtime (SHIR), which ADF uses to connect to your SQL MI and SQL DB over private networking and to run the copy operation.
Configure SHIR in Azure Data Factory and install SHIR on Azure VM
On the page for your data factory, select Launch Studio to open Azure Data Factory Studio. The steps below assume you are working from the Azure VM that will host SHIR. If the VM can’t download the SHIR installer (for example, it has no outbound internet access), download the installer from a machine that has internet access and then copy it to the VM.
Select (1) Manage, and then (2) Integration runtimes to display the integration runtimes.
Select New which will display the Integration runtime setup page at right.
- Select Azure, Self-Hosted, and then select Continue.
- Select Self-Hosted, and then select Continue.
- Enter a name for the integration runtime, and then select Create.
- For this example, use Manual setup (Option 2) rather than express setup. Download the integration runtime installer from the provided link.
After downloading the SHIR installer, close the page for now. Back on the Integration runtimes page (in ADF Studio), you’ll see the new runtime listed with a status of 'Unavailable'. It will change to 'Running' after you install and register SHIR on the VM.
Run the SHIR installer (the .msi file) and complete the installation wizard. At the end of the wizard, you’ll be prompted to register the integration runtime.
To get the authentication key, return to ADF Studio, select your integration runtime on the Integration runtimes page to open Edit integration runtime, and then copy the key. Paste the key into the SHIR registration dialog on the VM. Select Register to complete registration.
After registration completes, select Finish to close the wizard. You can leave Enable remote access from intranet unchecked unless you plan to add additional SHIR nodes for high availability and scalability. You can change this setting later if needed.
If registration succeeds, the integration runtime shows a status of 'Running' on the Integration runtimes page.
Configure Linked Service (connections to source and target)
In this step, you configure connections to the source (SQL MI) and the target (Azure SQL DB).
Important: ADF copy activities move data only; they don’t create the target database. You must create the target database ahead of time.
In ADF Studio, go to Manage > Linked services. This will display the New linked service pane.
Note: If you already have linked services created for both SQL MI and SQL DB, you can reuse them—when configuring the Copy Data tool or a pipeline, simply select the existing linked service for the source and the existing linked service for the destination.
In Data stores, search for 'sql' to display the related connectors. Select the connector for SQL MI, and then select Continue.
Enter required properties to connect to SQL MI.
For Connect via integration runtime, select the self-hosted integration runtime you created earlier.
Set the remaining properties (server name, database name, authentication, and so on) as appropriate for your environment.
Example settings used in this demo (key properties only):
- Endpoint type: Private endpoint
- Authentication type: SQL authentication (demo only—use your preferred/approved auth method)
- Trust server certificate: Selected
Next, create a linked service for the target server (Azure SQL Database). Follow the same general steps you used for SQL MI, selecting the appropriate connector and the same SHIR (if you are using private endpoints).
When finished, you should see two linked services configured—one for SQL MI and one for SQL DB.
Configure Ingestion and run data migration
Now we are ready to perform the data migration. On the Home page in ADF Studio, select Ingest. This will display the Copy Data tool page.
Important: Before you start, make sure the target database exists in Azure SQL Database and that you have permissions to create/insert data into the target tables.
For larger migrations, consider copying in batches (for example, by date/key range) and using parallelism where appropriate to improve throughput while staying within SQL DB resource limits. Plan time for validation (row counts/checksums or targeted queries) and decide how you will handle ongoing changes during cutover (for example, a final delta load or an application downtime window).
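One way to implement the batching idea is to split each large table’s key range into fixed-size slices and feed them to a parameterized pipeline (for example, via a ForEach activity). The sketch below shows only the range arithmetic and query construction; the table, key column, and batch size are placeholders you would match to your own clustered keys.

```python
# Sketch: split a numeric key range into half-open [lo, hi) batches that a
# parameterized copy pipeline can process in parallel. Names are placeholders.
def key_ranges(min_key: int, max_key: int, batch_size: int) -> list[tuple[int, int]]:
    """Return half-open ranges covering [min_key, max_key]."""
    ranges = []
    lo = min_key
    while lo <= max_key:
        hi = min(lo + batch_size, max_key + 1)
        ranges.append((lo, hi))
        lo = hi
    return ranges

def source_query(table: str, key: str, lo: int, hi: int) -> str:
    """Build the per-batch source query for the copy activity."""
    return f"SELECT * FROM {table} WHERE {key} >= {lo} AND {key} < {hi}"

batches = key_ranges(1, 10, 4)
queries = [source_query("Sales.Orders", "OrderID", lo, hi) for lo, hi in batches]
```

Keep the degree of parallelism low enough that the combined load stays within the target SQL DB’s DTU/vCore and log-rate limits.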
The Copy Data tool page
Select Built-in copy task, select Run once now, and then select Continue.
Select Source type and connection. Select table(s) to migrate.
Select Next on the Apply filter page.
Select the destination
Select Next to go to the Column mapping page. Unless you want to customize the column/field mapping from source to destination, select Next to accept the default settings.
On the next Copy Data tool page, leave the default settings and select Next.
On the Summary page, select Next to start data migration.
After you select Next, the migration starts.
Checking migration status
Select Finish to close the page, and then go to the Monitor hub in ADF Studio to track the migration progress.
Validation and cutover
After schema, security, and data are migrated, validate the target before switching applications over to SQL DB. Validation typically includes row counts, checksum sampling, and running key application queries and reports against the new database. For cutover, plan how you will handle changes occurring on SQL MI while the bulk copy is running (for example, schedule downtime, or run a final delta load if your design supports it).
- Validate schema: object counts, constraints, and (if used) post-deployment scripts.
- Validate security: users/groups exist, role memberships are correct, and the app can connect with least privilege.
- Validate data: row counts per table, spot-check aggregates, and targeted checksum comparisons where feasible.
- Cut over: update connection strings/DNS, monitor workload in SQL DB, and keep a rollback plan.
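The row-count validation can be automated by capturing per-table counts from source and target (for example, with COUNT(*) queries or sys.dm_db_partition_stats) and diffing them. The comparison logic is sketched below; the table names and counts are illustrative, and how you collect the counts is up to you.

```python
# Sketch: diff per-table row counts captured from source and target.
# Table names and counts below are illustrative placeholders.
def compare_counts(source: dict[str, int], target: dict[str, int]) -> dict[str, str]:
    """Return a report of tables that are missing or mismatched in the target."""
    issues = {}
    for table, src_rows in source.items():
        tgt_rows = target.get(table)
        if tgt_rows is None:
            issues[table] = "missing in target"
        elif tgt_rows != src_rows:
            issues[table] = f"source={src_rows} target={tgt_rows}"
    return issues

report = compare_counts(
    {"Sales.Orders": 73595, "Sales.Invoices": 70510},
    {"Sales.Orders": 73595},
)
```

An empty report means every source table exists in the target with a matching count; anything else goes on the remediation list before cutover.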
Notes and Observations
Migrating database objects
Although this article recommends migrating schemas and database objects before copying data, this demo relies on Azure Data Factory’s automatic schema generation for simplicity. This is useful for testing or learning ADF workflows, but it is not suitable for production migrations.
The Wide World Importers sample database, which was used as the source database, includes a memory‑optimized table, but ADF generated a standard table instead. This illustrates a key limitation: ADF creates tables using basic metadata (columns, data types, and keys) and does not generate full, feature‑equivalent schemas.
Best practice: Use tools such as SSMS, SSDT, DACPAC, or BACPAC to prepare complete schemas and objects in the target database and use ADF strictly for data movement.
Online migration
Online migration keeps the source database available while data is copied to the target using a repeatable pipeline such as Azure Data Factory. Schema and security are prepared in advance, and the bulk data copy runs without requiring an extended outage.
Changes made to the source during the copy aren’t automatically synchronized, so a planned cutover—such as a short downtime window or final validation step—is still required to ensure data consistency before switching applications to the target.
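If your design supports a final delta load, one common pattern is watermark-based filtering: record when the bulk copy started, then copy only rows whose change-tracking column is newer. The sketch below assumes each table has a reliable watermark column (for example, a LastEditedWhen datetime, as in Wide World Importers); the names are placeholders, and a production version would use parameterized queries rather than string interpolation.

```python
# Sketch: build the source query for a watermark-based final delta load.
# Table and column names are placeholders; use parameterized queries in practice.
def delta_query(table: str, watermark_col: str, watermark: str) -> str:
    """Select only rows changed after the recorded watermark."""
    return f"SELECT * FROM {table} WHERE {watermark_col} > '{watermark}'"

# Watermark captured just before the bulk copy started:
q = delta_query("Sales.Orders", "LastEditedWhen", "2024-06-01T00:00:00")
```

Run the delta copy inside the cutover window, after writes to the source have stopped, so no further changes can slip past the watermark.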
Summary
Migrating from SQL MI to Azure SQL Database is more than a single copy operation. A successful full migration typically includes (1) assessing feature compatibility, (2) deploying schema and objects to the target, (3) recreating users/roles/permissions, and (4) using ADF to move the data in a repeatable, monitorable way—optionally through private networking with SHIR. After the copy completes, validate schema, security, and data, then execute a planned cutover to move applications to SQL DB.
- Assess and remediate MI→SQL DB compatibility gaps.
- Migrate schema + programmable objects (DACPAC/SSDT/scripts) before loading data.
- Migrate security: users, roles, role memberships, and permissions (prefer Entra ID where possible).
- Configure ADF (and SHIR if using private endpoints), then run Copy Data/pipelines for the data load.
- Validate results and cut over with a rollback plan.