Announcing Azure Synapse Pathway: Turbocharge your data warehouse migration

Published Mar 02 2021 06:00 AM 26.3K Views

Data warehouses contain the most important assets for an organization – their data. As companies grow, so does their need to analyze all the data they generate. Organizations are modernizing their data warehouses to reduce costly on-premises datacenter footprints while using advanced analytics to improve business outcomes. Moving data to the cloud is no small task. Integrating siloed data across business units for a migration comes with many challenges, because business logic and other database objects need to be disentangled when that data moves.


One of the more complex migration tasks is translating SQL code. Converting hundreds of thousands of lines of legacy code across database objects requires data teams either to manually rewrite the existing code or to hire a vendor to complete the task. Both approaches require thousands of hours of intensive labor and knowledge transfer of business and system logic, are prone to human error, and incur a large cost.


We are pleased to announce Azure Synapse Pathway to help simplify and accelerate migration for both on-premises and cloud data warehouses to Azure Synapse Analytics.


[Animation: Azure Synapse Pathway]


How does it work?

Azure Synapse Pathway connects to the source system and inspects details about your database objects. An assessment report captures further details on the database objects that can be translated to Azure Synapse Analytics. Azure Synapse Pathway then automatically converts and optimizes the source database objects into T-SQL code for Azure Synapse Analytics. This means your existing code, whether a thousand lines or a million, will be converted by Azure Synapse Pathway.
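To give a sense of what such a translation involves, here is a hypothetical sketch (not actual Azure Synapse Pathway output; the table, column names, and type mappings are illustrative) of a Snowflake table definition alongside a plausible Azure Synapse Analytics equivalent:

```sql
-- Source: Snowflake DDL (hypothetical example)
CREATE TABLE sales.orders (
    order_id   NUMBER(38, 0),
    customer   STRING,
    order_date TIMESTAMP_NTZ
);

-- A plausible Synapse dedicated SQL pool translation:
-- Snowflake types are mapped to T-SQL equivalents, and a
-- distribution strategy is added for the MPP engine.
CREATE TABLE sales.orders (
    order_id   DECIMAL(38, 0),
    customer   NVARCHAR(4000),
    order_date DATETIME2
)
WITH (
    DISTRIBUTION = ROUND_ROBIN,
    CLUSTERED COLUMNSTORE INDEX
);
```

The exact type mappings and distribution choices the tool emits may differ; the point is that each object's syntax must be rewritten for the target engine.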


As a result of these capabilities, the traditional process of manual code conversion can now be automated in a fraction of the time, all while eliminating manual errors and reducing the total cost of the migration.


This preview version of Azure Synapse Pathway supports conversion of databases, schemas, and tables from IBM Netezza, SQL Server, and Snowflake today. Soon, we'll provide support for Teradata, followed by Redshift, BigQuery, and Hive-based Hadoop solutions. We'll also expand the surface area of conversion to database views, procedures, functions, and more to further simplify the automation of analytics workloads into Azure Synapse.


Get Started Today

Occasional Visitor

It seems the Pathway documentation link is not available yet. This needs a quick fix first :)



The docs will be live around 9am PST. 

Occasional Visitor

Is there a command-line tool available to automate the data warehouse migration? How do we automate this process?


Hi, can you please delve a little deeper into migrating existing T-SQL code from on-premises SQL Servers? Shouldn't T-SQL be the same? Can this service be leveraged to address the compatibility / feature-parity issues raised by the Data Migration Assistant? Thanks!


@rgupta8 On installation, AspCmd.exe will be available in C:\Program Files (x86)\Azure Synapse Pathway (Preview). Use --help to get details on the available options.


@Mauro_mauromi For Synapse-supported T-SQL statements, the translation from SQL Server will be the same in most cases, but a few statements differ.
For example, the CREATE TABLE syntax differs between SQL Server and Synapse.
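To illustrate the kind of CREATE TABLE difference mentioned above (a sketch with an assumed table name, not Pathway output): in a Synapse dedicated SQL pool, a table definition additionally declares how rows are distributed and stored, which standard SQL Server DDL does not.

```sql
-- SQL Server: storage options are implicit; the table lives on one instance.
CREATE TABLE dbo.FactSales (
    SaleKey BIGINT NOT NULL,
    Amount  DECIMAL(18, 2)
);

-- Synapse dedicated SQL pool: the same table also declares how rows are
-- distributed across the pool's distributions and how they are stored.
CREATE TABLE dbo.FactSales (
    SaleKey BIGINT NOT NULL,
    Amount  DECIMAL(18, 2)
)
WITH (
    DISTRIBUTION = HASH(SaleKey),
    CLUSTERED COLUMNSTORE INDEX
);
```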


Hi any plans for translation from Oracle?

@Mauro_mauromi You can also use SQL Server Migration Assistant (SSMA) for migrations from Oracle to Synapse Pools (formerly called SQL Data Warehouse). Please download the free tool here: Download Microsoft SQL Server Migration Assistant for Oracle from Official Microsoft Download Center. The major difference between SSMA and Pathway is that SSMA needs a connection to the source system to collect the schema information, whereas Pathway works in an offline manner.

Should the term "optimized" be used in this scenario? I believe it just converts and translates the code to make it compatible, but I don't think it actually optimizes the code.


@Maureen Magnotta Sanders In the scenarios where DML or complex DDL is processed (not yet publicly available), code is optimized for Synapse SQL. 

Occasional Visitor

I tried using this tool. It looks like we cannot use DDL scripts generated as-is from SQL Server. For example, foreign key relationships are not supported and the tool throws an error. Guidelines for generating the source T-SQL scripts are probably required.

@srinidon In really large database systems (VLDBs), data isn't modeled with enforced primary / foreign key constraints. That validation typically happens at load time via the ETL process. In a data warehouse you would typically not do single-row insert operations as you would in an OLTP application.
You still have primary keys and foreign keys, but they are not enforced by the DBMS, because enforcement would slow down distributed operations significantly. Done right, this approach yields better performance at query time.
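As a concrete sketch of what "not enforced by the DBMS" looks like in a Synapse dedicated SQL pool (table and constraint names here are assumed for illustration): PRIMARY KEY and UNIQUE constraints are accepted only as metadata hints and must be declared NOT ENFORCED, while FOREIGN KEY constraints are not supported at all, leaving referential integrity to the ETL/ELT process.

```sql
-- Synapse dedicated SQL pool: the primary key is metadata only.
-- It must be NONCLUSTERED and NOT ENFORCED; the engine does not
-- validate it. No FOREIGN KEY clause can be declared.
CREATE TABLE dbo.DimCustomer (
    CustomerKey INT NOT NULL,
    CustomerId  NVARCHAR(50),
    CONSTRAINT PK_DimCustomer
        PRIMARY KEY NONCLUSTERED (CustomerKey) NOT ENFORCED
)
WITH (
    DISTRIBUTION = REPLICATE,
    CLUSTERED COLUMNSTORE INDEX
);
```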

Please have a look at the Best practices here and here

Occasional Visitor

This is all good. However, if you have a poorly designed source system, you could end up with a poorly designed Synapse environment that costs a lot of money.

Version history
Last update:
‎Mar 04 2021 03:59 PM