Database Platform CSE – SQL
Data Migration Strategies for Large-Scale Sybase to SQL Migrations Using SSMA, SSIS and ADF – Part 1
Introduction

In today's data-driven landscape, database migration is a crucial task that requires meticulous planning and execution. Our recent project migrating data from Sybase ASE to Microsoft SQL Server set out to illuminate this process, using tools such as SQL Server Migration Assistant (SSMA), SQL Server Integration Services (SSIS) packages, and Azure Data Factory (ADF). We designed our tests to cover a range of database sizes: 1 GB, 10 GB, 100 GB, and 500 GB. Each size brought its own challenges and valuable insights, guiding our migration strategies for a smooth transition.

For the smaller databases (1 GB and 10 GB), we used SSMA, which proved efficient and reliable for straightforward migrations. SSMA was particularly effective at converting database schemas and moving data with minimal complication. As we scaled up to larger datasets (100 GB and 500 GB), we incorporated SSIS packages alongside Azure Data Factory to address the increased complexity and ensure robust performance throughout the migration.

Our exercises yielded important findings on performance metrics such as data throughput, error rates during transfer, and overall execution times for each migration approach. These insights helped us refine our methodologies and underscored the importance of selecting the right tool for each migration scenario when transitioning from Sybase to SQL Server. Ultimately, our experience showed that thorough analysis is essential for identifying potential bottlenecks and optimizing workflows, enabling successful and efficient migrations across databases of all sizes. The results reassure stakeholders that, with a well-considered approach and comprehensive testing, migrations can be executed seamlessly while maintaining the integrity of all datasets involved.

The strategy is presented in two parts. Part 1 covers the overview, the source (Sybase) and target (Azure SQL) environment setup, and the migration steps, elapsed times, and key learnings for SSMA and SSIS. The remaining sections are covered in Part 2.

Objectives of the Evaluation

- Assess the migration duration for various database sizes.
- Evaluate the capabilities and limitations of each tool.
- Identify optimal patterns for bulk data movement.
- Recommend the tool or parameter combination that gives the best results for different sample datasets and scenarios.

Overview of Tools

SQL Server Migration Assistant (SSMA) for SAP ASE
Microsoft SQL Server Migration Assistant (SSMA) for Sybase Adaptive Server Enterprise (ASE) is a tool for migrating SAP ASE databases to SQL Server 2012 (11.x) through SQL Server 2022 (16.x) on Windows and Linux, Azure SQL Database, or Azure SQL Managed Instance. It supports schema conversion, data migration, and limited post-migration testing. SSMA converts Sybase ASE database objects (tables, views, stored procedures, etc.) to Azure SQL-compatible formats and migrates data using a client-side or server-side engine.

SQL Server Integration Services (SSIS)
SQL Server Integration Services is a platform for building enterprise-level data integration and data transformation solutions. It offers flexibility in handling non-compatible objects, custom transformations, and large-scale data transfers through its pipeline architecture.
SSIS is particularly useful when SSMA cannot migrate certain objects or when additional data transformation is required, for example with unparsed SQL, SET option conversion issues, identifier conversion issues, date format conversions, and non-ANSI joins.

Azure Data Factory (ADF)
ADF is a fully managed, serverless, cloud-based data integration service for orchestrating and automating data movement and transformation across on-premises and cloud environments. It is well suited for hybrid migrations, large-scale data pipelines, and integration with Azure services such as Azure SQL. ADF excels in scenarios requiring scalability and parallel processing.

Environment Setup

For testing, we used the following setup:

Source Environment
- Sybase ASE version: 16.0 SP03
- OS: SUSE Linux Enterprise Server 15 SP6
- VM size: Standard B12ms (12 vCPUs, 48 GiB memory)

Target Environment
- SQL Server 2022 hosted on an Azure VM
- OS: Windows Server 2022
- VM size: Standard D16as v4 (16 vCPUs, 64 GiB memory)

Network
- Both VMs hosted in the same Azure region
- Premium SSD LRS disks for source and target

Metrics

We evaluated the tools based on:
- Data migration time: time to migrate 1 GB, 10 GB, 100 GB (1 x 100 GB table), and 500 GB (5 x 100 GB tables) of data.
- Scalability: performance with increased data volumes (up to 500 GB).

SQL Server Migration Assistant (SSMA)

Migration steps:
1. Install SSMA for SAP ASE 10.3 or above and the required drivers (Sybase ASE ODBC/ADO.NET providers).
2. Create an SSMA project and configure the source (Sybase ASE) and target (Azure SQL) connections.
3. Assess database compatibility, customize data type mappings (e.g., Sybase TEXT to SQL Server NVARCHAR(MAX)), and convert the schema.
4. Migrate data using SSMA's client-side or server-side engine.
5. Validate the migrated objects and data.

Test results:

| Test Scenario | Data Size | Time Taken | Throughput | Threads | Notes |
| --- | --- | --- | --- | --- | --- |
| Single Copy | 1 GB | 28 seconds | ~36 MB/s | 1 | Single threaded |
| Single Copy | 10 GB | 4 minutes 36 seconds | ~37 MB/s | 1 | Single threaded |
| Parallel Copy | 100 GB | 44 minutes | ~38 MB/s | 40 | Parallel threads, using project-level setting changes: migration engine, batch size, parallel data migration mode, and multi-loading |
| Scalability Test | 500 GB | 3 hours 44 minutes | ~38 MB/s | 40 | Parallel threads; throughput matched the 100 GB run because the sequential nature of SSMA's processing engine limits parallelism |

Key learnings:
- Excellent for schema conversion and small data sets.
- Not scalable beyond 100 GB; memory issues and slow single-threaded loads.
- Limited configuration options for tuning bulk inserts.

Performance tuning insights:

Migration Engine: We configured server-side data migration to optimize performance and reduce processing overhead. Client-side data migration means the SSMA client retrieves the data from the source and bulk inserts it into Azure SQL. Server-side data migration means the SSMA data migration engine (bulk copy program) runs on the Azure SQL box as a SQL Agent job, retrieving data from the source and inserting it directly into Azure SQL, avoiding an extra client hop and giving better performance. When choosing this method, you need to specify which version of BCP to use (32-bit or 64-bit).

Batch Size: Data is migrated in batches from the source tables into the Azure SQL tables within transactions. The batch size option determines how many rows are loaded into Azure SQL per transaction. The default is 10,000, but we increased it to 270,000 since our dataset contained 26,214,400 rows (around 100 GB).
Parallel Data Migration Mode: This option is available only when using the client-side data migration engine. It defines the number of parallel threads used during migration. By default it is set to Auto (10 threads); to change it, select Custom and specify the desired number of parallel threads. We set this to 40 to get the best results.

Multi-loading: With multi-loading enabled, SSMA uses multiple parallel threads to load data batches at the same time, which can significantly speed up migration for large tables. It essentially breaks the source data into chunks (based on the batch size setting) and loads them concurrently.

SQL Server Integration Services (SSIS)

Migration steps:
1. Create an SSIS project in SQL Server Data Tools (SSDT).
2. Configure an ODBC/ADO.NET source (Sybase ASE) and an ADO.NET/OLE DB destination (Azure SQL) with the appropriate drivers.
3. Build control flow and data flow tasks for each table, applying transformations for incompatible data types or business logic.
4. Execute packages in parallel for large tables, optimizing buffer size and commit intervals.
5. Monitor and log errors for troubleshooting.

Test results:

| Test Scenario | Data Size | Time Taken | Throughput | Threads | Notes |
| --- | --- | --- | --- | --- | --- |
| Single Copy | 1 GB | 31 seconds | ~33 MB/s | 1 | Single threaded |
| Single Copy | 10 GB | 4 minutes 16 seconds | ~40 MB/s | 1 | Single threaded |
| Parallel Copy | 100 GB | 38 minutes | ~44 MB/s | 5 | Parallel threads; tuned MaxConcurrentExecutables, DefaultBufferMaxRows, EngineThreads, and Azure SQL maximum server memory |
| Scalability Test | 500 GB | 3 hours 12 minutes | ~44 MB/s | 5 | Parallel threads; performance improved with larger buffer sizes (100 MB) and SSD I/O |

Key learnings:
- Very fast for large data volumes with tuning.
- Requires development time (package design, error handling).
- Bottleneck: network throughput on the 500 GB run.
- Best suited for 10–500 GB on-premises or IaaS migrations.

Performance tuning insights:

DefaultBufferSize: The task's buffer settings can be configured using the DefaultBufferSize property, which defines the buffer size, and the DefaultBufferMaxRows property, which specifies the maximum number of rows per buffer. By default the buffer size is 10 MB (with a maximum of 100 MB), and the default maximum number of rows is 10,000.

DefaultBufferMaxRows: The data flow engine begins sizing its buffers by calculating the estimated size of a single row of data. It then multiplies the estimated row size by the value of DefaultBufferMaxRows to obtain a preliminary buffer size. If the result is greater than DefaultBufferSize, the engine reduces the number of rows. If the result is less than the minimum buffer size calculated internally, the engine increases the number of rows. If the result falls between the minimum buffer size and DefaultBufferSize, the engine sizes the buffer as close as possible to the estimated row size times DefaultBufferMaxRows. If sufficient memory is available, it is better to use fewer large buffers instead of many small ones; in other words, performance improves when the number of buffers is minimized and each buffer holds as many rows as possible.

IsSorted: Sorting with an SSIS task is inherently slow. Avoiding unnecessary sorting enhances data flow performance in the package. If the data is already sorted, set the IsSorted property on the output of the upstream component to True.

ADO.NET Source: When retrieving data from a view using an OLE DB data source, choose SQL command as the data access mode and provide a SELECT statement.
Using a SELECT statement ensures that the view is accessed in the most efficient way.

OLE DB Destination: Several OLE DB Destination settings can significantly impact data transfer performance:
- Data Access Mode – This setting offers the Fast Load option, which internally uses a BULK INSERT statement to load data into the destination table instead of executing individual INSERT statements for each row. Unless you have a specific reason to change it, keep the default Fast Load option enabled. When using Fast Load, additional performance-related settings become available (listed below).
- Keep Identity – Unchecked by default. If the destination table has an identity column, SQL Server generates identity values automatically. Checking this option ensures that identity values from the source are preserved and inserted into the destination.
- Keep Nulls – Unchecked by default. If a NULL value is encountered in the source and the target column has a default constraint, the default value is inserted. Enabling this option preserves the NULL values from the source instead of applying the default constraint.
- Table Lock – Checked by default, meaning a table-level lock is acquired during the data load instead of multiple row-level locks. This prevents lock escalation issues and generally improves performance. Keep it enabled unless the table is actively being used by other processes at the same time.
- Check Constraints – Checked by default. This validates incoming data against the destination table's constraints. If you are confident the data will not violate constraints, unchecking this option can improve performance by skipping the validation step.
- Rows per Batch – The default value is -1, which means all incoming rows are treated as a single batch. You can change this to a positive integer to divide the incoming rows into multiple batches, where the value specifies the maximum number of rows per batch.
- Maximum Insert Commit Size – The default value is 2147483647 (the maximum for a 4-byte integer), which commits all rows in a single transaction once the load completes successfully. You can set this to a positive integer to commit data in smaller chunks. While committing more frequently adds overhead to the data flow engine, it reduces pressure on the transaction log and tempdb, preventing excessive growth during high-volume data loads.

DelayValidation: SSIS uses validation to determine whether the package could fail at runtime. SSIS performs two types of validation: package validation (early validation), which validates the package and all its components before execution starts, and component validation (late validation), which validates the components once the package has started. If you set DelayValidation to True, early validation is skipped and the components are validated only at the component level (late validation) during package execution.

MaxConcurrentExecutables: This package-level property specifies the number of executables (the different tasks inside the package) that can run in parallel; in other words, the number of threads the SSIS runtime engine can create to execute the package's executables concurrently.
EngineThreads: This property specifies the number of source threads (which pull data from the source) and worker threads (which transform and load data into the destination) that the data flow pipeline engine can create to manage data movement and transformation inside a data flow task. If EngineThreads is 5, up to 5 source threads and up to 5 worker threads can be created. Note that this property is only a suggestion to the data flow pipeline engine; the engine may create fewer or more threads if required.

IsolationLevel: In SQL Server Integration Services (SSIS), IsolationLevel is a property that defines how a transaction within a package interacts with other concurrent transactions in the database, determining the degree to which one transaction is isolated from the effects of others. The default is Serializable, which locks the entire data set being read and holds the lock until the transaction completes. Instead, set it to ReadUncommitted or ReadCommitted. ReadUncommitted reads data without waiting for other transactions to finish and can read rows that are not yet committed (dirty reads). ReadCommitted reads only committed rows, preventing dirty reads by waiting until other transactions commit or roll back. Use ReadUncommitted when you need speed and can tolerate inaccurate data; otherwise use ReadCommitted, which is slower because it must wait for locks to be released.

TransactionOption: The TransactionOption property in SSIS controls how tasks, containers, or the entire package participate in transactions to ensure data integrity. It determines whether a task or container starts a transaction, joins an existing one, or does not participate in any transaction. The property is available at the package level, the container level (e.g., For Loop, Foreach Loop, Sequence), and for individual control flow tasks (e.g., Execute SQL Task, Data Flow Task). Set this to NotSupported so that the task or container does not participate in any transaction, even if a parent container or package has started one. If a transaction exists at a higher level (e.g., the package or a parent container), the task or container operates outside of it. This is the fastest option because there is no transaction overhead.

Database Recovery Model: The target database is configured to use the bulk-logged recovery model, which minimally logs bulk operations. This is significantly more performant than the full recovery model, provided the bulk insert meets the criteria required for minimal logging. The criteria for the target table are as follows:
- The target table is not being replicated.
- Table locking is specified (using TABLOCK).
- If the table has no indexes, data pages are minimally logged.
- If the table does not have a clustered index but has one or more non-clustered indexes, data pages are always minimally logged. How index pages are logged depends on whether the table is empty: if the table is empty, index pages are minimally logged; if the table is non-empty, index pages are fully logged.
- If the table has a clustered index and is empty, both data and index pages are minimally logged. In contrast, if the table has a clustered index and is non-empty, data pages and index pages are both fully logged regardless of the recovery model.
Pulling High Volumes of Data: To enhance the speed and efficiency of the data migration, we devised an optimized approach: transform the target table into a heap by dropping all its indexes at the outset, eliminating the overhead of index maintenance during data insertion. The data is then transferred into the heap table, which is significantly faster because there are no indexes to maintain. Once the data transfer is complete, the indexes are recreated on the target table to restore its original structure and query performance. This drop-load-rebuild pattern substantially accelerates the overall migration compared with maintaining indexes throughout.
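The following T-SQL is a minimal sketch of that pattern on a SQL Server target, combined with the bulk-logged recovery model discussed above. The table, column, and index names are placeholders, the recovery-model switch applies only to SQL Server targets (Azure SQL Database does not allow changing the recovery model), and in practice the load step would be performed by the SSIS Fast Load destination or BCP rather than the illustrative INSERT shown here.

    -- Placeholder names: dbo.Table_1_100GB, IX_Table_1_100GB_Col1, stage.Table_1_100GB.
    ALTER DATABASE CURRENT SET RECOVERY BULK_LOGGED;        -- enable minimal logging for the bulk load
    DROP INDEX IX_Table_1_100GB_Col1 ON dbo.Table_1_100GB;  -- drop non-clustered indexes so the target becomes a heap

    -- Bulk load with a table lock so the insert qualifies for minimal logging
    -- (in practice this step is the SSIS Fast Load / BCP transfer).
    INSERT INTO dbo.Table_1_100GB WITH (TABLOCK)
    SELECT * FROM stage.Table_1_100GB;

    CREATE NONCLUSTERED INDEX IX_Table_1_100GB_Col1
        ON dbo.Table_1_100GB (Col1);                        -- recreate the indexes after the load
    ALTER DATABASE CURRENT SET RECOVERY FULL;               -- restore the original recovery model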
Feedback and suggestions

We hope this post has helped you configure your migration solution and choose the right options to successfully migrate your databases. The remaining sections are covered in Part 2 below. If you have feedback or suggestions for improving this data migration asset, please contact the Databases SQL Customer Success Engineering (Ninja) Team (datasqlninja@microsoft.com). Thanks for your support!

Note: For additional information about migrating various source databases to Azure, see the Azure Database Migration Guide.

Data Migration Strategies for Large-Scale Sybase to SQL Migrations Using SSMA, SSIS and ADF – Part 2

Introduction

Part 1 of this series covered the project background, the source (Sybase ASE) and target (Azure SQL) environment setup, and the migration steps, timings, and key learnings for SSMA and SSIS. Part 2 covers Azure Data Factory (ADF), the cumulative test results, performance improvement guidelines, and the conclusion.

Azure Data Factory (ADF)

Migration steps:
1. Set up an Azure Data Factory instance and an Integration Runtime (IR) for on-premises connectivity.
2. Create pipelines with Copy data activities, mapping Sybase ASE tables to SQL Server/Azure SQL.
3. Configure parallel copy settings and staging storage (Azure Blob Storage) for large datasets, if required.
4. Monitor pipeline execution and retry failed activities.

Test results:

| Test Scenario | Data Size | Time Taken | Throughput | Threads | Notes |
| --- | --- | --- | --- | --- | --- |
| Single Copy | 1 GB | 40 seconds | ~25 MB/s | 1 | Single threaded |
| Single Copy | 10 GB | 4 minutes 22 seconds | ~40 MB/s | 1 | Single threaded |
| Parallel Copy | 100 GB | 39 minutes | ~44 MB/s | 16 | Parallel threads |
| Scalability Test | 500 GB | 3 hours 10 minutes | ~45 MB/s | 16 | Parallel threads; ADF scaled better due to cloud elasticity |

Key learnings:
- Cloud-native and scalable: handled 500 GB with ease.
- Parallel copy and batch tuning make it faster as volume increases.
- Auto-scaling prevents many of the out-of-memory (OOM) errors seen with SSMA.
- Recommended for cloud targets (Azure SQL MI, Azure SQL DB).

Performance tuning insights:

In our setup we added a Lookup, a ForEach, and one Copy data activity to the pipeline.

Lookup: In the Settings tab, select the SQL dataset as the source. Instead of choosing the entire table, go to the "Use query" section and provide a partitioning query. In our pipeline, this query uses a recursive Common Table Expression (CTE) to dynamically generate partition ranges across a large integer sequence (representing the total number of rows).
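The exact query used in our pipeline is not reproduced here; as a minimal sketch only, a partition-range CTE of this kind could look like the following, assuming a hypothetical table dbo.Table_1_100GB with a contiguous integer key column Id and a chunk size of 1,000,000 rows:

    -- Sketch: generate (StartId, EndId) ranges covering the table's key space.
    DECLARE @ChunkSize bigint = 1000000;   -- rows per partition (assumption)
    DECLARE @MaxId     bigint = (SELECT MAX(Id) FROM dbo.Table_1_100GB);

    WITH PartitionRanges AS
    (
        SELECT CAST(1 AS bigint) AS StartId, @ChunkSize AS EndId
        UNION ALL
        SELECT EndId + 1, EndId + @ChunkSize
        FROM PartitionRanges
        WHERE EndId < @MaxId
    )
    SELECT StartId, EndId
    FROM PartitionRanges
    OPTION (MAXRECURSION 0);               -- allow more than 100 ranges
    -- The ForEach activity iterates over this result set, and the Copy data source query
    -- filters on each range, e.g. WHERE Id BETWEEN @{item().StartId} AND @{item().EndId}.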
The query returns one StartId/EndId pair per partition range.

ForEach: In the pipeline expression builder, reference the Lookup activity's output so that the ForEach iterates over the partition ranges returned above.

Copy data: At the source, use a query that filters on the partition range supplied by the current ForEach item, which allows you to dynamically query different partitions of a large table (such as Table_1_100GB) in parallel or sequentially. On the sink, adjust "Write batch size" and "Max concurrent connections" according to the rowset and the assigned memory; optimizing these values can improve performance and reduce overhead during data transfers. For small rows, increase the write batch size to reduce batch overhead and improve throughput; for large rows, use a smaller value to avoid memory or database overload. If the source data exceeds the specified batch size, ADF automatically processes the data in multiple batches. Additionally, adjust "Maximum data integration unit" and "Degree of copy parallelism". A Data Integration Unit (DIU) is a measure that represents the power of a single unit in Azure Data Factory and Synapse pipelines, combining CPU, memory, and network resource allocation. DIUs apply only to the Azure integration runtime, not to a self-hosted integration runtime. Degree of copy parallelism determines how many parallel copy activities can run simultaneously, optimizing the data transfer process.

Cumulative Test Results

Disclaimer: All test results published here are provided solely for reference purposes and reflect performance under ideal conditions in our controlled environment. Actual performance in your environment may vary significantly due to factors including, but not limited to, network speed, system bottlenecks, hardware limitations, CPU cores, memory, disk I/O, firewall configurations, and other environmental variables. Multiple performance optimizations and configuration adjustments were also applied to the source and target databases to improve migration efficiency. We strongly recommend that users conduct their own testing to determine performance under their specific conditions.

| Data Size | Best Performer | Observation |
| --- | --- | --- |
| 1 GB | All tools performed efficiently | Minimal overhead; no significant performance difference. |
| 10 GB | SSIS | Optimized batch processing led to better performance. |
| 100 GB | SSIS and ADF | Both benefited from parallelism. |
| 500 GB | ADF and SSIS | ADF's scalability and retry mechanisms proved valuable; SSIS was equivalent with tuned data flow components. |

| Data Volume | Row Count | SSMA Time / Speed | SSIS Time / Speed | ADF Time / Speed |
| --- | --- | --- | --- | --- |
| 1 GB | 262,144 | 28 sec / ~36 MB/s | 31 sec / ~33 MB/s | 40 sec / ~25 MB/s |
| 10 GB | 2,621,440 | 4 min 36 sec / ~37 MB/s | 4 min 16 sec / ~40 MB/s | 4 min 22 sec / ~40 MB/s |
| 100 GB | 26,214,400 | 44 min / ~38 MB/s | 38 min / ~44 MB/s | 39 min / ~44 MB/s |
| 500 GB | 26,214,400 x 5 | 3 hr 44 min / ~38 MB/s | 3 hr 12 min / ~44 MB/s | 3 hr 10 min / ~45 MB/s |

Performance Improvement Guidelines

Use SSMA for schema conversion: SSMA is the primary tool for automating schema conversion. Customize data type mappings (e.g., Sybase DATETIME to SQL Server DATETIME2) and handle case-sensitive databases carefully.

Leverage SSIS for complex data transformations: Use SSIS for tables with non-compatible data types or when business logic requires transformation. Optimize performance with parallel tasks and appropriate buffer settings.

Use ADF for large-scale or hybrid migrations: ADF is ideal for large datasets or migrations to Azure SQL Database. Use staging storage and parallel copy to maximize throughput.
Ensure stable network connectivity for on-premises to cloud transfers.

Tips to improve ADF performance:
- Use staging areas (e.g., Azure Blob Storage) to offload source systems and speed up data transfers.
- Enable parallel copy in the Copy activity to increase throughput.
- Use the Integration Runtime closest to your data source to reduce network latency.
- Enable data partitioning on large tables to parallelize read/write operations.
- Adjust the degree of parallelism to match your compute capacity.
- Use a Self-Hosted IR, or an Azure IR with higher compute, for large or complex migrations.
- Enable auto-scaling where supported to handle spikes efficiently.
- Monitor IR utilization to avoid under- or over-provisioning.

Refer to the following links for more details on ADF:
- Sybase ASE to Azure SQL full and incremental data copy using ASE Transfer Table Tool and ADF
- Copy activity performance optimization features - Azure Data Factory & Azure Synapse | Microsoft Learn
- Db2 to Azure SQL fast data copy using ADF

Migration readiness testing: Conduct performance testing on production-scale environments prior to the actual migration to obtain an accurate baseline of system behavior and identify potential bottlenecks under real workload conditions.

Hybrid approach: Combine SSMA for schema conversion, SSIS for complex data migrations, and ADF for orchestration in large-scale or cloud-based scenarios. For example, use SSMA to convert schemas, SSIS to migrate problematic tables, and ADF to orchestrate the overall pipeline.

Validation: Post-migration, validate data integrity using checksums or row counts and test stored procedures for functional equivalence. Use SQL Server Management Studio (SSMS) for debugging. Finally, the Microsoft Database Compare Utility can be used to compare multiple source and target databases.
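As a minimal illustration of a row-count and checksum check on the SQL Server side (the table name is a placeholder), a query such as the following can be run per table and compared with equivalent counts and checksums computed on the source using its own functions. Checksum comparison is a quick sanity check, not a guarantee of byte-level equality.

    -- Placeholder table name; run per migrated table and compare with the source.
    SELECT COUNT_BIG(*)                     AS TotalRows,
           CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS TableChecksum
    FROM dbo.Table_1_100GB;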
Challenges and Mitigations

Sybase-specific syntax: SSMA may fail to convert complex stored procedures with Sybase-specific T-SQL. Manually rewrite these using SQL Server T-SQL.

LOB data: Large object (LOB) data types (e.g., TEXT, IMAGE) may cause truncation errors. Map them to NVARCHAR(MAX) or VARBINARY(MAX) and validate the data post-migration.

Network latency in ADF: For on-premises to Azure migrations, ensure high-bandwidth connectivity or use Azure ExpressRoute to minimize latency.

Case sensitivity: Sybase ASE databases may be case-sensitive, while SQL Server defaults to case-insensitive. Configure SQL Server collations (e.g., SQL_Latin1_General_CP1_CS_AS) to match the source behavior.

Conclusion

SSMA, SSIS, and ADF each offer unique strengths for migrating Sybase ASE to SQL Server, Azure SQL Database, or Azure SQL Managed Instance. SSMA excels in schema conversion, SSIS in complex data transformations, and ADF in scalability and cloud integration. A hybrid approach, leveraging SSMA for schema conversion, SSIS for problematic data, and ADF for orchestration, often yields the best results. Our evaluation shows ADF's superior scalability for large datasets, while SSIS provides flexibility for complex migrations. Proper planning, including schema assessment, data type mapping, and performance tuning, is critical for a successful migration.

For further details, refer to Microsoft's official documentation:
- SSMA for Sybase: SQL Server Migration Assistant for Sybase (SybaseToSQL) - SQL Server | Microsoft Learn
- SSMA project settings: Project Settings (Migration) (SybaseToSQL) - SQL Server | Microsoft Learn
- SSIS: SQL Server Integration Services - SQL Server Integration Services (SSIS) | Microsoft Learn
- ADF: Azure Data Factory - Data Integration Service | Microsoft Azure

Feedback and suggestions

If you have feedback or suggestions for improving this data migration asset, please contact the Databases SQL Customer Success Engineering (Ninja) Team (datasqlninja@microsoft.com). Thanks for your support!

Note: For additional information about migrating various source databases to Azure, see the Azure Database Migration Guide.

Seamless Online Homogeneous SQL Family Migration via Azure Data Factory using SQL CDC
Migrating data across SQL platforms, be it SQL Server, Azure SQL Database, Azure SQL Managed Instance, or SQL Server on IaaS, often involves operational complexity and potential downtime. Azure Data Factory (ADF) removes those barriers by enabling seamless, logical data movement across these services in either direction. Whether using SQL Change Data Capture (CDC) for near-zero downtime or traditional batch-based strategies, ADF ensures data consistency and operational continuity throughout the process. While physical data migration strategies remain valuable in many scenarios, this blog focuses on how ADF delivers a unified, scalable approach to logical database migration, modernizing database environments with minimal downtime.

Prerequisites

NOTE: Please review the limitations of CDC, as this blog does not cover them:
- SQL CDC Limitations
- Known Issues with CDC

Before proceeding, ensure you have the following prerequisites:
- An Azure subscription.
- Access to Azure Data Factory.
- Source and target databases, such as SQL Server, Azure SQL Database, or Azure SQL MI.
- Change Data Capture (CDC) enabled on the source database for online migration. CDC captures changes such as insert, update, and delete (DML) operations in the source database, allowing near real-time replication to a target database with minimal latency. To enable CDC, run:

    -- Enable CDC on the database
    EXEC sys.sp_cdc_enable_db;

    -- Enable CDC on the source table
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'SourceTable',
        @role_name     = NULL;

Azure Data Factory provisioning: ADF should be provisioned to provide a runtime environment for executing the pipeline.

Self-hosted Integration Runtime (SHIR): SHIR is required to connect to a data source or destination that is not natively reachable by Azure (e.g., on-premises, private VNET, behind a firewall).

Linked services: These should be created to connect to the source and target.

Datasets: Datasets identify data within different data stores, such as tables, files, folders, and documents.

Performance optimization: To speed up the process, primary keys, non-clustered indexes, and constraints should be dropped on the target to reduce blocking/deadlocks and minimize resource contention.

Script Components

Adf_source.sql: This script should be deployed on the source SQL Server. It populates the dbo.data_extraction_config_adf table used to run Change Data Capture (CDC) and the initial load pipeline.

Adf_target.sql: This script should be deployed on the target SQL Server. It creates stored procedures that help merge CDC changes and creates the objects necessary for running the pipelines smoothly.

Master tables
- dbo.cdc_watermark_adf contains information about the CDC tables for the last watermark.
- dbo.data_extraction_config_adf contains information about the heap tables for the initial load and the CDC tables.
- dbo.sqlqueries contains information about the clustered tables for the initial load.

Let's take a deeper look at the pipelines that handle the different scenarios.

Pipeline 1: ClusteredTableMigration_Initial
This pipeline migrates data only from clustered tables. The dbo.sqlqueries table is automatically populated with clustered table information by the pipeline (Stored Procedure activity). Ensure the source table schema matches the target table schema. To run the pipeline for specific tables, set the IsActive flag to 0 (inactive) or 1 (active) in the sqlqueries table, or add the table name in the Lookup activity.
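For example, a table can be switched in or out of the load with a simple flag update. The filter column name below is an assumption; check the actual schema of dbo.sqlqueries created by the deployment scripts.

    -- TableName is a hypothetical column; adjust to the schema created by Adf_source.sql.
    UPDATE dbo.sqlqueries
    SET IsActive = 1                      -- 1 = include in the initial load, 0 = skip
    WHERE TableName = N'dbo.SourceTable';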
Pipeline 2: HeapTableMigration_Initial
This pipeline is designated for migrating heap tables. Prior to executing this pipeline, ensure that the heap table information has been added to the dbo.data_extraction_config_adf table. The source table schema should be synchronized with the target table schema. To execute the pipeline for a set of tables, the IsActive flag may be set to 0 (inactive) or 1 (active) in the dbo.data_extraction_config_adf table.

Pipeline 3: CDCTableMigration
This pipeline facilitates the migration of clustered tables with Change Data Capture (CDC) enabled. Prior to execution, ensure that the relevant information for these clustered tables is entered into the dbo.data_extraction_config_adf table. Ensure the table schema is synchronized with the target schema, and that all tables intended for CDC synchronization possess a primary key and a matching schema definition on the target system (excluding constraints and non-clustered indexes). To execute the pipeline for specific tables, the IsActive flag can be set to 0 (inactive) or 1 (active) in the dbo.data_extraction_config_adf table.

Schedule the Pipeline (for the CDC load only)
Create a trigger: Create a trigger to schedule the pipeline to run at regular intervals (e.g., every 5–30 minutes, based on application requirements) to capture and apply changes incrementally.
Monitor the pipeline: Monitor the pipeline runs to verify that the data is being migrated and synchronized accurately.

Cutover and cleanup
Once the delta changes are fully synchronized between the source and target databases, initiate cutover by setting the source database to read-only and changing the connection strings of the application (and of all other affected apps, agent jobs, etc.) to use the new target database. Then clean up: delete the helper stored procedures in the target database, and stop CDC and remove the helper tables and stored procedures in the source database.
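As a minimal sketch only, assuming a source database named SourceDb and a single CDC-enabled table dbo.SourceTable, the cutover freeze and the later CDC cleanup on the source could look like this:

    -- Freeze the source for cutover.
    ALTER DATABASE [SourceDb] SET READ_ONLY WITH ROLLBACK IMMEDIATE;

    -- Later, when decommissioning the source, make it writable again and remove CDC.
    ALTER DATABASE [SourceDb] SET READ_WRITE;
    EXEC sys.sp_cdc_disable_table
        @source_schema    = N'dbo',
        @source_name      = N'SourceTable',
        @capture_instance = N'all';
    EXEC sys.sp_cdc_disable_db;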
Conclusion
Using Azure Data Factory allows for both online and offline data migration with minimal downtime, ensuring consistency between the source and target databases. Change Data Capture enables near real-time data migration, suitable for environments requiring continuous data synchronization.

Note: To get the ADF pipelines and T-SQL queries mentioned in this blog, please reach out to our team alias: datasqlninja@microsoft.com.

Optimized Data Transfer from Sybase ASE to Azure SQL via Chunked BCP Processing

Introduction

Enterprises upgrading legacy databases often face challenges in migrating complex schemas and efficiently transferring large volumes of data. Transitioning from SAP ASE (Sybase ASE) to Azure SQL Database is a common strategy for taking advantage of enhanced features, improved scalability, and seamless integration with Microsoft services. As the business grows, the limitations of the legacy system become apparent: performance bottlenecks, high maintenance costs, and difficulty integrating with modern cloud solutions.

SQL Server Migration Assistant for SAP Adaptive Server Enterprise (SSMA) automates migration from SAP ASE to SQL Server, Azure SQL Database, and Azure SQL Managed Instance. While SSMA provides a complete end-to-end migration solution, the custom BCP script (ASEtoSQLdataloadusingbcp.sh) enhances this process by enabling parallel data transfers, making it especially effective for migrating large databases with minimal downtime.

Script Workflow

One of the most common challenges we hear from customers migrating from Sybase ASE to SQL Server is: "How can we speed up data transfer for large tables without overwhelming the system?" When you are dealing with hundreds of tables or millions of rows, serial data loads can quickly become a bottleneck. To tackle this, we created a script called ASEtoSQLdataloadusingbcp.sh that automates and accelerates the data migration process using parallelism. It starts by reading configuration settings from external files and retrieves a list of tables, either from the source database or from a user-provided file. For each table, the script checks whether it meets the criteria for chunking based on the available indexes. If it does, the table is split into multiple views, and each view is processed in parallel using BCP, significantly reducing the overall transfer time. If chunking is not possible, the script performs a standard full-table transfer. Throughout the entire process, detailed logging ensures everything is traceable and easy to monitor. This approach gives users both speed and control, helping migrations finish faster without sacrificing reliability.

Prerequisites

Before running the script, ensure the following prerequisites are met:
- The database schema has been converted and deployed using SQL Server Migration Assistant (SSMA).
- Both the source (SAP ASE) and target (Azure SQL DB) databases are accessible from the host system running the script.
- The source ASE database is hosted on Unix or Linux. The target SQL Server can be hosted on Windows, Linux, or in Azure (e.g., Azure SQL Database).

Configuration Files

The configuration aspect of the solution is designed for clarity and reuse. All operational parameters are defined in external files; the script uses the following external config files:

bcp_config.env: The primary configuration file, bcp_config.env, contains connection settings and control flags.

chunking_config.txt: The chunking_config.txt file defines the tables to be partitioned, identifies the primary key column used for chunking, and specifies the number of chunks into which the data should be divided.

table_list.txt: Use table_list.txt as the input if you want to migrate a specific list of tables.

Steps to run the script

Script Execution Log
The script log records the tables copied, timestamps, and process stages.

Performance Baseline
A test was run on a 32-core system with a 10 GB table (2,621,440 rows) for both ASE and SQL Server. Migration using SSMA took about 3 minutes.
Using the BCP script with 10 chunks, the entire export and import finished in 1 minute 7 seconds. This demonstrates how parallelism and chunk-based processing greatly boost efficiency for large datasets.

Disclaimer: These results are for illustration purposes only. Actual performance will vary depending on system hardware (CPU cores, memory, disk I/O), database configuration, network latency, and table structures. We recommend validating performance in dev/test to establish a baseline.

General Recommendations
- Larger batch sizes (e.g., 10K–50K) can boost throughput if disk IOPS and memory are sufficient, as they lower commit overhead.
- More chunks increase parallelism and throughput if CPU resources are available; otherwise, they may cause contention when CPU usage is high.
- Monitor the system's CPU and IOPS: when the system has high idle CPU and low I/O wait, increasing both the number of chunks and the batch size is appropriate. If CPU load or I/O wait is high, reduce the batch size or chunk count to avoid exhausting resources. This approach aligns BCP operations with your system's existing capacity and performance characteristics.

Steps to Download the Script
Please send an email to the alias datasqlninja@microsoft.com and we will send you the download link with instructions.

What's Next: Upcoming Enhancements to the Script
- Smart chunking for tables without unique clustered indexes: Enable chunk-based export using any unique key column, even if the table lacks a unique clustered index. This will extend chunking capabilities to a broader range of tables, ensuring better parallelization.
- Multi-table parallel BCP with intelligent chunking: Introduce full parallel execution across multiple tables. If a table qualifies for chunking, its export/import will also run in parallel internally, delivering two-tier parallelism: across and within tables.
- LOB column handling (TEXT, IMAGE, BINARY): Add robust support for large object data types, including optimized handling strategies for exporting and importing tables with TEXT, IMAGE, or BINARY columns, ensuring data fidelity and avoiding performance bottlenecks.

Feedback and Suggestions
If you have feedback or suggestions for improving this asset, please contact the Data SQL Ninja Team (datasqlninja@microsoft.com).

Note: For additional information about migrating various source databases to Azure, see the Azure Database Migration Guide. Thank you for your support!

Seamlessly Moving SQL Server Enabled by Azure Arc to a New Resource Group or Subscription
In a dynamic enterprise environment, organizations often need to restructure their Azure resources for better cost allocation, governance, or compliance. For IT teams managing multiple SQL Server instances enabled by Azure Arc, a reorganization may require moving some instances to a different resource group or subscription. Moving Azure Arc-enabled SQL Server instances is generally straightforward, similar to relocating other Azure resources. However, it becomes more complex when dependent features like Best Practices Assessment (BPA) are enabled.

Understanding the Migration Scenarios
- Instances without BPA enabled: These can be moved seamlessly using the Azure portal by following the official Microsoft documentation: Move SQL Server enabled by Azure Arc resources to a new resource group or subscription.
- Instances with BPA enabled: Since BPA settings do not persist automatically after migration, additional steps are required to ensure continuity.

Migration Approach
To ensure a smooth transition while preserving BPA configurations and updating Log Analytics Workspace (LAW) settings, the process involves:
1. Identifying servers where the BPA feature is enabled.
2. Disabling BPA before moving the resource.
3. Migrating the SQL Server instance to the new resource group or subscription.
4. Re-enabling BPA for the affected servers.
5. Updating the Log Analytics Workspace configuration to align with the target subscription's LAW.

Automating the Process
This blog provides a step-by-step PowerShell script to automate these tasks for at-scale migrations, minimizing manual effort and ensuring a seamless transition. Alternative approach: if automation isn't required, you can also use Azure Policy to enable or disable BPA and move Arc resources through the Azure portal. By leveraging either automation or Azure-native tools, organizations can efficiently manage Azure Arc-enabled SQL Server migrations while maintaining their configurations.

Steps to Migrate SQL Server Enabled by Azure Arc to a New Resource Group or Subscription

Step 1: Open PowerShell as Administrator
Click Start and search for PowerShell ISE or PowerShell. Right-click and select Run as Administrator to ensure the necessary permissions.

Step 2: Provide Input Parameters
Define the source and destination subscription IDs, resource group names, and Log Analytics Workspace details. Double-check that all values are correctly set before executing the script, MoveArcResourcesAcrossSubscriptionOrRG.ps1.

Step 3: Connect to Azure and Set the Subscription Context
Log in to your Azure account when prompted to authenticate the device or application. The script sets the context to access and manage the SQL Server instances based on the input parameters provided.

Step 4: Validate the Migration
Once the script execution is complete, validate the output to confirm that the resource move was successful. Check the Azure portal to ensure that the SQL Server instances have been moved to the new resource group or subscription.

Note: The child resources (SQL Server instances and databases) associated with the Azure Arc-enabled machine may take additional time to fully update in the Azure portal. Allow at least one hour for the move to reflect across all dependent services before performing a final validation.

By following these structured steps, organizations can efficiently migrate SQL Server enabled by Azure Arc while maintaining BPA configurations and updating the necessary settings to ensure a seamless transition.
MoveArcResourcesAcrossSubscriptionOrRG.ps1

<#
Name: MoveArcResourcesAcrossSubscriptionOrRG.ps1
Purpose: This script manages the prerequisites for disabling the BPA on each Arc server resource before initiating a resource move.
         After the resource is successfully relocated to the new resource group (RG), the script then re-enables the BPA settings to their original state.
Warranty: This script is provided on as "AS IS" basis and there are no warranties, express or implied, including, but not limited to
          implied warranties of merchantability or fitness for a particular purpose. USE AT YOUR OWN RISK.
#>

#____________________________________________
# Input parameters
#____________________________________________
$SourceSubscriptionId='2xxxxxxxx-a798-4265-ab7d-d9xxxxx377'        # Set the source subscription ID
$DestinationSubscriptionId ='0xxxxxxxa-399c-4564-9f74-ffxxxxxx46'  # Set the Destination subscription ID.
$SourceRgName='arcsqlprod_rg'                                      # Set the Source resource group name
$TargetRgName='arcsqldev_rg'                                       # Set the Destination resource group name
$logAnalyticsWorkspaceName = 'devloganalyticsworkspace'            # Set the Log Analytics Workspace in the destination subscription.

#__________________
# local Variables
#__________________
$global:ExcludedServerlist = @(); $arcServers = @(); $allResources = @(); $global:ArcEnabledServerlist = @();
cls

#_________________________________________________
# Check if the Az module is installed & Imported
#_________________________________________________
Function LoadRequiredModules {
    if (-not (Get-Module -Name Az) -and -not (Get-Module -ListAvailable -Name Az) -and -not (Get-Module -ListAvailable -Name Az.Accounts)) {
        # Install the Az module if not already installed
        Write-Host "[$(Get-Date)]: Installing the required Az module, please wait."
        Install-Module -Name Az -AllowClobber -Force -Scope CurrentUser -WarningAction SilentlyContinue
    }
    # Import the Az module
    Write-Host "[$(Get-Date)]: Importing the required Az module, please wait."
    Import-Module Az.Accounts
    Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope CurrentUser -Force
    Connect-AzAccount -Subscription $SourceSubscriptionId -WarningAction SilentlyContinue | Out-Null
}

#____________________________________________________________________
# Module to verify the existence of the destination resource group
#____________________________________________________________________
function CheckDestinationResourceGroup {
    Set-AzContext -SubscriptionId $DestinationSubscriptionId -WarningAction SilentlyContinue | Out-Null
    $destinationRG = Get-AzResourceGroup -Name $TargetRgName -ErrorAction SilentlyContinue
    if (-not $destinationRG) {
        Write-Host "[$(Get-Date)]: The destination resource group [$TargetRgName] does not exist." -BackgroundColor Yellow -ForegroundColor Red
        return
    }
    else {
        Write-Host "[$(Get-Date)]: The destination resource group [$TargetRgName] exists."
    }
}

#____________________________________________________________________
# Module to verify the existence of Log Analytics Workspace name.
#____________________________________________________________________
function CheckLogAnalyticsWorkspace {
    Set-AzContext -subscriptionId $DestinationSubscriptionId -WarningAction SilentlyContinue | Out-Null *>$null
    $LAW = Get-AzOperationalInsightsWorkspace | Where-Object { $_.Name -eq $logAnalyticsWorkspaceName }
    if (-not $LAW) {
        Write-Host "[$(Get-Date)]: Log Analytics Workspace [$logAnalyticsWorkspaceName] does not exist in [Subscription:$($DestinationSubscription.Name), ResourceGroup:$TargetRgName]." -BackgroundColor Yellow -ForegroundColor Black
        $userInput = Read-Host "Would you like to create a new Log Analytics Workspace? Press any key to create and continue or [N or 0] to stop the execution"
        if ($userInput -ieq 'N' -or $userInput -ieq 0) {
            Write-Host "[$(Get-Date)]: Execution stopped." -ForegroundColor Red
            EXIT;
        }
        else {
            Write-Host "[$(Get-Date)]: Proceeding to create a new Log Analytics Workspace. Please wait.."
            try {
                $NewLAW = New-AzOperationalInsightsWorkspace -ResourceGroupName $TargetRgName -Name $logAnalyticsWorkspaceName -Location (Get-AzResourceGroup -Name $TargetRgName).Location
                if ($NewLAW) {
                    Write-Host "[$(Get-Date)]: Successfully created a new Log Analytics Workspace:`n"("_" * 160)
                    Write-Host " Resource ID: $($NewLAW.ResourceId)"
                    Write-Host " Location : $($NewLAW.Location)`n"("_" * 160)
                }
            }
            catch {
                Write-Host "[$(Get-Date)]: An error occurred while creating the Log Analytics Workspace." -ForegroundColor Red
                Write-Host "Error: $($_.Exception.Message)" -ForegroundColor Red
            }
        }
    }
    else {
        Write-Host "[$(Get-Date)]: Log Analytics Workspace [$logAnalyticsWorkspaceName] found."
    }
    Set-AzContext -SubscriptionId $SourceSubscriptionId -WarningAction SilentlyContinue | Out-Null
}

#____________________________________________
# Function to check the status of BPA
#____________________________________________
function Get-BPAStatus {
    param ( [string]$machineID, [string]$mode )
    $subscriptionId = ($machineID -split '/')[2]
    $resourceGroup  = ($machineID -split '/')[4]
    $machineName    = ($machineID -split '/')[8]
    $MachineState = (Get-AzConnectedMachine -ResourceGroupName $resourceGroup -Name $machineName).Status
    if ($MachineState -eq 'Disconnected') {
        Write-Host "[$(Get-Date)]: The Azure Arc machine [$($machineName)] is currently offline or disconnected, which will block the movement of resources or the enabling/disabling of features." -BackgroundColor Yellow -ForegroundColor Red
        return 'DISCONNECTED';
    }
    else {
        $extn = $null;
        $extn = Get-AzConnectedMachineExtension -ResourceGroupName $resourceGroup -MachineName $machineName | where Name -Like 'WindowsAgent.SqlServer' | select ProvisioningState
        if ($extn -eq $null) {
            Write-Host "[$(Get-Date)]: SQL Server Extension is not installed on the Machine : [$($machineName)]." -BackgroundColor Green -ForegroundColor black
            return 'DISCONNECTED-MISSING-SQLExtention';
        }
        elseif (($extn.ProvisioningState -eq 'Succeeded') -or ($extn.ProvisioningState -eq 'Updating')) {
            $uri = "https://edge.management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.HybridCompute/machines/$($machineName)/extensions/WindowsAgent.SqlServer`?api-version=2022-03-10"
            try { $token = (Get-AzAccessToken -ResourceUrl https://management.azure.com/ -AsSecureString -WarningAction SilentlyContinue).Token }
            catch { Write-Error "Failed to retrieve the Azure Access Token. Error: $_" }
            $headers = @{Authorization = "Bearer "+[System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($token))}
            $retryCount = 0
            while ($retryCount -lt 4) {
                try { $response = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers }
                catch { Write-Error "Error occurs during the REST API request. Error: $_" }
                $bpaconfig = $response.properties.settings.AssessmentSettings.Enable
                if ( -not [string]::IsNullOrEmpty($response) -and -not [string]::IsNullOrEmpty($bpaconfig) ) { break }
                else {
                    if ($retryCount -eq 0) { Write-Host "[$(Get-Date)]: Waiting to get the BPA status after the recent update " -NoNewline }
                    else { Write-Host "....reattempt in 15 seconds." }
                    Start-Sleep -Seconds 15
                    $retryCount++
                }
            }
            $global:licenceType = $response.properties.settings.LicenseType
            if ($mode -eq "Validate") { return $bpaconfig }
            if ( [string]::IsNullOrEmpty($global:licenceType) -or $LicenseType -eq "LicenseOnly") {
                switch ($global:licenceType) {
                    $null         { Write-Host "[$(Get-Date)]: License Type is NOT configured for machine [$($arcMachine.Name)]." }
                    "LicenseOnly" { Write-Host "[$(Get-Date)]: Best Practices Assessment is not supported on license type '$LicenseType' for machine [$($arcMachine.Name)]." }
                    default       { Write-Host "[$(Get-Date)]: Unknown License Type for machine [$($arcMachine.Name)]." }
                }
                $global:skippedmachine += $arcMachine.Name
            }
            switch ($bpaconfig) {
                $false  { Write-Host "[$(Get-Date)]: SQL Best Practice Assessment is [Disabled] on Machine: [$($machineName)]" }
                $true   { Write-Host "[$(Get-Date)]: SQL Best Practice Assessment is [Enabled] on Machine: [$($machineName)]" }
                default { Write-Host "[$(Get-Date)]: SQL Best Practice Assessment is [Not Configured] on Machine: [$($machineName)]" }
            }
            return $bpaconfig;
        }
        else {
            Write-Host "[$(Get-Date)]: SQL Server Extension is in [$($extn.ProvisioningState)] state on the Machine : [$($machineName)]. Cannot update the BPA configuration." -BackgroundColor Yellow -ForegroundColor black
            return 'DISCONNECTED-Unknown-SQLExtention';
        }
    }
}

#__________________________________________________________
# Function to Enable/Disable BPA for each machine
#__________________________________________________________
function Set-BPAConfiguration {
    param ( [string]$machineID, [string]$valuetoset )
    $subscriptionId = ($machineID -split '/')[2]
    $resourceGroup  = ($machineID -split '/')[4]
    $machineName    = ($machineID -split '/')[8]
    Write-Host "[$(Get-Date)]: $($(($valuetoset).Substring(0, $valuetoset.Length - 1)) + 'ing') BPA for machine [$($machineName)]...."
    $setvalue = if ($valuetoset -eq "Enable") { $true } else { $false }
    $uri = "https://edge.management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.HybridCompute/machines/$($machineName)/extensions/WindowsAgent.SqlServer?api-version=2022-03-10"
    $token = (Get-AzAccessToken -ResourceUrl https://management.azure.com/ -AsSecureString -WarningAction SilentlyContinue).Token
    $headers = @{Authorization = "Bearer " + [System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($token))}

    # Get the current response to inspect the existing values
    $response = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers
    if ($setvalue -eq $true) {
        if ([string]::IsNullOrEmpty($response.properties.settings.AssessmentSettings)) {
            $response.properties.settings | Add-Member -MemberType NoteProperty -Name "AssessmentSettings" -Value @{}
        }
        $response.properties.settings.AssessmentSettings.Enable = $true
        $response.properties.settings.AssessmentSettings.WorkspaceResourceId = $LAW.ResourceId
        $response.properties.settings.AssessmentSettings.WorkspaceLocation = $LAW.Location
        $response.properties.settings.AssessmentSettings.ResourceNamePrefix = $null
        $response.properties.settings.AssessmentSettings.RunImmediately = $true
        $response.properties.settings.AssessmentSettings.schedule = @{
            dayOfWeek         = "Sunday"
            Enable            = $true
            monthlyOccurrence = $null
            StartDate         = $null
            startTime         = "00:00"
            WeeklyInterval    = 1
        }
    }
    else {
        $response.properties.settings.AssessmentSettings.Enable = $false
        $response.properties.settings.AssessmentSettings.WorkspaceResourceId = $null
        $response.properties.settings.AssessmentSettings.WorkspaceLocation = $null
    }
    $jsonPayload = $response | ConvertTo-Json -Depth 10
    #$jsonPayload #for debug

    # Prepare the PATCH request headers
    $headers = @{
        Authorization  = "Bearer " + [System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($token))
        "Content-Type" = "application/json"   # Specify content type as JSON
    }

    # Make the PATCH request
    try {
        $response = Invoke-RestMethod -Uri $uri -Method Patch -Headers $headers -Body $jsonPayload
        Write-Host "[$(Get-Date)]: Successfully submitted the request to [$($valuetoset)] Best Practices Assessment for machine [$($machineName)]."
    }
    catch {
        # Handle the error
        Write-Host "[$(Get-Date)]: An error occurred while $($BPAtargetstate +'ing') BPA for [$($arcMachine.Name)]: $_"
    }
    Start-Sleep -Seconds 10

    #Valdate after the change
    $CurrentState = Get-BPAStatus -machineID $machineID -mode "Validate"
    switch ($CurrentState) {
        $true   { $state = "Enabled" }
        $false  { $state = "Disabled" }
        default { $state = $CurrentState }   # Default case
    }
    if ($setvalue -eq $CurrentState) {
        Write-Host "[$(Get-Date)]: Successfully [$state] Best Practices Assessment for machine [$($machineName)]."
        return $setvalue
    }
    else {
        Write-Host "[$(Get-Date)]: Updating the BPA configuration for machine [$($machineName)] has failed. The CurrentState is [$CurrentState]" -BackgroundColor Yellow -ForegroundColor Red
        return "Error-$CurrentState"
    }
}

#__________________________________________________________
# Module to make sure that BPA is disable for each machine
#__________________________________________________________
Function Ensure-BPA-IsDisabled {
    $arcMachines = Get-AzResource -ResourceGroupName $SourceRgName -ResourceType "Microsoft.HybridCompute/machines"
    Write-Host "[$(Get-Date)]: A total of $($arcMachines.Count) Azure Arc machine(s) found." -BackgroundColor Green -ForegroundColor Black
-BackgroundColor Green -ForegroundColor Black foreach ($arcMachine in $arcMachines) { Write-Host "[$(Get-Date)]: Validating the configuration for Azure Arc machine :[$($arcMachine.Name)]" $MachineState=(Get-AzConnectedMachine -ResourceGroupName $SourceRgName -Name $arcMachine.Name).Status if ($MachineState -eq 'Disconnected') { Write-Host "[$(Get-Date)]: The Azure Arc machine [$($arcMachine.Name)] is currently OFFLINE/DISCONNECTED, Cannot update the BPA configuration. This will also prevent the resource movement of this/child resource(s)." -BackgroundColor Yellow -ForegroundColor Red } else { $extn=$null; $extn= Get-AzConnectedMachineExtension -ResourceGroupName $SourceRgName -MachineName $arcMachine.Name | where Name -Like 'WindowsAgent.SqlServer' | select ProvisioningState if ($extn -eq $null) { Write-Host "[$(Get-Date)]: SQL Server Extension is not installed on the Machine : [$($arcMachine.Name)]." -BackgroundColor Green -ForegroundColor black} elseif ($extn.ProvisioningState -eq 'Succeeded') { $status = Get-BPAStatus -machineID $($arcMachine.ResourceId) -mode "Validate" if ($status -eq $true) { Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is set to : [$($status.ToString().ToUpper())] for Machine:[$($arcMachine.Name)]" Write-Host "[$(Get-Date)]: Attempting to DISABLE SQL Best Practice AssessmentSettings for Machine:[$($arcMachine.Name)]" -BackgroundColor White -ForegroundColor Black $status= Set-BPAConfiguration -machineID $($arcMachine.ResourceId) -valuetoset 'Disable' #$status= Get-BPAStatus -machineID $($arcMachine.ResourceId) -mode "Validate" if ($status -eq $false){ Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is now set to : [$($status.ToString().ToUpper())] for Machine:[$(($($arcMachine.ResourceId) -split '/')[($($arcMachine.ResourceId) -split '/').IndexOf('machines') + 1])] and added to the re-enablement list." $global:ArcEnabledServerlist =$global:ArcEnabledServerlist+$($arcMachine.ResourceId) } else{ Write-Host "[$(Get-Date)]: Failed to update SQL Best Practice AssessmentSetting for Machine:[$(($($arcMachine.ResourceId) -split '/')[($($arcMachine.ResourceId) -split '/').IndexOf('machines') + 1])] and added to the exclusion list." $global:ExcludedServerlist+=$($arcMachine.Name) } } else { switch ($status) { $null { Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is NOT configured on Machine: [$($arcMachine.Name)]" } $False { Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is already set to [$($status.ToString().ToUpper())] for Machine: [$($arcMachine.Name)]" } "Not-Configured" { Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is [$($status.ToString().ToUpper())] for Machine: [$($arcMachine.Name)]" } default { Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is [Unknown] for Machine: [$($arcMachine.Name)]" $global:ExcludedServerlist+=$($arcMachine.Name) } } } } else { Write-Host "[$(Get-Date)]: SQL Server Extension is in [$($extn.ProvisioningState)] state on the Machine : [$($arcMachine.Name)]. Cannot update the BPA configuration." 
-BackgroundColor Yellow -ForegroundColor Red } } } } #____________________________________________________________________ # Start the move resource operation #____________________________________________________________________ Function move-Arc-machines{ $arcServers= Get-AzResource -ResourceGroupName $SourceRgName -ResourceType "microsoft.hybridcompute/machines" | Where-Object { $_.Name -notin $global:ExcludedServerlist } if ($arcServers.Count -gt 0) { Write-Host "[$(Get-Date)]: Starting the move of Arc server resources. This process may take some time, so please wait until it is completed." if ($global:ExcludedServerlist) { Write-Host "[$(Get-Date)]: List of servers which are skipped for move due to failure in disabling BPA feature:" -ForegroundColor Yellow Write-Host $global:ExcludedServerlist -ForegroundColor Red -BackgroundColor Yellow } else { Write-Host "[$(Get-Date)]: Total resources considered for move : $($arcServers.Count)`n" $arcServers.ResourceID if($arcServers.Count -gt 0){ Write-Host "`n[$(Get-Date)]: Starting the MOVE of Arc server resources. This process may take a few minutes, please do not close the window." Move-AzResource -DestinationSubscriptionId $DestinationSubscriptionId -DestinationResourceGroupName $TargetRgName -ResourceId $arcServers.ResourceId -Force Write-Host "[$(Get-Date)]: Initialization of the resource move has been successfully completed. Moving the child (SQL Server) resource(s) may take some time. Please check the Azure portal later."} } } else { Write-Host "[$(Get-Date)]: No Arc Machines available for the move operation." } } #____________________________________________________________________ # Check for remaining resources in the old resource group #____________________________________________________________________ Function validate-after-MachineMove { $allResources = @(); Set-AzContext -SubscriptionId $SourceSubscriptionId -WarningAction SilentlyContinue | Out-Null $arcServers = Get-AzResource -ResourceGroupName $SourceRgName -ResourceType "microsoft.hybridcompute/machines" $allResources += $arcServers if ($allResources) { Write-Host "[$(Get-Date)]: There are still [$($allResources.count)] resources in the old resource group '$SourceRgName':`n" $allResources.ResourceID } else { Write-Host "[$(Get-Date)]: No resources remaining in the old resource group '$SourceRgName'." if ($global:ArcEnabledServerlist.Count -gt 0) { Write-Host "[$(Get-Date)]: Enabling the BPA for [$($global:ArcEnabledServerlist.Count)] resource(s) on the target resource group." 
Set-AzContext -SubscriptionId $DestinationSubscriptionId -WarningAction SilentlyContinue | Out-Null $arcMachines=$global:ArcEnabledServerlist foreach ($arcMachine in $arcMachines) { Write-Host "[$(Get-Date)]: Validating the BPA status for Machine:[$($arcMachine.Split('/')[-1])]" $arcMachine = $arcMachine.Replace($SourceSubscriptionId, $DestinationSubscriptionId).Replace($SourceRgName, $TargetRgName) $status = Get-BPAStatus -machineID $($arcMachine) -mode "Validate" switch ($status) { $true {Write-Host "[$(Get-Date)]: `nSQL Best Practice AssessmentSettings is already set to : [$($status.ToString().ToUpper())] for Machine:[$(($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1])]"} "Not-Configured" {Write-Host "[$(Get-Date)]: Failed to update SQL Best Practice AssessmentSettings for Machine: [$(($($arcMachine.ResourceId) -split '/')[($($arcMachine.ResourceId) -split '/').IndexOf('machines') + 1])] as it is not Configured" -BackgroundColor Yellow -ForegroundColor Red } $false { Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is set to : [$($status.ToString().ToUpper())] for Machine:[$(($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1])]" Write-Host "[$(Get-Date)]: Attempting to ENABLE SQL Best Practice AssessmentSettings for Machine:[$(($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1])]" -BackgroundColor White -ForegroundColor Black # Perform status update and check $status = Set-BPAConfiguration -machineID $($arcMachine) -valuetoset 'Enable' #$status = Get-BPAStatus -machineID $($arcMachine) -mode "Validate" switch ($status) { $true { $machineName = ($arcMachine.ResourceId -split '/')[($arcMachine.ResourceId -split '/').IndexOf('machines') + 1] Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is now set to : [$($status.ToString().ToUpper())] for Machine:[$(($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1])]" } $false { $machineName = ($arcMachine.ResourceId -split '/')[($arcMachine.ResourceId -split '/').IndexOf('machines') + 1] Write-Host "[$(Get-Date)]: Failed to update SQL Best Practice AssessmentSettings for Machine:[$(($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1])]" $global:ExcludedServerlist += $arcMachine.Name } } } "DISCONNECTED" {Write-Host "[$(Get-Date)]: Machine:[$(($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1])] is in DISCONNECTED state, Skipping the BPA enablement" -BackgroundColor Red -ForegroundColor White} "DISCONNECTED-MISSING-SQLExtention" {Write-Host "[$(Get-Date)]: SQL Extension is missing for Machine:[$(($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1])] , Skipping the BPA enablement" -BackgroundColor Red -ForegroundColor White} default {Write-Host "[$(Get-Date)]: Unknown status value [$status] for Machine:[$(($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1])]" -BackgroundColor Red -ForegroundColor White} } } } else {Write-Host "[$(Get-Date)]: No machines found for BPA enablement."} } } # Start capturing the output to the file $outputFile = ([System.IO.Path]::Combine([System.IO.Path]::GetTempPath(), "MoveArcResourcesOutput_" + (Get-Date -Format "yyyy-MM-dd_HH.mm.ss") + '.txt')) Start-Transcript -Path $outputFile > $null #1. 
Load required modules LoadRequiredModules # Get subscription details for Source and Destination Set-AzContext -SubscriptionId $DestinationSubscriptionId -WarningAction SilentlyContinue | Out-Null $SourceSubscription = Get-AzSubscription -SubscriptionId $SourceSubscriptionId -WarningAction SilentlyContinue Set-AzContext -SubscriptionId $SourceSubscriptionId -WarningAction SilentlyContinue| Out-Null $DestinationSubscription = Get-AzSubscription -SubscriptionId $DestinationSubscriptionId -WarningAction SilentlyContinue Cls # Display the details of inputparameters Write-Host "[$(Get-Date)]: __________________Start of script_______________________________`n[$(Get-Date)]: Input Parameters considered for this execution:`n" Write-Host "Source Subscription ID : $SourceSubscriptionId ($($SourceSubscription.Name)`) `nDestination Subscription ID : $DestinationSubscriptionId ($($DestinationSubscription.Name)`) `nSource Resource Group Name : $SourceRgName `nTarget Resource Group Name : $TargetRgName `nLogAnalyticsWorkspaceName : $logAnalyticsWorkspaceName `n" #2. Check if both subscriptions are in the same tenant if ($sourceSubscription.TenantId -ne $destinationSubscription.TenantId) { Write-Host "[$(Get-Date)]: Cannot move resource as the subscriptions are in different tenants." } else { Write-Host "[$(Get-Date)]: Both subscriptions are in the same tenant. You can proceed with the move." #3. Checks whether a specified destination resource group exists CheckDestinationResourceGroup #4. Verifies the existence and configuration of a Log Analytics Workspace on the Target subscription CheckLogAnalyticsWorkspace #5. Retrieves the current status of the Best Practice Analyzer (BPA) for a Arc machine and disables it to prepare for resource move Ensure-BPA-IsDisabled #6. Initialize the resource move move-Arc-machines #7. Validate the resource move validate-after-MachineMove } Write-Host "[$(Get-Date)]: __________________END of script_______________________________`n`n" # Stop capturing output Stop-Transcript > $null Start-Process "notepad.exe" -ArgumentList $outputFile Sample outputKey Considerations to avoid Implicit Conversion issues in Oracle to Azure SQL Modernization
Overview This blog dives into the mechanics of implicit data type conversions and their impact during the post-migration performance optimization phase of heterogeneous database migrations. Drawing from our observed Engineering field patterns across diverse application architectures, this blog explores why certain platforms like ORMs, JDBC drivers, and cross-platform data models are more prone to implicit conversions than others and how they result in performance issues, break indexes, or cause query regressions. You'll gain actionable strategies to detect, mitigate, and design around these pitfalls to ensure a successful and performant data migration to platforms like Azure SQL. Understanding Implicit Conversions in Database Migrations When an application communicates with a database whether through JDBC, ODBC, ADO.NET, or any other data access API it sends parameters and query values in a specific data type format. However, if the type of the value provided by the application does not exactly match the data type of the target column in the database, the database engine attempts to reconcile the mismatch automatically. This automatic adjustment is known as implicit conversion. For instance, a value passed as a string from the application may be compared against a numeric or date column in the database. This occurs because many front-end systems and APIs transmit values as strings by default, even if the underlying business logic expects numbers or dates. Unless the application explicitly parses or casts these values to match the expected types, the database engine must decide how to handle the type mismatch during query execution. In such cases, the engine applies type conversion internally, either to the parameter or the column, based on its own rules. While this feature can simplify application development by allowing flexible data handling, it often introduces engine-specific behavior that becomes more visible during cross engine database migrations, where assumptions built into one system may not hold true in another. Impact of Implicit Conversions Implicit conversions can adversely affect database performance and functionality in several ways, some of which are discussed below: Performance Degradation: When a database performs an implicit conversion, it may bypass indexes, resulting in slower query execution. For example, comparing a VARCHAR column to an INT value in SQL Server can trigger a table scan instead of an index seek, significantly increasing query time. Migration-Specific Issues and Data Integrity Risks: Implicit conversions can cause data loss or incorrect results during a few instances and one such example is, when a column defined as VARCHAR2 in Oracle, which can store Unicode characters by default is mapped to a VARCHAR column in SQL Server, non-ASCII characters such as Chinese, Russian, or Korean may be silently replaced with incorrect characters/symbols. One example of scenario when this can happen: Oracle VARCHAR2 stores Unicode if the database character set is UTF-8 (AL32UTF8), which is common in modern Oracle installations. SQL Server VARCHAR is ANSI/code-page based, so non-ASCII characters are stored differently, unless the column is explicitly declared as NVARCHAR. -- In Real World this can happen on any other data types Maintenance Challenges: Queries relying on implicit conversions are harder to debug and optimize, as these conversions are not explicitly visible in the code and may only surface during performance regressions. 
These queries force the optimizer to compile an execution plan containing scans of large clustered indexes or tables instead of a seek, resulting in degraded performance. Execution Overhead and Resource Consumption: Implicit conversions increase execution times for both queries and API calls, as the engine must perform runtime casting operations. This can lead to higher CPU usage, increased logical reads, and memory pressure.
Detection Methods
Detecting implicit conversions is crucial for optimizing database performance post-migration. The following methods can be employed:
Query Store (QDS): Use QDS post-migration during load testing to track expensive queries based on cost and surface performance regressions caused by type mismatches. Review execution plans captured in QDS for conversion-related patterns. You can also use a custom script like the one below to query the Query Store:
USE <[Replace_with_actual_DB_name]> -- Replace with the actual database name
GO
SELECT TOP (100)
    DB_NAME() AS [Database],
    qt.query_sql_text AS [Query Text],
    rs.last_execution_time AS [Last Execution Time],
    rs.avg_cpu_time AS [Avg Worker Time],
    rs.max_cpu_time AS [Max Worker Time],
    rs.avg_duration AS [Avg Elapsed Time],
    rs.max_duration AS [Max Elapsed Time],
    rs.avg_logical_io_reads AS [Avg Logical Reads],
    rs.max_logical_io_reads AS [Max Logical Reads],
    rs.count_executions AS [Execution Count],
    q.last_compile_start_time AS [Creation Time],
    CAST(p.query_plan AS XML) AS [Query Plan]
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
WHERE CAST(p.query_plan AS NVARCHAR(MAX)) LIKE '%CONVERT_IMPLICIT%'
  AND qt.query_sql_text NOT LIKE '%sys.query_store%'
ORDER BY rs.avg_cpu_time DESC;
Execution Plans: For the expensive queries, in SSMS, hover over operators like Index Scan to inspect the Predicate. If an implicit conversion exists, the plan includes something like "CONVERT_IMPLICIT(<data_type>, .."
XML Plan: To confirm the above, reviewing the underlying XML execution plan shows whether an implicit conversion is occurring and on which side of the comparison. This technique is particularly valuable when working with parameterized queries or when graphical plan warnings are insufficient. Look for elements like the following in the XML plan:
<Warnings> <PlanAffectingConvert ConvertIssue="Seek Plan" Expression="CONVERT_IMPLICIT(.. </Warnings>
Plan Cache Inspection: Custom scripts can be written to scan the Azure SQL plan cache for any instances of CONVERT_IMPLICIT operations. Below is one such script that can be used to find them.
SELECT TOP (100) DB_NAME(B.[dbid]) AS [Database], B.[text] AS [SQL_text], A.total_worker_time AS [Total Worker Time], A.total_worker_time / A.execution_count AS [Avg Worker Time], A.max_worker_time AS [Max Worker Time], A.total_elapsed_time / A.execution_count AS [Avg Elapsed Time], A.max_elapsed_time AS [Max Elapsed Time], A.total_logical_reads / A.execution_count AS [Avg Logical Reads], A.max_logical_reads AS [Max Logical Reads], A.execution_count AS [Execution Count], A.creation_time AS [Creation Time], C.query_plan AS [Query Plan] FROM sys.dm_exec_query_stats AS A WITH (NOLOCK) CROSS APPLY sys.dm_exec_sql_text(A.plan_handle) AS B CROSS APPLY sys.dm_exec_query_plan(A.plan_handle) AS C WHERE CAST(C.query_plan AS NVARCHAR(MAX)) LIKE '%CONVERT_IMPLICIT%' AND B.[dbid] = DB_ID() AND B.[text] NOT LIKE '%sys.dm_exec_sql_text%' ORDER BY A.total_worker_time DESC XE event: Extended Events (XE) is valuable in support scenarios when Query Store or telemetry data alone can't pinpoint issues like implicit conversions, especially if plans aren't cached or historical data lacks detail. XE provides real-time capture of plan-affecting convert events, offering granular insights into query behavior that QS might miss during short-lived or dynamic workloads. However, use it sparingly due to overhead, as a targeted diagnostic tool rather than a broad solution. You can use below script to turn it off. Remember to stop and drop the event immediately when you are done collecting. IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'Detect_Conversion_Performance_Issues') DROP EVENT SESSION [Detect_Conversion_Performance_Issues] ON SERVER; GO CREATE EVENT SESSION [Detect_Conversion_Performance_Issues] ON SERVER ADD EVENT sqlserver.plan_affecting_convert( ACTION(sqlserver.database_name, sqlserver.sql_text) WHERE ([sqlserver].[database_name] = N'<Replace_with_your_DB_name>') -- Replace your DB name ) ADD TARGET package0.ring_buffer WITH ( MAX_MEMORY = 4096 KB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY = 30 SECONDS, MEMORY_PARTITION_MODE = NONE, TRACK_CAUSALITY = OFF, STARTUP_STATE = OFF ); GO ALTER EVENT SESSION [Detect_Conversion_Performance_Issues] ON SERVER STATE = START; -- View the raw Extended Events buffer SELECT s.name AS session_name, t.target_name, CAST(t.target_data AS XML) AS raw_buffer_xml FROM sys.dm_xe_sessions s JOIN sys.dm_xe_session_targets t ON s.address = t.event_session_address WHERE s.name = 'Detect_Conversion_Performance_Issues'; Documentation Reference: Microsoft docs on conversion and precedence help explain engine behavior and mappings around implicit conversion triggers. This close look at them along with app developers during the schema/code conversion phase can help better understanding and mitigation. Implicit Conversion: Real-World Example To evaluate the impact of implicit conversions in Azure SQL during post-migration scenarios, we created a synthetic workload example using a table named dbo.Customers. It contains one million rows and includes columns such as AccountNumber, CustomerName, PhoneNumber, and JoinDate. The AccountNumber, CustomerName, and PhoneNumber columns were initially defined as VARCHAR, and Nonclustered indexes were created on these fields to enable efficient lookups. From the application layer, parameters were passed using NVARCHAR, which mirrors typical real-world ORM behavior particularly in Java-based applications or when migrating from Oracle, where VARCHAR2 frequently stores Unicode characters. 
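For readers who want to reproduce the scenario, here is a minimal sketch of the kind of table and index used in this test. The table, column, and index names (dbo.Customers, AccountNumber, IX_AccountNumber) follow the article; the exact data types, lengths, and constraints are assumptions, not the article's actual DDL.
-- Sketch of the test schema (definitions are assumed)
CREATE TABLE dbo.Customers
(
    CustomerID    INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    AccountNumber VARCHAR(20)  NOT NULL,   -- non-Unicode on purpose, to mismatch the NVARCHAR parameter
    CustomerName  VARCHAR(100) NOT NULL,
    PhoneNumber   VARCHAR(20)  NULL,
    JoinDate      DATETIME2    NOT NULL
);

-- Nonclustered index used for the AccountNumber lookups in the examples below
CREATE NONCLUSTERED INDEX IX_AccountNumber
    ON dbo.Customers (AccountNumber);
After loading a large number of rows (the article uses one million), the queries that follow show how the same lookup behaves with and without a matching parameter type.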
This deliberate mismatch allows us to study the real performance consequences of implicit conversions in Azure SQL's query execution engine. Although enabling SET STATISTICS XML ON can expose implicit conversions during query execution, our approach tries to reflect how these issues are usually uncovered in real-world scenarios where customers are less aware of the problem. In this case, we used Query Store and execution plan XML inspection.
Problem: Implicit Conversion Due to Type Mismatch: An NVARCHAR parameter from the application is compared against a VARCHAR column in the database. This scenario highlights a silent performance regression that can go unnoticed post-migration without detailed plan inspection.
Query Used:
DECLARE @ACC NVARCHAR(20) = N'ACC000500';
SELECT CustomerID, AccountNumber, CustomerName, PhoneNumber FROM dbo.Customers WHERE AccountNumber = @ACC;
Execution Plan Behavior (Fig. 1):
SQL Server applies an implicit CONVERT_IMPLICIT(nvarchar, AccountNumber) on the column side, as seen in Fig 1 when you hover over the Index Scan and inspect the Predicate. This disables the use of the nonclustered index on AccountNumber, leading to an Index Scan. The XML plan includes a <Warning> tag under <PlanAffectingConvert>. Extended Events monitoring consistently shows "plan_affecting_convert" warnings indicating suboptimal query plans caused by these conversions.
What's Missing: Type alignment between the query parameter and the column. Awareness that even matching string lengths won't help if encoding mismatches exist.
Impact: Index Seek is lost, and full scans are triggered. Higher execution times and overall costs observed.
Mitigation via Explicit CAST – Matching the Column's Type: In some cases, especially during post-migration tuning, application teams may not be able to change the database schema, but developers can update the query to explicitly align data types. This scenario simulates such a mitigation where an NVARCHAR parameter is explicitly cast to VARCHAR to match the column's data type and avoid implicit conversions.
Query Used:
DECLARE @ACC NVARCHAR(20) = N'ACC000500';
SELECT CustomerID, AccountNumber, CustomerName, PhoneNumber FROM dbo.Customers WHERE AccountNumber = CAST(@ACC AS VARCHAR(20)); -- Explicit use of CAST
Execution Plan Behavior (Fig. 2):
The CAST operation ensures that the parameter side matches the VARCHAR column type. SQL Server performs an Index Seek on the IX_AccountNumber index. The Seek Predicates, as seen in Fig 2, confirm this, showing "Scalar Operator(CONVERT..". No <Warning> tag appears in the XML execution plan, indicating the absence of implicit conversions.
What's Fixed: Type mismatch is resolved on the query side without altering the database schema. The query is now SARGable, enabling index usage.
Impact: The Index Seek is restored and full scans are avoided, with lower execution times and overall costs observed.
What's Still Missing: This still creates a long-term maintainability concern, especially when many queries or columns are affected. Developers must remember to manually CAST in every affected query, increasing code complexity and the chance of inconsistency. Missed CASTs in other queries can still cause implicit conversions, so the issue isn't eliminated, just patched locally.
Fix at DB end – Parameter Usage Aligned with the Schema Column Type: This fix involves altering the column type to NVARCHAR, aligning it with the NVARCHAR parameter passed from the application. It eliminates implicit conversions and enables index seeks, improving performance.
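The article does not show the schema change itself, so here is a minimal sketch of what it could look like for this example. The column and index names follow the scenario above; the NOT NULL constraint and the exact index definition are assumptions.
-- A column referenced by an index cannot be altered in place, so drop and recreate the index
DROP INDEX IX_AccountNumber ON dbo.Customers;

ALTER TABLE dbo.Customers
    ALTER COLUMN AccountNumber NVARCHAR(20) NOT NULL;  -- NOT NULL assumed from the original definition

CREATE NONCLUSTERED INDEX IX_AccountNumber
    ON dbo.Customers (AccountNumber);

-- The original query can now stay as written and still get an Index Seek
DECLARE @ACC NVARCHAR(20) = N'ACC000500';
SELECT CustomerID, AccountNumber, CustomerName, PhoneNumber
FROM dbo.Customers
WHERE AccountNumber = @ACC;
Note that changing VARCHAR to NVARCHAR on a large table is a size-of-data operation and roughly doubles the storage for that column, so it is best scheduled in a maintenance window.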
However, it is a database-side adjustment; the ideal long-term fix lies in ensuring the application sends parameters matching the original column type.
Query Used:
DECLARE @ACC NVARCHAR(20) = N'ACC000500';
SELECT CustomerID, AccountNumber, CustomerName, PhoneNumber FROM dbo.Customers WHERE AccountNumber = @ACC; -- No CAST needed once the column is NVARCHAR
Execution Plan Behavior (Fig. 3):
With the column altered to NVARCHAR, the parameter and column types match, so no conversion is needed on either side. As seen in Fig 3, an Index Seek is performed on the updated IX_AccountNumber index. The Seek Predicates confirm this, showing "Scalar Operator(..". No <Warning> tag appears in the XML execution plan, indicating the absence of implicit conversions.
What's Fixed: The fix is schema-driven and works universally, ensuring consistent performance across tools and interfaces. Encoding alignment between the parameter and column removes conversion logic entirely, making query plans stable and predictable.
Impact: Indexes remain fully usable without manual intervention in queries. Application code stays clean; no casts or workarounds are needed. This is the most sustainable fix but may require coordination between application and DB teams.
What's Still Missing: Data type changes have storage implications; for example, NVARCHAR consumes 2 bytes per character, increasing storage compared to VARCHAR.
Implicit vs Explicit vs Aligned: Execution Plan Behavior Comparison
Scenario | Predicate Expression in Exec Plan | Implicit Conversion | Index Seek Used | XML Plan Warning Shown
Data Type Mismatch | CONVERT_IMPLICIT(nvarchar, AccountNumber) = @ACC | Yes | No (results in scan) | PlanAffectingConvert
Explicit Cast in Query | AccountNumber = CONVERT(varchar, @ACC) | No | Yes | No
Matching Data Types (NVARCHAR) | AccountNumber = @ACC | No | Yes | No
Best Practices for Managing Implicit Conversions
Refactoring Application: Legacy systems, especially those using dynamic SQL or lacking strict type enforcement, are prone to implicit conversion issues. Refactor your application code to leverage strongly typed variables and parameter declarations to ensure data type consistency at the source, minimizing implicit conversions during query execution.
Explicit Data Type Casting: Use CAST or CONVERT functions to explicitly define conversions, reducing reliance on implicit behavior. In our example we used CAST, but a CONVERT function would have worked equally well. Both approaches explicitly align the parameter type to the column and avoid implicit conversions, enabling an index seek.
Data Type Alignment: When performing heterogeneous migrations that involve different database engines, ensure data types are consistent between the source and target engines. Check the official documentation thoroughly to understand the nuances around your data and application convertibility, and know the implications, such as additional storage and collation changes, that can negatively affect your business.
Indexing: Create indexes on columns frequently involved in WHERE filters and JOIN predicates with matching data types, to avoid implicit conversions that would cause index seeks to degrade into scans and to ensure optimal index utilization by the optimizer.
Early Testing: Conduct thorough post-migration testing using QDS to identify expensive queries, then drill down on execution plans and performance metrics to resolve conversion-related issues. Early collaboration between developer and DBA teams will be crucial.
Tools and Scripts: Utilize SQL Server Migration Assistant (SSMA) for Oracle to identify and change mappings early when you know what your application needs. Additionally, you can use custom scripts or third-party tools if necessary to detect implicit conversions in the plan cache.
References
https://learn.microsoft.com/sql/t-sql/data-types/data-type-conversion-database-engine
https://learn.microsoft.com/sql/t-sql/data-types/data-type-precedence-transact-sql
Final Thoughts
We hope that this post has helped you gain actionable strategies to detect, mitigate, and design around implicit conversions in order to ensure a successful and performant data migration to platforms such as SQL Server or Azure SQL. If you have feedback or suggestions for improving this post, please contact the Azure Databases SQL Customer Success Engineering Team. Thanks for your support!

Enforcing SQL PaaS backup retention with Azure Policy
Implementation for SQL DB PITR using the portal Azure policy covers much more than SQL; here we are using a small portion of its capabilities. The bits we are using are A policy definition, on what the policy checks for, and what to do about issues A policy assignment, with the scope to check the definition across, and parameter values A remediation task, that makes the required changes The requirement in this example is to ensure that all Azure SQL Databases have a short-term (PITR) backup retention of at least 9 days. Any database created without specifying the retention period will have this added Any update made with a shorter period will have that modified to be 9 days Modifications or database creation that explicitly set the retention period to more than 9 days will have that value honoured All these are built under “Policy” in the portal The definition Open Policy | Authoring | Definitions, and on that blade, use the “+ Policy definition” to create a new one Definition Location: the subscription to hold this (there’s a pull-down menu of valid items) Name: for example, “Enforce SQL DB PITR” Category: for example “Backup” Role Definitions: “Contributor” for this example, but in general this should be the minimum needed for the updates that the definition will make Policy rule: { "mode": "All", "policyRule": { "if": { "anyOf": [ { "field": "Microsoft.Sql/servers/databases/backupShortTermRetentionPolicies/retentionDays", "exists": false }, { "field": "Microsoft.Sql/servers/databases/backupShortTermRetentionPolicies/retentionDays", "less": "[parameters('Minimum_PITR')]" } ] }, "then": { "effect": "modify", "details": { "roleDefinitionIds": [ "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c" ], "operations": [ { "operation": "addOrReplace", "field": "Microsoft.Sql/servers/databases/backupShortTermRetentionPolicies/retentionDays", "value": "[parameters('Minimum_PITR')]" } ] } } }, "parameters": { "Minimum_PITR": { "type": "Integer", "metadata": { "displayName": "Min PITR", "description": "Min PITR retention days" } } } } In this code Field is what we want to check and/or change; get the list of field names using PowerShell $aliases = Get-AzPolicyAlias -ListAvailable -NamespaceMatch 'Microsoft.Sql' | where ResourceType -like 'retentionpol' | Select-Object -ExpandProperty 'Aliases' $aliases | select Name For the list of fields that can be modified/updated, look at the Modifiable attribute $aliases | Where-Object { $_.DefaultMetadata.Attributes -eq 'Modifiable' } | select Name Minimum_PITR is the name of the parameter the assignment (next step) will pass in. You choose the name of the parameter roleDefinitionIds are the full GUID path of the roles that the update needs. The policy remediation docs talk about this, but we can get the GUID with PowerShell Get-AzRoleDefinition -name 'contributor' # replace contributor with the role needed This definition is saying that if the PITR retention isn’t set, or is less than the parameter value, then make it (via addOrReplace) the parameter value. 
The Assignment Once you save the definition, use “Assign policy” on the screen that appears For this, there are several tabs Basics: Scope and exclusions let you work on less than the entire subscription enable “policy enforcement” Parameters Enter 9 for Min_PITR (to have policy apply 9 days as the minimum) Remediation Tick “create remediation task” Default is to use a system managed identity Then create this assignment Initial Remediation Once the assignment is created, look at the compliance blade to see it; Azure policy is asynchronous, so for a newly created assignment, it takes a little while before it begins checking resources in its scope. Similarly, “remediation tasks” on the remediation blade shows the task pending to begin with. Once the initial remediation scan completes, you can look at the backup retention policies (in Data Management | backups) on the logical server(s) and see that the PITR retention periods have been increased to a minimum of 9 days. Ongoing operation With the initial remediation complete, the policy will now intercept non-compliant changes, and refactor them on the fly. For example, if we use PowerShell to set the retention to 2 days $DB_PITR = get-azsqldatabasebackupshorttermretentionpolicy -ResourceGroupName mylittlestarter-rg -ServerName mylittlesql -DatabaseName oppo $DB_PITR | Set-AzSqlDatabaseBackupShortTermRetentionPolicy -RetentionDays 2 ResourceGroupName : mylittlestarter-rg ServerName : mylittlesql DatabaseName : oppo RetentionDays : 9 DiffBackupIntervalInHours : 12 The update completes, but the summary shows that the retention stays as 9 days The experience on the portal is the same; we can change the retention to 1 day in the GUI, and the operation succeeds, but with the retention remaining at 9 days. In the activity log of either the logical server or the database, this shows up as a modify, with the JSON detail of the modify showing the policy name and the effect. Tricky bits A few challenges that can cause delays: Retention policies are separate resources – Both short-term and long-term backup retention aren’t direct attributes of the database resource. Instead, they exist as their own resources (e.g., with retentionDays) tied to the database. Keep policies simple – Focusing each policy on a single resource (like SQL DB PITR) proved more effective than trying to create one large, all-encompassing policy. Case sensitivity matters – The policy definition code is case-sensitive, which can easily trip you up if not handled carefully. The definitionRoleID is just the GUID of the security role that the policy is going to need, not anything to do with the identity that’s created for the remediation task…but the GUID is potentially different for each subscription, hence the PowerShell to figure out this GUID Writing the definitions in PowerShell means that they are just plain-text, and don’t have any syntax helpers; syntax issues in the definition tend to appear as strange “error converting to JSON” messages. Waiting patiently for the initial policy remediation cycle to finish; I haven’t found any “make it so” options References The posts mentioned in the introduction are Automatically Enable LTR and PITR Policy upon a Database creation on Azure SQL Managed Instance | Microsoft Community Hub using audits and runbooks Azure custom policy to prevent backup retention period to be below X number - Azure SQL | Microsoft Community Hub which uses ‘Deny’ to fail attempts that don’t meet the requirements. 
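Before expanding the approach, it can also help to confirm compliance at scale rather than checking individual logical servers. Below is a minimal sketch using the Az.PolicyInsights module; the assignment name 'Enforce SQL DB PITR' reuses the example above, and the client-side filtering is just one way to slice the results.
# Requires the Az.PolicyInsights module and an authenticated Az session
# List resources the example assignment still reports as non-compliant (empty output = all compliant)
Get-AzPolicyState |
    Where-Object { $_.PolicyAssignmentName -eq 'Enforce SQL DB PITR' -and $_.ComplianceState -eq 'NonCompliant' } |
    Select-Object ResourceId, ComplianceState, Timestamp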
Expanding this using PowerShell With a working example for SQL DB PITR, we now want to expand this to have policies that cover both short and long term retention for both SQL DB and SQL MI. The code below isn’t exhaustive, and being a sample, doesn’t have error checking; note that the code uses “less” for the policy test, but operators like “equals” and “greater” (https://learn.microsoft.com/en-us/azure/governance/policy/concepts/definition-structure-policy-rule#conditions ) are available to build more complex tests, depending on the policy requirements. The document Programmatically create policies - Azure Policy | Microsoft Learn covers using powershell with Azure policy. Other wrinkles that this sample doesn’t explicitly cater for include LTR retentions are held in ISO 8601 format (eg, ‘P8D’ for 8 days), so it’s not trivial to do less than tests; in theory ARM template functions could be used to convert these into the number of days, but this example just does an equality check, and enforces the policy, without any understanding that P4W is a longer period than P20D LTR isn’t available for serverless databases with autopause enabled (https://learn.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview?view=azuresql&tabs=general-purpose#auto-pause ); this would need some form of scope control, potentially either using resource groups, or a more complex test in the policy definition to look at the database attributes A few service levels, for example the Basic database SLO, have different limits for their short term retention PITR for databases that could be offline (stopped managed instances, auto-paused serverless databases, etc) hasn’t been explicitly tested. Remediation tasks just run to completion, with no rescheduling; to ensure that all existing databases are made compliant, this could be expanded to have a loop to check the count of resources needing remediation, and start a task if the relevant existing ones are complete <# /***This Artifact belongs to the Data SQL Ninja Engineering Team***/ Name: Enforce_SQL_PaaS_backup_retention.ps1 Author: Databases SQL CSE/Ninja, Microsoft Corporation Date: August 2025 Version: 1.0 Purpose: This is a sample to create the Azure policy defintions, assignment and remediation tasks to enfore organisational policies for minimum short-term (PITR) and weekly long-term (LTR) backup retention. Prerequisities: - connect to your azure environment using Connect-AzAccount - Register the resource provider (may already be done in your environment) using Register-AzResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights' - if needed to modify/update this script, this can be used to find field names: Get-AzPolicyAlias -ListAvailable -NamespaceMatch 'Microsoft.Sql' | where ResourceType -like '*retentionpol*' | Select-Object -ExpandProperty 'Aliases' | select Name Warranty: This script is provided on as "AS IS" basis and there are no warranties, express or implied, including, but not limited to implied warranties of merchantability or fitness for a particular purpose. USE AT YOUR OWN RISK. 
Feedback: Please provide comments and feedback to the author at datasqlninja@microsoft.com #> # parameters to modify $Location = 'EastUS' # the region to create the managed identities used by the remediation tasks $subscriptionID = (Get-AzContext).Subscription.id # by default use the current Subscription as the scope; change if needed # the policies to create; PITR can do a less than comparison, but LTR has dates, so uses string equalities [array]$policies = @() $policies += @{type = 'DB'; backups='PITR'; name = 'Enforce SQL DB PITR retention'; ParameterName = 'Minimum_PITR'; ParameterValue = 9; Role = 'contributor'; Category='Backup'} $policies += @{type = 'MI'; backups='PITR'; name = 'Enforce SQL MI PITR retention'; ParameterName = 'Minimum_PITR'; ParameterValue = 9; Role = 'contributor'; Category='Backup'} # LTR retention is in ISO8601 format, eg P2W = 2 weeks, P70D = 70 days; 'PT0S' = no retention $policies += @{type = 'DB'; backups='LTR';name = 'Enforce SQL DB LTR retention'; Weekly = 'P4W'; Monthly = 'PT0S'; Yearly = 'PT0S'; WeekofYear = 1; Role = 'contributor'; Category='Backup'} $policies += @{type = 'MI'; backups='LTR';name = 'Enforce SQL MI LTR retention'; Weekly = 'P4W'; Monthly = 'PT0S'; Yearly = 'PT0S'; WeekofYear = 1; Role = 'contributor'; Category='Backup'} # templates for the Policy definition code; this has placeholders that are replaced in the loop $Policy_definition_template_PITR = @' { "mode": "All", "policyRule": { "if": { "anyOf": [ { "field": "Microsoft.Sql/<Type>/databases/backupShortTermRetentionPolicies/retentionDays", "exists": false }, { "field": "Microsoft.Sql/<Type>/databases/backupShortTermRetentionPolicies/retentionDays", "less": "[parameters('<ParameterName>')]" } ] }, "then": { "effect": "modify", "details": { "roleDefinitionIds": [ "/providers/Microsoft.Authorization/roleDefinitions/<RoleGUID>" ], "operations": [ { "operation": "addOrReplace", "field": "Microsoft.Sql/<Type>/databases/backupShortTermRetentionPolicies/retentionDays", "value": "[parameters('<ParameterName>')]" } ] } } }, "parameters": { "<ParameterName>": { "type": "Integer" } } } '@ # LTR, look for any of the weekly/monthly/yearly retention settings not matching $Policy_definition_template_LTR = @' { "mode": "All", "policyRule": { "if": { "anyOf": [ { "field": "Microsoft.Sql/<Type>/databases/backupLongTermRetentionPolicies/weeklyRetention", "exists": false }, { "field": "Microsoft.Sql/<Type>/databases/backupLongTermRetentionPolicies/weeklyRetention", "notEquals": "[parameters('Weekly_retention')]" }, { "field": "Microsoft.Sql/<Type>/databases/backupLongTermRetentionPolicies/monthlyRetention", "notEquals": "[parameters('Monthly_retention')]" }, { "field": "Microsoft.Sql/<Type>/databases/backupLongTermRetentionPolicies/yearlyRetention", "notEquals": "[parameters('Yearly_retention')]" } ] }, "then": { "effect": "modify", "details": { "roleDefinitionIds": [ "/providers/Microsoft.Authorization/roleDefinitions/<RoleGUID>" ], "operations": [ { "operation": "addOrReplace", "field": "Microsoft.Sql/<Type>/databases/backupLongTermRetentionPolicies/weeklyRetention", "value": "[parameters('Weekly_retention')]" }, { "operation": "addOrReplace", "field": "Microsoft.Sql/<Type>/databases/backupLongTermRetentionPolicies/monthlyRetention", "value": "[parameters('Monthly_retention')]" }, { "operation": "addOrReplace", "field": "Microsoft.Sql/<Type>/databases/backupLongTermRetentionPolicies/yearlyRetention", "value": "[parameters('Yearly_retention')]" }, { "operation": "addOrReplace", "field": 
"Microsoft.Sql/<Type>/databases/backupLongTermRetentionPolicies/weekOfYear", "value": "[parameters('WeekofYear')]" } ] } } }, "parameters": { "Weekly_retention": { "type": "String" }, "Monthly_retention": { "type": "String" }, "Yearly_retention": { "type": "String" }, "WeekofYear": { "type": "Integer" } } } '@ # main loop foreach ($policy in $policies) { # translate the Role name into its GUID $Role = Get-AzRoleDefinition -name $($policy.Role) $type = $policy.type -replace 'MI','managedInstances' -replace 'DB','servers' $template = if ($policy.backups -eq 'PITR') {$Policy_definition_template_PITR} else {$Policy_definition_template_LTR} # generate the definition code for this policy $policy_definition = $template -replace '<Type>',$type -replace '<RoleGUID>',$($Role.Id) -replace '<ParameterName>',$policy.ParameterName # create the policy definition $PolicyDefinition = new-AzPolicyDefinition -Name $($policy.name) -Policy $policy_definition -Metadata "{'category':'$($policy.Category)'}" # create the assignment if ($policy.backups -eq 'PITR') { $PolicyParameters = @{$($policy.ParameterName)=($($policy.ParameterValue))} } else { $PolicyParameters = @{"Weekly_retention"=($($policy.Weekly)); "Monthly_retention"=($($policy.Monthly)); "Yearly_retention"=($($policy.Yearly)); "WeekofYear"=($($policy.WeekofYear));} } $PolicyAssignment = New-AzPolicyAssignment -Name $($policy.name) -PolicyDefinition $PolicyDefinition -PolicyParameterObject $PolicyParameters -IdentityType 'SystemAssigned' -Location $Location # now follow the docs page to wait for the ID to be created, and assign the roles required to it; https://learn.microsoft.com/en-us/azure/governance/policy/how-to/remediate-resources?tabs=azure-powershell # include a loop to wait until the managed identity created as part of the assignment creation is available do { $ManagedIdentity = Get-AzADServicePrincipal -ObjectId $PolicyAssignment.IdentityPrincipalId -erroraction SilentlyContinue if (!($ManagedIdentity)) {start-sleep -Seconds 1} # wait for a bit... } until ($ManagedIdentity) $roleDefinitionIds = $PolicyDefinition.PolicyRule.then.details.roleDefinitionIds if ($roleDefinitionIds.Count -gt 0) { $roleDefinitionIds | ForEach-Object { $roleDefId = $_.Split("/") | Select-Object -Last 1 $roleAssigned = New-AzRoleAssignment -ObjectId $PolicyAssignment.IdentityPrincipalId -RoleDefinitionId $roleDefId -Scope "/subscriptions/$($subscriptionID)" } } # lastly create the remediation task $RemediationTask = Start-AzPolicyRemediation -Name $($policy.name) -PolicyAssignmentId $PolicyAssignment.Id } # confirm that the policies have been set up Get-AzPolicyDefinition | where name -In $policies.name | format-table Name, PolicyType Get-AzPolicyAssignment | where name -In $policies.name | format-table Name, Parameter Feedback and suggestions If you have feedback or suggestions for improving this data migration asset, please contact the Databases SQL Customer Success Engineering (Ninja) Team (datasqlninja@microsoft.com). Thanks for your support! Note: For additional information about migrating various source databases to Azure, see the Azure Database Migration Guide.Azure SQL Managed Instance – Sync Agent Jobs and Logins in Failover Groups
In an Azure SQL Managed Instance setup with Failover Group, the schema, data, and database-level users will always be synced between primary and secondary instances. As of October 2021, there is no sync mechanism for SQL Server Agent Jobs or Server Logins/Roles because the functionality of replicating system databases does not exist. This document provides guidance and scripts to implement a workaround solution to have the Agent Jobs and Server Logins/Roles synced between primary and secondary instances.

Temporal Table Replication in SQL Server: Common Barriers and Solutions
Introduction Transactional replication is a SQL Server feature that copies and synchronizes data and database objects across servers. It generally begins with a snapshot of the publication database objects and data. After this initial snapshot, any data changes and schema modifications made at the Publisher are delivered to the Subscriber as they occur, typically in near real time. These data changes are applied to the Subscriber in the same order and within the same transaction boundaries as at the Publisher, maintaining transactional consistency within a publication Standard transactional replication in SQL Server does not provide support for system-versioned temporal tables. This constraint presents difficulties for organizations aiming to replicate historical data maintained in temporal columns, such as ValidFrom and ValidTo. The challenge persists even when system versioning is disabled, yet there remains a requirement to retain the original values within the target database. Understanding Temporal Tables System-versioned temporal tables are a specialized form of user table designed to retain a comprehensive record of all data modifications. These tables facilitate point-in-time analysis by automatically recording historical changes. Each temporal table contains two datetime2 period columns that specify the validity duration for each row. In addition to the current table, an associated history table preserves previous versions of rows whenever updates or deletions take place. Scenario & Challenge In one of the migration scenarios, the customer faced an issue where system versioning was disabled, but there was still a requirement to replicate data from the ValidFrom and ValidTo columns to the target database without modification. Although temporal tables are commonly used for auditing and historical analysis, replicating them within a transactional replication setup can present specific technical challenges: System managed period columns complicate schema compliance. Mismatch in ValidFrom and ValidTo columns across environments can compromise audit reliability. As transactional replication currently does not support temporal columns, we devised the following solution to address this requirement. Common Error Example When configuring replication for an article that includes a system-versioned temporal table, the setup process may encounter failures due to SQL Server limitations related to system-generated columns. In certain situations where system versioning is disabled, it may still be necessary to replicate the initial values of the ValidFrom and ValidTo period columns on the target system. However, during the configuration of transactional replication, the snapshot application process can fail on these columns, resulting in the following error: Error message: This issue arises because SQL Server considers these columns system-generated and restricts direct inserts, including during replication. The following workaround addresses this situation. The Workaround To successfully replicate temporal tables, follow these steps: Note: This approach will work in case of scenarios when there is a scope of minimal downtime. Predefine Table Schema on Target: Ensure that the source table schema exists on the target and matches with the source schema. Disable System Versioning Temporarily: Before configuring replication, disable system versioning on the temporal table. This allows replication to treat it like a regular table. ALTER TABLE [dbo].[Department] SET (SYSTEM_VERSIONING = OFF); 3. 
When you set SYSTEM_VERSIONING = OFF and don't drop the SYSTEM_TIME period, the system continues to update the period columns for every insert and update operation. Use the below script to remove the period for system time. ALTER TABLE dbo.Department DROP PERIOD FOR SYSTEM_TIME; 4. After this step, we can use the below script step by step to configure replication. Replication Setup Steps Set a replication database option for the specified database. This stored procedure is executed at the Publisher or Subscriber on any database. use master GO exec sp_replicationdboption @dbname = N'SourceDBNAme', @optname = N'publish', @value = N'true' GO Create a transactional publication. This stored procedure is executed at the Publisher on the publication database. use [SourceDBName] GO exec sp_addpublication @publication = N'PublicationName', @description = N'Transactional Replication publication of database', @sync_method = N'concurrent', @retention = 0, @allow_push = N'true', @allow_pull = N'true', @allow_anonymous = N'true', @enabled_for_internet = N'false', @SnapShot_in_defaultfolder = N'true', @compress_snapshot = N'false', @ftp_port = 21, @allow_subscription_copy = N'false', @add_to_active_directory = N'false', @repl_freq = N'continuous', @status = N'active', @independent_agent = N'true', @immediate_sync = N'true', @allow_sync_tran = N'false', @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 1, @allow_initialize_from_backup = N'false', @enabled_for_p2p = N'false', @enabled_for_het_sub = N'false' GO Create the Snapshot Agent for the specified publication. This stored procedure is executed at the Publisher on the publication database. use [SourceDBName] GO exec sp_addpublication_snapshot @publication = N'PublicationName', @frequency_type = 1, @frequency_interval = 1, @frequency_relative_interval = 1, @frequency_recurrence_factor = 0, @frequency_subday = 8, @frequency_subday_interval = 1, @active_start_time_of_day = 0, @active_end_time_of_day = 235959, @active_start_date = 0, @active_end_date = 0, @publisher_security_mode = 0, @job_login = N'', @job_password = N'', @publisher_login = N'', @publisher_password = N'' Create an article and add it to a publication. This stored procedure is executed at the Publisher on the publication database. use [SourceDBName] GO exec sp_addarticle @publication = N'PublicationName', @article = N'ArticleName', @source_owner = N'Source Schema Name', @source_object = N'SourceTableName', @type = N'logbased', @description = null, @creation_script = null, @pre_creation_cmd = N'truncate', @schema_option = 0x000000000803509F, @identityrangemanagementoption = N'manual', @destination_table = N'Destination Table Name', @destination_owner = N'Destination Schema Name', @vertical_partition = N'false', @ins_cmd = N'CALL sp_MSins_dboEmployee', @del_cmd = N'CALL sp_MSdel_dboEmployee', @upd_cmd = N'SCALL sp_MSupd_dboEmployee' GO Add a subscription to a publication and set the Subscriber status. This stored procedure is executed at the Publisher on the publication database. use [SourceDBName] GO exec sp_addsubscription @publication = N'PublicationNAme', @subscriber = N'Azure SQL DB Server NAme', @destination_db = N'Target DB Name', @subscription_type = N'Push', @sync_type = N'automatic', @article = N'all', @update_mode = N'read only', @subscriber_type = 0 GO Add a new scheduled agent job used to synchronize a push subscription to a transactional publication. This stored procedure is executed at the Publisher on the publication database. 
Use [SourceDBName]
GO
exec sp_addpushsubscription_agent @publication = N'PublicationName', @subscriber = N'Azure SQL DB Server Name', @subscriber_db = N'Target DB Name', @job_login = N'', @job_password = null, @subscriber_security_mode = 0, @subscriber_login = N'', @subscriber_password = null, @frequency_type = 64, @frequency_interval = 1, @frequency_relative_interval = 1, @frequency_recurrence_factor = 0, @frequency_subday = 4, @frequency_subday_interval = 5, @active_start_time_of_day = 0, @active_end_time_of_day = 235959, @active_start_date = 0, @active_end_date = 0, @dts_package_location = N'Distributor'
GO
5. Once you have performed all the above steps and completed the data migration to the target database, stop/delete the replication, then add the period for SYSTEM_TIME back on the target table and re-enable system versioning.
ALTER TABLE dbo.Department ADD PERIOD FOR SYSTEM_TIME (<ValidFrom>, <ValidTo>);
ALTER TABLE [dbo].[Department] SET (SYSTEM_VERSIONING = ON);
Note: The <ValidFrom> and <ValidTo> columns are datetime2 columns defined as PERIOD FOR SYSTEM_TIME, using GENERATED ALWAYS AS ROW START and ROW END. Refer to the period column names you used when creating the temporal table and use the same names when adding the period columns back in the script above.
Conclusion
Migrating temporal tables within a transactional replication environment involves managing system-versioned features appropriately. Temporarily disabling system versioning and removing the SYSTEM_TIME period allows for adherence to schema requirements and facilitates data replication. After completing replication on the target platform, re-enabling system versioning reinstates temporal table functionality while maintaining data integrity. This workaround ensures that your replication strategy remains robust while preserving the audit trail and historical insights offered by temporal tables.

Oracle to Azure SQL Post Migration Optimization: Handling Oracle Function-Based Indexes in Azure SQL
Why Post-Migration Optimization Matters For customers migrating from Oracle to Azure SQL Databases, understanding Azure SQL optimization techniques can make a significant difference in overall database performance. During the Application Stress Testing phase, it is essential to identify query patterns, evaluate index usage, and refine queries to prevent performance bottlenecks and ensure system efficiency. While Azure SQL databases Intelligent Query Processing (IQP) can significantly improve performance with minimal intervention, certain complex scenarios may still require manual query rewrites to achieve the desired level of performance. Leveraging SARGable Queries in SQL Server One such optimization technique involves ensuring that queries in SQL Server are SARGable (Search ARGument-able). SARGable queries enable the SQL Engine to efficiently leverage indexes, significantly reducing resource consumption and execution time. For instance, small adjustments such as replacing column transformations with range-based conditions, creating computed columns, and adding required indexes on computed columns can have a significant impact on query performance. Metric Non-SARGable Query SARGable Query Execution Plan Index Scan (usually less efficient) Index Seek (usually more efficient) IO Cost High (Reads Many Pages) Low (Reads Fewer Pages) CPU Time Higher (Function Evaluation for Rows) Lower (Direct Use of Index) Logical Reads High (entire index/table scanned) Low (only relevant index pages accessed) Handling Function-Based Indexes: Oracle vs SQL Server Oracle: In Oracle, function-based indexes allow indexing on the result of a function or expression. This enables efficient query execution when functions or expressions are used in WHERE or JOIN conditions. The TRUNC(APPLN_START_DT) function is precomputed and stored in the Index. Queries can get the value of the expression from the index instead of computing it. The following is an example of this in practice: -- Create a function-based index on the TRUNC(APPLN_START_DT) expression CREATE INDEX idx_appln_start_dt ON appln_data ( trunc(appln_start_dt) ); -- Filter rows where the year of APPLN_START_DT matches the current year. SELECT * FROM appln_data WHERE trunc(appln_start_dt, 'YYYY') = trunc(sysdate, 'YYYY'); SQL Server: SQL Server takes a slightly different approach to handling Indexes and Query Optimization, particularly when working with large datasets or tables containing millions of rows. The SQL optimizer excels when queries are SARGable (Search ARGument-able), meaning the predicates are written to allow direct index utilization without applying transformations to columns. Let’s analyze this with a simple and practical example to see how query design impacts performance, resource usage, and execution efficiency. Let's analyze two versions of a query on a table [APPLN_DATA] with 50 million rows to understand how query formulation impacts performance, resource utilization, and execution efficiency. Comparing SARGable and Non-SARGable Queries Non-SARGable Query: -- Filter rows where the year of APPLN_START_DT matches the current year. SELECT * FROM [dbo].[APPLN_DATA] WHERE YEAR(APPLN_START_DT) = YEAR(GETDATE()); Execution Plan: Index Scan. Logical Reads: 928,804 CPU Time: 7,562 ms Physical Reads: 2 Elapsed Time: 63,130 ms Execution Plan: Why It's Inefficient: The YEAR() function transforms the APPLN_START_DT column, which prevents the optimizer from performing an Index Seek and forces an Index Scan. 
SARGable Query:

SELECT *
FROM [dbo].[APPLN_DATA]
WHERE APPLN_START_DT >= CAST(YEAR(GETDATE()) AS CHAR(4)) + '-01-01'
  AND APPLN_START_DT <  CAST(YEAR(GETDATE()) + 1 AS CHAR(4)) + '-01-01';

Execution Plan: Index Seek
Logical Reads: 173,041
CPU Time: 3,547 ms
Elapsed Time: 52,092 ms

Why It's Better: The query uses range-based conditions (>= and <) directly on the APPLN_START_DT column without applying any functions. This makes the query SARGable, allowing SQL Server to perform an Index Seek, as shown in the execution plan. An Index Seek retrieves only the relevant rows, significantly reducing logical reads, CPU usage, and overall query execution time.

Understanding the Key Differences and Impact of SARGable and Non-SARGable Queries

Aspect | Non-SARGable | SARGable
Query Example | WHERE YEAR(APPLN_START_DT) = YEAR(GETDATE()) | WHERE APPLN_START_DT >= '2024-01-01'
Index Usage | Index Scan (due to the YEAR() function; inefficient) | Index Seek (efficient)
Logical Reads | High (entire index/table scanned) | Low (only relevant index pages accessed)
CPU Time | High | Low
Elapsed Time | Longer | Shorter
Query Optimization | Applies a function to the column | Uses range conditions without transformations

Additional Optimization Techniques:
A computed column with an index was not used here because the query retrieves rows for an entire year, making range-based conditions more efficient. Computed columns with indexes are better suited to scenarios that apply specific transformations to smaller subsets of data, such as calculations or filtering for a particular month or day.
In our example we use SELECT * for simplicity, but in real-world workloads queries typically retrieve specific columns based on application requirements. Selecting only the required columns reduces I/O and improves query performance. Additionally, Covering Indexes (indexes that include all queried columns) should be considered to eliminate Key Lookups, further enhancing performance for frequently accessed queries.
If your table has billions of rows, even efficient Index Seeks may take considerable time. Filtered Indexes, Indexed Views, and partitioning the table on a logical key (e.g., date ranges) can help improve query performance further.

Key Takeaways for a Successful Heterogeneous Migration:
Oracle's function-based indexes can be adapted in SQL Server using Range-based Conditions, Computed Columns with Indexes, Indexed Views, or Application-level Optimizations (a short computed-column sketch follows this list).
Aim to write SARGable queries to leverage SQL Server's indexing capabilities effectively, and avoid applying functions in query predicates.
Evaluate and consider the use of Indexed Views and Filtered Indexes.
It is crucial to iteratively update index statistics, evaluate index access patterns, review histograms, monitor query plans and fragmentation, and rebuild indexes during the Application Stress Testing phase to achieve optimal performance.
Collaborate with application developers to refactor query logic when needed.
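As an illustration of the computed-column adaptation called out in the first takeaway, the sketch below targets the kind of smaller-subset transformation described under Additional Optimization Techniques: filtering on a particular month. It is a minimal sketch only; the computed column, index name, and selected columns are assumptions and not part of the tested workload:

-- Add a persisted computed column for the month of APPLN_START_DT and index it
ALTER TABLE dbo.APPLN_DATA
    ADD APPLN_START_MONTH AS DATEPART(MONTH, APPLN_START_DT) PERSISTED;

CREATE NONCLUSTERED INDEX IX_APPLN_DATA_StartMonth
    ON dbo.APPLN_DATA (APPLN_START_MONTH)
    INCLUDE (APPLN_START_DT);   -- cover the returned column to avoid key lookups

-- The month filter can now seek on the indexed computed column
-- instead of evaluating MONTH(APPLN_START_DT) for every row
SELECT APPLN_START_DT
FROM dbo.APPLN_DATA
WHERE APPLN_START_MONTH = 1;    -- for example, all January rows

Whether the optimizer automatically matches an existing MONTH(APPLN_START_DT) predicate to the indexed computed column depends on the query and session settings, so rewriting the predicate against the computed column, as above, is the more predictable approach.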
References:
Intelligent query processing details - SQL Server | Microsoft Learn
Create indexed views - SQL Server | Microsoft Learn
Create filtered indexes - SQL Server | Microsoft Learn
Migrating Oracle Databases to SQL Server (OracleToSQL) - SQL Server | Microsoft Learn
Automatic tuning - SQL Server | Microsoft Learn

Feedback and suggestions
If you have feedback or suggestions for improving this data migration asset, please contact the Databases SQL Ninja Engineering Team (datasqlninja@microsoft.com). Thanks for your support!

Note: For additional information about migrating various source databases to Azure, see the Azure Database Migration Guide.